bronxbomber92

Members
  • Content count: 365
Community Reputation: 275 Neutral

About bronxbomber92

  • Rank: Member
  1. [quote name='Telios' timestamp='1345740756' post='4972645'] Thanks Hodgman. The only part I'm missing is that without parsing a material, for example, I can't tell what shaders it references. So I guess the parse() method will need to be able to back out and say "I'm missing these dependencies". It could then be pushed to the back of the batch, after the dependencies it needs. [/quote] Why not just load all shaders at once and ensure that happens before any material loading? You might even get speed improvements this way, since you'll have better instruction cache coherency, and the shader compiler may be able to perform multiple shader compilations in parallel under the hood. (A rough sketch of this two-phase approach is below.)
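     A minimal sketch of what I mean, in C++ -- the types, names, and resource strings here are hypothetical, just for illustration:
[code]
// Sketch only: phase-ordered loading so materials never see a missing shader.
// Shader, Material, and the resource names are hypothetical stand-ins.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Shader   { std::string name; };
struct Material { std::string name; const Shader* shader; };

int main() {
    // Phase 1: load every shader up front.
    std::unordered_map<std::string, Shader> shaders;
    for (const char* s : {"phong", "skybox"})
        shaders[s] = Shader{s};

    // Phase 2: materials resolve their shader references immediately;
    // no material ever has to be re-queued behind a missing dependency.
    std::vector<Material> materials;
    materials.push_back({"rock", &shaders.at("phong")});
    materials.push_back({"sky",  &shaders.at("skybox")});

    for (const Material& m : materials)
        std::cout << m.name << " -> " << m.shader->name << "\n";
}
[/code]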
  2. OpenGL Why do I need GLSL?

    [quote name='lride' timestamp='1345680707' post='4972429'] I don't understand why I need to learn GLSL. What are some stuff that is not possible to do in normal OpenGL, but possible with GLSL [/quote] For modern OpenGL development there is no practical difference between OpenGL and shaders (GLSL is simply the language you write OpenGL shaders in). When OpenGL was originally conceived, graphics processing units could only perform a fixed set of operations, so OpenGL's API was designed to expose only what the graphics cards were capable of doing; this is what came to be called the fixed-function pipeline. As time has passed, GPUs have evolved. They no longer expose just a fixed set of operations; they still have a small set of fixed operations, but they can now also be programmed (similar to how we program our CPUs with C or C++ or Java or <insert favourite language>). Programmability is important because it greatly widens what a GPU can do, but it also requires a way to write those programs, and so OpenGL has evolved alongside the GPUs' capabilities; this is why shaders were introduced. The bottom line is that GLSL is only necessary to learn if you intend to use features beyond the capabilities of the fixed-function pipeline. Roughly speaking, the fixed-function pipeline is only capable of fog, vertex lighting, coloring, and texturing. If your requirements are more sophisticated than that, then you need shaders. (A minimal example of hooking a GLSL shader into OpenGL is sketched below.)
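    To make the distinction concrete, here's a minimal sketch of compiling a trivial GLSL program from C++. It assumes a valid GL context and an extension loader (GLEW here) exposing the GL 2.0 entry points, and all error checking is trimmed:
[code]
// Sketch: compiling and linking a trivial GLSL program from C++.
// Assumes a valid OpenGL context and GLEW for the GL 2.0 entry points.
#include <GL/glew.h>

static const char* kVertexSrc =
    "#version 120\n"
    "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }\n";

static const char* kFragmentSrc =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }\n"; // solid orange

GLuint buildProgram() {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &kVertexSrc, nullptr);
    glCompileShader(vs);                  // check GL_COMPILE_STATUS in real code

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFragmentSrc, nullptr);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                  // check GL_LINK_STATUS in real code
    return prog;                          // use with glUseProgram(prog)
}
[/code]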
  3. Perhaps the easiest solution is to have a serial queue of resource-loading requests that gets consumed by a second thread running concurrently with your main thread. Just enqueue each resource in the same order as your original synchronous code, and resources will still load in order relative to one another, just off the main thread. A callback notification system can then easily be built on top of this so other parts of your application know when the resources they need have been loaded. (A bare-bones sketch is below.)
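     A bare-bones version of that pattern in C++ -- a single worker thread drains a FIFO of requests and fires a completion callback. The names (Request, loaderMain, the file paths) are made up for illustration:
[code]
// Sketch of a serial resource-loading queue drained by one worker thread.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Request {
    std::string path;
    std::function<void(const std::string&)> onLoaded; // completion callback
};

std::queue<Request> g_queue;
std::mutex g_mutex;
std::condition_variable g_cv;
bool g_quit = false;

void loaderMain() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return g_quit || !g_queue.empty(); });
        if (g_quit && g_queue.empty()) return;
        Request req = std::move(g_queue.front());
        g_queue.pop();
        lock.unlock();
        // "Load" the resource; requests complete in the order they were queued.
        req.onLoaded(req.path);
    }
}

int main() {
    std::thread loader(loaderMain);
    for (const char* p : {"textures/rock.png", "models/tree.obj"}) {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_queue.push({p, [](const std::string& path) {
            std::cout << "loaded " << path << "\n";
        }});
        g_cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(g_mutex); g_quit = true; }
    g_cv.notify_one();
    loader.join();
}
[/code]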
  4. Horrible performance or not ?

    [quote name='kauna' timestamp='1345312764' post='4970880'] [quote name='Hodgman' timestamp='1345283090' post='4970783'] There's no point clearing any buffer that you're going to overwrite the contents of later on. Assuming that geometry always fills your entire screen, then new geometry is going to fill your g-buffer anyway, so clearing it is a waste of time. [/quote] In this case and in general this is true. However, there is a point clearing the render targets and that is the case when using SLI/Crossfire setup. That is one of the ways that the driver is able to recognize which surfaces aren't needed by the other GPU and may skip the transfer of framebuffer between the GPU memories. So keep your clear code there for the case when number of GPUs is bigger than 1. Otherwise, you may save some bandwidth if you use the hardware z-buffer for position creation instead of using another buffer for depth. The quality isn't as good, but should be enough for typical scenarios. Best regards! [/quote] [quote name='Hodgman' timestamp='1345283090' post='4970783'] [quote name='lipsryme' timestamp='1345282612' post='4970781']That is the first time I've heard that, can you elaborate why ?[/quote]There's no point clearing any buffer that you're going to overwrite the contents of later on. Assuming that geometry always fills your entire screen, then new geometry is going to fill your g-buffer anyway, so clearing it is a waste of time. [/quote] There's another reason to clear render targets on the tile-based GPUs that are prevalent on mobile devices; according to [url="http://www.realtimerendering.com/downloads/MobileCrossPlatformChallenges_siggraph.pdf"]Unity's talk at Siggraph this year[/url], clearing render targets can avoid extra copies done by the driver. So if I'm reading the slides correctly, a render-target clear can act as an equivalent of the EXT_discard_framebuffer operation on devices that don't expose that extension. (Both are sketched below.)
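    For concreteness, a sketch of both hints (GLES-style; it assumes a bound framebuffer and ES 2.0 headers, and on some platforms glDiscardFramebufferEXT must be fetched via eglGetProcAddress):
[code]
// Sketch: two ways to tell a tile-based GPU that a framebuffer's old
// contents are dead, so it can skip saving/restoring tile memory.
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

void markFramebufferDead(bool hasDiscardExtension) {
    if (hasDiscardExtension) {
        // Explicit discard via EXT_discard_framebuffer.
        const GLenum attachments[] = {
            GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT
        };
        glDiscardFramebufferEXT(GL_FRAMEBUFFER, 3, attachments);
    } else {
        // A full clear at the start of the frame can serve as the same hint.
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    }
}
[/code]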
  5. Good article on Drawing Policy

    Christer Ericson has a good write-up on his blog about the bucket-sorting approach he used in God of War 3: http://realtimecollisiondetection.net/blog/?p=86
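     The gist of that approach, as a rough sketch -- note the field layout below is mine for illustration, not Ericson's actual one:
[code]
// Sketch: sortable draw-call keys, loosely in the spirit of Ericson's post.
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawCall {
    uint64_t key;
    // ... vertex buffers, uniforms, etc.
};

// Pack [ pass:8 | shader:16 | texture:16 | depth:24 ]: the most expensive
// state change goes in the highest bits, so sorting groups by it first.
uint64_t makeKey(uint8_t pass, uint16_t shader, uint16_t texture, uint32_t depth24) {
    return (uint64_t(pass)    << 56) |
           (uint64_t(shader)  << 40) |
           (uint64_t(texture) << 24) |
            uint64_t(depth24 & 0xFFFFFF);
}

void sortDrawCalls(std::vector<DrawCall>& calls) {
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.key < b.key; });
}
[/code]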
  6. This should be very helpful: http://www.altdevblogaday.com/2012/04/29/software-rasterizer-part-2/. Eric Lengyel's book [i]Mathematics for 3D Game Programming & Computer Graphics, Third Edition[/i] has a great explanation, too. Without getting into the full mathematical derivation, the easiest way to do perspective-correct attribute interpolation (such as texture-coordinate mapping) is to realize that the reciprocal of the view-space z coordinate (for brevity, let's call it 1/z-view) can be interpolated linearly in screen space and remain correct under a perspective projection. Thus, if you divide your attributes by z-view and linearly interpolate both attribute/z-view and 1/z-view, then at each pixel you can recover the perspective-correct attribute value by multiplying your interpolated attribute/z-view by the reciprocal of your interpolated 1/z-view. And as the above article states, the w coordinate of our homogeneous coordinates (our vertices after being transformed by the projection matrix) conveniently equals z-view. (A tiny numeric sketch is below.)
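     A tiny numeric sketch of that recipe, with made-up endpoint values, comparing the perspective-correct result against naive linear interpolation:
[code]
// Sketch: perspective-correct interpolation of one attribute along an edge.
// u is a texture coordinate, z is view-space depth; t is the linear
// screen-space parameter. Endpoint values are made up.
#include <cstdio>

int main() {
    float u0 = 0.0f, z0 = 1.0f;   // near vertex
    float u1 = 1.0f, z1 = 4.0f;   // far vertex

    for (float t = 0.0f; t <= 1.0f; t += 0.25f) {
        // Interpolate u/z and 1/z linearly in screen space...
        float uOverZ = (u0 / z0) * (1 - t) + (u1 / z1) * t;
        float invZ   = (1.0f / z0) * (1 - t) + (1.0f / z1) * t;
        // ...then recover the perspective-correct attribute per pixel.
        float u = uOverZ / invZ;
        // Naive linear interpolation of u, for comparison:
        float uLinear = u0 * (1 - t) + u1 * t;
        std::printf("t=%.2f  correct u=%.3f  naive u=%.3f\n", t, u, uLinear);
    }
}
[/code]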
  7. C++ what other IDE other than VC++

    I'm not a Windows developer, but I believe Eclipse is an option. Also, I've heard great things about Sublime Text 2 which has plugins to integrate with Clang for autocompletion and GDB for debugging. I wouldn't know what the setup process would be like on Windows though.
  8. Software Rasterisation with Fixed Point Math

    I hope no one minds that I bump this. I still can't seem to grasp why the fixed point math in the article is correct. Thanks!
  9. Software Rasterisation with Fixed Point Math

    I was not aware of that article. Thanks for pointing it out to me. If it's as good as you say (which I'm sure it is -- it's Abrash), I may very well implement that instead. However, for my own curiosity, I'd still like to try to understand my original questions.
  10. Hi, I'm trying my hand at writing a software renderer. I've read the [url="http://devmaster.net/forums/topic/1145-advanced-rasterization/"]Advanced Rasterization article by Nick Capens[/url] and it works wonderfully. However, in an attempt to learn the fixed-point representation of floating-point numbers, I tried taking the algorithm he presents before he adds the fill convention and fixed-point math, and writing the fixed-point code myself. My understanding is that the fixed-point representation simply gives more precision (and thus more accuracy), and that's why it's effective. However, I am not seeing a difference (as a note -- I did implement the fill-convention insight Nick outlines, so that is not lacking and thus shouldn't be the reason my code doesn't work properly). I went back to the article and looked for differences between the implementations. I believe I'm missing an insight, because there are some parts of his code that don't make sense to me, and I can only assume these are the reason my code doesn't work correctly. The particular areas of his implementation that don't make sense to me are:
     1. The necessity of the FDX and FDY variables. Supposedly they are the fixed-point representations of the deltas (hence the 4-bit shift), but aren't the DX variables already in 28.4 fixed-point format, since they are computed from the X and Y variables, which are in 28.4 fixed-point format?
     2. The addition of 0xF to the min and max variables on their conversion back from 28.4 fixed-point format. This is essentially adding 0.9999 to the fixed-point number, correct?
     3. Why convert the min and max variables back to normal integers at all? Would it not be equally valid to leave them in 28.4 format, not do the 4-bit shifts when computing the CY variables, and increment the x and y counters by (1 << 4) in the two for loops?
     Thanks ahead of time for any light you can shed on my confusions! (A small sketch of the 28.4 notation is below for anyone following along.)
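     For anyone following along, here is a small sketch of 28.4 fixed point (the helper names are mine, not from the article); in particular it shows that adding 0xF before shifting down implements a round-up, as question 2 suspects:
[code]
// Sketch of 28.4 fixed point: 28 integer bits, 4 fractional bits.
// Helper names are illustrative; they are not from Capens' article.
#include <cstdio>

const int SHIFT = 4;          // fractional bits
const int ONE   = 1 << SHIFT; // 1.0 in 28.4 == 16

int toFixed(float v)   { return int(v * ONE + 0.5f); }    // round to nearest
int floorFixed(int fx) { return fx >> SHIFT; }            // shift down = floor
int ceilFixed(int fx)  { return (fx + ONE - 1) >> SHIFT; } // +0xF first = ceil

int main() {
    int x = toFixed(2.3f); // 2.3 in 28.4 == 37 (i.e. 2.3125 after rounding)
    std::printf("raw=%d floor=%d ceil=%d\n", x, floorFixed(x), ceilFixed(x));
    // prints: raw=37 floor=2 ceil=3
}
[/code]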
  11. C++11

    Thanks for everyone's input. There is definitely a lot of material here, and I'm sure more than enough to get up to speed.
  12. C++11

    With both Clang and GCC now having decent support for C++11, I'd like to start writing hobby projects in C++11. As far as I know, there isn't yet a published book covering C++11, so I'm hoping you'll share your favourite resources on the web for picking it up. I'm competent with C++03, so resources that assume prior C++03 experience are perfectly useful to me. Thanks for your help!
  13. Portability of C++ Compared to C

    [quote name='Antheus' timestamp='1326829278' post='4903737'] This question in such general form doesn't make much sense. [/quote] I think what the OP meant to ask was closer to: "Across different platforms, how standards-compliant are the C++ compilers compared to the C compilers for the same platform?" If this is indeed what the OP meant, I think SiCrane provided an adequate answer.
  14. Portability of C++ Compared to C

    [quote name='Dunge' timestamp='1326824551' post='4903707'] iPhone never supported C, it only support Obj-C (Apple way of beeing "different"). [/quote] That's wrong. Xcode and the iOS SDKs ship with both GCC and Clang, so you're free to use C, C++, or Objective-C. Objective-C is just the language Apple chose to implement the various frameworks they provide (UIKit, Foundation, GLKit, etc.). Yes, you'll most likely have to use Objective-C somewhere in your codebase, but you can minimize that to only where you need it and use C for the rest of your program if that's what you want.
  15. OpenGL iOS OpenGLES memory issue

    Are you by chance accidentally creating a new framebuffer each frame? If you're drawing into a 32-bit pixel format at 480x320, that could account for the 5 KB increase you're seeing. I would also try setting the kEAGLDrawablePropertyRetainedBacking property on your EAGLDrawable to NO.
    I would agree that presentRenderbuffer is most likely not your problem. It might just be that presentRenderbuffer triggers a context flush, so you would be seeing the effects of your previous gl* calls as well -- which means all of your GLES code should be suspect.