bronxbomber92

Member Since 08 Oct 2006
Offline Last Active Nov 30 2012 08:58 PM
-----

#4972681 Asynchronous Loading - Dependencies?

Posted by bronxbomber92 on 23 August 2012 - 11:48 AM

Thanks Hodgman. The only part I'm missing is that without parsing a material, for example, I can't tell what shaders it references. So I guess the parse() method will need to be able to back out and say "I'm missing these dependencies". It could then be pushed to the back of the batch, after the dependencies it needs.

Why not just load all shaders up front and ensure that happens before any material resource loading? You might even get a speed improvement this way, since you'll have better instruction-cache coherency, and the shader compiler may be able to compile multiple shaders in parallel under the hood.
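To illustrate the "back out and re-enqueue" idea, here's a minimal sketch (all names hypothetical — `Request`, `processBatch`, etc. are not from any real engine) of a load batch where a request whose dependencies aren't loaded yet gets pushed to the back of the queue and retried:

```cpp
#include <deque>
#include <set>
#include <string>
#include <vector>

// Hypothetical load request: a resource name plus the names of the
// resources it references (e.g. a material referencing its shaders).
struct Request {
    std::string name;
    std::vector<std::string> deps;
};

// Consume the batch; a request with unmet dependencies says, in effect,
// "I'm missing these dependencies" and is pushed to the back for retry.
std::vector<std::string> processBatch(std::deque<Request> queue) {
    std::vector<std::string> loadOrder;
    std::set<std::string> loaded;
    std::size_t deferrals = 0;  // guards against a dependency that never arrives
    while (!queue.empty()) {
        Request r = queue.front();
        queue.pop_front();
        bool ready = true;
        for (const auto& d : r.deps)
            if (!loaded.count(d)) { ready = false; break; }
        if (ready) {
            loaded.insert(r.name);
            loadOrder.push_back(r.name);
            deferrals = 0;
        } else if (++deferrals <= queue.size()) {
            queue.push_back(r);  // retry after its dependencies
        }
        // else: every remaining request was deferred once -> truly missing
        // dependency; drop it (a real loader would report an error here).
    }
    return loadOrder;
}
```

With a batch enqueued as `material` (which depends on `shader`) followed by `shader`, the material is deferred once and ends up loading after the shader it needs.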


#4972513 Why do I need GLSL?

Posted by bronxbomber92 on 23 August 2012 - 02:15 AM

I don't understand why I need to learn GLSL.
What are some stuff that is not possible to do in normal OpenGL, but possible with GLSL

For modern OpenGL development there is no practical difference between learning GLSL (or, more broadly, shaders; GLSL is simply a language for writing them) and learning OpenGL.

Originally, when OpenGL was conceived, graphics processing units were only capable of performing a fixed set of operations. Thus, OpenGL's API was designed to expose only what the graphics cards were capable of doing, and this is what came to be called the fixed-function pipeline. As time has passed, graphics processing units have evolved. They no longer expose just a fixed set of operations; they still have a small set of fixed operations, but they can now also be programmed (much as we can program our CPUs with C or C++ or Java or <insert favourite language>). The ability to program a GPU is important because it makes what a GPU can do far less limited. However, we need a way to program the GPUs, and hence OpenGL has evolved alongside the GPUs' capabilities; this is why shaders were introduced.

The bottom line is that GLSL is only necessary to learn if you intend to use features beyond the capabilities of the fixed-function pipeline. Roughly, the fixed-function pipeline is only capable of fog, per-vertex lighting, coloring, and texturing. If your requirements are more sophisticated than that, then you need shaders.
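As a concrete example of "beyond fixed function": per-fragment lighting. The fixed-function pipeline only computes lighting at vertices and interpolates the result, while a fragment shader can evaluate the lighting equation at every pixel. A minimal GLSL 1.x fragment shader sketch (the `varying` names are illustrative; a matching vertex shader is assumed to supply them):

```glsl
// Per-fragment diffuse + specular lighting -- something the
// fixed-function pipeline, which lights only per vertex, cannot do.
varying vec3 vNormal;    // surface normal, interpolated from the vertex shader
varying vec3 vToLight;   // surface-to-light vector, view space
varying vec3 vToEye;     // surface-to-eye vector, view space

void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(vToLight);
    vec3 e = normalize(vToEye);
    float diffuse  = max(dot(n, l), 0.0);
    float specular = pow(max(dot(reflect(-l, n), e), 0.0), 32.0);
    gl_FragColor = vec4(vec3(0.1 + diffuse + specular), 1.0);
}
```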


#4972469 Asynchronous Loading - Dependencies?

Posted by bronxbomber92 on 22 August 2012 - 09:42 PM

Perhaps the easiest solution is a serial queue of resource-loading requests that gets consumed by a second thread running concurrently with your main thread. Just enqueue each resource in the same order as your original synchronous code, and your resource loading is now serialized relative to other resources being loaded. A callback notification system can then easily be built on top of this, so other parts of your application will know when the resources they need have been loaded.
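A minimal sketch of that scheme, assuming C++11 (the class and member names are hypothetical): one worker thread drains the queue in FIFO order and fires a completion callback per request. The actual disk I/O is elided.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// One loader thread consuming a serial queue of load requests,
// notifying a callback as each request completes.
class AsyncLoader {
public:
    using Callback = std::function<void(const std::string&)>;

    AsyncLoader() : worker_([this] { run(); }) {}

    ~AsyncLoader() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();  // drains any remaining requests first
    }

    // Enqueue in the same order the old synchronous code loaded things.
    void enqueue(std::string path, Callback onLoaded) {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push({std::move(path), std::move(onLoaded)});
        }
        cv_.notify_one();
    }

private:
    struct Request { std::string path; Callback onLoaded; };

    void run() {
        for (;;) {
            Request r;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;  // done and fully drained
                r = std::move(queue_.front());
                queue_.pop();
            }
            // Real disk I/O for r.path would go here. Requests are
            // serialized, so resources finish in enqueue order.
            r.onLoaded(r.path);
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Request> queue_;
    bool done_ = false;
    std::thread worker_;  // declared last so everything above exists when it starts
};
```

The destructor deliberately drains the queue before joining, so scoping the loader gives you a natural "wait for everything" point.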


#4949929 Good article on Drawing Policy

Posted by bronxbomber92 on 16 June 2012 - 07:00 PM

Christer Ericson has a good write-up on his blog about the bucket-sorting approach he used in God of War 3: http://realtimecollisiondetection.net/blog/?p=86


#4949928 Implement Perspective Correct Texture Mapping

Posted by bronxbomber92 on 16 June 2012 - 06:52 PM

This should be very helpful: http://www.altdevblogaday.com/2012/04/29/software-rasterizer-part-2/. Eric Lengyel's book Mathematics for 3D Game Programming & Computer Graphics, Third Edition has a great explanation, too.

Without getting into the mathematical derivation: the easiest way to do perspective-correct attribute interpolation (such as texture mapping) is to realize that the reciprocal of the view-space z coordinate (which for brevity let's call 1/z-view) can be interpolated linearly in screen space and remain correct under perspective projection. Thus, if you divide your attributes by z-view (attribute/z-view) and linearly interpolate both attribute/z-view and 1/z-view, then at each pixel you can recover the perspective-correct attribute value by multiplying your interpolated attribute/z-view by the reciprocal of your interpolated 1/z-view. And, as the article above states, the w coordinate of our homogeneous coordinates (our vertices after being transformed by the projection matrix) is conveniently equal to z-view.


#4903713 Portability of C++ Compared to C

Posted by bronxbomber92 on 17 January 2012 - 12:30 PM

iPhone never supported C, it only support Obj-C (Apple way of beeing "different").

That's wrong. Xcode and the iOS SDKs ship both GCC and Clang, so you're free to use C, C++, or Objective-C. Objective-C is just the language Apple chose to implement the various frameworks they provide (UIKit, Foundation, GLKit, etc.). Yes, you'll most likely have to use Objective-C somewhere in your codebase, but you can confine it to only where you need it and use C for the rest of your program if that's what you want.

