
maxgpgpu

Member Since 29 May 2009
Offline Last Active Nov 16 2014 09:25 PM

Posts I've Made

In Topic: Game Engine, How do I make one

08 December 2013 - 01:08 AM

 

My opinion is: it depends on your skills and personal interests.

 

If you are sufficiently talented and dedicated, and decide you want to collaborate on an engine, contact me by PM and learn about the engine I'm developing.  This engine is fairly far along: several subsystems are complete and working, it is multi-platform and very high-performance, and it includes some special capabilities that make it potentially more interesting than other engines (if characteristics like "procedurally generated content" interest you).

 

Oh, and a "game engine" is something like a collection of code libraries (usually shared libraries or DLLs) that does most of the actual work in a game, but only those parts not specific to one game.  In other words, the game engine does pretty much everything that needs to be done by more than one out of a hundred games.

Do you have some kind of tutorials on making the game engine?

 

No, sorry, but I don't.  However, there are a couple of fairly good books about game engine design or architecture, which may provide an equivalent (or exactly what you need).  But something about the huge size and scope of "a game engine" makes me think "tutorials" aren't an appropriate format.  Find the most recent editions of those books.  If you're serious, definitely pay for the paper books rather than trying to download them for free via some torrent.  The authors deserve their cut.

 

If you have cool new ideas and want to brainstorm about them, start a personal conversation with someone who has built, or is currently building, an engine.  And don't worry that they will steal your ideas; they won't want to.  Even if they do, the work involved in implementing their engine, much less your new enhancements, is so huge that you need not worry; in fact, you should be happy if they steal and implement them!


In Topic: Game Engine, How do I make one

05 December 2013 - 05:48 PM

My opinion is: it depends on your skills and personal interests.

 

If you are insanely talented, brilliant, energetic and can spend full time on the endeavor for at least many months, if not years (depending on how elaborate an engine you want to create), you can create an engine.  Been there, done that (twice under contract to game companies, once for myself).  So it is possible, but you're in for one hell of a lot of work.  Oh, and even if you do qualify in all the above ways, you should only create a game engine if that is more interesting to you personally than games (which applies to me).

 

If you can't quite meet the above level of abilities, capabilities and time, but you are still interested in game engines far more than games (at least for now), then collaborate on a game engine development project that someone else started.  Believe me, if you are talented and motivated, your collaboration will be greatly appreciated, you will enjoy the process a lot more than doing everything yourself, and you will learn a lot more, a lot faster, when you're working with other talented individuals.  The problem here is finding an engine project that will actually be finished, which means the folks involved must already have the skills (they've done this before) and the dedication (they finish projects they start).  Unfortunately, about 99% of projects mentioned on gamedev are pure fantasy, never finished, or finished but much less grandiose than a serious game engine.  So choose wisely.

 

If you are sufficiently talented and dedicated, and decide you want to collaborate on an engine, contact me by PM and learn about the engine I'm developing.  This engine is fairly far along: several subsystems are complete and working, it is multi-platform and very high-performance, and it includes some special capabilities that make it potentially more interesting than other engines (if characteristics like "procedurally generated content" interest you).

 

But if you're not a natural or highly talented programmer, you shouldn't try to develop a game engine.  Modest games might work.

 

Oh, and a "game engine" is something like a collection of code libraries (usually shared libraries or DLLs) that does most of the actual work in a game, but only those parts not specific to one game.  In other words, the game engine does pretty much everything that needs to be done by more than one out of a hundred games.


In Topic: difficult problem for OpenGL guru

16 September 2013 - 10:10 PM

Hodgman:

 

Okay, rather than quoting your whole message (which would make this one a bit difficult to parse), I'll just ask my follow-up questions here.

 

As far as I know, the conventional output of the vertex shader has not had the following performed:

  1:  perspective division

  2: viewport transformation

 

Nonetheless, I see that your answer might be correct anyway, if the code added to the vertex shader is written properly.  First of all, I've always had a suspicion that gl_Position.w is always 1.000, and that therefore the perspective division doesn't change anything and can be ignored.  However, even if that is not always true (tell me), perhaps my transformed vertices always have gl_Position.z equal to 1.0000 since they are at infinity, and my model-view and projection transformation matrices don't contain anything especially wacko.

 

Then there's the viewport transformation, which it appears can maybe also be ignored due to the way textures are accessed.  What I mean is: I guess the normal output of the vertex shader is clip coordinates (not NDC = normalized device coordinates), BUT if we assume the gl_Position output of the vertex shader always has gl_Position.w == 1.0000, then "clip coordinates" are the same as "NDC" (which would then correspond to what you said).

 

Then the viewport transformation scales the NDC coordinates by the width and height of the framebuffer in order to map the NDC to specific pixels in the framebuffer and depthbuffer.  However, if my vertex shader is not able to directly access the framebuffer or depthbuffer and instead has to access a texture, then there's no reason my vertex shader needs to compute the x,y pixel location in the framebuffer or depthbuffer.  Instead, it needs to compute the corresponding texture coordinates (presumably with "nearest" filtering and no mipmaps, or something like that).  And since the ranges of NDC and texture coordinates differ only by a factor of two (plus an offset), your trivial equation does the trick.

 

Very cool!

 

I guess the only thing this depends upon is... gl_Position.w == 1.0000 (but for objects at distance infinity, I'm betting that's pretty much guaranteed).  I know I should remember, but when is the perspective division value in gl_Position.w != 1.0000?  Gads, I can't believe I forget this stuff... it's only been several years since I wrote that part of the engine - hahaha.
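Let me write down what I think the added vertex-shader code amounts to, plus a reminder to myself of where that w actually comes from (just a sketch; all names are placeholders, not code from my engine):

    #version 330 core
    // Sketch: the usual transform, then the two conversions discussed above.
    uniform mat4 u_modelView;
    uniform mat4 u_projection;
    in vec4 in_position;

    void main()
    {
        vec4 eye  = u_modelView  * in_position;   // eye-space position
        vec4 clip = u_projection * eye;           // a perspective projection writes clip.w == -eye.z,
                                                  // so w == 1.0 only for an orthographic projection
                                                  // (or when -eye.z happens to equal exactly 1.0)
        vec3 ndc  = clip.xyz / clip.w;            // perspective division -> NDC in [-1,+1]
        vec2 tc   = ndc.xy * 0.5 + 0.5;           // NDC -> texture coordinates in [0,1]
        gl_Position = clip;                       // tc is what would index the depth texture
    }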

 

-----

 

I am programming with the latest version of OpenGL and nvidia GTX680 cards (which support the latest versions of OpenGL and D3D), so fortunately I don't need to worry about compatibility with more ancient versions.  But thanks for noting that anyway.

 

-----

 

I don't entirely follow your last section, but I probably don't need to unless you tell me there is some speed or convenience advantage to displaying these star images with quads instead of point-sprites.  Is there?

 

Note that I much prefer to draw computed color values to the framebuffer with the pixel shader rather than just display a point-sprite texture or quad-primitive texture.  That way I can simulate optical aberrations [that are a function of position relative to the center of the field], or even simulate atmospheric turbulence (twinkling of the stars) with procedural techniques.  At the moment I forget how to do this, so I'll have to hit the books and OpenGL specs again.  But what I need in order to compute the appropriate color for each pixel in the 3x3 to 65x65 region is the x,y offset of that pixel from the center of the point-sprite.

 

I suppose the obvious way to do that is to fill the x,y elements in the point-sprite "image" with x,y pixel offset values instead of RG color information (and receive the RGBA color values as separate variables from the original vertex).

 

I sorta maybe half vaguely recall there is a gl_PointSize output from the vertex shader, which would be perfect, because then I can specify the appropriate point-sprite size (1x1, 3x3, 5x5, 7x7, 9x9... 63x63, 65x65) depending on the star brightness.
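Something like this little vertex-shader sketch is what I have in mind (made-up names; the brightness-to-size mapping is just a placeholder, and GL_PROGRAM_POINT_SIZE has to be enabled on the application side for gl_PointSize to be honored):

    #version 330 core
    // Sketch: choose the point-sprite size from the star's brightness.
    uniform mat4 u_viewProjection;

    in  vec4  in_position;
    in  vec4  in_color;        // star RGBA from the VBO
    in  float in_brightness;   // some per-star brightness measure
    out vec4  v_color;

    void main()
    {
        v_color      = in_color;
        gl_PointSize = clamp(1.0 + 8.0 * in_brightness, 1.0, 65.0);   // 1x1 ... 65x65 pixels
        gl_Position  = u_viewProjection * in_position;
    }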

 

I sorta maybe half vaguely also recall there is a gl_PointCoord input to the pixel shader that the GPU provides to identify where in the point-sprite the current pixel is.  If so, that's perfect, because then the pixel shader can compute the appropriate brightness and color to draw each screen pixel based upon the original vertex RGBA color (which presumably is passed through and not interpolated, since there is only one such value in a point) and the gl_PointCoord.xy values, plus a uniform variable that specifies "time" to base the twinkling on.
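And the pixel-shader side might look roughly like this (again made-up names; the falloff and twinkle math is only a stand-in for the real optics, and a per-star phase would be needed so the stars don't all twinkle in unison):

    #version 330 core
    // Sketch: shade each pixel of the point-sprite from its offset from the sprite centre.
    uniform float u_time;     // drives the twinkling
    in  vec4 v_color;         // star color, constant across the point-sprite
    out vec4 fragColor;

    void main()
    {
        // gl_PointCoord runs from (0,0) to (1,1) across the sprite, so this is the
        // offset from the sprite centre in the range [-0.5, +0.5] on each axis.
        vec2  offset  = gl_PointCoord - vec2(0.5);
        float falloff = exp(-dot(offset, offset) * 16.0);      // fake blur/bloom disc
        float twinkle = 0.9 + 0.1 * sin(u_time * 13.0);        // fake atmospheric turbulence
        fragColor = vec4(v_color.rgb * falloff * twinkle, v_color.a);
    }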

 

Oh, and I guess I'll need to have the vertex shader output the NDC of the vertex unless the screen-pixel x,y is available to pixel shaders (which I don't think it is).  Hmmm... except I need to multiply by the number of x and y pixels in the frame buffer to make the value proportional to the off-axis angle.
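A tiny sketch of that last step, assuming I pass the NDC down as a varying and hand the framebuffer size in as a uniform (made-up names; the output is just a placeholder, and now that I write it down, maybe gl_FragCoord.xy already gives the window-space pixel position directly):

    #version 330 core
    // Sketch: turn the star's NDC into pixel units so it is proportional to off-axis angle.
    uniform vec2 u_framebufferSize;   // framebuffer width, height in pixels
    in  vec2 v_ndc;                   // the vertex's NDC.xy, written by the vertex shader
    in  vec4 v_color;
    out vec4 fragColor;

    void main()
    {
        vec2 offAxisPixels = v_ndc * 0.5 * u_framebufferSize;   // pixels from the screen centre
        fragColor = v_color;   // ... offAxisPixels would feed the aberration model here
    }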

 

Getting close!

 

Thanks for your help.


In Topic: difficult problem for OpenGL guru

16 September 2013 - 04:08 PM

 


#1:  We need to make the rendering of an extended region of the screen conditional upon whether a single vertex/point in the depth-buffer has been written.  I infer that the many pixels in a point-sprite rendering are independently subjected to depth tests by their fragment shaders, and therefore the entire point-sprite would not be extinguished just because the center happened to be obscured.  Similarly, I do not see a way for a vertex shader or geometry shader to discard the original vertex before it invokes a whole bunch of independent fragment shaders (either to render the pixels in the point-sprite, or to execute a procedural routine).
As you've already discovered: Use point sprites and disable depth testing.

To selectively discard a vertex, either return the actual transformed vertex, or return an invalid/off-screen vertex for vertices to be discarded, such as vec4(0,0,0,0)

 

 

 


#2:  It appears to me that vertex-shaders and geometry-shaders cannot determine which framebuffer pixel corresponds to a vertex, and therefore cannot test the depth-buffer for that pixel (and discard the vertex and stop subsequent steps from happening).
Disable hardware depth testing and implement it yourself in the vertex shader. Bind a texture to the vertex shader containing the depth values, and perform the comparison yourself.

 

 

You say, "To selectively discard a vertex, either return the actual transformed vertex, or return an invalid/off-screen vertex for vertices to be discarded, such as vec4(0,0,0,0)".  That sounds correct to me.  What I don't understand is:

 

#1:  How can my vertex shader know where in the framebuffer and depthbuffer the vertex will fall?
#2:  And if you have an answer to the previous question, how can my vertex shader access that value in the depthbuffer to determine whether it has been written or not?

 

If you have answers to these two questions, I guess the values I will receive back from the depthbuffer will be 0.000 to 1.000 with 1.000 meaning "never written during this frame".

 

-----

 

You say, "Disable hardware depth testing and implement it yourself in the vertex shader. Bind a texture to the vertex shader containing the depth values, and perform the comparison yourself".  Okay, I take this to mean you have a valid answer to question #1 above, but not #2 above (in other words, you do not know any way for my vertex shader to read individual x,y locations in the framebuffer or depthbuffer.  And therefore you propose that after rendering the conventional geometry into the framebuffer and depthbuffer, I should then call OpenGL API functions to copy the depth-buffer to a "depth-texture" (a texture having a depthbuffer format), then draw all these VBOs full of vertices with a vertex shader that somehow computes the x,y location in the framebuffer and depthbuffer each vertex would be rendered to, and on the basis of the depth value, draw the point-sprite if (depth < 1.000) and otherwise throw the vertex to some invisible location to effectively make the vertex shader discard the entire point-sprite.

 

Do I have this correct?  If so, two questions:

 

#1:  How does the vertex shader compute the x,y location in the depth texture to access?
#2:  Is the value I get back from the depth-texture going to be a f32 value from 0.000 to 1.000?  Or a s16,u16,s24,u24,s32,u32 value with the largest positive value being equivalent to "infinity" AKA "never written during this frame"?

 

Thanks for helping!


In Topic: difficult problem for OpenGL guru

16 September 2013 - 03:49 PM

maxgpgpu:

#2: geometry shader indeed CAN discard the vertices by simply not emitting any primitives. Any shader stage can read the depth texture as ADDMX suggested.

 

#1: I don't quite understand; you keep mixing per-fragment and per-vertex depth-tests. Remember that vertex/geometry/hull/tessellation shaders operate on vertices; they know nothing about the final fragments that a rasteriser might generate. They can, however, project anything anywhere and sample any textures they like. Only the geometry shader has the ability to emit nothing and effectively exit the pipeline.

 

I assume you don't want to discard a whole primitive (2-triangle sprite) based on its centre. In such a case, where you need a per-fragment depth-test, you'll need to do it in the fragment shader, and the above does not help.

 

Nevertheless, you might still do some kind of conservative geometry-shader killing, for example using some kind of conservative Hi-Z / "mip-mapped" depth texture and a conservative AABB of the final primitive, or something similar, where only a couple of texture samples would be enough to safely tell that the whole primitive is "behind".

 

 

You say "Any shader stage can read the depth texture".  Do you mean the vertex or geometry shader can read individual depth-values from any x,y location in the depth-buffer?  How?  What does that code look like?  Or if you only mean to say the entire depth-buffer can be copied to a "depth texture" (of the same size), what does that code look like?  I understand the general process, but never seem to understand how the default framebuffer or its depth buffers can be specified.

 

-----

 

Yes, I probably do sound like I'm "mixing per vertex and per fragment depth tests" in my discussion.  Actually, it only seems that way, and that's my problem.  What I need is for each vertex in the VBO to be depth-tested, but the entire 3x3 to 65x65 pixel point sprite must be drawn or not drawn on the basis of that one test.  Of course the depth-test of that vertex needs to be performed against the value in the depth-buffer where that vertex would be drawn, but as far as I understand, the vertex shader doesn't have a clue at that stage of the pipeline which x,y pixel in the framebuffer or depth-buffer the vertex will fall on.

 

Though the vertex shader can't "discard" a vertex the way a fragment shader can discard a pixel, it can change the vertex's coordinates to ensure it ends up far behind the camera/viewpoint, right?  So that may be one way of effectively performing a discard in the vertex shader (for points only, which is what we're dealing with here).  Or do you think that's a stupid idea?
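(Something like this one-liner inside the vertex shader's main() is what I have in mind; any position guaranteed to be outside the clip volume should do the job:)

    // Kill the whole point-sprite: move the vertex somewhere it will be clipped away,
    // e.g. beyond the far plane (pushing it behind the camera works the same way).
    gl_Position = vec4(0.0, 0.0, 2.0, 1.0);   // z/w == 2.0 lies outside [-1,+1]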

 

-----

 

You say, "I assume you don't want to discard a whole primitive (2 triangle sprite) based on its centre".  That is precisely what I need to do!!!!!  And that is what makes this problem difficult (for me, and maybe for anyone).  Read the example I gave in my previous reply (to ADDMX) for an example.  I need to discard (not draw) the entire point-sprite if the vertex (the center of the point-sprite) has been drawn to during the previous normal rendering processes.

 

This is the correct behavior of the process we're talking about here.  Consider a star, for example, or a streetlight or airplane landing lights many miles away.  They are literally (for all practical purposes) "point sources" of light.  However, in our eyeballs, in camera lenses, on film, and on CCD surfaces, a bright pinpoint of light blooms into a many-pixel blur (or "Airy disc" if the optical system is extraordinarily precise).  So, when the line-of-sight to the star or landing-lights just barely passes behind the edge of any object, even by an infinitesimal distance, the entire blur vanishes.

 

This is the kind of phenomenon I am dealing with, and must represent correctly.  So this is the physical reason why I must in fact do what you imagine I can't possibly want to do, namely "discard the whole primitive (a largish point-sprite) based upon its center".  And thus I do NOT want a "per fragment depth test", unless somehow we can perform a per-fragment depth test ONLY upon the vertex (the exact center of the point-sprite), then SOMEHOW stop all the other pixels of the point-sprite from being drawn.  I don't think that's possible, because all those pixels have already been created and sent to separate shader cores in parallel with the pixel at the exact center of the point-sprite.  That is, unless I don't understand something about how the pipeline works in the case of point-sprites.

 

I don't understand your last paragraph, but that probably doesn't matter, because it appears I am trying to do something you think I can't possibly want to do!  Hahaha.

