
Member Since 18 Nov 2005
Offline Last Active Oct 23 2014 04:41 PM

Topics I've Started

glDrawElementsInstanced not drawing

27 September 2014 - 12:51 PM

I'm pretty sure I have the code set up properly to do this. My graphics card supports OpenGL 4.0, but I'm still using a GL 2.0 context.

Can I still use the extension? My code compiles and GLEW seems to find the extension's function pointer. Since it doesn't crash, I assume everything is fine and I just have to debug this, but I wanted to make sure.

Nvidia GI Hardware Support

24 September 2014 - 01:45 PM

I saw the Nvidia moon-landing GI demo and the corresponding GI demo on the Nvidia YouTube channel.


They mention this is coming with their new GPU; is this a hardware-supported feature? Does anyone know how it will integrate with other software if that's the case?

Also, in the demo with the sphere, they show the voxel data. I'm assuming the pixel shader performs a raycast into the voxel data to find the closest voxel? I've never worked with voxels before, so I'm not sure how that works. Is the data on the GPU a 3D texture, and does it just ray cast from the fragment's world position + normal until it hits a non-blank texel in the 3D texture, then use that information?

If so, wouldn't that 3D texture be pretty big? For that small scene it looks like it is about 64x64x64, if it is a 3D texture. Maybe I just don't know enough about voxels. I remember the latest Crysis tech PDF mentioning their GI work. Were they doing the same type of thing then?

Debugging dip in FPS

23 September 2014 - 10:14 PM

OpenGL btw.

I'm trying to determine what is causing the issue below. If you look at my two images, one is at an angle and one is straight on.


You are seeing only 2 triangles. The rest of my terrain is still rendered behind me with no culling, so you can see the vert count is still the same. The side view actually renders FEWER pixels as well, but still has the much slower frame rate.


What I tried:

1.) I used to use anisotropic filtering on everything; I disabled it. I figured the cost was extra aniso samples when the view angle was steep, but that was not the case. With aniso on, though, you see the same type of thing happen, just with both framerates much lower. I swear it has to be this, unless my driver has started forcing anisotropic filtering on; I do believe my application is controlling it however it wishes.


2.) By only showing this portion of the terrain, any big textures (splats, lightmaps, etc.) that are 4096 or so stretched over the terrain would not be sampling across any big jumps (mip-map caches, etc.).


3.) It's not pixel overdraw; it's only 2 triangles.


4.) It's not some other issue like clipping, as both images clip.


My guess is it has to be a texture sampling thing, because it clearly is not a pixel shader cost: both views run the same shaders, and the one outputting FEWER pixels is SLOWER. I'm done for the night, but are there any other thoughts? I'm going to have to really dig in and debug with simpler shaders/textures/filter parameters to see what is going on.

floating point texture, heightmap

16 September 2014 - 07:48 PM

I'm creating an FBO heightmap texture that I can paint to for a terrain editor.


glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, NULL);



Attached an image. It does look like it is validly floating point. What you see on the right is a height brush stamped into the heightmap. When I draw it a second time with blending set to GL_ONE, GL_ONE (as if I placed the same brush twice), it blows up to huge positive and negative values. The vertices stretch crazily high into the air (maybe +FLT_MAX) and even go negative (which should be impossible, because the brush is a GL_RGB texture in the 0-1 range). How it even went negative makes no sense.


The expected result would be the original brush + the second splat = terrain twice as tall. Not sure what's going on. You can see that the bad heightmap has black specks inside it, yet it was the same brush placed on the heightmap twice.

C++ to Java/C#

09 September 2014 - 11:02 PM

I've been working in C++ for 8 years. I used Java to make a simple Android game, and one of the games I worked on professionally was partially C++, partially C#.

I'm wondering, as a resume thing, when I should put C#/Java on my resume. They are all very similar languages to me. I recently picked up some books on the newer languages, and I'm on page 130 of a 700-page book (Programming C# 4.0), but I'm wondering what specifically makes you "know" C# (i.e., enough to put it on my resume). Is it simply knowing the .NET library classes vs. the C++ STL?


I would assume that with my C++ background, after reading this book and messing with the samples, I could fairly easily put C# on my resume and say I'm well versed in it, but then the same question comes up when I brush up on Java. At what point am I well versed in Java? I feel the general idea is that you are a software engineer, period, but then you see jobs that ask specifically for Java or C# or, say, JavaScript. Now JavaScript is obviously pretty different (at least from the little bit I know), so it again comes down to: what makes you an 'X language' programmer? Is it really just that you are a software engineer plus you have worked in that language enough to know the standard syntax and the libraries/interfaces provided?


In other words, would just reading these books be enough to really say I'm well versed? Or, if you are versed in C++, can't you almost by default put Java/C# on your resume? At what point do you?