JPulham

Members
  • Content count

    172
  • Joined

  • Last visited

Community Reputation

120 Neutral

About JPulham

  • Rank
    Member
  1. Try this paper. The whole site is worth a read - the author has papers on ocean, terrain, vegetation, atmosphere, rivers, clouds, etc.: http://www-evasion.imag.fr/Membres/Eric.Bruneton/. The paper describes rendering roads, lakes, etc. on height-map terrain. In a preprocessing step you render a strip of polygons following your road spline into three 'feature' textures: colour, height and blend factor. When rendering the height map you then blend between the terrain colour/height and the feature colour/height using the blend factor you rendered earlier. The paper explains it more fully. *edited for clarity
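The blend step described above can be sketched on the CPU for clarity (a minimal illustration with my own names, not code from the paper; in practice this would be a mix() between the two samples in the terrain shader):

```cpp
// Blend a terrain sample with a road/lake 'feature' sample using the
// precomputed blend factor: 0 = pure terrain, 1 = pure feature.
// FeatureSample and blendFeature are illustrative names, not from the paper.
struct FeatureSample { float color[3]; float height; };

FeatureSample blendFeature(const FeatureSample& terrain,
                           const FeatureSample& feature,
                           float blend)
{
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    FeatureSample out;
    for (int i = 0; i < 3; ++i)
        out.color[i] = lerp(terrain.color[i], feature.color[i], blend);
    out.height = lerp(terrain.height, feature.height, blend);
    return out;
}
```

With blend = 0 you get the untouched terrain, with blend = 1 the full road/lake data, and the edges of the rendered strip fade smoothly between the two.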
  2. distance field shadow maps

    damn... I had an idea like this after reading how Valve does decals in Team Fortress. Every good idea I have is already taken :P Looks good - I'll be interested to follow your progress.
  3. I'm trying to reconstruct world-space position from view-space depth, following the tutorial here. I'm not sure I'm using the correct matrices: instead of static RGB values, the colours 'rotate' as I rotate or translate my view. Can anyone see any obvious errors or give me any pointers? I have a G-buffer storing spherical-coordinate normals and depth in an RGBA16F texture. Here's the code...

     demo.cpp:

     [on resize]
     float tanFovDiv2 = tan(mFOV * (M_PI / 180.0f)) / 2.0f;
     mFarHeight = tanFovDiv2 * mFar;
     mFarWidth  = mFarHeight * aspect;

     [on draw - fullscreen quad for directional light]
     glUseProgram(mLightShaderDirectional.getProgramId());
     glUniform1i(mLightShaderDirectional.uniform_location("gbuffer"), 0);
     glUniform3f(mLightShaderDirectional.uniform_location("eye_pos"), mPos[0], mPos[1], mPos[2]);
     glUniform3f(mLightShaderDirectional.uniform_location("camera"), mFarWidth, mFarHeight, mFar);
     glBegin(GL_QUADS);
         glVertex2f( 1.0f,  1.0f);
         glVertex2f( 1.0f, -1.0f);
         glVertex2f(-1.0f, -1.0f);
         glVertex2f(-1.0f,  1.0f);
     glEnd();

     directional light shader:

     [vertex shader]
     uniform vec3 camera;
     varying vec3 corner;

     void main()
     {
         gl_TexCoord[0].xy = gl_Vertex.xy * 0.5 + 0.5;
         corner = vec3(-camera.x * gl_Vertex.x, -camera.y * gl_Vertex.y, camera.z) * mat3(gl_ModelViewMatrixInverse);
         gl_Position.xy = gl_Vertex.xy;
     }

     [fragment shader]
     uniform sampler2D gbuffer;
     uniform vec3 eye_pos;
     varying vec3 corner;

     vec3 SphericalToCartesian(vec2 spherical)
     {
         vec2 sinCosTheta, sinCosPhi;
         spherical = spherical * 2.0 - 1.0;
         float sc = spherical.x;
         sinCosTheta.x = sin(sc);
         sinCosTheta.y = cos(sc);
         sinCosPhi = vec2(sqrt(1.0 - spherical.y * spherical.y), spherical.y);
         return vec3(sinCosTheta.y * sinCosPhi.x, sinCosTheta.x * sinCosPhi.x, sinCosPhi.y);
     }

     void main(void)
     {
         vec3 light_dir = normalize(vec3(1.0, 1.0, 0.5));
         vec3 gbu = texture2D(gbuffer, gl_TexCoord[0].xy).xyz;
         vec3 norm = SphericalToCartesian(gbu.xy);
         vec3 eyeVec = -((corner * gbu.z) + eye_pos);
         float specular = max(pow(dot(reflect(normalize(eyeVec), norm), light_dir), 100.0), 0.0);
         float NL = dot(norm, light_dir);
         //gl_FragColor = vec4(vec3(NL), specular * NL);   // DEBUG output
         //gl_FragColor = vec4(vec3(specular * NL), 0.0);
         //gl_FragColor = vec4(norm, 0.0);
         //gl_FragColor = vec4(corner, 0.0);
         //gl_FragColor = vec4(eye_pos, 0.0);
         gl_FragColor = vec4(eyeVec, 0.0);
         //gl_FragColor = vec4(vec3(gbu.z), 0.0);
     }

     Cheers
  4. Just place the script into your scripts directory and Blender will detect it when it starts up. On Windows the directory is: C:\Documents and Settings\USERNAME\Application Data\Blender Foundation\Blender\.blender\scripts or, for Vista: C:\Users\USERNAME\AppData\Roaming\Blender Foundation\Blender\.blender\scripts where USERNAME is... well... ;) Under Linux the directory is: ~/.blender/scripts
  5. Making Origami Yacht/Plane??

    You shouldn't be trying to rotate the triangles; you should decide where each corner will be AFTER the rotation and set that as the value. If that doesn't make much sense, I recommend you get a 3D modeller and make the shape you're after - you can then copy the coordinates from there into your OpenGL code. Blender and Wings 3D are good free modellers. Hope I helped ;)
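To illustrate the 'set the rotated value directly' idea: you can compute where a vertex lands after the rotation once, offline, and store that result as the coordinate. A hedged sketch (Vec3 and rotateZ are my own illustrative names, not from the thread):

```cpp
#include <cmath>

// A point's position after a rotation of `angle` radians about the Z axis.
// Run this once (or do the equivalent in a modeller) and hard-code the
// results as the triangle's corner coordinates.
struct Vec3 { float x, y, z; };

Vec3 rotateZ(const Vec3& p, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { p.x * c - p.y * s,    // standard 2D rotation in the XY plane
             p.x * s + p.y * c,
             p.z };                // Z is unchanged
}
```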
  6. pseudo-random game content - Where to start

    You could have a probability function that takes physical laws into account, i.e. the centre of a galaxy's arm or its central bulge has a higher probability of spawning a star. So for each star in a cell:
    * generate a point;
    * get a random number;
    * if the number is greater than the threshold, generate a star at that point.
    In the centre of a galaxy the threshold would be 0-0.2 (where the random number is in the range 0-1); at the outer edge the threshold would be much higher, maybe 0.8 or 0.9. The point is it would be random, but the randomness would be shaped by several probability functions multiplied/added/otherwise combined together. Off the top of my head - distance from the centre of the galaxy, distance from the galactic plane (changing at the bulge) and distance from a Fibonacci spiral (the arms).
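The steps above can be sketched like this, using only distance from the centre and a linear ramp between the 0.2/0.9 thresholds mentioned (Star, generateStars and the ramp shape are my own illustrative choices):

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Accept/reject star placement: the threshold rises with distance from the
// galactic centre, so more candidate points survive near the bulge.
struct Star { float x, y; };

std::vector<Star> generateStars(int attempts, float galaxyRadius, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> pos(-galaxyRadius, galaxyRadius);
    std::uniform_real_distribution<float> roll(0.0f, 1.0f);
    std::vector<Star> stars;
    for (int i = 0; i < attempts; ++i) {
        float x = pos(rng), y = pos(rng);           // generate a point
        // Normalised distance: 0 at the centre, clamped to 1 at the edge.
        float dist = std::min(std::sqrt(x * x + y * y) / galaxyRadius, 1.0f);
        // Threshold ramps from 0.2 (centre) to 0.9 (edge).
        float threshold = 0.2f + 0.7f * dist;
        if (roll(rng) > threshold)                  // number greater than threshold => spawn
            stars.push_back({x, y});
    }
    return stars;
}
```

The same accept/reject loop extends naturally to the other shaping functions - multiply in terms for distance from the galactic plane and distance from a spiral, and the combined threshold sculpts the random scatter into a galaxy.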
  7. I hate blender...

    Quote: If they cleaned up their user interface and made the tools more accessible this is the one I would go for. Not to drag up an old thread, but Blender 2.5 alpha 0 is out now. A lot of features are still being rewritten, but it has custom toolbars/keymaps, new icons & controls, and the new renderer can finish the same benchmark twice before the old renderer finishes once, plus volumetrics and smoke simulation... that 'catching up with the big leagues' seems to be happening - can't wait till mid-2010 when it'll be complete.
  8. You could multiply by the falloff of the light and end up with even fewer pixels to calculate. I don't know how much of a performance gain this would give, if any, but it's worth a go. @mokaschitta: you could do it with one shadow buffer; however, if you're implementing any kind of shadow caching you'll need one per light, obviously :P. Depends on your architecture, I guess.
  9. Thanks for the reply, but I understood that the keyboard & mouse would be handled in the game logic thread, therefore requiring a window and thus a display connection, while OpenGL render calls were made from a separate thread? Have I missed the point here? Maybe I wasn't clear enough...
  10. From reading around I have figured out that to do rendering in a separate thread from the game logic, you create a window and then create the context in the render thread - at least that's how it works with WGL. Is it the same with GLX? Do I pass the display, visual and window from the window-creation code to the render thread and call glXCreateContext/glXMakeCurrent from there? Cheers.
  11. OpenGL Optimising my renderer.

    Yeah, I agree, totally awesome... @Jason Z: What would that entail? Would you have two command buffers and 'double-buffer', or maybe just a queue that the graphics thread looks at? Cheers
  12. OpenGL Optimising my renderer.

    Instead of a list, use a binary tree - that combines inserting and sorting; it's then simply a case of rendering each node, left first then right, recursively. You could add some kind of RenderState to each node, watch for changes in the insert function, and then check a hasChanged flag during rendering to see if you need to change the render state. That's what I'm doing (sort of) with my renderer.
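A rough sketch of that insert-sorted tree, assuming a simple integer sort key such as a material id (RenderNode and the field names are mine; a real version would carry the RenderState/hasChanged bookkeeping described above):

```cpp
#include <memory>
#include <vector>

// Binary tree ordered by sortKey (e.g. shader/material id). Inserting keeps
// the draw list sorted; an in-order traversal (left, node, right) then
// visits draws in ascending key order, so equal states end up adjacent.
struct RenderNode {
    int sortKey = 0;   // render-state ordering key
    int meshId  = 0;   // what to draw
    std::unique_ptr<RenderNode> left, right;
};

void insert(std::unique_ptr<RenderNode>& node, int sortKey, int meshId) {
    if (!node) {
        node = std::make_unique<RenderNode>();
        node->sortKey = sortKey;
        node->meshId  = meshId;
        return;
    }
    // Smaller keys go left, larger-or-equal keys go right.
    insert(sortKey < node->sortKey ? node->left : node->right, sortKey, meshId);
}

void render(const RenderNode* node, std::vector<int>& drawOrder) {
    if (!node) return;
    render(node->left.get(), drawOrder);   // left subtree first...
    drawOrder.push_back(node->meshId);     // ...then this node (issue draw here)
    render(node->right.get(), drawOrder);  // ...then right subtree
}
```

One caveat of the approach: if draws arrive already sorted, the tree degenerates into a list, so a balanced tree (or just sorting a flat array per frame) may behave better in practice.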
  13. I am trying to compute the view vector for a simple GLSL raycaster. I know how to compute the view vector in the vertex shader for a normal object, but how do I do it with a fullscreen quad? I assume I have to use the inverse projection matrix or something to unproject a screen point, but can't seem to find what I need. Cheers.
  14. Quote: Original post by bubu LV: If you are using OpenGL, then you can use the z-buffer texture both as input and as the actual depth buffer, as long as it is read-only. Sorry to steal the thread for a moment... How would that look in terms of FBO attachments? Would you use a separate depth attachment along with a normal texture attachment?
  15. What's the fastest/best way to render to an offscreen buffer (i.e. for an offline renderer)? PBuffers, FBOs, or platform-specific stuff (PFD_DRAW_TO_BITMAP & GLXPixmap)?