

#4948291 Clouds in OpenGL?

Posted by Yours3!f on 11 June 2012 - 02:34 PM

http://lmgtfy.com/?q...cloud rendering


#4941858 Blinn-Phong with Fresnel effect.

Posted by Yours3!f on 21 May 2012 - 02:34 AM

here's the relevant wiki article:
here's Schlick's original paper from 1994:
and if you're successful with implementing the Schlick method, then you may proceed to more advanced approximations:

so for you I think the fresnel term should be:

// Schlick's Fresnel approximation
float base = 1.0 - dot(f_viewDirection, halfWay);
float fresnelExp = pow(base, 5.0); // named to avoid shadowing the built-in exp()
float fresnel = fZero + (1.0 - fZero) * fresnelExp;

I've taken a look at the first article (as well as several others that mention it), though I haven't checked the original paper yet. Anyhow, the code you wrote is exactly the same as the one I posted; mine is just rewritten. Both of them give: f0 + exp - f0*exp

oh, ok, then you just need to find out how to apply the Fresnel term, because you calculated it correctly...
maybe try to add it:
outputColor.rgb =  ambientReflection + diffuseReflection + specularReflection + fresnel;

#4939092 Position Reconstruction from Depth in OpenGL

Posted by Yours3!f on 10 May 2012 - 01:05 PM

yep, I describe the frustum corner method. I made it work in view space, but MJP describes a world-space method as well. (so I guess the frustum one doesn't work in world space)

OMG!! I didn't know those were your articles! They're awesome! I honestly take my hat off to you!

Just want to confirm: with your method #2 I wouldn't need to deal with frustum corners, right? Multiplying view_vector by linear_depth would give me the reconstructed position in view space.

Also, could you suggest some article where I can learn more about view/projection spaces, to get a better idea of how it works and why the depth is projected into the [-far, -near] range?

Thank you!

ummm... I think you misunderstood me. That is not my article. I just implemented it. That article belongs to MJP: http://www.gamedev.n...ser/118414-mjp/

you DO have to deal with it; that is the basis of the reconstruction. If you read MJP's article #3 ("back in the habit") you'll understand it. The view vector is actually constructed from the coordinates of the far plane of the view frustum, which are interpolated across the whole screen.

just a couple of articles:
http://www.glprogramming.com/red/ ---> this is a whole book (although it is for OGL 1.1, some parts are still relevant like the article above)

this is an awesome book about the maths behind all this:
although it describes the maths in a left-handed coordinate system, I'm sure that after reading it you'll be able to apply it in the OpenGL world's right-handed coordinate system.

#4938643 Position Reconstruction from Depth in OpenGL

Posted by Yours3!f on 09 May 2012 - 05:54 AM

Hi Everybody,

I'm on a long journey of implementing deferred shading. I've got a clear idea of the algorithm, but I have some trouble reconstructing object positions. I've followed this article http://mynameismjp.w...ion-from-depth/ but still easily get lost. There are too many uncertainties, so I want to go over it step by step and make sure I did not make any mistakes in the earlier steps.

Storing the depth into a texture:

// Depth Pass Vertex Shader

varying float depth;
void main (void)
{
	 gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
	 vec4 pos = gl_ModelViewMatrix * gl_Vertex;
	 depth = pos.z;
}

// Depth Pass Fragment Shader
varying float depth;
uniform float FarClipPlane;
void main (void)
{
	 float d = -depth / FarClipPlane;
	 gl_FragColor.r = d;
}

My understanding is that this stores the linear depth value into the texture. Is that correct?

Thank you,

Also, I easily get lost when it comes to the perspective division. Can somebody suggest an article where I can learn about it?

Ashaman73 is correct; however, you seem to be having trouble with the theory itself.
before reading take a look at this:
namely method #2

Ok so you store the depth like this:
[vertex shader]

varying float depth;
const float far_plane_distance = 1000.0; //far plane is at -1000


//transform the input vertex to view (or eye) space
vec4 viewspace_position = modelview_matrix * input_vertex;

//this is not perspective division but scaling the depth value.
//initially you'd have depth values in the range [-near ... -far distance]
//but you have to linearize them, so that you get evenly distributed precision
//this is why you divide by the far plane distance
//and you divide by the negative of it, so that you can
//check the result in the texture
depth = viewspace_position.z / -far_plane_distance;

[pixel shader]

varying float depth;
out vec4 out_depth;


//store the depth in the R channel of the texture
//assuming you chose an R16F or R32F format
out_depth.x = depth;

and retrieve (decode) the depth like this:
[vertex shader]

varying vec4 viewspace_position;
varying vec2 texture_coordinates;


//store the view space position of the far plane
//this gets interpolated as you could see in the mjp drawing
viewspace_position = modelview_matrix * input_vertex;
texture_coordinates = input_texture_coordinates;

[pixel shader]

varying vec4 viewspace_position;
varying vec2 texture_coordinates;
uniform sampler2D depth_texture;
const float far_plane_distance = 1000.0f;


//sample depth from the texture's R channel
//this will be in range [0 ... 1]
float linear_depth = texture(depth_texture, texture_coordinates).x;

//you have to construct the view vector at the extrapolated viewspace position
//and scale it by the far / z, but this is unnecessary if you use the far plane as the input viewspace position
//since the division will give you 1
//the z coordinate will be the far distance (negative, since we're in viewspace)
vec3 view_vector = vec3(viewspace_position.xy * (-far_plane_distance / viewspace_position.z), -far_plane_distance);

//scaling the view vector with the linear depth will give the true viewspace position
vec3 reconstructed_viewspace_position = view_vector * linear_depth;

hope this helped :) I had trouble with reconstructing the position too, so search for my older threads on gamedev.

#4937739 bump mapping in deferred renderer

Posted by Yours3!f on 06 May 2012 - 03:42 AM

Interpolated normals do not stay normalized. This is a basic thing that is done for every light model: the normal is renormalized in the pixel shader.

Try to lerp between [1,0,0] and [0,1,0] and you'll clearly see the unnormalized result.
But the real question is: is this needed for normal mapping, or is the error too small to be noticed?

well, then that should be done at the pixel shader stage...

well, I haven't noticed any visible errors, so I guess the error is negligible.
When I simply displayed the normals, some error was visible: when you normalize in the vertex shader, the normals in the end are not unit vectors but slightly shorter, which makes the lighting a little darker. (see the cubes in the pictures)

Attached Thumbnails

  • pixel.png
  • pixel_light.png
  • vertex.png
  • vertex_light.png

#4928309 bump mapping in deferred renderer

Posted by Yours3!f on 04 April 2012 - 03:57 PM

Ok, I think I got it right, but please confirm.

I had to change the vertex shader to this:
#version 410
uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;
in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;
out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;
void main()
{
  vec3 normal = normalize(m3_n * v3_normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = normalize(cross(normal, tangent));
  vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
		  tangent.y, bitangent.y, normal.y,
		  tangent.z, bitangent.z, normal.z ));
  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}

the issue was the column-major tbn matrix vs the row-major one, which I used in the previous vertex shader.

I attached an image about the normals, but please confirm if it looks right.

Attached Thumbnails

  • normals.png

#4913138 Posssible memory leak in OpenGL Intel Win7 drivers

Posted by Yours3!f on 14 February 2012 - 04:23 PM

@Yours3!f - glMapBuffer doesn't work that way. I strongly suggest that you read the documentation for it. The "fix" you gave will actually crash the program (if you're lucky).
@MarkS - the code does make sense. That's the way glMapBuffer works, and it's correct and in accordance with the documentation (aside from an extra unnecessary glBindBuffer which shouldn't be a cause of the observed leak).

@OP: have you tried this using glBufferSubData instead of glMapBuffer? It would be interesting to know if the same symptoms are observed with that. The following should be equivalent:

void mapAndCopyToBuffer(char* img1)
{
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pixelbufferHandle);
	glBufferSubData (GL_PIXEL_UNPACK_BUFFER_ARB, 0, w * h * s, img1);
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);
}

Oh - and make sure that the value of s is 4. ;)

hehe, now you got me. I didn't read the specs for glMapBuffer, but now I see clearly how it works... :)
I just skimmed the code quickly for error-prone stuff, and didn't catch the glMapBuffer.

I hope the crash course I gave on pointers at least cleared some things up in his head :)

#4913132 Posssible memory leak in OpenGL Intel Win7 drivers

Posted by Yours3!f on 14 February 2012 - 04:13 PM

Just to be sure, I tried to delete[] mappedBuf now (before the statement where I set it to NULL). It crashes.

ok, so I was wrong :) but you're still doing pretty interesting things with those pointers...

I tried to decipher what you're trying to do with the pointers, so here it is:

img = 0
img points to where texdata1 does
img points to the write-only memory from where OGL will take back the texture data
fill img with 0-s
copy img-s content back to GPU (unmapping to pixelbufferhandle)
since img points to the write-only memory of OGL, you make it point to texdata1
fill mappedbuf with img (essentially texdata1)
upload mappedbuf to GPU (unmapping to pixelbufferhandle)

img points to texdata1, so set it to texdata2
fill mappedbuf with img (essentially texdata2)
upload mappedbuf to GPU (unmapping to pixelbufferhandle)

img points to texdata2, so set it to texdata1
fill mappedbuf with img (essentially texdata1)
upload mappedbuf to GPU (unmapping to pixelbufferhandle)

the code essentially flips the texture between grey and white very fast; after changing glutTimerFunc(5, timerCallback, 0); to glutTimerFunc(120, timerCallback, 0);
I can clearly see this behaviour.

So next I tried to modify the source you've given (on Linux, thanks for the portable code!) so that it makes more sense:
#include <iostream>
#include <stdio.h>
#include <string.h>
#include "GLee.h"
#include "GL/freeglut.h"

unsigned int w = 640;
unsigned int h = 480;
unsigned int s = 4;
char* texData1 = NULL;
char* texData2 = NULL;
char* mappedBuf = NULL;
GLuint pixelbufferHandle;
bool pingpong = true; //so that the first time we use texData1

void timerCallback ( int value );
void initializeTextureBuffer();
void mapAndCopyToBuffer ( char* img_ptr );
void paintGL();
void changeSize ( int w, int h );

GLuint errorCode;
#define checkForGLError() \
	    if ((errorCode = glGetError()) != GL_NO_ERROR) \
			    printf("OpenGL error at %s line %i: %s", __FILE__, __LINE__-1, gluErrorString(errorCode) );

int main ( int argc, char **argv )
{
    texData1 = new char[w * h * s];
    texData2 = new char[w * h * s];
    memset ( texData1, 0, w * h * s );
    memset ( texData2, 255, w * h * s );
    glutInit ( &argc, argv );
    glutInitDisplayMode ( GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA );
    glutInitWindowPosition ( 300, 300 );
    glutInitWindowSize ( w, h );
    glutCreateWindow ( "Window" );
    glutDisplayFunc ( paintGL );
    glutReshapeFunc ( changeSize );
    glDisable ( GL_BLEND );
    glDepthMask ( GL_FALSE );
    glDisable ( GL_CULL_FACE );
    glDisable ( GL_DEPTH_TEST );
    glEnable ( GL_TEXTURE_2D );
    glEnable ( GL_MULTISAMPLE );
    initializeTextureBuffer();
    timerCallback ( 0 );
    glutMainLoop();
    glDeleteBuffers ( 1, &pixelbufferHandle );
    delete[] texData1;
    delete[] texData2;
    return 0;
}

void initializeTextureBuffer()
{
    glGenBuffers ( 1, &pixelbufferHandle );
    glBindBuffer ( GL_PIXEL_UNPACK_BUFFER, pixelbufferHandle );
    glBufferData ( GL_PIXEL_UNPACK_BUFFER, w * h * s, 0, GL_DYNAMIC_DRAW );
    // Specify filtering and edge actions
    // initialize and upload the texture
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0 );
    mappedBuf = ( char* ) glMapBuffer ( GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY );
    if ( !mappedBuf )
    {
	    std::cerr << "Couldn't create write-only mapping buffer.\n Exiting.\n";
	    exit ( 1 );
    }
    memset ( mappedBuf, 0, w * h * s );
    glUnmapBuffer ( GL_PIXEL_UNPACK_BUFFER ); //unmap before mapping again later
}

void mapAndCopyToBuffer ( char* img_ptr )
{
    mappedBuf = (char*) glMapBuffer ( GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY );
    memcpy ( mappedBuf, img_ptr, w * h * s );
    if ( !glUnmapBuffer ( GL_PIXEL_UNPACK_BUFFER ) )
	    std::cerr << "Buffer has already been unmapped.\n Exiting.\n";
    glTexSubImage2D ( GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0 );
}

void paintGL()
{
    if ( pingpong )
	    mapAndCopyToBuffer ( texData1 );
    else
	    mapAndCopyToBuffer ( texData2 );
    pingpong = !pingpong;
    glMatrixMode ( GL_MODELVIEW );
    glBegin ( GL_QUADS );
	    glTexCoord2f ( 0, 0 );
	    glVertex3f ( -1, 1, 0 );
	    glTexCoord2f ( 1, 0 );
	    glVertex3f ( 1, 1, 0 );
	    glTexCoord2f ( 1, 1 );
	    glVertex3f ( 1, -1, 0 );
	    glTexCoord2f ( 0, 1 );
	    glVertex3f ( -1, -1, 0 );
    glEnd();
    glutSwapBuffers();
}

void changeSize ( int w, int h )
{
    glViewport ( 0, 0, w, h );
}

void timerCallback ( int value )
{
    glutPostRedisplay();
    glutTimerFunc ( 1, timerCallback, 0 );
}
// kate: indent-mode cstyle; space-indent on; indent-width 0;

EDIT: corrected the not-mapping back issue :)

I don't know if the memory leak is still present with this version since I don't own an intel graphics card... could you please test this?

#4913038 Posssible memory leak in OpenGL Intel Win7 drivers

Posted by Yours3!f on 14 February 2012 - 11:35 AM

void mapAndCopyToBuffer(char* img1)
{
		glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pixelbufferHandle);
		mappedBuf = (char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
		memcpy(mappedBuf, img1, w * h * s);
		mappedBuf = NULL;

		glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelbufferHandle);

		glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);
}

seems like you don't free your mappedBuf after using it. you need to do this if you want to really free some memory:

void mapAndCopyToBuffer(char* img1)
{
		glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pixelbufferHandle);
		mappedBuf = (char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
		memcpy(mappedBuf, img1, w * h * s);
		delete [] mappedBuf; //this is what you need, see explanation below
		mappedBuf = NULL;

		glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelbufferHandle);

		glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);
}

this indicates you're not completely aware of how to use pointers.

to clear things up:

int* pointer = 0; // creates a pointer to an integer (the pointer itself is 4 or 8 bytes, depending on the platform); its value is 0, i.e. a null pointer
int value = *pointer; // ERROR: dereferencing a null pointer is illegal
pointer = new int; // dynamically (at runtime) allocates sizeof(int) bytes and makes pointer point to them; pointer is now valid
int value_2 = *pointer; // VALID, but the int is uninitialized, so value_2 is indeterminate
delete pointer; // frees the memory pointed to by pointer
int value_3 = *pointer; // UNDEFINED: pointer still holds the old address, and you may even get the old value back since nothing has overwritten it yet, BUT this cannot be guaranteed (so DO NOT do this)
pointer = 0; // now pointer is a null pointer again, but this doesn't free any memory, as opposed to delete
int value_4 = *pointer; // ERROR: dereferencing a null pointer again
// for arrays you need to use this:
pointer = new int[32]; // allocate an array of 32 ints and make pointer point to its first element
// pointer points to the first element of the array, so dereferencing it is valid:
int value_5 = *pointer; // VALID: gives back pointer[0]
delete [] pointer; // frees the array, but pointer still holds the stale address, so set it to 0
pointer = 0; // now pointer is a null pointer again
// to check whether pointer is valid you can use a simple if
if(pointer) // non-null pointers test true; a null pointer tests false
  std::cout << "Pointer is valid.\n";
else
  std::cout << "Pointer is INvalid.\n";

oh and please write back if this solved your problem.

best regards,

#4906116 need help with rotation, transformation and scaling in opengl mathematics

Posted by Yours3!f on 25 January 2012 - 09:02 AM

so if I don't use shaders I can't do translation, rotation or scaling?
what is a shader? what is the use of them?

sorry for the newbie questions.

ummm... shaders aren't responsible for translation, rotation, etc., but for rendering. Math libraries are supposed to handle this stuff; you can find many math libraries that replace the built-in OpenGL maths.
To mention a few: GLM, CML, Eigen, libmymath
A shader is a tiny program that runs on your video card. It's supposed to replace the fixed-pipeline rendering that you used in OpenGL 2.1.
It has many advantages over the old rendering, one being the great flexibility it offers when it comes to rendering.

#4897754 WebGL and Javascript...

Posted by Yours3!f on 29 December 2011 - 05:25 AM

"Framebuffer: COLOR_ATTACHMENT0 exists on specification but COLOR_ATTACHMENT1 is not..."

So, does this mean that MRT (Multiple Render Targets) is not supported?

Also, it looks like it doesn't support depth/stencil textures. Is that true?

Best wishes, FXACE.

as far as I know, MRT is not supported by the specification in either OpenGL ES or WebGL.

according to this:
OpenGL ES 2.0 §4.4.3

(as suggested on the WebGL specification page under the FramebufferRenderbuffer function)

color attachments other than the 0th are NOT supported. So you have to do multipass rendering if you still want to use this functionality.

according to this:
OpenGL ES 2.0 §3.7.1

you can only create (with TexImage2D) RGBA, ALPHA, and LUMINANCE textures, so depth and stencil texture creation isn't supported. You can only create FBOs with these (depth, stencil) attachments (and I think renderbuffers too), but still the attached textures can't have the hardware supported GL_DEPTH_COMPONENT as internal format.

according to this:
OpenGL ES 2.0 §3.7.13

you can use
void deleteTexture(WebGLTexture texture)
for deleting textures.

here's the online specification:

you can see that in a lot of places it refers to the OGL ES specification, but if you're concerned about the differences just look them up in the table of contents.

I tried to be as precise as I could, but if I'm wrong correct me :)

#4885974 FBO texture is empty / black

Posted by Yours3!f on 20 November 2011 - 03:15 PM

why do you unbind your RBO? don't you need it?

why do you set the viewport and perspective matrix settings when rendering to the FBO? Shouldn't you set it before any rendering?

are you sure you want to use mipmapping? Also, you don't generate any mipmaps...
glGenerateMipmap(GL_TEXTURE_2D); perhaps?

can't think of any other stupid mistake I'd make :)

#4861191 How long until c++ disapear from game development

Posted by Yours3!f on 13 September 2011 - 12:12 PM

I hope it never-ever does, because it's the only language I'm good at :)

To take it seriously: I think the fact that someone can't handle C++ (and uses C# etc.) doesn't mean C++ should be avoided. It only means that C# is good for learning programming and C++ is for hardcore, performance-aware programmers.

#4853564 Making an OpenGL wrapper

Posted by Yours3!f on 25 August 2011 - 02:52 AM

(Also, 'C' prefixes on classes make my brain vomit... yay for unreadable mess \o/)

totally agreed, it is just unreadable... that little snippet reminds me of win32 programming with LPVOIDs and other stuff...

#4840075 [discussion]DirectX 9 or 10 or 11?

Posted by Yours3!f on 25 July 2011 - 11:24 AM

(no flame intended)

why don't you use opengl?
It'd run on Windows XP, Vista & 7 (and on other platforms as well); you'd get full DX10-level features with that graphics card until you get a DX11-level card, and the transition would be seamless. You wouldn't have to worry about whether you're on XP, Vista or 7; you'd get the same visuals and same features... Plus, as I recently experienced, the two APIs are like twins (or at least DX9 and OpenGL 2.1): the same features, same workflow, etc. Although I felt like DX was harder to learn due to the lack of good tutorials.