Member Since 22 Aug 2011

#5165269 VBO and glIsBuffer

Posted by Wh0p on 07 July 2014 - 09:05 AM

Before the program enters main(), global variables (like 'quad' in the example) are constructed in static storage, not on the stack. Their constructors run before main() is entered, and their destructors run after main() returns, which is usually after the GL context is gone.

VBOQuad quad;
int main () { ... }

I can't say much about the error itself, but have you checked the stack trace of your application when it crashed? It might give a clue which functions were called that led to the destruction of this object.

#5165212 [GLSL] send array receive struct

Posted by Wh0p on 07 July 2014 - 03:10 AM

I had a similar problem. Make sure glGetUniformLocation does not return -1 (that's what I got when using an array of structs).

Generally I would suggest keeping away from the array-of-structs idea and instead using a struct of arrays, that is, a struct like

struct Lights
{
  vec3 positions[MAX_LIGHTS];
  vec3 intensity[MAX_LIGHTS];
};

In many cases this will be more cache efficient (think about updating on the client).


And then there is another approach: use a samplerBuffer for each component in your shader and texelFetch your lights from it. This would let you have a variable number of light sources.
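The samplerBuffer idea could look roughly like this in the shader (a sketch; the uniform and function names are made up):

```glsl
#version 330 core
// One buffer texture per light component, indexed with texelFetch,
// so the light count is only limited by the buffer size.
uniform samplerBuffer LightPositions;
uniform samplerBuffer LightIntensities;
uniform int LightCount;

vec3 accumulateLights (vec3 fragPos, vec3 normal)
{
    vec3 result = vec3 (0.0);
    for (int i = 0; i < LightCount; ++i)
    {
        vec3 lightPos  = texelFetch (LightPositions, i).xyz;
        vec3 intensity = texelFetch (LightIntensities, i).xyz;
        vec3 l = normalize (lightPos - fragPos);
        result += intensity * max (dot (normal, l), 0.0);
    }
    return result;
}
```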


Hope this helps.

#5163037 SOLVED: Render a vector to screen in opengl

Posted by Wh0p on 26 June 2014 - 10:49 AM

Yes, but make sure you call glBindFramebuffer(GL_FRAMEBUFFER, 0) before glGetTexImage(), or else it will fail. Mapping/binding textures that are attached to a currently bound FBO will fail.

#5163029 SOLVED: Render a vector to screen in opengl

Posted by Wh0p on 26 June 2014 - 10:18 AM

If you are creating a framebuffer object you will most likely attach a texture (previously generated with glGenTextures()) as color render target using the glFramebufferTexture() function.


I don't know if I understood you right, but I guess you want to do something like this:

1. Bind the FBO for offscreen rendering

2. Render anything you like

3. Unbind the FBO

4. Retrieve the texture data of your rendered image into main memory (you can do that using glGetTexImage() on the texture you attached as color target to your FBO)

5. Modify the texture data in main memory and update the texture on your graphics card (with glTexSubImage2D())

6. Render the modified texture on a fullscreen quad to your window backbuffer



This will work, although I highly recommend not doing it this way: do whatever modification you want to the texture on the GPU instead.

If it is just a per-pixel operation you want to do, you can follow the same steps as above but omit steps 4 and 5 and do all modifications in the fragment shader stage.
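Folding the pixel operation into step 6 might look like this (a sketch; the uniform and varying names are assumptions):

```glsl
#version 330 core
// Fullscreen-quad fragment shader: sample the FBO's color texture and
// apply the per-pixel modification while drawing to the backbuffer.
uniform sampler2D RenderedImage; // the texture attached as color target

in vec2 uv;
out vec4 fragColor;

void main ()
{
    vec3 color = texture (RenderedImage, uv).rgb;
    fragColor = vec4 (vec3 (1.0) - color, 1.0); // example modification: invert
}
```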


#5163019 SOLVED: Compute shader atomic shared variable problem

Posted by Wh0p on 26 June 2014 - 09:29 AM

Check the reference on atomicMin () http://www.opengl.org/sdk/docs/man/


It states: "performs an atomic comparison of data to the contents of mem, writes the minimum value into mem and returns the original contents of mem from before the comparison occurred".


The second part is the interesting one in your case. You are passing minDepth as the parameter 'mem', so AFTER the function returns, minDepth already holds the minimum. But immediately after that you assign minDepth the value it had BEFORE the call. I guess this is what breaks the synchronized behaviour.

Why are you assigning the value anyway?


My guess is: leave the assignment out and it will work fine. (At least my implementation does, and it looks pretty much the same as the parts you showed here.)
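The correct pattern would look roughly like this (a sketch with assumed names and workgroup size, not the poster's exact code):

```glsl
#version 430
layout (local_size_x = 16, local_size_y = 16) in;

shared uint minDepth;

void main ()
{
    if (gl_LocalInvocationIndex == 0u)
        minDepth = 0xFFFFFFFFu;
    barrier ();

    uint depth = gl_GlobalInvocationID.x; // stand-in for the real quantized depth
    // Correct: let atomicMin update the shared value and ignore its return
    // value (the OLD contents) -- no "minDepth = atomicMin (...)".
    atomicMin (minDepth, depth);
    barrier ();

    // from here on, minDepth holds the workgroup-wide minimum
}
```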


Hope I could help you there.


Unfortunately I'm terrible at reading posts... at least I did some explaining of the error :)

#5162778 Shared vertices or not for primitives

Posted by Wh0p on 25 June 2014 - 09:23 AM

by working normally I mean making a beautiful sphere

When applying a rectangular texture to a sphere you will always suffer a certain deformation of the image you are texturing.


I don't know the exact representation of your sphere, but I will just guess that it consists of "stacks and slices" (like starting with one vertex at the bottom and then creating concentric circles around an axis with first growing and then shrinking radii).

So let's just look at the body of the sphere (excluding the start and end points). The only thing you have to take care of here are the vertices located at the seam (the points where the UVs (0, t) and (1, t) meet). Since they have the same position and normal but different UVs, you have to store two vertices there (like L. Spiro described). Note that the image will be interpolated to fit the sphere (just like the opposite of what happens when you draw a world map onto a rectangular area, where the image is stretched).

For the top and bottom vertices you might want to store them multiple times so that each face can address its own vertex (you would then have slice-count vertices with the same position and normal but different UVs for the top or bottom vertex).
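The stacks-and-slices layout with a duplicated seam column can be sketched like this (a minimal generator of my own, with made-up names, just to illustrate the vertex counts):

```cpp
#include <cmath>
#include <vector>

// Minimal stacks/slices sphere generator illustrating the seam handling:
// each ring stores slices + 1 vertices, so the first and last vertex of a
// ring share position (and normal) but carry the UVs u = 0 and u = 1.
struct Vertex { float px, py, pz, u, v; };

std::vector<Vertex> makeSphereVertices (int stacks, int slices)
{
    const float PI = 3.14159265358979f;
    std::vector<Vertex> verts;
    for (int i = 0; i <= stacks; ++i)
    {
        float t = float (i) / float (stacks); // v runs from pole to pole
        float phi = t * PI;
        for (int j = 0; j <= slices; ++j)     // note <=: the seam column is duplicated
        {
            float s = float (j) / float (slices); // u wraps from 0 to 1
            float theta = s * 2.0f * PI;
            verts.push_back ({ std::sin (phi) * std::cos (theta),
                               std::cos (phi),
                               std::sin (phi) * std::sin (theta),
                               s, t });
        }
    }
    return verts;
}
```

For stacks = 4 and slices = 8 this yields 5 * 9 = 45 vertices; in each ring the vertices j = 0 and j = 8 coincide in position but differ in u, which is exactly the duplication described above, and the pole rings give each face its own pole vertex.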


If it's for learning, just experiment and you'll find your way.

Maybe try texturing an image like this one: http://www.mediavr.com/belmorepark1left.jpg There the result would look a little nicer, because the image itself is stretched beforehand.


Hopefully I am not confusing you further.

#5162583 Shared vertices or not for primitives

Posted by Wh0p on 24 June 2014 - 11:37 AM

I don't see any problem with using texture coordinates the way you described, although it greatly depends on how you do the mapping of your texture coordinates.

When using a sphere map you could assign texture coordinates to each vertex just as you described, no need for duplicating any vertices.

If you go ahead and try to project a rectangular texture (or other, fancier things) onto the sphere, you have to duplicate the "top" and "bottom" of the sphere, where you have that triangle-fan-like structure.

Hope that answers your question.

#5160904 glsl syntax highlight and auto completion

Posted by Wh0p on 16 June 2014 - 01:22 PM

Yes, that's right, the never-ending story...


I've gone through a lot by now, tested different editors, tools and plugins, and there was always something not the way I wanted it. I think most people here know what I mean.

So recently I began searching for another solution (again) and came across a pretty nice one in a forum post from ages ago.


The main idea is to make your C++ compiler treat .glsl files as header files. Not overwhelmingly new so far. But then you can go ahead and write another include file that defines all the GLSL names and symbols, and you are basically done, for the compiler does the rest.


I took a liking to it and spent a whole night crawling the GLSL reference pages, copy-pasting function definitions and so on. I'll gladly share the result with you: https://github.com/Wh0p/Wh0psGarbageDump
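A stripped-down sketch of what such a symbol header might look like (this is my own illustration, not the actual contents of the linked repository; a real one would also no-op qualifiers like `in`/`out` and cover far more builtins):

```cpp
// glsl_symbols.h -- hypothetical excerpt: define just enough GLSL types,
// qualifiers and builtins that a C++ compiler can parse an included .glsl
// file and provide highlighting/completion for it.
#pragma once
#include <cmath>

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };
struct vec4 { float x, y, z, w; };
struct mat4 { vec4 cols[4]; };

// storage qualifiers become no-ops for the C++ parser
#define uniform
#define layout(...)

// a few builtins with simplified signatures (given trivial bodies here so
// the header stands alone; the parser only needs the declarations)
inline float dot (vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline vec3 normalize (vec3 v)
{
    float len = std::sqrt (dot (v, v));
    return { v.x / len, v.y / len, v.z / len };
}
```

With this included first, a .glsl file pulled in as a header parses well enough for IntelliSense to offer completion on the defined names.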

You are free to test and improve this yourself! (However, I am totally new to all this git stuff and might need some time to figure this out)


Just have a look at the example on the bottom, how it looks in Visual Studio...


Still, there are some drawbacks: you have to write a preprocessor that resolves or strips the "#include" directives from the .glsl file.

And the syntax for uniform buffers is somewhat broken.


Sooo, tell me if you like/hate/ignore it, or even have a better solution to this. Personally I think I have found a solution I can be happy with (for the time being).









#5088047 SSAO artefacts

Posted by Wh0p on 22 August 2013 - 04:34 AM


I followed this article about SSAO: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753

But I encountered some difficulties, as you can see below:


1) Why do the edges of the triangles reappear in the occlusion buffer and therefore become visible in the final image? (I thought that wouldn't happen if I calculate the normalized normals in the pixel shader.)

2) If I look down on a flat surface, that surface appears to be occluded (grows dark); should it be that way?

3) Artifacts like the black curvy lines and those rectangles, nicely visible in the 3rd picture.


Here are the images:





As far as I can tell the normals and positions seem correct, but anyway:


This is how I generate my normal and position buffers:


VS_OUTPUT VSMAIN (in VS_INPUT input)
{
	VS_OUTPUT Out;

	// viewspace position
	Out.viewpos = mul (float4 (input.position, 1.0f), WorldMatrix);
	Out.viewpos = mul (Out.viewpos, ViewMatrix);

	// projected position
	Out.position = mul (Out.viewpos, ProjMatrix);

	// viewspace normals
	Out.normal = mul (float4 (input.normal, 0.0f), WorldMatrix);
	Out.normal = mul (Out.normal, ViewMatrix);

	return Out;
}

struct PS_OUTPUT
{
	float4 normal : SV_Target0;
	float4 viewpos : SV_Target1;
};

PS_OUTPUT PSMAIN (in PS_INPUT In)
{
	PS_OUTPUT Out;
	Out.normal = float4 (normalize (In.normal).xyz * 0.5f + 0.5f, 1.0f);
	Out.viewpos = In.viewpos;

	return Out;
}

This is the SSAO algorithm (pretty much the one from the article):

float3 getPosition (in float2 uv)
{
	return PositionBuffer.Sample (LinearSampler, uv).xyz;
}

float3 getNormal (in float2 uv)
{
	return DepthNormalBuffer.Sample (LinearSampler, uv).xyz;
}

float2 getRandom (in float2 uv)
{
	return normalize (RandomTexture.Sample (LinearSampler, uv).xy * 0.5f + 0.5f);
}

float doAmbientOcclusion (in float2 tcoord, in float2 occluder, in float3 p, in float3 cnorm)
{
	// vector v from the occludee to the occluder
	float3 diff = getPosition (tcoord + occluder) - p;
	const float3 v = normalize (diff);

	// distance between occluder and occludee
	const float d = length (diff) * Scale;

	return max (0.0, dot (cnorm, v) - Bias) * (1.0 / (1.0 + d) * Intensity);
}

float PSMAIN (in PS_INPUT In) : SV_Target
{
	const float2 vec[4] = {
		float2 (1, 0), float2 (-1, 0),
		float2 (0, 1), float2 (0, -1)
	};

	float3 p = getPosition (In.tex);
	float3 n = getNormal (In.tex);
	float2 rand = getRandom (In.tex);

	// ambient occlusion factor
	float ao = 0.0f;
	float rad = Radius / p.z;

	int iterations = 4;
	for (int j = 0; j < iterations; ++j)
	{
		float2 coord1 = reflect (vec[j], rand) * rad;
		float2 coord2 = float2 (coord1.x * 0.707 - coord1.y * 0.707,
		                        coord1.x * 0.707 + coord1.y * 0.707);

		ao += doAmbientOcclusion (In.tex, coord1 * 0.25, p, n);
		ao += doAmbientOcclusion (In.tex, coord2 * 0.5, p, n);
		ao += doAmbientOcclusion (In.tex, coord1 * 0.75, p, n);
		ao += doAmbientOcclusion (In.tex, coord2, p, n);
	}
	ao /= (float)iterations * 4.0;

	return 1 - ao;
}
Thank you if you are still reading :)