
Member Since 13 Dec 2001
Offline Last Active Feb 10 2015 09:02 PM

Topics I've Started

SDL_SetRelativeMouseMode problem

16 January 2015 - 05:25 PM



I've been googling around for the better part of an hour and still have the same problem.


I am trying to hide the mouse on right-click and get continuous mouse movement data (to orbit a third-person camera continuously). SDL_SetRelativeMouseMode seems like exactly the right tool for the job. The problem is that it still acts like the mouse is constrained by the window: when it "gets to the edge" of the window, my x and y deltas go to zero. IIRC that's not expected behavior. I've also tried SDL_SetWindowGrab( SDL_TRUE ), which should have the additional effect of continuing to give me x/y delta information even while the mouse is pinned to the window edge, but that's not happening either.
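For context, the end goal is just folding each SDL_MOUSEMOTION event's xrel/yrel into orbit angles for the camera. A minimal sketch of that accumulation step (names and the sensitivity value are made up; this is the math, not real SDL code):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical helper: fold one relative-mode motion event into orbit angles.
struct OrbitAngles { float yawDeg; float pitchDeg; };

inline OrbitAngles applyRelativeMotion(OrbitAngles a, int xrel, int yrel,
                                       float sensitivity = 0.25f) {
    // xrel/yrel are the per-event deltas SDL reports in relative mode
    a.yawDeg += xrel * sensitivity;
    a.yawDeg = std::fmod(a.yawDeg, 360.0f);   // keep yaw bounded...
    if (a.yawDeg < 0.0f) a.yawDeg += 360.0f;  // ...and normalised to [0, 360)
    // clamp pitch so the camera can't flip over the top of the target
    a.pitchDeg = std::clamp(a.pitchDeg + yrel * sensitivity, -89.0f, 89.0f);
    return a;
}
```

So all I really need from SDL is a reliable stream of xrel/yrel values that doesn't die at the window edge.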


I've also tried SDL_WarpMouseInWindow, but that has odd behavior: the call generates another SDL_MOUSEMOTION event that undoes the relative mouse motion that just happened. If I try to fix that by wrapping the call with SDL_EventState(SDL_MOUSEMOTION, SDL_IGNORE); and re-enabling it afterwards, the whole mouse event system falls apart and gives me really choppy data...
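The warp-to-center fallback I was attempting boils down to computing the delta myself and discarding the synthetic event the warp generates. A sketch of just that filtering logic (my own names; note that filtering by position is a heuristic, since a genuine move landing exactly on the center would also be dropped):

```cpp
// Delta computed against a fixed warp center, flagging the warp's own
// SDL_MOUSEMOTION event (which always lands exactly on the center).
struct Delta { int dx; int dy; bool synthetic; };

inline Delta deltaFromWarp(int x, int y, int centerX, int centerY) {
    if (x == centerX && y == centerY)
        return {0, 0, true};   // the warp's own event: ignore it
    return {x - centerX, y - centerY, false};
}
```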


Maybe a problem with the mouse driver, my dual-monitor setup, or SDL_WINDOW_BORDERLESS?


If anyone has any ideas, I'd appreciate it; this is a pretty basic feature and it's very strange that I can't get it to work. It's the basis of getting an FPS camera working correctly on PC, so it should be simple.


[EDIT: oh, if it matters, I'm using SDL2-2.0.1]

Basic OpenGL State Machine Questions

13 January 2015 - 05:24 PM



I can't seem to find this information in the OpenGL online documentation and it's been bugging me for a while. The gist of the question is: how and when is state saved?


For instance let's consider this fairly standard VBO initialization:

GLuint vaoID, vboID;

glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
glBindVertexArray(vaoID);     // Bind it so the state set below is recorded into it

glGenBuffers(1, &vboID); // Generate our Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat), &vertices.front(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); // Set up our vertex attribute pointer
glEnableVertexAttribArray(0); // Enable attribute 0

and the Render hook (assume indexCount is initialized properly when the mesh is loaded and there is an associated index buffer object, yada yada)

glBindVertexArray( renderInfo.vbos.vaoID );
glDrawElements(GL_TRIANGLES, renderInfo.indexCount, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray( 0 ); 

So this creates a Vertex Array Object with an associated Vertex Buffer Object that holds my vertex data. This works and I'm happy.


My question is, what if I want to change the data that is passed in the Vertex Buffer at runtime (say maybe I have several lists of vertices that are pre-transformed to specific animation keyframe positions or something; I'm not doing that but it's an example that's easily digestible). Can I do that? How do I do that? Is the state set in glVertexAttribPointer in the above invocation somehow "saved" to the vertex array object?
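My current working model (which is exactly what I'm hoping someone can confirm or shoot down) is that glVertexAttribPointer snapshots the buffer currently bound to GL_ARRAY_BUFFER, together with the format arguments, into the VAO that is bound at call time, and that the GL_ARRAY_BUFFER binding itself is context state rather than VAO state. A toy model of that understanding, emphatically NOT real GL:

```cpp
#include <cstdint>

// Toy state machine encoding my understanding of VAO capture semantics.
struct AttribRecord { std::uint32_t buffer = 0; int size = 0; };
struct ToyVAO { AttribRecord attrib0; };

struct ToyContext {
    std::uint32_t arrayBufferBinding = 0; // context state, not VAO state
    ToyVAO* boundVAO = nullptr;

    void bindBuffer(std::uint32_t id) { arrayBufferBinding = id; }
    void vertexAttribPointer(int size) {
        // the snapshot: whatever buffer is bound *right now* gets baked
        // into the bound VAO; later bindBuffer calls don't change it
        boundVAO->attrib0 = AttribRecord{arrayBufferBinding, size};
    }
};
```

If that model is right, then per-attribute pointer state survives in the VAO across buffer re-binds, which is what my question below hinges on.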


Would the "re-binding" look something like the following (assuming that vboID_2 is created just like vboID but with different data)?


//does this work as re-binding??
glBindBuffer(GL_ARRAY_BUFFER, vboID_2);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

glDrawElements(GL_TRIANGLES, renderInfo.indexCount, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray( 0 );

Hopefully this question is clear, if long-winded. Let me know if I need to clarify anything...



OpenGL 4 Shadow Map Problem

07 January 2015 - 08:52 PM



Back again for what is likely just a trivial error. I am doing some basic & standard Shadow Map rendering.

  1. geometry only pass writing to a FBO depth_buffer
  2. second pass comparing fragment depth in light space against the depth_buffer

Problem: I'm nearly positive that I am either (a) not actually writing to depth_buffer or (b) failing to bind it correctly for reading in the second-pass fragment shader.


Shadow Map Initialization

glGenFramebuffers(1, &shadowmap_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadowmap_framebuffer);

glGenTextures(1, &shadowmap_texture);
glBindTexture(GL_TEXTURE_2D, shadowmap_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, shadowmap_size, shadowmap_size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap_texture, 0);

// No color output in the bound framebuffer, only depth.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// check that our framebuffer is ok
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
	fprintf(stderr, "shadow map framebuffer incomplete\n");

glBindFramebuffer(GL_FRAMEBUFFER, 0);

First-Pass shadow render

I've completely eliminated everything except the depth clearing just to sanity check things


c++ side code:

	glBindFramebuffer(GL_FRAMEBUFFER, shadowmap_framebuffer);
	glClear(GL_DEPTH_BUFFER_BIT); // the only thing left in this pass: clear the shadow map depth
	glBindFramebuffer(GL_FRAMEBUFFER, 0);

Second-Pass depth read


relevant c++ side code

	glActiveTexture( GL_TEXTURE3 );
	glBindTexture( GL_TEXTURE_BUFFER, shadowmap_texture );

fragment shader

#version 440

layout(binding=3) uniform sampler2DShadow shadowMap;

in vec4 ex_ShadowCoord;

out vec4 outColor;

void main()
{
	//let's just read something that should be valid
	//and also which should definitely pass. If the shadowMap is being cleared to 1.0
	//then a lookup with a z == 0.0 should always return 1.0 from texture(...)
	vec3 lookupVec = vec3(0.5, 0.5, 0.0);
	float depth = texture( shadowMap, lookupVec );
	float visibility = mix( 0.5, 1.0, depth );

	// VALIDATE shadowMap
	vec3 color = vec3(0.0, 0.0, 0.0);
	if ( depth == 0.0 )
	{
		color.r = 1.0;
	}
	else if ( depth == 1.0 )
	{
		color.b = 1.0;
		color.r = 1.0;
		color.g = 1.0;
	}

	outColor = vec4( color, 1.0 );
}

Everything is drawing completely RED, which means the texture(...) call is failing, at least as I understand how it should work. That suggests to me that the texture is most likely not bound correctly (based on prior errors I've made) or, less likely, never getting written to.
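For reference, my mental model of what texture() returns for a sampler2DShadow: not the stored depth, but the result of comparing the lookup vector's z (the reference value) against it. This assumes a compare func of GL_LEQUAL, and as I understand it the comparison only happens at all if GL_TEXTURE_COMPARE_MODE is set to GL_COMPARE_REF_TO_TEXTURE on the texture, which is one thing I still need to double-check in my init code. The comparison itself, modelled on the CPU:

```cpp
// Shadow-sampler comparison with GL_LEQUAL: returns 1.0 when the reference
// depth passes (ref <= stored), 0.0 when it fails.
inline float shadowSampleLEqual(float storedDepth, float refDepth) {
    return refDepth <= storedDepth ? 1.0f : 0.0f;
}
```

Which is why a reference z of 0.0 against a buffer cleared to 1.0 should always pass and give me 1.0 back.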


I've otherwise validated that the rest of the pipeline is correct (ex_ShadowCoord is coming in correctly and I have full coverage of the scene being rendered).

Simple imageBuffer Problem

16 December 2014 - 02:43 PM



I think this should be a fairly straightforward thing to figure out, but I'm having problems getting data into my fragment shader from a texture buffer object.


Problem Summary: When I eventually get to the shader and call imageLoad it is always returning 0.0.


Slightly more detail: the texture buffer is the size of the FBO and will be used for some I/O work using imageAtomicMin to manage some weird-o transparency stuff I'm prototyping for a specific game feature.


Texture and buffer creation:

GLuint a_texID;
GLuint a_bufID;

    glGenTextures( 1, &a_texID );

    glGenBuffers( 1, &a_bufID );
    glBindBuffer( GL_TEXTURE_BUFFER, a_bufID );
    glBufferData( GL_TEXTURE_BUFFER, screenInfo.Width*screenInfo.Height*sizeof(float), NULL, GL_DYNAMIC_DRAW );

Pre-Render work (called every frame). If it matters, this logic gets executed before the shader program is bound:

    //reset the buffer data since the fragment shader will eventually write to it
    const float initializationValue = 1.0f;
    glBindBuffer( GL_TEXTURE_BUFFER, a_bufID );
    glClearBufferData( GL_TEXTURE_BUFFER, GL_R32F, GL_RED, GL_FLOAT, &initializationValue );

    //set up the texture buffer in the right place so the fragment shader can access it
    glActiveTexture( GL_TEXTURE3 );
    glBindTexture( GL_TEXTURE_BUFFER, a_texID );
    glTexBuffer( GL_TEXTURE_BUFFER, GL_R32F, a_bufID );

Fragment Shader

uniform int screenWidth;
layout(binding=3, r32f) uniform coherent imageBuffer myBuffer;

    int index = int(gl_FragCoord.x) + int(gl_FragCoord.y) * screenWidth;
    vec4 data = imageLoad( myBuffer, index );
    //PROBLEM: data.r is always == 0.0
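At least the index math itself checks out in isolation; pulled out into plain C++ (mirroring the shader's `int(gl_FragCoord.x) + int(gl_FragCoord.y) * screenWidth`, row-major), it's just:

```cpp
// Row-major linear indexing over a width*height buffer, and its inverse.
struct Coord { int x; int y; };

inline int linearIndex(int x, int y, int width) { return x + y * width; }

inline Coord fromIndex(int index, int width) {
    return {index % width, index / width};
}
```

So I don't think the addressing is the problem; it has to be something about how the buffer or the image binding is set up.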

One thing I tried:

I skipped the glClearBufferData part and instead passed a data array with every element set to 1.0f. That didn't work either, which suggests to me that I'm doing something wrong with the binding...


Bind a VBO as a TexImage2D?

25 June 2014 - 09:06 PM



Random question to optimize a bizarro pipeline I have going.


I'm using a Geometry Shader + Transform Feedback right now to generate a data set. I then want to pass that data set through another shader, but have it addressable as a sampler2D, because I need access to multiple data points in the next shader (evaluating neighbors).


I know that I can use glGetBufferSubData to read back the output VBOs and then upload that data to a texture with the standard glTexImage2D calls. But I'm wondering if there is a shortcut whereby I don't need to pull the data off of the graphics card, or copy it from one location to another using a COPY_WRITE_BUFFER or whatever. Can I just rebind the VBO somehow and have it treated as a texture addressable by a sampler2D?
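One option I've been mulling: keep the transform-feedback output as a buffer object and attach it to a buffer texture with glTexBuffer (no copy at all, as far as I understand it). The cost is that sampling becomes texelFetch on a samplerBuffer rather than a sampler2D, i.e. neighbor access turns into manual index arithmetic over the linear data, as in this CPU-side model:

```cpp
#include <cstddef>
#include <vector>

// 2D addressing over a linear buffer, row-major: what texelFetch on a
// samplerBuffer would amount to if the TF output stays a buffer texture.
inline float fetch2D(const std::vector<float>& buf, int x, int y, int width) {
    return buf[static_cast<std::size_t>(y) * width + x];
}
```

If there's a way to get real sampler2D addressing (with filtering etc.) without a copy, that would obviously be even better.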


Please feel free to ask questions if that's not clear. It's a mouthful...