About Palidine

  1.   It's returning zero
  2. OK, well, I solved it, but it's janky. There's got to be a better "built in" way.

     First I hide the mouse and call SDL_SetWindowGrab so that the mouse will not move outside of the window:

     ```cpp
     void LockMouse( bool lock )
     {
         SDL_ShowCursor( (lock) ? SDL_DISABLE : SDL_ENABLE );
         SDL_bool b = (lock) ? SDL_TRUE : SDL_FALSE;
         SDL_SetWindowGrab( mainWindow, b );
         if ( lock )
         {
             mouseLock[0] = mouseInfo.xCur;
             mouseLock[1] = mouseInfo.yCur;
         }
     }
     ```

     Then, whenever I receive a mouse movement event, I do the following to ignore the movement event caused by calling SDL_WarpMouseInWindow:

     ```cpp
     int32_t deltaW = mouseInfo.xCur;
     deltaW -= mouseInfo.xPrev;
     deltaW -= mouseDeltaAccum[0];

     int32_t deltaH = mouseInfo.yCur;
     deltaH -= mouseInfo.yPrev;
     deltaH -= mouseDeltaAccum[1];

     HandleMouseMovement( deltaW, deltaH );

     // Warp back to the position where the mouse is "locked",
     // and add that movement into the accumulator so it can be ignored later.
     SDL_WarpMouseInWindow( mainWindow, mouseLock[0], mouseLock[1] );
     mouseDeltaAccum[0] = mouseLock[0] - mouseInfo.xCur;
     mouseDeltaAccum[1] = mouseLock[1] - mouseInfo.yCur;
     ```
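The warp-and-ignore bookkeeping above can be sketched as a standalone toy model (hypothetical names like `MouseState` and `OnMouseMotion`, not the real SDL code): the accumulator stores the motion our own warp injected, so the warp's SDL_MOUSEMOTION event cancels out to a zero delta.

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the post's accumulator trick. In real code, the warp call is
// SDL_WarpMouseInWindow(mainWindow, lockX, lockY); here we just record its effect.
struct MouseState {
    int32_t xCur, yCur, xPrev, yPrev;
    int32_t lockX, lockY;
    int32_t accumX, accumY;
};

void OnMouseMotion(MouseState& m, int32_t newX, int32_t newY,
                   int32_t& dx, int32_t& dy)
{
    m.xPrev = m.xCur;  m.yPrev = m.yCur;
    m.xCur  = newX;    m.yCur  = newY;

    // Real delta = raw movement minus the motion our own warp injected last time.
    dx = m.xCur - m.xPrev - m.accumX;
    dy = m.yCur - m.yPrev - m.accumY;

    // "Warp" back to the lock point; record the synthetic motion so the
    // warp's own motion event is subtracted out on the next call.
    m.accumX = m.lockX - m.xCur;
    m.accumY = m.lockY - m.yCur;
}
```

A real movement of (+5, +3) from the lock point yields a (5, 3) delta, and the follow-up warp event back to the lock point yields (0, 0), which is exactly the "ignore the warp" behavior described above.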
  3. Sorry for the very late reply...

     It is expected that the mouse is confined to the window, but SDL_SetWindowGrab is supposed to keep generating mouse movement deltas even when the cursor is pinned along the edge of the window. So the user wouldn't see the mouse moving to the right outside of the window, but your code would still get a deltaX in the rightward direction.

     Anyway, this is still bogging me down, so if you or anyone else has additional ideas, let me know!

     I'll post if/when I eventually do find a solution, but as I said, it's weird that I'm having problems here. I must be missing something kind of obvious...
  4. Hey,

     I've been googling around for the better part of an hour and still have the same problem.

     I am trying to hide the mouse on right-click and get continuous mouse movement data (to orbit a third-person camera continuously). SDL_SetRelativeMouseMode seems like definitely the right tool for the job. The problem is that it's still acting like the mouse is constrained by the window: when it "gets to the edge" of the window, my x and y deltas go to zero. IIRC that's not expected behavior. I've also tried using SDL_SetWindowGrab( SDL_TRUE ), which should have the additional feature of continuing to give me x/y delta information even if the mouse is pinned to the window edge, but that's not happening either.

     I've also tried SDL_WarpMouseInWindow, but that has odd behavior: the call generates another SDL_MOUSEMOTION event that undoes the relative mouse motion that just happened. If I try to fix that by wrapping the call with SDL_EventState(SDL_MOUSEMOTION, SDL_IGNORE) and then re-enabling it, that just makes the whole mouse event system fall over and give me really choppy data...

     Maybe it's a problem with the mouse driver, a dual-monitor setup, or SDL_WINDOW_BORDERLESS?

     If anyone has any ideas: this is a pretty basic feature and it's suuuuper weird that I can't get it to work. I mean, it's the basis of getting an FPS working correctly on PC, so it should be pretty simple to get working.

     [EDIT: Oh, if it matters, I'm using SDL2-2.0.1]
  5. So, then to clarify, your answer is that what I was wanting to do is just not possible specifically the way I was thinking of it?

     i.e., say I have a Vertex Array with two possible sets of vertices stored, respectively, in vboID and vboID_2. Is it not possible to swap between those vertex buffers as inputs to my vertex shader without re-calling glBufferData?

     What I was envisioning is something like:
       - Hey, GPU, here is one set of vertices. Save them in Vertex Buffer 'vboID'.
       - Hey, GPU, here is another set of vertices. Save them in Vertex Buffer 'vboID_2'.

     Then at runtime, depending on conditional logic, just swap which buffer is feeding into my shader here:

     ```glsl
     layout(location=0) in vec3 in_Position;
     ```

     ??
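As a CPU-side toy model (hypothetical types, not real GL) of what the swap amounts to: the attribute slot just records *which* buffer object feeds it, so switching data sources is re-pointing a binding, not re-uploading vertex data.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model: a "buffer object" is stored vertex data, and an "attribute
// binding" records which buffer id currently feeds shader input location 0.
struct ToyGL {
    std::vector<std::vector<float>> buffers;  // index = hypothetical buffer id
    std::size_t attrib0_buffer = 0;           // which buffer feeds location 0

    // Like glBufferData at load time: upload once, get an id back.
    std::size_t genBuffer(std::vector<float> data) {
        buffers.push_back(std::move(data));
        return buffers.size() - 1;
    }
    // Like re-binding at runtime: no data is copied or re-uploaded.
    void pointAttrib0At(std::size_t id) { attrib0_buffer = id; }

    // What the "vertex shader" would see on the next draw.
    const std::vector<float>& draw() const { return buffers[attrib0_buffer]; }
};
```

In real OpenGL the runtime swap is exactly the sequence sketched in the thread: bind the VAO, bind the other VBO to GL_ARRAY_BUFFER, and call glVertexAttribPointer again (or keep one pre-configured VAO per buffer and just glBindVertexArray the one you want).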
  6. Hey,

     I can't seem to find this information in the OpenGL online documentation and it's been bugging me for a while. The gist of the question is: how/when is state saved?

     For instance, let's consider this fairly standard VBO initialization:

     ```cpp
     GLuint vaoID, vboID;

     glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
     glBindVertexArray(vaoID);

     glGenBuffers(1, &vboID); // Generate our Vertex Buffer Object
     glBindBuffer(GL_ARRAY_BUFFER, vboID);
     glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat), &vertices.front(), GL_STATIC_DRAW);

     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); // Set up our vertex attribute pointer
     glEnableVertexAttribArray(0);
     ```

     and the render hook (assume indexCount is initialized properly when the mesh is loaded and there is an associated index buffer object, yada yada):

     ```cpp
     glBindVertexArray( renderInfo.vbos.vaoID );
     glDrawElements(GL_TRIANGLES, renderInfo.indexCount, GL_UNSIGNED_INT, (GLvoid*)0);
     glBindVertexArray( 0 );
     ```

     So this creates a Vertex Array Object with an associated Vertex Buffer Object that holds my vertex data. This works and I'm happy.

     My question is: what if I want to change the data that is passed in the Vertex Buffer at runtime (say maybe I have several lists of vertices that are pre-transformed to specific animation keyframe positions or something; I'm not doing that, but it's an example that's easily digestible)? Can I do that? How do I do that? Is the state set in glVertexAttribPointer in the above invocation somehow "saved" to the Vertex Array Object?

     Would the "re-binding" look something like the following (assuming that vboID_2 is created just like vboID but with different data)?

     ```cpp
     glBindVertexArray(vaoID); // does this work as re-binding??
     glBindBuffer(GL_ARRAY_BUFFER, vboID_2);
     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glEnableVertexAttribArray(0);
     glDrawElements(GL_TRIANGLES, renderInfo.indexCount, GL_UNSIGNED_INT, (GLvoid*)0);
     glBindVertexArray( 0 );
     ```

     Hopefully this question is clear, if long-winded. Let me know if I need to clarify anything.

     Thanks
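On the "how/when is state saved" question, the key rule is that a VAO snapshots the GL_ARRAY_BUFFER binding at the moment glVertexAttribPointer is called; binding a different buffer afterwards does not retroactively change the VAO. A tiny toy model (hypothetical names, not real GL) of that snapshot behavior:

```cpp
#include <cassert>

// Toy model of VAO state capture: vertexAttribPointer() snapshots whatever
// buffer is bound at call time; later bindBuffer() calls don't change the VAO.
struct ToyContext {
    int boundArrayBuffer = 0;   // current GL_ARRAY_BUFFER binding
    int vaoAttrib0Source = 0;   // what the bound VAO recorded for location 0

    void bindBuffer(int vbo)     { boundArrayBuffer = vbo; }
    void vertexAttribPointer()   { vaoAttrib0Source = boundArrayBuffer; } // snapshot!
};
```

So the re-binding sketch in the post should work: inside the bound VAO, bind vboID_2 and call glVertexAttribPointer again to re-snapshot, and no glBufferData re-upload is needed.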
  7. Palidine

    OpenGL 4 Shadow Map Problem

    ok. Actually, after making the GL_TEXTURE_BUFFER -> GL_TEXTURE_2D change, shadow maps are totally working! Woo! The other thing that was going on, for the curious, is that I had the Z clip plane on my orthographic transform for the depth-writing pass set WAY too high, so I was washing out the signal with much too far a clip plane. Basically I was losing important resolution on my floating-point depth value, since I was brute-forcing a clip range of [0 .. FLT_MAX]. Setting the far clip plane to just beyond my scene makes it all happy.
  8. Palidine

    OpenGL 4 Shadow Map Problem

    PS, Wired: are you sure this is correct? The OpenGL 4 documentation of glTexParameter and also the documentation of the GLSL texture(...) call both suggest this is necessary for texture(...) to be able to make the depth check correctly from a sampler2DShadow:

    https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glTexParameter.xml
    https://www.opengl.org/sdk/docs/man/html/texture.xhtml
    http://stackoverflow.com/questions/22419682/glsl-sampler2dshadow-and-shadow2d-clarification

    Granted, that tutorial is quite old, so I could just be wrong here.
  9. Palidine

    OpenGL 4 Shadow Map Problem

    Hey,

    A couple of good directions, thanks. WiredCat and Spiro, the GL_TEXTURE_BUFFER -> GL_TEXTURE_2D change solved the immediate problem, and the second pass is now apparently able to read the shadowMap.

    Spiro, I think I either pruned my code too much or didn't explain sufficiently what was going on. The reason it's rendering is that you're only looking at the FBO definition for the first pass, which is a depth-only write pass. There is a second pass, which has been working totally fine for quite a while, that's actually doing the rendering. I'm trying to wire the depth from the first pass (rendered ortho from the skylight) into the fragment shader for the second pass (rendered perspective from the camera) so that I can do shadow maps. The calls to glDrawBuffer/glReadBuffer only affect the FBO that's set up for the depth rendering and so are fine, I'm nearly certain, since no color information is being written in that pass.

    There are more problems now (it seems depth isn't written correctly during the first pass), but I'll debug for a while and come back with either my solution or more questions. Meanwhile, if there's anything obvious there relating to depth not being written in the first pass, let me know. The shaders for that pass are just trivial pass-throughs, and the fragment shader is empty, since all I care about is the automated depth writing.

    If it's helpful, I've been following this old old old tutorial. I'm probably running into either dumb math errors (typical), more lame copy/paste errors like the above, or maybe OpenGL 1.2 -> 4.x problems: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/
  10. Hey,

      Back again for what is likely just a trivial error. I am doing some basic, standard shadow map rendering:
        - a geometry-only pass writing to an FBO depth_buffer
        - a second pass comparing fragment depth in light space against the depth_buffer

      Problem: I'm nearly positive that I am either (a) not actually writing to depth_buffer or (b) failing to bind it to the second-pass fragment shader correctly for reading.

      Shadow map initialization:

      ```cpp
      glGenFramebuffers(1, &shadowmap_framebuffer);
      glBindFramebuffer(GL_FRAMEBUFFER, shadowmap_framebuffer);

      glGenTextures(1, &shadowmap_texture);
      glBindTexture(GL_TEXTURE_2D, shadowmap_texture);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, shadowmap_size, shadowmap_size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);

      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap_texture, 0);

      // No color output in the bound framebuffer, only depth.
      glDrawBuffer(GL_NONE);
      glReadBuffer(GL_NONE);

      // Check that our framebuffer is ok.
      ASSERT( glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE );

      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      ```

      First-pass shadow render. I've completely eliminated everything except the depth clearing just to sanity-check things. C++-side code:

      ```cpp
      glBindFramebuffer(GL_FRAMEBUFFER, shadowmap_framebuffer);
      glViewport(0, 0, shadowmap_size, shadowmap_size);
      glClearColor(0.f, 0.f, 0.f, 1.f);
      glClearDepthf(1.f);
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      glViewport(0, 0, screenInfo.Width, screenInfo.Height);
      ```

      Second-pass depth read. Relevant C++-side code:

      ```cpp
      glActiveTexture( GL_TEXTURE3 );
      glBindTexture( GL_TEXTURE_BUFFER, shadowmap_texture );
      ```

      Fragment shader:

      ```glsl
      #version 440

      layout(binding=3) uniform sampler2DShadow shadowMap;

      in vec4 ex_ShadowCoord;
      out vec4 outColor;

      void main()
      {
          // Shadow: let's just read something that should be valid
          // and also which should definitely pass. If the shadowMap is being
          // cleared to 1.f, then a lookup with z == 0.0 should always return
          // 1.0 from texture(...).
          vec3 lookupVec = vec3(0.5, 0.5, 0.0);
          float depth = texture( shadowMap, lookupVec );
          float visibility = mix( 0.5, 1.0, depth );

          //
          // VALIDATE shadowMap
          //
          vec3 color = vec3(0.0, 0.0, 0.0);
          if ( depth == 0.0 )
              color.r = 1.0;
          else if ( depth == 1.0 )
              color.b = 1.0;
          else
          {
              color.r = 1.0;
              color.g = 1.0;
          }
          outColor = vec4( color, 1.0 );
      }
      ```

      Everything is drawing completely RED, which means the texture(...) call is failing, at least as I understand how it should work. That suggests to me that it's most likely not bound correctly (based on prior errors I've made) or, perhaps less likely, that it's never getting written to.

      I've otherwise validated that the rest of the pipeline is correct (ex_ShadowCoord is coming in correctly and I have full coverage of the scene being rendered).
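For reference, a CPU-side sketch (an assumption-simplified model, not real GL) of what a sampler2DShadow lookup returns with GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC = GL_LEQUAL, ignoring PCF filtering: the result is the comparison, not the stored depth.

```cpp
#include <cassert>

// Simplified depth-compare semantics: texture(shadowMap, vec3(s, t, ref))
// returns 1.0 when ref <= storedDepth (the LEQUAL test passes), else 0.0.
float shadowSample(float storedDepth, float ref)
{
    return (ref <= storedDepth) ? 1.0f : 0.0f;
}
```

So with a map cleared to 1.0 and a reference z of 0.0, the lookup should indeed return 1.0; getting 0.0 everywhere is consistent with the sampler never reading the texture correctly, which matches the GL_TEXTURE_BUFFER vs GL_TEXTURE_2D binding mismatch identified earlier in the thread.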
  11. Palidine

    Simple imageBuffer Problem

    Thanks! I've been googling various usage/example things forever and I didn't find one that put everything together. That worked. Easy peasy. :)
  12. Hey,

      I think this should be a fairly straightforward thing to figure out, but I'm having problems getting data into my fragment shader from a texture buffer object.

      Problem summary: when I eventually get to the shader and call imageLoad, it always returns 0.0.

      Slightly more detail: the texture buffer is the size of the FBO and is going to be used for some I/O work using imageAtomicMin to manage some weird transparency stuff I'm prototyping for a specific game feature.

      Texture and buffer creation:

      ```cpp
      GLuint a_texID;
      GLuint a_bufID;

      Create()
      {
          glGenTextures( 1, &a_texID );
          glGenBuffers( 1, &a_bufID );
          glBindBuffer( GL_TEXTURE_BUFFER, a_bufID );
          glBufferData( GL_TEXTURE_BUFFER, screenInfo.Width*screenInfo.Height*sizeof(float), NULL, GL_DYNAMIC_DRAW );
      }
      ```

      Pre-render work (called every frame). If it matters, this logic gets executed before the shader program is bound:

      ```cpp
      Bind()
      {
          // Reset the buffer data, since the fragment shader will eventually write to it.
          const float initializationValue = 1.0f;
          glBindBuffer( GL_TEXTURE_BUFFER, a_bufID );
          glClearBufferData( GL_TEXTURE_BUFFER, GL_R32F, GL_RED, GL_FLOAT, &initializationValue );

          // Set up the texture buffer in the right place so the fragment shader can access it.
          glActiveTexture( GL_TEXTURE3 );
          glBindTexture( GL_TEXTURE_BUFFER, a_texID );
          glTexBuffer( GL_TEXTURE_BUFFER, GL_R32F, a_bufID );
      }
      ```

      Fragment shader:

      ```glsl
      uniform int screenWidth;
      layout(binding=3, r32f) uniform coherent imageBuffer myBuffer;

      main
      {
          int index = int(gl_FragCoord.x) + int(gl_FragCoord.y) * screenWidth;
          vec4 data = imageLoad( myBuffer, index );
          // PROBLEM: data.r is always == 0.0
      }
      ```

      Thing I tried: I tried skipping the glClearBufferData part and instead passed a data array that was all set to 1.0. That didn't work either, which suggests to me I'm doing something wrong with binding...
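The addressing in the fragment shader above is the usual flattening of a 2D pixel coordinate into a 1D buffer index; a minimal sketch of just that arithmetic (hypothetical helper name):

```cpp
#include <cassert>

// An imageBuffer is a flat 1D array, so pixel (x, y) maps to a linear index
// of x + y * screenWidth -- exactly the gl_FragCoord math in the shader.
int bufferIndex(int x, int y, int screenWidth)
{
    return x + y * screenWidth;
}
```

This is worth sanity-checking because an out-of-range index makes imageLoad return zero/undefined data, which can masquerade as the binding problem described above (though in this thread the actual fix turned out to be on the binding side).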
  13. Palidine

    Bind a VBO as a TexImage2D?

    ok. Yeah, cool. That looks like it will work. I think you have to create the GL_TEXTURE_BUFFER texture separately with a glGenTextures call, but then it looks like you can just straight up bind a buffer to it. I'll play around with it.

    Thanks
  14. Palidine

    Bind a VBO as a TexImage2D?

    It occurs to me I'm being dumb, because I can probably find a way to just have the first shader render the data directly to a texture instead of extracting it through Transform Feedback... But anyway, I guess I'm still curious about the OP. :)
  15. Hey,

      Random question to optimize a bizarro pipeline I have going.

      I'm using a Geometry Shader + Transform Feedback right now to generate a data set. I want to then pass that data set through another shader, but have it addressable as a sampler2D, because I need access to multiple data points in the next shader (evaluating neighbors).

      I know that I can use glGetBufferSubData to extract the output VBOs and then bind that data to a texture with the standard glTexImage2D stuff. But I'm wondering if there is a shortcut whereby I don't need to pull the information off of the graphics card or copy it from one location to another using a COPY_WRITE_BUFFER or whatever. Can I just rebind the VBO somehow and get it treated as a texture addressable by a sampler2D?

      Please feel free to ask questions if that's not clear. It's a mouthful...
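The shortcut that later replies in this thread converge on is the buffer texture (glTexBuffer): a texture that is simply another view over a buffer object's existing storage. A CPU-side toy model (hypothetical types, not real GL) of that no-copy aliasing:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy model: both the "VBO" and the "buffer texture" hold the SAME storage,
// mirroring how glTexBuffer attaches a buffer object's data store to a
// texture without any readback or copy.
using Storage = std::shared_ptr<std::vector<float>>;

struct Vbo           { Storage data; };
struct BufferTexture { Storage data; };

// Like glTexBuffer: make a texture view over the buffer's existing storage.
BufferTexture texBuffer(const Vbo& vbo) { return BufferTexture{vbo.data}; }
```

One caveat worth knowing: a buffer texture is sampled in GLSL as a samplerBuffer via texelFetch with an integer index, not as a sampler2D, so neighbor lookups would use index arithmetic like x + y * width rather than 2D texture coordinates.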