


Community Reputation

145 Neutral

About hudovisk

  1. I am currently implementing a deferred shader and I would like to do some CPU processing with the G-buffer. The most efficient way I found was to use Pixel Buffer Objects for asynchronous reads; here is the tutorial I am following: http://www.songho.ca/opengl/gl_pbo.html

Here is how I create my pixel buffer objects:

```cpp
// Create the PBOs
glGenBuffers(2, m_pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo[0]);
glBufferData(GL_PIXEL_PACK_BUFFER, m_width * m_height * 3 * sizeof(float), 0, GL_STREAM_READ);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo[1]);
glBufferData(GL_PIXEL_PACK_BUFFER, m_width * m_height * 3 * sizeof(float), 0, GL_STREAM_READ);
```

And here I create my frame buffer object and attach the 2D textures:

```cpp
// Create the FBO
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);

// Create the G-buffer textures
glGenTextures(GBUFFER_NUM_TEXTURES, m_textures);
glGenTextures(1, &m_depthTexture);

for (unsigned int i = 0; i < GBUFFER_NUM_TEXTURES; i++)
{
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, m_width, m_height, 0, GL_RGB, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}

// Depth
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, m_width, m_height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);

GLenum DrawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                         GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(4, DrawBuffers);
```

Here is my render method. I simply render my geometry into the G-buffer, then read back the normal buffer and try to write it to the screen:

```cpp
unsigned int bufferIndex = 0;
unsigned int nextBufferIndex = 0;

void render(Scene& scene)
{
    bufferIndex = (bufferIndex + 1) % 2;
    nextBufferIndex = (bufferIndex + 1) % 2;

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderToGBuffer(scene);
    // renderGBuffer();

    // Start an asynchronous read of this frame into the current PBO.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo[bufferIndex]);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0 + GBUFFER_TEXTURE_TYPE_NORMAL);
    glReadPixels(0, 0, m_width, m_height, GL_RGB, GL_FLOAT, 0);

    // Map the other PBO, which should hold the previous frame's data.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo[nextBufferIndex]);
    m_pixelBuffer = (float*) glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_WRITE);
    if (m_pixelBuffer != nullptr)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawPixels(m_width, m_height, GL_RGB, GL_FLOAT, m_pixelBuffer);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    else
    {
        std::cout << "m_pixelBuffer is NULL" << std::endl;
    }

    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    SDL_GL_SwapWindow(m_window);
}
```

And it draws nothing! But glDrawPixels is being called, because the message in the else branch never shows. The renderGBuffer method that is commented out works and draws the G-buffer on the screen, so there is definitely something in the buffer.
  2. Visualization of a kd-tree implemented by me! 8000 spheres randomly placed.
  3. Thanks for the articles; now I see that the way I was using (and was taught) was wrong. About the input: I changed to an event manager like the one in Game Coding Complete 4, using fast delegates: http://www.codeproject.com/Articles/11015/The-Impossibly-Fast-C-Delegates. Now I am going to learn how to do it multithreaded so I can handle SDL properly. So much to learn... Thanks for the help.
  4. Thanks for your reply. OK, let me see if I got it: I don't need handlers, because the input goes directly into my buffer, and the buffer is accessed directly from the logic update, right? But I still need a way to parse the input data into game data, and this must be relative to the current context. The way I see it, a player would have a context, a menu would have a context, and so on. Then, whenever I read the input, I would call something like message = currentContext.parse(input); and check whatever I need in order to execute the message. So I still need a way to parse my XML into a generic message, or is there a more elegant way of doing it? Also, could you explain why the singleton is wrong? It's not the first time I've seen that claim, but no one has pointed me to an article or explained why.
  5. Awesome! Can you say which data structure you used to accelerate the intersection tests? I'm struggling to implement my k-d tree efficiently enough to use it dynamically.
  6. Hi everybody, I am thinking about how to implement my game engine, and I started with the input system. I read http://www.gamedev.net/blog/355/entry-2250186-designing-a-robust-input-handling-system-for-games/ and came up with this. What I have is an InputManager that handles SDL's input and sends events to a list of InputContexts. Each InputContext has a pointer to an IInputListener, which is the one that handles the input, but the InputContext has the role of translating the InputEvent object into a game message. And this is the part I don't know how to do. I thought of putting the information needed by InputContext inside an XML file, but I don't know how to parse XML into a generic type of data, which may have one, two, or three values inside it, and pass it to the game logic. My problem isn't how to load an XML file; it is how to create a generic MessageObject that my game can handle.

InputManager.h:

```cpp
class InputManager
{
public:
    static InputManager* getInstance();
    void dispatchEvents();
    void registerContext(InputContext* c);
    void deregisterContext(InputContext* c);
private:
    InputManager();
    static InputManager* instance;
    std::list<InputContext*> contexts;
};
```

The only interesting function is dispatchEvents(), which is called every frame:

```cpp
void InputManager::dispatchEvents()
{
    SDL_Event event;
    while (SDL_PollEvent(&event) != 0)
    {
        InputEvent inputEvent;
        std::map<unsigned int, InputEventType>::iterator it = inputEventMap.find(event.type);
        if (it == inputEventMap.end())
        {
            std::cout << "Unknown Event" << std::endl;
            continue;
        }
        inputEvent.type = it->second;
        inputEvent.timestamp = event.common.timestamp;
        if (inputEvent.type == KEYBOARD)
        {
            inputEvent.key = mapKeyInput(event);
        }
        else if (inputEvent.type == MOUSE)
        {
            inputEvent.mouse = mapMouseInput(event);
        }
        for (std::list<InputContext*>::iterator ctx = contexts.begin(); ctx != contexts.end(); ctx++)
        {
            if ((*ctx)->dispatch(inputEvent))
            {
                break;
            }
        }
    }
}
```

Here is a possible syntax for the XML:

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<context>
    <key value="W" pressed="true" message="MOVE_FORWARD"/>
    <key value="SHIFT + W" pressed="true" message="RUN_FORWARD"/>
    ...
    <mouse value="MOTION" message="CAMERA_MOTION">
        <param value="X" inverted="false"/>
        <param value="Y" inverted="false"/>
    </mouse>
</context>
```

I don't like the idea of passing the messages as strings, because string comparison isn't efficient. I am open to suggestions.

Thanks in advance.
  7. (In "OpenGL Matrix issues") Thanks haegarr, you are right: the view matrix should be the inverse, not only the transpose. Unfortunately I don't have enough knowledge to keep this conversation going; I'm so confused right now that I don't even know what to ask :). I guess I will start over with modern OpenGL and improve my own matrix library. Thanks for pointing out the errors, guys; I really appreciate it. Just one last favor: do you know any good resources for learning modern OpenGL?
  8. (In "OpenGL Matrix issues") Thanks for your reply, mhagain! You are right, I'm not handling my own matrices; I just define some. I'm implementing a ray-tracing renderer, and OpenGL is used only to visualize my kd-tree. I chose post-multiplication because, for me, it is more intuitive to have the transformations applied from first to last, so when I finish the OpenGL part it will be easier.

OK, I edited my code to the following:

```cpp
void render()
{
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    Matrix4x4 modelview = scene->camera.getGLTransform();

    glColor3f(1.f, 1.f, 0.f);
    for (unsigned int i = 0; i < scene->primitives.size(); i++)
    {
        Primitive* object = scene->primitives[i];
        glPushMatrix();
        vector3f pos = object->globalOrigin;
        glTranslatef(pos.coord[0], pos.coord[1], pos.coord[2]);
        glMultMatrixf((GLfloat*) modelview.data);
        glutSolidSphere(1, 20, 20);
        glPopMatrix();
    }
}
```

But it still doesn't give me the FPS-style camera. Is my concept of an FPS camera wrong? I mean, shouldn't I first translate and then rotate? I guess my post-multiplication matrices are interfering with it, but I don't see how.

I thought glFlush() was only there to force the commands to be written to the current buffer; I'm actually using double buffering. The command to swap buffers is called after the render function, in my main loop.

This is my first OpenGL project; I still have a lot to learn.
  9. Hello! I'm writing an engine that uses OpenGL, and I want to implement an FPS-style camera. I'm handling my own matrices, so each object has its own matrix. The matrices were implemented in row-major order using post-multiplication (I know, exactly the inverse of OpenGL, but I need it somewhat independent of the OpenGL "style").

My render function is like this:

```cpp
void render()
{
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    glColor3f(1.f, 1.f, 0.f);
    for (unsigned int i = 0; i < scene->primitives.size(); i++)
    {
        Primitive* object = scene->primitives[i];
        glPushMatrix();
        vector3f pos = object->globalOrigin;
        glTranslatef(pos.coord[0], pos.coord[1], pos.coord[2]);
        glutSolidSphere(1, 20, 20);
        glPopMatrix();
    }

    Matrix4x4 modelview = scene->camera.getGLTransform();
    glMultMatrixf((GLfloat*) modelview.data);
    glFlush();
}
```

It basically draws a bunch of spheres. The function getGLTransform() just gets the matrix from the camera and transposes it. From what I understand, for an FPS-style camera I need to first translate world space relative to the camera position, and then rotate it. So I must apply the camera transformation last in my code, because OpenGL uses pre-multiplication and the transformations are applied from last to first. But the above code doesn't respond to my camera location and rotation at all.

If I apply the camera transformations right after glLoadIdentity(), it responds to camera position and rotation, but it doesn't rotate like an FPS system: it first rotates my world space and then translates.

I have already checked the matrices and they are okay.

I just can't see what I'm doing wrong. Please help me. Thanks in advance.