nico3000

  1. Update: It IS possible since OpenGL 4.3 via texture views! http://www.opengl.org/sdk/docs/man/html/glTextureView.xhtml (a short sketch follows below)
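     A minimal sketch of such views, assuming arrayTex is an immutable-storage GL_TEXTURE_2D_ARRAY (allocated with glTexStorage3D) and tex2d an immutable GL_TEXTURE_2D, both GL_RGBA8; all names here are hypothetical:

        // Alias layer 3 (mip 0) of the array as a plain GL_TEXTURE_2D.
        // The view name must come from glGenTextures and must not have
        // been bound to a target yet; the source must be immutable.
        GLuint sliceView;
        glGenTextures(1, &sliceView);
        glTextureView(sliceView, GL_TEXTURE_2D, arrayTex, GL_RGBA8,
                      0, 1,   // minlevel, numlevels
                      3, 1);  // minlayer, numlayers
        glBindTexture(GL_TEXTURE_2D, sliceView); // now usable as a sampler2D

        // A sub-array of layers [i, j] works the same way with target
        // GL_TEXTURE_2D_ARRAY and numlayers = j - i + 1.

        // A single mip (here level 2) of a regular 2D texture:
        GLuint mipView;
        glGenTextures(1, &mipView);
        glTextureView(mipView, GL_TEXTURE_2D, tex2d, GL_RGBA8, 2, 1, 0, 1);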
  2. Hi, I'm trying to set up an FBO with a single color attachment and a depth texture attachment (no renderbuffer, because I want to read the depth in my shaders). Basically my code looks like this.

     Depth texture generation:

        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // the original code passed GL_NEAREST for the wrap modes, which is
        // not a valid value and raises GL_INVALID_ENUM
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32, 1024, 1024);
        glBindTexture(GL_TEXTURE_2D, 0);

     Framebuffer generation:

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        // attach color texture (works fine)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
        glCheckFramebufferStatus(GL_FRAMEBUFFER);

     The problem: after glFramebufferTexture2D I get a GL_INVALID_OPERATION. However, if I do this instead,

        glBindTexture(GL_TEXTURE_2D, depthTex);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
        glBindTexture(GL_TEXTURE_2D, 0);

     glFramebufferTexture2D succeeds, but afterwards glCheckFramebufferStatus() returns GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.

     My OpenGL context reports version 4.4.0 and I'm using GLEW in its most recent version. Does anyone have a clue what I'm doing wrong?

     Edit: It doesn't make a difference whether I bind the color attachment or not. However, if I do not attach the depth texture, everything works fine.

     Edit 2: After 24 hours of debugging I found my error. It's nothing OpenGL related, so this topic can be closed :) (A minimal known-good version of the setup is sketched below for reference.)
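     For reference, a minimal known-good version of this setup under a GL 4.x context; the names (colorTex, depthTex2, fbo2) and the 1024x1024 size are hypothetical:

        // One RGBA8 color attachment plus a depth texture that stays
        // readable from shaders (no renderbuffer involved).
        GLuint colorTex, depthTex2, fbo2;
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 1024, 1024);

        glGenTextures(1, &depthTex2);
        glBindTexture(GL_TEXTURE_2D, depthTex2);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32F, 1024, 1024);
        glBindTexture(GL_TEXTURE_2D, 0);

        glGenFramebuffers(1, &fbo2);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex2, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // handle the incomplete framebuffer
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);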
  3. I'm writing an abstraction layer over the graphics API used in my engine. I built it around Direct3D 11 and am now working on the OpenGL implementation. In Direct3D you have this resource/view concept, and you can define a shader resource view for a single mip, a single array slice, or a sub-array. Currently I have no need for this feature, but I thought it would be nice to implement the possibility ;)   A scenario I can think of: you write a general-purpose shader (e.g. an edge-detection filter) that takes a single texture. Now you have a texture array, e.g. from MRT rendering, with one slice containing depth values. If you want to run edge detection on just that depth slice, you can't feed the array to your general-purpose filter directly. Of course there are ways to realize this without the feature, but coming from D3D this would be the intuitive way (at least for me ;) ). A D3D11 sketch of such a view follows below.
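     A minimal D3D11 sketch of such a view over one slice of a texture array; pDevice, pArrayTex, and depthSlice are assumed to exist, and the DXGI format is an assumption (the HLSL side would declare a Texture2DArray and sample layer 0 of the view):

        // Expose a single slice of a Texture2DArray, restricted to the
        // top mip level, to a shader via its own SRV.
        D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
        desc.Format = DXGI_FORMAT_R32_FLOAT;              // assumed slice format
        desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DARRAY;
        desc.Texture2DArray.MostDetailedMip = 0;
        desc.Texture2DArray.MipLevels = 1;
        desc.Texture2DArray.FirstArraySlice = depthSlice; // the depth slice
        desc.Texture2DArray.ArraySize = 1;                // just that one slice
        ID3D11ShaderResourceView* pView = nullptr;
        HRESULT hr = pDevice->CreateShaderResourceView(pArrayTex, &desc, &pView);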
  4. Hi, I've created a texture with target GL_TEXTURE_2D_ARRAY and n array slices. Is it possible to bind the i-th slice of my texture to a sampler2D object in my GLSL shader? Or even the sub-array from the i-th to the j-th slice to a sampler2DArray object? And is it possible to bind a single mip slice of a GL_TEXTURE_2D texture to a sampler2D object?
  5. Yes, I read that before. I think Windows also uses the high-precision data from your mouse and moves the cursor one pixel for each unit of mouse motion IF you set the pointer speed in your mouse properties to the 6th notch. I set my mouse to 5700 dpi and chose the 3rd notch. The cursor now has the perfect speed for me in Windows, and Raw Input gives more than pixel accuracy. I think the MSDN article is a little confusing on that point. (A small sketch of putting the raw counts to use follows below.)
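     A tiny sketch of what the extra precision buys, assuming a hypothetical camera; the counts-per-inch and sensitivity values are made up, and the deltas are the raw WM_INPUT counts rather than cursor pixels:

        // Raw counts scale with the mouse's cpi, not screen pixels, so a
        // camera driven this way keeps precision the cursor path discards.
        static float g_yaw = 0.0f, g_pitch = 0.0f;
        const float kCountsPerInch  = 5700.0f; // hypothetical hardware cpi
        const float kDegreesPerInch = 120.0f;  // hypothetical sensitivity

        void OnRawMouseDelta(long lastX, long lastY)
        {
            const float degreesPerCount = kDegreesPerInch / kCountsPerInch;
            g_yaw   += lastX * degreesPerCount;
            g_pitch += lastY * degreesPerCount;
        }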
  6. And how do I achieve this? 
  7. I set up Raw Input to process mouse events like this:

        m_rawSize = 0;
        m_pRaw = 0;
        RAWINPUTDEVICE pMouseRid[1];
        pMouseRid->usUsagePage = 0x01; // generic desktop controls
        pMouseRid->usUsage = 0x02;     // mouse
        pMouseRid->dwFlags = 0;        // RIDEV_NOLEGACY;
        pMouseRid->hwndTarget = m_hwnd;
        if(!RegisterRawInputDevices(pMouseRid, 1, sizeof(RAWINPUTDEVICE)))
        {
            IG_ERROR("RegisterRawInputDevices() failed!");
            return false;
        }
        else
        {
            return true;
        }

     And I listen to them like this:

        // first call queries the required buffer size (returns 0 on success)
        UINT dwSize = 0;
        UINT result = GetRawInputData((HRAWINPUT)p_lParam, RID_INPUT, NULL, &dwSize, sizeof(RAWINPUTHEADER));
        if(result != 0)
        {
            IG_WARNING("GetRawInputData() failed (Result " << result << ", lParam " << p_lParam << ").");
            return;
        }
        // grow the reusable buffer if needed
        if(m_rawSize < dwSize)
        {
            SAFE_DELETE_ARRAY(m_pRaw);
            m_pRaw = (RAWINPUT*)(IG_NEW unsigned char[dwSize]);
            m_rawSize = dwSize;
            if(!m_pRaw)
            {
                IG_WARNING("Failed to allocate space.");
                return;
            }
        }
        if(m_pRaw)
        {
            result = GetRawInputData((HRAWINPUT)p_lParam, RID_INPUT, m_pRaw, &dwSize, sizeof(RAWINPUTHEADER));
            if(result == (UINT)-1)
            {
                IG_WARNING("GetRawInputData() failed.");
                return;
            }
            if(result != dwSize)
            {
                IG_WARNING("GetRawInputData does not return correct size! Expected " << dwSize << ", got " << result);
                return;
            }
            else if(m_pRaw->header.dwType == RIM_TYPEMOUSE)
            {
                WmInput input;
                input.m_message = WM_INPUT;
                input.m_deltaX = m_pRaw->data.mouse.lLastX;
                input.m_deltaY = m_pRaw->data.mouse.lLastY;
                input.m_flags = m_pRaw->data.mouse.usButtonFlags;
                input.m_data = m_pRaw->data.mouse.usButtonData;
                IG_INFO("WM_INPUT " << m_pRaw->data.mouse.lLastX << ", " << m_pRaw->data.mouse.lLastY << ".");
                m_inputs.push(input);
            }
        }

     But with this code the WM_INPUT events only come in when the cursor moves by at least one pixel, and the lLastX/lLastY values just mirror that one-pixel step. So it doesn't have the promised advantage of being more precise than the WM_MOUSEMOVE messages. Am I wrong? What is the proper way to get high-resolution input from my mouse? (A stripped-down standalone version of this code is sketched below.)

     I have a Logitech G500 + Logitech Gaming Software 8.45.88 installed. The DPI is set to 1500.
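     A stripped-down standalone version of the same flow, with the engine macros removed; hwnd is assumed to be a valid window handle and the handler is meant to be called from the window procedure on WM_INPUT:

        #include <windows.h>

        // Register the mouse as a raw input device for the given window.
        bool RegisterRawMouse(HWND hwnd)
        {
            RAWINPUTDEVICE rid = {};
            rid.usUsagePage = 0x01; // generic desktop controls
            rid.usUsage     = 0x02; // mouse
            rid.dwFlags     = 0;
            rid.hwndTarget  = hwnd;
            return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
        }

        // Accumulate raw deltas; these are device counts, not pixels.
        void HandleRawInput(LPARAM lParam, long& accumX, long& accumY)
        {
            UINT size = 0;
            if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &size,
                                sizeof(RAWINPUTHEADER)) != 0)
                return;
            BYTE* buffer = new BYTE[size];
            if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer, &size,
                                sizeof(RAWINPUTHEADER)) == size)
            {
                const RAWINPUT* raw = (const RAWINPUT*)buffer;
                if (raw->header.dwType == RIM_TYPEMOUSE &&
                    !(raw->data.mouse.usFlags & MOUSE_MOVE_ABSOLUTE))
                {
                    accumX += raw->data.mouse.lLastX;
                    accumY += raw->data.mouse.lLastY;
                }
            }
            delete[] buffer;
        }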