

Member Since 02 Jul 2004

Posts I've Made

In Topic: Textures becoming transparent as camera approaches

03 September 2013 - 10:10 AM

Without seeing any code, the first thing that popped into my head is that something is wrong with texture mipmapping.

In Topic: Font rendering with Freetype - Quality problems

19 May 2013 - 12:00 PM

Yes, this makes sense to me.  When dealing with any pixel-perfect rendering, you should always represent your positions and sizes as integers.  When you pass a coordinate like 0.5f, the GPU ultimately has to choose which pixel it maps to (this is decided by your projection matrix and viewport).  I'm not sure why it's different on NVIDIA vs ATI, but it's probably down to floating-point rounding differences.


For pixel-perfect rendering I would use an orthographic projection that maps directly to your screen resolution.  Make sure the viewport (glViewport() if you're using OpenGL) is set to your screen resolution too.  Don't rely on the default viewport that is set for you when you bind the OpenGL context.


I've recently implemented font rendering using FreeType and got results almost identical to what you've shown in your image.

In Topic: Capturing Win32 Mouse Repeat Events

28 April 2013 - 10:00 AM

Thanks for the replies.  This is exactly the behavior I've encountered.  Using a timer isn't an issue at all; I was mostly curious whether it was possible otherwise.


I'm also using SystemParametersInfo() with SPI_GETKEYBOARDDELAY and SPI_GETKEYBOARDSPEED to query the keyboard's initial delay and repeat rate, and applying them to a custom mouse-down repeat event.

In Topic: Capturing Win32 Mouse Repeat Events

28 April 2013 - 01:23 AM

I did try that, and my mouse clicks don't seem to be triggering a WM_KEYDOWN message.  Should they be?  Or can VK_LBUTTON only be queried through GetKeyState() and GetAsyncKeyState()?

In Topic: Learning GLSL all over again

05 March 2013 - 06:57 PM

I believe OpenGL handles all of this during glLinkProgram().  It will bind the vertex shader's outputs to the fragment shader's inputs, and complain (i.e. fail to link) if anything doesn't line up.  Older versions of GLSL used the varying keyword for this; it's the same concept, just new keywords.


The output of a fragment shader is typically a vec4, since you are writing a color (or other value) to a render target.
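A minimal sketch of what that matching looks like in modern GLSL (names here are placeholders, not from the original thread):

```glsl
// Vertex shader (#version 330 core). The "out" declared here...
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;
out vec2 vTexCoord;          // ...must match the fragment shader's "in"
uniform mat4 uMvp;
void main()
{
    vTexCoord = aTexCoord;
    gl_Position = uMvp * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: "in vec2 vTexCoord" pairs with the vertex "out"
// by name and type at glLinkProgram() time (old GLSL spelled both
// sides "varying"). The vec4 output is the color written to the
// render target.
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uTexture;
void main()
{
    fragColor = texture(uTexture, vTexCoord);
}
```

Rename either side's vTexCoord without the other and the link step will fail, which is exactly the "complain if anything doesn't line up" behavior described above.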