
Member Since 13 Aug 2011
Offline Last Active Nov 13 2014 12:13 AM

Topics I've Started

How to orbit around a plane without affecting the camera Z position?

08 November 2014 - 01:41 AM

I have a simple scene rendered in OpenGL that contains a grid plane, the same as you'd find in any 3D modeling app. When the app initializes, I set up a lookat matrix pointing at the center of the scene (0,0). What I want to do is rotate the entire scene, ground plane included, around (0,0) without affecting the Z orientation (roll) of the "camera." Right now I read the mouse position and use it to transform the scene via a rotation matrix, but depending on the angle, the plane cants fore and aft and won't stay perfectly level. I'm sure I'm missing some math that corrects the angles based on the eye position. Searching turned up references to atan2(), which seems to be used for this kind of correction, but I'm not sure how to implement it. Here's what I have:

//set up lookat matrix once when app initializes to give some nice perspective

//rotate view around the scene
//x,y,z struct members are from the mouse position
glRotatef(userinput.rot.angle, userinput.rot.x, userinput.rot.y, userinput.rot.z);

Need help identifying/marking white areas in 2D image

03 December 2013 - 09:59 AM

I'd like to take a grayscale image that contains white blotchy areas and identify the center of these areas. For example, if there's an area that contains pixel values above 240, I want to be able to get the x,y position of the middle of that area. These images explain pretty much what I'm trying to accomplish:


Image containing white areas to be identified:

[Attached image: track01.png]


And what I'd like to be able to do:

[Attached image: track02.png]


As you can see, I'd like to figure out the center of these areas so I can mark them. Keep in mind that the areas may be irregularly shaped. Also, there would need to be some way of separating the white blobs so they can be treated as separate objects; maybe they count as separate only if there's a certain number of black pixels between them.
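For reference, the "separate only if there's black between them" idea is exactly what connected-component labeling does: threshold the image, flood-fill each bright region, and average its pixel coordinates to get the center. A CPU sketch in C with illustrative names, using a 4-connected fill (diagonally touching blobs count as separate):

```c
#include <stdlib.h>

/* Label bright blobs (pixel > threshold) in a grayscale image and
   report each blob's centroid and area. Two blobs are separate
   whenever no bright pixel path connects them. */
#define MAX_BLOBS 64

typedef struct { float cx, cy; int area; } Blob;

static int find_blobs(const unsigned char *img, int w, int h,
                      unsigned char threshold, Blob *out)
{
    int nblobs = 0;
    char *seen = calloc((size_t)w * h, 1);
    int *stack = malloc(sizeof(int) * (size_t)w * h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            if (seen[i] || img[i] <= threshold) continue;
            /* flood-fill one blob, accumulating coordinate sums */
            long sx = 0, sy = 0, area = 0;
            int top = 0;
            stack[top++] = i;
            seen[i] = 1;
            while (top) {
                int p = stack[--top];
                int px = p % w, py = p / w;
                sx += px; sy += py; area++;
                const int nx[4] = { px - 1, px + 1, px,     px     };
                const int ny[4] = { py,     py,     py - 1, py + 1 };
                for (int k = 0; k < 4; k++) {
                    if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h)
                        continue;
                    int q = ny[k] * w + nx[k];
                    if (!seen[q] && img[q] > threshold) {
                        seen[q] = 1;
                        stack[top++] = q;
                    }
                }
            }
            if (nblobs < MAX_BLOBS) {
                out[nblobs].cx = (float)sx / area;  /* centroid x */
                out[nblobs].cy = (float)sy / area;  /* centroid y */
                out[nblobs].area = (int)area;
                nblobs++;
            }
        }
    free(seen);
    free(stack);
    return nblobs;
}
```

A minimum-area check on `area` is an easy way to ignore single-pixel noise before marking the centers.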

Color saturation in GLSL

05 July 2013 - 09:12 AM

I need a way to saturate (not desaturate) colors in a GLSL shader. There's code all over the place for desaturating an image. Example:

// standard luminance-based desaturation (Rec. 709 luma weights)
vec3 desaturate(vec3 color, float amount)
{
    vec3 gray = vec3(dot(vec3(0.2126, 0.7152, 0.0722), color));
    return mix(color, gray, amount);
}

Many suggest converting RGB to HSV space before increasing saturation. However, I don't need to change hue, only saturation. If I pass negative values to the above function, it indeed appears to saturate the image. Is there anything technically wrong about doing it this way? Am I trying to take a dangerous shortcut here?
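There's nothing inherently wrong with it: mix(a, b, t) is just a + (b - a) * t, so a negative t extrapolates away from gray, boosting saturation without touching hue. The one caveat is that channels can leave the [0,1] range, so the result should be clamped. A C model of the same math, with illustrative names:

```c
/* C model of the GLSL saturate-by-extrapolation trick.
   mix(color, gray, amount) with amount < 0 pushes each channel
   away from the pixel's luminance, increasing saturation.
   Channels can exceed [0,1], so clamp afterwards. */
static float mixf(float a, float b, float t) { return a + (b - a) * t; }
static float clamp01(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

static void saturate_rgb(const float in[3], float amount, float out[3])
{
    /* Rec. 709 luma weights, same as the desaturate() snippet */
    float gray = 0.2126f * in[0] + 0.7152f * in[1] + 0.0722f * in[2];
    for (int i = 0; i < 3; i++)
        out[i] = clamp01(mixf(in[i], gray, -amount)); /* negative t = extrapolate */
}
```

In the shader the equivalent is `clamp(mix(color, gray, -amount), 0.0, 1.0)`; without the clamp, bright saturated pixels can blow out once a channel passes 1.0.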

Multiple render passes in GLSL: separate shaders or one shader?

04 March 2013 - 12:05 PM

I've been playing around with doing multiple render passes in a fragment shader. I have FBOs with attached textures that I bind and render to, and on each pass the previously rendered texture is available for reading in the fragment shader. I'm doing three passes, all with the same shader: I update a uniform named "pass" between passes, and if statements branch on it to select the work for each pass.


It all works, but I'm wondering if there's a better way to do this. I read that others use separate shaders altogether and swap them between passes (with glUseProgram, I assume). That seems like it would have more overhead unless the programs are all compiled and linked up front. Is this a good approach, or am I overlooking something?
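For comparison, both approaches share the same pass-loop bookkeeping: the passes ping-pong between two FBOs, and the only difference is whether the loop body sets a uniform or switches programs. A C sketch with the GL calls left as comments (they need a live context); names are illustrative:

```c
/* Ping-pong indexing for N fullscreen passes over two FBOs.
   read_idx is the texture sampled this pass, write_idx the FBO
   rendered into; they swap every pass. The GL calls are sketched
   as comments because they require a context. */
static int final_target(int npasses)
{
    int read_idx = 0, write_idx = 1;
    for (int pass = 0; pass < npasses; pass++) {
        /* glBindFramebuffer(GL_FRAMEBUFFER, fbo[write_idx]);          */
        /* glUseProgram(program[pass]);   -- or set the "pass" uniform */
        /* glBindTexture(GL_TEXTURE_2D, tex[read_idx]);                */
        /* ... draw fullscreen quad ...                                */
        int tmp = read_idx; read_idx = write_idx; write_idx = tmp;
    }
    return read_idx; /* index of the texture holding the final result */
}
```

Note that linking happens once at startup in either scheme; the per-pass cost of glUseProgram is a state change, which is typically cheaper than the divergent branching a per-fragment `if (pass == ...)` can cause.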

How to reduce aliasing in viewport when downscaling from larger render?

06 February 2013 - 03:46 AM

I have a simple shader that does some image processing on 2D images (textures) and then renders them at video resolutions like 1920x1080. The problem is that the viewport in the UI through which the user views the render is much smaller, say the size of a phone screen. So although my render is 1920x1080, it's displayed in a much smaller area, and downscaling the larger render to the smaller viewing area produces lots of aliasing in the viewport.

What can I do to reduce the aliasing? Is there a standard technique used in this case?
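The usual fixes are to give the render target mipmaps and sample it with GL_LINEAR_MIPMAP_LINEAR (calling glGenerateMipmap after rendering), or to downscale in successive 2x steps so every output pixel averages its source texels instead of skipping them. The averaging one 2x step performs is simple; a CPU sketch in C with illustrative names, assuming even dimensions and a grayscale image:

```c
/* One 2x box-filter downsample step: each output pixel is the
   rounded average of a 2x2 input block. Repeating this until the
   image reaches the viewport size avoids the aliasing caused by a
   single large-ratio downscale, which drops most source pixels. */
static void downsample2x(const unsigned char *src, int w, int h,
                         unsigned char *dst) /* dst is (w/2) x (h/2) */
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2*y)     * w + 2*x] + src[(2*y)     * w + 2*x + 1]
                    + src[(2*y + 1) * w + 2*x] + src[(2*y + 1) * w + 2*x + 1];
            dst[y * (w / 2) + x] = (unsigned char)((sum + 2) / 4); /* rounded */
        }
}
```

On the GPU the mipmap route does essentially this chain for you, and trilinear filtering then blends between the two mip levels nearest the viewport's effective resolution.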