xargon123

Member Since 09 Dec 2005
Offline Last Active Dec 04 2012 06:36 PM

Posts I've Made

In Topic: Catmull Rom spline derivative

22 November 2010 - 08:18 AM

One final question about this:

I need this because I have to calculate the gradient of a 2D image that was generated by resampling the original image with the Catmull-Rom spline interpolation kernel.

So, the way I understand it, the gradient of the resampled image should be the same as the convolution of the resampled image with this derivative kernel. Does that sound right?

Since it is a 2D image, the gradient would be a vector field with a 2-element vector at each pixel. The original interpolation worked by looking at the 16 neighbouring values (a 4x4 neighbourhood) and producing the interpolated intensity. I am struggling to understand how I can get the gradient vector field from this resampled image. I am guessing I somehow have to do the convolution along each axis separately, but I'm struggling to see how that would work.
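To make the question concrete, here is roughly how I picture the separable version (just a sketch; crWeights, crDerivWeights and sampleGradient are placeholder names I made up, and the derivative weights are simply the analytic derivatives of the standard Catmull-Rom basis):

#include <algorithm>
#include <cmath>

// Standard Catmull-Rom basis weights for the 4 neighbours at fractional offset t in [0, 1).
void crWeights(float t, float w[4])
{
    float t2 = t * t, t3 = t2 * t;
    w[0] = 0.5f * (-t3 + 2.0f * t2 - t);
    w[1] = 0.5f * ( 3.0f * t3 - 5.0f * t2 + 2.0f);
    w[2] = 0.5f * (-3.0f * t3 + 4.0f * t2 + t);
    w[3] = 0.5f * ( t3 - t2);
}

// Derivative of the same basis with respect to t.
void crDerivWeights(float t, float d[4])
{
    float t2 = t * t;
    d[0] = 0.5f * (-3.0f * t2 + 4.0f * t - 1.0f);
    d[1] = 0.5f * ( 9.0f * t2 - 10.0f * t);
    d[2] = 0.5f * (-9.0f * t2 + 8.0f * t + 1.0f);
    d[3] = 0.5f * ( 3.0f * t2 - 2.0f * t);
}

// Gradient at one sample position: derivative weights along the axis being
// differentiated, ordinary interpolation weights along the other axis, summed
// over the same 4x4 neighbourhood the interpolation already uses.
void sampleGradient(const float* img, int w, int h,
                    float x, float y, float& gx, float& gy)
{
    int   ix = (int)std::floor(x), iy = (int)std::floor(y);
    float fx = x - ix,             fy = y - iy;
    float wx[4], wy[4], dwx[4], dwy[4];
    crWeights(fx, wx);       crWeights(fy, wy);
    crDerivWeights(fx, dwx); crDerivWeights(fy, dwy);
    gx = gy = 0.0f;
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i) {
            int xx = std::min(std::max(ix - 1 + i, 0), w - 1);  // clamp at the borders
            int yy = std::min(std::max(iy - 1 + j, 0), h - 1);
            float v = img[yy * w + xx];
            gx += v * dwx[i] * wy[j];
            gy += v * wx[i]  * dwy[j];
        }
}

So if that is the right way to think about it, nothing beyond the existing 4x4 lookup is needed; the only change is swapping in the derivative weights along the axis being differentiated.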

Thanks,
xarg

In Topic: Catmull Rom spline derivative

22 November 2010 - 06:43 AM

Great!

Thanks for that :)

In Topic: newbie OpenGL question. Please help!

18 May 2010 - 04:36 PM

Thank you guys! Sorry, I'm very new to this and keep making these silly mistakes.

Many thanks!

xarg

In Topic: Need help with OpenGL transformations

17 May 2010 - 10:49 AM

Thanks for that.

I tried a simple test as follows:


extern "C" __declspec(dllexport) BOOL draw_gl(HDC *hdc,
HGLRC *hglrc)
{
if (hdc && hglrc) {
float test[16] = {0.0f};
test[0] = 1.0f;
test[5] = 1.0f;
test[10] = 1.0f;
test[12] = -1.0f;
test[13] = -1.0f;
test[14] = 0.0f;
test[15] = 1.0f;
wglMakeCurrent(*hdc, *hglrc);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glLoadMatrixf(test);
glPushMatrix();
glBegin(GL_LINES);
glColor3f(1.0f, 0.0f, 0.0f);
const float delta = 0.1f;
const float limit
for (float i = -1.0f; i<= 1.0f; i+= delta) {
glVertex2f( 1.0f, i);
glVertex2f(-1.0f, i);
}
for (float i = -1.0f; i<= 1.0f; i+= delta){
glVertex2f(i, 1.0f);
glVertex2f(i, -1.0f);
}
glEnd();
glPopMatrix();
return TRUE;
}
return FALSE;
}


This seems to work as expected. There is just one last bit: my transformation matrix is in GDI+ screen pixel coordinates. Is there a simple way to transform this to OpenGL coordinates?
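Just so it's clear what I'm after, I imagine something along these lines, where OpenGL is set up to use the same pixel space as GDI+ (only a sketch; width and height stand for my client-area size in pixels):

#include <windows.h>
#include <GL/gl.h>

// Sketch: make OpenGL use the same coordinate space as GDI+
// (origin at the top-left, y growing downwards).
// width/height are placeholders for the client-area size in pixels.
void set_pixel_projection(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left, right, bottom, top: swapping bottom and top flips y to match GDI+.
    glOrtho(0.0, (double)width, (double)height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

The bottom/top swap in glOrtho is what would account for GDI+ having y pointing down.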

Thanks again for your help.

xarg

In Topic: Problems initializing OpenGL when rendering to bitmap

17 May 2010 - 09:01 AM

Hello,

I finally managed to solve it. It was because, on resize, the underlying bitmap was being recreated, which meant I had to clean up the OpenGL resources and reinitialize them with the new HDC. Now it seems to work OK.
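In case anyone runs into the same thing, the fix looks roughly like this (a sketch only; recreate_bitmap_dc and setup_pixel_format stand in for my own code that rebuilds the DIB section and does the usual ChoosePixelFormat/SetPixelFormat dance):

#include <windows.h>
#include <GL/gl.h>

// Placeholders for my own setup code.
extern HDC  recreate_bitmap_dc(int width, int height);
extern void setup_pixel_format(HDC hdc);

static HDC   g_hdc   = NULL;
static HGLRC g_hglrc = NULL;

void on_resize(int width, int height)
{
    if (g_hglrc) {                      // release the context tied to the old bitmap
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(g_hglrc);
        g_hglrc = NULL;
    }
    g_hdc = recreate_bitmap_dc(width, height);  // new bitmap -> new HDC
    setup_pixel_format(g_hdc);
    g_hglrc = wglCreateContext(g_hdc);          // reinitialise OpenGL on the new HDC
    wglMakeCurrent(g_hdc, g_hglrc);
}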

Thanks,

xarg
