Odd issue when rendering

Started by
6 comments, last by Sponji 11 years ago

Alright... the short story is that stuff like this always happens when I try to do graphics. One would think I'd learn, but apparently I'm a masochist. So...

Nothing really fancy: a small project that sets up a window to draw to.

What I am trying to achieve, for personal reasons, is to write my own rasterizer.

So, first things first, here are some links showing the issue I'm encountering:

https://www.dropbox.com/s/fmuk9rycvnnyqqg/HATE.png

https://www.dropbox.com/s/dyqibeo31qc448y/HATE%202.png

https://www.dropbox.com/s/979twrhf1g8dkcx/HATE%203.png

And here is the code:

Creating the view:


wglMakeCurrent(hdc, hglrc);

glClearColor(0.5f, 0.1f, 0.15f, 1.0f);
glClearDepth(1.0f);
glShadeModel(GL_FLAT);

int width = 0;
int height = 0;
_renderTarget->GetSize(&width, &height);

glViewport(0, 0, width, height);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-width/2, width/2, -height/2, height/2, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Rendering the fixed-function test background:


  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  glPushMatrix();
  glRotatef(_spin, 0.0f, 0.0, 1.0f);
  glColor3f(0.3f, 0.1f, 0.15f);
  glRectf(-150.0f, -150.0f, 150.0f, 150.0f);
  glPopMatrix();

  _rasterizer->Render();

  HDC hdc = GetDC(_renderTarget->GetHandle());
  SwapBuffers(hdc);
  ReleaseDC(_renderTarget->GetHandle(), hdc);

Rendering my rasterizer:


  int const RenderBufferSize = m_bufferWidth * m_bufferHeight;

  glPushMatrix();
  glBegin(GL_POINTS);   // matrix calls are not allowed between glBegin/glEnd
  for (int i = 0; i < RenderBufferSize; ++i)
  {
    int x = i % m_bufferWidth;
    int y = i / m_bufferWidth;
    int halfWidth = m_bufferWidth / 2;
    int halfHeight = m_bufferHeight / 2;

    glColor4ubv(&m_renderBuffer[i * 4]); // assuming 4 bytes (RGBA) per pixel
    glVertex2i(x - halfWidth, halfHeight - y);
  }
  glEnd();
  glPopMatrix();

Hopefully that is enough info, if not just ask.

Any help is greatly appreciated.

"My word is my bond and here I stand."

So what is the issue?

In:


int x = i % m_bufferWidth;
int y = i / m_bufferWidth;
int halfWidth = m_bufferWidth / 2;
int halfHeight = m_bufferHeight / 2;

Shouldn't it be:


int y = i / m_bufferHeight;

And why are you using prehistoric OpenGL? Or are you stuck with a good-for-nothing video card (like I am)?

So what is the issue?

The first image shows the issue: when I draw points on the left half of the screen, it seems to skip a few columns. It's an 800x600 window and I'm drawing 800x600 points.

The second and third images show me drawing my rasterizer into a localized region of the window (400x300 points).

--

In:


int x = i % m_bufferWidth;
int y = i / m_bufferWidth;
int halfWidth = m_bufferWidth / 2;
int halfHeight = m_bufferHeight / 2;


Shouldn't it be:


int y = i / m_bufferHeight;

No; it's a flat array that I'm indexing, so the row is calculated as (i / width).
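
Put another way (same variable names as in my snippet above; just a quick sketch):


// Row-major layout: pixel (x, y) lives at index  y * m_bufferWidth + x,
// so recovering the position from an index divides by the width (the row
// stride), not the height. E.g. in an 800-wide buffer, i = 801 maps to (1, 1).
int x = i % m_bufferWidth;
int y = i / m_bufferWidth;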

--

And why are you using prehistoric OpenGL? Or are you stuck with a good-for-nothing video card (like I am)?

Because I was referencing openGL books I had on hand.

--

It's just frustrating that I tell the thing to render 800x600 points and it apparently gets lazy halfway through.

"My word is my bond and here I stand."

Is there a reason you're drawing all those vertices? Wouldn't it be easier to create a texture and render it on a fullscreen quad?

Derp

Is there a reason you're drawing all those vertices? Wouldn't it be easier to create a texture and render it on a fullscreen quad?

From what I saw, rendering to a texture was roughly equivalent to rendering to the screen, i.e. you just target a different framebuffer that is bound to a texture.

Maybe I missed something?
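
For reference, what I kept finding looked roughly like this (a sketch from memory, not code I'm actually running; "fbo" and "tex" are placeholder names, and it assumes framebuffer objects are available):


GLuint fbo = 0, tex = 0;

// Create a texture to receive the rendering
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// Attach it to a framebuffer object; draw calls now land in the texture
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// ...render here...

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer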

"My word is my bond and here I stand."

Rendering my rasterizer:

  int const RenderBufferSize = m_bufferWidth * m_bufferHeight;

  glPushMatrix();
  glBegin(GL_POINTS);   // matrix calls are not allowed between glBegin/glEnd
  for (int i = 0; i < RenderBufferSize; ++i)
  {
    int x = i % m_bufferWidth;
    int y = i / m_bufferWidth;
    int halfWidth = m_bufferWidth / 2;
    int halfHeight = m_bufferHeight / 2;

    glColor4ubv(&m_renderBuffer[i * 4]); // assuming 4 bytes (RGBA) per pixel
    glVertex2i(x - halfWidth, halfHeight - y);
  }
  glEnd();
  glPopMatrix();


Erm... this is silly. Just upload your buffer as a texture and render it as one quad.

As for the problem: your vertex coordinates are wrong... of course. The last time I used something this ancient was at least half a decade ago and I don't remember the intricacies of the deprecated glOrtho stuff, so I can't give a fixed version of your code (I sure as hell don't want to relearn that old crap). However, the core of the issue is bound to be bad coordinates:
* The bottom-left corner of normalized device coordinates (i.e. after the modelview/projection transformation and perspective divide) maps to the bottom-left corner of the bottom-left pixel.
* Pixels have size (i.e. the bottom-left pixel covers the area from (0,0) to (1,1)), so your integer coordinates tell it to draw exactly on the corners shared by adjacent pixels.
=> Add floating-point error and the points will, as expected, land "randomly" in whichever pixel happens to be considered closest.

PS. If you are learning OpenGL, please consider throwing the ancient book away and following something newer (I'd recommend "OGL 3.3 core" as the starting point; ignore all the deprecated cruft).

Edit: also, your viewport and glOrtho probably mismatch (by one pixel).
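
If it helps, the usual band-aid (untested, and keeping your glOrtho/viewport setup as-is) is to nudge every point onto a pixel center:


// Sketch only: with glOrtho(-w/2, w/2, -h/2, h/2, ...) each pixel is a 1x1 cell,
// and integer coordinates land exactly on the corners shared by neighbouring
// pixels. Offsetting by half a pixel puts each point on its cell's center.
// (Alternatively, shift the projection itself by half a pixel.)
glVertex2f((x - halfWidth) + 0.5f, (halfHeight - y) - 0.5f);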

All right, got it fixed. It's been too long since I last touched graphics stuff, so I forgot about the whole 0.5-pixel location thing. Which is one of the reasons I'm doing this... experiment.

And yes, I realize that pushing to a texture and rendering a quad is the better solution, but when I looked up how to do this, all I found was the whole bind-a-framebuffer-to-a-texture-and-draw-into-that approach. Ergo, it seemed pointless to go that route.

Thanks.

"My word is my bond and here I stand."

And yes, I realize that pushing to a texture and rendering a quad is the better solution, but when I looked up how to do this, all I found was the whole bind-a-framebuffer-to-a-texture-and-draw-into-that approach. Ergo, it seemed pointless to go that route.

You probably want something like this:


// Initialization
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

// Updating
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

// Rendering
glEnable(GL_TEXTURE_2D); // fixed-function texturing needs this enabled
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0,1); glVertex2f(-1, -1);
glTexCoord2f(1,1); glVertex2f( 1, -1);
glTexCoord2f(1,0); glVertex2f( 1,  1);
glTexCoord2f(0,0); glVertex2f(-1,  1);
glEnd();

Hopefully I got all that correct, because I just typed it in there :p And that "pixels" is your "m_renderBuffer".

Derp

