About chawk

  1. That's an option, of course, but I'm really rusty with threaded programming :P Do you have any ideas on how to coordinate/synchronize data reading/writing between my calculation thread and my main thread (which will display results)? Something using globals and mutexes and some such?
  2. er crap, I just realized I posted this in Game Programming and not in General Programming like I intended. if an admin wants to move it, please do :P
  3. My program currently starts in WinMain, creates a non-modal dialog box with CreateDialog and a dialog resource file, and then it pumps the message loop until program exit. Inside the dialog proc, I process the various WM_COMMAND messages for the controls on my main dialog window. Straightforward so far. One of my controls is a button that I want to launch a second dialog window, have that window modal, but I want control of the message loop of this new dialog because I'm doing intensive calculations through the life of the dialog window until I either meet my stopping condition or the user presses cancel. Therefore, I need to be able to process the messages and then loop through my calculation code. I'm failing right now in creating a window which operates normally, disables the previous main dialog, and returns proper control once it closes. code:

     // in WinMain
     dlg = CreateDialog(hInstance, MAKEINTRESOURCE(DLG_MAIN), NULL, dialogProc);
     while ((result = GetMessage(&msg, NULL, 0, 0)) != FALSE)
     {
         if (!IsDialogMessage(dlg, &msg))
         {
             TranslateMessage(&msg);
             DispatchMessage(&msg);
         }
     }
     // done

     // in the dialog proc
     switch (msg)
     {
     case WM_INITDIALOG:
         // init stuff
         return TRUE;
     case WM_COMMAND:
         switch (LOWORD(wParam))
         {
         case CMD_BUTTON:
             // here's where I want to disable input to the main dlg window,
             // launch my new dialog box, have it complete, then return
             // control to the main dlg
             EnableWindow(dlg, FALSE); // is this necessary?
             launchNewDialog();
             EnableWindow(dlg, TRUE);
             return TRUE;
         }
     }

     // now for launchNewDialog
     // parent hWnd is the dlg
     newDlg = CreateDialog(GetModuleHandle(NULL), MAKEINTRESOURCE(DLG_CALCULATE), dlg, calculationProc);
     while (!done)
     {
         // check if there are messages to process
         if (PeekMessage(&msg, newDlg, 0, 0, PM_NOREMOVE) != 0)
         {
             while (PeekMessage(&msg, newDlg, 0, 0, PM_REMOVE) != 0)
             {
                 if (msg.message == WM_QUIT)
                     done = TRUE;
                 else if (!IsDialogMessage(newDlg, &msg))
                 {
                     TranslateMessage(&msg);
                     DispatchMessage(&msg);
                 }
             }
         }
         // here's where I do my calculations
         calculationFunction();
     }

     // and finally, the newDlg's proc
     switch (msg)
     {
     case WM_COMMAND:
         switch (LOWORD(wParam))
         {
         case CMD_CANCEL_CALCULATION:
             DestroyWindow(hDlg);
             return TRUE;
         }
     case WM_DESTROY:
         PostQuitMessage(0);
         return TRUE;
     }

     Am I going about this correctly? Currently, once I close the new dialog box (which for now is just hitting cancel), it does close as expected, but the previous main dialog is frozen and unresponsive. I have to close it with ctrl-alt-del. I'm confused :( Any help is greatly appreciated!
  4. Then apply your view transformation (using a camera object, a combination of glTranslatef/glRotatef, or whatever else you choose), then retrieve and store the matrix:

     // clear the buffers and reset modelview matrix
     glClear(...);
     glMatrixMode(GL_MODELVIEW);
     glLoadIdentity();

     float viewMatrix[16];
     // set your "view" transformation first
     camera.applyView();
     glGetFloatv(GL_MODELVIEW_MATRIX, viewMatrix);

     // clear it again
     glLoadIdentity();

     // set your "world" matrix transformations
     glTranslatef(...);
     glRotatef(...);
     // whatever else...
     float worldMatrix[16];
     glGetFloatv(GL_MODELVIEW_MATRIX, worldMatrix);

     As a side note, if you're using a camera object (I do), you'd probably already have the matrix stored somewhere, so retrieving it would be a matter of grabbing data from the class, but that's extra. :P Is that how you wanted it?
  5. If I understand correctly, you are trying to render the cubes with an alpha value (opaqueness) dependent on how close the camera is to the cube (or pixel)?
  6. The modelview matrix in OpenGL combines the camera/view transformation, which you can create yourself or with gluLookAt, and the object/world transformation, which rotates and positions the objects in your scene; thus model + view. Creating your "world matrix" would likely be some combination of glTranslatef and glRotatef calls to rotate and position your objects in the scene. To allow adjustable viewing of your scene with a camera, you'd apply the "view" matrix from your camera at the start of the render loop. Something like:

     // clear the buffers and reset modelview matrix
     glClear(...);
     glMatrixMode(GL_MODELVIEW);
     glLoadIdentity();

     // set your "view" transformation first
     camera.applyView(); // or gluLookAt(...);

     // render objects
     glTranslatef(...);
     glRotatef(...);
     object.draw();

     Hope that helps!
  7. Quote: There, fixed the quote. Seriously, the OpenGL specification has strict rules on how to rasterize lines (as well as any other primitive), and I can guarantee that the lines ARE rendered correctly. Correct according to the specification, not correct according to what you expect. Like I said, the way I expect ^_^ Clearly I'm doing something wrong. Also, I guess it'd behoove me to read the OpenGL spec once in a while. I've become so accustomed to learning from other people's example code that I forget about the spec documentation :X The responses look great and I'll definitely try that out when I get home. Thanks a lot!
  8. In my latest rework of my GUI system, I've decided, for now, to add a Windows classic-like border around a number of the GUI controls to give them a little definition. The GUI Window control, for instance, is a textured pair of triangles (serving as the background) drawn in ortho mode, and I add some GL_LINES rendering to create the border. I'm having some issues specifying pixel-exact coordinates for where my lines draw to and from, though. I've tested several methods of calculating what the glVertex2f values should be to render the lines I want. So far, I've had inconsistent results, and I'm wondering if my understanding of how GL_LINES are rasterized is off. For some relevant information, my ortho mode is set up in the "screen space" direction, where X increases from left to right and Y increases from top to bottom. The actual call I make is as follows:

     glOrtho(0.0, width, height, 0.0, zNear, zFar);

     Now consider an example objective of drawing from pixel (0,0) to (3,0), i.e. the very top-left row of 4 pixels from the left edge going right. My calls to glVertex2f look something like:

     glBegin(GL_LINES);
     glVertex2f(x0, y0);
     glVertex2f(x1, y1);
     glEnd();

     ... so what, exactly, should I specify as coordinates? The goal is to map a given pair of screen pixel coordinates (pixels have area) to a pair of cartesian coordinates (infinitely small points). I whipped up 3 images to illustrate the 3 ways I tested: specifying vertices at the center of the pixel; specifying the left/right bounds of the line on the X, with the Y coordinate centered on the pixel; and specifying the left/right bounds of the line on the X, with the Y coordinate set to 0.0. None of these 3 ways have rendered correctly in all test cases when I specify different line coordinates. Help? :(
  9. Simple Shadows

    Couldn't you do it in one draw with something like:

     // before frame
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

     // draw shadow
     glDisable(GL_DEPTH_TEST);

     // setup normal blending
     glEnable(GL_BLEND);
     glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

     // render only where stencil is 0 (all of it, initially), but as the character
     // renders, pixels that render cause the stencil value to increment, thus
     // future attempts to render over that pixel fail
     glEnable(GL_STENCIL_TEST);
     glStencilFunc(GL_EQUAL, 0, 0xFFFFFFFF);
     glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

     renderShadow();

     glDisable(GL_STENCIL_TEST);
     glDisable(GL_BLEND);
     glEnable(GL_DEPTH_TEST);

     I tested it briefly and it seemed to work, though I stayed up the whole night and I'm really tired, so perhaps I made an error.
  10. Special Projection (math help)

    I read that document over a few times -- good read. I'm wondering, though, what you mean exactly by perspective in the y direction? That document takes the approach of defining your volume to project to the screen, so what does your volume look like in this case? If you can't visualize what the volume looks like, then can you describe specifically how you'd like a point to project? My guess at what you wanted was a projection of all x values with no special transformation, so just x' = x (excluding the viewport stuff). Do you want the y values to be projected like normal perspective which is based on the z? i.e. a point (x1, y, z1) maps to a different y value than point (x2, y, z2) because their z's differ? If I can visualize this correctly, then objects will be squished vertically as they go further towards z-far?
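The guess described in the paragraph above can be written out as formulas (my own notation, not from the thread; d is the distance to the projection plane):

```latex
x' = x, \qquad y' = \frac{d\,y}{z}
```

That is, x passes through unchanged while y gets the usual perspective divide by depth, so distant objects would indeed be squished vertically but keep their horizontal extent.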
  11. Recently, I've been working on yet another version of my basecode/game-engine code/etc. As far as singletons go, I've never really paid much attention to them, but I stumbled across an article after browsing these forums a bit and I was intrigued. To get right to the point, I'm torn between sticking with my current method of implementing "singletons", which is essentially a class with all static data and all static functions, and actual singletons. I never instantiate the object, and all calls to functions, accessors, etc., look like Engine::loadThis() and Window::setThat(). After looking over a few simple true singleton implementations, I can't decide whether the pros outweigh the cons. I see it as such:

     // static version example
     int main()
     {
         Engine::init();
         Engine::doStuff();
         Engine::shutdown();
     }

     // or singleton, with a global pointer
     Engine* e;
     int main()
     {
         e = Engine::getInstance();
         e->init();
         e->doStuff();
         e->shutdown();
     }

     It's a little less typing using the global pointer, since I'm not constantly writing Engine:: or Window:: or FileSys::, but is there an advantage under the hood over what I'm doing currently? My initial thought is that constantly accessing a pointer to call member functions (__thiscall) means extra machine code, but with compiler optimization this may not be an issue. Any thoughts on the good, the bad, and the ugly? What do you guys use? I'm stuck.
  12. It seems you compiled it in debug mode, as it wants MSVCP71D.dll. I don't have that, and you included the release versions :(
  13. Memory Management

    Don't let this post die! I'm eagerly awaiting the next installment. For a while now, I've thought about investing some time into a useful memory management scheme rather than relying on new/delete all the time. When's the next one comin'?
  14. problem with display lists

    edit, hmm...
  15. Texture space question

    I've been coding with OpenGL for a while, and I often take long breaks and forget lots of stuff, then relearn it; I'm crazy. Maybe I'm forgetting something really important about texture space. How does the linear memory space of the data buffer passed to glTexImage2D map to texture-coordinate (u, v) space? If I wrote my linear memory buffer to the screen, the top left coordinate in screen space is (0, 0), and in memory that's offset 0, right? The bottom left of the screen would be (0, h - 1), and in memory that's pixel offset (h - 1) * w, right? I create a memory buffer, char* buffer = new char[256 * 256 * 4] let's say, to hold RGBA values for a 256x256 texture. If I want to then fill the first 10 rows of pixels with grey (127, 127, 127), I'd do memset(buffer, 0x7F, 256 * 10 * 4), right? 256 pixels per row, 10 rows, 4 bytes per pixel. Now to make it a texture and draw it:

     unsigned int id;
     glGenTextures(1, &id);
     glBindTexture(GL_TEXTURE_2D, id);
     glTexImage2D(GL_TEXTURE_2D,     // normal target
                  0,                 // 0th mipmap level
                  GL_RGBA,           // internal format
                  256,               // width
                  256,               // height
                  0,                 // no border
                  GL_RGBA,           // buffer's format
                  GL_UNSIGNED_BYTE,  // buffer's data type
                  buffer);

     // then normal linear filtering
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

     // now down in drawing code
     glBegin(GL_QUADS);
     // bottom left
     glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
     // bottom right
     glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f, -1.0f, 0.0f);
     // top right
     glTexCoord2f(1.0f, 1.0f); glVertex3f(1.0f, 1.0f, 0.0f);
     // top left
     glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f);
     glEnd();

     So then I expect to see a square on the screen with the top 10 rows of pixels being grey, but they're at the bottom instead. I'm endlessly confused; why is that? They should be up top! My idea of linear memory/screen space is X-axis going positive to the right, Y-axis going positive downwards. My idea of texture space is X-axis (or u or s) going positive to the right, and Y-axis (or v or t) going positive upwards. Where did I go wrong? Pardon the long post.