

Member Since 01 Mar 2007
Offline Last Active Jul 25 2013 01:29 PM

Topics I've Started

Rendering a texture

23 July 2013 - 08:20 AM



I am very new to OpenGL and have a basic problem: I want to render 2 quads each with its own texture.

First I have an init function which loads the texture, generates the texture IDs and uploads the texture:

void initTex(TexObj& texObj, const char* imageFilename) // Images are .xpm files
{
    QImage image = QImage(imageFilename);
    texObj.w = image.width();
    texObj.h = image.height();

    QImage glImage = QGLWidget::convertToGLFormat(image);

    // Keep a copy of the pixel data around
    char* data = static_cast<char*>( calloc( texObj.w * texObj.h, 4 ) );
    memcpy(data, glImage.bits(), texObj.w * texObj.h * 4);
    texObj.data = data;

    glGenTextures(1, &texObj.texID);
    glBindTexture(GL_TEXTURE_2D, texObj.texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texObj.w, texObj.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texObj.data);
}

initTex is called twice, once for each of the 2 textures.


To render the 2 quads I use this function:

static void draw(TexObj& tex, float scale)
{
    glEnable( GL_TEXTURE_2D );
    glFrontFace( GL_CCW );

    glActiveTexture( GL_TEXTURE0 );
    glBindTexture(GL_TEXTURE_2D, tex.texID);

    // glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, tex.w, tex.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.data );

    glTranslated( 0.0, 0.0, 0.0 );
    glScalef( scale, scale, 1.0f );
    glPolygonMode(GL_FRONT, GL_FILL);

    glBegin( GL_QUADS );
        glTexCoord2f( 0.0, 0.0 );
        glVertex2d(0.0, 1.0);
        glTexCoord2f( 0.0, -1.0 );
        glVertex2d(0.0, 0.0);
        glTexCoord2f( 1.0, -1.0 );
        glVertex2d(1.0, 0.0);
        glTexCoord2f( 1.0, 0.0 );
        glVertex2d(1.0, 1.0);
    glEnd();

    glDisable( GL_TEXTURE_2D );
}

The problem is: when I run this code, both quads come out completely white!

The strange thing: when I uncomment the commented-out glTexImage2D line, the quads are textured! It seems as if the texture I uploaded to VRAM in initTex() has disappeared and I have to upload it again every frame.


Does anyone have an idea what could cause this?




Loading and caching resources

23 November 2012 - 05:58 AM


In my engine I have a ResourceManager class that loads resources (textures, shaders, materials, etc.) from the hard drive and caches them. For example:
ShaderPtr p1 = manager->loadShader("Color.psh");
ShaderPtr p2 = manager->loadShader("Color.psh");
When Color.psh is requested for the first time, the shader is loaded from the hard drive, a ShaderPtr is stored in a map, and that pointer is returned. When Color.psh is requested a second time, the shader is simply returned from the cache map instead of being loaded again.
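The caching scheme described above could be sketched roughly like this (all class and member names here are made up for illustration, not my actual engine code):

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a real shader resource.
struct Shader { std::string file; };
using ShaderPtr = std::shared_ptr<Shader>;

class ResourceManager {
public:
    ShaderPtr loadShader(const std::string& filename)
    {
        auto it = m_shaders.find(filename);
        if (it != m_shaders.end())
            return it->second;          // cache hit: hand out the existing shader

        // Cache miss: pretend to load from disk, then remember the result.
        ShaderPtr shader = std::make_shared<Shader>(Shader{filename});
        m_shaders[filename] = shader;
        return shader;
    }

private:
    std::map<std::string, ShaderPtr> m_shaders;  // filename -> cached resource
};
```

With this, two loadShader("Color.psh") calls return pointers to the same object; only the first one touches the disk.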

This works fine, but now I have a problem with textures. I have to pass the size of the texture to the load function, because internally I use D3DXCreateTextureFromFileEx() to load the texture, and if I do not explicitly specify the real size of the texture, DirectX will round it up to the next power-of-2 size - something I do not want.
So my loading looks like this:
TexturePtr t1 = manager->loadTexture("Grass.png", 500, 200);  // A
TexturePtr t2 = manager->loadTexture("Grass.png", 300, 300);  // B
In line A the manager really loads Grass.png from disk and creates a texture of size 500x200. In line B the manager detects that Grass.png is already loaded and just returns a reference to the already created 500x200 texture. Thus t2 now holds a texture object of size 500x200, even though I wanted it loaded at 300x300 (in other words, I wanted a second instance with a different size).

Do you guys have an idea how I could solve this problem? How can I keep the caching system while also supporting the same resource loaded with different parameters?

UI components: Scaling

21 November 2012 - 01:08 PM


I have made a little UI system to render UI elements (buttons, combo boxes, etc.) in my DirectX 9 game.

Now I struggle a little bit with scaling of my components. To make my problem clearer I made a sketch:
Let's say I start my game windowed at a resolution of 2000x1000. The red rectangle is my UI component. Now if I resize the window down to 1000x500, I guess it would be desirable to scale the UI component down proportionally.
So my algorithm could be something like this: after a window resize, scale all UI components proportionally, first horizontally and then vertically.
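The per-axis rescaling I have in mind could be sketched like this (Rect and rescale are made-up names, just to illustrate the idea):

```cpp
// A UI component's rectangle in window pixels.
struct Rect { float x, y, w, h; };

// Rescale a rect when the window changes from (oldW, oldH) to (newW, newH):
// position and size are multiplied by the per-axis factors independently.
Rect rescale(const Rect& r, float oldW, float oldH, float newW, float newH)
{
    const float sx = newW / oldW;   // horizontal factor
    const float sy = newH / oldH;   // vertical factor
    return Rect{ r.x * sx, r.y * sy, r.w * sx, r.h * sy };
}
```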

But then I have a problem when the user resizes the window in only one dimension (bottom image). If I applied my algorithm, the UI component would be distorted - clearly something I do not want.

So how could I implement a solid scaling mechanism that scales my UI elements when the resolution changes?


Distributing C++ App with Python

12 August 2012 - 10:10 AM


I have written a C++ application that compiles on Windows and Linux. I have also written some .py Python scripts that will be shipped with my application.

My questions:
1) If I want to call a function from one of my Python scripts from C++, all I have to do is include <Python.h> and link against libPython.so (on Linux) or python27.lib (on Windows). Is there anything else I have to do?
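For reference, a minimal embedding sketch of what I mean by "calling a Python function from C++" - here "hello" and "greet" are made-up names for one of my scripts and a function it defines:

```cpp
#include <Python.h>

int main()
{
    Py_Initialize();                                    // start the embedded interpreter

    PyObject* module = PyImport_ImportModule("hello");  // imports hello.py from sys.path
    if (module) {
        PyObject* func = PyObject_GetAttrString(module, "greet");
        if (func && PyCallable_Check(func)) {
            PyObject* result = PyObject_CallObject(func, nullptr);  // greet()
            Py_XDECREF(result);
        }
        Py_XDECREF(func);
        Py_DECREF(module);
    }

    Py_Finalize();
    return 0;
}
```

This compiles with just the Python headers plus the python27 import library / shared object mentioned above.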

2) There is one thing about Python on Windows I don't understand. I have installed Python 2.7 on my Windows machine, and in the installation directory there are "include", "libs" and "DLLs" directories. In the "libs" directory is a python27.lib, and I guess I have to link against that one. But why is there no python27.dll in the DLLs directory?
I guess this .lib is an import lib, so there HAS to be a corresponding .dll, but I can't find it. Does anyone know where the Python DLL is?

3) If I want to ship my C++ application (which calls Python code) along with the .py files: all I need to add to my distribution are the Python .dll/.so and the Python interpreter? Is there anything else I need?

Thanks for any help!

Engine Design: Shadow Mapping

19 July 2012 - 10:46 AM


In my (DirectX 9 based) render engine I render objects, and each object has a material. A material is basically just a pixel and vertex shader, a set of textures, and render states.

So let's say I render an object with a material that uses 3 textures; I call setTexture 3 times:
device->setTexture(0, tex0);
device->setTexture(1, tex1);
device->setTexture(2, tex2);

It works great, but now I want to add Shadow Mapping to my Engine. The number of shadow casting lights should be flexible.
Since I need one shadow map texture for each shadow-casting light, and since ALL material shaders need access to the shadow maps, I guess the best (only) way to do this is to bind the shadow map textures starting at a fixed sampler register.

For example: let's say I have 4 lights, and Light0 and Light2 cast shadows. The shadow map (aka depth map) textures start at sampler register s10. Then the depth map for Light0 goes into sampler s10 and the one for Light2 into s11.
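The register assignment I describe could be sketched like this (Light and assignShadowSamplers are made-up illustration names; s10 as the base register is just my example choice):

```cpp
#include <vector>

const int SHADOW_SAMPLER_BASE = 10;  // first shadow map goes to sampler s10

struct Light { bool castsShadow; };

// For each light, return the sampler register holding its shadow map,
// or -1 if the light casts no shadow. Shadow-casting lights get
// consecutive registers starting at SHADOW_SAMPLER_BASE.
std::vector<int> assignShadowSamplers(const std::vector<Light>& lights)
{
    std::vector<int> samplers;
    int next = SHADOW_SAMPLER_BASE;
    for (const Light& l : lights)
        samplers.push_back(l.castsShadow ? next++ : -1);
    return samplers;
}
```

With 4 lights where Light0 and Light2 cast shadows, this yields s10 for Light0 and s11 for Light2, as in the example above.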
Is this a good idea or rather crappy?

Thanks for any feedback!