
eastcowboy

Member Since 22 Aug 2006
Offline Last Active Oct 15 2014 03:29 AM

Topics I've Started

May I use Warcraft3 models/textures/etc. in my Android/iOS Game?

21 February 2014 - 08:45 PM

May I use Warcraft3 models/textures/etc. in my Android/iOS Game?


problem writing complex vertex shader

23 July 2013 - 01:39 PM

Greetings, everyone.

 

Recently I've been interested in Warcraft3's model system.

I downloaded the War3ModelEditor source code (from: http://home.magosx.com/index.php?topic=6.0), read it, and wrote a program which can render Warcraft3 models using OpenGL ES.

When I run this code on an Android phone it looks good, but when there are more than 5 models on the screen the FPS becomes very low.

 

Currently I do all the bone animation (matrix calculation and vertex position calculation) on the CPU side.

I think it might be faster if all this work could be done on the GPU side.

But I just don't know how to do it.

Warcraft3's vertex position calculation is complex for me.

 

Let me explain a little more.

In a Warcraft3 model, each vertex is linked to one or more bones.

Here is how War3ModelEditor calculates each vertex's position:

step1. for each bone[i], calculate matrix_list[i]
step2. for each vertex
           position = (matrix_list[vertex_bone[0]] * v
                    +  matrix_list[vertex_bone[1]] * v
                    +  ...
                    +  matrix_list[vertex_bone[n]] * v) / n

Note: n is the length of 'vertex_bone'; each vertex may have a different 'vertex_bone'.

Actually, several vertices can share the same 'vertex_bone' array,

while several other vertices share another 'vertex_bone' array.

For example, a model with 500 vertices may have only 35 different 'vertex_bone' arrays.

But I don't know how I can make use of this to optimize the performance.
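
The only idea I've come up with so far (untested; the Matrix4 / Vec3 types, the MAX_GROUPS limit and the per-vertex group index below are just my own placeholders, not something from the model format) is to exploit the sharing on the CPU side: because (M1*v + M2*v + ... + Mn*v) / n is the same as ((M1 + M2 + ... + Mn) / n) * v, I could average the bone matrices once per shared 'vertex_bone' array ("group") and then transform each vertex with a single matrix:

#include <string.h>

typedef struct { float m[16]; } Matrix4; /* placeholder column-major 4x4 matrix */
typedef struct { float x, y, z; } Vec3;  /* placeholder vertex position */

#define MAX_GROUPS 64 /* the example model above has only ~35 groups */

/* Skin all vertices with one precomputed matrix per shared 'vertex_bone'
   array ("group"), instead of n matrix multiplies per vertex. */
void skin_vertices(const Matrix4 *matrix_list,      /* per-bone matrices from step 1 */
                   const int *const *group_bones,   /* group -> list of bone indices */
                   const int *group_bone_count,     /* group -> n */
                   int group_count,
                   const Vec3 *rest_pos,            /* per-vertex rest position */
                   const int *vertex_group,         /* per-vertex group index */
                   int vertex_count,
                   Vec3 *out_pos)
{
    Matrix4 group_matrix[MAX_GROUPS];
    int g, b, k, v;

    /* Average the bone matrices once per group: (M1 + ... + Mn) / n. */
    for (g = 0; g < group_count; ++g) {
        Matrix4 *avg = &group_matrix[g];
        memset(avg, 0, sizeof(*avg));
        for (b = 0; b < group_bone_count[g]; ++b)
            for (k = 0; k < 16; ++k)
                avg->m[k] += matrix_list[group_bones[g][b]].m[k];
        for (k = 0; k < 16; ++k)
            avg->m[k] /= (float)group_bone_count[g];
    }

    /* Each vertex is now a single matrix * point transform (w assumed to be 1). */
    for (v = 0; v < vertex_count; ++v) {
        const float *m = group_matrix[vertex_group[v]].m;
        Vec3 p = rest_pos[v];
        out_pos[v].x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
        out_pos[v].y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
        out_pos[v].z = m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14];
    }
}

But this is still all on the CPU, so I would still like to know how to move it to the GPU.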



 

 

Step 1 may be easy: since a typical Warcraft3 model has fewer than 30 bones, we can do this step on the CPU side without much of a performance hit.

But step 2 is quite complex.

 

If I write a vertex shader (GLSL) it will be something like this:

uniform mat4 u_matrix_list[50]; /* there might be more ?? */
attribute float a_n;
attribute float a_vertex_bone[4]; /* there might be more ?? */
attribute vec4 a_position;
void main() {
  float i;
  vec4 p = vec4(0.0, 0.0, 0.0, 1.0);
  for (i = 0; i < a_n; ++i) {
    p += u_matrix_list[int(a_vertex_bone[int(i)])] * a_position;
  }
  gl_Position = p / float(a_n);
}

There are some problems.

1. When I compile the vertex shader above (on my laptop, rather than on an Android phone), it reports 'success' but with a warning message: 'OpenGL does not allow attributes of type float[4]'.

And sometimes (when I change the order of the 3 attributes) it causes my program to crash, with the message 'The NVIDIA OpenGL driver lost connection with the display driver due to exceeding the Windows Time-Out limit and is unable to continue.'

2. The book <OpenGL ES 2.0 Programming Guide>, page 83, says that 'OpenGL ES only mandates that array indexing be supported by constant integral expressions (there is an exception to this, which is the indexing of uniform variables in vertex shaders that is discussed in Chapter 8)', so the statement 'a_vertex_bone[int(i)]' might not work on some OpenGL ES hardware.
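
One workaround I'm considering (untested, and the attribute names a_bone_indices / a_bone_weights are just my own invention) is to always pass exactly 4 bone indices per vertex, padding with a repeated index, plus 4 weights that are 1/n for the real bones and 0 for the padding. Then the loop count is fixed, there is no attribute array, and the only variable indexing is into the uniform array, which seems to be exactly the exception the book describes:

uniform mat4 u_matrix_list[50]; /* there might be more ?? */
attribute vec4 a_position;
attribute vec4 a_bone_indices; /* 4 bone indices, padded by repeating one of them */
attribute vec4 a_bone_weights; /* 1/n for the n real bones, 0 for the padding */
void main() {
  /* the weighted sum equals the average over the real bones, because the padding weights are 0 */
  vec4 p = a_bone_weights.x * (u_matrix_list[int(a_bone_indices.x)] * a_position)
         + a_bone_weights.y * (u_matrix_list[int(a_bone_indices.y)] * a_position)
         + a_bone_weights.z * (u_matrix_list[int(a_bone_indices.z)] * a_position)
         + a_bone_weights.w * (u_matrix_list[int(a_bone_indices.w)] * a_position);
  gl_Position = p;
}

Of course this only works for vertices with at most 4 bones; for anything above that I suppose I would have to drop the extra influences or fall back to the CPU path.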

 

 

Actually I've never written such a complex(?) shader before.

Could anyone give me some advice?

Thank you.


problem with wgl pixel format

09 June 2008 - 02:30 PM

Hello. I have a problem with the wgl pixel format. The code is quite simple: I choose a pixel format which has 16 color bits, 16 depth bits, and 8 alpha bits.
        int pixelformat;
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW |
                PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 16;
        pfd.cDepthBits = 16;
        pfd.cAlphaBits = 8;
        pixelformat = ChoosePixelFormat(hDC, &pfd);
        SetPixelFormat(hDC, pixelformat, &pfd);
        hRC = wglCreateContext(hDC);


Before I choose the pixel format, I call the ChangeDisplaySettings function to change the screen display settings to 1024*768*16. The strange problem is: when my desktop display setting is 1280*800*16, the program works well (it changes the display setting to 1024*768*16, chooses a pixel format, and uses glRectf to draw a rectangle), but when my desktop display setting is 1280*800*32, everything goes wrong. I use glColor3f(1, 1, 1) but I see the color (0, 1, 1) on my screen. Also, the FPS is quite low. What's more, if I delete the "pfd.cAlphaBits = 8;" line from my code, everything works again. I've tested my program on two different PCs, one with an Intel 945G and the other with an nVidia GeForce Go 7300, but both have the same problem. I don't know why. Complete code here (language: "C", IDE: "Visual Studio 2005"):
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

#pragma comment (lib, "opengl32.lib")

static PCTSTR gs_CLASS_NAME = TEXT("Default Window Class");
static int width = 1024;
static int height = 768;
static int bpp = 16;
static int fullscreen = 1;

static HWND  hWnd;
static HDC   hDC;
static HGLRC hRC;

#define MAX_CHAR       128
void drawString(const char* str) {
    static int isFirstCall = 1;
    static GLuint lists;
    if( isFirstCall ) {
        isFirstCall = 0;
        lists = glGenLists(MAX_CHAR);
        wglUseFontBitmaps(wglGetCurrentDC(), 0, MAX_CHAR, lists);
    }
    for(; *str!='\0'; ++str)
        glCallList(lists + *str);
}

void showfps() {
    char str[20];
    double fps;

    // cal fps
    {
        int current;
        static int last;
        static double last_fps;
        static const int n = 50;
        static int count = 0;

        if( ++count < n )
            fps = last_fps;
        else {
            count = 0;
            current = GetTickCount();
            fps = 1000.0 * n / (current - last);
            last = current;
            last_fps = fps;
        }
    }

    // draw string
    sprintf(str, "FPS: %g", fps);
    glColor3f(1, 1, 1);
    glRasterPos2f(-1.0f, 0.9f);
    drawString(str);
}

static LRESULT CALLBACK wndProc(
        HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if( msg == WM_KEYDOWN )
        PostQuitMessage(0);
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

int main() {
    DWORD style, exstyle;
    int position;

    // register class
    {
    WNDCLASSEX wc;
    wc.cbSize = sizeof(wc);
    wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC | CS_NOCLOSE;
    wc.lpfnWndProc = wndProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.hInstance = GetModuleHandle(0);
    wc.hIcon = LoadIcon(0, IDI_APPLICATION);
    wc.hCursor = LoadCursor(0, IDC_ARROW);
    wc.hbrBackground = 0;
    wc.lpszMenuName = 0;
    wc.lpszClassName = gs_CLASS_NAME;
    wc.hIconSm = 0;
    if( !RegisterClassEx(&wc) )
        return 1;
    }

    // set style & change resolution
    {
        if( fullscreen ) {
            DEVMODE mode = {0};
            style = WS_POPUP;
            exstyle = WS_EX_APPWINDOW | WS_EX_TOPMOST;
            position = 0;

            mode.dmSize = sizeof(mode);
            mode.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
            mode.dmPelsWidth = width;
            mode.dmPelsHeight = height;
            mode.dmBitsPerPel = bpp;
            if( ChangeDisplaySettings(&mode, CDS_FULLSCREEN) !=
                    DISP_CHANGE_SUCCESSFUL ) {
                fullscreen = 0;
                goto windowed;
            }
        } else {
            RECT rect;
windowed:
            style = WS_OVERLAPPEDWINDOW & (~WS_THICKFRAME);
            exstyle = WS_EX_APPWINDOW;
            position = CW_USEDEFAULT;

            rect.left = 0;
            rect.top = 0;
            rect.right = width;
            rect.bottom = height;
            AdjustWindowRectEx(&rect, style, FALSE, exstyle);
            width = rect.right - rect.left;
            height = rect.bottom - rect.top;
        }
    }

    // create window
    hWnd = CreateWindowEx(exstyle, gs_CLASS_NAME, TEXT(""), style,
            position, position, width, height,
            0, 0, GetModuleHandle(0), 0);
    if( !hWnd )
        return 1;
    ShowWindow(hWnd, SW_SHOW);
    UpdateWindow(hWnd);
    hDC = GetDC(hWnd);
    if( !hDC )
        return 1;

    // initialize opengl
    {
        int pixelformat;
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW |
                PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 16;
        pfd.cDepthBits = 16;
        pfd.cAlphaBits = 8;
        pixelformat = ChoosePixelFormat(hDC, &pfd);
        SetPixelFormat(hDC, pixelformat, &pfd);
        hRC = wglCreateContext(hDC);

        printf("pixel format: %d\n", pixelformat);

        wglMakeCurrent(hDC, hRC);
    }

    // message loop
    {
        MSG msg;
        for(;;) {
            if( PeekMessage(&msg, 0, 0, 0, PM_REMOVE) ) {
                if( msg.message == WM_QUIT )
                    return 0;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            } else {
                glClear(GL_COLOR_BUFFER_BIT);
                glColor3f(1, 1, 1);
                glRectf(-0.5f, -0.5f, 0.5f, 0.5f);
                showfps();
                SwapBuffers(hDC);
            }
        }
    }

    // end
    return 0;
}
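
In case it helps with diagnosing this, here is a small extra check I'm planning to add right after the ChoosePixelFormat / SetPixelFormat calls in the "initialize opengl" block above, to print what the driver actually selected (not yet tested):

        /* Diagnostic (not in the program above): ChoosePixelFormat only returns
           a "best match", so ask Windows what the chosen format really contains. */
        {
            PIXELFORMATDESCRIPTOR chosen = {0};
            DescribePixelFormat(hDC, pixelformat, sizeof(chosen), &chosen);
            printf("chosen format: %d color bits, %d depth bits, %d alpha bits\n",
                   chosen.cColorBits, chosen.cDepthBits, chosen.cAlphaBits);
            if (chosen.dwFlags & PFD_GENERIC_FORMAT)
                printf("warning: a generic (software) pixel format was selected\n");
        }

My guess is that if the 16-bit color + 8 alpha bits request can only be satisfied by the generic software format, that could explain both the low FPS and the strange colors, but I'm not sure.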



confused about matrix and transform

21 May 2008 - 03:49 PM

Hello there. I'm reading some articles about matrices these days. Someone says that the matrix transform looks like this (a row vector multiplied by the matrix):

                   [ m, m, m, m,
    [x, y, z, w] X   m, m, m, m,   =  [x', y', z', w']
                     m, m, m, m,
                     m, m, m, m ]

but others say that it looks like this (the matrix multiplied by a column vector):

    [ m, m, m, m,     [x,      [x',
      m, m, m, m,   X  y,   =   y',
      m, m, m, m,      z,       z',
      m, m, m, m ]     w]       w']

I've studied OpenGL for some time, and I believe the first style is correct, but I see many people use the second style. Is there any difference? Which one should I use?
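
To check that I'm not mixing things up, this is how I understand the relationship between the two styles (they should describe the same transform, just transposed):

$[x \; y \; z \; w] \, M = [x' \; y' \; z' \; w']$   (row-vector style)

$M^{\mathsf T} \, [x \; y \; z \; w]^{\mathsf T} = [x' \; y' \; z' \; w']^{\mathsf T}$   (column-vector style)

since $(\mathbf{v} M)^{\mathsf T} = M^{\mathsf T} \mathbf{v}^{\mathsf T}$.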

Intel 945G not support GLSL?

13 April 2008 - 04:04 PM

Hi, I'm beginning to study GLSL. I use GLEE to load the GL extensions, and when I call: iVertexShader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB); the glCreateShaderObjectARB call always returns 0. Does that mean my graphics card does not support GLSL? My graphics card is an "Intel 945G" (glGetString(GL_RENDERER) returns the string "Intel 945G"), and I think I have downloaded the latest driver.
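
For reference, here is a small check I plan to run (with the rendering context current) before creating the shader object, just to see whether the driver advertises the GLSL-related extensions at all; I'm not sure it will tell the whole story:

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Print whether the GLSL-related ARB extensions appear in the extension
   string.  Requires a current OpenGL rendering context. */
static void check_glsl_extensions(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (!ext) {
        printf("glGetString(GL_EXTENSIONS) returned NULL (no current context?)\n");
        return;
    }
    printf("GL_ARB_shading_language_100: %s\n",
           strstr(ext, "GL_ARB_shading_language_100") ? "yes" : "no");
    printf("GL_ARB_shader_objects:       %s\n",
           strstr(ext, "GL_ARB_shader_objects")       ? "yes" : "no");
    printf("GL_ARB_vertex_shader:        %s\n",
           strstr(ext, "GL_ARB_vertex_shader")        ? "yes" : "no");
    printf("GL_ARB_fragment_shader:      %s\n",
           strstr(ext, "GL_ARB_fragment_shader")      ? "yes" : "no");
}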
