


About eastcowboy

  1.   Have you considered creating your own assets?   Not yet. Currently our team has only two programmers and no artist. If we don't manage to find suitable assets, we may consider spending some money to have someone help us.
  2. Thank you. I've read them now. I will consider looking for some other free assets.
  3. May I use Warcraft3 models/textures/etc. in my Android/iOS Game?
  4. Yes, the 'AncientOfLore.mdx' model has many bones. When I discovered this for the first time, I was surprised too. Once again, Warcraft3's models do not obey the rules laid out in <Warcraft III Art Tools Documentation.pdf>. According to the documentation, a building should have at most 15 bones, and even a really big unit should have at most 30 bones.   By the way, the OpenGL 2.0 spec was released in 2004, and Warcraft3 was released before that, so I think Warcraft3 does not use a shader to do its bone animation.   I've noticed that not all of the bones are used by the mesh. Some bones are used for attaching another model, or by a particle emitter, etc. For example, when an AncientOfLore tree is badly damaged, some places on the tree's body catch fire. Each place uses a particle emitter to draw the fire, and each particle emitter needs a bone. Simply speaking, 6 places of fire use 6 bones. We can ignore these bones when loading the bone matrices into the shader.   There is a concept named "geoset" in Warcraft3's model format. A geoset contains data such as vertex positions, texture coordinates, normals, and bone matrix indices. One model may have one or more geosets. Before today I thought that each vertex in each geoset could be linked to any bone of the model. When I saw the words "split the mesh", I guessed we might be able to use the geosets directly, rather than split the mesh with an algorithm. So I did a simple test. The 'AncientOfLore.mdx' model has 12 geosets, and in the animation sequence "stand work alternate" 6 of them are visible (the documentation says one model should have at most 5 visible geosets!). The number of bones used by each visible geoset is: 27, 62, 3, 3, 8, 2. All of these numbers are much smaller than 202, but for OpenGL ES, 62 bones is still too many, and that geoset will need to be split into smaller parts.
So if I want to display 'AncientOfLore.mdx' on my Android phone, I have to design an algorithm that splits a geoset into two or more smaller geosets. The next step is to design and implement this algorithm. I think that will not be easy for me, but I'll try it.
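To make the idea concrete, here is a rough sketch of the greedy split I have in mind. The Triangle struct and every name here are placeholders of my own, not the real MDX structures, and a triangle that by itself uses more than max_bones bones is not handled:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, simplified input: each triangle lists the bone indices that
 * influence its three vertices (at most 3 vertices * 4 bones each). */
#define MAX_TRI_BONES 12

typedef struct {
    int bones[MAX_TRI_BONES];
    int bone_count;
} Triangle;

/* Greedy split: walk the triangles in order and start a new sub-geoset
 * whenever adding the next triangle's bones would push the running bone
 * set past max_bones. Returns the number of sub-geosets; part_of[i]
 * receives the sub-geoset index of triangle i. */
int split_geoset(const Triangle *tris, int tri_count, int max_bones, int *part_of)
{
    unsigned char used[256];   /* bone IDs fit in a BYTE in the MDX format */
    int used_count = 0, parts = 1, i, j;
    memset(used, 0, sizeof(used));

    for (i = 0; i < tri_count; ++i) {
        /* count how many new bones this triangle would add */
        int added = 0;
        for (j = 0; j < tris[i].bone_count; ++j)
            if (!used[tris[i].bones[j]])
                ++added;

        if (used_count + added > max_bones) {
            /* close the current sub-geoset and start a fresh bone set */
            memset(used, 0, sizeof(used));
            used_count = 0;
            ++parts;
        }
        for (j = 0; j < tris[i].bone_count; ++j) {
            if (!used[tris[i].bones[j]]) {
                used[tris[i].bones[j]] = 1;
                ++used_count;
            }
        }
        part_of[i] = parts - 1;
    }
    return parts;
}
```

A smarter split could reorder triangles so that ones sharing bones end up in the same sub-geoset, but the greedy in-order version is the simplest starting point.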
  5. Here are some snapshots of my test program. I'd like to share my happy feeling with you. Thank you again. [attachment=16979:testGL.01.png] [attachment=16980:testGL.02.png] [attachment=16981:testGL.03.png] [attachment=16982:testGL.04.png]
  6. Koehler, thank you very much for your reply. It helps me a lot, especially the 'dot product' idea, which is wonderful.   But let me point out one thing: the code "vec4 p = vec4(0,0,0,1);" you wrote should actually be "vec4 p = vec4(0,0,0,0);", or the transformation will not be correct.   Based on your idea, I've changed my source code. I'm not very familiar with OpenGL version 2.0 and above, but fortunately I managed to make it work :). There are still some issues that need to be thought about.   Let me put my shader source code down here. (Yes, you can see something like gl_TextureMatrix and gl_ModelViewProjectionMatrix. That's because the first version of my program was written on an old PC which only supports OpenGL 1.4. I'll modify these when necessary. Also note that GLSL has no C-style casts, so the matrix indices are converted with int(...).)

/* vertex shader */
uniform mat4 u_matrix_list[202];
attribute vec3 a_position;
attribute vec2 a_texcoord;
attribute vec4 a_mat_indices;
attribute vec4 a_mat_weights;
varying vec2 v_texcoord;
void main()
{
    v_texcoord = (gl_TextureMatrix[0] * vec4(a_texcoord, 0.0, 1.0)).xy;
    vec4 p0 = vec4(a_position, 1.0);
    vec4 p = vec4(0.0, 0.0, 0.0, 0.0);
    p += (u_matrix_list[int(a_mat_indices[0])] * p0) * a_mat_weights[0];
    p += (u_matrix_list[int(a_mat_indices[1])] * p0) * a_mat_weights[1];
    p += (u_matrix_list[int(a_mat_indices[2])] * p0) * a_mat_weights[2];
    p += (u_matrix_list[int(a_mat_indices[3])] * p0) * a_mat_weights[3];
    /* the weights are 0/1 flags, so dot(w, w) is the number of used bones */
    p /= dot(a_mat_weights, a_mat_weights);
    gl_Position = gl_ModelViewProjectionMatrix * p;
}

/* fragment shader */
uniform sampler2D tex;
uniform vec4 u_color;
varying vec2 v_texcoord;
void main()
{
    gl_FragColor = u_color * texture2D(tex, v_texcoord);
}

Issues:
1. I wrote "uniform mat4 u_matrix_list[202];", which is a very large array for a GPU. I found that many of Warcraft3's unit models have fewer than 100 bones; for example, a water elemental has 69 bones and a footman has 49. But the building models have many more. When I used the model 'AncientOfLore.mdx' for a test,
I found that it has 202 bones, so I declared such a large array. According to the MDX format, there can be up to 256 nodes (since a node's ID is a BYTE), but when I wrote "uniform mat4 u_matrix_list[256];" glLinkProgram failed with the error message "error C6007: Constant register limit exceeded; more than 1024 constant registers needed to compile program".   I hear that if we store a mat4 as 3 vec4s, it may save some space, but that may not be enough. OpenGL ES 2.0 only guarantees 128 vec4 uniform variables (glGetIntegerv with GL_MAX_VERTEX_UNIFORM_VECTORS), so we could only use 128 / 3 = 42 bones or fewer?   Or we could try using a texture to store more data. The book <OpenGL ES 2.0 Programming Guide> says that "Samplers in a vertex shader are optional". The POWERVR SGX seems to support them, but we need more information to decide whether or not to use that approach.
2. Yes, the <Warcraft III Art Tools Documentation.pdf> says that "Up to four bones can influence one vertex.", so we can use a vec4 attribute to simulate a float[4] array.   But I found there are some exceptions: for example, a water elemental has some vertices that are influenced by up to 6 bones. This is not very critical, because we could add 2 more attributes to fix it.   In my test I just use the first 4 bones and ignore the last 2; it looks fine without any obvious problem, so let's just ignore it for now :)
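As a side note on issue 1, the "mat4 as 3 vec4s" trick works because a skinning matrix is affine, so its last row (in row-major order) is always (0, 0, 0, 1) and does not need to be uploaded. A minimal CPU-side sketch, with names of my own invention:

```c
#include <assert.h>

/* Pack a row-major affine 4x4 matrix into 12 floats (3 vec4 rows), cutting
 * the uniform budget from 4 to 3 registers per bone. With the 128 vec4s
 * guaranteed by GLES 2.0, that is 128 / 3 = 42 bones instead of 32. */
void pack_affine(const float m[16], float out[12])
{
    int i;
    for (i = 0; i < 12; ++i)    /* copy rows 0..2, drop row 3 */
        out[i] = m[i];
}

/* Transform a point with the packed 4x3 matrix: the implied fourth row
 * (0, 0, 0, 1) simply passes w through unchanged. */
void transform_packed(const float p3[12], const float v[4], float r[4])
{
    int i, k;
    for (i = 0; i < 3; ++i) {
        r[i] = 0.0f;
        for (k = 0; k < 4; ++k)
            r[i] += p3[i * 4 + k] * v[k];
    }
    r[3] = v[3];
}
```

In the shader the same idea would use three dot products per row against three vec4 uniforms per bone.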
  7. OpenGL

    Ah, if you still want to use the DevIL library, just use the ilutGLLoadImage function to replace your loadImage function. It works very well for me.
  8. Greetings, everyone.   Recently I've been interested in Warcraft3's model system. I downloaded the War3ModelEditor source code (from: http://home.magosx.com/index.php?topic=6.0), read it, and wrote a program which can render Warcraft3's models using OpenGL ES. When I run this code on an Android phone it looks good, but when there are more than 5 models on the screen the FPS becomes very low.   Currently I do all the bone animation (matrix calculation and vertex position calculation) on the CPU side. I think it might be faster if we could do all this work on the GPU side, but I just don't know how to do it. Warcraft3's vertex position calculation is complex for me.   Let me explain a little more. In a Warcraft3 model, each vertex is linked to one or more bones. Here is how War3ModelEditor calculates each vertex's position:

step 1. for each bone[i], calculate matrix_list[i]
step 2. for each vertex:
    position = (matrix_list[vertex_bone[0]] * v
              + matrix_list[vertex_bone[1]] * v
              + ...
              + matrix_list[vertex_bone[n-1]] * v) / n

note: n is the length of 'vertex_bone', and each vertex may have a different 'vertex_bone'. Actually, several vertices can share the same 'vertex_bone' array, while several other vertices share another one. For example, a model with 500 vertices may have only 35 distinct 'vertex_bone' arrays, but I don't know how I can make use of this to optimize performance. ?   Step 1 may be easy: since a typical Warcraft3 model has fewer than 30 bones, we can do this step on the CPU side without much of a performance hit. But step 2 is quite complex.   If I write a vertex shader (GLSL), it will be something like this:

uniform mat4 u_matrix_list[50]; /* there might be more ?? */
attribute float a_n;
attribute float a_vertex_bone[4]; /* there might be more ?? */
attribute vec4 a_position;
void main()
{
    float i;
    vec4 p = vec4(0.0, 0.0, 0.0, 0.0);
    for (i = 0.0; i < a_n; ++i) {
        p += u_matrix_list[int(a_vertex_bone[int(i)])] * a_position;
    }
    gl_Position = p / a_n;
}

There are some problems. 1. When I compile the vertex shader above (on my laptop, rather than an Android phone), it reports 'success' with the warning message 'OpenGL does not allow attributes of type float[4]'. And sometimes (when I change the order of the 3 attributes) it makes my program crash, with the message 'The NVIDIA OpenGL driver lost connection with the display driver due to exceeding the Windows Time-Out limit and is unable to continue.' 2. The book <OpenGL ES 2.0 Programming Guide>, page 83, says that 'OpenGL ES only mandates that array indexing be supported by constant integral expressions (there is an exception to this, which is the indexing of uniform variables in vertex shaders that is discussed in Chapter 8).', so the statement 'a_vertex_bone[int(i)]' might not work on some OpenGL ES hardware.   Actually, I've never written such a complex(?) shader before. Could anyone give me some advice? Thank you.
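For reference, step 2 written out as plain C looks like this (a sketch with my own names, not War3ModelEditor's actual code):

```c
#include <assert.h>

/* Blend one vertex by the matrices of the bones attached to it and divide
 * by the bone count, exactly as in step 2 above. Matrices are row-major
 * 4x4; vertex_bone lists the n bone indices for this vertex. */
void skin_vertex(float matrix_list[][16],
                 const int *vertex_bone, int n,
                 const float v[4], float out[4])
{
    int i, r, k;
    for (r = 0; r < 4; ++r)
        out[r] = 0.0f;
    for (i = 0; i < n; ++i) {
        const float *m = matrix_list[vertex_bone[i]];
        for (r = 0; r < 4; ++r)
            for (k = 0; k < 4; ++k)
                out[r] += m[r * 4 + k] * v[k];
    }
    for (r = 0; r < 4; ++r)
        out[r] /= (float)n;    /* average of the n transformed copies */
}
```

The note about shared 'vertex_bone' arrays suggests one CPU-side optimization: group vertices by their 'vertex_bone' array and skin each group in one batch, so the inner bone lookup is resolved once per group rather than once per vertex.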
  9. OpenGL

    Some days have passed but no answer :( Could anybody help me, please? I also used NeHe's OpenGL framework to test, but the problem happens again. Let me describe my problem once more; follow these steps and you get the problem:
(1) Set my desktop's bpp (bits per pixel) to 32 (through the Windows user interface: right-click the desktop, select 'Properties', then 'Settings'; not using the Windows API functions).
(2) (in my program) Change the display settings, setting bpp to 16.
(3) (in my program) Choose a pixel format with 16 color bits, 16 depth bits, and 8 alpha bits, then create the OpenGL context.
(4) (in my program) Use glColor3f(1, 1, 1) and then glRectf to draw a white rectangle. But what I see is a rectangle with color (0, 1, 1); the red bits are missing. Also, the FPS is quite low (FPS < 20.0 at 1024*768*16).
(5) In step 1, if I set the desktop's bpp to 16, then all the problems disappear: I get a white rectangle, and the FPS is nearly 90. By the way, whether I set the desktop's bpp to 16 or 32, ChoosePixelFormat returns the same value.
(6) In step 3, if I choose a pixel format without alpha bits, then all the problems disappear too.
(7) I tested this on two different computers, one with an Intel 845G and the other with an nVidia GeForce Go 7300, but both have the same problem.
  10. Hello. I have a problem with the wgl pixel format. The code is quite simple; I choose a pixel format which has 16 color bits, 16 depth bits, and 8 alpha bits:

int pixelformat;
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 16;
pfd.cDepthBits = 16;
pfd.cAlphaBits = 8;
pixelformat = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, pixelformat, &pfd);
hRC = wglCreateContext(hDC);

Before choosing the pixel format, I call ChangeDisplaySettings to change the screen display settings to 1024*768*16. The strange problem is: when my desktop display setting is 1280*800*16, the program works well (change the display settings to 1024*768*16, choose a pixel format, use glRectf to draw a rectangle), but when my desktop display setting is 1280*800*32, everything goes wrong. I use the color glColor3f(1, 1, 1), but I see the color (0, 1, 1) on my screen. Also, the FPS is quite low. Moreover, if I delete "pfd.cAlphaBits = 8;" from my code, everything goes well again. I've tested my program on two different PCs, one with an Intel 945G and the other with an nVidia GeForce Go 7300, but both have the same problem. I don't know why.
The complete code is here (language: C, IDE: Visual Studio 2005):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>
#pragma comment (lib, "opengl32.lib")

static PCTSTR gs_CLASS_NAME = TEXT("Default Window Class");
static int width = 1024;
static int height = 768;
static int bpp = 16;
static int fullscreen = 1;
static HWND hWnd;
static HDC hDC;
static HGLRC hRC;

#define MAX_CHAR 128

void drawString(const char* str)
{
    static int isFirstCall = 1;
    static GLuint lists;
    if( isFirstCall ) {
        isFirstCall = 0;
        lists = glGenLists(MAX_CHAR);
        wglUseFontBitmaps(wglGetCurrentDC(), 0, MAX_CHAR, lists);
    }
    for(; *str!='\0'; ++str)
        glCallList(lists + *str);
}

void showfps()
{
    char str[20];
    double fps;
    // calculate fps
    {
        int current;
        static int last;
        static double last_fps;
        static const int n = 50;
        static int count = 0;
        if( ++count < n )
            fps = last_fps;
        else {
            count = 0;
            current = GetTickCount();
            fps = 1000.0 * n / (current - last);
            last = current;
            last_fps = fps;
        }
    }
    // draw string
    sprintf(str, "FPS: %g", fps);
    glColor3f(1, 1, 1);
    glRasterPos2f(-1.0f, 0.9f);
    drawString(str);
}

static LRESULT CALLBACK wndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if( msg == WM_KEYDOWN )
        PostQuitMessage(0);
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

int main()
{
    DWORD style, exstyle;
    int position;

    // register class
    {
        WNDCLASSEX wc;
        wc.cbSize = sizeof(wc);
        wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC | CS_NOCLOSE;
        wc.lpfnWndProc = wndProc;
        wc.cbClsExtra = 0;
        wc.cbWndExtra = 0;
        wc.hInstance = GetModuleHandle(0);
        wc.hIcon = LoadIcon(0, IDI_APPLICATION);
        wc.hCursor = LoadCursor(0, IDC_ARROW);
        wc.hbrBackground = 0;
        wc.lpszMenuName = 0;
        wc.lpszClassName = gs_CLASS_NAME;
        wc.hIconSm = 0;
        if( !RegisterClassEx(&wc) )
            return 1;
    }

    // set style & change resolution
    {
        if( fullscreen ) {
            DEVMODE mode = {0};
            style = WS_POPUP;
            exstyle = WS_EX_APPWINDOW | WS_EX_TOPMOST;
            position = 0;
            mode.dmSize = sizeof(mode);
            mode.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
            mode.dmPelsWidth = width;
            mode.dmPelsHeight = height;
            mode.dmBitsPerPel = bpp;
            if( ChangeDisplaySettings(&mode, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL ) {
                fullscreen = 0;
                goto windowed;
            }
        }
        else {
            RECT rect;
windowed:
            style = WS_OVERLAPPEDWINDOW & (~WS_THICKFRAME);
            exstyle = WS_EX_APPWINDOW;
            position = CW_USEDEFAULT;
            rect.left = 0;
            rect.top = 0;
            rect.right = width;
            rect.bottom = height;
            AdjustWindowRectEx(&rect, style, FALSE, exstyle);
            width = rect.right - rect.left;
            height = rect.bottom - rect.top;
        }
    }

    // create window
    hWnd = CreateWindowEx(exstyle, gs_CLASS_NAME, TEXT(""), style,
                          position, position, width, height,
                          0, 0, GetModuleHandle(0), 0);
    if( !hWnd )
        return 1;
    ShowWindow(hWnd, SW_SHOW);
    UpdateWindow(hWnd);
    hDC = GetDC(hWnd);
    if( !hDC )
        return 1;

    // initialize opengl
    {
        int pixelformat;
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 16;
        pfd.cDepthBits = 16;
        pfd.cAlphaBits = 8;
        pixelformat = ChoosePixelFormat(hDC, &pfd);
        SetPixelFormat(hDC, pixelformat, &pfd);
        hRC = wglCreateContext(hDC);
        printf("pixel format: %d\n", pixelformat);
        wglMakeCurrent(hDC, hRC);
    }

    // message loop
    {
        MSG msg;
        for(;;) {
            if( PeekMessage(&msg, 0, 0, 0, PM_REMOVE) ) {
                if( msg.message == WM_QUIT )
                    return 0;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else {
                glClear(GL_COLOR_BUFFER_BIT);
                glColor3f(1, 1, 1);
                glRectf(-0.5f, -0.5f, 0.5f, 0.5f);
                showfps();
                SwapBuffers(hDC);
            }
        }
    }
    // end
    return 0;
}
  11. I see it. m × v = transpose(v) × transpose(m), so there's little difference between the two styles. I was a little confused about 'transposed matrix' and 'row-major/column-major'. In my code I store all matrices in row-major order, while OpenGL uses column-major; this could be a problem for me, but don't worry, I can deal with it. :) Here's some code I wrote these days for matrix calculations. It stores my matrices in row-major order, but in the mat4_inverse function I used column-major by mistake, and it still works; I don't know why.

#include <math.h>

/* (assumed typedefs; not shown in the original post) */
typedef float vec4[4];
typedef float mat4[16];

void vec4_fromXYZ(vec4 v, float x, float y, float z)
{
    v[0] = x; v[1] = y; v[2] = z; v[3] = 1.0f;
}

void vec4_fromXYZW(vec4 v, float x, float y, float z, float w)
{
    v[0] = x; v[1] = y; v[2] = z; v[3] = w;
}

void vec4_copy(vec4 to, vec4 from)
{
    to[0] = from[0]; to[1] = from[1]; to[2] = from[2]; to[3] = from[3];
}

void mat4_identity(mat4 m)
{
    int i;
    for(i=0; i<16; ++i) m[i] = 0.0f;
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

void mat4_copy(mat4 to, mat4 from)
{
    int i;
    for(i=0; i<16; ++i) to[i] = from[i];
}

void mat4_multiply(mat4 m1, mat4 m2, mat4 mResult)
{
    int i, j, k;
    for(i=0; i<4; ++i) {
        for(j=0; j<4; ++j) {
            float tmp = 0.0f;
            for(k=0; k<4; ++k)
                tmp += m1[i*4+k] * m2[k*4+j];
            mResult[i*4+j] = tmp;
        }
    }
}

void mat4_transform(mat4 m, vec4 v, vec4 result)
{
    int i, k;
    for(i=0; i<4; ++i) {
        float tmp = 0.0f;
        for(k=0; k<4; ++k)
            tmp += m[i*4+k] * v[k];
        result[i] = tmp;
    }
}

void mat4_translate(mat4 m, float x, float y, float z)
{
    mat4 transform = {
        1, 0, 0, x,
        0, 1, 0, y,
        0, 0, 1, z,
        0, 0, 0, 1
    };
    mat4 tmp;
    mat4_multiply(m, transform, tmp);
    mat4_copy(m, tmp);
}

void mat4_rotateX(mat4 m, float theta)
{
    mat4 transform = {
        1, 0,            0,           0,
        0, cosf(theta), -sinf(theta), 0,
        0, sinf(theta),  cosf(theta), 0,
        0, 0,            0,           1
    };
    mat4 tmp;
    mat4_multiply(m, transform, tmp);
    mat4_copy(m, tmp);
}

void mat4_rotateY(mat4 m, float theta)
{
    mat4 transform = {
         cosf(theta), 0, sinf(theta), 0,
         0,           1, 0,           0,
        -sinf(theta), 0, cosf(theta), 0,
         0,           0, 0,           1
    };
    mat4 tmp;
    mat4_multiply(m, transform, tmp);
    mat4_copy(m, tmp);
}

void mat4_rotateZ(mat4 m, float theta)
{
    mat4 transform = {
        cosf(theta), -sinf(theta), 0, 0,
        sinf(theta),  cosf(theta), 0, 0,
        0,            0,           1, 0,
        0,            0,           0, 1
    };
    mat4 tmp;
    mat4_multiply(m, transform, tmp);
    mat4_copy(m, tmp);
}

void mat4_transpose(mat4 m, mat4 result)
{
    int i, j;
    for(i=0; i<4; ++i)
        for(j=0; j<4; ++j)
            result[i*4+j] = m[j*4+i];
}

static float det3(float* m, int a1, int a2, int a3,
                  int a4, int a5, int a6, int a7, int a8, int a9)
{
    return m[a1] * m[a5] * m[a9] + m[a2] * m[a6] * m[a7] + m[a3] * m[a4] * m[a8]
         - m[a3] * m[a5] * m[a7] - m[a2] * m[a4] * m[a9] - m[a1] * m[a6] * m[a8];
}

void mat4_inverse(mat4 m, mat4 result)
{
    float d = +m[ 0] * det3(m, 5, 9, 13, 6, 10, 14, 7, 11, 15)
              -m[ 4] * det3(m, 1, 9, 13, 2, 10, 14, 3, 11, 15)
              +m[ 8] * det3(m, 1, 5, 13, 2, 6, 14, 3, 7, 15)
              -m[12] * det3(m, 1, 5, 9, 2, 6, 10, 3, 7, 11);
    d = 1.0f / d;
    result[ 0] =  d * det3(m, 5, 9, 13, 6, 10, 14, 7, 11, 15);
    result[ 1] = -d * det3(m, 1, 9, 13, 2, 10, 14, 3, 11, 15);
    result[ 2] =  d * det3(m, 1, 5, 13, 2, 6, 14, 3, 7, 15);
    result[ 3] = -d * det3(m, 1, 5, 9, 2, 6, 10, 3, 7, 11);
    result[ 4] = -d * det3(m, 4, 8, 12, 6, 10, 14, 7, 11, 15);
    result[ 5] =  d * det3(m, 0, 8, 12, 2, 10, 14, 3, 11, 15);
    result[ 6] = -d * det3(m, 0, 4, 12, 2, 6, 14, 3, 7, 15);
    result[ 7] =  d * det3(m, 0, 4, 8, 2, 6, 10, 3, 7, 11);
    result[ 8] =  d * det3(m, 4, 8, 12, 5, 9, 13, 7, 11, 15);
    result[ 9] = -d * det3(m, 0, 8, 12, 1, 9, 13, 3, 11, 15);
    result[10] =  d * det3(m, 0, 4, 12, 1, 5, 13, 3, 7, 15);
    result[11] = -d * det3(m, 0, 4, 8, 1, 5, 9, 3, 7, 11);
    result[12] = -d * det3(m, 4, 8, 12, 5, 9, 13, 6, 10, 14);
    result[13] =  d * det3(m, 0, 8, 12, 1, 9, 13, 2, 10, 14);
    result[14] = -d * det3(m, 0, 4, 12, 1, 5, 13, 2, 6, 14);
    result[15] =  d * det3(m, 0, 4, 8, 1, 5, 9, 2, 6, 10);
}
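One possible explanation for the "works by mistake" inverse, stated only as a guess: running a cofactor formula with transposed indexing computes inverse(transpose(M)), and inverse(transpose(M)) equals transpose(inverse(M)). Since OpenGL reads matrix memory column-major, that extra transpose can cancel out when the result is handed to GL. A tiny 2x2 numeric check of the identity (all names here are mine):

```c
#include <assert.h>
#include <math.h>

/* Invert a 2x2 row-major matrix [m0 m1; m2 m3] by the cofactor formula. */
static void inv2(const double m[4], double r[4])
{
    double d = m[0]*m[3] - m[1]*m[2];
    r[0] =  m[3]/d; r[1] = -m[1]/d;
    r[2] = -m[2]/d; r[3] =  m[0]/d;
}

/* Transpose a 2x2 row-major matrix (inputs and outputs must not alias). */
static void tr2(const double m[4], double r[4])
{
    r[0] = m[0]; r[1] = m[2];
    r[2] = m[1]; r[3] = m[3];
}
```

Comparing inv2(transpose(M)) against transpose(inv2(M)) element by element shows they match, which is the identity the accidental column-major indexing relies on.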
  12. Thank you for your reply. I already know that the second style I wrote is wrong. So, just as erissian said, we treat a vector as a matrix with 4 rows and only 1 column. Then what is the correct order of transforms? For example, I want to rotate a vector and then translate it, so I calculate two matrices, M(R) and M(T): M(R) is for the rotation and M(T) is for the translation. What should I do next?
1. M = M(R) × M(T), V' = M × V
2. M = M(T) × M(R), V' = M × V
I think the first one is correct, is that right?
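A tiny numeric check of the ordering question, assuming column vectors and row-major storage (all names here are mine): with V' = M × V, "rotate first, then translate" corresponds to M = M(T) × M(R), because the matrix nearest the vector is applied first.

```c
#include <assert.h>
#include <math.h>

/* 2D homogeneous coordinates (3x3, row-major) keep the example small. */
static void mul3(const double a[9], const double b[9], double r[9])
{
    int i, j, k;
    for (i = 0; i < 3; ++i)
        for (j = 0; j < 3; ++j) {
            r[i*3+j] = 0.0;
            for (k = 0; k < 3; ++k)
                r[i*3+j] += a[i*3+k] * b[k*3+j];
        }
}

static void apply3(const double m[9], const double v[3], double r[3])
{
    int i, k;
    for (i = 0; i < 3; ++i) {
        r[i] = 0.0;
        for (k = 0; k < 3; ++k)
            r[i] += m[i*3+k] * v[k];
    }
}
```

Rotating (1, 0) by 90 degrees gives (0, 1); translating that by (3, 0) gives (3, 1). Only M = T × R produces this result; M = R × T would translate first and give (0, 4).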
  13. Hello there. I'm reading some articles about matrices these days. Some say that the matrix transform looks like this (a row vector times a matrix):

[x, y, z, w] × M = [x', y', z', w']

but others say that the matrix transform looks like this (a matrix times a column vector):

M × [x, y, z, w]^T = [x', y', z', w']^T

I've studied OpenGL for some time, and I believe the first style is correct, but I see many people use the second style. Is there any difference? Which one should I use?
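The two styles give the same numbers once the matrix is transposed: (v_row × M) equals (Mᵀ × v_col). This can be checked with a tiny 2D example (all names here are mine):

```c
#include <assert.h>

/* Row-vector style: [x y] * [m0 m1; m2 m3] (row-major matrix). */
void row_times_mat(const float v[2], const float m[4], float out[2])
{
    out[0] = v[0] * m[0] + v[1] * m[2];
    out[1] = v[0] * m[1] + v[1] * m[3];
}

/* Column-vector style: [m0 m1; m2 m3] * [x; y]. */
void mat_times_col(const float m[4], const float v[2], float out[2])
{
    out[0] = m[0] * v[0] + m[1] * v[1];
    out[1] = m[2] * v[0] + m[3] * v[1];
}

/* Transpose a 2x2 (inputs and outputs must not alias). */
void transpose2(const float m[4], float t[4])
{
    t[0] = m[0]; t[1] = m[2];
    t[2] = m[1]; t[3] = m[3];
}
```

So neither style is "the correct one"; they are two conventions, and switching between them just transposes every matrix (and reverses the order of matrix products).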
  14. I haven't seen any article that talks about this subject, so what I say might be wrong :( Well, I think 32-bit is better. However, the ChoosePixelFormat function helps us find the most suitable pixel format, not an exact pixel format. So a driver supporting 32-bit color depth will use 32-bit, and a driver not supporting it will try 24-bit, 16-bit, etc. If a driver supports both 32-bit and 24-bit color depth, we would prefer it to use the 32-bit one, even when we don't need alpha bits.
  15. Thank you for your reply. But what are "ASM shaders"? I haven't heard of them.