
xargon123

Members
  • Content count: 98
  • Joined
  • Last visited

Community Reputation: 138 Neutral

About xargon123
  • Rank: Member
  1. I am very new to OpenGL and am reading about mapping images to textures, and I came across this API with the following signature:

     [code]void glTexImage2D(GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid *data);[/code]

     Now, I have read in many places that graphics hardware works better if things are aligned to 4-byte boundaries. So I was wondering about the following case where I have an RGB image. The format parameter should be GL_RGB. However, to take advantage of this 4-byte alignment I want to keep it internally as GL_RGBA (where the alpha channel is not really used).

     Reading the API, I got a bit confused as to whether I need to change my image to the RGBA layout myself, or whether just setting the internal format to GL_RGBA and the format to GL_RGB would suffice.

     Another question I wonder about: what if I have grayscale data, i.e. just a single channel? Would it also be beneficial to convert that to an RGBA structure at the cost of more memory, or would I avoid a performance hit by keeping it as GL_RED, for example?

     Thanks, Luca
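     (A minimal sketch of the distinction being asked about, not from the original post: internalformat only tells the driver how to store the texture on the GPU, while format/type describe the client memory being passed in, so the two may differ and no manual repacking is needed. width, height, pixels and textureId are placeholder variables.)

     [code]
     // Hypothetical upload of tightly packed 24-bit RGB pixels into an RGBA texture.
     // internalformat (GL_RGBA) is the GPU-side storage; format/type (GL_RGB,
     // GL_UNSIGNED_BYTE) describe the data we actually supply.
     glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // source rows are not padded
     glBindTexture(GL_TEXTURE_2D, textureId);
     glTexImage2D(GL_TEXTURE_2D,
                  0,                             // mipmap level
                  GL_RGBA,                       // internal (GPU) format
                  width, height,
                  0,                             // border, must be 0
                  GL_RGB,                        // layout of the supplied pixels
                  GL_UNSIGNED_BYTE,
                  pixels);
     [/code]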
  2. Hello, I have a question about glGenTextures, which I use to create some 2D textures from 3D volumes. During my initialization, I have something like this:

     [source lang="cpp"]
     m_pXTexNames = new unsigned int[m_nXDim];
     glGenTextures(m_nXDim, m_pXTexNames);
     m_pYTexNames = new unsigned int[m_nYDim];
     glGenTextures(m_nYDim, m_pYTexNames);
     m_pZTexNames = new unsigned int[m_nZDim];
     glGenTextures(m_nZDim, m_pZTexNames);
     [/source]

     For each of these I then use glTexImage2D to fill the texture with some data. Now, my understanding is that when you do this the data resides in GPU memory, but I see the system RAM increasing quite a bit after these calls, and it stays high until I deallocate the textures (I verified this using the Windows Task Manager). Is this normal? Have I misunderstood how this works? I can show more detailed code if you want. Thanks, xarg
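     (For what it is worth: a driver is free to keep a system-memory copy of texture data, so some RAM growth after glTexImage2D is expected. A minimal teardown sketch, reusing the variable names from the post above, would be:)

     [code]
     // Hypothetical cleanup matching the allocation above: release the GL texture
     // names with glDeleteTextures and the client-side name arrays with delete[].
     glDeleteTextures(m_nXDim, m_pXTexNames);
     glDeleteTextures(m_nYDim, m_pYTexNames);
     glDeleteTextures(m_nZDim, m_pZTexNames);
     delete[] m_pXTexNames;  m_pXTexNames = 0;
     delete[] m_pYTexNames;  m_pYTexNames = 0;
     delete[] m_pZTexNames;  m_pZTexNames = 0;
     [/code]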
  3. Hi everyone, I have some legacy code which I am converting/porting to a new framework using Trolltech's Qt framework. From version 4.7 it also has support for GLSL shaders. I have some shader code in my application which is as follows. It is a fragment shader and not too complex (I think!):

     [code]
     // initialise fragment program
     char* szShader = new char[1024];
     strcpy(szShader, "!!ARBfp1.0\n");
     strcat(szShader, "PARAM c[1] = { { 65535, 0.00390625, 256, 0.0039215689 } };\n");
     strcat(szShader, "TEMP R0;\n");
     strcat(szShader, "TEX R0.x, fragment.texcoord[0], texture[0], 2D;\n");
     strcat(szShader, "MUL R0.x, R0, c[0];\n");
     strcat(szShader, "MUL R0.y, R0.x, c[0];\n");
     strcat(szShader, "ABS R0.z, R0.y;\n");
     strcat(szShader, "FRC R0.z, R0;\n");
     strcat(szShader, "MUL R0.z, R0, c[0];\n");
     strcat(szShader, "FLR R0.y, R0;\n");
     strcat(szShader, "CMP R0.x, R0, -R0.z, R0.z;\n");
     strcat(szShader, "MUL_SAT R0.xy, R0, c[0].w;\n");
     strcat(szShader, "TEX result.color, R0, texture[1], 2D;\n");
     strcat(szShader, "END\n\0");
     [/code]

     Could someone help me convert this to GLSL, as the Qt framework does not support shaders in this (assembly?) sort of format? I have no experience with shaders myself and have unfortunately hit a wall and am having difficulty making progress! I will pay in beers if you are in London anytime! Many thanks. I really appreciate any help. xarg

     P.S.: So, it turns out that I need to convert this ASM-like shader to GLSL. I looked for conversion tools but could not really find one. This shader code should be simple. Does anyone know of any tools that can be used to convert from ASM to GLSL or other shader languages?
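     (Not from the original thread, but a rough hand translation of that ARB program is sketched below: it appears to split a 16-bit value stored in a single channel into its low and high bytes and use them as coordinates into a 256x256 lookup texture. The uniform names dataTex/lutTex are placeholders bound to texture units 0 and 1; please verify against the original before using it.)

     [code]
     // Hypothetical GLSL 1.20 equivalent of the ARB fragment program, kept as a
     // C string in the same style as the original code.
     const char* fragSrc =
         "uniform sampler2D dataTex;  /* texture unit 0: source data        */ \n"
         "uniform sampler2D lutTex;   /* texture unit 1: 256x256 lookup     */ \n"
         "void main()                                                          \n"
         "{                                                                    \n"
         "    float v  = texture2D(dataTex, gl_TexCoord[0].xy).x * 65535.0;    \n"
         "    float hi = v * 0.00390625;               /* v / 256           */ \n"
         "    float lo = fract(abs(hi)) * 256.0;       /* low byte          */ \n"
         "    hi = floor(hi);                          /* high byte         */ \n"
         "    lo = (v < 0.0) ? -lo : lo;               /* the CMP above     */ \n"
         "    vec2 uv = clamp(vec2(lo, hi) * 0.0039215689, 0.0, 1.0); /* /255 */\n"
         "    gl_FragColor = texture2D(lutTex, uv);                            \n"
         "}                                                                    \n";
     [/code]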
  4. One final question about this: the reason I need this is that I need to calculate the gradient of the 2D image which was generated by resampling the original image using the Catmull-Rom spline interpolation kernel. So, the way I understand it, the gradient of the resampled image should be the same as the convolution of the resampled image with this derivative kernel. Does that sound right? Since it is a 2D image, the gradient would be a vector field with a 2-element vector at each pixel. The original interpolation worked by looking at the 16 values in the neighbourhood and producing the interpolated intensity. I am struggling to understand how I can get the gradient vector field from this resampled image. I am guessing I somehow have to do the convolution along each axis separately, but I am struggling to see how that would work. Thanks, xarg
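     (A sketch of the usual separable formulation, not from the thread: the x-component of the gradient uses the derivative weights along x and the ordinary interpolation weights along y, and vice versa for the y-component. The helper names and the pixel() accessor below are placeholders.)

     [code]
     // Catmull-Rom weights and their derivatives for a fractional offset t in [0,1).
     static void crWeights(float t, float w[4])
     {
         w[0] = 0.5f * (-t*t*t + 2*t*t - t);
         w[1] = 0.5f * ( 3*t*t*t - 5*t*t + 2);
         w[2] = 0.5f * (-3*t*t*t + 4*t*t + t);
         w[3] = 0.5f * ( t*t*t - t*t);
     }
     static void crDerivWeights(float t, float w[4])
     {
         w[0] = 0.5f * (-3*t*t + 4*t - 1);
         w[1] = 0.5f * ( 9*t*t - 10*t);
         w[2] = 0.5f * (-9*t*t + 8*t + 1);
         w[3] = 0.5f * ( 3*t*t - 2*t);
     }

     // Hypothetical gradient sample at fractional position (x, y); pixel(ix, iy)
     // is a placeholder that fetches one source intensity.
     void sampleGradient(float x, float y, float& gx, float& gy)
     {
         int   x0 = (int)x, y0 = (int)y;              // assumes x, y >= 0
         float wx[4], wy[4], dwx[4], dwy[4];
         crWeights(x - x0, wx);        crWeights(y - y0, wy);
         crDerivWeights(x - x0, dwx);  crDerivWeights(y - y0, dwy);
         gx = gy = 0.0f;
         for (int j = 0; j < 4; ++j)
             for (int i = 0; i < 4; ++i)
             {
                 float p = pixel(x0 - 1 + i, y0 - 1 + j);
                 gx += p * dwx[i] * wy[j];   // d/dx: derivative weights along x only
                 gy += p * wx[i]  * dwy[j];  // d/dy: derivative weights along y only
             }
     }
     [/code]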
  5. Great! Thanks for that :)
  6. Hello everyone, I am currently using a Catmull-Rom spline to interpolate intensities in an image using the matrix described here: http://en.wikipedia.org/wiki/Cubic_Hermite_spline#Interpolation_on_the_unit_interval_without_exact_derivatives So, I am looking at the 16 values in the neighbourhood, applying the matrix shown there, and calculating the interpolated intensity, and this works quite well. Now, I also need to calculate the derivative of this spline. Is it then simply a matter of taking the derivatives of that kernel, which would be the weight vector 0.5 * ( 4x - 3x^2 - 1,  9x^2 - 10x,  8x - 9x^2 + 1,  3x^2 - 2x )? Thanks, xarg
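     (For reference, my own working rather than something from the thread, so worth double-checking: the standard Catmull-Rom weights for a fractional offset x and their term-by-term derivatives are

     w(x)  = 0.5 * ( -x^3 + 2x^2 - x,   3x^3 - 5x^2 + 2,   -3x^3 + 4x^2 + x,   x^3 - x^2 )
     w'(x) = 0.5 * ( -3x^2 + 4x - 1,    9x^2 - 10x,        -9x^2 + 8x + 1,     3x^2 - 2x )

     which agrees with the derivative vector written above; note the overall factor of 0.5 is kept.)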
  7. Hello everyone, I am using the bicubic kernel described here ( http://en.wikipedia.org/wiki/Bicubic_interpolation#Bicubic_convolution_algorithm ) to interpolate my image after applying some transformations. I am using the matrix kernel described there with a = -0.5. Now, what I also need to do is work out the kernel that is the derivative of this bicubic kernel. There is a bit of discussion on that page about the derivative, but I am finding it impossible to derive this kernel. I would be really grateful if someone could help me derive this derivative kernel. My calculus skills are quite rusty, but this has been a roadblock for me for quite a few days now. I have also searched high and low on the internet for this derivative kernel, but to no avail. I look forward to any assistance you can give. xarg
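     (Not from the thread, but differentiating the convolution kernel piecewise, my own working and worth verifying, gives, writing s = |x| and using d|x|/dx = sign(x):

     W(x)  = (a+2)s^3 - (a+3)s^2 + 1              for s <= 1
           = a*s^3 - 5a*s^2 + 8a*s - 4a           for 1 < s < 2
           = 0                                    otherwise

     W'(x) = sign(x) * ( 3(a+2)s^2 - 2(a+3)s )    for s <= 1
           = sign(x) * ( 3a*s^2 - 10a*s + 8a )    for 1 < s < 2
           = 0                                    otherwise

     With a = -0.5 this becomes sign(x) * (4.5s^2 - 5s) on the inner interval and sign(x) * (-1.5s^2 + 5s - 4) on the outer one. As a sanity check, both pieces meet at -0.5 when s = 1 and the outer piece reaches 0 at s = 2.)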
  8. Thank you guys! Sorry, very new to this and making these silly mistakes. Many thanks! xarg
  9. Hello everyone, I have been trying all sorts of things for hours to get this working, but no luck! I have some simple code that creates a 2D grid. My problem is that I am unable to apply any transformation to the grid. Even if I try to load an identity matrix, everything goes crazy! Here is how I set up my OpenGL view. I have a transformation that comes in screen space, so I do the following:

     [code]
     glViewport(0, 0, w, h);
     glMatrixMode(GL_MODELVIEW);
     glLoadIdentity();
     glOrtho(0, w, 0, h, -1, 1);
     [/code]

     When I have no transformation, I can draw the grid just fine. The code is as follows:

     [code]
     glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
     glClear(GL_COLOR_BUFFER_BIT);
     glBegin(GL_LINES);
     glColor3f(1.0f, 0.0f, 0.0f);
     int i, end;
     for (i = 0; i < ih; i += 2) { glVertex2i(0, i); glVertex2i(iw, i); }
     end = i - 2;
     for (int i = 0; i < iw; i += 2) { glVertex2i(i, 0); glVertex2i(i, end); }
     glEnd();
     [/code]

     This works just fine and shows me a grid. Now, as soon as I try to load a transformation (even an identity transformation!), the grid disappears and I think it gets drawn somewhere far away. The code with the transformation is as follows:

     [code]
     // identity transformation!
     float transformation[16] = {0.0f};
     transformation[0]  = 1.0f;
     transformation[5]  = 1.0f;
     transformation[10] = 1.0f;
     transformation[15] = 1.0f;
     glLoadMatrixf(transformation);

     // grid drawing code
     glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
     glClear(GL_COLOR_BUFFER_BIT);
     glBegin(GL_LINES);
     glColor3f(1.0f, 0.0f, 0.0f);
     int i, end;
     // ih = 512
     // iw = 512
     for (i = 0; i < ih; i += 2) { glVertex2i(0, i); glVertex2i(iw, i); }
     end = i - 2;
     for (int i = 0; i < iw; i += 2) { glVertex2i(i, 0); glVertex2i(i, end); }
     glEnd();
     glPopMatrix();
     [/code]

     So here I have tried to use glLoadMatrixf and loaded my identity transformation through it. What am I doing wrong? I would really appreciate some help, as this is quite urgent for me and I have spent about 5 hours trying all sorts of things now... Please help! Many thanks, xarg
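     (One likely explanation, not from the original post: glOrtho here is multiplied into the modelview matrix, so the later glLoadMatrixf replaces the ortho setup together with everything else and the grid ends up outside the view volume. A minimal sketch of keeping the projection separate, using the same w/h and transformation array as above:)

     [code]
     // Hypothetical setup: the ortho projection lives on GL_PROJECTION, so
     // glLoadMatrixf on GL_MODELVIEW only replaces the object transform.
     glViewport(0, 0, w, h);
     glMatrixMode(GL_PROJECTION);
     glLoadIdentity();
     glOrtho(0, w, 0, h, -1, 1);        // screen-space projection stays here

     glMatrixMode(GL_MODELVIEW);
     glLoadMatrixf(transformation);     // identity (or any affine) transform only
     // ... grid drawing code as before ...
     [/code]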
  10. Thanks for that. I tried a simple test as follows:

      [code]
      extern "C" __declspec(dllexport) BOOL draw_gl(HDC *hdc, HGLRC *hglrc)
      {
          if (hdc && hglrc)
          {
              float test[16] = {0.0f};
              test[0]  = 1.0f;
              test[5]  = 1.0f;
              test[10] = 1.0f;
              test[12] = -1.0f;
              test[13] = -1.0f;
              test[14] = 0.0f;
              test[15] = 1.0f;

              wglMakeCurrent(*hdc, *hglrc);
              glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
              glClear(GL_COLOR_BUFFER_BIT);
              glLoadMatrixf(test);
              glPushMatrix();
              glBegin(GL_LINES);
              glColor3f(1.0f, 0.0f, 0.0f);
              const float delta = 0.1f;
              for (float i = -1.0f; i <= 1.0f; i += delta) { glVertex2f( 1.0f, i); glVertex2f(-1.0f, i); }
              for (float i = -1.0f; i <= 1.0f; i += delta) { glVertex2f(i,  1.0f); glVertex2f(i, -1.0f); }
              glEnd();
              glPopMatrix();
              return TRUE;
          }
          return FALSE;
      }
      [/code]

      This seems to work as expected. There is just one last bit: my transformation matrix is in GDI+ screen pixel coordinates. Is there a simple way to transform this to OpenGL coordinates? Thanks again for your help. xarg
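      (A sketch of one way to do this, not from the thread: set the projection up in pixel coordinates with y pointing down, so OpenGL coordinates match GDI+ directly, and then expand the pixel-space matrix into a 4x4 for glMultMatrixf. width/height and the GDI+ matrix elements m11, m12, m21, m22, dx, dy are placeholders, and row/column conventions are easy to get wrong, so please verify.)

      [code]
      // Hypothetical setup for working directly in GDI+ pixel coordinates:
      // map (0,0)..(width,height) with the origin at the top-left, y down.
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0.0, (double)width, (double)height, 0.0, -1.0, 1.0);

      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      // Expand the GDI+ 3x3 matrix {m11 m12; m21 m22; dx dy} into the
      // column-major 4x4 layout OpenGL expects.
      float m[16] = { m11, m12, 0.0f, 0.0f,
                      m21, m22, 0.0f, 0.0f,
                      0.0f, 0.0f, 1.0f, 0.0f,
                      dx,  dy,  0.0f, 1.0f };
      glMultMatrixf(m);
      [/code]

      With this projection the grid vertices would also have to be specified in pixel units rather than in the -1..1 range used above.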
  11. Hello, I am very new to OpenGL and 3D graphics in general, and am a bit stuck on how to achieve the following. Currently, I have a simple OpenGL drawing routine that draws a simple 2D grid as follows. The code is inside a DLL.

      [code]
      extern "C" __declspec(dllexport) BOOL draw_gl(HDC *hdc, HGLRC *hglrc)
      {
          if (hdc && hglrc)
          {
              wglMakeCurrent(*hdc, *hglrc);
              glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
              glClear(GL_COLOR_BUFFER_BIT);
              glPushMatrix();
              glBegin(GL_LINES);
              glColor3f(1.0f, 0.0f, 0.0f);
              const float delta = 0.1f;
              for (float i = -1.0f; i <= 1.0f; i += delta) { glVertex2f( 1.0f, i); glVertex2f(-1.0f, i); }
              for (float i = -1.0f; i <= 1.0f; i += delta) { glVertex2f(i,  1.0f); glVertex2f(i, -1.0f); }
              glEnd();
              glPopMatrix();
              return TRUE;
          }
          return FALSE;
      }
      [/code]

      The code works fine and I see a 2D grid fill the screen. Now, I want to extend the function as follows:

      [code]
      extern "C" __declspec(dllexport) BOOL draw_gl(HDC *hdc, HGLRC *hglrc, float *affine)
      [/code]

      where affine points to a 9-element array that represents a homogeneous affine transformation. What I want is to apply this transformation to my grid. By default the grid fills the whole screen. Now say, for example, my transformation is a simple translation; then I want the grid to be translated. So when I translate the grid (say to the right), the grid starts drawing from the new position rather than the absolute screen left... Does that make sense? Same with scaling... I guess the delta value should change with scaling... So, considering the drawing code above, is there a way to apply a 3x3 affine transformation to my grid vertices using OpenGL (I guess the z component can be set to 0)? I hope the question is sensible. I am sorry if it is a very basic one, but I am quite new to this and it took me hours just to write this simple rendering code! Many thanks, xarg
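      (A minimal sketch of one way to do this, not from the thread: expand the 3x3 homogeneous matrix into a 4x4 one with the z row and column set to identity and hand it to glMultMatrixf. It assumes the 9 floats are stored row-major with the translation in the last column; if the convention differs, the indices need swapping.)

      [code]
      // Hypothetical expansion of a row-major 3x3 affine matrix
      //   | a0 a1 a2 |     (a2, a5 = translation,
      //   | a3 a4 a5 |      a6 a7 a8 usually 0 0 1)
      //   | a6 a7 a8 |
      // into the column-major 4x4 layout that glMultMatrixf expects.
      const float* a = affine;
      float m[16] = { a[0], a[3], 0.0f, a[6],     // column 0
                      a[1], a[4], 0.0f, a[7],     // column 1
                      0.0f, 0.0f, 1.0f, 0.0f,     // column 2 (z passes through)
                      a[2], a[5], 0.0f, a[8] };   // column 3 (translation)
      glMatrixMode(GL_MODELVIEW);
      glPushMatrix();
      glMultMatrixf(m);
      // ... existing grid drawing ...
      glPopMatrix();
      [/code]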
  12. Hello, I finally managed to solve it. It was because on resize the underlying bitmap was being recreated which meant I had to clean up OpenGL resources and reinitialize it with the new HDC. Now, it seems to work ok. Thanks, xarg
  13. Hello, thanks for the reply. Setting the colorBits to 32 seems to have solved the problem and I see the image getting rendered now. I have one more problem, though: when I resize the bitmap, the subsequent rendering crashes with an access violation. I tried setting the viewport to the new size, but this still happens. The application crashes with the error "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The drawing routine is very simple:

      [code]
      extern "C" __declspec(dllexport) BOOL draw_gl(HDC *hdc, HGLRC *hglrc)
      {
          if (hdc && hglrc)
          {
              wglMakeCurrent(*hdc, *hglrc);
              glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
              glClear(GL_COLOR_BUFFER_BIT);
              glPushMatrix();
              glRotatef(0.0f, 0.0f, 0.0f, 1.0f);
              glBegin(GL_TRIANGLES);
              glColor3f(1.0f, 0.0f, 0.0f); glVertex2f( 0.0f,  0.9f);
              glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.9f, -0.9f);
              glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(-0.9f, -0.9f);
              glEnd();
              glPopMatrix();
              return TRUE;
          }
          return FALSE;
      }
      [/code]

      Any ideas what might be happening? Thanks, xarg
  14. Hello everyone, I am using OpenGL between C# and C++, where the C++ DLL is handling all the OpenGL initialization and drawing. I am quite new to 3D programming and OpenGL. What I do is pass a bitmap from my managed C# code, and I would like to render to this bitmap using OpenGL. Just to give some background, the relevant managed code is as follows:

      [code]
      Graphics graphics = System.Drawing.Graphics.FromImage(_bitmap);
      IntPtr hDC = graphics.GetHdc();
      try
      {
          // If we have not initialized OpenGL, do it now.
          if (!_isOpenGLInit)
          {
              bool retval = NativeMethods.InitGL(out hDC, out this._hglrc);
              _isOpenGLInit = true;
          }
      }
      finally
      {
          graphics.ReleaseHdc(hDC);
      }
      [/code]

      Here _bitmap is the bitmap that I want to draw on, and the format GDI+ initializes it with is 32-bit ARGB. Now, my OpenGL initialization code is as follows:

      [code]
      extern "C" __declspec(dllexport) BOOL InitGL(HDC *hdc, HGLRC *hglrc)
      {
          BOOL retval = FALSE;
          PIXELFORMATDESCRIPTOR pfd;
          int format;

          ZeroMemory(&pfd, sizeof(pfd));
          pfd.nSize      = sizeof(pfd);
          pfd.nVersion   = 1;
          pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
          pfd.iPixelType = PFD_TYPE_RGBA;
          pfd.cColorBits = 24;
          pfd.cDepthBits = 16;
          pfd.iLayerType = PFD_MAIN_PLANE;

          format = ChoosePixelFormat(*hdc, &pfd);
          retval = SetPixelFormat(*hdc, format, &pfd);   // this fails.
          *hglrc = wglCreateContext(*hdc);
          return retval;
      }
      [/code]

      The SetPixelFormat call fails and returns 0. ChoosePixelFormat returns a value of 104. GetLastError() returns ERROR_INVALID_FUNCTION. I am guessing that the pixel format is somehow not possible with this bitmap. GDI+ uses 32-bit ARGB, while the OpenGL format I could choose was RGBA. I wonder if that is the problem, but GDI+ does not have RGBA and OpenGL does not seem to support ARGB, so I am not able to test this theory... Besides that, it is rendering to a bitmap, and I wonder if there is something special I need to do; however, I did not find anything on the net. I have spent a whole day trying to make this work with no luck. I would be really grateful if someone could help me with this. Many thanks, xarg
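      (A sketch of the change that, per the earlier reply above, resolved this: when rendering into a DIB with PFD_DRAW_TO_BITMAP, cColorBits generally has to match the bitmap's depth, so a 32-bit ARGB GDI+ bitmap wants cColorBits = 32 rather than 24. The rest follows the original initialization code.)

      [code]
      // Hypothetical pixel format matching a 32-bit GDI+ bitmap.
      PIXELFORMATDESCRIPTOR pfd;
      ZeroMemory(&pfd, sizeof(pfd));
      pfd.nSize      = sizeof(pfd);
      pfd.nVersion   = 1;
      pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
      pfd.iPixelType = PFD_TYPE_RGBA;
      pfd.cColorBits = 32;               // match the 32-bit ARGB bitmap
      pfd.cDepthBits = 16;
      pfd.iLayerType = PFD_MAIN_PLANE;
      int  format = ChoosePixelFormat(*hdc, &pfd);
      BOOL ok     = SetPixelFormat(*hdc, format, &pfd);
      [/code]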
  15. Many thanks for the replies. Ahhhhh...so they can really be any points in space as long as they are described in spherical coordinates. Is that correct? So considering the earlier example of radiance, the directions only form a sphere if the direction vectors are all same length. So, say I describe the amount of light coming to a point through these directions but the length of the direction vectors (radius) changes with each different angle... Would I still be able to use spherical harmonics on this case? Thanks, /x
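      (Not an answer from the thread, but for reference: spherical harmonics expand a scalar function of direction only, f(theta, phi) ~ sum over l and m of c_lm * Y_lm(theta, phi), so a "radius" that changes with the angle is simply the value f(theta, phi) being expanded; the directions themselves are always treated as unit vectors.)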