About littleeboy

  1. I am very interested in the work discussed in http://www.gamedev.net/community/forums/topic.asp?topic_id=504549, but the paper could not be downloaded from www.mediafire.com. I hope someone who has the paper, or a working link, can share it. Thanks in advance.
  2. Does anyone know how to make a tank track animation work in a game engine? I want to create a tank track animation in a 3D game, not just an animated texture. Some programmers propose using skeletal animation. Is that method valid? Are there other ways to achieve it? In 3ds Max, tank track animation is relatively trivial, but that is not for real-time applications. If I use skeletal animation, how do I control the different rotation directions of the tracks according to the tank's forward or backward movement? I am an OpenGL programmer. Thanks in advance.
  3. Thanks V-man. In my original project, I use glGetTexImage to get the data, then save it to disk in PNG format. I found that my rendering pauses for a few seconds. If I use PBOs to get the data, would that solve the pause? Since I still need to save the data in PNG format, I still need multithreading, right?
  4. If I use PBOs, do you mean I no longer need multithreading?
  5. Thanks, I think I only need one render context. When I use wglMakeCurrent to synchronize between the two threads, I get the texture data. My purpose is to reduce the cost of saving the texture data to disk in my project. I found that even when using a helper thread to save the data, the project still pauses for about one second while saving to disk. Do I need to use PBOs to solve my problem?
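For reference, the usual PBO readback pattern looks roughly like this. This is only a sketch under my own assumptions (identifiers are placeholders, and on Windows the buffer-object entry points must be loaded through an extension loader such as GLEW, since gl.h only exposes GL 1.1). With a GL_PIXEL_PACK_BUFFER bound, glGetTexImage returns immediately and the copy runs on the GPU; mapping the *previous* frame's buffer one frame later avoids the stall, and the mapped pointer can then be handed to the worker thread that encodes the PNG.

```c
#include <GL/glew.h>   /* or any loader exposing GL 1.5+ buffer objects */

static GLuint pbo[2];
static int frame = 0;

void init_pbos(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        /* 2 bytes per texel for a 16-bit luminance heightmap */
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 2,
                     NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void read_heightmap_async(GLuint tex)
{
    int cur = frame % 2, prev = (frame + 1) % 2;

    /* start this frame's copy into a PBO; with a pack buffer bound the
       last argument is an offset, and the call returns without waiting */
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, 0);

    /* map last frame's copy; by now the GPU has usually finished it,
       so mapping does not stall the render thread */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (data) {
        /* hand `data` (or a memcpy of it) to the worker thread that
           does the PNG encoding and disk write, then unmap */
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame;
}
```

The PBO removes the GPU-to-CPU stall, but PNG compression and the disk write are still CPU work, so keeping those on a worker thread is what removes the one-second pause.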
  6. I have a terrain rendering project. The terrain heightmap texture is modified in the main thread. I want to use glGetTexImage in a helper thread to save the modified texture data. Unfortunately, I have failed. The project has only one window. Do I need two different OpenGL rendering contexts to solve my problem? I know the heightmap texture belongs to the main thread's context, but how do I get at the data in another thread? I use a Microsoft OS, OpenGL 3.0, and a GeForce 9 card.
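A context can only be current in one thread at a time, so reading the texture from a second thread generally means a second context that shares objects with the first. A minimal WGL sketch, under my own assumptions (this is not from the thread; error handling is elided, and sharing should be set up before the texture is created):

```c
#include <windows.h>
#include <GL/gl.h>

/* Sketch: create a worker-thread context that shares textures and
   buffers with the main context. Call once at startup. */
HGLRC create_shared_context(HDC hdc, HGLRC main_rc)
{
    HGLRC work_rc = wglCreateContext(hdc);
    if (!work_rc)
        return NULL;
    /* share display lists, textures and buffer objects; the second
       context must still be "empty" when this is called */
    if (!wglShareLists(main_rc, work_rc)) {
        wglDeleteContext(work_rc);
        return NULL;
    }
    return work_rc;
}

/* In the worker thread: wglMakeCurrent(hdc, work_rc); after that,
   glBindTexture + glGetTexImage on the shared texture id work there. */
```

Note that sharing does not synchronize: the main thread still has to finish modifying the heightmap (e.g. glFinish or a handshake) before the worker reads it.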
  7. Help! Load 16bit grayscale PNG heightmap texture error!

    Thanks Erik Rufelt! After adding glPixelStorei(GL_UNPACK_ALIGNMENT, 2); and glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE); it works.
  8. Help! Load 16bit grayscale PNG heightmap texture error!

    Thanks again. Are you sure that storing png_tex->texels as GLubyte is correct? I use the Microsoft Visual C++ 6.0 environment. Since I use a GeForce 9500 card, do you mean I need to update gl.h and glut.h? I will try your suggestion when I get to my office. I hope you can keep following this topic.
  9. Help! Load 16bit grayscale PNG heightmap texture error!

    Thanks Erik Rufelt. Quote: glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, w, h, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, data). But I think png_tex->texels is stored as GLubyte in memory, not as GL_UNSIGNED_SHORT. I have tried glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, png_tex->width, png_tex->height, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, png_tex->texels); but the texture is still not correct.
  10. Help! Load 16bit grayscale PNG heightmap texture error!

    Thanks for your reply! I am online now. I think I load the 16-bit height data correctly as bytes.
  11. I plan to use vertex texture fetch to render terrain, so I want to load the 1k*1k Puget Sound PNG image as the heightmap texture. The PNG image is 16-bit grayscale. When I load it using libpng, I allocate width*height*2 bytes. But how do I assemble the two bytes per texel into a GL_LUMINANCE16 internal format? I am confused about that. I have read the libpng example code from www.libpng.org, but there the 16-bit grayscale is always translated to 8-bit. I can display an 8-bit grayscale texture, but if I translate the original 16-bit Puget Sound data down to 8-bit, the height data loses accuracy. The following is my code, written in C with OpenGL, on an NV GeForce 9500 card under Windows XP.

/* OpenGL texture info */
typedef struct
{
  GLsizei width;
  GLsizei height;
  GLenum format;
  GLint internalFormat;
  GLuint id;
  GLubyte *texels;
} gl_texture_t;

/* texture id */
GLuint texId;
gl_texture_t *png_tex = NULL;

void GetPNGtextureInfo (int color_type, gl_texture_t *texinfo)
{
  switch (color_type) {
  case PNG_COLOR_TYPE_GRAY:
    texinfo->format = GL_LUMINANCE;
    texinfo->internalFormat = GL_LUMINANCE16;
    break;
  default:
    /* Badness */
    break;
  }
}

gl_texture_t *ReadPNGFromFile (const char *filename)
{
  gl_texture_t *texinfo;
  png_byte magic[8];
  png_structp png_ptr;   /* was undeclared in my original post */
  png_infop info_ptr;    /* was undeclared in my original post */
  int bit_depth, color_type;
  FILE *fp = NULL;
  png_bytep *row_pointers = NULL;
  int i;

  /* open image file */
  fp = fopen (filename, "rb");
  if (!fp) {
    fprintf (stderr, "error: couldn't open \"%s\"!\n", filename);
    return NULL;
  }

  /* read and check the magic number */
  fread (magic, 1, sizeof (magic), fp);
  if (!png_check_sig (magic, sizeof (magic))) {
    fprintf (stderr, "error: \"%s\" is not a valid PNG image!\n", filename);
    fclose (fp);
    return NULL;
  }

  /* create png read and info structs */
  png_ptr = png_create_read_struct (PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
  if (!png_ptr) {
    fclose (fp);
    return NULL;
  }
  info_ptr = png_create_info_struct (png_ptr);
  if (!info_ptr) {
    fclose (fp);
    png_destroy_read_struct (&png_ptr, NULL, NULL);
    return NULL;
  }

  /* create our OpenGL texture object */
  texinfo = (gl_texture_t *)malloc (sizeof (gl_texture_t));
  texinfo->texels = NULL;

  /* initialize setjmp for returning properly after a libpng error occurs */
  if (setjmp (png_jmpbuf (png_ptr))) {
    fclose (fp);
    png_destroy_read_struct (&png_ptr, &info_ptr, NULL);
    if (row_pointers)
      free (row_pointers);
    if (texinfo) {
      if (texinfo->texels)
        free (texinfo->texels);
      free (texinfo);
    }
    return NULL;
  }

  /* set up libpng to use standard C fread() with our FILE pointer */
  png_init_io (png_ptr, fp);

  /* tell libpng that we have already read the magic number */
  png_set_sig_bytes (png_ptr, sizeof (magic));

  /* read png info */
  png_read_info (png_ptr, info_ptr);

  /* get some useful information from the header */
  bit_depth = png_get_bit_depth (png_ptr, info_ptr);
  color_type = png_get_color_type (png_ptr, info_ptr);

  /* convert indexed color images to RGB */
  if (color_type == PNG_COLOR_TYPE_PALETTE)
    png_set_palette_to_rgb (png_ptr);

  /* convert 1-, 2- and 4-bit grayscale images to 8-bit grayscale */
  if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8)
    png_set_gray_1_2_4_to_8 (png_ptr);

  if (png_get_valid (png_ptr, info_ptr, PNG_INFO_tRNS))
    png_set_tRNS_to_alpha (png_ptr);

  /*
  if (bit_depth == 16)
    png_set_strip_16 (png_ptr);
  else if (bit_depth < 8)
    png_set_packing (png_ptr);
  */

  /* update info structure to apply transformations */
  png_read_update_info (png_ptr, info_ptr);

  /* retrieve updated information */
  png_get_IHDR (png_ptr, info_ptr,
                (png_uint_32 *)(&texinfo->width),
                (png_uint_32 *)(&texinfo->height),
                &bit_depth, &color_type, NULL, NULL, NULL);

  /* get image format and components per pixel */
  GetPNGtextureInfo (color_type, texinfo);

  /* allocate memory for the pixel data: 2 bytes per 16-bit texel */
  texinfo->texels = (GLubyte *)malloc (sizeof (GLubyte) *
                                       texinfo->width * texinfo->height * 2);

  /* set up a pointer array; each entry points at the beginning of a row */
  row_pointers = (png_bytep *)malloc (sizeof (png_bytep) * texinfo->height);
  for (i = 0; i < texinfo->height; ++i)
    row_pointers[i] = (png_bytep)(texinfo->texels + i * texinfo->width * 2);

  /* read pixel data using row pointers */
  png_read_image (png_ptr, row_pointers);

  /* finish decompression and release memory */
  png_read_end (png_ptr, NULL);
  png_destroy_read_struct (&png_ptr, &info_ptr, NULL);

  /* we don't need the row pointers anymore */
  free (row_pointers);
  fclose (fp);
  return texinfo;
}

GLuint loadPNGTexture (const char *filename)
{
  GLuint tex_id = 0;
  png_tex = ReadPNGFromFile (filename);
  if (png_tex && png_tex->texels) {
    glGenTextures (1, &png_tex->id);
    glBindTexture (GL_TEXTURE_2D, png_tex->id);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D (GL_TEXTURE_2D, 0, png_tex->internalFormat,
                  png_tex->width, png_tex->height, 0,
                  png_tex->format, GL_UNSIGNED_BYTE, png_tex->texels);
    tex_id = png_tex->id;
    free (png_tex->texels);
    free (png_tex);
  }
  return tex_id;
}
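On the byte-assembly question above: PNG stores 16-bit samples big-endian, while glTexImage2D with GL_UNSIGNED_SHORT expects host order (little-endian on x86), so the byte pairs must either be swapped on the CPU or the driver told to swap them with glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE) before the upload. A minimal sketch of the CPU-side option (the helper names are my own, not from the post):

```c
#include <stddef.h>
#include <stdint.h>

/* Combine a big-endian byte pair, as stored in a 16-bit PNG,
   into one 16-bit sample value. */
uint16_t be16(uint8_t hi, uint8_t lo)
{
    return (uint16_t)((hi << 8) | lo);
}

/* Swap a whole buffer of big-endian 16-bit samples into little-endian
   host order, so the buffer can be passed directly to
   glTexImage2D(..., GL_LUMINANCE, GL_UNSIGNED_SHORT, buf).
   Alternatively, leave the buffer alone and set
   glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE) so the driver swaps. */
void swap16_buffer(uint8_t *buf, size_t texel_count)
{
    for (size_t i = 0; i < texel_count; ++i) {
        uint8_t tmp   = buf[2 * i];
        buf[2 * i]     = buf[2 * i + 1];
        buf[2 * i + 1] = tmp;
    }
}
```

Either way, the upload then uses GL_UNSIGNED_SHORT (not GL_UNSIGNED_BYTE) so that GL reads two bytes per texel into the GL_LUMINANCE16 internal format.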
  12. Thanks harveypekar! Quote: "just set the heightmap to a different texture before you render an individual block." I will try that. Quote: "Texture arrays (or even the big heightmap texture) would be an optimization, so you can use batching/instancing on all your small blocks." I am not very clear on your description of texture arrays. I found a paper, "Real-time Rendering and Manipulation of Large Terrains 2009_12.pdf", published by IEEE; in it the authors use a texture array for terrain rendering.
  13. I have read the paper "GPU Terrain Rendering" in Game Programming Gems 6. The method uses one small VBO and reuses it over and over to render a large terrain. I can use a 33x33 vertex buffer to render the whole terrain by vertex texture fetch, using one 2049*2049 heightmap texture. My question is how to get past the limitation of using only one heightmap texture. How can I use several heightmap textures for VTF? Is the texture array extension in SM4.0 a correct way to solve this? The following describes what I would use for large terrain rendering. I plan to split one 8192*8192 heightmap into 8*8 heightmaps using the L3DT software, so each is 1024*1024. I plan to use the texture array extension to load each 1024*1024 block as a layer covering the whole 8192*8192 terrain. I am not sure this is correct. Is there any paper which describes combining VTF and texture arrays for large terrain rendering?
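As a sanity check on the 8*8 tiling idea above, the mapping from a global heightmap texel to an array layer plus a local coordinate is simple arithmetic; a small sketch using the sizes described in the post (the helper name is hypothetical). Each 1024*1024 tile would then be uploaded as one layer of a GL_TEXTURE_2D_ARRAY via glTexImage3D/glTexSubImage3D, and the vertex shader selects the layer from the chunk's tile index.

```c
#define TILE_SIZE 1024   /* each L3DT tile is 1024*1024            */
#define TILES_X   8      /* 8*8 tiles cover the 8192*8192 terrain  */

/* Map a global texel coordinate on the 8192*8192 heightmap to a
   texture-array layer and a local coordinate inside that layer,
   using a row-major layer order. */
void global_to_layer(int gx, int gy, int *layer, int *lx, int *ly)
{
    int tx = gx / TILE_SIZE;
    int ty = gy / TILE_SIZE;
    *layer = ty * TILES_X + tx;   /* which 1024*1024 tile / layer */
    *lx    = gx % TILE_SIZE;      /* texel position inside the tile */
    *ly    = gy % TILE_SIZE;
}
```

Since a whole 64*64 chunk always falls inside one tile (1024 is a multiple of the chunk size), the layer index can be passed per chunk as a uniform, so the vertex texture fetch never has to cross a tile boundary.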
  14. Quote: Now you do the height fetching on the GPU, so you only need to send a 64*64 grid to the GPU for every chunk. Method 1: draw only one 64*64 grid and translate it per chunk in the vertex shader. Method 2: draw the whole grid normally, with each block having 64*64 vertices. Is there a difference between the two methods? Maybe I should try them and find out for myself. Thanks.
  15. Thanks FeverGames! Quote: what language are you writing your code in? I am writing OpenGL code with GLSL 1.2, and I use a GeForce 9500 card. Quote: if you want to transform the data not in realtime on the GPU, you can create (1024/128)^2 tiles of data and transform them on the CPU. I want to do frustum culling on the GPU, not the CPU. Is that possible? Can you provide some code? Thanks a lot.