Everything posted by Sword7

  1. OK, I removed a few glm::transpose calls, since I only transpose the rotation part while the translation part stays unchanged. I changed GL_FALSE to GL_TRUE in glUniformMatrix4fv... The display became very distorted: all lines exploded in every direction from the center (origin). So I changed it back. Oops! I corrected a few lines of code: I changed glUniformMatrix4fv to glUniform3fv for gCamera (it is a vec3, so a matrix setter was wrong). It still produced the same result... I will study my book more and figure it out.
  2. I am working on planet terrain rendering with OpenGL 4.x shader programming. When I move closer to the ground, the terrain shakes more and more, like an earthquake. Does anyone know a solution to eliminate this "earthquake effect" when getting close to the ground for landing, etc.? I use one unit per kilometer for terrain mapping.
  3. Without the transpose, I tried to move, but everything moved in the opposite direction, which was even more confusing. With the transpose, I move around correctly.
  4. Perhaps I will try Eigen instead, but I plan to develop my own math library. Look at my code above: I cleared the fourth row of the matrix for translation, since the matrix was already transposed. I noticed that the camera position ends up in the translation part of the MV matrix, according to my debug printout of the matrix.
  5. OK, I studied my 3D Engine Design for Virtual Globes book. I tried to implement the RTE (relative-to-eye) method with DSFUN90, but it did not work. I also implemented vertices with the error difference. I saw nothing when I cleared the translation part of the matrix; when I left the matrix unchanged, the planet reappeared but still jitters when very close. I searched for a solution but can't find one so far. My code (CPU side and GPU side) is:

```cpp
prm.dmProj = glm::perspective(glm::radians(DOFS_DEFAULT_FOV),
    double(gl.getWidth()) / double(gl.getHeight()), DDIST_NEAR, DDIST_FAR);
prm.dmView = glm::transpose(glm::toMat4(prm.crot));

vec3f_t gCamerah, gCameral;
convertDoubleToTwoFloats(prm.obj.cpos, gCamerah, gCameral);

prm.dmWorld = prm.dmView * glm::translate(glm::transpose(prm.obj.orot), prm.obj.cpos);
prm.dmWorld[3].x = 0;
prm.dmWorld[3].y = 0;
prm.dmWorld[3].z = 0;
prm.mvp = mat4f_t(prm.dmProj * prm.dmWorld);

GLint mvpLoc = glGetUniformLocation(pgm->getID(), "mvp");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(prm.mvp));

// gCamerah/gCameral are vec3 uniforms, so they must be set with
// glUniform3fv, not glUniformMatrix4fv.
GLint chLoc = glGetUniformLocation(pgm->getID(), "gCamerah");
glUniform3fv(chLoc, 1, glm::value_ptr(gCamerah));
GLint clLoc = glGetUniformLocation(pgm->getID(), "gCameral");
glUniform3fv(clLoc, 1, glm::value_ptr(gCameral));
```

```glsl
#version 420

// vertex buffer objects
layout (location=0) in vec3 vPositionh;
layout (location=1) in vec3 vPositionl;
layout (location=2) in vec3 vNormal;
layout (location=3) in vec2 vTexCoord;

uniform mat4 mvp;
uniform vec3 gCamerah;
uniform vec3 gCameral;

out vec4 myColor;
out vec2 texCoord;

void main()
{
    // DSFUN90-style two-float subtraction: (vPosition - gCamera)
    vec3 t1 = vPositionl - gCameral;
    vec3 e  = t1 - vPositionl;
    vec3 t2 = ((-gCameral - e) + (vPositionl - (t1 - e))) + vPositionh - gCamerah;
    vec3 hdiff = t1 + t2;
    vec3 ldiff = t2 - (hdiff - t1);

    gl_Position = mvp * vec4(hdiff + ldiff, 1.0);
    myColor = vec4(0.7, 0.7, 0.7, 1.0);
    texCoord = vTexCoord;
}
```
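The convertDoubleToTwoFloats helper isn't shown in the post; a minimal sketch of the usual DSFUN90-style split (the type names here are assumptions based on the call site above) might look like:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-ins for the post's vector types.
struct vec3d_t { double x, y, z; };
struct vec3f_t { float x, y, z; };

// DSFUN90-style split: the high part is the double rounded to float,
// the low part is the rounding error, so hi + lo ~= the original value.
static void splitDouble(double v, float &hi, float &lo)
{
    hi = static_cast<float>(v);
    lo = static_cast<float>(v - static_cast<double>(hi));
}

void convertDoubleToTwoFloats(const vec3d_t &v, vec3f_t &hi, vec3f_t &lo)
{
    splitDouble(v.x, hi.x, lo.x);
    splitDouble(v.y, hi.y, lo.y);
    splitDouble(v.z, hi.z, lo.z);
}
```

The shader then subtracts the camera and vertex in high/low pairs so the large magnitudes cancel before single-precision rounding matters.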
  6. OK, I found your blog after searching for it. Thanks. I will work on a new LOD method to replace my old code, which uses trigonometry for LOD determination, etc.
  7. Where is the blog that you mention? I have the book "3D Engine Design for Virtual Globes".
  8. I switched my code to all 64-bit transform matrices (CPU side), converted to 32-bit matrices for rendering. I also implemented a per-tile world matrix before rendering each tile.

```cpp
void TerrainTile::setWorldMatrix(renderParameter &prm)
{
    int nlat = 1 << lod;
    int nlng = 2 << lod;

    double lat = PI * double(ilat) / double(nlat);
    double lng = PI*2 * (double(ilng) / double(nlng)) - PI;

    double dx = /* prm.obj.orad * */ sin(lat) * cos(lng);
    double dy = /* prm.obj.orad * */ cos(lat);
    double dz = /* prm.obj.orad * */ sin(lat) * -sin(lng);

    // Determine offsets from object center for tile center
    prm.dtWorld = prm.obj.orot;
    prm.dtWorld[3][0] = dx*prm.obj.orot[0][0] + dy*prm.obj.orot[0][1] + dz*prm.obj.orot[0][2] + prm.obj.cpos.x;
    prm.dtWorld[3][1] = dx*prm.obj.orot[1][0] + dy*prm.obj.orot[1][1] + dz*prm.obj.orot[1][2] + prm.obj.cpos.y;
    prm.dtWorld[3][2] = dx*prm.obj.orot[2][0] + dy*prm.obj.orot[2][1] + dz*prm.obj.orot[2][2] + prm.obj.cpos.z;
}
```

I tried dtWorld with the radius applied and ran it: the planet exploded in all directions! After commenting out 'prm.obj.orad' it was back to normal. I brought the camera a few centimeters above the ground. Everything is very stable (no shaking), but 32-bit float errors are clearly visible: the terrain is not flat. When I move the camera around, the terrain jumps up and down in blocky steps. I am still figuring out why the terrain looks like a saw-tooth wave instead of flat, even though camera movement and orientation are very smooth. I tried a per-tile matrix (tile center as origin) for higher accuracy, but it did not work. I am looking for another way to make the tile center the origin of the world matrix.
  9. Well, I split the model matrix from the view and projection matrices and ran it. Good news: the dynamic shaking is gone! Now only static errors remain. I will work on the per-tile matrix next. Yes, I have the book "3D Engine Design for Virtual Globes".

```glsl
#version 420

// vertex buffer objects
layout (location=0) in vec3 vPosition;
layout (location=1) in vec3 vNormal;
layout (location=2) in vec2 vTexCoord;

uniform mat4 gWorld;
uniform mat4 gViewProj;

out vec4 myColor;
out vec2 texCoord;

void main()
{
    gl_Position = gViewProj * gWorld * vec4(vPosition, 1.0);
    myColor = vec4(0.7, 0.7, 0.7, 1.0);
    texCoord = vTexCoord;
}
```

```cpp
prm.model = glm::translate(glm::transpose(prm.obj.orot), prm.obj.cpos);
prm.mvp = prm.mproj * prm.mview;

GLint mwLoc = glGetUniformLocation(pgm->getID(), "gWorld");
glUniformMatrix4fv(mwLoc, 1, GL_FALSE, glm::value_ptr(prm.model));
GLint mvpLoc = glGetUniformLocation(pgm->getID(), "gViewProj");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(prm.mvp));
```
  10. OK, I figured out why... Yes, I assign tiles at 6371 km from the origin. I tried normalized coordinates from the planet origin, and that did not resolve the problem. I tested both on the IEEE 754 float converter linked above, and both failed due to float rounding. So it is a float-precision problem: applying world space relative to the planet origin loses precision.

In Scene::render:

```cpp
prm.mproj = glm::perspective(glm::radians(OFS_DEFAULT_FOV),
    float(gl.getWidth()) / float(gl.getHeight()), DIST_NEAR, DIST_FAR);
prm.mview = glm::transpose(glm::toMat4(prm.crot));
```

In TerrainManager::render:

```cpp
prm.model = glm::translate(glm::transpose(prm.obj.orot), prm.obj.cpos);
prm.mvp = prm.mproj * prm.mview * prm.model;
```

I now see the problem: this uses a local world space relative to the planet center (origin). As a possible solution, I have to split the model (world) matrix from the view and projection matrices and apply world space at each tile center (origin) when rendering: (planet position + tile position) - camera position. I will try that and see what happens...
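A minimal sketch of that "(planet position + tile position) - camera position" step, assuming plain double-precision vectors (the names here are illustrative, not the post's actual types):

```cpp
#include <cassert>
#include <cmath>

struct dvec3 { double x, y, z; };

// Compute the tile-center translation relative to the camera entirely in
// doubles; the large terms cancel, so the result is small and safe to cast
// to float for the per-tile world matrix.
dvec3 cameraRelativeOffset(const dvec3 &planetPos, const dvec3 &tilePos,
                           const dvec3 &camPos)
{
    return { planetPos.x + tilePos.x - camPos.x,
             planetPos.y + tilePos.y - camPos.y,
             planetPos.z + tilePos.z - camPos.z };
}
```

Because the subtraction happens before the cast to float, a camera one millimeter from a tile center yields an offset of about 0.000001 unit with full precision, instead of being swallowed by the ~100-million-unit magnitudes.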
  11. OK, I will work on the clipping test tomorrow. My camera moves and pans very smoothly using my keyboard as movement controls. Here are my video clips to watch. Update: I decided to test clipping. I set the far plane to 1.0 and saw nothing, as expected. Good. Then I commented a line out in the fragment shader code and again saw nothing. Should I output a simple solid color as a test, to make sure something is drawn? https://youtu.be/-GHSpStO5Pc https://youtu.be/pAnQTJtRrC8
  12. OK, I updated my code by switching to all 64-bit vertices, etc., converted to 32-bit floats for the rendering/vertex buffers with 32-bit model/view/projection matrices. The distortion is now gone, but the vertices are still shaky. LOD determination is now stable, and rendering is faster and smoother than before. I believe some issues remain on the GPU side due to float rounding in shader processing (the limitation of 32-bit floats). A few hundred meters above the ground everything looks stable, but landing (a few meters above the ground) brings back the shaking, the "earthquake effect".
  13. I have not implemented elevation yet, just a flat texture at this time. I calculate 32-bit vertices from the origin (0, 0, 0) in a normalized or planet-sized coordinate system for spherical vertices, with 16-bit indices and 32-bit texture coordinates. It uses GL_TRIANGLES with CCW winding order. Both coordinate systems gave the same result (locally shaky/distorted). I am using glEnable(GL_DEPTH_TEST) and glDepthFunc(GL_LEQUAL), and GL_LINEAR for texture filtering:

```cpp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```

Here are my shader programs for rendering planet textures. The vertex program:

```glsl
#version 420

// vertex buffer objects
layout (location=0) in vec3 vPosition;
layout (location=1) in vec3 vNormal;
layout (location=2) in vec2 vTexCoord;

uniform mat4 mvp;

out vec4 myColor;
out vec2 texCoord;

void main()
{
    gl_Position = mvp * vec4(vPosition, 1.0);
    myColor = vec4(0.7, 0.7, 0.7, 1.0);
    texCoord = vTexCoord;
}
```

The fragment program:

```glsl
#version 420
// #extension GL_ARB_shading_language_420pack: enable  // for GLSL versions before 420

layout (binding=0) uniform sampler2D sTexture;

in vec2 texCoord;
in vec4 myColor;

out vec4 fragColor;

void main()
{
    fragColor = texture(sTexture, texCoord);
    // fragColor = myColor;
}
```
  14. OK. When far away, the textures are stable (no shaking or distortion). I suspect float inaccuracy due to the large offset between the viewpoint and the vertices, since the planet's radius is 6400 km and I use one unit per kilometer, so one meter is 0.001 unit. At the horizon, distant textures are more stable than nearby ones. I tried shrinking the planet to a normalized coordinate system (one unit per planet radius) and moving it to (0, 0, -5.0), but it was still shaky and distorted; no difference. I tried different near and far planes, which did not solve it either; a smaller far plane actually made the shaking/distortion worse. The original values were near = .001 and far = 1.0e9 (needed to see stars in the 3D coordinate system). With near = .0000001 and far = 10.0 it got worse. I set the camera position to always be (0, 0, 0), the origin of a camera-relative world coordinate system, by subtracting the view position from the planet position. I still need a way to remove the large offset between viewpoint and vertices to eliminate the shaking/distortion.
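The precision claim above can be checked directly: with one unit per kilometer, the gap between adjacent 32-bit floats near the planet radius is about half a meter. A small sketch (not from the post's code):

```cpp
#include <cassert>
#include <cmath>

// Spacing between a 32-bit float and the next representable value above it.
float ulpAt(float v)
{
    return std::nextafterf(v, 1.0e30f) - v;
}
```

At 6371 units (km) the spacing is about 0.00049 unit, i.e. roughly half a meter, so sub-meter vertex and camera positions snap between representable values; that quantization is exactly the "earthquake" jitter, and it disappears once coordinates are kept small (e.g. camera-relative).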
  15. No; when landing from the air or space, the ground shakes and distorts more. I think the values are too small to render accurately: one meter is 0.001 unit. The further the terrain textures are, the more stable they look.
  16. I googled HDR support on consumer video cards, but learned that Nvidia/AMD block 10-bit color access from OpenGL in windowed mode; they only support 10-bit color depth through DirectX, in both windowed and full-screen modes. However, they fully support 10-bit color depth on Linux and Mac. Does anyone have a successful implementation of windowed OpenGL using a buffered RGB10A2 mode? Or do we have to use full-screen-only OpenGL for HDR10 programming?
  17. Good news! I found it by googling: NVIDIA recently released a new driver for Windows that fully supports 10-bit color. https://petapixel.com/2019/07/29/nvidia-unveils-new-studio-driver-with-support-for-10-bit-color-for-creatives/
  18. I am developing my own orbital flight simulator (space simulator). I am figuring out how to write a routine to generate a Gaussian star/glare texture for the starry sky. I googled for that but can't find any source so far. I only found it in the open-source Celestia code, but it is not clearly explained there. Does anyone know a good source, in a book or on a website, that shows how to create a Gaussian star/glare texture? Tim
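For reference, a common approach (a sketch, not Celestia's actual code) is to evaluate a 2D Gaussian falloff into a small single-channel texture and use it as a point-sprite splat for each star:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Build an n x n 8-bit Gaussian "glare" splat centered in the texture.
// sigma (in texels) controls how quickly the glow falls off.
std::vector<uint8_t> makeGaussianStarTexture(int n, double sigma)
{
    std::vector<uint8_t> tex(n * n);
    const double c = (n - 1) * 0.5;        // texel-space center
    for (int y = 0; y < n; ++y) {
        for (int x = 0; x < n; ++x) {
            double dx = x - c, dy = y - c;
            double r2 = dx * dx + dy * dy;
            // Gaussian: brightest at the center, ~0 at the corners.
            double v = std::exp(-r2 / (2.0 * sigma * sigma));
            tex[y * n + x] = static_cast<uint8_t>(v * 255.0 + 0.5);
        }
    }
    return tex;
}
```

Uploading this as a GL_RED (or luminance) texture and modulating it by the star's color in the fragment shader gives a soft glare; summing two Gaussians with different sigmas approximates a bright core plus a wide halo.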
  19. Folks, I tried googling Vulkan and SDL2, but the results did not help much; I only found stand-alone or GLFW tutorials. Does anyone have a good tutorial on Vulkan with SDL2? Thanks
  20. I recently learned about voxel (volumetric) terrain mapping. The technique is very new to me, and I have a few questions. Is it possible to generate voxel terrain on an earth-sized planet? Voxels can render caves, mines, arches, etc., even an Avatar-like planet with floating islands in the air. The information is in the GPU Gems 3 book, available on the NVIDIA website. Look at this link: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch01.html
  21. Try voxel terrain, because it can form caves, mines, arches, etc. in the landscape.
  22. I tried to port a DirectX program to OpenGL. My planets were inverted horizontally (rotating the opposite way around the Y axis), and the Z coordinate was inverted too. It took me some months to figure out. I finally recognized that there are two different coordinate systems, left-handed and right-handed: DirectX uses left-handed coordinates, while OpenGL uses right-handed by default. I googled it and learned a lot. I prefer the left-handed coordinate system because it is easier to program with. I also tried googling a projection matrix formula for left-handed coordinates but did not find any sources. Does anyone know the projection matrix formula for a left-handed coordinate system? Does anyone know any good guides on converting DirectX code to OpenGL for porting? Tim
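As a sketch of a left-handed, OpenGL-style (NDC depth in [-1, 1]) perspective matrix: compared with the right-handed version, only the signs in the z row flip so that +z points into the screen. This mirrors what glm::perspectiveLH computes; treat it as an illustration rather than a drop-in.

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4, matching OpenGL/GLM conventions: m[col][row].
struct mat4 { double m[4][4] = {}; };

// Left-handed perspective projection with NDC depth in [-1, 1].
// Versus the right-handed matrix, m[2][2] and m[2][3] change sign,
// so clip-space w becomes +z_eye instead of -z_eye.
mat4 perspectiveLH(double fovy, double aspect, double zNear, double zFar)
{
    const double f = 1.0 / std::tan(fovy / 2.0);
    mat4 p;
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = (zFar + zNear) / (zFar - zNear);
    p.m[2][3] = 1.0;                                    // w_clip = +z_eye
    p.m[3][2] = -(2.0 * zFar * zNear) / (zFar - zNear);
    return p;
}
```

A quick sanity check: a point at z = zNear maps to NDC z = -1 and a point at z = zFar maps to +1, with visible geometry at positive eye-space z.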
  23. I got the book "High Dynamic Range Imaging" (published by Elsevier) and read some pages. I learned that HDR requires deep color depth (10/12 bits per channel). I heard that many vendors and independent developers have implemented HDR features in their games and applications, because the UHD standard requires deep color (HDR) to eliminate noticeable color banding. I have seen color banding on SDR displays and want to get rid of it. I googled HDR for OpenGL but learned that it requires Quadro or FireGL cards. How do they get HDR working on consumer video cards? That's why I want an HDR implementation for my OpenGL programs. Tim
  24. HDR programming

    OK, I got it. I googled it and learned about it on the NVIDIA website: NVIDIA says that all 900- and 1000-series GPUs support HDR (deep color) output for HDR displays. The page mentions HDMI 2.0; does DisplayPort support HDR output too? Some day I will buy a new HDR monitor and try it. Thanks. I forgot to mention OS support for HDR: Windows 10 currently supports HDR only in full screen, and Microsoft plans to add system-wide HDR support soon. Do Linux and Mac support HDR output? I also have Ubuntu 18.04; I heard that MesaGL added 10 bpc for HDR support.
  25. OK, I got it. Thanks for the information. But that old HDR implementation still tone-maps to LDR output (causing color banding); I reviewed its pictures and noticed some banding. What about HDR10 and Dolby Vision (10/12-bit color output), which the UHD standard requires? There are new HDR10 monitors on the market now, offering a wider color gamut that eliminates color banding.