Project: Project Taival

Dev Diary #040 - Further Examination Of Heightmap Upscaling

ProjectTaival

Hello and welcome to this week's Dev Diary!

Today I'll continue examining how upscaling heightmaps affects the end result when you want to create more detailed terrain with ease.

After last week's tests with heightmap upscales, I decided to upscale the heightmap even further - from 3000 x 3000 pixels to 40 000 x 40 000 pixels - just to see if there would be any return on investment; in other words, whether the extra detail that the upscaling algorithm creates would bring any benefit.

 

Initial Testing

First, let's compare the first set of renders. Like last week, all settings remain unchanged between renders; only the heightmap changes. This eliminates the differences that moving the lighting or camera between renders could cause by creating or showing shadows from a different perspective. The setup is sketched as a small Blender script after the list below.

  • Plane 8m x 8m in size.
  • Subdivided several times, first by 100, second by 6 and third by 1 (or 100:6:1 subdivision)
  • Diffuse Scale 1.0, midpoint 0.5 (defaults)
  • Heightmap samples from top to bottom: 750 x 750, 3000 x 3000, and 10 000 x 10 000 pixels (taken from the 3000 x 3000, 12 000 x 12 000, and 40 000 x 40 000 pixel heightmaps respectively).
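Below is a minimal Blender Python (bpy) sketch of that setup, so the test is easier to reproduce. The heightmap path is a placeholder, and I'm assuming the Scale/midpoint settings above correspond to the Displace modifier's Strength and Midlevel:

```python
import bpy

# 8 m x 8 m plane
bpy.ops.mesh.primitive_plane_add(size=8)
plane = bpy.context.active_object

# Subdivide in three passes: 100, then 6, then 1 cuts (100:6:1)
bpy.ops.object.mode_set(mode='EDIT')
for cuts in (100, 6, 1):
    bpy.ops.mesh.subdivide(number_cuts=cuts)
bpy.ops.object.mode_set(mode='OBJECT')

# Displace the plane using the heightmap as an image texture
tex = bpy.data.textures.new("Heightmap", type='IMAGE')
tex.image = bpy.data.images.load("//heightmap.png")  # placeholder path
mod = plane.modifiers.new("Displace", type='DISPLACE')
mod.texture = tex
mod.strength = 1.0   # "Scale 1.0" (default)
mod.mid_level = 0.5  # "midpoint 0.5" (default)
```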

[Image: 00_750_x_750_Render_(100x6x1).png]

The above 750 x 750 pixel heightmap sample is from the original 3000 x 3000 heightmap.

 

[Image: 00_3k_x_3k_Render_(100x6x1).png]

The above 3000 x 3000 pixel heightmap sample is from the upscaled 12 000 x 12 000 pixel heightmap.

 

[Image: 00_10k_x_10k_Render_(100x6x1).png]

The above 10 000 x 10 000 pixel heightmap sample is from the upscaled 40 000 x 40 000 pixel heightmap.

 

Both upscales were made from the original 3000 x 3000 pixel heightmap, and a 1/16th-area piece (a 1 km² piece) was then sliced from the bottom-left corner of each map in order to highlight the difference in detail.
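Here is a hedged Pillow sketch of that slicing (file names are placeholders for my own files). With grid=4 it produces the 1/16th-area samples above; grid=8 gives the smaller samples used further down:

```python
from PIL import Image

# The 40k map exceeds Pillow's decompression-bomb guard, so lift it.
Image.MAX_IMAGE_PIXELS = None

def bottom_left_sample(path, grid=4):
    # PIL's origin is the top-left corner, so the bottom-left cell of a
    # grid x grid division spans x in [0, w/grid) and y in [h - h/grid, h).
    img = Image.open(path)
    w, h = img.size
    return img.crop((0, h - h // grid, w // grid, h))

for src in ("heightmap_3000.png", "heightmap_12000.png", "heightmap_40000.png"):
    bottom_left_sample(src).save(src.replace(".png", "_sample.png"))
```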

As you can see, the differences are most notable between the original and the smaller upscale; from there on, the added detail is a tad harder to notice - but it is there.

After this, I also tried ramping up the data points - in other words, subdividing the plane even further - to see how much that changed the outcome. The settings are otherwise identical, but the subdivision is greater (100:10:1 instead of 100:6:1).
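For a rough sense of what those schemes mean in mesh density, here is a back-of-the-envelope estimate. It assumes (my assumption; I haven't verified Blender's internals) that each subdivide pass with n cuts multiplies the quad count by (n + 1)²:

```python
# Rough quad counts under the (n + 1)^2-per-pass assumption.
def quad_count(cuts):
    total = 1
    for n in cuts:
        total *= (n + 1) ** 2
    return total

print(f"{quad_count([100, 6, 1]):,}")   # 100:6:1  -> 1,999,396
print(f"{quad_count([100, 10, 1]):,}")  # 100:10:1 -> 4,937,284
```

If that holds, the jump from roughly two million to roughly five million quads would explain both the extra detail and the growing performance cost.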

[Image: 03_750_x_750_Render_(100x10x1).png]

750 x 750 pixel heightmap sample, with the 100:10:1 subdivision.

 

[Image: 04_3k_x_3k_Render_(100x10x1).png]

3000 x 3000 pixel heightmap sample, with the 100:10:1 subdivision.

 

[Image: 05_10k_x_10k_Render_(100x10x1).png]

10 000 x 10 000 pixel heightmap sample, with the 100:10:1 subdivision.

 

The added data points did increase the level of detail quite a bit. The difference between the two upscales is still hard to see, but it is visible on closer inspection - more so on a 4K monitor, though it can be seen on a Full HD monitor as well. Stacking these images on top of each other and toggling between them shows the differences much more clearly.

 

Going For More Detail

Since increasing the land area of the plane to bring out even more detail would look essentially the same in the rendered images, I'm not conducting those tests. However, there is still one more way to up the ante: zooming even further into the minute detail differences of the upscales. To do this, I needed to take even smaller samples, dividing the heightmaps into 8x8 grids (instead of the 4x4 grids above) and taking one piece out of each - the slicing sketch from earlier does this with grid=8. For consistency, I'll again use the bottom-left corner as the sample.

[Image: 06_375_x_375_Render_(100x10x1).png]

375 x 375 pixel heightmap sample from the 3k heightmap, with the 100:10:1 subdivision.

 

[Image: 07_1500_x_1500_Render_(100x10x1).png]

1500 x 1500 pixel heightmap sample from the 12k heightmap, with the 100:10:1 subdivision.

 

[Image: 08_5000_x_5000_Render_(100x10x1).png]

5000 x 5000 pixel heightmap sample from the 40k heightmap, with the 100:10:1 subdivision.

Edit: You can see the difference most clearly in the illuminated back portion of the landscape, as more "grain" or roughness among the white.

The Question Remains

But is it really worth it? That depends on whether you want to squeeze every last bit of variation out of the heightmap - it is not strictly necessary, but it might create just enough minute differences in the terrain that the end result looks more natural. On the downside, the more faces you have, the bigger the performance hit.

Even though the added data points (mesh resolution) affect the performance of the game, Skyrim also has high-definition mesh mods that turn the original, rather low-poly models into much more detailed meshes - especially the rocks, mountain faces, and characters. In that light, it might be more than feasible to increase the detail level of your terrain meshes even further. The land area of the smaller samples in the above testing is roughly 500 by 500 meters, fitted onto an 8 by 8 meter plane object.

The second thing to consider is how many objects you would need to divide your terrain into in order to maximize the resolution you can get out of the heightmaps. A single object miles long would demand serious processing power and RAM, so a large monolithic map area isn't feasible for everyone. Creating a large number of smaller pieces, on the other hand, is more work-intensive and requires more time and patience from the developer, who must insert them all into the game world and line them up in the correct order.
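As a sketch of that tiled approach (next week's diary covers doing this seamlessly in GIMP; this is just the same idea scripted with Pillow, file names again placeholders):

```python
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the 40k map trips the size guard otherwise

def slice_grid(path, n, out_pattern="tile_r{row}_c{col}.png"):
    # Cut the image into an n x n grid of equal tiles, named by row and
    # column so they can be lined up again in the game world.
    img = Image.open(path)
    w, h = img.size
    tw, th = w // n, h // n
    for row in range(n):
        for col in range(n):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            img.crop(box).save(out_pattern.format(row=row, col=col))

slice_grid("heightmap_40000.png", n=8)  # 64 tiles of 5000 x 5000 px
```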

 

The Conclusion

Edit: The benefits of upscaling your heightmap only show when you divide it into smaller samples - the smaller the pieces you divide your upscaled heightmap into, the more benefit the upscaling has. Alternatively, the same benefits can be seen when making a larger map, closer to real-life size, but to my knowledge a huge monolithic map with lots of detail takes more processing power. That may be an utterly outdated notion these days - I need to do more research on it. What I know for sure is that it limits the amount of subdivision that can be used while editing the landscape: Blender is very likely to crash on my PC when I go with any subdivision larger than 100:10:1, and the amount of subdivision matters.

Edit: When upscaling your heightmap, the algorithm you use matters most: the more gradual shades between white and black the algorithm creates while trying to make sense of the original shapes, the more detail gets added to the landscape. Still, this method is not for purists, as it does not bring back the absolute original shape of the land - it just adds more variation.
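As an illustration (not necessarily the exact tool chain I used), Pillow exposes several such algorithms; the smoother filters interpolate new intermediate shades between existing pixels, while nearest-neighbour adds none:

```python
from PIL import Image

src = Image.open("heightmap_3000.png")  # placeholder file name
target = (12000, 12000)

src.resize(target, Image.NEAREST).save("up_nearest.png")  # blocky, no new shades
src.resize(target, Image.BICUBIC).save("up_bicubic.png")  # smooth gradients
src.resize(target, Image.LANCZOS).save("up_lanczos.png")  # sharper smooth result
```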

Edit: As an example, say the heightmap represents a 4 km sized area and the accuracy of the satellite/airplane measurement was somewhere between 2 and 30 meters. If your heightmap has an accuracy rating of 10 meters, then by upscaling the heightmap to 10 times the size of your source image, you get a simulated accuracy of 1 meter. If you want more granularity than that, you can go as large as you want (or can), but at some point there will be no visual benefit; to find that point, you need to experiment for yourself on a case-by-case basis.
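The same arithmetic in one line - the simulated accuracy is simply the source accuracy divided by the upscale factor:

```python
source_accuracy_m = 10   # metres represented by one source pixel
upscale_factor = 10      # e.g. 3000 px -> 30 000 px per side
print(source_accuracy_m / upscale_factor)  # 1.0 m (simulated, not measured)
```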

Edit: As the GameDev forum does not show the images in full screen, some of the differences between the images are apparently lost when you try to spot tiny details. The original images in full screen show the differences much more clearly. You can download them here: 040.7z

All in all, the more work you are willing to put into your terrain mesh fidelity, the more you seem to get out of upscaling your heightmaps. This seems to hold as far as you are willing to take it, or at least to much smaller (or larger, depending on how you look at it) scales than my testing here covers. It will be interesting to see how much of a performance hit a higher-detail terrain mesh causes; that will be revealed when I get to the first testing phase, with only a small land area and the 3D character model I created earlier.

But before that, next week's Dev Diary will be about slicing big images seamlessly into smaller image files semi-automatically using GIMP, so stay tuned!

Thank you for tuning in, and I'll see you on the next one!

You can check out all mid-week announcements about the project on these official channels:

• YouTube • Facebook • Twitter • Discord • Reddit • Pinterest • SoundCloud • LinkedIn •



