  • Similar Content

    • By Jens Eckervogt
Hello guys,

Please help me figure this out: why doesn't my Wavefront (.obj) model show up? I've already checked for errors and found none.
using OpenTK;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace Tutorial_08.net.sourceskyboxer
{
    public class WaveFrontLoader
    {
        public static RawModel LoadObjModel(string filename, Loader loader)
        {
            var inPositions = new List<Vector3>();
            var inTexcoords = new List<Vector2>();
            var inNormals = new List<Vector3>();
            var positions = new List<float>();
            var texcoords = new List<float>();
            var indices = new List<int>();
            int nextIdx = 0;

            using (var reader = new StreamReader(File.Open("Contents/" + filename + ".obj", FileMode.Open), Encoding.UTF8))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] currentLine = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    if (currentLine.Length == 0)
                        continue;

                    if (currentLine[0] == "v")
                    {
                        inPositions.Add(new Vector3(
                            float.Parse(currentLine[1]),
                            float.Parse(currentLine[2]),
                            float.Parse(currentLine[3])));
                    }
                    else if (currentLine[0] == "vt")
                    {
                        inTexcoords.Add(new Vector2(
                            float.Parse(currentLine[1]),
                            float.Parse(currentLine[2])));
                    }
                    else if (currentLine[0] == "vn")
                    {
                        inNormals.Add(new Vector3(
                            float.Parse(currentLine[1]),
                            float.Parse(currentLine[2]),
                            float.Parse(currentLine[3])));
                    }
                    else if (currentLine[0] == "f")
                    {
                        // Each face vertex is "position/texcoord/normal" with 1-based indices.
                        for (int v = 1; v <= 3; v++)
                        {
                            string[] vertex = currentLine[v].Split('/');
                            Vector3 pos = inPositions[int.Parse(vertex[0]) - 1];
                            positions.Add(pos.X);
                            positions.Add(pos.Y);
                            positions.Add(pos.Z);
                            Vector2 tc = inTexcoords[int.Parse(vertex[1]) - 1];
                            texcoords.Add(tc.X);
                            texcoords.Add(tc.Y);
                            indices.Add(nextIdx++);
                        }
                    }
                }
            }
            return loader.loadToVAO(positions.ToArray(), texcoords.ToArray(), indices.ToArray());
        }
    }
}

I have also tried another method, but the model still doesn't show. I'm getting frustrated because no OpenTK developers have helped me so far.
Please help me fix this.

My download (mega.nz) should contain the original files, but I had no success with it either.
- I have added the .blend source and the PNG file there.
       
PS: Why is our community not more active? I have been waiting a long time.
      Thanks !
    • By Kjell Andersson
Spectral rendering enters the computer graphics world. After a period without many feature improvements, Dual Heights Software has now released a new major version of its water texture generator, Caustics Generator. The new version includes full-spectrum color rendering, allowing the simulation of spectral refraction, which produces prismatic coloring effects and brings caustics simulation to a new level. The spectral functionality is available in the Pro version on Windows, Mac, and Linux.
      Read more about the Caustics Generator at https://www.dualheights.se/caustics/
       

    • By Ty Typhoon
I'd like to build my A-Team now.
       
I need loyal people who trust and believe in a dream.
If you have time and no need for direct pay, please contact me now.
       
We can't pay now; you will receive a lifetime percentage of earnings if the released game makes money.
      If I get the best people together for a team, everything should be possible.
       
       
What I need:
- Programmer (C++)
- Unity / Unreal - we must check what's possible; please share your experience with me
- Sculptor / 3D artist
- Animator
- Marketing / promotion
       
       
What I do:
- Studio owner
- Director
- Recruit exactly you
- Sound design
- Main theme composing
- Vocals
- Game design
- Gun, sword, shield and weapon design
- Character, plant and animal design
       
       
Please don't ask about the name of the game, or about designs or screenshots.
The game will definitely be shaped by our skills and yours if you join the team.
       
       
      Planned for the big Game:
      - 1st person shooter
      - online multiplayer
      - character manipulation
- a complete, big open world with persistent actions and reactions
      - gunstore with many items to buy
      - many upgrades for players
      - specials like mini games
       
      So if you are interested in joining a team with a nearly complete game idea, contact me now and tell me what you can do.
       
      discord:
      joerg federmann composing#2898
       
       
    • By Myopic Rhino

      Introduction
      If you've been looking into terrain texturing techniques, you've probably heard about texture splatting. The term was coined by Charles Bloom, who discussed it at http://www.cbloom.com/3d/techdocs/splatting.txt. With no disrespect to Charles Bloom, it was not the most clear or concise of articles, and has left many confused in its wake. While many use and recommend it, few have taken the time to adequately explain it. I hope to clear up the mystery surrounding it and demonstrate how to implement it in your own terrain engine.
      The Basics
      What is texture splatting? In its simplest form, it is a way to blend textures together on a surface using alphamaps.

      I will use the term alphamap to refer to a grayscale image residing in a single channel of a texture. It can be any channel - alpha, red, green, blue, or even luminance. In texture splatting, it will be used to control how much of a texture appears at a given location. This is done by a simple multiplication, alphamap * texture. If the value of a texel in the alphamap is 1, that texture will appear there in full value; if the value of a texel in the alphamap is 0, that texture will not appear there at all.

      For terrain, the texture might be grass, dirt, rock, snow, or any other type of terrain you can think of. Bloom refers to a texture and its corresponding alphamap as a splat. An analogy would be throwing a glob of paint at a canvas. The splat is wherever you see that glob. Multiple globs of paint layered on top of each other create the final picture.

Let's say you have a 128x128 heightmap terrain, divided into 32x32 chunks. Each chunk would then be made up of 33x33 vertices. Each chunk has the base textures repeated several times over it - but the alphamap is stretched over the entire area. (0, 0) of the chunk would have alphamap coordinates of (0, 0) and texture coordinates of (0, 0). (32, 32) of the chunk would have alphamap coordinates of (1, 1) and texture coordinates of (x, x), where x is the number of times you want the textures to be repeated. x will depend on the resolution of the textures. Try to make sure they repeat enough to give detail up close, but not so much that the repetition is obvious from far away.

      The resolution of your alphamap per chunk is something you need to decide for yourself, but I recommend powers of two. For a 32x32 chunk, you could have a 32x32 alphamap (one texel per unit), a 64x64 alphamap (two texels per unit), or even a 128x128 alphamap (four texels per unit). When deciding what resolution to use, remember that you need an alphamap for every texture that appears on a given chunk. The higher the resolution is, the more control over the blending you have, at the cost of memory.

      The size of the chunk is a little trickier to decide. Too small and you will have too many state changes and draw calls, too large and the alphamaps may contain mostly empty space. For example, if you decided to create an alphamap with 1 texel per unit with a chunk size of 128x128, but the alphamap only has non-zero values in one small 4x4 region, 124x124 of your alphamap is wasted memory. If your chunk size was 32x32, only 28x28 would be wasted. This brings up a point: if a given texture does not appear at all over a given chunk, don't give that chunk an alphamap for that texture.

      The reason the terrain is divided into chunks is now apparent. Firstly, and most importantly, it can save video memory, and lots of it. Secondly, it can reduce fillrate consumption. By using smaller textures, the video card has less sampling to do if the texture does not appear on every chunk. Thirdly, it fits into common level of detail techniques such as geomipmapping that require the terrain to be divided into chunks anyway.
      Creating the Blends
The key to getting smooth blending is linear interpolation of the alphamaps. Suppose there is a 1 right next to a 0. When the alphamap is stretched out over the terrain, Direct3D creates an even blend between the two values. The stretched alphamap is then combined with the terrain texture, causing the texture itself to blend.

      Rendering then becomes the simple matter of going through each chunk and rendering the splats on it. Generally, the first splat will be completely opaque, with the following splats having varying values in their alphamaps. Let me demonstrate with some graphics. Let's say the first splat is dirt. Because it is the first that appears on that chunk, it will have an alphamap that is completely solid.
[Image: solid white alphamap * dirt texture = opaque dirt splat]

      After the first splat is rendered, the chunk is covered with dirt. Then a grass layer is added on top of that:
[Image: solid white alphamap * dirt texture = dirt base]
[Image: dirt base + grass splat = smooth dirt-to-grass blend]

      The process is repeated for the rest of the splats for the chunk.

      It is important that you render everything in the same order for each chunk. Splat addition is not commutative. Skipping splats won't cause any harm, but if you change the order around, another chunk could end up looking like this:
[Image: grass splat + opaque dirt splat = dirt covering the grass]

      The grass splat is covered up because the dirt splat is completely opaque and was rendered second.

      You may be wondering why the first splat should be opaque. Let's say it wasn't, and instead was only solid where the grass splat was clear. Here's what happens:
[Image: partial first splat + grass splat = blend with holes]

      It's obvious this does not look right when compared with the blend from before. By having the first splat be completely opaque, you prevent any "holes" from appearing like in the picture above.
      Creating the Alphamaps
      Now that we know what texture splatting is, we need to create the alphamaps to describe our canvas. But how do we decide what values to give the alphamaps?

      Some people base it off of terrain height, but I recommend giving the ability to make the alphamaps whatever you want them to be. This gives you the flexibility to put any texture wherever you want with no limitations. It's as simple as drawing out the channels in your paint program of choice. Even better, you could create a simple world editor that allows the artist to see their alphamap and interact with it in the actual world.
      Implementation
      Let's take a step back and look at what we have:
1. Some sort of terrain representation, such as a heightmap
2. A set of textures to be rendered on the terrain
3. An alphamap for each texture

Look at #3. We know that each alphamap has to be in a texture. Does this mean that every alphamap needs its own texture? Thankfully, the answer is no. Because the alphamap only has to be in a single channel of a texture, we can pack up to four alphamaps into a single texture - one in red, one in green, one in blue, and one in alpha. In order to access these individual channels we will need to use a pixel shader, and because we need five textures (one with the alphamaps and four to blend), PS 1.4 is required. Unfortunately this is still a stiff requirement, so I will show how to use texture splatting with the fixed function pipeline as well as with a pixel shader.
      Splatting with the Fixed Function Pipeline
      Using the fixed function pipeline has one benefit that the pixel shader technique lacks: it will run on virtually any video card. All it requires is one texture unit for the alphamap, one texture unit for the texture, and the correct blending states.

      I chose to put the alphamap in stage 0 and the texture in stage 1. This was to stay consistent with the pixel shader, which makes most sense with the alphamap in stage 0. The texture stage states are relatively straightforward from there. Stage 0 passes its alpha value up to stage 1. Stage 1 uses that alpha value as its own and pairs it with its own color value.
// Alphamap: take the alpha from the alphamap, we don't care about the color
g_Direct3DDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
g_Direct3DDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);

// Texture: take the color from the texture, take the alpha from the previous stage
g_Direct3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
g_Direct3DDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
g_Direct3DDevice->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
g_Direct3DDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

We have to set the blending render states as well in order to get the multiple splats to combine together correctly. D3DRS_SRCBLEND is the alpha coming from the splat being rendered, so we set it to D3DBLEND_SRCALPHA. The final equation we want is FinalColor = Alpha * Texture + (1 - Alpha) * PreviousColor. This is done by setting D3DRS_DESTBLEND to D3DBLEND_INVSRCALPHA.
       
g_Direct3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
g_Direct3DDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
g_Direct3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

Splatting with a Pixel Shader
      Why even bother with the pixel shader? Using all the channels available in a texture instead of only one saves memory. It also allows us to render four splats in a single pass, reducing the number of vertices that need to be transformed. Because all of the texture combining takes place in the shader, there are no texture stage states to worry about. We just load the texture with an alphamap in each channel into stage 0, the textures into stages 1 through 4, and render.
ps_1_4

////////////////////////////////
// r0: alphamaps
// r1 - r4: textures
////////////////////////////////

// Sample textures
texld r0, t0
texld r1, t1
texld r2, t1
texld r3, t1
texld r4, t1

// Combine the textures together based off of their alphamaps
mul r1, r1, r0.x
lrp r2, r0.y, r2, r1
lrp r3, r0.z, r3, r2
lrp r0, r0.w, r4, r3

The mul instruction multiplies the first texture by its alphamap, which is stored in the red channel of the texture in sampler 0. The lrp instruction does the following arithmetic: dest = src0 * src1 + (1 - src0) * src2. Let's say r0.x is the alphamap for a dirt texture stored in r1, and r0.y is the alphamap for a grass texture stored in r2. r2 contains the following after the first lrp: GrassAlpha * GrassTexture + (1 - GrassAlpha) * DirtBlended, where DirtBlended is DirtAlpha * DirtTexture. As you can see, lrp does the same thing as the render states and texture stage states we set before. The final lrp uses r0 as the destination register, which is the register used as the final pixel color. This eliminates the need for a final mov instruction.

      What if you only need to render two or three splats for a chunk? If you want to reuse the pixel shader, simply have the remaining channels be filled with 0. That way they will have no influence on the final result. You could also create another pixel shader for two splats and a third for three splats, but the additional overhead of more SetPixelShader calls may be less efficient than using an extra instruction or two.

Multiple passes are required if you need to render more than four splats for a chunk. Let's say you have to render seven splats. You first render the first four, leaving three left. The alpha channel of your second alphamap texture would be filled with 0, causing the fourth texture to cancel out in the equation. You simply set the alphamap texture and the three textures to be blended and render. The D3DRS_DESTBLEND and D3DRS_SRCBLEND render states from before perform the same thing as the lrp in the pixel shader, allowing the second pass to combine seamlessly with the first.
      The Demo
      The demo application uses the two techniques described here to render a texture splatted quad. I decided not to go for a full heightmap to make it as easy as possible to find the key parts in texture splatting. Because of this, the demo is completely fillrate limited. The initial overhead of the pixel shader may cause some video cards to perform worse with it than with its fixed function equivalent, so take the frame rates with a grain of salt. The pixel shader will almost always come out ahead in a more complex scene.

      You can toggle between the fixed function pipeline and the pixel shader through the option in the View menu.

      The textures used are property of nVidia(R) and are available in their full resolution at http://developer.nvidia.com/object/IO_TTVol_01.html.
      The Problem of Seams
      If texture splatting has a downfall, it is this: when two neighboring splats come together, an unattractive seam forms between them. The look can be recreated by tiling four of the example splats from before.



      Why does this happen? Let's look at the grass alphamap of the top two sections:


      Here space has been added between them and the problem becomes apparent. The one on the left does not know the border values of the one on the right. When the video card performs its linear blend, it has no way of knowing that there is in fact a black texel right next to a white one. It simply assumes the same color is on the border.

This is not an easy problem to fix, and many games leave it untouched. It can be disguised a little by properly wrapping textures and by skilled level design, but I have not thought of an elegant solution. In my experience it has been more trouble than it's worth to solve this problem, and I believe the advantages of texture splatting far outweigh the issue.
      Conclusion
      Hopefully this article has cleared up the mystery behind texture splatting. There are, of course, enhancements to be made, but texture splatting in its basic form is a powerful and flexible technique. It creates a smooth blend between different layers of terrain while giving detail at any distance and avoids the patterned look a detail map can give. Its main disadvantage is that it is very fillrate consuming, but with video cards becoming ever more powerful and the abilities of pixel shaders increasing, this is not an issue on modern and future hardware. Its ease of use and flexibility make it a perfect choice for texturing your terrain.

      Happy splatting!
      Sources
      Terrain Texture Compositing by Blending in the Frame-Buffer by Charles Bloom, http://www.cbloom.com/3d/techdocs/splatting.txt

      And, of course, the helpful people at http://www.gamedev.net.

      Feel free to send any questions or comments to email or private message @Raloth on the forums!
       
       
OpenGL Packing more than four pixel elements into OpenGL Texture

Hello, I have been working on SH Irradiance map rendering, and I have been using a GLSL pixel shader to render SH irradiance to 2D irradiance maps for my static objects. I already have it working with 9 3D textures so far for the first 9 SH functions.

In my GLSL shader, I have to send in 9 SH coefficient 3D textures that use RGBA8 as the pixel format: RGB holds the coefficients for red, green, and blue, and A marks whether the voxel is in use (for the 3D texture solidification shader, to prevent bleeding).

My problem is, I want to knock this number of textures down to something like 4 or 5. Getting even lower would be a godsend. This is because I eventually plan on adding more SH Coefficient 3D Textures for other parts of the game map (such as inside rooms, as opposed to the outside), to circumvent irradiance probe bleeding between rooms separated by walls. I don't want to reach the 32 texture limit too soon. Also, I figure that it would be a LOT faster.

Is there a way I could, say, store 2 sets of SH Coefficients for 2 SH functions inside a texture with RGBA16 pixels? If so, how would I extract them from inside GLSL? Let me know if you have any suggestions ^^.

Edited by Solid_Spy


There is no 32 texture limit.  True, there are only GLenums up to GL_TEXTURE31, but for textures higher than that you can just use GL_TEXTURE0 + num.


Two ideas come to mind:

Use Spherical Gaussians instead of Spherical Harmonics. You can knock your textures down to 5 textures and still have some pretty acceptable results.

Use deferred shading and your texture limit concerns disappear entirely.

You can definitely store 2 sets of data inside a single texture by packing the data. However, you'll lose all ability to interpolate your results, and with 3D textures that I imagine are somewhat coarse, interpolation is not something you'll want to lose.

Edited by Styves

48 minutes ago, Styves said:

Use Spherical Gaussians instead of Spherical Harmonics. You can knock your textures down to 5 textures and still have some pretty acceptable results.

I would love to learn how to use spherical gaussians, but unfortunately I can't find much info or any tutorials on the subject. The only info I could find were calc and theory based dissertations sadly. Do you know where I can find any practical information on the subject?

 

Quote

Use deferred shading and your texture limit concerns disappear entirely.

Do you mean using only 9 SH textures for one texture atlas, then another 9 for another, and then combine both textures using deferred rendering? That I can do.

 

Quote

You can definitely store 2 sets of data inside a single texture by packing the data. However, you'll lose all ability to interpolate your results, and with 3D textures that I imagine are somewhat coarse, interpolation is not something you'll want to lose.

Ah, that blows :(.

Edited by Solid_Spy

1 hour ago, mhagain said:

There is no 32 texture limit.  True, there are only GLenums up to GL_TEXTURE31, but for textures higher than that you can just use GL_TEXTURE0 + num.

Really? I've heard that a lot of graphics cards from the last 5 years can only bind up to 32 textures per shader invocation. My card is about a year old, so I assume I can't use more than 32. Though I could be wrong - I'll look into that.

Edited by Solid_Spy


 

On 7/22/2017 at 0:09 AM, Solid_Spy said:

I would love to learn how to use spherical gaussians, but unfortunately I can't find much info or any tutorials on the subject. The only info I could find were calc and theory based dissertations sadly. Do you know where I can find any practical information on the subject?

MJP made a real nice post about this and also wrote a sample demo for people to try. Should be enough to get you going.

On 7/22/2017 at 0:09 AM, Solid_Spy said:

Do you mean using only 9 SH textures for one texture atlas, then another 9 for another, and then combine both textures using deferred rendering? That I can do.

Each of your 3D textures likely has a world bounds, so you can render those bounds as a deferred volume (a box, or whatever fits) and apply it to the screen using deferred shading, adding up all the results of your 3D volumes (or using blending so that they overwrite what's below). This way none of your geometry shaders need to know which volumes affect them, you don't need to scan for volumes the object must be lit by, and you don't have to worry about excessive overdraw (where a light might relight the same pixel many times).

If you don't have a deferred shading pipeline in your engine or aren't sure how to add one, then you can also do an old-school forward shading pipeline where you render the geometry several times with additive blending over a base (which has ambient + sunlight/global lighting) using a new set of volume textures in each pass. Heavier on geometry load if you have heavy scenes but simple to implement (just iterate over your geometry draw for every N lights/volumes and set the appropriate data and blend state).

However if you can keep your texture count within whatever limit you set (either from hardware or a decision on your part) you can just use the single pass. I assume each object likely won't have more than 1-3 volumes affecting it, so if that assumption is correct and you've got a 32-texture limit, you should be able to handle several volumes. Even with 9 textures per volume that's 3 volumes per object + 5 textures for other stuff (diffuse, spec/gloss/roughness, normals, emissive - all of which can be combined in certain ways to reduce the number of samples and textures you actually need, the above can be combined down to 2 textures if you know what you're doing). I think in this case you should be just fine.

Cutting down to 5-gaussian SG lighting gives you more volumes and textures.

Edited by Styves

On 22/07/2017 at 0:32 AM, Solid_Spy said:

Really? I hear that a lot of graphics cards in the last 5 years can only hold up to 32 textures per shader execution. My card is about a year old, so I assume I cannot hold more than 32. Though I could be wrong. I'll look into that.

Check GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS  for the vertex shader and GL_MAX_TEXTURE_IMAGE_UNITS  for the fragment shader.


Your hardware's shader model may have a limit on the number of textures that can be accessed in a single PASS, but most (modern) scenes require multiple passes. If you're changing geometry and texture at the same time, there's really not much overhead to draw stuff using multiple passes - however I wonder why you need so many textures, have you considered using a texture atlas / megatexture approach?

