
Showing results for tags '3D' in content posted in Graphics and GPU Programming.
Found 401 results

Hi there! I have one issue for now. I'm creating a RayCasting application, and for my floor and ceiling I'm trying to use Mode7 for rendering (I think this is easier to understand), but I can't align the RayCasting walls with the Mode7 floor. I use a rotation matrix to rotate the floor. Do you know what I need to consider in the implementation to fix that? Or do you know of a tutorial explaining it? Thanks!!! (Check the image below to understand.) Here is my Mode7 code:

    function mode7() {
        let _x = 0;
        let _y = 0;
        let z = 0;
        let sin = Math.sin(degreeToRadians(data.player.angle));
        let cos = Math.cos(degreeToRadians(data.player.angle));
        for (let y = data.projection.halfHeight; y < data.projection.height; y++) {
            for (let x = 0; x < data.projection.width; x++) {
                _x = ((data.projection.width - x) * cos) - (x * sin);
                _y = ((data.projection.width - x) * sin) + (x * cos);
                _x /= z;
                _y /= z;
                if (_y < 0) _y *= -1;
                if (_x < 0) _x *= -1;
                _y *= 8.0;
                _x *= 8.0;
                _y %= data.floorTextures[0].height;
                _x %= data.floorTextures[0].width;
                screenContext.fillStyle = data.floorTextures[0].data[Math.floor(_x) + Math.floor(_y) * data.floorTextures[0].width];
                screenContext.fillRect(x, y, 1, 1);
            }
            z += 1;
        }
    }

3D Diligent Engine 2.4.c - Dear ImGui, GLTF2.0, Shadows, API Improvements and more
DiligentDev posted a topic in Graphics and GPU Programming
The latest release of Diligent Engine combines a number of recent updates (Vulkan on iOS, GLTF2.0 support, shadows), significantly improves performance of the OpenGL backend, updates the API, adds integration with Dear ImGui and implements new samples and tutorials. Some of the new features in this release:
- GLTF2.0 support (loader, PBR renderer and sample viewer)
- Shadowing Component and Shadows Sample
- Integration with the Dear ImGui library and Dear ImGui demo
- Tutorial13 - Shadow Map
- Tutorial14 - Compute Shader
- Tutorial15 - Multiple Windows
Check it out on GitHub. 
How is the BSDF function used in the Kajiya rendering equation? We know that path tracing provides a numerical solution, and we first saw the BSDF function in the path tracing algorithm. Also, is there a way to use multiple BSDF functions in a full rendering process? If you have links to any books or websites, please share them!
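For reference, the Kajiya rendering equation with the BSDF $f_s$ under the integral is:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_s(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i$$

Path tracing estimates this integral by Monte Carlo sampling rather than in closed form; the BSDF appears both as a weight on each sampled path segment and, usually, as the importance-sampling distribution for the bounce direction. Using multiple BSDFs then simply means that each surface intersection evaluates the $f_s$ attached to the material hit at that point.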

I'm unable to find my TerrainTypeSampler when I try to call it from my code! Why?

    struct VertexToPixel
    {
        float4 Position : POSITION;
        float4 Color : COLOR0;
        float LightingFactor : TEXCOORD0;
        float2 TextureCoords : TEXCOORD1;
        //texture2D Texture : TEXTURE; //TODO: Figure out how to change the texture used for the current vertex.
    };

    struct PixelToFrame
    {
        float4 Color : COLOR0;
    };

    float4x4 xView;
    float4x4 xProjection;
    float4x4 xWorld;
    float3 xLightDirection;
    float xAmbient;
    bool xEnableLighting;
    bool xShowNormals;

    // Texture Samplers
    texture2D xTexture;
    SamplerState TextureSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D TerrainType;
    SamplerState TerrainTypeSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D Grass;
    SamplerState GrassSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D Rock;
    SamplerState RockSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D Sand;
    SamplerState SandSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D Snow;
    SamplerState SnowSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };
    texture2D Water;
    SamplerState WaterSampler { magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };

    // Technique: Textured
    VertexToPixel TexturedVS(float4 inPos : POSITION, float3 inNormal : NORMAL, float2 inTexCoords : TEXCOORD0)
    {
        VertexToPixel Output = (VertexToPixel)0;
        float4x4 preViewProjection = mul(xView, xProjection);
        float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
        Output.Position = mul(inPos, preWorldViewProjection);
        Output.TextureCoords = inTexCoords;
        float3 Normal = normalize(mul(normalize(inNormal), xWorld));
        Output.LightingFactor = 1;
        if (xEnableLighting)
            Output.LightingFactor = saturate(dot(Normal, xLightDirection));
        return Output;
    }

    PixelToFrame TexturedPS(VertexToPixel PSIn)
    {
        PixelToFrame Output = (PixelToFrame)0;
        float4 Color = TerrainType.Sample(TerrainTypeSampler, PSIn.TextureCoords);
        float4 GrassClr = float4(0, 255, 0, 255);
        float4 RockClr = float4(255, 0, 0, 255);
        float4 SandClr = float4(255, 255, 0, 255);
        float4 SnowClr = float4(255, 255, 255, 255);
        float4 WaterClr = float4(12, 0, 255, 255);
        float4 Diff = Color - GrassClr;
        if (!any(Diff)) Output.Color = tex2D(GrassSampler, PSIn.TextureCoords);
        Diff = Color - RockClr;
        if (!any(Diff)) Output.Color = tex2D(RockSampler, PSIn.TextureCoords);
        Diff = Color - SandClr;
        if (!any(Diff)) Output.Color = tex2D(SandSampler, PSIn.TextureCoords);
        Diff = Color - SnowClr;
        if (!any(Diff)) Output.Color = tex2D(SnowSampler, PSIn.TextureCoords);
        Diff = Color - WaterClr;
        if (!any(Diff)) Output.Color = tex2D(WaterSampler, PSIn.TextureCoords);
        Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
        Output.Color.rgb *= saturate(PSIn.LightingFactor + xAmbient);
        return Output;
    }

    technique Textured_2_0
    {
        pass Pass0
        {
            VertexShader = compile vs_4_0_level_9_1 TexturedVS();
            PixelShader = compile ps_4_0_level_9_1 TexturedPS();
        }
    }

    technique Textured
    {
        pass Pass0
        {
            VertexShader = compile vs_4_0_level_9_1 TexturedVS();
            PixelShader = compile ps_4_0_level_9_1 TexturedPS();
        }
    }

When I'm calling this, TerrainTypeSampler returns null! 
    public void Draw()
    {
        Matrix WorldMatrix = Matrix.CreateTranslation(m_TerrainWidth / 2.0f, 0, m_TerrainHeight / 2.0f);
        m_Effect.CurrentTechnique = m_Effect.Techniques["Textured"];
        m_Effect.Parameters["TerrainTypeSampler"].SetValue(m_TerrainType);
        m_Effect.Parameters["GrassSampler"].SetValue(m_Grass);
        m_Effect.Parameters["RockSampler"].SetValue(m_Rock);
        m_Effect.Parameters["SandSampler"].SetValue(m_Sand);
        m_Effect.Parameters["SnowSampler"].SetValue(m_Snow);
        m_Effect.Parameters["WaterSampler"].SetValue(m_Water);
        //m_Effect.Parameters["TextureSampler"].SetValue(m_Grass);
        m_Effect.Parameters["xView"].SetValue(m_CController.View/*m_ViewMatrix*/);
        m_Effect.Parameters["xProjection"].SetValue(m_CController.Projection/*m_ProjectionMatrix*/);
        m_Effect.Parameters["xWorld"].SetValue(WorldMatrix/*Matrix.Identity*/);
        RasterizerState RS = new RasterizerState();
        RS.CullMode = CullMode.None;
        RS.FillMode = FillMode.WireFrame;
        m_Device.RasterizerState = RS;
        m_Device.Clear(Color.CornflowerBlue);
        foreach (EffectPass Pass in m_Effect.CurrentTechnique.Passes)
        {
            Pass.Apply();
            m_Device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, m_Vertices, 0, m_Vertices.Length, m_Indices, 0, m_Indices.Length / 3, VertexPositionNormalTexture.VertexDeclaration);
        }
    }

Please help!

3D Surface Gradient Based Bump Mapping Framework
mmikkelsen posted a topic in Graphics and GPU Programming
I have just released a new technical paper on a new framework for bump mapping. In my own opinion it presents a new and much more practical way both to manage and to think of bump mapping. It allows you to blend correctly across multiple normal maps using multiple sets of texture coordinates, and it can handle tangent space as well as object space normal maps, including the ability to adjust the bump intensity on both kinds.

Most applications store one precomputed tangent space per vertex, which allows tangent space normal maps with only one set of texture coordinates and works primarily with rigid meshes. Traditional vertex tangent space is supported by the framework, but solutions for tangent space generated on the fly in the pixel shader are also provided. These are very well suited for additional sets of texture coordinates, procedural texture coordinates, and deformed meshes. Furthermore, the framework also allows you to blend correctly with bump maps defined on a volume, such as decal projectors, triplanar projection and procedural noise functions.

Yet another nice trick is "post resolve bump mapping", where we bump map a surface and then, rather than blending or adding contributions, do a second round of bump mapping applied to the new virtual surface resulting from the first round. Good examples of this are detail normal mapping applied to a virtual surface after Parallax Mapping/POM in the first round, or alternatively triplanar projection applied to a surface that has already been normal mapped.

https://mmikkelsen3d.blogspot.com/p/3dgraphicspapers.html 
3D HLSL - how to streamline this shadow map filter?
george7378 posted a topic in Graphics and GPU Programming
Hey everyone, I have the following loop in my main shader which samples my shadow map with a 3x3 PCF grid to achieve antialiased shadows:

    int shadowFactor = 0;
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            float2 shadowSampleOffset = float2(x/ShadowMapSize, y/ShadowMapSize);
            if (tex2D(ShadowMapTextureSampler, projectedTextureCoordinatesLight + shadowSampleOffset).r < input.ProjectedPositionLight.z - 0.002f)
            {
                shadowFactor += 1;
            }
        }
    }
    float lightingCorrectionFactor = 1 - shadowFactor/9.0f;
    diffuseLightingFactor *= lightingCorrectionFactor;
    specularLightingFactor *= lightingCorrectionFactor;

...however, it always pushes the number of instructions up to the point where I normally need to use a higher shader model (in MonoGame, it causes me to go from the default 4_0_level_9_1 up to 4_0_level_9_3). Can anyone see a more obvious and efficient way to do the PCF operation? Thanks! 
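One commonly suggested direction (a general observation, not a MonoGame-specific fix) is to remove the branch so the compiler can keep the loop flat; hardware comparison samplers (SampleCmp on shader model 4+) go further by doing the compare per tap. Here is the arithmetic form of the same 3x3 filter modelled in C++, with sampleDepth standing in for the shadow-map fetch:

```cpp
#include <functional>

// C++ model of the 3x3 PCF loop with the branch replaced by an arithmetic
// comparison (in HLSL this would be a step()/comparison instead of an if).
// sampleDepth(x, y) stands in for the offset shadow-map fetch.
double pcf3x3(const std::function<double(int, int)>& sampleDepth,
              double receiverDepth, double bias)
{
    double shadow = 0.0;
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            // 1.0 when the tap is occluded, 0.0 otherwise -- no divergent branch.
            shadow += (sampleDepth(x, y) < receiverDepth - bias) ? 1.0 : 0.0;
    return 1.0 - shadow / 9.0;  // lighting correction factor, 0..1
}
```

The comparison-to-float form typically maps to fewer instructions than the if/accumulate pattern, though whether it fits under the 9_1 instruction limit would need to be checked against the compiled output.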
DX11 3D smooth voxel terrain LOD on GPU
MasterReDWinD posted a topic in Graphics and GPU Programming
Hi, I currently have Naive Surface Nets working in a Compute Shader and I'm using a Dictionary to load chunks around the player. Retrieving meshes from the Compute Shader back to the CPU and creating Unity collision meshes is causing a lot of stalling, so I've decided to tackle this by implementing a LOD system. I've found the 0fps articles on simplifying isosurfaces to be a wealth of information. After studying those and looking at the papers, I've identified what I think are the two main directions for approaching LOD: Mesh Simplification and Geometry Clipmaps. I am aiming to get one or both of these working in a Compute Shader. This brings me to my questions: 1) I've seen a lot of feedback that Mesh Simplification is going to be too slow for real-time terrain editing; is there a consensus on this? Most algorithms appear to use the half-edge format, but I can't find any information on efficiently converting an indexed mesh into half-edge format. Can anyone provide an example? 2) Can the Geometry Clipmap approach work with 3D terrain as opposed to heightmaps? Would I need to store my chunks in Octrees to make it work? Are there any GPU implementation examples that don't work on heightmaps? Any advice to help me further focus my research would be much appreciated. Thanks. 
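On question 1, a half-edge structure can be built from an indexed triangle list in a single pass with a hash map keyed on directed edges. This is a hedged sketch (the struct layout and names are illustrative, not from any particular library):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Minimal half-edge construction from an indexed triangle list.
// Each triangle (a,b,c) contributes half-edges a->b, b->c, c->a; the twin of
// directed edge (u,v) is the half-edge (v,u) from the neighbouring triangle.
struct HalfEdge {
    int origin;  // vertex this half-edge starts at
    int twin;    // index of the opposite half-edge, or -1 on a boundary
    int next;    // next half-edge around the same triangle
};

std::vector<HalfEdge> buildHalfEdges(const std::vector<int>& indices)
{
    std::vector<HalfEdge> he(indices.size());
    std::unordered_map<std::uint64_t, int> edgeMap;  // (u,v) -> half-edge id
    auto key = [](int u, int v) {
        return (std::uint64_t(std::uint32_t(u)) << 32) | std::uint32_t(v);
    };
    for (std::size_t t = 0; t < indices.size(); t += 3) {
        for (int e = 0; e < 3; ++e) {
            int i = int(t) + e;
            int u = indices[t + e];
            int v = indices[t + (e + 1) % 3];
            he[i] = { u, -1, int(t) + (e + 1) % 3 };
            auto it = edgeMap.find(key(v, u));  // look for the opposite edge
            if (it != edgeMap.end()) {          // found: link the two twins
                he[i].twin = it->second;
                he[it->second].twin = i;
            }
            edgeMap[key(u, v)] = i;
        }
    }
    return he;
}
```

This is O(n) expected time, so the conversion itself should not be the bottleneck; the harder part on GPU is that the hash map does not translate directly to a compute shader (sorting edge keys is the usual substitute there).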
3D Looking for spherical terrain collision algorithms
Sword7 posted a topic in Graphics and GPU Programming
I googled for terrain collisions but found only some sources about flat worlds. I want collision detection for spherical terrain worlds. I have the "3D Engine Design for Virtual Globes" book, but it does not mention any algorithms for collision detection for landing, rolling, walking, crashing, etc... Does anyone have any good sources for spherical terrain collision detection? 
3D Downsides to light binning with rasterization?
VertexSoup posted a topic in Graphics and GPU Programming
So I've been researching light binning for tiled and clustered rendering. From what I've read, rasterizing light volumes seems to beat async compute in most areas, except for:
- Async compute can be overlapped with the depth / shadowmap passes while rasterizing cannot
- Conservative rasterization is not fully supported by some HW (GCN most notably)
- It's complex to implement
Dmitry Zhdan has a nice presentation where he lays out how well rasterizing scales as resolution and scene complexity increase. Call of Duty has an impressive Z-binning implementation, and MJP used it on The Order and his own DeferredIndexing demo. But it still seems like there are way more async compute implementations, including Unity with their new HDRP. Is light binning with rasterization really that hard to implement? Or perhaps there are many more games using it that I might have missed? 
Shader source: This compiles just fine, but when I try to load it in MonoGame, SharpDX gives me a cryptic "Invalid Parameter" error.

3D XNA - should I manually dispose VertexBuffers?
george7378 posted a topic in Graphics and GPU Programming
Hey everyone, I've got a memory leak in my XNA/MonoGame app, and I'm pretty sure it's due to the fact that I create a lot of VertexBuffers manually and never dispose of them. I'm dealing with a dynamic quadtree terrain where every possible tile, at all levels of detail (down to a limit I impose), is created at loading time, each with its own vertex buffer which is sent straight to the GPU when I want to draw that particular tile. This means that, each time I load a level, I have something like 24,000 small VertexBuffers comprising my terrain, and each frame I pick and choose which ones to draw based on the camera position. The problem is that when I exit to the main menu and reload, I create another NEW quadtree which generates another 24,000 VertexBuffers. I can see that the memory allocated to the previous VertexBuffers is NOT being reclaimed, even though the quadtree objects which own them are no longer referenced and have been garbage collected. So, my question is: do I have to manually dispose of a VertexBuffer every time I'm done with it in my program? I've seen a load of tutorials for XNA which create VertexBuffers and IndexBuffers manually and allocate data into them (here and here, to name two of the most famous) and they never dispose of them. Are they doing it wrong, or are they correct since the buffers are in use throughout the whole program and so will be disposed cleanly when it exits anyway? Appreciate the help 
Why, at the fragment shader level, when we want to obtain the normal at a specific pixel, is it obtained by interpolating the normal vectors of the vertices that form the triangle? Since the pixel is a point on the surface of a triangle, wouldn't the normal vector simply be the normal of that triangle? I think there is something I am not considering. Thank you.
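The usual reasoning: the triangles are a discretization of a smooth surface, so each vertex stores the normal of the underlying surface (typically an average of adjacent face normals), and the rasterizer blends those per pixel to fake curvature; using the face normal instead gives flat shading with visible facets. A small C++ sketch of the blend the hardware performs (barycentric interpolation plus renormalization):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Smooth ("Phong") shading: the per-pixel normal is the barycentric blend
// of the three vertex normals, renormalized. If all three vertex normals
// are equal, this degenerates to flat shading with the face normal.
Vec3 interpolateNormal(Vec3 n0, Vec3 n1, Vec3 n2,
                       double b0, double b1, double b2)  // b0 + b1 + b2 = 1
{
    Vec3 n { b0 * n0.x + b1 * n1.x + b2 * n2.x,
             b0 * n0.y + b1 * n1.y + b2 * n2.y,
             b0 * n0.z + b1 * n1.z + b2 * n2.z };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };  // renormalize after the blend
}
```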

3D How to use Lux as unit for directional light intensity?
Silverlan posted a topic in Graphics and GPU Programming
I would like to use Lux as the unit for the light intensity values of the directional lights in my engine, but I'm not sure how that would actually work. Lux describes the light intensity for an area, but how would I translate that to the light intensity for a fragment? As far as I know, both Unity and Unreal use Lux as a unit, and the glTF light source specification uses it as well, so it's obviously possible somehow? I can't find any references on this topic whatsoever. I assume I have to calculate the size of a fragment in world units? If so, how would one do that? 
3D Cascaded Voxel Cone Tracing GI - How to sample sky color?
Anfaenger posted a topic in Graphics and GPU Programming
I've implemented a basic version of Voxel Cone Tracing that uses a single volume texture (covering a small region around the player). But I want to have large and open environments, so I must use some cascaded (LoD'ed) variant of the algorithm. 1) How to inject sky light into the voxels, and how to do it fast? (e.g. imagine a large shadowed area which is lit by the blue sky above.) I think, after voxelizing the scene, I will introduce an additional compute shader pass where, from each surface voxel, I will trace cones in the direction of the surface normal until they hit the sky (cubemap), but, I'm afraid, it would be slow with Cascaded Voxel Cone Tracing. 2) How to calculate (rough) reflections from the sky (and distant objects)? If the scene consists of many "reflective" pixels, tracing cones through all cascades would destroy performance. It looks like Voxel Cone Tracing is only suited for smallish indoor scenes (like Doom 3-style cramped spaces). 
DX11 Voxelization (VCT GI) - Handling out-of-bounds writes
Anfaenger posted a topic in Graphics and GPU Programming
I'm implementing single-pass surface voxelization (via Geometry Shader) for Voxel Cone Tracing (VCT), and after some debugging I've discovered that I have to insert out-of-bounds checks into the pixel shader to avoid voxelizing geometry which is outside the voxel grid:

    void main_PS_VoxelTerrain_DLoD( VSOutput pixelInput )
    {
        const float3 posInVoxelGrid = (pixelInput.position_world - g_vxgi_voxel_radiance_grid_min_corner_world) * g_vxgi_inverse_voxel_size_world;
        const uint color_encoded = packR8G8B8A8( float4( posInVoxelGrid, 1 ) );
        const int3 writecoord = (int3) floor( posInVoxelGrid );
        const uint writeIndex1D = flattenIndex3D( (uint3)writecoord, (uint3)g_vxgi_voxel_radiance_grid_resolution_int );
        // HACK:
        bool inBounds = writecoord.x >= 0 && writecoord.x < g_vxgi_voxel_radiance_grid_resolution_int
                     && writecoord.y >= 0 && writecoord.y < g_vxgi_voxel_radiance_grid_resolution_int
                     && writecoord.z >= 0 && writecoord.z < g_vxgi_voxel_radiance_grid_resolution_int;
        if( inBounds ) {
            rwsb_voxelGrid[writeIndex1D] = color_encoded;
        } else {
            rwsb_voxelGrid[writeIndex1D] = 0xFF0000FF; //RED, ALPHA
        }
    }

Why is this check needed, and how can I avoid it? Shouldn't Direct3D automatically clip the pixels falling outside the viewport? (I tried to ensure that out-of-bounds pixels are clipped in the geometry shader, and I also enable depthClip in the rasterizer, but it doesn't work.) Here's a picture illustrating the problem (extraneous voxels are highlighted with red). And here is the full HLSL code of the voxelization shader: 
OpenGL Recalculate normals of a regular grid surface (C++/OpenGL)
fabian7 posted a topic in Graphics and GPU Programming
I'm trying to calculate the normals of my grid surface. The map is 29952px x 19968px and each cell is 128px x 128px, so I have 36895 vertices. My flat map array is sent to the shaders with the following structure:

    float vertices[368950] = {
        // x     y      z      znoise xTex  yTex  xNorm yNorm zNorm Type
        16384, 16256, 16256, 0,     0.54, 0.45, 0,    0,    1,    1,
        16256, 16384, 16384, 0,     0.54, 0.45, 0,    0,    1,    1,
        ......
    }

I calculate the zNoise with a function float noise(float x, float y){}; and it works (I add it to y and z in the vertex shader).

Method 1

If I calculate normals using the finite-difference method, I obtain a nice result. Pseudocode:

    vec3 off = vec3(1.0, 1.0, 0.0);
    float hL = noise(P.xy - off.xz);
    float hR = noise(P.xy + off.xz);
    float hD = noise(P.xy - off.zy);
    float hU = noise(P.xy + off.zy);
    N.x = hL - hR;
    N.y = hD - hU;
    N.z = 2.0;
    N = normalize(N);

But in the case where I need to edit the map manually, for example in an Editor context where you set the zNoise with a tool to create mountains as you want, this method won't help. 
I get this nice result (seen from the minimap). (Normals are quite dark purposely.)

Method 2

[ASCII diagram: the centre vertex with its six neighbours numbered 1-6 around it (6 and 1 on top, 5 and 2 on the middle row, 4 and 3 on the bottom), with +X to the right and +Y up.]

So I'm trying to calculate the normal using the adjacent triangles, but the result is very different (it seems that there's a bug somewhere). Code:

    std::array<glm::vec3, 6> getAdjacentVertices(glm::vec2 pos) {
        std::array<glm::vec3, 6> output;
        output = {
            // getVertex is a function that returns a glm::vec3
            // with values x, y+znoise, z+znoise
            getVertex(pos.x, pos.y + 128),        // up
            getVertex(pos.x + 128, pos.y),        // right
            getVertex(pos.x + 128, pos.y - 128),  // down-right
            getVertex(pos.x, pos.y - 128),        // down
            getVertex(pos.x - 128, pos.y),        // left
            getVertex(pos.x - 128, pos.y + 128),  // up-left
        };
        return output;
    }

And the last function:

    glm::vec3 mapgen::updatedNormals(glm::vec2 pos) {
        bool notBorderLineX = pos.x > 128 && pos.x < 29952 - 128;
        bool notBorderLineY = pos.y > 128 && pos.y < 19968 - 128;
        if (notBorderLineX && notBorderLineY) {
            glm::vec3 a = getVertex(pos.x, pos.y);
            std::array<glm::vec3, 6> adjVertices = getAdjacentVertices(pos);
            glm::vec3 sum(0.f);
            for (int i = 0; i < 6; i++) {
                int j;
                (i == 0) ? j = 5 : j = i - 1;
                glm::vec3 side1 = adjVertices[i] - a;
                glm::vec3 side2 = adjVertices[j] - a;
                sum += glm::cross(side1, side2);
            }
            return glm::normalize(sum);
        } else {
            return glm::vec3(0.3333f);
        }
    }

I get this bad result (seen from the minimap), unfortunately. Note: The buildings are in different positions, but the surface has the same seed using the two methods. Could anyone help? 🙂 Thanks in advance. 
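For comparison, here is a hedged, glm-free sketch of the fan-based vertex normal (it is area-weighted, since an unnormalized cross product is proportional to the triangle's area). The detail that most often produces results like the one described is ordering: the neighbour ring's winding and the order of the cross-product arguments must agree, or the summed normals flip or partially cancel.

```cpp
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x }; }

// Vertex normal from the fan of adjacent triangles. `ring` must list the
// neighbouring vertices in consistent counter-clockwise order around
// `center` as seen from the front side of the surface.
V3 vertexNormal(V3 center, const std::vector<V3>& ring)
{
    V3 sum { 0, 0, 0 };
    for (std::size_t i = 0; i < ring.size(); ++i) {
        V3 e0 = sub(ring[i], center);
        V3 e1 = sub(ring[(i + 1) % ring.size()], center);  // next CCW edge
        V3 n = cross(e0, e1);            // area-weighted face normal
        sum = { sum.x + n.x, sum.y + n.y, sum.z + n.z };
    }
    double len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return { sum.x / len, sum.y / len, sum.z / len };
}
```

Two things worth checking against Method 1: whichever winding convention is chosen, the ring order and the cross-product argument order have to match it consistently, and the CPU-side getVertex must return exactly the same displaced positions the vertex shader produces (the shader adds znoise to both y and z); a mismatch in either will make Method 2 disagree with the finite-difference normals.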
3D XNA - what needs to go in UnloadContent()?
george7378 posted a topic in Graphics and GPU Programming
Hey everyone, I'm working on an XNA (well, MonoGame) app and I'm wondering what needs to go in the autogenerated UnloadContent() method? Most of my models, textures, effects, etc... are loaded from ContentManager and so don't need to be manually unloaded. I do have a few tiny things that I'm creating myself though  for example, a couple of 1x1 Texture2Ds which I use for rendering coloured rectangles and lines via SpriteBatch, and a RenderTarget2D that I use for my shadow mapping. What's your rule for when something needs to go into UnloadContent(), and what should I actually do with it there? Thanks so much!