Showing results for tags '3D' in content posted in Graphics and GPU Programming.



Found 273 results

  1. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

    Features:
    • True cross-platform
      • Exactly the same client code for all supported platforms and rendering backends
        • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
        • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
      • Exactly the same HLSL shaders run on all platforms and all backends
    • Modular design
      • Components are clearly separated logically and physically and can be used as needed
        • Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule.)
      • No 15,000-line source files
    • Clear object-based interface
    • No global states
    • Key graphics features:
      • Automatic shader resource binding designed to leverage next-generation rendering APIs
      • Multithreaded command buffer generation
        • 50,000 draw calls at 300 fps with the D3D12 backend
      • Descriptor, memory and resource state management
    • Modern C++ features to make code fast and reliable

    The following platforms and low-level APIs are currently supported:
    • Windows Desktop: Direct3D11, Direct3D12, OpenGL
    • Universal Windows: Direct3D11, Direct3D12
    • Linux: OpenGL
    • Android: OpenGL ES
    • MacOS: OpenGL
    • iOS: OpenGL ES

    API Basics

    Initialization

    The engine can either perform initialization of the API itself or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...
        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the DLL and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
        pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

    Creating Resources

    Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

    Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "My texture 2D";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

    Initializing Pipeline State

    Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

    Creating Shaders

    To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
    • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGL ES modes.
    • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGL ES modes, the source code will be converted to GLSL. See the shader converter for details.
    • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

    To allow grouping of resources based on how frequently they are expected to change, Diligent Engine introduces a classification of shader variables:
    • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers.
    • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
    • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

    This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader(Attrs, &pShader);

    Creating the Pipeline State Object

    To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure describes the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and formats of render targets, and the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

    The structure also defines the depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

    When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

    Binding Shader Resources

    Shader resource binding in Diligent Engine is based on grouping variables into three different groups (static, mutable and dynamic). Static variables are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers. They are bound directly to the shader object:

        PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

    Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

    Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

    The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it may affect performance: static variables are generally the most efficient, followed by mutable ones; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

    Setting the Pipeline State and Invoking Draw Commands

    Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context:

        // Clear render target
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

    Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

    When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command the graphics pipeline must be bound, and for a dispatch command the compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

    Tutorials and Samples

    The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
    • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
    • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
    • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from a file, create a shader resource binding object and sample a texture in the shader.
    • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object with a unique transformation matrix for every copy.
    • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
    • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
    • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
    • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
    • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.
    • The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.
    • The Atmospheric Scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

    The repository also includes an Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

    Integration with Unity

    Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  2. Hello everybody! I have decided to write a graphics engine, a killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer is built on top of low-level OpenGL 4.5 and DirectX 11. Ideally there will be PBR, TAA, SSR, SSAO, some variation of an indirect lighting algorithm, and support for multiple viewports and multiple cameras. The key feature is a COM-based design (binary compatibility is needed). Physics, ray tracing, AI and VR will not be included. I borrowed the basic architecture from the DGLE engine. The editor will be built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor); there is already a buildable editor. The main goals of the engine are maximum transparency of the architecture and high-quality rendering. For shaders there will be no new language; everything will be expressed through defines.
  3. Hi, I'm trying to produce volumetric light in OpenGL following the implementation details in "GPU Pro 5: Volumetric Light Effects in Killzone". I am confused about the number of passes needed to create the effect. I have the shadow pass, which renders the scene from the light's point of view, then the G-buffer pass, which renders the whole scene to textures, and finally a third pass which ray-marches every pixel and accumulates a scattering factor according to its distance from the light (binding the shadow map from the first pass). Then what? Do I blend these three buffers on a full-screen quad in a final pass? Or should I do the ray marching on the result of combining the shadow map and the G-buffer? Thanks in advance.
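    For reference, a minimal sketch of what a combined final full-screen pass can look like: march along the view ray reconstructed from the G-buffer, test each sample against the shadow map, and add the accumulated in-scattering to the lit scene. This assumes a constant-density scattering model; the uniform names and the visibility test are placeholders, not the GPU Pro 5 implementation:

        #version 330 core
        // Hypothetical inputs, not taken from the original post.
        uniform sampler2D sceneColor;   // lit scene
        uniform sampler2D positionMap;  // world-space positions from the G-buffer
        uniform sampler2D shadowMap;    // depth from the light's point of view
        uniform mat4 lightViewProj;     // world -> light clip space
        uniform vec3 cameraPos;
        uniform vec3 lightColor;
        const int NUM_STEPS = 32;

        in vec2 texCoord;
        out vec4 fragColor;

        // 1.0 if the sample point is lit, 0.0 if it is in shadow
        float shadowVisibility(vec3 worldPos) {
            vec4 lightClip = lightViewProj * vec4(worldPos, 1.0);
            vec3 ndc = lightClip.xyz / lightClip.w * 0.5 + 0.5;
            return texture(shadowMap, ndc.xy).r > ndc.z - 0.001 ? 1.0 : 0.0;
        }

        void main() {
            vec3 surfacePos = texture(positionMap, texCoord).xyz;
            vec3 ray = surfacePos - cameraPos;
            vec3 stepVec = ray / float(NUM_STEPS);

            // March from the camera to the surface, accumulating in-scattered light
            float scattering = 0.0;
            vec3 p = cameraPos;
            for (int i = 0; i < NUM_STEPS; ++i) {
                p += stepVec;
                scattering += shadowVisibility(p);
            }
            scattering /= float(NUM_STEPS);

            vec3 color = texture(sceneColor, texCoord).rgb + lightColor * scattering;
            fragColor = vec4(color, 1.0);
        }

    In this layout the ray marching and the blend with the scene happen in the same pass, so no extra combination pass is needed beyond it.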
  4. Keith P Parsons

    3D Exponential Shadow Maps

    Does anyone have a working demo of exponential shadow maps? I'm working on some back-scatter code and I need a quick way to generate soft shadows in the volume. With my current implementation of exponential shadows I'm finding that blurring the exponential shadow buffer decreases the size of the shadowed area but isn't creating any soft edges. I found a link to the original implementation by Marco Salvi, but it seems to be broken: https://pixelstoomany.wordpress.com/category/shadows/exponential-shadow-maps/
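    For reference, a minimal sketch of the standard ESM lookup (not the poster's code; the texture name and the constant c are placeholders). The softness comes from blurring the buffer that stores exp(c * occluderDepth), and the visibility test then becomes a product of exponentials:

        // GLSL snippet, assuming the shadow buffer stores exp(c * occluderDepth) and is pre-blurred
        uniform sampler2D expShadowMap;
        uniform float c = 80.0; // sharpness constant

        // shadowUV and receiverDepth are the fragment's coordinates and depth in light space
        float esmVisibility(vec2 shadowUV, float receiverDepth) {
            float expOccluder = texture(expShadowMap, shadowUV).r;
            // exp(c * (occluder - receiver)) ~ 1 when lit, falls off smoothly when occluded
            return clamp(expOccluder * exp(-c * receiverDepth), 0.0, 1.0);
        }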
  5. Hello, I have recently joined a game company, and we are using our own, old DX9 engine. It renders meshes with software vertex processing, which is not only slow but also broke with some recent Windows updates. The current code is:

        auto vb = _getVB();
        auto ib = _getIB();

        if (_node)
            ::device->SetStreamSource(0, vb, 0, sizeof(AMVERTEX));
        else
            ::device->SetStreamSource(0, vb, 0, sizeof(MVERTEX));
        ::device->SetIndices(ib);

        if (_node)
            ::device->SetSoftwareVertexProcessing(TRUE);

        if (_node)
            ::device->SetRenderState(D3DRS_INDEXEDVERTEXBLENDENABLE, m_dwNodeCount ? TRUE : FALSE);

        ::device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, m_vbIndex, 0, m_data->m_count, m_meshIB[index]->_ib->m_dwPOS, m_mesh[index]->_count / 3);

        ::device->SetRenderState(D3DRS_INDEXEDVERTEXBLENDENABLE, FALSE);

    So the question is: what's the best way to do this correctly? When I don't use software processing, it renders only about 1/10 of the whole mesh, and the rest of it is just lines going across the whole screen. The code uses _node to determine whether there are nodes for animation. I'm kind of new to D3D9 as well, so how exactly do I approach this method of rendering? The animations are then done with WORLDMATRIX(i), so I guess that's the problem, since meshes without animations work without issues. What do you think would be the best approach to this - is it shaders? I can't really imagine writing one for this because of time, so if any of you know a solution that could work, I would be thankful. If any of you could help me, I could even pay you some small money. Thanks a lot.
  6. Hello! When I implemented SSR I encountered the problem of artifacts. Screenshots here. Code:

        #version 330 core

        uniform sampler2D normalMap;   // in world space
        uniform sampler2D colorMap;
        uniform sampler2D reflectionStrengthMap;
        uniform sampler2D positionMap; // in world space
        uniform mat4 projection, view;
        uniform vec3 cameraPosition;

        in vec2 texCoord;

        layout (location = 0) out vec4 fragColor;

        void main() {
            mat4 vp = projection * view;
            vec3 position = texture(positionMap, texCoord).xyz;
            vec3 normal = texture(normalMap, texCoord).xyz;
            vec4 coords;
            vec3 viewDir = normalize(position - cameraPosition);
            vec3 reflected = reflect(viewDir, normal);
            float L = 0.5;
            vec3 newPos;
            for (int i = 0; i < 10; i++) {
                newPos = position + reflected * L;
                coords = vp * vec4(newPos, 1.0);
                coords.xy = 0.5 + 0.5 * coords.xy / coords.w;
                newPos = texture(positionMap, coords.xy).xyz;
                L = length(position - newPos);
            }
            float fresnel = 0.0 + 2.8 * pow(1 + dot(viewDir, normal), 4);
            L = clamp(L * 0.1, 0, 1);
            float error = (1 - L);
            vec3 color = texture(colorMap, coords.xy).xyz;
            fragColor = mix(texture(colorMap, texCoord), vec4(color, 1.0), texture(reflectionStrengthMap, texCoord).r);
        }

    I will be grateful for help!
  7. Hello! During the implementation of SSLR, I ran into a problem: only objects that are far from the reflecting surface are reflected. For example, as seen in the screenshot, this is a lamp and angel wings. I give the code and screenshots below.

        #version 330 core

        uniform sampler2D normalMap; // in view space
        uniform sampler2D depthMap;  // in view space
        uniform sampler2D colorMap;
        uniform sampler2D reflectionStrengthMap;
        uniform mat4 projection;
        uniform mat4 inv_projection;

        in vec2 texCoord;

        layout (location = 0) out vec4 fragColor;

        vec3 calcViewPosition(in vec2 texCoord) {
            // Combine UV & depth into XY & Z (NDC)
            vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

            // Convert from (0, 1) range to (-1, 1)
            vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

            // Undo Perspective transformation to bring into view space
            vec4 ViewPosition = inv_projection * ScreenSpacePosition;
            ViewPosition.y *= -1;

            // Perform perspective divide and return
            return ViewPosition.xyz / ViewPosition.w;
        }

        vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
            dir *= 0.25f;

            for (int i = 0; i < 20; i++) {
                hitCoord += dir;

                vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
                projectedCoord.xy /= projectedCoord.w;
                projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

                float depth = calcViewPosition(projectedCoord.xy).z;
                dDepth = hitCoord.z - depth;

                if (dDepth < 0.0) return projectedCoord.xy;
            }

            return vec2(-1.0);
        }

        void main() {
            vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
            vec3 viewPos = calcViewPosition(texCoord);

            // Reflection vector
            vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

            // Ray cast
            vec3 hitPos = viewPos;
            float dDepth;
            float minRayStep = 0.1f;
            vec2 coords = rayCast(reflected * minRayStep, hitPos, dDepth);

            if (coords != vec2(-1.0))
                fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
            else
                fragColor = texture(colorMap, texCoord);
        }

    Screenshot: colorMap: normalMap: depthMap:

    I will be grateful for help.
  8. Hello, I'm trying to make a PBR Vulkan renderer and I wanted to implement spherical harmonics for the irradiance part (and maybe PRT in the future, but that's another story). The evaluation on the shader side seems okay (it looks good if I hardcode the SH directly in the shader), but when I try to generate it from an .hdr map it outputs only grayscale. I've been trying to debug this for three days now and I just have no clue why all my colour coefficients are gray. Here is the generation code:

        SH2 ProjectOntoSH9(const glm::vec3& dir)
        {
            SH2 sh;

            // Band 0
            sh.coef0.x = 0.282095f;

            // Band 1
            sh.coef1.x = 0.488603f * dir.y;
            sh.coef2.x = 0.488603f * dir.z;
            sh.coef3.x = 0.488603f * dir.x;

            // Band 2
            sh.coef4.x = 1.092548f * dir.x * dir.y;
            sh.coef5.x = 1.092548f * dir.y * dir.z;
            sh.coef6.x = 0.315392f * (3.0f * dir.z * dir.z - 1.0f);
            sh.coef7.x = 1.092548f * dir.x * dir.z;
            sh.coef8.x = 0.546274f * (dir.x * dir.x - dir.y * dir.y);

            return sh;
        }

        SH2 ProjectOntoSH9Color(const glm::vec3& dir, const glm::vec3& color)
        {
            SH2 sh = ProjectOntoSH9(dir);
            SH2 shColor;
            shColor.coef0 = color * sh.coef0.x;
            shColor.coef1 = color * sh.coef1.x;
            shColor.coef2 = color * sh.coef2.x;
            shColor.coef3 = color * sh.coef3.x;
            shColor.coef4 = color * sh.coef4.x;
            shColor.coef5 = color * sh.coef5.x;
            shColor.coef6 = color * sh.coef6.x;
            shColor.coef7 = color * sh.coef7.x;
            shColor.coef8 = color * sh.coef8.x;
            return shColor;
        }

        void SHprojectHDRImage(const float* pixels, glm::ivec3 size, SH2& out)
        {
            double pixel_area = (2.0f * M_PI / size.x) * (M_PI / size.y);
            glm::vec3 color;
            float weightSum = 0.0f;
            for (unsigned int t = 0; t < size.y; t++)
            {
                float theta = M_PI * (t + 0.5f) / size.y;
                float weight = pixel_area * sin(theta);
                for (unsigned int p = 0; p < size.x; p++)
                {
                    float phi = 2.0 * M_PI * (p + 0.5) / size.x;
                    color = glm::make_vec3(&pixels[t * size.x + p]);
                    glm::vec3 dir(sin(phi) * cos(theta), sin(phi) * sin(theta), cos(theta));
                    out += ProjectOntoSH9Color(dir, color) * weight;
                    weightSum += weight;
                }
            }
            out.print();
            out *= (4.0f * M_PI) / weightSum;
        }

    Apart from the SHprojectHDRImage function, that's pretty much the code from MJP, which you can check here: https://github.com/TheRealMJP/LowResRendering/blob/2f5742f04ab869fef5783a7c6837c38aefe008c3/SampleFramework11/v1.01/Graphics/SH.cpp I'm not doing anything fancy in terms of math or code, but this is my first time with SH, so I feel like I forgot something important. Basically, for every pixel of my equirectangular HDR map I generate a direction, fetch the colour and project it onto the SH, but strangely I end up with an SH looking like this:

        coef0: 1.42326 1.42326 1.42326
        coef1: -0.0727784 -0.0727848 -0.0727895
        coef2: -0.154357 -0.154357 -0.154356
        coef3: 0.0538229 0.0537928 0.0537615
        coef4: -0.0914876 -0.0914385 -0.0913899
        coef5: 0.0482638 0.0482385 0.0482151
        coef6: 0.0531449 0.0531443 0.0531443
        coef7: -0.134459 -0.134402 -0.134345
        coef8: -0.413949 -0.413989 -0.414021

    I get this with the HDR map "Ditch River" from this web page: http://www.hdrlabs.com/sibl/archive.html, but I also get grayscale on the six other HDR maps I tried from hdr heaven; it's just a different gray. If anyone has any clue, that would be really welcome.
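    For reference, the quantity this code approximates is the projection of the environment onto each SH basis function; on an equirectangular map the per-sample solid angle is the sin(theta) weight, and the final scale matches the out *= 4*pi/weightSum line (standard formulation, written here in LaTeX; W and H are the map width and height):

        L_{lm} \;=\; \int_{S^2} L(\omega)\, y_{lm}(\omega)\, d\omega
        \;\approx\; \frac{4\pi}{\sum_i w_i} \sum_i w_i\, L(\theta_i,\phi_i)\, y_{lm}(\theta_i,\phi_i),
        \qquad w_i = \frac{2\pi}{W}\cdot\frac{\pi}{H}\,\sin\theta_i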
  9. Hello! I tried to implement Morgan McGuire's method, but my attempts failed. He describes his method here: Screen Space Ray Tracing. Below are my code and screenshots. SSLR fragment shader:

        #version 330 core

        uniform sampler2D normalMap; // in view space
        uniform sampler2D depthMap;  // in view space
        uniform sampler2D colorMap;
        uniform sampler2D reflectionStrengthMap;
        uniform mat4 projection;
        uniform mat4 inv_projection;

        in vec2 texCoord;

        layout (location = 0) out vec4 fragColor;

        vec3 calcViewPosition(in vec2 texCoord) {
            // Combine UV & depth into XY & Z (NDC)
            vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

            // Convert from (0, 1) range to (-1, 1)
            vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

            // Undo Perspective transformation to bring into view space
            vec4 ViewPosition = inv_projection * ScreenSpacePosition;
            ViewPosition.y *= -1;

            // Perform perspective divide and return
            return ViewPosition.xyz / ViewPosition.w;
        }

        // By Morgan McGuire and Michael Mara at Williams College 2014
        // Released as open source under the BSD 2-Clause License
        // http://opensource.org/licenses/BSD-2-Clause
        #define point2 vec2
        #define point3 vec3

        float distanceSquared(vec2 a, vec2 b) { a -= b; return dot(a, a); }

        // Returns true if the ray hit something
        bool traceScreenSpaceRay(
            // Camera-space ray origin, which must be within the view volume
            point3 csOrig,
            // Unit length camera-space ray direction
            vec3 csDir,
            // A projection matrix that maps to pixel coordinates (not [-1, +1] normalized device coordinates)
            mat4x4 proj,
            // The camera-space Z buffer (all negative values)
            sampler2D csZBuffer,
            // Dimensions of csZBuffer
            vec2 csZBufferSize,
            // Camera space thickness to ascribe to each pixel in the depth buffer
            float zThickness,
            // (Negative number)
            float nearPlaneZ,
            // Step in horizontal or vertical pixels between samples. This is a float
            // because integer math is slow on GPUs, but should be set to an integer >= 1
            float stride,
            // Number between 0 and 1 for how far to bump the ray in stride units
            // to conceal banding artifacts
            float jitter,
            // Maximum number of iterations. Higher gives better images but may be slow
            const float maxSteps,
            // Maximum camera-space distance to trace before returning a miss
            float maxDistance,
            // Pixel coordinates of the first intersection with the scene
            out point2 hitPixel,
            // Camera space location of the ray hit
            out point3 hitPoint)
        {
            // Clip to the near plane
            float rayLength = ((csOrig.z + csDir.z * maxDistance) > nearPlaneZ) ?
                (nearPlaneZ - csOrig.z) / csDir.z : maxDistance;
            point3 csEndPoint = csOrig + csDir * rayLength;

            // Project into homogeneous clip space
            vec4 H0 = proj * vec4(csOrig, 1.0);
            vec4 H1 = proj * vec4(csEndPoint, 1.0);
            float k0 = 1.0 / H0.w, k1 = 1.0 / H1.w;

            // The interpolated homogeneous version of the camera-space points
            point3 Q0 = csOrig * k0, Q1 = csEndPoint * k1;

            // Screen-space endpoints
            point2 P0 = H0.xy * k0, P1 = H1.xy * k1;

            // If the line is degenerate, make it cover at least one pixel
            // to avoid handling zero-pixel extent as a special case later
            P1 += vec2((distanceSquared(P0, P1) < 0.0001) ? 0.01 : 0.0);
            vec2 delta = P1 - P0;

            // Permute so that the primary iteration is in x to collapse
            // all quadrant-specific DDA cases later
            bool permute = false;
            if (abs(delta.x) < abs(delta.y)) {
                // This is a more-vertical line
                permute = true;
                delta = delta.yx;
                P0 = P0.yx;
                P1 = P1.yx;
            }

            float stepDir = sign(delta.x);
            float invdx = stepDir / delta.x;

            // Track the derivatives of Q and k
            vec3 dQ = (Q1 - Q0) * invdx;
            float dk = (k1 - k0) * invdx;
            vec2 dP = vec2(stepDir, delta.y * invdx);

            // Scale derivatives by the desired pixel stride and then
            // offset the starting values by the jitter fraction
            dP *= stride; dQ *= stride; dk *= stride;
            P0 += dP * jitter; Q0 += dQ * jitter; k0 += dk * jitter;

            // Slide P from P0 to P1, (now-homogeneous) Q from Q0 to Q1, k from k0 to k1
            point3 Q = Q0;

            // Adjust end condition for iteration direction
            float end = P1.x * stepDir;

            float k = k0, stepCount = 0.0, prevZMaxEstimate = csOrig.z;
            float rayZMin = prevZMaxEstimate, rayZMax = prevZMaxEstimate;
            float sceneZMax = rayZMax + 100;
            for (point2 P = P0;
                 ((P.x * stepDir) <= end) && (stepCount < maxSteps) &&
                 ((rayZMax < sceneZMax - zThickness) || (rayZMin > sceneZMax)) &&
                 (sceneZMax != 0);
                 P += dP, Q.z += dQ.z, k += dk, ++stepCount) {

                rayZMin = prevZMaxEstimate;
                rayZMax = (dQ.z * 0.5 + Q.z) / (dk * 0.5 + k);
                prevZMaxEstimate = rayZMax;

                if (rayZMin > rayZMax) {
                    float t = rayZMin;
                    rayZMin = rayZMax;
                    rayZMax = t;
                }

                hitPixel = permute ? P.yx : P;
                // You may need hitPixel.y = csZBufferSize.y - hitPixel.y; here if your vertical axis
                // is different than ours in screen space
                sceneZMax = texelFetch(csZBuffer, ivec2(hitPixel), 0).r;
            }

            // Advance Q based on the number of steps
            Q.xy += dQ.xy * stepCount;
            hitPoint = Q * (1.0 / k);
            return (rayZMax >= sceneZMax - zThickness) && (rayZMin < sceneZMax);
        }

        void main() {
            vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
            vec3 viewPos = calcViewPosition(texCoord);

            // Reflection vector
            vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

            vec2 hitPixel;
            vec3 hitPoint;

            bool tssr = traceScreenSpaceRay(
                viewPos,
                reflected,
                projection,
                depthMap,
                vec2(1366, 768),
                0.0,  // zThickness
                -1.0, // nearPlaneZ
                1.0,  // stride
                0.0,  // jitter
                32,   // maxSteps
                32,   // maxDistance
                hitPixel,
                hitPoint
            );

            //fragColor = texture(colorMap, hitPixel);

            if (tssr)
                fragColor = mix(texture(colorMap, texCoord), texture(colorMap, hitPixel), texture(reflectionStrengthMap, texCoord).r);
            else
                fragColor = texture(colorMap, texCoord);
        }

    Screenshot:

    I create the projection matrix like this: glm::perspective(glm::radians(90.0f), (float) WIN_W / (float) WIN_H, 1.0f, 32.0f)

    This is what happens if I display the image like this: fragColor = texture(colorMap, hitPixel)

    colorMap: normalMap: depthMap:

    What am I doing wrong? Perhaps I misunderstand the meaning of csOrig, csDir and zThickness, so I would be glad if you could help me understand what these variables are.
  10. The POM algorithm: https://developer.amd.com/wordpress/media/2012/10/Tatarchuk-POM.pdf This presentation gets a neat effect at oblique angles, while what I get looks more like relief mapping (max steps 32, min steps 8). Here is a snippet of my code:

        int NumberSteps = (int)lerp(MaxSteps, MinSteps, dot(ViewWS, NormalWS));

        float CurrHeight = 0.0;
        float StepSize = 1.0 / (float)NumberSteps;
        float PrevHeight = 1.0;
        float NextHeight = 0.0;
        int StepIndex = 0;
        bool Condition = true;

        float2 TexOffsetPerStep = StepSize * ParallaxOffsetTS;
        float2 TexCurrentOffset = TexCoord;
        float CurrentBound = 1.0;
        float ParallaxAmount = 0.0;

        float2 Pt1 = 0;
        float2 Pt2 = 0;
        float2 TexOffset2 = 0;

        while (StepIndex < NumberSteps)
        {
            TexCurrentOffset -= TexOffsetPerStep;
            CurrHeight = Texture2DSampleGrad(BumpMap, BumpMapSampler, TexCurrentOffset, InDDX, InDDY).x;
            CurrentBound -= StepSize;

            if (CurrHeight > CurrentBound)
            {
                Pt1 = float2(CurrentBound, CurrHeight);
                Pt2 = float2(CurrentBound + StepSize, PrevHeight);
                TexOffset2 = TexCurrentOffset - TexOffsetPerStep;
                StepIndex = NumberSteps + 1;
            }
            else
            {
                StepIndex++;
                PrevHeight = CurrHeight;
            }
        }

        float Delta2 = Pt2.x - Pt2.y;
        float Delta1 = Pt1.x - Pt1.y;
        ParallaxAmount = (Pt1.x * Delta2 - Pt2.x * Delta1) / (Delta2 - Delta1);

        float2 ParallaxOffset = ParallaxOffsetTS * (1.0 - ParallaxAmount);
        float2 TexSampleBase = TexCoord - ParallaxOffset;
        TexSampleCoord = TexSampleBase;

    Is there something I am missing here? Thanks a lot!
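    For reference, the final interpolation in this snippet is the usual POM secant refinement: with the last two samples at ray heights x1 and x2 and sampled heights h1 and h2 (so Delta_k = x_k - h_k changes sign between them), the intersection height solves, in LaTeX notation,

        x^{*} \;=\; \frac{x_1\,\Delta_2 - x_2\,\Delta_1}{\Delta_2 - \Delta_1},
        \qquad \Delta_k = x_k - h_k,

    and the texture offset is then ParallaxOffsetTS * (1 - x^{*}), which matches the last lines of the code above.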
  11. I've been struggling to find the best way to create LODs for a Delaunay-triangulated terrain mesh. Currently I triangulate the entire area, then have a simple chunk system where triangles are assigned to chunks by their center. Naturally this results in the jagged edges shown in the picture, so I think it's nearly impossible to create LODs from that. But triangulating by chunks has its own set of issues, mainly that chunk edges would not match exactly, and even if they did, it would be visible in a bad way. I haven't found any free mesh decimation tools that can retain the edges correctly, plus any tools here need to integrate into a procedural pipeline in Unity. I was thinking maybe there is some way to handle this in a shader. I'm using a single global texture with planar mapping for the terrain, so I was wondering whether I could use a stencil buffer in some way to get the pixels representing the gaps in the LODs? Any ideas appreciated.
  12. But somehow continuity is reduced from C2 to C1 at vertices with valence != 4. When I repeat the subdivision (in Blender), most vertices have a valence of 4, so is this C1 region just a point? I mean, I do not observe any discontinuities with CookTorr shading; I think the size of the highlight would have to change suddenly. Furthermore, I could not find any application of C3. Looking at physics: for roller coasters and car racing, C1 is fun and works, while C2 is for comfort. Also, fluid dynamics solvers should not really depend on continuity; okay, they may slow down without enough of it. Additionally, given that we have "crease", is there any reason to miss rational B-splines? I thought about increasing the order of the polynomials around vertices with valence != 4, thinking that maybe one order is being eaten in the process. Then I would not need to hide these bad spots by moving them into the armpits of my model. I would trade in the extra effort of some more control points (a finer base mesh). But the math behind these vertices (what I find on Google) seems overly complicated. Non-rational B-splines are a straightforward expansion of non-rational splines, which in turn are generated by using a sliding-window smoothing filter on a sequence of delta distributions. Lots of publications seem to frown upon the recursive algorithm; Lisp has told me that recursion is good, and I do not get why hardware should generally have a problem with it. Feels like a smoke screen to me.
  13. I'm trying to figure out how to design the vegetation/detail system for my procedural chunked-LOD-based planet renderer. While I've found a lot of papers talking about how to homogeneously scatter things over a planar or spherical surface, I couldn't find much info about how the locations of objects are actually encoded. There seems to be a common approach that involves using some sort of texture mapping where different layers of vegetation/rocks/trees etc. define what is placed and where, but I can't figure out how this is actually rendered. I guess that for billboards, these textures could be sampled from the vertex shader and a geometry shader could then be used to draw a texture onto a generated quad? What about nearby trees or rocks that need a 3D model instead - is this handled from the CPU? Is there a specific solution that works better with a chunked-LOD approach? Thanks in advance!
  14. Hi there! I was wondering if there are any cheats/shortcuts/optimisations we could perform if we know beforehand that there will only be a single non-skinned mesh casting shadows in our scene. Our scene is entirely baked, and the only non-static element is this mesh. I was wondering if there is some way to 'precompute'/'bake' down the shadow information for this mesh so that we can use it to cast shadows onto the objects in the scene. One idea I had was similar to the octahedron impostor technique, but instead of baking down color, you bake down what the shadows would look like from various angles, and all the baked surfaces in your game sample the appropriate shadow texture based on the position and rotation of that dynamic mesh. It seems simple in theory, but there are a few implementation details which I haven't been able to wrap my head around. I figured I would ask here and pick the brains of all the gurus in case I am reinventing some wheel here! :)
  15. Hi all, even though PBR is basically today's standard, I was wondering what a good approach is to determine ambient, diffuse and specular values for different types of material using the "old" Blinn-Phong approach. Are there, for example, studies or reference data for commonly used materials like grass, metals, concrete, tiles, etc.? Any input would be appreciated.
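    For reference, the parameters in question are the coefficients of the standard Blinn-Phong model (in LaTeX notation; k_a, k_d, k_s and the shininess exponent n are the per-material values being asked about):

        I \;=\; k_a\, i_a \;+\; k_d\,(\mathbf{N}\cdot\mathbf{L})\, i_d \;+\; k_s\,(\mathbf{N}\cdot\mathbf{H})^{n}\, i_s,
        \qquad \mathbf{H} = \frac{\mathbf{L}+\mathbf{V}}{\lVert \mathbf{L}+\mathbf{V} \rVert}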
  16. Hello, I have had regular matrix-based skinning working on the GPU for quite a while now, and I stumbled upon an implementation of dual quaternion skinning. I've had a go at implementing this in a shader, and after spending a lot of time making small changes to the formulas, I sort of got it working, but there seems to be an issue when blending bones. I found this pretty old topic on GameDev.net ( ) which, I think, describes my problem pretty well, but I haven't been able to find the cause. Like in that post, if the blend weight of a vertex is 1, there is no problem. Once there is blending, I get artifacts. Just for the sake of focussing on the shader side of things first, I upload dual quaternions to the GPU that are converted from regular matrices (because I knew those work). Below is an image comparison between matrix skinning (left) and dual quaternion skinning (right). As you can see, especially on the shoulders, there are some serious issues. It might be because of a silly typo; however, I'm surprised some parts of the mesh look perfectly fine. Below are some snippets:

        // Blend bones
        float2x4 BlendBoneTransformsToDualQuaternion(float4 boneIndices, float4 boneWeights)
        {
            float2x4 dual = (float2x4)0;
            float4 dq0 = cSkinDualQuaternions[boneIndices.x][0];
            for (int i = 0; i < MAX_BONES_PER_VERTEX; ++i)
            {
                if (boneIndices[i] == -1)
                {
                    break;
                }
                if (dot(dq0, cSkinDualQuaternions[boneIndices[i]][0]) < 0)
                {
                    boneWeights[i] *= -1;
                }
                dual += boneWeights[i] * cSkinDualQuaternions[boneIndices[i]];
            }
            return dual / length(dual[0]);
        }

        // Used to transform the normal/tangent
        float3 QuaternionRotateVector(float3 v, float4 quatReal)
        {
            return v + 2.0f * cross(quatReal.xyz, quatReal.w * v + cross(quatReal.xyz, v));
        }

        // Used to transform the position
        float3 DualQuatTransformPoint(float3 p, float4 quatReal, float4 quatDual)
        {
            float3 t = 2 * (quatReal.w * quatDual.xyz - quatDual.w * quatReal.xyz + cross(quatDual.xyz, quatReal.xyz));
            return QuaternionRotateVector(p, quatReal) + t;
        }

    I've been staring at this for quite a while now, so the solution might be obvious, but I fail to see it. Help would be hugely appreciated. Cheers
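    For reference, the first snippet implements dual quaternion linear blending: the weighted dual quaternions are summed (with sign flips to keep them in the same hemisphere as the first bone) and renormalized by the length of the real part, and the point transform then applies the rotation plus a translation extracted from the dual part (standard formulation, in LaTeX notation; this matches the code above):

        \hat{q} \;=\; \frac{\sum_i w_i\, \hat{q}_i}{\left\lVert \sum_i w_i\, q_{i,\mathrm{real}} \right\rVert},
        \qquad
        p' \;=\; R(q_r)\, p \;+\; 2\left( q_{r,w}\, \vec{q}_d \;-\; q_{d,w}\, \vec{q}_r \;+\; \vec{q}_r \times \vec{q}_d \right),

    where \hat{q} = q_r + \varepsilon q_d is the blended unit dual quaternion and R(q_r) is the rotation by its real part.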
  17. This is a follow-up to a previous post. MrHallows had asked me to post the project, so I am doing it in a fresh new thread so that I can get the help I most need. I have put the class in the main .cpp to simplify things for your debugging purposes. My error is: C1189 #error: OpenGL header already included, remove this include, glad already provides it. I tried adding #define GLFW_INCLUDE_NONE, and tried adding it as a preprocessor definition too. I also tried to change the #ifdef - #endif, but I just couldn't get it working. The code repository URL is: https://github.com/Joshei/GolfProjectRepo/tree/combine_sources/GOLFPROJ The branch is: combine_sources The commit ID is: a4eaf31 The files involved are shader_class.cpp, glad.h and glew.h. glad1.cpp was also in my project; I removed it to try to solve this problem. Here is the description of the problem at hand: except for glColor3f and glRasterPos2i(10,10), the code works without glew.h. When glew is added, there is only the error shown above. I could really use some exact help - you know, like "remove the include for gl.h on lines 50, 65, and 80, then delete the code at line 80 that states...". I hope that this is not too much to ask for; I really want to win at OpenGL. If I can't get help, I could use a much larger file to display the test values, or maybe it's possible to write to an open file and view the written data as it's output. Thanks in advance, Josheir
  18. I'm looking to create a small game engine, though my main focus is the renderer. I'm trying to decide which of these techniques I like better: Deferred Texturing or Volume Tiled Forward Shading (https://github.com/jpvanoosten/VolumeTiledForwardShading). Which would you choose, if not something else? Here are my current goals:
    • I want to keep middleware to a minimum.
    • I want to use either D3D12 or Vulkan. However, I understand D3D best, so that is where I'm currently siding.
    • I want to design for today's high-end GPUs and not worry too much about compatibility, as I'm assuming this is going to take a long time anyway.
    • I'm only interested in real-time ray tracing if/when it can be done without an RTX-enabled card.
    • A PBR pipeline that DOES NOT INCLUDE METALNESS. I feel there are better ways of doing this (hint: I like cavity maps).
    • I want dynamic resolution scaling. I know it's simply a form of super-sampling, but I haven't found many ideal sources that explain super-sampling in a way that I would understand.
    • I don't want to use any static lighting. I have good reasons, which I'd be happy to explain.
    So I guess what I'm asking you fine people is: if time were not a concern, or money, what type of renderer would you write, and more importantly, "WHY"? Thank you for your time.
  19. Hi, I have a C++ Vulkan-based project using the Qt framework. QVulkanInstance and QVulkanWindow do a lot of things for me, like validation, but because Vulkan is such a low-level API I can't figure out how to troubleshoot Vulkan errors. I am trying to render terrain using tessellation shaders, learning from Sascha Willems' tutorial on tessellation rendering. I think I am setting some value for the render pass wrong in MapTile.cpp, but I am unable to find which one because I don't know how to troubleshoot it.

    What's the problem? The app freezes on the second end-draw call.
    Why? QVulkanWindow: Device lost
    Validation layers debug output:

        qt.vulkan: Vulkan init (vulkan-1.dll)
        qt.vulkan: Supported Vulkan instance layers: QVector(QVulkanLayer("VK_LAYER_NV_optimus" 1 1.1.84 "NVIDIA Optimus layer"), QVulkanLayer("VK_LAYER_RENDERDOC_Capture" 0 1.0.0 "Debugging capture layer for RenderDoc"), QVulkanLayer("VK_LAYER_VALVE_steam_overlay" 1 1.1.73 "Steam Overlay Layer"), QVulkanLayer("VK_LAYER_LUNARG_standard_validation" 1 1.0.82 "LunarG Standard Validation Layer"))
        qt.vulkan: Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_device_group_creation" 1), QVulkanExtension("VK_KHR_external_fence_capabilities" 1), QVulkanExtension("VK_KHR_external_memory_capabilities" 1), QVulkanExtension("VK_KHR_external_semaphore_capabilities" 1), QVulkanExtension("VK_KHR_get_physical_device_properties2" 1), QVulkanExtension("VK_KHR_get_surface_capabilities2" 1), QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_win32_surface" 6), QVulkanExtension("VK_EXT_debug_report" 9), QVulkanExtension("VK_EXT_swapchain_colorspace" 3), QVulkanExtension("VK_NV_external_memory_capabilities" 1), QVulkanExtension("VK_EXT_debug_utils" 1))
        qt.vulkan: Enabling Vulkan instance layers: ("VK_LAYER_LUNARG_standard_validation")
        qt.vulkan: Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_win32_surface")
        qt.vulkan: QVulkanWindow init
        qt.vulkan: 1 physical devices
        qt.vulkan: Physical device [0]: name 'GeForce GT 650M' version 416.64.0
        qt.vulkan: Using physical device [0]
        qt.vulkan: queue family 0: flags=0xf count=16 supportsPresent=1
        qt.vulkan: queue family 1: flags=0x4 count=1 supportsPresent=0
        qt.vulkan: Using queue families: graphics = 0 present = 0
        qt.vulkan: Supported device extensions: QVector(QVulkanExtension("VK_KHR_8bit_storage" 1), QVulkanExtension("VK_KHR_16bit_storage" 1), QVulkanExtension("VK_KHR_bind_memory2" 1), QVulkanExtension("VK_KHR_create_renderpass2" 1), QVulkanExtension("VK_KHR_dedicated_allocation" 3), QVulkanExtension("VK_KHR_descriptor_update_template" 1), QVulkanExtension("VK_KHR_device_group" 3), QVulkanExtension("VK_KHR_draw_indirect_count" 1), QVulkanExtension("VK_KHR_driver_properties" 1), QVulkanExtension("VK_KHR_external_fence" 1), QVulkanExtension("VK_KHR_external_fence_win32" 1), QVulkanExtension("VK_KHR_external_memory" 1), QVulkanExtension("VK_KHR_external_memory_win32" 1), QVulkanExtension("VK_KHR_external_semaphore" 1), QVulkanExtension("VK_KHR_external_semaphore_win32" 1), QVulkanExtension("VK_KHR_get_memory_requirements2" 1), QVulkanExtension("VK_KHR_image_format_list" 1), QVulkanExtension("VK_KHR_maintenance1" 2), QVulkanExtension("VK_KHR_maintenance2" 1), QVulkanExtension("VK_KHR_maintenance3" 1), QVulkanExtension("VK_KHR_multiview" 1), QVulkanExtension("VK_KHR_push_descriptor" 2), QVulkanExtension("VK_KHR_relaxed_block_layout" 1), QVulkanExtension("VK_KHR_sampler_mirror_clamp_to_edge" 1), QVulkanExtension("VK_KHR_sampler_ycbcr_conversion" 1), QVulkanExtension("VK_KHR_shader_draw_parameters" 1), QVulkanExtension("VK_KHR_storage_buffer_storage_class" 1), QVulkanExtension("VK_KHR_swapchain" 70), QVulkanExtension("VK_KHR_variable_pointers" 1), QVulkanExtension("VK_KHR_win32_keyed_mutex" 1), QVulkanExtension("VK_EXT_conditional_rendering" 1), QVulkanExtension("VK_EXT_depth_range_unrestricted" 1), QVulkanExtension("VK_EXT_descriptor_indexing" 2), QVulkanExtension("VK_EXT_discard_rectangles" 1), QVulkanExtension("VK_EXT_hdr_metadata" 1), QVulkanExtension("VK_EXT_inline_uniform_block" 1), QVulkanExtension("VK_EXT_shader_subgroup_ballot" 1), QVulkanExtension("VK_EXT_shader_subgroup_vote" 1), QVulkanExtension("VK_EXT_vertex_attribute_divisor" 3), QVulkanExtension("VK_NV_dedicated_allocation" 1), QVulkanExtension("VK_NV_device_diagnostic_checkpoints" 2), QVulkanExtension("VK_NV_external_memory" 1), QVulkanExtension("VK_NV_external_memory_win32" 1), QVulkanExtension("VK_NV_shader_subgroup_partitioned" 1), QVulkanExtension("VK_NV_win32_keyed_mutex" 1), QVulkanExtension("VK_NVX_device_generated_commands" 3), QVulkanExtension("VK_NVX_multiview_per_view_attributes" 1))
        qt.vulkan: Enabling device extensions: QVector(VK_KHR_swapchain)
        qt.vulkan: memtype 0: flags=0x0
        qt.vulkan: memtype 1: flags=0x0
        qt.vulkan: memtype 2: flags=0x0
        qt.vulkan: memtype 3: flags=0x0
        qt.vulkan: memtype 4: flags=0x0
        qt.vulkan: memtype 5: flags=0x0
        qt.vulkan: memtype 6: flags=0x0
        qt.vulkan: memtype 7: flags=0x1
        qt.vulkan: memtype 8: flags=0x1
        qt.vulkan: memtype 9: flags=0x6
        qt.vulkan: memtype 10: flags=0xe
        qt.vulkan: Picked memtype 10 for host visible memory
        qt.vulkan: Picked memtype 7 for device local memory
        qt.vulkan: Color format: 44 Depth-stencil format: 129
        qt.vulkan: Creating new swap chain of 2 buffers, size 600x370
        qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
        qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
        qt.vulkan: Creating new swap chain of 2 buffers, size 600x368
        qt.vulkan: Releasing swapchain
        qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
        qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
        QVulkanWindow: Device lost
        qt.vulkan: Releasing all resources due to device lost
        qt.vulkan: Releasing swapchain

    I am not so sure whether this debug output helps. :( I don't want you to debug it for me; I just want to learn how I should debug it and find where the problem is located. Could you give me some guidance, please?

    Source code
    Source code rendering just a few vertices (working)

    The differences between the two links are:
    • Moved from Qt math libraries to glm
    • Moved from QImage to gli for the Texture class
    • Added tessellation shaders
    • Disabled window sampling
    • Rendering terrain using a heightmap and a texture array (added normals and UVs)

    Thanks
  20. Hey guys! In my 3D terrain generator, I calculate simple texture coordinates based on the x, z coordinates (y being up/down) as if the terrain were flat - a simple planar projection. That of course introduces texture stretching on sloped parts of the terrain. Trying to solve that, I first implemented tri-planar mapping (like this), but it is really heavy on the pixel shader and the results look very weird in some cases. Then I found another technique which looks better and, most importantly, does the heavy work in a preprocess - generating an indirection map of the terrain which is then used in the pixel shader to offset the UV coords: Indirection mapping for quasi-conformal relief texturing. Has anyone ever implemented this solution and is willing to share some code for the indirection map generation (spring grid relaxation)? I couldn't find any implementation or sample, and I am really not sure how to go about it. Thanks!
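    For comparison, a minimal sketch of the tri-planar mapping variant mentioned above (standard formulation: blend weights derived from the world-space normal, three planar texture samples; the texture and uniform names are placeholders, not code from the post):

        #version 330 core
        // Hypothetical inputs; illustrates standard tri-planar blending.
        uniform sampler2D terrainTex;
        uniform float texScale = 0.1;

        in vec3 worldPos;
        in vec3 worldNormal;
        out vec4 fragColor;

        void main() {
            // Blend weights favour the axis the surface is facing
            vec3 w = abs(normalize(worldNormal));
            w = pow(w, vec3(4.0));       // sharpen the transition between projections
            w /= (w.x + w.y + w.z);      // normalize so the weights sum to 1

            vec4 cx = texture(terrainTex, worldPos.zy * texScale); // projection along X
            vec4 cy = texture(terrainTex, worldPos.xz * texScale); // projection along Y (the original planar mapping)
            vec4 cz = texture(terrainTex, worldPos.xy * texScale); // projection along Z

            fragColor = cx * w.x + cy * w.y + cz * w.z;
        }

    The cost the post mentions comes from the three texture fetches per layer, which is what the indirection-map preprocess avoids.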
  21. Hello guys! So, I'm currently working on our senior game-dev project and I'm currently tasked with implementing animations in DirectX. It's been a few weeks of debugging and I've gotten pretty far, only a few quirks left to fix, but I can't figure this one out. So, what happens is that when a rotation becomes too big on a particular joint, it completely flips around. This seems to be an issue in the FBX data extraction and I've isolated it to the key animation data. First off, here's what the animation looks like with a small rotation: Small rotation in Maya / Small rotation in Engine. Looks as expected! (Other than the flipped direction, which I'm not too concerned about at this point; however, if you think this is part of the issue please let me know!) Now, here's an animation with a big rotation (+360 around Y then back to 0): Big rotation in Maya / Big rotation in Engine. As you can see, the animation completely flips here and there. Here's how the local animation data for each joint is retrieved:

        while (currentTime < endTime)
        {
            FbxTime takeTime;
            takeTime.SetSecondDouble(currentTime);

            // #calculateLocalTransform
            FbxAMatrix matAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode(), takeTime);
            FbxAMatrix matParentAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode()->GetParent(), takeTime);
            FbxAMatrix matInvParentAbsoluteTransform = matParentAbsoluteTransform.Inverse();
            FbxAMatrix matTransform = matInvParentAbsoluteTransform * matAbsoluteTransform;

            // do stuff with matTransform
        }

        // GetAbsoluteTransformFromCurrentTake() returns:
        // pNode->GetScene()->GetAnimationEvaluator()->GetNodeGlobalTransform(pNode, time);

    This seems to work well, but on the keys where the flip happens it returns a matrix where the non-animated rotations (Y and Z in this case) have a value of 180 rather than 0. The Y value also starts "moving" in the opposite direction. From the Converter we save out the matrix components as T, R, S (R in Euler), and during import in the engine the rotation is converted to a quaternion for interpolation. I'm not sure what else I can share that might help give a clue as to what the issue is, but if you need anything to help me, just let me know! Any help/ideas are very much appreciated! ❤️ E. Finoli
  22. So as I am toying around with lighting shaders, great-looking results can be achieved. However, I struggle to fully grasp the idea behind them. Namely, the microfacet BRDF doesn't line up with how I intuitively understand the process. As expected, the perceived brightness on a surface is highest where N·H peaks, but it then gets boosted by the two dot products in the denominator as the L and V angles diverge. The implicit geometry term would cancel this out, but something like Smith-Schlick with a low roughness input would not do much in that department, making grazing angles very bright despite there being no Fresnel involved. The multiplication of the whole BRDF by N·L then only partially cancels it out. Am I missing something, or should a relatively smooth metallic surface indeed have brighter highlights when staring at it with a punctual light near the horizon of said surface?
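    For reference, the microfacet specular BRDF under discussion, in its standard Cook-Torrance form (LaTeX notation); the denominator in question is the 4(N·L)(N·V) term, and the "implicit" geometry term mentioned is exactly the one that cancels it:

        f_s(\mathbf{l},\mathbf{v}) \;=\; \frac{D(\mathbf{h})\, F(\mathbf{v},\mathbf{h})\, G(\mathbf{l},\mathbf{v},\mathbf{h})}{4\,(\mathbf{n}\cdot\mathbf{l})(\mathbf{n}\cdot\mathbf{v})},
        \qquad \mathbf{h} = \frac{\mathbf{l}+\mathbf{v}}{\lVert \mathbf{l}+\mathbf{v} \rVert},
        \qquad G_{\mathrm{implicit}} = (\mathbf{n}\cdot\mathbf{l})(\mathbf{n}\cdot\mathbf{v})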
  23. Hi guys, I need to project a picture from a projector (or maybe a camera) onto some meshes and save the result into the mesh texture according to the mesh's unwrapped UVs. It is just like a light map, which encodes lighting info into the texture, except here I want to encode the projected image instead. The following picture is an example (but it only projects, without writing into the texture). I noticed that Blender actually has a function that allows you to draw a texture onto a mesh, but I have no idea how to save those projected pixels into the mesh's texture. I think maybe I could implement this if I had a better understanding of how light maps are produced. Any advice or materials that can help me out? (Any idea, any platform, or any reference.)
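    For illustration, one common way to bake anything into a mesh's unwrapped UVs (light maps included) is to rasterize the mesh in texture space: the vertex shader outputs the UV coordinate as the clip-space position while passing the world-space position along, and the fragment shader projects that position into the projector and samples the picture; the render target is the mesh's texture. A minimal two-stage sketch (names are placeholders, not from the post; occlusion from the projector's point of view is not handled here):

        // --- vertex shader (bake pass): rasterize the mesh in its UV space ---
        layout (location = 0) in vec3 inPosition;
        layout (location = 1) in vec2 inUV;
        uniform mat4 model;
        out vec3 worldPos;

        void main() {
            worldPos = (model * vec4(inPosition, 1.0)).xyz;
            // Map UV [0,1] to clip space [-1,1]; the framebuffer is the mesh's texture
            gl_Position = vec4(inUV * 2.0 - 1.0, 0.0, 1.0);
        }

        // --- fragment shader (bake pass): look up the projector image for this surface point ---
        uniform sampler2D projectorImage;
        uniform mat4 projectorViewProj; // world -> projector clip space
        in vec3 worldPos;
        out vec4 bakedColor;

        void main() {
            vec4 clip = projectorViewProj * vec4(worldPos, 1.0);
            vec3 ndc = clip.xyz / clip.w;
            vec2 uv = ndc.xy * 0.5 + 0.5;
            bakedColor = texture(projectorImage, uv);
        }

    Dilating the baked result by a few pixels around each UV chart is the usual finishing touch to avoid seams at chart borders.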
  24. Hi guys, I wanted my roads to look a little more bumpy on my terrain, so I added bump mapping based on what I had working for the rest of the models. It works and looks nice enough (I'll need to fiddle with the normal map to get the pebbles looking just the right amount of sharp), but a problem cropped up that hadn't occurred to me: I don't want it applied to the whole terrain, just the roads. The road texture is simply added using a blend map with green for grass, red for rock and blue for road, so the more blue there is, the more the road texture is used. I don't want the other textures bump mapped - I mean, I guess I could, but for now I'd rather not. So the code is something like:

        float3 normalFromMap = PSIn.Normal;

        if (BumpMapping)
        {
            // read the normal from the normal map
            normalFromMap = tex2D(RoadNormalMapSampler, PSIn.TexCoord * 4);

            // transform to [-1,1]
            normalFromMap = 2.0f * normalFromMap - 1.0f;

            // transform into world space
            normalFromMap = mul(normalFromMap, PSIn.WorldToTangentSpace);
        }
        else
        {
            // transform to [-1,1]
            normalFromMap = 2.0f * normalFromMap - 1.0f;
        }

        // normalize the result
        normalFromMap = normalize(normalFromMap);

        // output the normal, in [0,1] space
        Output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    I tried checking whether the blend map's blue component was > 0 and only then using the bump mapping, but that just makes a nasty line where it switches between using the vertex normal and using the normal map. How do I blend between the two methods? Thanks
  25. I'm trying to add some details like grass, rocks, trees, etc. to my little procedurally generated planet. The meshes for the terrain are created from a spherified cube which is split into chunks (chunked LOD). To do this I've written a geometry shader that takes a mesh as input and uses its vertex positions as the locations where the patches of grass will be placed (as textured quads). For an infinite flat world (not spherical) I'd use the terrain mesh as input to the geometry shader, but I've found that this won't work well on a sphere, since the vertex density is not homogeneous across the surface. So the main question would be: how do I create a point cloud for each terrain chunk whose points are equally distributed across the chunk? Note: I've seen some examples where these points are calculated by intersecting a massive rain of totally random perpendicular rays from above... but I found that solution overkill, to say the least. Another related question would be: is there something better/faster than the geometry shader approach, maybe using compute shaders and instancing?