Search the Community

Showing results for tags '3D' in content posted in Graphics and GPU Programming.

Found 245 results

  1. Hi, guys. I am developing a path-tracing baking renderer, based on OpenGL and OpenRL. It can bake scenes such as the following, and I am glad it handles diffuse color bleeding. : ) I store the irradiance directly, like this: albedo * diffuse color is already folded into the irradiance calculation when baking, direct and indirect together. After baking, the OpenGL fragment shader uses the light map directly. I think I have got something wrong here, since most game engines don't do it like this. Which kind of data should I store in the light maps? I need diffuse only. Thanks in advance!
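For illustration, here is a minimal sketch (not from the original post) of the convention many engines use instead: the light map stores incoming irradiance only, without albedo baked in, and the full-resolution albedo texture is applied at run time. The small Vec3 helper is an assumption for self-containment.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Light map stores irradiance (direct + indirect incoming light) per texel.
// Albedo stays in the material texture and modulates the lighting at run time,
// so low-resolution baked lighting does not blur the surface detail.
Vec3 shadeLightmapped(Vec3 albedo, Vec3 irradiance)
{
    const float kInvPi = 0.31830988f; // Lambertian BRDF = albedo / pi
    return { albedo.x * irradiance.x * kInvPi,
             albedo.y * irradiance.y * kInvPi,
             albedo.z * irradiance.z * kInvPi };
}

int main()
{
    Vec3 c = shadeLightmapped({0.5f, 0.2f, 0.1f}, {3.0f, 3.0f, 3.0f});
    std::printf("%.3f %.3f %.3f\n", c.x, c.y, c.z);
}
```

Storing irradiance rather than pre-multiplied radiosity is what lets the same bake be reused when the albedo texture changes.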
  2. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one gives better performance on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends mostly on how the code uses it, but some games let you compare performance across OpenGL versions, so I'm not sure. Which of the two uses less CPU and GPU, i.e. performs better? Thanks
  3. So the foolproof way to store information about emission would be to dedicate a full RGB data set to do the job, but this is seemingly wasteful, and squeezing everything into a single buffer channel is desirable and indeed a common practice. The thing is that there doesn't seem to be one de facto standard technique to achieve this. A commonly suggested solution is to perform a simple glow * albedo multiplication, but it's not difficult to imagine instances where this strict interdependence would become an impenetrable barrier. What are some other ideas?
  4. Hey, I need to cast camera rays through the near plane of the camera. The first approach in the code below is the one I came up with, and I understand it precisely. However, I've come across a much more elegant and shorter solution which seems to give exactly the same results (at least visually in my app); this is the "Second approach" below.

```hlsl
struct VS_INPUT
{
    float3 localPos : POSITION;
};

struct PS_INPUT
{
    float4 screenPos : SV_POSITION;
    float3 localPos : POSITION;
};

PS_INPUT vsMain(in VS_INPUT input)
{
    PS_INPUT output;
    output.screenPos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix);
    output.localPos = input.localPos;
    return output;
}

float4 psMain(in PS_INPUT input) : SV_Target
{
    // First approach
    {
        const float3 screenSpacePos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix).xyw;
        const float2 screenPos = screenSpacePos.xy / screenSpacePos.z;   // divide by w, taken above as the third component
        const float2 screenPosUV = screenPos * float2(0.5f, -0.5f) + 0.5f; // invert Y axis for the shadow map lookup later

        // fov is vertical
        float nearPlaneHeight = TanHalfFov * 1.0f; // near = 1.0f
        float nearPlaneWidth = AspectRatio * nearPlaneHeight;

        // position of the rendered point projected onto the near plane
        float3 cameraSpaceNearPos = float3(screenPos.x * nearPlaneWidth, screenPos.y * nearPlaneHeight, 1.0f);

        // transform the direction from camera to world space
        const float3 direction = mul(cameraSpaceNearPos, (float3x3)InvViewMatrix).xyz;
    }

    // Second approach
    {
        // UV for the shadow map lookup later in the code
        const float2 screenPosUV = input.screenPos.xy * rcp(renderTargetSize);
        const float2 screenPos = screenPosUV * 2.0f - 1.0f; // transform range 0..1 to -1..1

        // Ray's direction in world space; VIEW_LOOK/RIGHT/UP are camera basis vectors in world space
        // fov is vertical
        const float3 direction = (VIEW_LOOK + TanHalfFov * (screenPos.x * VIEW_RIGHT * AspectRatio - screenPos.y * VIEW_UP));
    }
    ...
}
```

I cannot understand what happens in the first 2 lines of the second approach. input.screenPos.xy is calculated in the VS and interpolated here, but it's still before the perspective divide, right? So, for example, the y coordinate of input.screenPos should be in the range -|w| <= y <= |w|, where w is the z coordinate of the point in camera space, so w can be at most Far and at least Near, right? How come dividing y by renderTargetSize yields a result supposedly in the [0, 1] range? Also, screenPosUV seems to already have an inverted Y axis for some reason I don't understand either, which is probably why there's a minus sign in the calculation of direction. In my setup, for example, renderTargetSize is (1280, 720), Far = 100, Near = 1.0f; I use a LH coordinate system, and the camera by default looks towards the positive Z axis. Both approaches, first and second, give me the same results, but I would like to understand this second approach. I would be very grateful for any help!
  5. I'm trying to use Perlin noise to paint landscapes on a sphere. So far I've been able to make this (the quad is just to get a flatter view of the height map). I'm not influencing the mesh vertices' height yet, but I am creating the noise map on the CPU and passing it to the GPU as a texture, which is what you see above. I've got 2 issues though:

Issue #1: If I get a bit close to the sphere, the detail in the landscape looks bad. I'm aware that I can't get too close, but I also feel that I should be able to get better quality at the distance shown above. The detail in the texture looks blurry and stretched... it just looks bad. I'm not sure what I can do to improve it.

Issue #2: I believe I know why this one occurs, but I don't know how to solve it. If I rotate the sphere, you'll notice something. Click on the image for a better look (notice the seam?). What I think is going on is that some land/noise reaches the end of the UV/texture, and since texturing the sphere is pretty much like wrapping paper around it, the beginning and end of the texture map connect, and both sides have different patterns.

Solutions I have in mind for Issue #2:

A) Maybe limit the noise within a certain bounding box, and make sure "land" isn't generated near the borders or poles of the texture. Think islands. I just have no idea how to do that.

B) Find a way to make the noise continue at the beginning of the UV/texture once it reaches the end of it. That way the beginning and end connect seamlessly, but again, I have no idea how to do that.

I'm kind of rooting for solution A, though; I would be able to make islands that way. Hope I was able to explain myself. If anybody needs any more information, let me know. I'll share the function in charge of making this noise below. The shader isn't doing anything special besides drawing the texture. Thanks!

CPU noise texture:

```javascript
const width = 100;
const depth = 100;
const scale = 30.6;
const pixels = new Uint8Array(4 * width * depth);

let i = 0;
for (let z = 0; z < depth; z += 1) {
  for (let x = 0; x < width; x += 1) {
    const octaves = 8;
    const persistance = 0.5;
    const lacunarity = 2.0;
    let frequency = 1.0;
    let amplitude = 1.0;
    let noiseHeight = 0.0;

    // note: this inner `i` is block-scoped, so it shadows (and does not clobber)
    // the outer pixel index
    for (let i = 0; i < octaves; i += 1) {
      const sampleX = x / scale * frequency;
      const sampleZ = z / scale * frequency;
      let n = perlin2(sampleX, sampleZ);
      noiseHeight += n * amplitude;
      amplitude *= persistance;
      frequency *= lacunarity;
    }

    pixels[i] = noiseHeight * 255;
    pixels[i + 1] = noiseHeight * 255;
    pixels[i + 2] = noiseHeight * 255;
    pixels[i + 3] = 255;
    i += 4;
  }
}
```

GPU GLSL:

```glsl
void main() {
  vec3 diffusemap = texture(texture0, uvcoords).rgb;
  color = vec4(diffusemap, 1.0);
}
```
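One common approach to the seam in Issue #2 (an illustrative sketch, not from the original post): sample 3D noise at each texel's position on the unit sphere instead of 2D noise in UV space. The noise field is then continuous over the whole surface, so the texture's left and right edges match by construction. `perlin3` is an assumed 3D Perlin noise function returning values in [-1, 1].

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

float perlin3(float x, float y, float z); // assumed 3D noise in [-1, 1]

std::vector<std::uint8_t> makeSphereNoiseTexture(int width, int height, float scale)
{
    const float kPi = 3.14159265f;
    std::vector<std::uint8_t> pixels(std::size_t(4) * width * height);
    std::size_t p = 0;
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            // Map the texel to a point on the unit sphere (equirectangular mapping).
            float lon = (u / float(width)) * 2.0f * kPi; // wraps: u = 0 and u = width meet
            float lat = (v / float(height)) * kPi;
            float x = std::sin(lat) * std::cos(lon);
            float y = std::cos(lat);
            float z = std::sin(lat) * std::sin(lon);
            float n = perlin3(x * scale, y * scale, z * scale);
            auto g = std::uint8_t((n * 0.5f + 0.5f) * 255.0f); // remap [-1,1] to [0,255]
            pixels[p++] = g; pixels[p++] = g; pixels[p++] = g; pixels[p++] = 255;
        }
    }
    return pixels;
}
```

Octaves can be layered exactly as in the 2D version; only the sample position changes. The [-1, 1] to [0, 255] remap also sidesteps the wrap-around that negative noise values cause when written straight into a Uint8Array.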
  6. I'm trying to use values generated with a 2D Perlin noise function to determine the height of each vertex on my sphere, just like a terrain height map but spherical. Unfortunately, I can't seem to figure it out. So far it's easy to push any particular vertex along its calculated normal, and that seems to work. As you can see in the following image, I'm pulling only one vertex along its normal vector. This was accomplished with the following code (no noise yet, btw):

```javascript
// Happens after normals are calculated for every vertex in the model
// xLen and yLen are the segments and rings of the sphere
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    // Normals
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    let noise = 1.5;

    // Just pull one vert...
    if (x === 18 && y === 12) {
      // Verts
      model.verts[index] = nx * noise;
      model.verts[index + 1] = ny * noise;
      model.verts[index + 2] = nz * noise;
    }

    index += 3;
  }
}
```

But what if I want to use 2D Perlin noise values on my sphere to create mountains on top of it? I thought it would be easy to displace the sphere's vertices using its normals and Perlin noise, but clearly I'm way off. This horrible object was created with the following code:

```javascript
// Happens after normals are calculated for every vertex in the model
// xLen and yLen are the segments and rings of the sphere
// Keep in mind I'm not using a height map image; I'm feeding the noise value in directly.
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    // Normals
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    const sampleX = x * 1.5;
    const sampleY = y * 1.5;
    let noise = perlin2(sampleX, sampleY);

    // Update model verts height
    model.verts[index] = nx * noise;
    model.verts[index + 1] = ny * noise;
    model.verts[index + 2] = nz * noise;

    index += 3;
  }
}
```

I have a feeling the direction I'm pulling the vertices in is okay; the problem might be the intensity. Perhaps I need to clamp the noise value? I've seen terrain planes where the mesh is created based on the height map image dimensions. In my case, the sphere model's verts and normals are already calculated, and I want to add height afterwards (but before creating the VAO). Is there a way I could accomplish this so my sphere displays terrain-like geometry on it? Hope I was able to explain myself properly. Thanks!
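An illustrative sketch (not from the original post) of the usual fix: treat the noise as an offset from a base radius rather than as the radius itself. perlin2 returns values in roughly [-1, 1], so using it directly collapses vertices to (or through) the centre wherever the noise is near zero or negative; "radius + amplitude * noise" keeps the sphere intact and raises mountains on top of it. `perlin2` is assumed here, matching the poster's function.

```cpp
#include <vector>

float perlin2(float x, float y); // assumed 2D Perlin noise in [-1, 1]

struct Model { std::vector<float> verts, normals; };

// Displace each vertex along its unit normal from a base radius; the noise
// only perturbs the surface instead of replacing the radius outright.
void displaceSphere(Model& model, int xLen, int yLen,
                    float baseRadius, float amplitude, float frequency)
{
    std::size_t index = 0;
    for (int x = 0; x <= xLen; ++x) {
        for (int y = 0; y <= yLen; ++y) {
            float nx = model.normals[index];
            float ny = model.normals[index + 1];
            float nz = model.normals[index + 2];
            float n = perlin2(x * frequency, y * frequency); // [-1, 1]
            float r = baseRadius + amplitude * n;            // stays well above 0
            model.verts[index]     = nx * r;
            model.verts[index + 1] = ny * r;
            model.verts[index + 2] = nz * r;
            index += 3;
        }
    }
}
```

With, say, baseRadius = 1.0 and amplitude = 0.1, the displacement is a gentle 10% of the radius. Note that sampling noise in (x, y) grid space still leaves the wrap-around seam discussed in the previous post.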
  7. Hello, I have a custom binary image file; it is essentially a custom version of DDS, made up of 2 important parts:

```cpp
struct FileHeader
{
    dword m_signature;
    dword m_fileSize;
};

struct ImageFileInfo
{
    dword m_width;
    dword m_height;
    dword m_depth;
    dword m_mipCount;  // at least 1
    dword m_arraySize; // at least 1
    SurfaceFormat m_surfaceFormat;
    dword m_pitch;     // length of a scanline
    dword m_byteCount;
    byte* m_data;
};
```

It uses a custom BinaryIO class I wrote to read and write binary data. The majority of the data is unsigned ints, i.e. dwords, so I'll only show the dword functions:

```cpp
bool BinaryIO::WriteDWord(dword value)
{
    if (!m_file || (m_mode == BINARY_FILEMODE::READ))
    {
        // log: file is null, or you tried to write to a read-only file!
        return false;
    }

    byte bytes[4];
    bytes[0] = (value & 0xFF);
    bytes[1] = (value >> 8) & 0xFF;
    bytes[2] = (value >> 16) & 0xFF;
    bytes[3] = (value >> 24) & 0xFF;
    m_file.write((char*)bytes, sizeof(bytes));
    return true;
}

//-----------------------------------------------------------------------------

dword BinaryIO::ReadDword()
{
    if (!m_file || (m_mode == BINARY_FILEMODE::WRITE))
    {
        // log: file is null, or you tried to read from a write-only file!
        return NULL;
    }

    dword value;
    byte bytes[4];
    m_file.read((char*)&bytes, sizeof(bytes));
    value = (bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | bytes[3] << 24);
    return value;
}
```

So, as you can imagine, you end up with a read loop like this:

```cpp
byte* inBytesIterator = m_fileInfo.m_data;
for (unsigned int i = 0; i < m_fileInfo.m_byteCount; i++)
{
    *inBytesIterator = binaryIO.ReadByte();
    inBytesIterator++;
}
```

And finally, to read it into DX11 buffer memory, we have the following:

```cpp
// Pass the data to the GPU, remembering mips
D3D11_SUBRESOURCE_DATA* initData = new D3D11_SUBRESOURCE_DATA[m_mipCount];
ZeroMemory(initData, sizeof(D3D11_SUBRESOURCE_DATA));

// Used as an iterator
byte* source = texDesc.m_data;
byte* endBytes = source + m_totalBytes;

int index = 0;
for (int i = 0; i < m_arraySize; i++)
{
    int w = m_width;
    int h = m_height;
    int numBytes = GetByteCount(w, h);

    for (int j = 0; j < m_mipCount; j++)
    {
        if ((m_mipCount <= 1) || (w <= 16384 && h <= 16384))
        {
            initData[index].pSysMem = source;
            initData[index].SysMemPitch = GetPitch(w);
            initData[index].SysMemSlicePitch = numBytes;
            index++;
        }

        if (source + numBytes > endBytes)
        {
            LogGraphics("Too many Bytes!");
            return false;
        }

        // Divide by 2
        w = w >> 1;
        h = h >> 1;
        if (w == 0) { w = 1; }
        if (h == 0) { h = 1; }
    }
}
```

It seems rather slow, particularly for big textures. Is there any way I could optimize this? As the renderer grows to render multiple textured objects, the loading times may become problematic. At the moment it takes around 2 seconds to load a 4096x4096 texture; you can see the output in the attached images. Thanks.
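The per-byte ReadByte() loop is the usual culprit for load times like this: every byte pays for a stream call, state checks and (in debug builds) un-inlined function overhead. As an illustrative sketch (assuming the underlying std::ifstream is accessible), the whole payload can be pulled in with a single bulk read:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Read the entire pixel payload in one call instead of byteCount calls.
std::vector<std::uint8_t> readPayload(std::ifstream& file, std::uint32_t byteCount)
{
    std::vector<std::uint8_t> data(byteCount);
    file.read(reinterpret_cast<char*>(data.data()), byteCount);
    return data;
}
```

The header fields can keep going through ReadDword(); it is only the bulk image data where per-byte calls dominate. A 4096x4096 RGBA texture is 64 MB, which a single read() typically brings in at disk speed rather than at call-overhead speed.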
  8. Howdy! Say you have a mesh whose most-blended vertex is attached to 4 bones (and so has 4 weights that aren't 0 or 1), so the rest of the mesh's vertices are also stored with 4 bone slots. Now suppose one vertex is only attached to a single bone, so it has a weight of 1.0. What do you attach that vertex's other 3 bone slots to?

1. The root bone, with 0.0 weights? Or,
2. attach it to -1 ('no bone') and then use an if() statement in HLSL for the transformation calculation?

Thanks for your input!
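A sketch of the first option, which is the common one (illustrative, not from the original post): pad every vertex to a fixed 4 influences by repeating any valid bone index with zero weight. Because each influence is multiplied by its weight in the skinning sum, the padded entries contribute nothing, and the shader needs no per-influence branch.

```cpp
#include <cstdint>

struct SkinnedVertex
{
    std::uint8_t boneIndex[4];
    float        boneWeight[4];
};

// Fill unused influence slots; usedCount is how many real influences the
// vertex has (1..4). Index 0 is arbitrary: a weight of 0 nullifies the entry.
void padInfluences(SkinnedVertex& v, int usedCount)
{
    for (int i = usedCount; i < 4; ++i)
    {
        v.boneIndex[i]  = 0;
        v.boneWeight[i] = 0.0f;
    }
}
```

The zero-weight padding keeps the vertex layout and shader uniform across the whole mesh, which is generally cheaper than a divergent if() per influence.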
  9. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:

  • True cross-platform
    • Exact same client code for all supported platforms and rendering backends
    • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exact same HLSL shaders run on all platforms and all backends
  • Modular design
    • Components are clearly separated logically and physically and can be used as needed
    • Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule.)
    • No 15,000-line source files
  • Clear object-based interface
  • No global states
  • Key graphics features:
    • Automatic shader resource binding designed to leverage the next-generation rendering APIs
    • Multithreaded command buffer generation
    • 50,000 draw calls at 300 fps with the D3D12 backend
    • Descriptor, memory and resource state management
  • Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:

  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

```cpp
#include "RenderDeviceFactoryD3D12.h"
using namespace Diligent;

// ...

GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
// Load the dll and import the GetEngineFactoryD3D12() function
LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
auto *pFactoryD3D12 = GetEngineFactoryD3D12();

EngineD3D12Attribs EngD3D12Attribs;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

RefCntAutoPtr<IRenderDevice> pRenderDevice;
RefCntAutoPtr<IDeviceContext> pImmediateContext;
SwapChainDesc SwapChainDesc;
RefCntAutoPtr<ISwapChain> pSwapChain;
pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);
```

Creating Resources

Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

```cpp
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);
```

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

```cpp
TextureDesc TexDesc;
TexDesc.Name = "My texture 2D";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);
```

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:

  • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:

  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants, such as camera attribute or global light attribute constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

```cpp
ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader(Attrs, &pShader);
```

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

```cpp
// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;
```

The structure also defines the depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

```cpp
// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;
```

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

```cpp
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);
```

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into the 3 groups described above (static, mutable and dynamic). Static variables are expected to be set only once and may not be changed once a resource is bound; they are intended to hold global constants such as camera attribute or global light attribute constant buffers. They are bound directly to the shader object:

```cpp
PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);
```

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

```cpp
m_pPSO->CreateShaderResourceBinding(&m_pSRB);
```

Dynamic and mutable resources are then bound through the SRB object:

```cpp
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);
```

The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking a Draw Command

Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

```cpp
// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);
```

Also, all shader resources must be committed to the device context:

```cpp
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);
```

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

```cpp
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);
```

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.

  • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube; shows how to load shaders from files, and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object; shows how to load a texture from a file, create a shader resource binding object, and sample a texture in the shader.
  • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object, with a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The Atmospheric Scattering sample is a more advanced example; it demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository also includes an Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  10. Hi everyone, I'm currently pretty desperate to figure out how to translate the planes in map files (Quake/Half-Life) into (indexed) triangles. Most of my information about map files comes from the Valve Developer Community [1]. I'm also aware of Stefan Hajnoczi's documentation [2], but it is not written clearly enough for me to follow. I have freshened up my linear algebra knowledge of planes with Points, lines, and planes [3] and Wikipedia [4], and I have a good understanding of how to turn three points into a plane, giving Ax + By + Cz + D = 0, where (A, B, C) is the plane's normal vector and D is its distance from the origin. I also know how to calculate the intersection of two or three planes; I used code similar to Stefan's to achieve this. The thing with Stefan's code is that I am trying a different approach, without linked lists (which is what he uses). I want to solve this problem in my own coding conventions and fully understand the mathematical problem, in order to implement it correctly (and be able to modify/optimize it). I had in mind a procedure similar to the one in Valve's documentation (see figure [5]). What am I missing in order to build the cube? What are the individual steps to achieve this? I think I only need a little poke in the right direction. Any further sources are greatly appreciated. I've looked at source code from e.g. TrenchBroom and other projects on GitHub, but I just want a simple solution first, so that I can then get into details like texture mapping etc. Best regards, Daniel

References
[1] https://developer.valvesoftware.com/wiki/Valve_Map_Format
[2] https://github.com/stefanha/map-files
[3] http://paulbourke.net/geometry/pointlineplane/
[4] https://en.wikipedia.org/wiki/Plane_(geometry)
[5] https://developer.valvesoftware.com/w/images/0/0d/Brush_planes.gif
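An illustrative sketch of the core step (under the post's plane convention dot(n, p) + d = 0, with normals pointing out of the brush): every triple of brush planes that meets in a single point yields a candidate vertex, and the candidate is kept only if it lies on or behind every plane of the brush. The Vec3/Plane types and helpers are assumptions for self-containment.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; }; // dot(n, p) + d = 0, n points outward

static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(Vec3 a, float s){ return { a.x*s, a.y*s, a.z*s }; }
static Vec3  add(Vec3 a, Vec3 b)   { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// Cramer's-rule intersection of three planes; returns false when they do not
// meet in a single point (two or more are parallel, or all share a line).
bool intersect3(const Plane& a, const Plane& b, const Plane& c, Vec3& out)
{
    Vec3 bxc = cross(b.n, c.n);
    float denom = dot(a.n, bxc);
    if (std::fabs(denom) < 1e-6f) return false;
    Vec3 p = add(add(scale(bxc,             -a.d),
                     scale(cross(c.n, a.n), -b.d)),
                     scale(cross(a.n, b.n), -c.d));
    out = scale(p, 1.0f / denom);
    return true;
}

// A candidate vertex belongs to the brush only if no plane has it in front.
bool insideBrush(const std::vector<Plane>& planes, Vec3 p, float eps = 1e-3f)
{
    for (const Plane& pl : planes)
        if (dot(pl.n, p) + pl.d > eps) return false;
    return true;
}
```

Each kept vertex belongs to the faces of the three planes that generated it; sorting each face's vertices by angle around the face centre and emitting a triangle fan then gives the indexed triangles.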
  11. I have tried a few brick material assets from the Unity Asset Store, but none of them seem to actually work. I am trying to add a brick material to a 3D mesh. The assets I have used so far are the "brick walls" package and "brick 1 - materials". Does anyone know of brick materials that work in Unity?
  12. I'm trying to use the stb_truetype library to rasterize fonts in DirectX, but I'm having problems with the background of the texture: a black background always appears in the texture, like in this screenshot.

```cpp
auto load_font( const char* font_dir ) -> std::optional< stbtt_fontinfo >
{
    auto font_info = stbtt_fontinfo { };
    auto raw_font = std::fopen( font_dir, "rb" );
    if ( !raw_font )
        return std::nullopt;

    std::fseek( raw_font, 0, SEEK_END );
    auto font_size = std::ftell( raw_font );
    std::fseek( raw_font, 0, SEEK_SET );

    auto font = new std::uint8_t[ font_size ];
    fread( font, font_size, 1, raw_font );
    fclose( raw_font );

    return ( stbtt_InitFont( &font_info, font, 0 ) ) ? std::make_optional( font_info ) : std::nullopt;
}

auto gen_text_texture( IDirect3DDevice9* device, const stbtt_fontinfo font_info, const std::string& txt ) -> IDirect3DTexture9*
{
    if ( font_info.data == nullptr )
        return nullptr;

    auto b_w = 512, b_h = 20, l_f = 20;
    auto bitmap = std::make_unique< std::uint8_t[] >( b_w * b_h );
    auto scale = stbtt_ScaleForPixelHeight( &font_info, l_f );

    auto x = 0, ascent = 0, descent = 0, lineGap = 0;
    stbtt_GetFontVMetrics( &font_info, &ascent, &descent, &lineGap );
    ascent *= scale;
    descent *= scale;

    for ( auto i = 0u; i < txt.size(); ++i )
    {
        auto c_x1 = 0, c_y1 = 0, c_x2 = 0, c_y2 = 0;
        stbtt_GetCodepointBitmapBox( &font_info, txt[ i ], scale, scale, &c_x1, &c_y1, &c_x2, &c_y2 );

        auto y = ascent + c_y1;
        auto byteOffset = x + ( y * b_w );
        stbtt_MakeCodepointBitmap( &font_info, bitmap.get() + byteOffset, c_x2 - c_x1, c_y2 - c_y1, b_w, scale, scale, txt[ i ] );

        auto ax = 0;
        stbtt_GetCodepointHMetrics( &font_info, txt[ i ], &ax, 0 );
        x += ax * scale;

        auto kern = stbtt_GetCodepointKernAdvance( &font_info, txt[ i ], txt[ i + 1 ] );
        x += kern * scale;
    }

    IDirect3DTexture9* texture { };
    if ( FAILED( D3DXCreateTexture( device, b_w, b_h, 1, D3DUSAGE_DYNAMIC, D3DFMT_L8, D3DPOOL_DEFAULT, &texture ) ) )
        return nullptr;

    D3DLOCKED_RECT rect;
    if ( FAILED( texture->LockRect( 0, &rect, nullptr, 0 ) ) )
        return nullptr;

    memcpy( rect.pBits, bitmap.get(), b_w * b_h );
    texture->UnlockRect( 0 );
    return texture;
}

auto print_text( IDirect3DDevice9* device, ID3DXSprite* sprite, const stbtt_fontinfo font_info, float x, float y, const std::string& txt, D3DCOLOR color ) -> void
{
    auto txt_texture = gen_text_texture( device, font_info, txt );
    if ( txt_texture )
    {
        if ( sprite && SUCCEEDED( sprite->Begin( D3DXSPRITE_ALPHABLEND ) ) )
        {
            sprite->Draw( txt_texture, nullptr, nullptr, &D3DXVECTOR3 { x, y, 0.f }, color );
            sprite->End();
        }
        txt_texture->Release();
    }
}

auto create_sprite( IDirect3DDevice9* device ) -> ID3DXSprite*
{
    ID3DXSprite* output { };
    D3DXCreateSprite( device, &output );
    return output;
}

auto __stdcall hk_endscene( IDirect3DDevice9* device ) -> HRESULT
{
    static auto windows_arial_font = load_font( "C:\\Windows\\Fonts\\Arial.ttf" ).value_or( stbtt_fontinfo { } );
    static auto font_sprite = create_sprite( device );

    print_text( device, font_sprite, windows_arial_font, 50.f, 50.f, "Teste", D3DCOLOR_ARGB( 255, 255, 000, 000 ) );
    print_text( device, font_sprite, windows_arial_font, 50.f, 90.f, "Test test", D3DCOLOR_ARGB( 255, 255, 000, 000 ) );

    return o_endscene( device );
}
```
  13. I'll keep this high-level, as I'm not the developer in question, but in our WebGL/3JS project we have some models which are straight tubes. We manipulate these to follow splines using shaders, so as far as the 'engine' is concerned they are straight tubes; they only get deformed at the rendering stage to follow their real path. This means that using picking to detect the mouse hovering over a model doesn't work: it picks the straight-tube version. I gather a lot of geometrical work is done in GPU/shaders these days, so I wondered whether that makes this a common problem with some known solutions/ideas?
  14. So, I developed an engine a while back following ThinMatrix's tutorials, and it worked perfectly. However, upon trying to create my own simple lightweight game engine from scratch, I hit a snag: I created an engine that only wants to render my specified background color, and nothing else. I first tried to render just one cube, and when that failed I figured that I probably just had the coordinates set incorrectly, so I went and generated a hundred random cubes... Nothing. Not even a framerate drop. So I figure that they aren't being passed through the shaders; however, the shaders appear to be functioning, as I'm getting no errors (to my knowledge; I can't be sure). The engine itself is going to be open source and free anyway, so I don't mind posting the source here. Coded in Java, using OpenGL (from LWJGL), in Eclipse (Neon) project format. Warning: when first running the engine, it will spit out an error saying it couldn't find a config file; this will then generate a new folder in your %appdata% directory labeled 'Fusion Engine' with a Core.cfg file. This file can be opened in any old text editor, so if you aren't comfortable with that, just change it in the source at "src/utility/ConfigManager.java" before running. Just ask if you need more info. Please, I've been trying to fix this for a month now. Fusion Engine V2.zip
  15. I am trying to write a program to rotate an octagonal prism (an "octagon cube"). I have the front and back faces completed, but I can't seem to figure out the vertices for the right and top faces. Can someone please help me? Thanks for your time! Here are the front vertices:

(-0.5, -1.0, 1.0)
(0.5, -1.0, 1.0)
(1.0, -0.5, 1.0)
(1.0, 0.5, 1.0)
(0.5, 1.0, 1.0)
(-0.5, 1.0, 1.0)
(-1.0, 0.5, 1.0)
(-1.0, -0.5, 1.0)
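An illustrative sketch of how the remaining faces can be generated from the front ring (assuming the back face is the same octagon at z = -1, and counter-clockwise order is front-facing when viewed from outside): each side face, including the right and top ones, is a quad joining one edge of the front ring to the matching edge of the back ring. For example, the right face uses the front edge (1.0, -0.5, 1.0) to (1.0, 0.5, 1.0) plus the same two points at z = -1.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Front ring, counter-clockwise when viewed from +Z (same order as the post).
static const Vec3 kRing[8] = {
    {-0.5f, -1.0f, 1.0f}, { 0.5f, -1.0f, 1.0f}, { 1.0f, -0.5f, 1.0f},
    { 1.0f,  0.5f, 1.0f}, { 0.5f,  1.0f, 1.0f}, {-0.5f,  1.0f, 1.0f},
    {-1.0f,  0.5f, 1.0f}, {-1.0f, -0.5f, 1.0f},
};

// Emit the 8 side faces as pairs of outward-facing triangles.
std::vector<Vec3> buildSideFaces()
{
    std::vector<Vec3> tris;
    for (int i = 0; i < 8; ++i)
    {
        Vec3 f0 = kRing[i];
        Vec3 f1 = kRing[(i + 1) % 8];
        Vec3 b0 = { f0.x, f0.y, -1.0f }; // same x, y on the back face
        Vec3 b1 = { f1.x, f1.y, -1.0f };
        // Quad f0-b0-b1-f1, split into two CCW triangles seen from outside.
        tris.insert(tris.end(), { f0, b0, b1 });
        tris.insert(tris.end(), { f0, b1, f1 });
    }
    return tris;
}
```

The back face itself is the front ring with z = -1 and the vertex order reversed, so that it faces away from the viewer.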
  16. This is for Direct3D 9. (Yes, sorry, I am about to upgrade; please bear with me.) Is it possible to render into 2 targets and 1 stencil view simultaneously? For DX9, do I have to do it like this? I may take some performance hit, but that's OK; I am going to upgrade anyway.

```cpp
pd3dDevice->OMGetRenderTargets( 1, &pOldRTV, &pOldDSV );

pd3dDevice->SetRenderTarget(1, pParticleView);
// Render the particles
RenderParticles( pd3dDevice, pEffect, pVB, pParticleTex, numParts, renderTechnique );

pd3dDevice->SetRenderTarget(2, pParticleColorView);
RenderParticles( pd3dDevice, pEffect, pVB, pParticleTex, numParts, renderTechnique );

CComPtr<IDirect3DSurface9> pStencilBuffer = 0;
pd3dDevice->GetDepthStencilSurface(&pStencilBuffer);
pd3dDevice->SetRenderTarget(3, pStencilBuffer);
RenderParticles( pd3dDevice, pEffect, pVB, pParticleTex, numParts, renderTechnique );
```
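For reference, a hedged sketch of how multiple render targets are normally driven in Direct3D 9 (the surface parameters are stand-ins for the poster's own): all colour surfaces are bound to their slots before a single draw, the pixel shader writes COLOR0/COLOR1 in one pass, and the depth/stencil surface is not a render-target slot at all; it is bound separately and shared by every target. Support can be checked via D3DCAPS9::NumSimultaneousRTs.

```cpp
#include <d3d9.h>

// All pointer parameters are assumptions standing in for the poster's surfaces.
void renderParticlesMRT(IDirect3DDevice9*  device,
                        IDirect3DSurface9* particleSurface,
                        IDirect3DSurface9* particleColorSurface,
                        IDirect3DSurface9* depthStencil)
{
    device->SetRenderTarget(0, particleSurface);      // colour target 0
    device->SetRenderTarget(1, particleColorSurface); // colour target 1
    device->SetDepthStencilSurface(depthStencil);     // one shared depth/stencil

    // ... draw once; the effect's pixel shader outputs COLOR0 and COLOR1
    //     in the same pass, filling both targets simultaneously.

    device->SetRenderTarget(1, nullptr);              // unbind the extra target
}
```

Also note that OMGetRenderTargets in the snippet above is a Direct3D 10/11 call; the D3D9 equivalents are GetRenderTarget and GetDepthStencilSurface.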
  17. Hi. To improve my programming skills, I recently tried to create some tiny shaders with a more ambitious effect than just displaying a single color (example below: animated 3D noise). Could someone share even more interesting tricks for the HLSL language (especially in the context of writing shorter code)?

```hlsl
// 620 chars without whitespace
// Apply a material with this shader to a quad, and that's all
// Compiled in Unity 2018.1.0f2
// https://github.com/przemyslawzaworski/Unity3D-CG-programming
Shader "I"
{
    Subshader
    {
        Pass
        {
            Cull Off
            CGPROGRAM
            #pragma vertex V
            #pragma fragment P
            #define l lerp

            half k(half3 x)
            {
                half3 p = x - frac(x), f = x - p, n = {1, 0, 0}, t = {1, 9, 57};
                f *= f * (3 - 2 * f);
                #define h(m) frac(cos(dot(m, t)) * 1e5)
                return l(l(l(h(p), h(p + n.xyy), f.x), l(h(p + n.yxy), h(p + n.xxy), f.x), f.y),
                         l(l(h(p + n.yyx), h(p + n.xyx), f.x), l(h(p + n.yxx), h(p + 1), f.x), f.y),
                         f.z);
            }

            void V(uint i : SV_VertexID, out half4 c : POSITION)
            {
                c = half4((i << 1 & 2) * 2 - 1., 1 - 2. * (i & 2), 1, 1);
            }

            void P(half4 u : POSITION, out half s : COLOR)
            {
                u = half4(9 * u.xy / _ScreenParams, _Time.g, 0);
                for (half i; i < 1;
                     i += .02, u.y -= .1,
                     u.w = (k(u) + k(u + 9.5)) / 2,
                     s = l(s, u.w, smoothstep(0, 1, (u.w - i) / fwidth(u.w)))) {}
            }
            ENDCG
        }
    }
}
```
  18. I want to load a texture in Lab color space for mean-shift clustering operations. I've been looking through the D3DFMT... parameters, but the image must stay in the original RGBA format. Any ideas? Thanks, Jack
  19. Hi. I assume most of you are familiar with Telltale Games. Since the Telltale Tool isn't public, I was wondering whether it is possible to achieve MCSM (Minecraft: Story Mode) graphics in Unity. Of course, I will not be making the whole game; I'm just wondering whether the graphics are possible in Unity.
  20. Hello, I have been trying to set up multiple render target views in a traditional OOP-style architecture (as opposed to DOD). DX11 forces binding of render targets as an array, together with the depth view. This is annoying when it comes to binding multiple buffers when you have objects such as a DX11RenderTarget2D which stores an ID3D11RenderTargetView*. If it worked like everything else does, in the sense of using slots, you could write:

SetRenderTarget(slot, RenderTarget);

Then it would be a lot better, as you could set just the slots you want, and either set the depth buffer or leave the default one by passing null, as is the way of a stateful API. Any input on how others have managed their render targets would be appreciated; storing an array of ID3D11RenderTargetView*[8] doesn't seem like a good idea due to ownership and ref counting. Thanks.
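One pattern that fits the slot-style wish above (an illustrative sketch, with names invented here rather than an established API): a small binder that caches the requested views as non-owning raw pointers and flushes them through OMSetRenderTargets right before drawing. The RenderTarget2D wrappers keep ownership and ref counts; the binder never AddRef()s.

```cpp
#include <d3d11.h>

class RenderTargetBinder
{
public:
    void SetRenderTarget(UINT slot, ID3D11RenderTargetView* rtv)
    {
        if (slot < 8 && m_rtvs[slot] != rtv) { m_rtvs[slot] = rtv; m_dirty = true; }
    }

    void SetDepthStencil(ID3D11DepthStencilView* dsv)
    {
        if (m_dsv != dsv) { m_dsv = dsv; m_dirty = true; }
    }

    // Call before each draw/pass; a null depth view falls back to the default.
    void Apply(ID3D11DeviceContext* ctx, ID3D11DepthStencilView* defaultDsv)
    {
        if (!m_dirty) return;
        ctx->OMSetRenderTargets(8, m_rtvs, m_dsv ? m_dsv : defaultDsv);
        m_dirty = false;
    }

private:
    ID3D11RenderTargetView* m_rtvs[8] = {}; // non-owning; unset slots stay null
    ID3D11DepthStencilView* m_dsv = nullptr;
    bool m_dirty = false;
};
```

Trailing null entries in the array simply unbind those slots, so the fixed-size cache does not force the binder to own anything.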
  21. Hi everyone : ) I'm trying to implement SSAO with D3D12 (using the implementation found on learnopengl.com, https://learnopengl.com/Advanced-Lighting/SSAO), but I seem to have a performance problem... Here is part of the SSAO pixel shader code:

```hlsl
Texture2D PositionMap : register(t0);
Texture2D NormalMap : register(t1);
Texture2D NoiseMap : register(t2);
SamplerState s1 : register(s0);

// I hard-coded the variables just for the test
const static int kernel_size = 64;
const static float2 noise_scale = float2(632.0 / 4.0, 449.0 / 4.0);
const static float radius = 0.5;
const static float bias = 0.025;

cbuffer ssao_cbuf : register(b0)
{
    float4x4 gProjectionMatrix;
    float3 SSAO_SampleKernel[64];
}

float main(VS_OUTPUT input) : SV_TARGET
{
    [....]
    float occlusion = 0.0;
    for (int i = 0; i < kernel_size; i++)
    {
        float3 ksample = mul(TBN, SSAO_SampleKernel[i]);
        ksample = pos + ksample * radius;

        float4 offset = float4(ksample, 1.0);
        offset = mul(gProjectionMatrix, offset);
        offset.xyz /= offset.w;
        offset.xyz = offset.xyz * 0.5 + 0.5;

        float sampleDepth = PositionMap.Sample(s1, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(pos.z - sampleDepth));
        occlusion += (sampleDepth >= ksample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    [....]
}
```

The problem is this for loop. When I run it, it takes around 140 ms to draw the frame (a simple torus knot...) on a GTX 770. Without this loop, it's 5 ms. Running it without the PositionMap sampling and the matrix multiplication takes around 25 ms. I understand that matrix multiplication and sampling are "expensive", but I don't think that's enough to justify the sluggish drawing time. I assume the shader code from the tutorial works, so unless I've made something terribly stupid that I don't see, I suppose my problem comes from something I did wrong with D3D12 that I'm not aware of (I just started learning D3D12).

Both PositionMap and NormalMap are render targets from the g-buffer; for each one I created two descriptor heaps, one of type D3D12_DESCRIPTOR_HEAP_TYPE_RTV and one of type D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV, and called both CreateRenderTargetView and CreateShaderResourceView. The NoiseMap only has one descriptor heap of type D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV. Before calling DrawIndexedInstanced for the SSAO pass, I copy the relevant descriptors to a descriptor heap that I then bind, like so:

```cpp
CD3DX12_CPU_DESCRIPTOR_HANDLE ssao_heap_hdl(_pSSAOPassDesciptorHeap->GetCPUDescriptorHandleForHeapStart());
device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _gBuffer.PositionMap().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
ssao_heap_hdl.Offset(CBV_descriptor_inc_size);
device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _gBuffer.NormalMap().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
ssao_heap_hdl.Offset(CBV_descriptor_inc_size);
device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _ssaoPass.GetNoiseTexture().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

ID3D12DescriptorHeap* descriptor_heaps[] = { _pSSAOPassDesciptorHeap };
pCommandList->SetDescriptorHeaps(1, descriptor_heaps);
pCommandList->SetGraphicsRootDescriptorTable(0, _pSSAOPassDesciptorHeap->GetGPUDescriptorHandleForHeapStart());
pCommandList->SetGraphicsRootConstantBufferView(1, _cBuffSamplesKernel[0].GetVirtualAddress());
```

Debug and Release builds give me the same results, as do shader compilation flags with and without optimization. So, does anyone see something weird in my code that would cause the slowness?

By the way, when I run the pixel shader in the graphics debugger, this line:

offset.xyz /= offset.w;

does not seem to produce the expected results. The two rows in the following table are the values in the debugger before and after the execution of that line of code:

Name    | Value                                                                | Type
offset  | x = -1.631761000, y = 1.522913000, z = 2.634875000, w = 2.634875000 | float4
offset  | x = -0.619293700, y = 0.577983000, z = 2.634875000, w = 2.634875000 | float4

So X and Y are okay, but not Z. Please tell me if you need more info/code. Thank you for your help!
  22. Hi, everybody. I'm raising this topic in connection with my recent transition to developing on Unreal Engine exclusively in C++. As everyone knows, there is very little documentation for this part of the engine, and I have spent a lot of time hunting for information about it. I rummaged through GitHub to find worthy implementation examples, but came to the conclusion that the best way to learn the engine is to look for answers in its source code. I want to share what I dug up, and perhaps someone can help me with my problem:

  • Unreal Engine 4 Rendering
  • Possible to use my own pure HLSL and GLSL shader code
  • Jason Zink, Matt Pettineo, Jack Hoxley - Practical Rendering with DirectX 11 - 2011.pdf

In general, I want to understand how to put into practice the concepts of FGlobalShader and UPrimitiveComponent, how to define an FVertexFactory using FPrimitiveSceneProxy, and how a shader is hooked up through the material system (FMaterialShader) with parameters passed to it. I have studied the source code of these classes and understand that a lot of parameters are passed through the material class. But, at least at this first stage, I do not want to use parameters that I don't fully understand; I'd rather proceed gradually and create a clean class that can pass just the parameters I need, while still fitting into the Unreal Engine pipeline concept. Has anyone dealt with this who would agree to share a small piece of example code? Thank you in advance!
  23. I am implementing baking in our engine. I've hit a problem with how to assign per-object UVs in the lightmap atlas. I am using UVAtlas to generate the lightmap UVs; most unwrapped meshes end up with a UV range of [0, 1), no matter how big they are. I want them to have the same UV density so that they pack well in the lightmap atlas. I have tried thekla_atlas to do the same thing too, but it does not seem to unwrap UVs according to mesh size. As far as I can see, unwrapping UV coordinates in world-space units would solve this, since all meshes would share the same scale, but I would rather not spend a lot of time writing and debugging that code. I am wondering whether there are existing methods I don't know of that can scale each lightmap UV chart to the same density. Thanks in advance. : )
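One way to get uniform density without a world-space unwrap (an illustrative sketch, with names invented here): keep the [0, 1) charts from UVAtlas, then rescale each mesh's chart by sqrt(world area / UV area), normalised by a global world-units-per-UV target, before packing. After this, one UV unit covers the same world-space area on every mesh.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static float triAreaWorld(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z }, e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
    Vec3 n  = { e1.y*e2.z - e1.z*e2.y, e1.z*e2.x - e1.x*e2.z, e1.x*e2.y - e1.y*e2.x };
    return 0.5f * std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
}

static float triAreaUV(Vec2 a, Vec2 b, Vec2 c)
{
    return 0.5f * std::fabs((b.x-a.x)*(c.y-a.y) - (c.x-a.x)*(b.y-a.y));
}

// Rescale the chart so texel density matches the global target everywhere.
void normalizeUVDensity(std::vector<Vec2>& uvs, const std::vector<Vec3>& positions,
                        const std::vector<int>& indices, float worldUnitsPerUV)
{
    float worldArea = 0.0f, uvArea = 0.0f;
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        worldArea += triAreaWorld(positions[indices[i]], positions[indices[i+1]], positions[indices[i+2]]);
        uvArea    += triAreaUV(uvs[indices[i]], uvs[indices[i+1]], uvs[indices[i+2]]);
    }
    if (uvArea <= 0.0f) return;
    float scale = std::sqrt(worldArea / uvArea) / worldUnitsPerUV;
    for (Vec2& uv : uvs) { uv.x *= scale; uv.y *= scale; }
}
```

The atlas packer then only has to place charts whose relative sizes already reflect the meshes' actual surface areas.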
  24. Hi people. The problem is that the floor moves together with the player. But if I don't draw the walls, I can see that the floor is just rotating, not moving in any direction; it is static. In a correct implementation, the player must walk over the floor. I attach the animation in which this effect is visible, and my code that implements the floor rendering. I have tried different versions of the floor-casting code (from forums and other sources, including the current version, which is from lodev). Note: I implemented the level geometry as sectors (sets of lines), not as blocks like in Wolf3D. I know that Doom uses a BSP tree and renders the floor in a different way, but I assume the current approach works with sectors too (just slowly).

```csharp
stripPosY = (ProjectionPlane.me().sizeInWorld.Y / 2 - (int)(stripHeight / 2));
stripPosY += (int)stripHeight - 1;
startPixel = stripPosY;

while (startPixel != ProjectionPlane.me().sizeInWorld.Y)
{
    float curdist = ProjectionPlane.me().sizeInWorld.Y / (2.0f * startPixel - ProjectionPlane.me().sizeInWorld.Y);
    float weight = curdist / (float)(minimalDist);

    float floorX = weight * ((PointF)minimalIntersection).X + (1.0f - weight) * (Player.me().worldPosition.X);
    float floorY = weight * ((PointF)minimalIntersection).Y + (1.0f - weight) * (Player.me().worldPosition.Y);

    int textureX = (int)(floorX * ProjectionPlane.imageFloor1.Width) % ProjectionPlane.imageFloor1.Width;
    int textureY = (int)(floorY * ProjectionPlane.imageFloor1.Height) % ProjectionPlane.imageFloor1.Height;
    textureX = (int)Math.Abs(textureX);
    textureY = (int)Math.Abs(textureY);

    SolidBrush b = new SolidBrush(ProjectionPlane.imageFloor1.GetPixel(textureX, textureY));
    g.FillRectangle(b, stripX, startPixel, 1, 1);
    startPixel++;
}
```
  25. How do I unpack the frame buffer when it is packed with the Compact YCoCg Frame Buffer technique?
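A hedged sketch of the decode side of "The Compact YCoCg Frame Buffer" (Mavridis and Papaioannou, JCGT 2012): each pixel stores luma plus one chroma channel, Co or Cg alternating in a checkerboard, so unpacking reconstructs the missing chroma from the horizontal and vertical neighbours (the paper uses an edge-aware filter; a plain average is shown here for brevity) and then converts YCoCg back to RGB. The fetch() accessor and the exact checkerboard phase are assumptions.

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec2 fetch(int x, int y); // assumed: returns (Y, Co-or-Cg) of the packed buffer

Vec3 unpackPixel(int x, int y)
{
    Vec2 c = fetch(x, y);

    // The 4 neighbours hold the *other* chroma channel on the checkerboard;
    // average them to estimate this pixel's missing chroma.
    float missing = 0.25f * (fetch(x - 1, y).y + fetch(x + 1, y).y +
                             fetch(x, y - 1).y + fetch(x, y + 1).y);

    bool even = ((x + y) & 1) == 0; // assumed phase: even pixels store Co
    float Y  = c.x;
    float Co = even ? c.y : missing;
    float Cg = even ? missing : c.y;

    // Standard YCoCg -> RGB. If the buffer stores chroma biased into [0, 1],
    // subtract 0.5 from Co and Cg first.
    return { Y + Co - Cg, Y + Cg, Y - Co - Cg };
}
```

In the paper, the neighbour average is replaced with an edge-aware filter that weights neighbours by luma similarity, which avoids chroma bleeding across geometry edges.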