Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.

Found 1195 results

  1. Hi guys, I am having trouble implementing a directional light in HLSL. I have been following this guide: https://www.gamasutra.com/view/feature/131275/implementing_lighting_models_with_.php?page=2 but the quad renders pure black no matter where the light vector points. My quad vertices are in the format position, texcoord, normal. This is the shader so far (a possible fix is sketched below):

        cbuffer ModelViewProjectionConstantBuffer : register(b0)
        {
            float4x4 worldmat;
            float4x4 worldviewmat;
            float4x4 worldviewprojmat;
            float4 vecLightDir;
        }

        struct VS_OUTPUT
        {
            float4 Pos : SV_POSITION;
            float3 Light : TEXCOORD0;
            float3 Norm : TEXCOORD1;
        };

        VS_OUTPUT vs_main(float4 position : POSITION, float2 texcoord : TEXCOORD, float3 normal : NORMAL)
        {
            float4 newpos;
            newpos = mul(position, worldmat);
            newpos = mul(newpos, worldviewmat);
            newpos = mul(newpos, worldviewprojmat);

            VS_OUTPUT Out = (VS_OUTPUT)0;
            Out.Pos = newpos;                            // transform position
            Out.Light = vecLightDir;                     // output light vector
            Out.Norm = normalize(mul(normal, worldmat)); // transform normal and normalize it
            return Out;
        }

        float4 ps_main(float3 Light : TEXCOORD0, float3 Norm : TEXCOORD1) : SV_TARGET
        {
            float4 diffuse = { 1.0f, 0.0f, 0.0f, 1.0f };
            float4 ambient = { 0.1f, 0.0f, 0.0f, 1.0f };
            return ambient + diffuse * saturate(dot(Light, Norm));
        }

    Any help would be truly appreciated, as this is the only real area of DX11 that I have difficulty with. Thanks in advance.
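    A likely culprit (a sketch, not a confirmed fix): the vertex shader above multiplies the position by worldmat, then worldviewmat, then worldviewprojmat, so the world and view transforms are applied more than once; and for a directional light, the surface-to-light vector is the negated light direction. A minimal corrected vertex shader, assuming worldviewprojmat already holds the combined world * view * projection matrix:

        VS_OUTPUT vs_main(float4 position : POSITION, float2 texcoord : TEXCOORD, float3 normal : NORMAL)
        {
            VS_OUTPUT Out = (VS_OUTPUT)0;
            Out.Pos = mul(position, worldviewprojmat);              // one combined transform
            Out.Light = normalize(-vecLightDir.xyz);                // vector toward the light
            Out.Norm = normalize(mul(normal, (float3x3)worldmat));  // rotate normal into world space
            return Out;
        }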
  2. I have been trying to work out how the ID3DInclude interface and its Open and Close methods work. I would like to add a custom search path for the D3DCompile function to use for some of my includes, but I have not found any working example. Could someone show me how to implement these methods? I would like D3DCompile to look in a custom path such as C:\Folder for some of the include files. Thanks.
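    For reference, a minimal sketch of such an implementation (untested; the folder name follows the question). Open() loads the included file from the custom directory and hands the buffer to the compiler; Close() frees it. An instance is passed as the pInclude argument of D3DCompile:

        #include <d3dcompiler.h>
        #include <fstream>
        #include <string>

        // Resolves #include directives against one fixed directory.
        class SimpleInclude : public ID3DInclude
        {
            std::string m_baseDir = "C:\\Folder\\";
        public:
            HRESULT __stdcall Open(D3D_INCLUDE_TYPE, LPCSTR pFileName,
                                   LPCVOID, LPCVOID *ppData, UINT *pBytes) override
            {
                std::ifstream file(m_baseDir + pFileName, std::ios::binary | std::ios::ate);
                if (!file)
                    return E_FAIL;
                const size_t size = static_cast<size_t>(file.tellg());
                char *data = new char[size];  // released in Close()
                file.seekg(0);
                file.read(data, size);
                *ppData = data;
                *pBytes = static_cast<UINT>(size);
                return S_OK;
            }
            HRESULT __stdcall Close(LPCVOID pData) override
            {
                delete[] static_cast<const char *>(pData);
                return S_OK;
            }
        };

        // Usage: SimpleInclude inc;
        // D3DCompile(src, srcLen, "shader.hlsl", nullptr, &inc,
        //            "main", "ps_5_0", 0, 0, &codeBlob, &errorBlob);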
  3. I am feeding 16-bit unsigned integer data into a compute shader and I need to compute a standard deviation. So I read in a series of samples and push them into float arrays:

        float vals1[9], vals2[9], vals3[9], vals4[9];
        int x = 0, y = 0;
        for (x = 0; x < 3; x++)
        {
            for (y = 0; y < 3; y++)
            {
                vals1[3 * x + y] = (float)(asuint(Input1[threadID.xy + int2(x - 1, y - 1)].x));
                vals2[3 * x + y] = (float)(asuint(Input2[threadID.xy + int2(x - 1, y - 1)].x));
                vals3[3 * x + y] = (float)(asuint(Input3[threadID.xy + int2(x - 1, y - 1)].x));
                vals4[3 * x + y] = (float)(asuint(Input4[threadID.xy + int2(x - 1, y - 1)].x));
            }
        }

    I can write these values out directly and the data is as expected:

        Output1[threadID.xy] = (uint)(vals1[4]);
        Output2[threadID.xy] = (uint)(vals2[4]);
        Output3[threadID.xy] = (uint)(vals3[4]);
        Output4[threadID.xy] = (uint)(vals4[4]);

    However, if I do anything to that data, it is destroyed. If I add vals1[4] = vals1[4] / 2; or vals1[4] = vals1[1] - vals1[4]; the data is gone and everything comes back 0. How does one convert a uint to a float, perform operations on it, and then convert back to a rounded uint?
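    For what it's worth, a common cause of exactly this symptom is mixing bit reinterpretation with value conversion: asuint()/asfloat() reinterpret the raw bits, while a C-style cast converts the numeric value, and a reinterpreted bit pattern survives an identity round trip but turns to garbage once arithmetic touches it. A minimal sketch of a pure value-conversion round trip (illustrative resource declarations, not the poster's):

        Texture2D<uint>   Input  : register(t0);
        RWTexture2D<uint> Output : register(u0);

        [numthreads(8, 8, 1)]
        void main(uint3 id : SV_DispatchThreadID)
        {
            float v = (float)Input[id.xy];   // value-convert the 16-bit sample
            v = v * 0.5f;                    // arithmetic in float is now safe
            Output[id.xy] = (uint)round(v);  // value-convert back, rounded
        }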
  4. Hello, I am trying to implement voxel cone tracing in my game engine. I have read many publications about it, but some crucial parts are still not clear to me. As a first step I am implementing the simplest "poor man's" method:

    a. My test scene (Sponza Atrium) is voxelized completely into a static 128^3 voxel grid (a structured buffer containing albedo).
    b. I don't care about conservative rasterization and don't use any sparse voxel access structure.
    c. Every voxel has the same color for every side (top, bottom, front, ...).
    d. One directional light injects light into the voxels (another structured buffer).

    I will try to describe what I think is correct (please correct me).

    GI lighting of a given vertex, ideal method:
    A. We would shoot many (e.g. 1000) rays into the hemisphere oriented along the normal of that vertex.
    B. We would take into account every occluder (which is a lot of work) and sample the color at each hit point.
    C. We would weight each sample by the cosine of the angle between the ray and the vertex normal, sum up all samples, and divide by the ray count.

    Voxel GI lighting: In principle we want to do the same thing with our voxel structure. Even if we knew the correct hit points for the vertex, we would still have the task of computing a weighted sum over many voxels.

    Saving the time spent on weighted color sums: To avoid summing the colors of individual voxels we build bricks or clusters. Every 8 neighbouring voxels make a level-1 "cluster voxel" (this is done recursively for many levels). The color of a side of a cluster voxel is the average of the colors of the four contained voxel sides with the same orientation. After this, we can sample far-away parts just by sampling the corresponding cluster voxel at the corresponding level and get the summed-up color. In practice this is done by mip-mapping a texture that contains the voxel colors, which also places the colors of neighbouring voxels close together in the texture.

    Cone tracing, how? Here my understanding is confused. How is the voxel structure traced efficiently? I simply cannot understand how the occlusion problem is solved quickly, so that we know which single voxel or cluster voxel of which level we have to sample. Suppose I am in a dark room filled with boxes of many different sizes and I have a pocket lamp with a pyramid-shaped light cone: I would see some single voxels near or far, and I would also see many cluster voxels of different sizes which are partly occluded. How do I compute a weighted sum over this lit area? E.g. if I want to sample a level-4 cluster voxel, I have to take into account what percentage of the area of this cluster voxel is occluded. (A sketch of the usual answer follows below.)

    Please be patient with me; I really try to understand, but maybe I need some more explanation than others.

    best regards
    evelyn
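    The usual answer to the occlusion question, sketched below with illustrative names: cone tracing does not resolve visibility exactly. A cone is approximated by stepping along its axis, sampling the mip level whose voxel size matches the cone's diameter at that distance, and compositing the samples front to back; the accumulated alpha is the fraction of the cone already blocked, so each new sample is weighted by (1 - alpha). Partial occlusion of a coarse cluster is handled statistically by its prefiltered alpha, not by testing individual voxels:

        // One cone through a voxel mip chain, front-to-back compositing.
        float4 TraceCone(Texture3D<float4> voxels, SamplerState samp,
                         float3 origin, float3 dir, float halfAngleTan, float voxelSize)
        {
            float3 color = 0;
            float  alpha = 0;
            float  dist  = voxelSize;               // start one voxel out to avoid self-sampling
            while (alpha < 0.95f && dist < 2.0f)    // 2.0 = max distance in a [0,1]^3 volume
            {
                float diameter = max(voxelSize, 2.0f * halfAngleTan * dist);
                float mip      = log2(diameter / voxelSize);   // coarser clusters farther out
                float4 s = voxels.SampleLevel(samp, origin + dir * dist, mip);
                color += (1.0f - alpha) * s.a * s.rgb;         // weighted by remaining visibility
                alpha += (1.0f - alpha) * s.a;                 // occlusion accumulates
                dist  += diameter * 0.5f;                      // step scales with cone width
            }
            return float4(color, alpha);
        }

    The diffuse GI for a vertex is then the cosine-weighted sum of a handful of such cones (typically 5 to 9) spread over the hemisphere, instead of 1000 rays.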
  5. How do I fill the gap between the sky and the terrain? By scaling the terrain, or with procedural terrain rendering?
  6. isu diss

    DX11 Light Shafts

    I decided to implement light shafts using http://sirkan.iit.bme.hu/~szirmay/lightshaft_link.htm So far I've only managed to implement the shadow map. Can anyone help me implement this in D3D11? (I mean the steps; I can do the rest.) I'm new to all these shadow maps and such.
  7. Hi all, as part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (HUD style). From what I've read there are a few options, in short:

    1. Write your own font sprite renderer.
    2. Use Direct2D/DirectWrite, combined with a DX11 render target/back buffer.
    3. Use an external library, like the DirectX Tool Kit, etc.

    I want to go for number 2, but the articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because it doesn't work directly with the DX11 device, but other articles say that this was 'patched' later on and should work now. Can someone shed some light on this and ideally provide an example or article on how to set it up? All input is appreciated.
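    For what it's worth, the 'patched' claim refers to Direct2D 1.1 (d2d1_1.h, Windows 8 and the Windows 7 Platform Update): it can share a D3D11 device directly through its DXGI interface, with no D3D10.1 device needed. A minimal sketch, assuming a D3D11 device created with D3D11_CREATE_DEVICE_BGRA_SUPPORT and an existing swap chain (d3dDevice and swapChain are illustrative names):

        #include <d2d1_1.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        ComPtr<ID2D1Factory1> d2dFactory;
        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, d2dFactory.GetAddressOf());

        ComPtr<IDXGIDevice> dxgiDevice;
        d3dDevice.As(&dxgiDevice);                  // query DXGI device from the ID3D11Device

        ComPtr<ID2D1Device> d2dDevice;
        d2dFactory->CreateDevice(dxgiDevice.Get(), &d2dDevice);
        ComPtr<ID2D1DeviceContext> d2dContext;
        d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

        // Wrap the back buffer as a D2D bitmap and draw DirectWrite text into it.
        ComPtr<IDXGISurface> backBuffer;
        swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
        D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
            D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
        ComPtr<ID2D1Bitmap1> targetBitmap;
        d2dContext->CreateBitmapFromDxgiSurface(backBuffer.Get(), &props, &targetBitmap);
        d2dContext->SetTarget(targetBitmap.Get());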
  8. I'm continuing to learn more about terrain rendering, and so far I've managed to load in a heightmap and render it as a tessellated wireframe (following Frank Luna's DX11 book). However, I'm getting some really weird behavior where a large section of the wireframe is rendered with a yellow color, even though my pixel shader is hard-coded to output white. The parts of the mesh that are discolored change as well, as pictured below (the mesh is being clipped by the far plane). Here is my pixel shader; as mentioned, I simply hard-code it to output white:

        float PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }

    I'm completely lost on what could be causing this, so any push in the right direction would be greatly appreciated. If I can help by providing more information, please let me know.
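    One detail worth double-checking (a guess based only on the snippet as posted): the function's declared return type is float, not float4, so the white literal is implicitly truncated to a single channel and the remaining channels of the render target are left undefined by the shader. The signature it likely needs:

        float4 PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }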
  9. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

    Features:
    • True cross-platform
      • Exactly the same client code for all supported platforms and rendering backends
      • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
      • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
      • Exactly the same HLSL shaders run on all platforms and all backends
    • Modular design
      • Components are clearly separated logically and physically and can be used as needed
      • Take only what you need for your project (don't want samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule.)
      • No 15,000-line source files
    • Clear object-based interface
    • No global state
    • Key graphics features:
      • Automatic shader resource binding designed to leverage the next-generation rendering APIs
      • Multithreaded command buffer generation: 50,000 draw calls at 300 fps with the D3D12 backend
      • Descriptor, memory and resource state management
    • Modern C++ features to make code fast and reliable

    The following platforms and low-level APIs are currently supported:
    • Windows Desktop: Direct3D11, Direct3D12, OpenGL
    • Universal Windows: Direct3D11, Direct3D12
    • Linux: OpenGL
    • Android: OpenGLES
    • MacOS: OpenGL
    • iOS: OpenGLES

    API Basics

    Initialization: The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...
        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the DLL and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
        pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

    Creating Resources: Device resources are created by the render device.
    The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate a BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

    Similarly, to create a texture, populate a TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "My texture 2D";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

    Initializing Pipeline State: Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

    Creating Shaders: To create a shader, populate a ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
    • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, GLSL for OpenGL and OpenGLES modes.
    • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
    • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

    To allow grouping of resources based on their expected frequency of change, Diligent Engine introduces a classification of shader variables:
    • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global-light-attribute constant buffers.
    • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
    • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

    This post describes the resource binding model in Diligent Engine.
    The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader(Attrs, &pShader);

    Creating the Pipeline State Object: To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

    The structure also defines the depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        RasterizerDesc.AntialiasedLineEnable = False;

    When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

    Binding Shader Resources: Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are expected to be set only once; they may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global-light-attribute constant buffers.
    They are bound directly to the shader object:

        PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

    Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

    Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

    The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

    Setting the Pipeline State and Invoking a Draw Command: Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

        // Clear render target
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

    Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

    When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that a graphics pipeline must be bound for a draw command, and a compute pipeline for a dispatch command. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

    Tutorials and Samples: The GitHub repository contains a number of tutorials and sample applications that demonstrate API usage.
    • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
    • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube; shows how to load shaders from files and create and use vertex, index and uniform buffers.
    • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object; shows how to load a texture from a file, create a shader resource binding object, and sample a texture in the shader.
    • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object with a unique transformation matrix for every copy.
    • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
    • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
    • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
    • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
    • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

    The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The atmospheric scattering sample is a more advanced example; it demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

    Integration with Unity: Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on native API interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  10. Hello, in my game engine I want to implement my own bone weight painting tool, i.e. a virtual brush painting tool for a mesh. I have already implemented my own dual quaternion skinning animation system with morphs (= blend shapes) and bone-driven corrective morphs (= a morph that depends on a bending or twisting bone). But now I have no idea what the best method is to implement a brush painting system. Some proposals:

    a. I would build a kind of additional vertex structure that can help me find the surrounding (neighbouring) vertex indices for a given central vertex index.
    b. The structure should also give information about the distance from the neighbouring vertices to the given central vertex.
    c. Calculate the strength of the color added to the central vertex and the neighbouring vertices by a formula with linear or quadratic distance falloff.
    d. The central vertex would be detected as the vertex hit by an orthogonal projection from my cursor (= brush) in world space onto the mesh. But my problem is that several vertices could be hit simultaneously; e.g. if I want to paint the inward side of the left leg, the right leg will also be hit.

    I think the given problem is quite typical and there are standard approaches that I don't know. Any help or tutorials are welcome. (A small sketch of one standard approach follows below.)

    P.S. I am working with SharpDX, DirectX11.
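    One standard approach matching proposals a-d, sketched with illustrative types and names: precompute a vertex adjacency list, pick the central vertex by ray-casting from the cursor (a ray hit, unlike an orthogonal projection of the whole mesh, returns only the nearest surface, which sidesteps the two-legs problem), then walk outward across edges so the influence spreads along the surface and can never jump to the other leg:

        #include <algorithm>
        #include <cmath>
        #include <queue>
        #include <vector>

        struct Float3 { float x, y, z; };

        static float Dist(const Float3 &a, const Float3 &b)
        {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // positions[v]: vertex position; neighbors[v]: indices of adjacent vertices;
        // center: the vertex hit by the cursor ray; weights: the values being painted.
        void PaintBrush(const std::vector<Float3> &positions,
                        const std::vector<std::vector<int>> &neighbors,
                        int center, float radius, float strength,
                        std::vector<float> &weights)
        {
            // Approximate geodesic distances out to 'radius' by relaxing edge lengths.
            std::vector<float> dist(positions.size(), -1.0f);
            std::queue<int> open;
            dist[center] = 0.0f;
            open.push(center);
            while (!open.empty())
            {
                int v = open.front(); open.pop();
                for (int n : neighbors[v])
                {
                    float d = dist[v] + Dist(positions[v], positions[n]);
                    if (d < radius && (dist[n] < 0.0f || d < dist[n]))
                    {
                        dist[n] = d;
                        open.push(n);
                    }
                }
            }
            // Quadratic falloff: full strength at the hit vertex, zero at the brush radius.
            for (size_t v = 0; v < positions.size(); ++v)
            {
                if (dist[v] < 0.0f)
                    continue;
                float t = 1.0f - dist[v] / radius;
                weights[v] = std::min(1.0f, weights[v] + strength * t * t);
            }
        }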
  11. Back around 2006 I spent a good year or two reading books and articles on this site, and gobbling up everything game-dev related I could. I started an engine in DX10 and got through the basics. I eventually gave up because I couldn't do the harder things. Now my C++ is 12 years stronger, my mind is better trained, and I am thinking of giving it another go. A lot has changed: there is no more standalone SDK, there is evidently a DirectX Tool Kit, XNA died, all the sweet sites I used to go to are 404, and Google searches all point to Unity and Unreal. I plainly don't like Unity or Unreal, but I might learn them for reference. So, what is the current path? Does everyone pretty much use the DirectX Tool Kit? Should I start there? I also read that DX12 is just expert-level DX11, so I guess I am going with DX11. Is there a current and up-to-date list of learning resources anywhere? I am about tired of 404s.
  12. Hi guys, I want to draw shadows for a directional light, but the shadows always disappear if I translate my mesh (a cube) in the world too far outside the bounds of my orthographic projection matrix. This is my code (based on an XNA sample I adapted for my project):

        // Matrix that will rotate points into the direction of the light
        Matrix lightRotation = Matrix.LookAtLH(Vector3.Zero, lightDir, Vector3.Up);
        BoundingFrustum cameraFrustum = new BoundingFrustum(Matrix.Identity);

        // Get the corners of the frustum
        Vector3[] frustumCorners = cameraFrustum.GetCorners();

        // Transform the positions of the corners into the direction of the light
        for (int i = 0; i < frustumCorners.Length; i++)
            frustumCorners[i] = Vector4F.ToVector3(Vector3.Transform(frustumCorners[i], lightRotation));

        // Find the smallest box around the points
        BoundingBox lightBox = BoundingBox.FromPoints(frustumCorners);
        Vector3 boxSize = lightBox.Maximum - lightBox.Minimum;
        Vector3 halfBoxSize = boxSize * 0.5f;

        // The position of the light should be in the center of the back panel of the box.
        Vector3 lightPosition = lightBox.Minimum + halfBoxSize;
        lightPosition.Z = lightBox.Minimum.Z;

        // We need the position back in world coordinates, so we transform
        // the light position by the inverse of the light's rotation
        lightPosition = Vector4F.ToVector3(Vector3.Transform(lightPosition, Matrix.Invert(lightRotation)));

        // Create the view matrix for the light
        this.view = Matrix.LookAtLH(lightPosition, lightPosition + lightDir, Vector3.Up);

        // Create the projection matrix for the light;
        // the projection is orthographic since we are using a directional light
        int amount = 25;
        this.projection = Matrix.OrthoOffCenterLH(boxSize.X - amount, boxSize.X + amount, boxSize.Y + amount, boxSize.Y - amount, -boxSize.Z - amount, boxSize.Z + amount);

    I believe the bug is that cameraFrustum is built from an identity matrix. I also tried the translation matrix of my camera position and also the view matrix of my camera, but without success. Can anyone tell me how to draw shadows from my directional light wherever my camera currently is in the scene? Greets, Benjamin
  13. Hi, I'm implementing a simple 3D engine based on DirectX11. I'm trying to render a skybox with a cubemap on it, and to do so I'm using the DDS Texture Loader from the DirectXTex library. I use texassemble to generate the cubemap (a texture array of 6 textures) into a DDS file that I load at runtime. I generated a cube "dome" and sample the texture using the position vector of the vertex as the sample coordinates (so far so good), but I always get the same face of the cubemap mapped onto the sky. As I look around I always get the same face (and it wobbles a bit if I move the camera). My code:

        // Texture.cpp:
        Texture::Texture(const wchar_t *textureFilePath, const std::string &textureType) : mType(textureType)
        {
            //CreateDDSTextureFromFile(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, &mResource, &mShaderResourceView);
            CreateDDSTextureFromFileEx(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(),
                textureFilePath, 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0,
                D3D11_RESOURCE_MISC_TEXTURECUBE, false, &mResource, &mShaderResourceView);
        }

        // SkyBox.cpp:
        void SkyBox::Draw()
        {
            // set cube map
            ID3D11ShaderResourceView *resource = mTexture.GetResource();
            Game::GetInstance()->GetDeviceContext()->PSSetShaderResources(0, 1, &resource);

            // set primitive topology
            Game::GetInstance()->GetDeviceContext()->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

            mMesh.Bind();
            mMesh.Draw();
        }

        // Vertex Shader:
        cbuffer Transform : register(b0)
        {
            float4x4 viewProjectionMatrix;
        };

        float4 main(inout float3 pos : POSITION) : SV_POSITION
        {
            return mul(float4(pos, 1.0f), viewProjectionMatrix);
        }

        // Pixel Shader:
        SamplerState cubeSampler;
        TextureCube cubeMap;

        float4 main(in float3 pos : POSITION) : SV_TARGET
        {
            float4 color = cubeMap.Sample(cubeSampler, pos.xyz);
            return color;
        }

    I tried both functions from the DDS loader but I keep getting the same result. All the results I found on the web are about the old SDK toolkits, but I'm using the new DirectXTex lib.
  14. Hello, I used the OMSetRenderTargetsAndUnorderedAccessViews function to set a UAV resource in the pixel shader. Everything is OK and there is no warning/error message. However, Visual Studio Graphics Diagnostics encounters a fatal error when I try to open the frame I captured. I've tried another PC and another project (the Intel project, which also binds a UAV resource to the pixel shader) to test, but the same error still occurs. It's tough to debug the shader without such a tool. If anyone knows what I should do, please tell me. Thanks for your reply! BTW, my Graphics Diagnostics engine version is 15.6.5, DirectX feature level 11_0, Shader Model 5_0. Thanks!
  15. The DirectX team has just published a blog post/article with a call to action for game developers to change their swapchain usage patterns: https://blogs.msdn.microsoft.com/directx/2018/04/09/dxgi-flip-model/ I wanted to give it some visibility, as well as start a discussion to see if there's any feedback from folks who have gone down this road in the past, or to hear from anybody who's trying this out as a result of the article.
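    For anyone wanting to try it, the core change the article asks for is small. A minimal sketch, assuming an existing IDXGIFactory2, a D3D11 device and an HWND (factory, device and hwnd are illustrative names): replace the legacy blit model (DXGI_SWAP_EFFECT_DISCARD) with DXGI_SWAP_EFFECT_FLIP_DISCARD and at least two buffers:

        DXGI_SWAP_CHAIN_DESC1 desc = {};                // Width/Height 0 = use the window size
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;                      // flip model forbids MSAA back buffers;
                                                        // render to an MSAA target and resolve instead
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount = 2;                           // flip model requires at least two buffers
        desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;

        Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain;
        factory->CreateSwapChainForHwnd(device.Get(), hwnd, &desc, nullptr, nullptr, &swapChain);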
  16. Hello guys, I tried to draw shadows in my scene with a directional light, but in the end result I see the shadow plus the view volume of my directional light as well (see image). Four years ago an acquaintance, whom I no longer have contact with, wrote me a shadow-map shader and the geometry shader for doing this, but he never fixed this issue, and I don't really have much knowledge of shadow math, so I never fixed it myself. I also tried to filter out the light's view-volume background with if (shadow > 0.15) shadow = 1.0f (shadow = lightIntensity), but with this filter the shadows of complex geometry look awful. My bias value is 0.0001f. Can anyone help me and explain what is wrong? Greets, Benjamin. P.S. I uploaded the two HLSL shaders as attachments: ShadowMap.fx, SimpleShader.fx
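    Without seeing the attached shaders this is only a guess, but the described artifact (the light's view volume showing up in the scene) is commonly fixed by treating anything that projects outside the shadow map as fully lit, rather than comparing against the map's border or thresholding the result. A sketch of the guard, with illustrative names:

        float ShadowFactor(Texture2D shadowMap, SamplerComparisonState cmpSampler,
                           float4 lightSpacePos, float bias)
        {
            float3 proj = lightSpacePos.xyz / lightSpacePos.w;
            float2 uv = proj.xy * float2(0.5f, -0.5f) + 0.5f;
            if (uv.x < 0.0f || uv.x > 1.0f || uv.y < 0.0f || uv.y > 1.0f || proj.z > 1.0f)
                return 1.0f;  // outside the light's volume: fully lit, no 0.15 threshold needed
            return shadowMap.SampleCmpLevelZero(cmpSampler, uv, proj.z - bias);
        }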
  17. Hi guys. I've been trying to solve this for a few days now but am getting nowhere. I am having trouble correctly displaying a model when using a diffuse and ambient light combination. In the picture you can see two larger cubes: the top one uses only ambient light, and the bottom one uses the ambient + diffuse combination. The smaller cube is just there to show where the light is located. Can someone help me understand why the cube is rendered the way it is? I was thinking it's a problem with the normals; I tried several different normal calculations and none of them work. Or maybe it's not a problem with the normals? I didn't want to post any code until someone could give me a clue as to what might be happening. Thanks!
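    Since no code was posted, only a generic reference sketch: two common causes of this kind of artifact are per-vertex normals averaged across cube faces (a lit cube needs 24 vertices, four per face, so each face keeps its own flat normal) and normals transformed by the full world matrix instead of just its rotation part. The standard ambient + diffuse combination looks like this:

        float4 Shade(float3 worldNormal, float3 lightDir, float4 albedo)
        {
            float3 n = normalize(worldNormal);          // renormalize after interpolation
            float ndotl = saturate(dot(n, -lightDir));  // lightDir points from the light into the scene
            float3 ambient = 0.15f * albedo.rgb;
            return float4(ambient + ndotl * albedo.rgb, albedo.a);
        }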
  18. I used DirectX in projects in Borland C++ Builder 6.0. Microsoft's .libs don't work with Builder, so I took special .lib files from here: http://www.clootie.ru/cbuilder/index.html#DX_CBuilder_SDKs Now I've moved to C++ Builder 10 Berlin and have to find a way to attach DirectX to my project again. I've searched the web but found nothing on how to get access to DirectX in Embarcadero Builders, only old information on Borland Builder and old .libs. The DirectX SDK .libs still can't be used with the new Builder 10 because of the incompatible format. My question is: has anyone used DirectX with Embarcadero Builder, and how did you solve the .libs problem? Can anyone give me a guide or example of how to make DirectX accessible in a Builder 10 project? Why is there no information on this anywhere?
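    One workaround that sidesteps the import-library format problem entirely (a sketch, not Builder-specific advice): load the DirectX DLL at runtime and fetch the entry point with GetProcAddress, so no .lib is needed at link time, only the SDK headers:

        #include <windows.h>
        #include <d3d11.h>

        typedef HRESULT (WINAPI *PFN_D3D11CreateDevice)(
            IDXGIAdapter *, D3D_DRIVER_TYPE, HMODULE, UINT,
            const D3D_FEATURE_LEVEL *, UINT, UINT,
            ID3D11Device **, D3D_FEATURE_LEVEL *, ID3D11DeviceContext **);

        bool CreateDeviceWithoutImportLib(ID3D11Device **device, ID3D11DeviceContext **context)
        {
            HMODULE d3d11 = LoadLibraryW(L"d3d11.dll");
            if (!d3d11)
                return false;
            PFN_D3D11CreateDevice pCreate =
                (PFN_D3D11CreateDevice)GetProcAddress(d3d11, "D3D11CreateDevice");
            if (!pCreate)
                return false;
            return SUCCEEDED(pCreate(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     device, nullptr, context));
        }

    The same pattern works for d3d9.dll and Direct3DCreate9.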
  19. Hi there, I am rendering my game to a render texture, but I am having difficulty figuring out how to scale it to fill the window. The window looks like this: and the render texture looks like this (it's the window resolution downscaled by 4): (I implemented a screenshot function for the render texture, which has proved very useful in getting this working so far). My vertex shader is the classic "draw a fullscreen triangle without binding a vertex or index buffer", as seen many times on this site:

        PS_IN_PosTex main(uint id : SV_VertexID)
        {
            PS_IN_PosTex output;
            output.tex = float2((id << 1) & 2, id & 2);
            output.pos = float4(output.tex * float2(2, -2) + float2(-1, 1), 0, 1);
            return output;
        }

    and the pixel shader is simply:

        Texture2D txDiffuse : register(t0);
        SamplerState samp : register(s0);

        float4 main(PS_IN_PosTex input) : SV_TARGET
        {
            return txDiffuse.Sample(samp, input.tex);
        }

    Can someone please give me a clue as to how to scale this correctly? Many thanks, Andy
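    The shaders above already cover the whole render target, so what usually remains is the draw-pass setup. A sketch, assuming the window's back-buffer RTV is bound (windowWidth, sceneSRV, linearSampler etc. are illustrative names): a viewport sized to the window, not to the render texture, plus a linear sampler so the quarter-resolution image is stretched smoothly:

        D3D11_VIEWPORT vp = {};
        vp.Width = (float)windowWidth;      // window size, NOT the render texture size
        vp.Height = (float)windowHeight;
        vp.MaxDepth = 1.0f;
        context->RSSetViewports(1, &vp);

        context->IASetInputLayout(nullptr); // no vertex or index buffer needed
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        context->PSSetShaderResources(0, 1, &sceneSRV);  // the downscaled render texture
        context->PSSetSamplers(0, 1, &linearSampler);    // D3D11_FILTER_MIN_MAG_MIP_LINEAR
        context->Draw(3, 0);                             // one triangle covering the screen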
  20. Hey everyone, I've used tools like Intel GPA in the past and I would like to continue doing that, but in my current work environment I can't find a tool that can analyze rendering without a swapchain and a Present() call. In addition, the program I'm working on uses WPF for its UI, which uses a D3D9Ex device, and some tools attach to that device instead. So, are there any tools that allow debugging D3D11 without depending on a swapchain? Thanks for any help!
  21. I've run into a puzzling rendering issue where triangles in the back bleed through triangles in front of them. This screenshot shows the issue: the leftmost picture is drawn with no MSAA and shows the expected result. The middle picture is exactly the same, except with MSAA enabled; notice the red pixels bleeding through at some triangle edges. The right picture shows a slightly rotated view, revealing the red surface in the back. In the rotated view the artifact goes away, apparently because the triangles are no longer directly facing the camera.

    The issue only occurs on certain GPUs (as far as I am currently aware, the NVIDIA Quadro K600 and Intel integrated chips). When using a WARP device or the D3D11 reference rasterizer, the problem does not occur. The triangle mesh also affects the result: the two surfaces in the screenshot are pieces of a skull, triangulated from a 3D image scan using some form of marching cubes, so the triangles are in a very specific order. You can also see this in the following RenderDoc pixel history: RenderDoc shows the 'broken' pixel covered by two triangles -- I'd actually expect at least three: two adjoining the edge, and at least one from the back surface. The two triangles affecting the final pixel color are consecutive primitives. The front one has a shader depth output of 0.42414, the back one has a depth of 0.73829, yet the back one still ends up affecting the final color. If I change the order of the triangles -- for instance by splitting the surfaces and rendering each surface with its own draw call -- the problem also goes away.

    I understand that MSAA changes rasterization, but shouldn't adjoining triangles still be completely free of gaps? All sample positions within a pixel should be covered by the triangles on both sides of the edge, so no background should bleed through, right? For the record: I did check that there are no actual gaps/cracks in the mesh. Is this a driver/GPU bug? Am I misunderstanding the rasterization rules? Thanks!
  22. I'm trying to modify the D3D11 example to draw a texture onto the triangle rather than the vertex colours, but now I'm not seeing the triangle at all. What am I missing? Code: https://pastebin.com/KVMTq9r6 MiniTri.fx: https://pastebin.com/0w1myEg0
  23. So, I am trying to do something fairly simple: copy the contents of one texture into another, and then copy the result texture back into the first. (I'll also do some additional editing on the result texture, and that's why I am copying the result back.) But I am getting some strange results and the photo is losing quality. Here is the compute shader I've used:

        Texture2D ObjTexture : register(t0);
        RWTexture2D<float4> ObjResult : register(u0);
        SamplerState ObjWrapSampler : register(s0);

        [numthreads(32, 32, 1)]
        void main(uint3 DTid : SV_DispatchThreadID)
        {
            float width, height;
            ObjTexture.GetDimensions(width, height);
            width -= 1;  // X = [0 ... width - 1]
            height -= 1; // Y = [0 ... height - 1]
            float2 uv = float2(DTid.xy) / float2(width, height);
            ObjResult[DTid.xy] = ObjTexture.SampleLevel(ObjWrapSampler, uv, 0);
            return;
        }

    and here is how I copy the result image back into the input one:

        ID3D11Resource *inputTexture;
        inputTextureSRV->GetResource(&inputTexture);
        mContext->CopySubresourceRegion(inputTexture, 0, 0, 0, 0, mTexture.Get(), 0, NULL);

    I also tried copying the first texture into the second texture and using an unordered access view on the first texture, but I got the same result. Is there something I am doing wrong here? This is the original texture. This is the texture after ~60 passes. And this is after ~240 passes; this is 'the final result' because it doesn't change any more. As you can see, the image has lost its quality.
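    A likely cause, sketched below: dividing by (width - 1, height - 1) shifts the sample points off the texel centers, so every pass resamples the image bilinearly and blurs it a little more, converging on the washed-out 'final result'. For a straight copy no sampler is needed at all; if filtered sampling is genuinely required, hit the texel centers:

        Texture2D ObjTexture : register(t0);
        RWTexture2D<float4> ObjResult : register(u0);
        SamplerState ObjWrapSampler : register(s0);

        [numthreads(32, 32, 1)]
        void main(uint3 DTid : SV_DispatchThreadID)
        {
            // Exact copy, no filtering, no drift across passes:
            ObjResult[DTid.xy] = ObjTexture[DTid.xy];

            // Or, if sampling is really needed, sample the texel center:
            // float2 dim;
            // ObjTexture.GetDimensions(dim.x, dim.y);
            // float2 uv = (float2(DTid.xy) + 0.5f) / dim;
            // ObjResult[DTid.xy] = ObjTexture.SampleLevel(ObjWrapSampler, uv, 0);
        }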
  24. I'm following Rastertek tutorial 14 (http://rastertek.com/tertut14.html). The problem is, slope-based texturing doesn't work in my application. There are plenty of slopes in my terrain, but none of them get the slope color.

        float4 PSMAIN(DS_OUTPUT Input) : SV_Target
        {
            float4 grassColor;
            float4 slopeColor;
            float4 rockColor;
            float slope;
            float blendAmount;
            float4 textureColor;

            grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
            slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
            rockColor = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

            // Calculate the slope of this point.
            slope = 1.0f - Input.LSNormal.y;

            if (slope < 0.2)
            {
                blendAmount = slope / 0.2f;
                textureColor = lerp(grassColor, slopeColor, blendAmount);
            }
            if ((slope < 0.7) && (slope >= 0.2f))
            {
                blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
                textureColor = lerp(slopeColor, rockColor, blendAmount);
            }
            if (slope >= 0.7)
            {
                textureColor = rockColor;
            }

            return float4(textureColor.rgb, 1);
        }

    Can anyone help me? Thanks.
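    The shader matches the tutorial, so this is only a guess: if Input.LSNormal reaches the pixel shader unnormalized (or in the wrong space), 1.0f - Input.LSNormal.y can stay near zero everywhere, and every pixel lands in the grass branch. Renormalizing first is cheap insurance:

        float SlopeOf(float3 normal)
        {
            // 0 on flat ground, approaching 1 on vertical cliffs.
            return 1.0f - normalize(normal).y;
        }

    i.e. slope = SlopeOf(Input.LSNormal); it is also worth writing the slope itself to the screen (return float4(slope.xxx, 1);) to see what values actually arrive.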
  25. I am new to DirectX. I just followed some tutorials online and started to program. It had been going well until I hit the problem of loading my own 3D models from 3ds Max, exported as .x, which is supported by DirectX. I am using C++ in Visual Studio 2010 and DirectX 9. I really tried to find help on the net, but I couldn't find anything that solves my problem, and I don't know exactly where the problem is. I ran most of the samples and examples and they all worked well. Can anyone give me a hint or a solution to my problem? Thanks in advance!