  • Similar Content

    • By cozzie
      Hi all,
      As part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (i.e. HUD-style text). From what I've read there are a few options, in short:
      1. Write your own font sprite renderer
      2. Use Direct2D/DirectWrite, combined with a DX11 render target/backbuffer
      3. Use an external library, like the DirectX Toolkit, etc.
      I want to go for option 2, but the articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because Direct2D doesn't work directly with a DX11 device. But other articles say this was 'patched' later on and should work now.
      Can someone shed some light on this, and ideally point me to an example or article on how to set this up?
      All input is appreciated.
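      For reference: the Direct2D-on-D3D11 route works without a separate DX10 device as long as the D3D11 device is created with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag and the swap chain uses a BGRA format. A minimal, untested sketch of option 2 (error handling omitted; the font and layout values are arbitrary):

      #include <d3d11.h>
      #include <d2d1.h>
      #include <dwrite.h>

      // 1) Wrap the backbuffer's DXGI surface in a D2D render target.
      IDXGISurface* surface = nullptr;
      swapChain->GetBuffer(0, __uuidof(IDXGISurface), (void**)&surface);

      ID2D1Factory* d2dFactory = nullptr;
      D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);

      D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
          D2D1_RENDER_TARGET_TYPE_DEFAULT,
          D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));

      ID2D1RenderTarget* d2dRT = nullptr;
      d2dFactory->CreateDxgiSurfaceRenderTarget(surface, &props, &d2dRT);

      // 2) DirectWrite objects for the text itself.
      IDWriteFactory* dwFactory = nullptr;
      DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                          reinterpret_cast<IUnknown**>(&dwFactory));

      IDWriteTextFormat* format = nullptr;
      dwFactory->CreateTextFormat(L"Consolas", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                                  DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                                  16.0f, L"en-us", &format);

      ID2D1SolidColorBrush* brush = nullptr;
      d2dRT->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

      // 3) Each frame, after the 3D scene is rendered and before Present():
      d2dRT->BeginDraw();
      d2dRT->DrawText(L"debug text", 10, format,
                      D2D1::RectF(10, 10, 400, 40), brush);
      d2dRT->EndDraw();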
    • By stale
      I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering. This is pictured below:
      However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle-lists), I get the following error:

      I have no idea why this is happening, and google searches have given me no leads at all. I use the following code to render the tessellated quad patch:
      ID3D11DeviceContext* dc = GetGFXDeviceContext();

      dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
      dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);

      float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
      dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
      dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
      dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

      ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;
      D3DX11_TECHNIQUE_DESC techDesc;
      activeTech->GetDesc(&techDesc);

      for (unsigned int p = 0; p < techDesc.Passes; p++)
      {
          TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;
          UINT stride = sizeof(TerrainVertex);
          UINT offset = 0;
          GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);

          Vector3 eyePos = Vector3(cam->m_position);
          Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
          Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
          Matrix view = cam->GetLookAtMatrix();
          Matrix MVP = model * view * m_ProjectionMatrix;

          ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
          ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
          ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);

          activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
          GetGFXDeviceContext()->Draw(4, 0);
      }

      dc->RSSetState(0);
      dc->OMSetBlendState(0, blendFactors, 0xffffffff);
      dc->OMSetDepthStencilState(0, 0);

      I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":
      for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
      {
          Entity* entity = scene->GetEntityList()->at(i);
          if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
              DrawMeshEntity(entity, cam, sun, point);
          else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
              DrawBillboardEntity(entity, cam, sun, point);
          else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
              DrawTerrainEntity(entity, cam);
      }
      HR(m_swapChain->Present(0, 0));

      Any help/advice would be much appreciated!
    • By KaiserJohan
      I'm trying a barebones tessellation shader and getting an unexpected result when increasing the tessellation factor. I'm rendering a group of quads and trying to apply tessellation to them.
      OutsideTess = (1,1,1,1), InsideTess= (1,1)

      OutsideTess = (1,1,1,1), InsideTess= (2,1)

      I expected 4 triangles in the quad, not two. Any idea of what's wrong?
      struct PatchTess
      {
          float mEdgeTess[4] : SV_TessFactor;
          float mInsideTess[2] : SV_InsideTessFactor;
      };

      struct VertexOut
      {
          float4 mWorldPosition : POSITION;
          float mTessFactor : TESS;
      };

      struct DomainOut
      {
          float4 mWorldPosition : SV_POSITION;
      };

      struct HullOut
      {
          float4 mWorldPosition : POSITION;
      };

      Hull shader:
      PatchTess PatchHS(InputPatch<VertexOut, 3> inputVertices)
      {
          PatchTess patch;
          patch.mEdgeTess[0] = 1;
          patch.mEdgeTess[1] = 1;
          patch.mEdgeTess[2] = 1;
          patch.mEdgeTess[3] = 1;
          patch.mInsideTess[0] = 2;
          patch.mInsideTess[1] = 1;
          return patch;
      }

      [domain("quad")]
      [partitioning("fractional_odd")]
      [outputtopology("triangle_ccw")]
      [outputcontrolpoints(4)]
      [patchconstantfunc("PatchHS")]
      [maxtessfactor(64.0)]
      HullOut hull_main(InputPatch<VertexOut, 3> verticeData, uint index : SV_OutputControlPointID)
      {
          HullOut ret;
          ret.mWorldPosition = verticeData[index].mWorldPosition;
          return ret;
      }
      Domain shader:
      [domain("quad")] DomainOut domain_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad) { DomainOut ret; const float MipInterval = 20.0f; ret.mWorldPosition.xz = quad[ 0 ].mWorldPosition.xz * ( 1.0f - uv.x ) * ( 1.0f - uv.y ) + quad[ 1 ].mWorldPosition.xz * uv.x * ( 1.0f - uv.y ) + quad[ 2 ].mWorldPosition.xz * ( 1.0f - uv.x ) * uv.y + quad[ 3 ].mWorldPosition.xz * uv.x * uv.y ; ret.mWorldPosition.y = quad[ 0 ].mWorldPosition.y; ret.mWorldPosition.w = 1; ret.mWorldPosition = mul( gFrameViewProj, ret.mWorldPosition ); return ret; }  
      Any ideas what could be wrong with these shaders?
    • By simco50
      I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure.
      I think the graphics abstraction looks really interesting and I like the idea of how it defers pipeline state changes until just before the draw call to resolve redundant state changes.
      This is done by saving the state changes (blend enabled / SRV changes / RTV changes) in member variables and, just before the draw, applying the actual state changes through the graphics context.
      It looks something like this (pseudo):
      void PrepareDraw()
      {
          if (renderTargetsDirty)
          {
              pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
              renderTargetsDirty = false;
          }
          if (texturesDirty)
          {
              pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
              texturesDirty = false;
          }
          // ... some more state changes
      }

      This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it.
      I'll explain it by example: imagine I have two render targets, my backbuffer RT and an offscreen RT.
      Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example).
      You would do something like this:
      // Render to the offscreen RT
      pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
      pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
      pGraphics->DrawQuad();
      pGraphics->SetTexture(diffuseSlot, nullptr); // Remove the default RT from input

      // Render to the default (screen) RT
      pGraphics->SetRenderTarget(nullptr); // Default RT
      pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
      pGraphics->DrawQuad();

      The problem: the next time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView at the moment it gets set as a RenderTargetView, because in Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the PrepareDraw snippet above). This happens even when I set the SRV to nullptr before using it as an RTV, like above, and it causes errors because a resource can't be bound as both an input and a render target.
      What is usually the solution to this?
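      One approach that would fit this design (my own sketch, not something taken from Urho3D's source) is to resolve the conflict inside PrepareDraw itself: when committing the render targets, first null out on the context any cached SRV that references the same resource. Both view types expose GetResource(), so the check is straightforward. Names like MAX_SRV_SLOTS and mDepthStencilView follow the pseudocode above and are illustrative:

      #include <d3d11.h>

      // Returns true when an SRV and an RTV reference the same underlying resource.
      static bool ViewsShareResource(ID3D11ShaderResourceView* srv, ID3D11RenderTargetView* rtv)
      {
          if (!srv || !rtv)
              return false;
          ID3D11Resource* a = nullptr;
          ID3D11Resource* b = nullptr;
          srv->GetResource(&a);
          rtv->GetResource(&b);
          bool same = (a == b);
          a->Release();
          b->Release();
          return same;
      }

      // Inside PrepareDraw(), before applying the render targets:
      if (renderTargetsDirty)
      {
          for (unsigned int slot = 0; slot < MAX_SRV_SLOTS; slot++)
          {
              if (ViewsShareResource(mCurrentSRVs[slot], mCurrentRenderTargets[0]))
              {
                  // Unbind the conflicting SRV on the context *before* the RTV is set,
                  // so the debug layer never sees the read/write hazard.
                  ID3D11ShaderResourceView* nullSRV = nullptr;
                  pD3D11DeviceContext->PSSetShaderResources(slot, 1, &nullSRV);
                  mCurrentSRVs[slot] = nullptr;
              }
          }
          pD3D11DeviceContext->OMSetRenderTargets(1, &mCurrentRenderTargets[0], mDepthStencilView);
          renderTargetsDirty = false;
      }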
    • By MehdiUBP
      I wrote a MatCap shader following this idea:
      Given the image representing the texture, we compute the sample point by taking the dot product of the vertex normal and the camera position and remapping this to [0,1].
      This seems to work well when I look straight at an object with this shader. However, when the camera points slightly to the side, I can see the texture stretch a lot.
      Could anyone give me a hint as to how to get a nice matcap shader?
      Here's what I wrote:
      Shader "Unlit/Matcap"
              _MainTex ("Texture", 2D) = "white" {}
              Tags { "RenderType"="Opaque" }
              LOD 100
                  #pragma vertex vert
                  #pragma fragment frag
                  // make fog work
                  #include "UnityCG.cginc"
                  struct appdata
                      float4 vertex : POSITION;
                      float3 normal : NORMAL;
                  struct v2f
                      float2 worldNormal : TEXCOORD0;
                      float4 vertex : SV_POSITION;
                  sampler2D _MainTex;            
                  v2f vert (appdata v)
                      v2f o;
                      o.vertex = UnityObjectToClipPos(v.vertex);
                      o.worldNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal)).xy*0.3 + 0.5;  //UnityObjectToClipPos(v.normal)*0.5 + 0.5;
                      return o;
                  fixed4 frag (v2f i) : SV_Target
                      // sample the texture
                      fixed4 col = tex2D(_MainTex, i.worldNormal);
                      // apply fog
                      return col;

DX11: Modern DirectX and the Effects framework


Recommended Posts

Hi. I'm reading on MSDN that the Effects framework for DX11 was created to ease porting of old projects. Does that mean that if I create a new project from scratch for Win8, I shouldn't use it?

You can use it if it fits your purposes; otherwise you can reinvent it with something that fits better ;-)

The Effects framework is just one way to organize and use shader programs inside your engine.

It gives you Effects, which have Techniques, which have Passes, which have shader programs and optional render state. It also gives you some reflection/meta-data features, and a system to set shader variables (uniforms, textures) by name.
That's a fairly good structure to borrow from if you instead build your own.

My engine's system is similar, except I don't have Effects - each file represents a single technique. Also, passes for me represent different parts of the scene rendering phases, such as opaque, alpha-blended, depth-only/shadow, g-buffer attributes, etc. And I didn't want to supply shader variables in the way that the Effects framework is designed to -- i.e. my ideal abstraction for interacting with shaders was different from what Effects would've given me.
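
For a concrete picture of that Effect/Technique/Pass hierarchy, the C++ side of Effects11 looks roughly like this (a minimal sketch; the technique and variable names are invented for illustration, and fxBlob/device/context/diffuseSRV/indexCount are assumed to exist):

#include <d3dx11effect.h> // Effects11 (FX11) header

// Create the effect from a compiled .fx blob.
ID3DX11Effect* effect = nullptr;
D3DX11CreateEffectFromMemory(fxBlob->GetBufferPointer(), fxBlob->GetBufferSize(),
                             0, device, &effect);

// Effect -> Technique -> Pass hierarchy.
ID3DX11EffectTechnique* tech = effect->GetTechniqueByName("Lit"); // invented name

// Reflection-based, by-name variable setting.
effect->GetVariableByName("gWorldViewProj")->AsMatrix()->SetMatrix(&wvp._11); // wvp: an XMFLOAT4X4
effect->GetVariableByName("gDiffuseMap")->AsShaderResource()->SetResource(diffuseSRV);

// Each pass binds its shaders plus optional render state; then you draw.
D3DX11_TECHNIQUE_DESC techDesc;
tech->GetDesc(&techDesc);
for (UINT p = 0; p < techDesc.Passes; ++p)
{
    tech->GetPassByIndex(p)->Apply(0, context);
    context->DrawIndexed(indexCount, 0, 0);
}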

Thank you. So, it's basically a wrapper around API state changes and constant/buffer settings, right? What's the problem with, say, rendering all opaque objects, then manually changing the necessary data/states via DX calls and rendering particles, then rendering transparent objects? I come from the Adobe Flash 3D world, where we don't have frameworks like this; we write shaders in a sort of assembler and just have API calls (although very limited ones). And I see many benefits in such manual coding.

To be honest, I'm a bit confused - I have a couple of DX11 books and I'm reading online tutorials, and it all seems good and fresh. But due to recent changes in Windows (i.e. the removal of D3DX, shader compilation no longer working out of the box, etc.), all the examples simply don't work and need refactoring. It seems the only examples that run without any changes in VS 2013 are a couple of Windows Store templates in the MSDN code samples. Can you recommend up-to-date resources? Maybe groups or chats?


About D3DX: yes, D3DX was (finally!) deprecated. However, Chuck Walbourn made a set of projects that can be very helpful in replacing it; you can read more about it here: http://blogs.msdn.com/b/chuckw/archive/2013/08/21/living-without-d3dx.aspx

Here are the four projects (... and maybe they will turn five in the summer, with a new library replacing the old D3DXMesh):

- http://directxtex.codeplex.com/
Edited by Alessio1989
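
On the out-of-the-box compilation point from the question above: with D3DX gone, runtime shader compilation goes through the D3DCompiler API instead. A minimal sketch, assuming a Windows 8 SDK toolchain and linking against d3dcompiler.lib (the file name and entry point are placeholders):

#include <d3d11.h>
#include <d3dcompiler.h> // replaces the old D3DX compile helpers

// Compile an HLSL file at runtime with D3DCompileFromFile.
ID3DBlob* vsBlob = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompileFromFile(
    L"shaders.hlsl",          // placeholder path
    nullptr, nullptr,         // no defines, no include handler
    "VSMain", "vs_5_0",       // placeholder entry point, shader model 5
    D3DCOMPILE_ENABLE_STRICTNESS, 0,
    &vsBlob, &errors);

if (FAILED(hr) && errors)
    OutputDebugStringA((const char*)errors->GetBufferPointer());

// The blob then feeds the usual device call, e.g.:
// device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &vs);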


So, it's basically a wrapper around API state changes and constant/buffer settings, right? What's the problem with, say, rendering all opaque objects, then manually changing the necessary data/states via DX calls and rendering particles, then rendering transparent objects? I come from the Adobe Flash 3D world, where we don't have frameworks like this; we write shaders in a sort of assembler and just have API calls (although very limited ones). And I see many benefits in such manual coding.

Yes, it's just a "smart wrapper" and yes, you can completely ignore it and make your own system (or do everything manually all the time if you want).

I know the Effects framework only from DX9 and I have no idea whether it changed significantly in DX10/11, but in DX9 it was quite a good system for people learning the programmable pipeline, because it made everything very straightforward and simple. But as you start to understand better how everything works, you usually realise that the Effects framework may not be optimal when you try to optimize device calls and the whole rendering process - because you really don't know what's going on under the hood with its automatic state changes, texture setting, samplers, CPU pre-shaders, etc.
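
To make the manual alternative from the question concrete: drawing opaque geometry and then blended particles without Effects just means issuing the state and shader calls yourself. A rough sketch, assuming the state objects and shaders (depthWriteOn, depthReadOnly, additiveBlend, opaqueVS, etc.) were created at load time:

// Opaque pass: depth writes on, default (no) blending.
context->OMSetDepthStencilState(depthWriteOn, 0);
context->OMSetBlendState(nullptr, nullptr, 0xffffffff);
context->VSSetShader(opaqueVS, nullptr, 0);
context->PSSetShader(opaquePS, nullptr, 0);
for (size_t i = 0; i < opaqueMeshes.size(); ++i)
{
    // upload per-object constants, bind vertex/index buffers, then:
    context->DrawIndexed(opaqueMeshes[i].indexCount, 0, 0);
}

// Particle pass: additive blending, depth test on but depth writes off.
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
context->OMSetDepthStencilState(depthReadOnly, 0);
context->OMSetBlendState(additiveBlend, blendFactor, 0xffffffff);
context->VSSetShader(particleVS, nullptr, 0);
context->PSSetShader(particlePS, nullptr, 0);
// ...draw the particle systems, then repeat with an alpha-blend state
// for the transparent objects, sorted back to front.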
