# DX11 Questions on renderer design, 3D models and formats


## Recommended Posts

Hello everyone.

I've been thinking about the matters described below for quite a long time, and I think I've read half of the Internet looking for answers, but I still can't finish designing my renderer.
My problem is with loading 3D models - specifically, preventing the same piece of data from being loaded more than once - and with incorporating this routine into a renderer, plus some related things.

I. Background

The architecture of my renderer is quite basic, and pretty standard; I've got the following:

CRenderer - receives RenderChunks (each represents one Draw() call), then does some sorting etc., and draws the resulting rendering queue; with this class I'm more or less happy.

CTextureManager, CGeometryManager, CShaderManager - meant to avoid keeping multiple instances of the same data, and to track whether loaded data is still needed or can safely be deleted. All the managers use the same D3D device pointer, created by CRenderer.

I've also designed my own binary model format, of which at first I was very proud. However, the more I think about the design of my renderer, the less happy I am with it.
My model format is modular; it contains data blocks, say GDATAn for geometry data, MDATAn for material data, etc. One file can contain many blocks of any type; it's up to the loader to decide what to take and what to ignore.

As for now, I'm using only Direct3D 11.

II. My concerns and questions

1) As far as I know, to prevent loading multiple instances of the same data, some kind of manager would normally keep track of which filenames (or paths in the filesystem) have already been loaded. But let's say two models share geometry but use different materials. In that case we need a separate file for each model, so standard filename checking stops working; loading these two files, we'd end up with two instances of the same geometry in VRAM.
In order to load each piece of information only once, we'd have to split one model into a number of files: one each for geometry, materials, skeleton, animation, etc. But then all fancy model formats, including the one I designed, become useless for games, as we have to store a model as a number of unrelated files containing almost raw data anyway. We would only reference these files from some simple model file, which would be nothing more than a list of files containing the various data chunks of a model.

I've thought about giving each unique piece of data its own unique ID; however this doesn't seem to be a good idea.

How can this problem be solved?

2) It seems logical to me to link shaders that work together and refer to them as one effect, so I've decided to create my own effect framework, compatible with my renderer. But as I want the modules of my renderer to be independent of each other, what the renderer gets from such an effect would be just a set of shader pointers anyway (i.e. the renderer itself knows nothing of my effect framework, it just receives RenderChunks).
However, there is the same problem as with models: if I let the user create effect files (with techniques, passes and stuff, containing vertex, geometry, pixel shaders, etc., just like MS's), how can I control which shaders are already resident in VRAM, and which are not? They wouldn't have their own unique filenames; unless an effect file would only reference single shader files, e.g.

```hlsl
technique Whatever
{
    pass0
    {
        VertexShader = Shader("vs.hlsl", VSEntryPoint);
        ...
        PixelShader  = Shader("ps.hlsl", PSEntryPoint);
    }
}
```

But somehow I don't like this solution. Is there any better way?

BTW: Has anyone here ever used the "passes" feature of MS's effects, or any equivalent? I'm not sure whether to bother implementing this functionality, as I won't be targeting older hardware (DX11, maybe DX10, and up), and the only reason I can think of to implement passes would be compatibility with (very) old hardware.

3) Should I create another manager for blend, depth-stencil and rasterizer states? If not, could you suggest what should own those states?
I'm asking because I'm a bit scared of the quickly growing number of managers in my renderer. And blend states etc. don't seem to be big enough pieces of data to require managers of their own. Am I right?
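For illustration, the kind of lightweight solution I have in mind - the renderer itself keeping a small map from state description to state object, rather than a full manager class - might look like this (a sketch with stand-in types instead of real D3D11 calls; all the names here are made up):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>
#include <tuple>

// Stand-in for D3D11_BLEND_DESC; a real version would carry the full struct.
struct BlendDesc {
    bool blendEnable = false;
    std::uint8_t srcBlend = 0;
    std::uint8_t destBlend = 0;
    bool operator<(const BlendDesc& o) const {
        return std::tie(blendEnable, srcBlend, destBlend)
             < std::tie(o.blendEnable, o.srcBlend, o.destBlend);
    }
};

// Would wrap an ID3D11BlendState* in the real renderer.
struct BlendState { BlendDesc desc; };

// Hands out one state object per unique description. State objects are
// tiny, so they are cached forever and never evicted.
class StateCache {
public:
    std::shared_ptr<BlendState> GetBlendState(const BlendDesc& desc) {
        auto it = m_blendStates.find(desc);
        if (it != m_blendStates.end())
            return it->second;
        auto state = std::make_shared<BlendState>(BlendState{desc});
        m_blendStates.emplace(desc, state);
        return state;
    }
private:
    std::map<BlendDesc, std::shared_ptr<BlendState>> m_blendStates;
};
```

Asking for the same description twice returns the same object, so no duplicates pile up, and no separate manager class is needed.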

Also, any hints or comments on my design, or anything found in my post, will be appreciated, as well as any links to materials on the matters concerned.

Thanks.

##### Share on other sites
> I've thought about giving each unique piece of data its own unique ID; however this doesn't seem to be a good idea.

That's a file-name.

Most games, by the time they're pressed onto a blu-ray (etc), don't have a folder called "data" with thousands of files inside it --- all of those files are usually packed into a single big archive file instead.
However, the game might still use file-names internally. But hang on, if there's only one file on the disk, how can different bits of data inside that file have their own file-names?
The answer is, you're allowed to make file-names mean whatever you want to; you're not forced to implement them in the same way Windows does.
e.g. If you wanted to refer to a sub-resource inside the file "./data/myModel.mdl", you could use a file-name like ./data/myModel.mdl/mySubResource. Your game knows that to load that "file", it has to tell Windows to open "./data/myModel.mdl" and find the "mySubResource" chunk.
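That lookup can be sketched as a tiny path-splitting helper (a sketch only; ResolveSubResource and the ".mdl" separator convention are made up for illustration - a real virtual file system would pick its own convention):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Split a virtual path like "./data/myModel.mdl/mySubResource" into the
// on-disk file ("./data/myModel.mdl") and the chunk name ("mySubResource").
// Convention assumed here: the archive file's name ends in ".mdl".
std::pair<std::string, std::string> ResolveSubResource(const std::string& path) {
    const std::string ext = ".mdl/";
    const std::size_t pos = path.find(ext);
    if (pos == std::string::npos)
        return {path, ""};  // a plain file, no sub-resource inside it
    const std::size_t split = pos + ext.size() - 1;  // the '/' after ".mdl"
    return {path.substr(0, split), path.substr(split + 1)};
}
```

The loader opens the first half with the OS and then scans the archive's chunk table for the second half.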

##### Share on other sites

Yeah, I understand the concept. However, assume we have two models: a box with a red texture rendered with effect1, and a box with a blue texture rendered with effect2. Both models are saved as model files, say box_red.mdl and box_blue.mdl. Both files contain the same geometry for our box, but differ in the materials they describe. Now, if we load these files - from separate files or one big archive, doesn't matter - and try to distinguish them simply by their filenames, paths, whatever, we would find them to be different. And so we would load the materials of both files - which is OK, as they differ - and the geometry from both files - which is wrong, because now we have two instances of the same geometry. That's my problem.
Now, we could ask our artists to give each chunk of geometry (and other data) its own unique "in-file" name, but we have no guarantee that there won't be any duplicates. It also causes trouble when updating files - if we wanted to change the geometry of our box, we would have to edit both box_red.mdl and box_blue.mdl.

The only solution I can think of would be to keep each part of the model - geometry, materials, skeleton, etc. - as a separate file, say geometry.gmt, material.mat, skeleton.skl, and to represent the model as a file, model.mdl, which just links to these "atom" files. So both box_red.mdl and box_blue.mdl would just link to box.gmt (the geometry file) plus their own material files. Our geometry manager would then load box.gmt only once, and if we wanted to modify the geometry of our box, we would simply edit one file, box.gmt.
However, with this solution we can't use "combo" model formats, containing all data of the model in one file (.3ds, .x, .dae, etc.). So I wonder if there is a better solution, as I kind of like these "combo" formats.
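To make that concrete, the geometry manager could key its cache on the atom file's path, so two .mdl files linking to the same box.gmt end up sharing one copy of the data (a sketch only; GeometryCache and Geometry are placeholder names, and the actual file loading is stubbed out):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical geometry payload; in a real engine this would own the
// ID3D11Buffer holding the vertex/index data.
struct Geometry {
    std::string sourcePath;
};

// Minimal cache sketch: one Geometry per unique path, shared between models.
class GeometryCache {
public:
    std::shared_ptr<Geometry> Load(const std::string& path) {
        // Return the cached instance if this path was loaded before.
        if (auto it = m_cache.find(path); it != m_cache.end()) {
            if (auto existing = it->second.lock())
                return existing;
        }
        // Otherwise load it (stubbed out here) and remember it by path.
        auto geo = std::make_shared<Geometry>(Geometry{path});
        m_cache[path] = geo;  // weak_ptr: the cache doesn't keep data alive
        return geo;
    }

private:
    std::unordered_map<std::string, std::weak_ptr<Geometry>> m_cache;
};
```

When both box_red.mdl and box_blue.mdl ask for "box.gmt" they get the same shared pointer; once the last model releases it, the weak_ptr expires and the geometry can be freed.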

I like your idea of referring to sub-resources of files as if they were files themselves ("./data/myModel.mdl/mySubResource"). However, it doesn't work with the box example - we still have ./data/box_red.mdl/geometrySubres and ./data/box_blue.mdl/geometrySubres, so we can't tell whether the geometry of the two files is the same or different.

##### Share on other sites

> assume we have two models: a box with a red texture rendered with effect1, and a box with a blue texture, rendered with effect2. Both models are saved as model files, say box_red.mdl and box_blue.mdl. Both files contain the same geometry of our box, but differ in the materials they describe. Now, if we load these files - from separate files or one big archive, doesn't matter - if we try to distinguish them simply by their filenames, paths, whatever, we would find them to be different. And so, we would load materials of both files - which is ok, as they differ - and geometry from both files - which is wrong, because now we have two instances of the same geometry. That's my problem.

IMO, an engine that does that is simply broken (sorry for the bluntness). Geometry and materials are completely different things - packing them together into a single resource is going to cause all sorts of problems (like the ones you're listing). Just have textures, materials, geometry and animations all as separate resources (which is also the solution you present).
However, for convenience and load-time reasons, sometimes you do want to take a group of resources and pack them together into one contiguous blob so they're all loaded at once, but IMO this should be transparent to the user. If you ask to load "box.geo", then the file system should handle the details as to whether that's a file on the disk, a section of an archive, or a section of a blob within an archive, etc...
> However, with this solution we can't use "combo" model formats, containing all data of the model in one file (.3ds, .x, .dae, etc.). So I wonder if there is a better solution, as I kind of like these "combo" formats.

The intermediate formats that you use during development aren't really relevant - when you ship, they all get packed into some kind of combo format anyway. Also, 3ds/dae are only suitable as intermediate formats, not shipping formats.
> I like your idea of referring to sub-resources of files like they were files themselves ("./data/myModel.mdl/mySubResource"). However, it doesn't work with the box example - we still have ./data/box_red.mdl/geometrySubres and ./data/box_blue.mdl/geometrySubres, so we can't tell whether the geometry of both files is the same or different.

./data/box_blue.mdl could have an internal "geo resource" field that contains the string "./data/props.geo/box", etc...

We should probably step back for a moment and consider the content pipeline for your game though, as it has a large impact on your formats, and how they're created/used.

At about 3/4 of the companies I've worked at, the pipeline looked like:
Source formats ---Export---> Export formats ---Compile---> Compiled formats ---Build Archives---> Shipping formats

Source data:
PSD (photoshop)
MB (maya)
etc...

Exported data:
TGA (uncompressed image)

Compiled data:
DDS (compressed image)
GEO / ANIM / MTL (binary geometry, animations, materials)

Shipping data:
Some kind of "combo" format archive

The content creators on the team work with source data - these are the only types of data files that are edited by hand.
Inside Maya/Photoshop/etc, source data is exported to intermediate formats. On a small project you'd probably do this manually via "File->Export/Save as", but on a project with a decent number of engine staff and tech-artists, you'd make this as automatic as possible (e.g. automatically exporting multiple TGAs from a single PSD whenever the PSD is saved).
An automatic process then takes the exported data and compiles it into game-specific formats.
During development, the game loads these compiled data files directly, but in a shipping build, data files are loaded from "combo" archives instead.

Source data is stored in version control, just like our source-code is, but usually in a different repository than the code (one designed to work with very large files).
Exported data is usually stored in a version control repository as well, often the same one as the code. Exported data is never modified by hand.
Compiled data is usually not stored in version control - it's generated on each developer's workstation from the exported data, OR a central network PC stores a cache of the compiled data (and may also handle the exported-data -> compiled-data step).
Shipping data is also not stored in version control - it can be generated from the compiled data if needed.

Ideally, this whole pipeline is automated after source-data has been exported. After hitting the "export" button, you're going to end up with some new files in your "compiled data" directory, but you almost never have to look at these files, and never, ever edit them - so the formats used here are pretty irrelevant.

At one company, when the game requested a particular resource, the timestamp on the "compiled" version would be checked against the "exported" version, and if the exported version was newer, then the resource would be recompiled before being loaded into the game. This engine even kept monitoring the export directory and re-compiled and re-loaded modified files while the game was running to allow for quick iteration.
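That timestamp check is only a few lines with std::filesystem (a sketch; NeedsRecompile is a made-up name, and a real pipeline would also track dependency chains rather than a single exported/compiled file pair):

```cpp
#include <cassert>
#include <chrono>
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// True when the compiled resource is missing or older than its exported
// source, i.e. when it must be (re)compiled before the game loads it.
bool NeedsRecompile(const fs::path& exported, const fs::path& compiled) {
    if (!fs::exists(compiled))
        return true;  // never been compiled at all
    return fs::last_write_time(compiled) < fs::last_write_time(exported);
}
```

The live-reload variant would run the same predicate from a directory watcher instead of at resource-request time.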
At other companies, converting from "exported" to "compiled" was done by running a batch program before launching the game. However, in both cases, "compiled" data files are rarely looked at by the developers. All developers do is edit the source files and hit a button, which somehow results in a whole bunch of compiled resources being written to a "data" folder that you never look at.
Also, note that one source file can produce many outputs. A single PSD file for a texture would usually be exported to multiple TGAs (e.g. diffuse/specular/normal maps) and a MB file could potentially produce materials, geometry, animations, etc... The way that these outputs are stored is completely opaque to the user (they could be different files, they could be grouped by level, they could all be grouped into a single archive).

##### Share on other sites
Wow, that was extremely helpful! Knowing this pipeline, and how it's done professionally, opened my eyes. There are so many things I wasn't aware of! Lots of work ahead of me.

> At one company, when the game requested a particular resource, the timestamp on the "compiled" version would be checked against the "exported" version, and if the exported version was newer, then the resource would be recompiled before being loaded into the game. This engine even kept monitoring the export directory and re-compiled and re-loaded modified files while the game was running to allow for quick iteration.
Cool. But I guess I'll stick to simpler solutions for some time.
I'll probably go with batch programs. I guess I'll ask my artist to export his models to Collada, and then I'll write a program to split them into geometry, materials and stuff, which I'll run on my side.

I really appreciate that you found time to explain this in detail; it was exactly the knowledge that I lacked. Now I can get to work. Many thanks to you, Hodgman!