pulo

Members

  • Content count: 51
  • Community Reputation: 276 Neutral

About pulo

  • Rank: Member
  1. Hi there,

     I am currently creating a small 2D top-down game (just for learning purposes). The map itself is built with Tiled (http://www.mapeditor.org/) - great tool. I have already implemented most of the rendering code, but I am still wondering if there are other/better ways of doing it, especially with regard to frustum culling and the update logic for the tiles.

     This is how my setup in Tiled looks. I have different layer types:

     • Two layers of type background, which are basically just static backgrounds for the map; they won't be changed at run-time.
     • One layer of type action, which contains all tiles that have different effects in game (e.g. collision, destruction - like bushes).
     • One layer of type foreground, which is basically like the static background, but will be rendered after every entity in the game.
     • I am considering one additional layer of type animated, which would contain all tiles that are animated somehow (only environmental stuff, no entities).
     • Each layer can use each tileset (texture) loaded in Tiled (so it is not restricted to just one).

     Is this even a good setup? How would you organize those layers?

     In my code those layers are represented by one class, TilemapLayer, which is basically only responsible for rendering them. Each non-empty tile of the layers from Tiled creates an Instance:

     struct Instance
     {
         Instance() {};
         Instance(float x, float y, float z, float tileWidth, float tileHeight, float uOffset, float vOffset)
             : textureInfos(tileWidth, tileHeight, uOffset, vOffset), position(x, y, z)
         {
         }

         DRE::Vector3f position;
         DRE::Vector4f textureInfos;
     };

     Those instances are, as the name suggests, instances for an instance buffer. I group them by the tilesets they are using, to minimize texture swapping and draw calls, like this:

     // Here we "order" by texture atlas so that we minimize switching!
     m_instances[countByte].push_back(Instance(offsetX, offsetY, 0.0f, (float)tileWidth, (float)tileHeight, uOffset, vOffset));

     Rendering looks like this:

     DRE::uint_t offsetCounter = 0;
     DRE::uint_t stride = sizeof(Instance);
     DRE::uint_t offset = 0;
     m_graphicsObject->GetContext()->SetVertexBuffers(m_instanceBuffer.GetAddressOf(), 1, 1, &stride, &offset);

     for (DRE::uint_t i = 0; i < m_instances.size(); ++i)
     {
         // Update the shader variables with the tileset texture and its dimensions
         unsigned int textureWidth = (*m_tilesets)[i]->GetTextureWidth();
         unsigned int textureHeight = (*m_tilesets)[i]->GetTextureHeight();
         m_graphicsObject->GetRenderParameterManager()->SetParameter("textureWidth", textureWidth);
         m_graphicsObject->GetRenderParameterManager()->SetParameter("textureHeight", textureHeight);
         m_graphicsObject->GetRenderParameterManager()->SetParameter("shaderTexture", *(*m_tilesets)[i]->GetTexture());
         stateObject->UpdateConstantBuffer("perLayer");
         stateObject->UpdateShaderResourceView("shaderTexture");

         if (m_isDynamic)
         {
             // Update the instance buffer
             D3D11_MAPPED_SUBRESOURCE bufferData;
             bufferData.pData = NULL;
             bufferData.DepthPitch = bufferData.RowPitch = 0;
             m_graphicsObject->GetGraphicsDevice()->GetDeviceContext()->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &bufferData);
             memcpy((Instance*)bufferData.pData, m_instances[i].data(), sizeof(Instance) * m_instances[i].size());
             m_graphicsObject->GetGraphicsDevice()->GetDeviceContext()->Unmap(m_instanceBuffer.Get(), 0);

             // Draw the layer with instances
             m_graphicsObject->GetContext()->DrawInstanced(4, (DRE::uint_t)m_instances[i].size());
         }
         else
         {
             m_graphicsObject->GetContext()->DrawInstanced(4, (DRE::uint_t)m_instances[i].size(), offsetCounter);
             offsetCounter += (DRE::uint_t)m_instances[i].size();
         }
     }

     This setup works reasonably well (losing about 60 fps in debug for each new tileset (texture) introduced), but I am worried about frustum culling. There is currently no easy way to do it, at least I am not seeing one. Do you have any suggestions? A rough sketch of what I have in mind is below.

     My plan for the update logic (collision checks etc.) is to create a different representation of the game state of the map, based on the action layer from Tiled. I am fine with updating only the visible section of the map. How do I represent this in memory?

     I am thankful for any input one might give me on this. If you have any questions, feel free to ask.
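     For the culling, here is a rough, untested sketch of the direction I am considering (placeholder types and names, not my actual classes): since the map is a regular grid, the camera rectangle maps directly onto a tile index range, so only that range has to be rebuilt into the dynamic instance buffer each frame. The same row-major grid could double as the in-memory representation of the action layer for collision checks.

     #include <algorithm>
     #include <cmath>
     #include <cstddef>
     #include <vector>

     // Placeholder types for the sketch; in the real code these would be the Tiled
     // layer data and the Instance struct shown above.
     struct CameraRect { float x, y, width, height; };             // world-space view rectangle
     struct Tile       { bool empty; std::size_t tilesetIndex; };  // one cell of a layer, row-major

     // Returns, per tileset, the tiles that fall inside the camera rectangle.
     // Because the map is a regular grid, "culling" is just clamping to an index range.
     std::vector<std::vector<Tile>> CullTiles(const CameraRect& cam,
                                              const std::vector<Tile>& tiles, // mapWidth * mapHeight entries
                                              int mapWidth, int mapHeight,
                                              float tileWidth, float tileHeight,
                                              std::size_t tilesetCount)
     {
         std::vector<std::vector<Tile>> visible(tilesetCount);

         const int firstX = std::max(0, (int)std::floor(cam.x / tileWidth));
         const int firstY = std::max(0, (int)std::floor(cam.y / tileHeight));
         const int lastX  = std::min(mapWidth  - 1, (int)std::ceil((cam.x + cam.width)  / tileWidth));
         const int lastY  = std::min(mapHeight - 1, (int)std::ceil((cam.y + cam.height) / tileHeight));

         for (int y = firstY; y <= lastY; ++y)
             for (int x = firstX; x <= lastX; ++x)
             {
                 const Tile& t = tiles[y * mapWidth + x];
                 if (!t.empty)
                     visible[t.tilesetIndex].push_back(t); // still grouped by tileset for instancing
             }

         return visible;
     }

     The per-tileset vectors would then be uploaded with Map/WRITE_DISCARD exactly like m_instances above.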
  2.   Alright. Thanks again for your help!
  3.   While playing around with this a little bit I noticed some "problems", and I am unsure what the best solution would be. I guess I need different primitive meshes for different types of textures: a cube, a plane, and a camera-oriented plane (example images omitted). Am I right in this assumption? I guess there is no universal 3D solution that fits all sprites/purposes, is there?
  4.   Thanks again. One last follow-up, and sorry if this sounds dumb: why do I need to match the 3D model to the texture as closely as possible? Is this only relevant if I need dynamic lighting and shader effects? Couldn't I get away with only very basic shapes, like a cube with the backfaces removed, and thus simply do billboarding with cubes instead of quads?
  5.   Thank you for your detailed pictures and explanations. I really appreciate it! So the advantages of the 3D model rendering (with removed backfaces - since the camera angle stays the same), as opposed to simple textured quads, are basically the easier depth sorting, the better representation of height, and the possibility to do dynamic stuff (lighting)?
  6.   Thanks for the answer. So the size of the "simple" model does somewhat match the texture size?   They also state that they are using the "tiled" approach. What I don't understand is how this is still considered tiled if you try to match the geometry (or the silhouette) as closely as possible. What is the purpose/advantage of using the tiled approach then?
  7. Hi there,

     I am currently investigating a lot of different methods for rendering beautiful isometric environments while still leaving room for dynamic elements (lights etc.). I think leveraging the 3D depth testing is a good idea.

     I stumbled across this blog post about Shadowrun Returns, and their method of doing it appeals to me: http://harebrained-schemes.com/blog/2013/03/22/mikes-dev-diary-art/

     How did they do it? It says that they are projecting their art onto simple 3D shapes, but, for example, do they use multiple same-sized cubes for the bigger buildings, or actually a single larger one? I just cannot wrap my head around this problem... I hope someone can clear up my confusion about how to build an isometric world out of 3D shapes and thus be able to leverage the depth testing of the 3D hardware.

     Thanks!
  8.       Thank you! Both of you! That was indeed my problem. I was so focused on checking the pipeline that I totally forgot about the window/DirectX initialization. EDIT: If either of you also wants to answer the question on Stack Overflow, feel free to do so. I will wait till tomorrow and just accept the first answer that comes in. Otherwise I will answer the question myself and link to this thread.
  9. So I am trying to render a small isometric tile map and just started with a single tile to feel things out. But I am running into a problem where the tile gets jagged edges, even though it looks fine on the texture itself. The strange thing is that it also appears correct if I use the Graphics Debugger in Visual Studio (and that is because the tile gets displayed a little smaller than the real one - slightly zooming in has the same effect). Here is a picture to better visualize what I mean (screenshot omitted): the left picture is a part of the rendered frame inside a normal window, the right part is the captured frame in the graphics debugging tool. As you can see, the display of the captured frame looks completely fine. The normal rendering inside a window also starts to look good if I scale the tile up by some factor.

     Here is my sampler description. I am creating the texture with the CreateDDSTextureFromFile method provided by the DirectXTK (https://github.com/Microsoft/DirectXTK):

     CreateDDSTextureFromFile(gfx->GetGraphicsDevice()->GetDevice(), L"resources/tile.DDS", nullptr, shaderResource.GetAddressOf());

     ZeroMemory(&g_defaultSamplerDesc, sizeof(D3D11_SAMPLER_DESC));
     g_defaultSamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // D3D11_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR
     g_defaultSamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
     g_defaultSamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
     g_defaultSamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
     g_defaultSamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
     g_defaultSamplerDesc.MinLOD = 0;
     g_defaultSamplerDesc.MaxAnisotropy = 1;
     g_defaultSamplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

     I checked the .dds file and there is only 1 mip level. Here is also the texture description which the DirectXTK method is using. It seems fine to me:

     Width           64                                unsigned int
     Height          64                                unsigned int
     MipLevels       1                                 unsigned int
     ArraySize       1                                 unsigned int
     Format          DXGI_FORMAT_B8G8R8A8_UNORM (87)   DXGI_FORMAT
     SampleDesc      {Count=1 Quality=0}               DXGI_SAMPLE_DESC
     Usage           D3D11_USAGE_DEFAULT (0)           D3D11_USAGE
     BindFlags       8                                 unsigned int
     CPUAccessFlags  0                                 unsigned int
     MiscFlags       0                                 unsigned int

     And for what it's worth, here is also my vertex setup:

     Vertex v[] =
     {
         Vertex(-32.f, -32.f, 1.0f, 0.0f, 1.0f),
         Vertex(-32.f,  32.f, 1.0f, 0.0f, 0.0f),
         Vertex( 32.f, -32.f, 1.0f, 1.0f, 1.0f),
         Vertex( 32.f,  32.f, 1.0f, 1.0f, 0.0f)
     };

     Any idea what might be causing this? This also happens with other dimetric textures.

     Full disclosure: I also asked this question on Stack Overflow, but since I have got no real reply (besides some nice help from one fella) for 5 days I am getting a bit desperate, especially because I have already tried a lot of stuff. Here is the original question: http://stackoverflow.com/questions/36068768/isometric-tile-has-jagged-lines
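     (Following up after the fix mentioned in reply 8 above: the problem was in the window/DirectX initialization, not in the pipeline state. Below is a minimal sketch of the kind of mismatch that can produce exactly this symptom - the back buffer not matching the window's client area, so the presented image gets stretched and resampled. Names are placeholders, not my actual code.)

     #include <windows.h>
     #include <dxgi.h>

     // If the swap chain is created with the outer window size (or a hard-coded size)
     // while the image is presented into a differently sized client area, everything
     // gets rescaled on present and clean tile outlines turn jagged.
     DXGI_SWAP_CHAIN_DESC DescribeSwapChain(HWND hwnd)
     {
         RECT rc;
         GetClientRect(hwnd, &rc);                 // client area, not the full window rect

         DXGI_SWAP_CHAIN_DESC desc = {};
         desc.BufferCount       = 1;
         desc.BufferDesc.Width  = rc.right - rc.left;
         desc.BufferDesc.Height = rc.bottom - rc.top;
         desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
         desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
         desc.SampleDesc.Count  = 1;
         desc.OutputWindow      = hwnd;
         desc.Windowed          = TRUE;
         return desc;
     }

     The D3D11_VIEWPORT should use the same width/height as the back buffer.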
  10. Do I understand this approach correctly: you compile all the shader permutations at runtime (at startup, for example)? Is it possible to compile the shader permutations at build time, but still have the luxury of having them named nicely, so they can be identified at runtime using the bitmask you suggested above?
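     (For what it's worth, this is roughly what I imagine such a build-time scheme could look like - just a sketch with made-up feature names, assuming each permutation is compiled offline, e.g. with fxc and matching /D defines, and the output file is named after the bitmask.)

     #include <cstdint>
     #include <string>

     // Each optional shader feature gets one bit.
     enum ShaderFeature : std::uint32_t
     {
         FEATURE_NORMAL_MAP = 1u << 0,
         FEATURE_FOG        = 1u << 1,
         FEATURE_SKINNING   = 1u << 2,
     };

     // The offline compile step names every compiled blob after its mask,
     // e.g. mask 5 (normal map + skinning) -> "lighting_ps_5.cso".
     std::string PermutationFileName(std::uint32_t mask)
     {
         return "lighting_ps_" + std::to_string(mask) + ".cso";
     }

     // At runtime the same mask is rebuilt from the material/mesh state,
     // so the matching precompiled blob can be looked up by name.
     std::uint32_t BuildMask(bool normalMap, bool fog, bool skinning)
     {
         std::uint32_t mask = 0;
         if (normalMap) mask |= FEATURE_NORMAL_MAP;
         if (fog)       mask |= FEATURE_FOG;
         if (skinning)  mask |= FEATURE_SKINNING;
         return mask;
     }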
  11. Quote:
     "If you compile your shader with FXC, and then check the output listing for the name of the buffer, you should be able to use that name to reflect the constant buffer. Regarding the performance, I think there is a penalty for using the dynamic linkage (although I've never heard hard numbers on this before). Most of the systems I have heard of just generate the appropriate shader code and compile the needed variants accordingly. Is that something that isn't a usable solution for you?"

     I also heard about this solution, but could not find any samples or a good explanation of the best way to build this kind of system. Do you have any at hand? When I found out about Dynamic Shader Linkage I was not expecting it to be used so rarely, but the explanation from MJP now makes it quite clear why this might be the case.

     So: does anyone have a good article or something about how one would usually go about this kind of system?

     Thank you for all your input!
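     (For reference, this is the reflection route the quote describes, as far as I understand it - a minimal sketch, with "cbPerFrame" standing in for whatever buffer name FXC's output listing shows.)

     #include <d3d11shader.h>
     #include <d3dcompiler.h>
     #include <wrl/client.h>
     #pragma comment(lib, "d3dcompiler.lib")

     using Microsoft::WRL::ComPtr;

     // Reflect a compiled shader blob and read the size of a constant buffer by name.
     HRESULT ReflectBufferSize(const void* bytecode, SIZE_T bytecodeSize, UINT* outSizeInBytes)
     {
         ComPtr<ID3D11ShaderReflection> reflection;
         HRESULT hr = D3DReflect(bytecode, bytecodeSize, __uuidof(ID3D11ShaderReflection),
                                 reinterpret_cast<void**>(reflection.GetAddressOf()));
         if (FAILED(hr))
             return hr;

         // GetConstantBufferByName never returns null; a failed GetDesc signals "not found".
         ID3D11ShaderReflectionConstantBuffer* cb = reflection->GetConstantBufferByName("cbPerFrame");
         D3D11_SHADER_BUFFER_DESC desc;
         hr = cb->GetDesc(&desc);
         if (FAILED(hr))
             return hr;

         *outSizeInBytes = desc.Size;
         return S_OK;
     }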
  12. Thanks for your answer ;).

     Why is that? Is it too cumbersome, or bad performance-wise? I find it quite an interesting idea, especially because it gives you the possibility to create new class instances at run time.

     How do I get the reflection interface for the default constant buffer? I have already checked most of the reflection interfaces, including the constant buffers (with GetConstantBufferByIndex()) and the bound resources (with GetResourceBindingDesc()). Sadly it does not show up in either of those. Getting the global interface name by index would be nice, because then one would not have to change the code manually if something changes inside the shader code, and it would be easy to "calculate" the offset into the dynamic linkage array.
  13. Well, I found something which might help. If I use GetInterfaceByIndex on the shader reflection type, I get the expected class D3D_SVC_INTERFACE_CLASS, which I suspect would be different or non-existent on a normal struct.

     EDIT: The only thing now left for me is to find out whether it is possible to get the name from here through shader reflection. Any ideas?

     iBaseLight     g_abstractAmbientLighting;
                    ^^^^^^^^^^^^^^^^^^^^^^^^

     struct PixelInput
     {
         float4 position : SV_POSITION;
         float3 normals : NORMAL;
         float2 tex : TEXCOORD0;
     };
  14. Yes, the constant buffer stays the same regardless of which class implementation I am using at runtime. (Was this what you were asking?) The abstract interface is used like this:

     iBaseLight     g_abstractAmbientLighting;

     struct PixelInput
     {
         float4 position : SV_POSITION;
         float3 normals : NORMAL;
         float2 tex : TEXCOORD0;
     };

     float4 main(PixelInput input) : SV_TARGET
     {
         float3 Ambient = (float3)0.0f;
         Ambient = g_txDiffuse.Sample(g_samplerLin, input.tex) * g_abstractAmbientLighting.IlluminateAmbient(input.normals);
         return float4(saturate(Ambient), 1.0f);
     }

     The reason I would like to reliably differentiate between a normal struct and a class used for dynamic shader linkage is that I would later use the variable name to automatically get the class instances, like this:

     g_pPSClassLinkage->GetClassInstance( varDesc.Name, 0, &g_pAmbientLightClass );
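     (Sketching the check I have in mind - untested, building on the GetInterfaceByIndex observation from reply 13 above; "cbPerFrame" is the buffer name from the SDK sample.)

     #include <d3d11shader.h>
     #include <d3dcompiler.h>

     // A plain struct member reports zero implemented interfaces, while a linkage class
     // such as cAmbientLight (which derives from iBaseLight) reports at least one, even
     // though both come back with D3D_SVC_STRUCT as their variable class.
     bool MemberIsLinkageClass(ID3D11ShaderReflection* reflector,
                               const char* bufferName, UINT memberIndex)
     {
         ID3D11ShaderReflectionConstantBuffer* cb = reflector->GetConstantBufferByName(bufferName);
         ID3D11ShaderReflectionVariable* var = cb->GetVariableByIndex(memberIndex);
         ID3D11ShaderReflectionType* type = var->GetType();

         D3D11_SHADER_TYPE_DESC typeDesc;
         if (FAILED(type->GetDesc(&typeDesc)))
             return false;

         return typeDesc.Class == D3D_SVC_STRUCT && type->GetNumInterfaces() > 0;
     }

     If that check holds up, varDesc.Name from the same variable could then be passed straight to GetClassInstance as in the line above.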
  15. Hey there,

     I am currently trying to integrate Dynamic Shader Linkage into my shader reflection code. I examined the Win32 sample Microsoft provided, where they declare a cbuffer like this:

     cbuffer cbPerFrame : register( b0 )
     {
        cAmbientLight     g_ambientLight;
        cHemiAmbientLight g_hemiAmbientLight;
        cDirectionalLight g_directionalLight;
        cEnvironmentLight g_environmentLight;
        float4            g_vEyeDir;
     };

     cAmbientLight, cHemiAmbientLight etc. are all classes implementing the iBaseLight interface. This is an example:

     interface iBaseLight
     {
        float3 IlluminateAmbient(float3 vNormal);
        float3 IlluminateDiffuse(float3 vNormal);
        float3 IlluminateSpecular(float3 vNormal, int specularPower);
     };

     //--------------------------------------------------------------------------------------
     // Classes
     //--------------------------------------------------------------------------------------
     class cAmbientLight : iBaseLight
     {
        float3 m_vLightColor;
        bool   m_bEnable;

        float3 IlluminateAmbient(float3 vNormal);

        float3 IlluminateDiffuse(float3 vNormal)
        {
            return (float3)0;
        }

        float3 IlluminateSpecular(float3 vNormal, int specularPower)
        {
            return (float3)0;
        }
     };

     Now I am already able to get the needed constant buffer variables out of the reflection (m_vLightColor and m_bEnable in this case), but I was wondering whether there is a reliable method to detect if a constant buffer variable is an interface-implementing class (like cAmbientLight, for example). The reflection of this constant buffer only gives D3D_SVC_STRUCT as the class type for the variables. Browsing through MSDN I noticed that there are two class types which would fit: D3D_SVC_INTERFACE_POINTER and D3D_SVC_INTERFACE_CLASS. Am I doing something wrong, or is it normal that the class reported here is a STRUCT?
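     (For context, this is the runtime side I am building toward, condensed from the Microsoft dynamic shader linkage sample mentioned above - a sketch with error handling omitted.)

     #include <d3d11.h>
     #include <d3d11shader.h>
     #include <vector>

     // Bind a pixel shader together with the class instance chosen for its interface slot.
     void BindWithLinkage(ID3D11DeviceContext* context,
                          ID3D11PixelShader* pixelShader,
                          ID3D11ShaderReflection* reflector,
                          ID3D11ClassLinkage* classLinkage)
     {
         // One entry per interface slot used by the shader.
         UINT numSlots = reflector->GetNumInterfaceSlots();
         std::vector<ID3D11ClassInstance*> instances(numSlots, nullptr);

         // Find the slot occupied by the interface variable...
         ID3D11ShaderReflectionVariable* iface = reflector->GetVariableByName("g_abstractAmbientLighting");
         UINT slot = iface->GetInterfaceSlot(0);

         // ...and pick the concrete class instance declared in cbPerFrame by its variable name.
         ID3D11ClassInstance* ambient = nullptr;
         classLinkage->GetClassInstance("g_ambientLight", 0, &ambient);
         instances[slot] = ambient;

         context->PSSetShader(pixelShader, instances.data(), numSlots);

         if (ambient)
             ambient->Release();
     }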