
Posts posted by pulo


  1. Hi there,

    I am currently creating a small 2D top-down game (just for learning purposes). The map itself is built with Tiled (http://www.mapeditor.org/) - great tool. I have already implemented most of the rendering code, but I am still wondering whether there are other/better ways of doing it, especially in the context of frustum culling and the update logic for the tiles.

     

    This is how my setup in Tiled looks:

    1.  I have different layer types in Tiled:
      • Two layers of type background, which are basically just static backgrounds for the map; they won't be changed at run-time.
      • One layer of type action, which contains all tiles that have gameplay effects (e.g. collision, destruction - like bushes).
      • One layer of type foreground, which is basically like the static background, but is rendered after every entity in the game.
      • I am considering one additional layer of type animated, which would contain all tiles that are animated in some way (only environmental things, no entities).
    2. Each layer can use every tileset (texture) loaded in Tiled (so it is not restricted to just one).

    Is this even a good setup? How would you organize those layers?

     

    Now in my code those layers are represented by one class, TilemapLayer, which is basically only responsible for rendering them. Each non-empty tile of the layers from Tiled creates an Instance:

    struct Instance {
        Instance() {}
        Instance(float x, float y, float z, float tileWidth, float tileHeight, float uOffset, float vOffset) :
            position(x, y, z), textureInfos(tileWidth, tileHeight, uOffset, vOffset) {
        }
        DRE::Vector3f position;
        DRE::Vector4f textureInfos;
    };

    Those instances are what the name suggests: entries for an instance buffer. I group them by the tileset they use, to minimize texture switching and draw calls, like this:

    // Here we "order" by texture atlas so that we minimize switching!
    m_instances[countByte].push_back(Instance(offsetX, offsetY, 0.0f, (float)tileWidth, (float)tileHeight, uOffset, vOffset));
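    For reference, the uOffset/vOffset pair pushed into each Instance above can be derived from the tile's local index in its tileset and the tileset's column count. A minimal sketch of that mapping (the function and parameter names here are illustrative assumptions, not the engine's API):

```cpp
// Pixel offset of a tile inside its atlas texture, computed from the
// tile's local index. 'columns' is the number of tiles per atlas row.
// All names here are illustrative, not taken from the engine above.
struct UVOffset { float u, v; };

UVOffset TileUVOffset(int localTileId, int columns, int tileWidth, int tileHeight)
{
    UVOffset o;
    o.u = static_cast<float>((localTileId % columns) * tileWidth);  // column within the atlas
    o.v = static_cast<float>((localTileId / columns) * tileHeight); // row within the atlas
    return o;
}
```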

    Rendering looks like this:

    DRE::uint_t offsetCounter = 0;
    
    DRE::uint_t stride = sizeof(Instance);
    DRE::uint_t offset = 0;
    m_graphicsObject->GetContext()->SetVertexBuffers(m_instanceBuffer.GetAddressOf(), 1, 1, &stride, &offset);
    
    for (DRE::uint_t i = 0; i < m_instances.size(); ++i) {
        // Update the shader variables with the tileset texture and its dimensions
        unsigned int textureWidth = (*m_tilesets)[i]->GetTextureWidth();
        unsigned int textureHeight = (*m_tilesets)[i]->GetTextureHeight();
        m_graphicsObject->GetRenderParameterManager()->SetParameter("textureWidth", textureWidth);
        m_graphicsObject->GetRenderParameterManager()->SetParameter("textureHeight", textureHeight);
        m_graphicsObject->GetRenderParameterManager()->SetParameter("shaderTexture", *(*m_tilesets)[i]->GetTexture());
    
        stateObject->UpdateConstantBuffer("perLayer");
        stateObject->UpdateShaderResourceView("shaderTexture");
    
        if (m_isDynamic) {
            // Update the instance buffer
            D3D11_MAPPED_SUBRESOURCE bufferData;
            bufferData.pData = NULL;
            bufferData.DepthPitch = bufferData.RowPitch = 0;
            m_graphicsObject->GetGraphicsDevice()->GetDeviceContext()->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &bufferData);
            memcpy(bufferData.pData, m_instances[i].data(), sizeof(Instance) * m_instances[i].size());
            m_graphicsObject->GetGraphicsDevice()->GetDeviceContext()->Unmap(m_instanceBuffer.Get(), 0);
    
            // Draw the layer with instances
            m_graphicsObject->GetContext()->DrawInstanced(4, (DRE::uint_t)m_instances[i].size());
        }
        else {
            m_graphicsObject->GetContext()->DrawInstanced(4, (DRE::uint_t)m_instances[i].size(), offsetCounter);
            offsetCounter += (DRE::uint_t)m_instances[i].size();
        }
    }

     

    This setup works reasonably well (losing about 60 fps in debug for each new tileset (texture) introduced), but I am worried about frustum culling. There is currently no easy way to do it, at least I am not seeing one. Do you have any suggestions?
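
    One way to approach the culling: with an axis-aligned 2D camera, the tiles live on a regular grid, so frustum culling reduces to clamping the camera rectangle to an inclusive range of tile indices and only submitting instances from that range. A minimal sketch under that assumption (all names here are hypothetical, not from the engine above):

```cpp
#include <algorithm>

// Inclusive range of visible tile indices; names are hypothetical.
struct TileRange { int firstCol, lastCol, firstRow, lastRow; };

// camX/camY is the top-left corner of the camera view in world units.
TileRange VisibleTiles(float camX, float camY, float viewW, float viewH,
                       int tileW, int tileH, int mapCols, int mapRows)
{
    TileRange r;
    // Clamp the camera rectangle to the map bounds and convert to tile indices.
    r.firstCol = std::max(0, static_cast<int>(camX) / tileW);
    r.firstRow = std::max(0, static_cast<int>(camY) / tileH);
    r.lastCol  = std::min(mapCols - 1, static_cast<int>(camX + viewW) / tileW);
    r.lastRow  = std::min(mapRows - 1, static_cast<int>(camY + viewH) / tileH);
    return r;
}
```

    With a dynamic instance buffer, only the instances whose tile coordinates fall inside this range would be copied into the mapped buffer each frame.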

     

    My plan for the update logic (collision checks etc.) is to create a separate representation of the map's game state, based on the action layer from Tiled. I am fine with updating only the visible section of the map. How would I represent this in memory?
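
    One common in-memory representation for this: a flat, row-major array of per-tile flags built once from the action layer, indexed by the same tile coordinates the renderer uses. A hypothetical sketch (none of these names come from the engine):

```cpp
#include <cstdint>
#include <vector>

// Per-tile gameplay flags derived from the Tiled "action" layer.
// Names are illustrative only.
enum TileFlags : std::uint8_t {
    TILE_NONE         = 0,
    TILE_SOLID        = 1 << 0,  // blocks movement
    TILE_DESTRUCTIBLE = 1 << 1,  // e.g. bushes
};

struct ActionGrid {
    int cols = 0, rows = 0;
    std::vector<std::uint8_t> flags;  // row-major, rows * cols entries

    ActionGrid(int c, int r) : cols(c), rows(r), flags(c * r, TILE_NONE) {}

    std::uint8_t& At(int col, int row) { return flags[row * cols + col]; }

    bool IsSolid(int col, int row) const {
        return (flags[row * cols + col] & TILE_SOLID) != 0;
    }
};
```

    Because the grid shares the renderer's tile coordinates, updating only the visible section is the same index-range iteration as the culling.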

     

    I am thankful for any input you might have. If you have any questions, feel free to ask.


  2. If you're choosing to use this particular abstraction (renders projected onto impostor geometry), you're probably going to be doing it for 2 main reasons: Z buffering and lighting. There are other benefits, to be sure, but these are the big ones. The Z buffering gets past some of the interesting sprite draw-order problems that have been highlighted over 30 years of isometric games, and the lighting just makes it look juicy.

    But for both of these, you really want to have geometry that fits the rendered sprite that is projected on it. Cubes can work for many elements, as long as those elements are essentially cubic in nature. Since you are still going to try to avoid cases of intersection or clipping between objects, the occasional weird cube-like clipping artifact won't be too bad. But a mismatch between the rendered object and its impostor geometry is going to be very noticeable once you toss dynamic lighting into the mix.

    So, no, there's really no one-size-fits-all solution for impostor geometry. For this reason, I'd say that if you are looking for a paradigm to reduce your workload, this isn't it. Just go with a standard 2D sprite stacking approach instead, and deal with the edge cases as best you can. This technique ends up actually being more work, because after you have finished the intricate modeling of your rendered objects, you still have to do more modeling to obtain a good-fit impostor to project them on. That second step can be skipped in a traditional 2D approach, at the cost of all the usual tradeoffs.

    If you are willing to accept some lighting quirks, though, you can settle for cubes or portions of cubes for everything. It'll show up in lighting, but maybe you can finesse it so it's not that bad.

     

    Alright. Thanks again for your help!


  3. Yes, it's mostly for lighting. Cubes typically work well enough for z buffering, unless the shape is concave. They don't light very well, though.

     

    While playing around with this a little bit, I noticed some "problems" where I am unsure what the best solution would be.

    I guess I need different primitive meshes for different types of textures:

     

    Cube: 7ll2RWR.png

     

    Plane: ueJo37v.png

     

    Camera oriented plane: ON4rafE.png

     

    Am I right in this assumption? I guess there is no universal 3D solution that fits all sprites/purposes, is there?


    Pretty much, yes. Also, you can cram more detail into the scene without drastically increasing the face count. And because the scene is handled as a 3D scene, you can take advantage of all the usual 3D tricks (frustum culling, occlusion culling, render batching, etc.) to increase performance, and shader tricks such as normal mapping to increase render quality.

     

    Thanks again. One last follow-up, and sorry if this sounds dumb: Why do I need to match the 3D model as closely as possible to the texture? Is this only relevant if I need dynamic lighting and shader effects? Couldn't I get away with very basic shapes, like a cube with the backfaces removed, and thus simply do billboarding with cubes instead of quads?


  5. Here is another shot of a wall piece.

    kJvgqvG.png

    You can see that the wall model is UV mapped to 'snip' out the rendered texture. Now, here is a shot with another wall piece tiled in adjacent to that one:

    Ha6KqgW.png

    The rendered wall pieces are constructed such that they present a repeating tile pattern, so that when wall pieces are placed on a grid adjacent to each other, the rendered patterns form a seamless image. By placing different wall pieces, entire dungeons can be built.

     

    Thank you for your detailed pictures and explanations. I really appreciate it! So the advantages of the 3D model rendering (with removed backfaces, since the camera angle stays the same), as opposed to simple textured quads, are basically the easier depth sorting, the better representation of height, and the possibility of dynamic effects (lighting)?


  6. I reckon they would use geometry that matches the rendered geometry as closely as possible, not limiting themselves just to cubes. Here is an example from one of my own projects:

    1dSpRVp.png

    You can see that the geometry that is actually being rendered is much simpler than the geometry used to construct the rendered tiles of the wall and floor. It's a single plane, in the case of the floor.

    I've learned that the most important thing is to match the silhouette of the rendered geometry as closely as you can. If you look at the spherical newel cap on the wall to the left in that screenshot, you can see a small crescent of white. That is caused by part of the wall background being projected by mistake onto the sphere, and receiving lighting from the sphere portion rather than from the more dimly-lit top of the wall. Adjustments to the silhouette of the impostor geometry can help to mitigate that sort of thing.

     

    Thanks for the answer. So the size of the "simple" model does somewhat match the texture size?

     

    They also state that they are using the "tiled" approach. What I don't understand is how this is still considered tiled if you try to match the geometry (or at least the silhouette) as closely as possible. What is the purpose/advantage of using the tiled approach then?


  7. Hi there,

     

    so I am currently investigating a lot of different methods for rendering beautiful isometric environments while still leaving room for dynamic elements (lights etc.). I think leveraging 3D depth testing is a good idea.

     

    I stumbled across this blog post for Shadowrun Returns and their method of doing it appeals to me:

    http://harebrained-schemes.com/blog/2013/03/22/mikes-dev-diary-art/

     

    How did they do it? I mean, it says that they are projecting their art onto simple 3D shapes, but, for example, do they use multiple same-sized cubes for the bigger buildings, or actually a single larger one? I just cannot wrap my head around this problem...

    I hope someone can clear up my confusion regarding how to build an isometric world out of 3D shapes and thus be able to leverage the depth testing of the 3D hardware.

     

    Thanks!


    I had a similar issue once, and it turned out I was doing windowed mode wrong in terms of calculating the window size to fit the backbuffer size, resulting in a vaguely stretched display that was hard to notice for a long while. Maybe you could check your window + DirectX initialisation code?

     

     


    I was going to say the same thing. You want to make sure that the client area of your window is the same size as your D3D backbuffer,
    otherwise you'll get really crappy scaling when the backbuffer is blitted onto the window. You can use something like this:
     
    RECT windowRect;
    SetRect(&windowRect, 0, 0, backBufferWidth, backBufferHeight);
    
    BOOL isMenu = (GetMenu(hwnd) != nullptr);
    if(AdjustWindowRectEx(&windowRect, style, isMenu, exStyle) == 0)
        DoErrorHandling();
    
    if(SetWindowPos(hwnd, HWND_NOTOPMOST, 0, 0, windowRect.right - windowRect.left, windowRect.bottom - windowRect.top, SWP_NOMOVE) == 0)
        DoErrorHandling();
    
    See the docs for AdjustWindowRectEx for more details.

     

     

    Thank you, both of you! That was indeed my problem. I was so focused on checking the pipeline that I totally forgot about the window/DirectX initialization.

    EDIT: If either of you also wants to answer the question on Stack Overflow, feel free to do so. I will wait until tomorrow and accept the first answer that comes in. Otherwise I will answer the question myself and link to this thread.


  9. So I am trying to render a small isometric tile map and just started with a single tile to feel things out. But I am running into a problem where the tile gets jagged edges, even though it looks fine on the texture itself. The strange thing is that it also appears correct in the Graphics Debugger in Visual Studio (and that is because the tile gets displayed a little smaller than the real one - slightly zooming in has the same effect). Here is a picture to better visualize what I mean:

     

    4CNt0Yr.png

     

    The left picture is part of the rendered frame inside a normal window. The right part is the captured frame in the graphics debugging tool. As you can see, the captured frame looks completely fine. The normal rendering inside a window also starts to look good if I scale the tile up by some factor.

     

    Here is my sampler description. I am creating the texture with the CreateDDSTextureFromFile method provided by the DirectXTK (https://github.com/Microsoft/DirectXTK).

    CreateDDSTextureFromFile(gfx->GetGraphicsDevice()->GetDevice(), L"resources/tile.DDS", nullptr, shaderResource.GetAddressOf());
    
    ZeroMemory(&g_defaultSamplerDesc, sizeof(D3D11_SAMPLER_DESC));
    g_defaultSamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; //D3D11_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR
    g_defaultSamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    g_defaultSamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    g_defaultSamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    g_defaultSamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    g_defaultSamplerDesc.MinLOD = 0;
    g_defaultSamplerDesc.MaxAnisotropy = 1;
    g_defaultSamplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

    I checked the .dds file and there is only one mip level. Here is also the texture description which the DirectXTK method uses. Seems fine to me:

    Width 64 unsigned int
    Height 64 unsigned int
    MipLevels 1 unsigned int
    ArraySize 1 unsigned int
    Format DXGI_FORMAT_B8G8R8A8_UNORM (87) DXGI_FORMAT
    + SampleDesc {Count=1 Quality=0 } DXGI_SAMPLE_DESC
    Usage D3D11_USAGE_DEFAULT (0) D3D11_USAGE
    BindFlags 8 unsigned int
    CPUAccessFlags 0 unsigned int
    MiscFlags 0 unsigned int

    And for what it's worth, here is also my vertex setup:

    Vertex v[] =
    {
        Vertex(-32.f, -32.f, 1.0f, 0.0f, 1.0f),
        Vertex(-32.f,  32.f, 1.0f, 0.0f, 0.0f),
        Vertex( 32.f, -32.f, 1.0f, 1.0f, 1.0f),
        Vertex( 32.f,  32.f, 1.0f, 1.0f, 0.0f)
    };

    Any idea what might be causing this? It also happens with other dimetric textures.

     

    Full disclosure: I also asked this question on Stack Overflow, but since I have had no real reply (besides some nice help from one fella) for 5 days, I am getting a bit desperate, especially because I have already tried a lot of things. Here is the original question: http://stackoverflow.com/questions/36068768/isometric-tile-has-jagged-lines


  10. Well, it requires some bit work, i.e. shifting and masking. First of all, you'll need to decide on the layout of the bits, for example:

     

    bit 0 : Ambient fog (one bit is enough for yes/no)

    bit 1 : Normal map 

    bit 2-3 : Num Point lights (2 bits is enough for values between 0-3)

    bit ... 

     

    As the loop counter represents all the shader permutations, the only thing left is to extract each bit field (the variable "i" is the loop counter presented in the earlier post):

     

    bool AmbientFog = i & 1; (the desired value is in the first bit so masking out other bits is enough)

    bool NormalMap = (i >> 1) & 1; (the desired value is in the second bit so one shift is necessary and masking after)

    int   NumPointLights = (i >> 2) & 3;

    ...

     

    4 bits are used here (you'll need a few more), so the loop needs to run 2^4 times.

     

    Shifts are practically multiplications/divisions by 2^(shift amount), which effectively moves the bits into the desired positions. After shifting, it is necessary to use the AND (&) operator to mask out unwanted bits.
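
    The extraction described above translates directly into code; here is a self-contained sketch of the decode, using the bit layout from the list above:

```cpp
// Decode one permutation index into its feature fields.
// Layout (as above): bit 0 = ambient fog, bit 1 = normal map,
// bits 2-3 = point light count (0..3).
struct Permutation {
    bool ambientFog;
    bool normalMap;
    int  numPointLights;
};

Permutation DecodePermutation(unsigned i)
{
    Permutation p;
    p.ambientFog     = (i & 1) != 0;        // first bit: mask only
    p.normalMap      = ((i >> 1) & 1) != 0; // second bit: shift, then mask
    p.numPointLights = (i >> 2) & 3;        // two bits: shift by 2, mask with 3
    return p;
}
```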

     

    The loop creating all the permutations doesn't know about the required usage - i.e. it can't know whether a permutation is used or not. Nothing prevents you from creating permutations while the program is running (and saving them to disk to speed up subsequent runs). Of course, this may produce slowdowns whenever a new shader permutation is required.

     

    Cheers!

    Do I understand this approach correctly: you compile all the shader permutations at runtime (at startup, for example)?

    Is it possible to compile the shader permutations at build time, but still have the luxury of having them named nicely, so they can be identified at runtime using the bitmask you suggested above?


  11.  


    How do I get the reflection interface for the default constant buffer?

    If you compile your shader with FXC and then check the output listing for the name of the buffer, you should be able to use that name to reflect the constant buffer. Regarding performance, I think there is a penalty for using dynamic linkage (although I've never heard hard numbers on this). Most of the systems I have heard of just generate the appropriate shader code and compile the needed variants accordingly.

     

    Is that not a usable solution for you?

     

    I have also heard about this solution, but could not find any samples or a good explanation of the best way to build this kind of system. Do you have any at hand?

    When I found out about Dynamic Shader Linkage I was not expecting it to be used so rarely, but the explanation from MJP now makes it quite clear why this might be the case.

    So: does anyone have a good article or similar about how one would usually go about building this kind of system?

     

    Thank you for all your input!


  12. Thanks for your answer ;).


    Congratulations on being the first person that I've met who's actually trying to use dynamic shader linkage

     

    Why is that? Is it too cumbersome, or bad performance-wise? I find it quite an interesting idea, especially because it gives you the possibility to create new class instances at runtime.

     


    It *might* show up if you get the reflection interface for the default constant buffer, but I'd have to test to find out

     

    How do I get the reflection interface for the default constant buffer? I have already checked most of the reflection interfaces, including the constant buffers (with GetConstantBufferByIndex()) and the bound resources (with GetResourceBindingDesc()). Sadly, it does not show up in either of those.

    Getting the global interface name by index would be nice, because then one would not have to change the code manually if something changes inside the shader code, and it would be easy to "calculate" the offset into the dynamic linkage array.


  13. Well, I found something which might help. If I use GetInterfaceByIndex on the shader reflection type, I get the expected class D3D_SVC_INTERFACE_CLASS, which I suspect would be different or non-existent on a normal struct.

     

    EDIT: The only thing left for me now is to find out whether it is possible to get the name from here through shader reflection. Any ideas?

    iBaseLight     g_abstractAmbientLighting;
                   ^^^^^^^^^^^^^^^^^^^^^^^^
    
    struct PixelInput
    {
        float4 position : SV_POSITION;
        float3 normals : NORMAL;
        float2 tex : TEXCOORD0;
    };

  14. Yes, the constant buffer stays the same, regardless of which class implementation I am using at runtime. (Was that what you were asking?)

    The abstract interface is used like this:

    iBaseLight     g_abstractAmbientLighting;
    
    struct PixelInput
    {
        float4 position : SV_POSITION;
        float3 normals : NORMAL;
        float2 tex : TEXCOORD0;
    };
    
    float4 main(PixelInput input) : SV_TARGET
    {
        float3 Ambient = (float3)0.0f;
        Ambient = g_txDiffuse.Sample(g_samplerLin, input.tex) * g_abstractAmbientLighting.IlluminateAmbient(input.normals);
    
        return float4(saturate(Ambient), 1.0f);
    }

    The reason I would like to reliably differentiate between a normal struct and a class used for dynamic shader linkage is that I would later use the variable name to automatically get the class instances, like this:

     g_pPSClassLinkage->GetClassInstance( varDesc.Name, 0, &g_pAmbientLightClass );

  15. Hey there,

     

    I am currently trying to integrate Dynamic Shader Linkage into my shader reflection code. I examined the Win32 sample Microsoft provided, where they declare a cbuffer like this:

    cbuffer cbPerFrame : register( b0 )
    {
       cAmbientLight     g_ambientLight;
       cHemiAmbientLight g_hemiAmbientLight;
       cDirectionalLight g_directionalLight;
       cEnvironmentLight g_environmentLight;
       float4            g_vEyeDir;   
    };

    cAmbientLight, cHemiAmbientLight, etc. are all classes implementing the iBaseLight interface. Here is an example:

     

    interface iBaseLight
    {
        float3 IlluminateAmbient(float3 vNormal);
        float3 IlluminateDiffuse(float3 vNormal);
        float3 IlluminateSpecular(float3 vNormal, int specularPower);
    };
    
    //--------------------------------------------------------------------------------------
    // Classes
    //--------------------------------------------------------------------------------------
    class cAmbientLight : iBaseLight
    {
        float3 m_vLightColor;
        bool   m_bEnable;
    
        float3 IlluminateAmbient(float3 vNormal);
    
        float3 IlluminateDiffuse(float3 vNormal)
        {
            return (float3)0;
        }
    
        float3 IlluminateSpecular(float3 vNormal, int specularPower)
        {
            return (float3)0;
        }
    };

    Now I am already able to get the needed constant buffer variables out of the reflection (m_vLightColor and m_bEnable in this case), but I was wondering whether there is a reliable method to detect if a constant buffer variable is an interface class (like cAmbientLight, for example).
    The reflection of this constant buffer only gives D3D_SVC_STRUCT as the class type for the variables. Browsing through MSDN, I noticed that there are two class types which would fit: D3D_SVC_INTERFACE_POINTER and D3D_SVC_INTERFACE_CLASS.

    Am I doing something wrong, or is it normal that the class displayed here is a STRUCT?

     

     

     


  16. Alright, I found the problem:

     I had to invert the "v" coordinate of the texture coordinates (v = 1 - v).

     Is there a way to detect whether my texture coordinates need inversion, or do I need to specify this before conversion (or alternatively export my mesh for a left-handed coordinate system)?
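
     For what it's worth, the flip itself is a one-liner applied per vertex during conversion. A sketch, with an assumed minimal texture-coordinate struct:

```cpp
struct TexCoord { float u, v; };

// OBJ texture coordinates use a bottom-left origin, while Direct3D samples
// with a top-left origin, so v is mirrored once at import time.
inline TexCoord FlipV(TexCoord t)
{
    return { t.u, 1.0f - t.v };
}
```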

     

    Thanks for any suggestion!


  17. Hey there,

     

    I am currently trying to render a model and "attach" a texture to it.

    The model is based on a .obj file which I am converting to a binary file for faster loading times (my code is nearly the same as this one: http://www.getcodesamples.com/src/B364EC3C/690BAF9B).

    Rendering the model works quite nicely.

    Texture mapping seems to be a problem, though. Looking at my rendering, it seems like the texture coordinates are wrong, but the obj2vbo.cpp file was provided by Microsoft in a sample, so I would rather look at my own code first.

    This is how I load the binary file:

    void createMeshData(_In_ byte* meshData, _Out_ VertexBuffer** vertexBuffer, _Out_ IndexBuffer** indexBuffer, _Out_ uint32* vertexCount, _Out_ uint32* indexCount) {
        *vertexCount = *reinterpret_cast<uint32*>(meshData);
        *indexCount = *reinterpret_cast<uint32*>(meshData + sizeof(uint32));
        BasicVertex* vertices = reinterpret_cast<BasicVertex*>(meshData + sizeof(uint32) * 2);
        *vertexBuffer = this->m_renderer->createVertexBuffer(vertices, sizeof(BasicVertex) * (*vertexCount), false);
        unsigned short* indices = reinterpret_cast<unsigned short*>(meshData + sizeof(uint32) * 2 + sizeof(BasicVertex) * (*vertexCount));
        *indexBuffer = this->m_renderer->createIndexBuffer(indices, sizeof(unsigned short) * (*indexCount), false);
    }

    By debugging the shader I found that my sampler state and the texture resource bound to the shader should be correct.

    My sampler description looks like this:

     

    D3D11_SAMPLER_DESC samplerDesc;
    samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.MipLODBias = 0.0f;
    samplerDesc.MaxAnisotropy = 1;
    samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
    samplerDesc.BorderColor[0] = 0.0f; samplerDesc.BorderColor[1] = 0.0f; samplerDesc.BorderColor[2] = 0.0f; samplerDesc.BorderColor[3] = 0.0f;
    samplerDesc.MinLOD = 0;
    samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

    While this is my PixelShader:

     

    Texture2D objTexture;
    SamplerState samplerState;
    
    
    struct PixelInput
    {
        float4 position : SV_POSITION;
        float2 tex: TEXCOORD0;
    };
    
    
    float4 main(PixelInput input) : SV_TARGET
    {
        float4 textureColor;
    
    
        textureColor = objTexture.Sample(samplerState, input.tex);
    
    
        return textureColor;
    }

    Do you see something obviously wrong? If not, let me know and I will take another look at my conversion code, or post it here as well if I see nothing wrong with it.

    Thanks in advance!

     

    EDIT: Here is a picture of how it looks at the moment:

    ddWOnu4.jpg


  18. Hey there,

    So I am trying to get the number of constant buffers from my compiled shaders through reflection. Sadly, it is not working as expected and I don't know why. Here is how I load the shader code into an ID3DBlob:

    ID3DBlob* vertexBlob;
    HRESULT result = D3DReadFileToBlob(vertexShader.c_str(), &vertexBlob);

    And this is how i get the reflection:

    ComPtr<ID3D11ShaderReflection> reflection;
    HRESULT hr = D3DReflect(shader.code->GetBufferPointer(), shader.code->GetBufferSize(), IID_ID3D11ShaderReflection, reinterpret_cast<void**>(reflection.GetAddressOf()));
    
    if (FAILED(hr)) {
        // SOMETHING WENT WRONG
        return (nullptr);
    }
    
    D3D11_SHADER_DESC shaderDesc;
    reflection->GetDesc(&shaderDesc);

    If I check the shaderDesc in debug, the ConstantBuffers variable is 0. I already made sure that changes made to the shader are recognized: I added another input variable, and the number of input parameters inside the description increased. Finally, here is how I define the constant buffer, but I don't think that this is the problem:

    cbuffer perObject
    {
        matrix worldViewProj;
    };
    
    struct vertexInput
    {
        float3 position : POSITION;
        float3 color : COLOR;
    };
    
    float4 main(vertexInput input) : SV_POSITION
    {
        return float4(input.position, 1.0f);
    }

    Has anyone any idea what might be wrong?
    Thank you in advance.

     

     


  19. Hi there,

     I am trying to implement a little wrapper class for the Win32 window API. It works quite nicely, but one problem remains that I cannot seem to solve, and it is quite a peculiar one.
     This is the code in the class (only the relevant parts):

     

    LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
        // sort through and find what code to run for the message given
        switch (message)
        {
            // this message is read when the window is closed
            case WM_DESTROY:
            {
                // close the application entirely
                PostQuitMessage(0);
                return 0;
            }break;
         }
    
    
        // Handle any messages the switch statement didn't
        return DefWindowProc(hWnd, message, wParam, lParam);
    }
    
    Window::Window(HINSTANCE thisInstance, DirectX::XMINT2 size, wstring title, bool enableFullscreen) :
    appInstance(thisInstance),
    position(0, 0),
    size(size),
    #ifdef UNICODE
    title(title),
    #else
    title(string(title.begin(), title.end())),
    #endif
    fullscreen(false)
    {
        this->buildWindow();
        if (fullscreen)
            setFullscreenWindowed(true);
    }
    
    
    Window::~Window()
    {
        DestroyWindow(windowHandle);
    
    
        UnregisterClass(CLASSNAME, appInstance);
    }
    
    
    void Window::buildWindow()
    {
        WNDCLASSEX wClass; // window class structure
        wClass.hInstance = this->appInstance;
        wClass.lpszClassName = "Classname";
        wClass.lpfnWndProc = WindowProc;
        wClass.style = CS_HREDRAW | CS_VREDRAW;
        wClass.cbSize = sizeof(WNDCLASSEX);
        wClass.hIcon = NULL;
        wClass.hCursor = LoadCursor(NULL, IDC_ARROW);
        wClass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
        wClass.lpszMenuName = NULL;
        wClass.hbrBackground = NULL;
        wClass.cbClsExtra = 0;
        wClass.cbWndExtra = 0;
        if (!RegisterClassEx(&wClass))
        {
              Debug::getInstance().log("Failed to create Window Class", EDEBUG_LEVEL::E_ERROR);
              //throw some errors
              return;
        }
    
        this->windowHandle = CreateWindowEx(
             NULL,
            "Classname",
            "Dies ist ein Titel", //real window title
            WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU | WS_MINIMIZEBOX,
            0, 0,
            800, 600,
            NULL,
            NULL,
            this->appInstance,
            NULL
            );
        if (this->windowHandle == NULL)
        {
             Debug::getInstance().log("CreateWindow failed.", EDEBUG_LEVEL::E_ERROR);
        }
    }
    
    void Window::show()
    {
        ShowWindow(windowHandle, SW_SHOW);
        UpdateWindow(windowHandle);
        SetFocus(windowHandle);
    }
    

    And this is my main loop (in main.cpp)
     
     

    int WINAPI WinMain(HINSTANCE instance, HINSTANCE prevInstance, PSTR sCmdLine, int cmdShow)
    {
        Window mainWindow = Window(instance, { 800, 600 }, std::wstring(L"Hier steht ein netter Titel"), false);
        mainWindow.show();
    
        MSG msg;
    
        while (TRUE) {
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
    
            if (msg.message == WM_QUIT) {
                break;
            }
        }
    
        return msg.wParam;
    }

    Now the problem is: the window does not appear. In fact, it gets destroyed immediately after its creation. The reason for this seems to be that I call the

    buildWindow()

     method from the constructor of the Window class. If I call the method like a member function directly from the main loop, like

    mainWindow.buildWindow()

     right before

    mainWindow.show()

     it works as expected. What is wrong here?

     

    Thanks in advance


  20. I have a problem I am unable to solve. I've tried for several hours now, but I just don't understand what the problem is.

    I am rendering different rectangles with the help of the DirectXTK (PrimitiveBatch) and surrounding them with a BoundingOrientedBox (DirectXCollision.h). For visualization i draw the corners of the BoundingBox to make sure that everything works fine. And it is! Except when i change the y-position of the rectangle. The strange thing is: My BoundingBox corners are below the actual rectangle, but the center of the rectangle and the center of the bounding box are both at the same and right position, only my rendered rectangle is mispositioned.
     

    Here how i set up the BoundingBox:

    this->boundingBox = new BoundingOrientedBox(
        XMFLOAT3(XMVectorGetX(this->position), XMVectorGetY(this->position), XMVectorGetZ(this->position)),
        XMFLOAT3(width / 2.0f, height / 2.0f, depth / 2.0f),
        XMFLOAT4(XMVectorGetX(this->orientationQuaternion), XMVectorGetY(this->orientationQuaternion),
                 XMVectorGetZ(this->orientationQuaternion), XMVectorGetW(this->orientationQuaternion)));

    And this is how I set up the vertices for the rectangle:

    float x = XMVectorGetX(this->position) - getWidth()/2;
    float y = XMVectorGetY(this->position) - getHeight()/2;
    float z = XMVectorGetZ(this->position) - getDepth()/2;
    
    // FRONT
    this->vertexPosition[0] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z, 1), Colors::Blue);
    this->vertexPosition[1] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z, 1), Colors::Blue);
    this->vertexPosition[2] = VertexPositionColor(XMVectorSet(x, y, z, 1), Colors::Blue);
    this->vertexPosition[3] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z, 1), Colors::Blue);
    // RIGHT
    this->vertexPosition[4] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z, 1), Colors::Blue);
    this->vertexPosition[5] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[6] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z, 1), Colors::Blue);
    this->vertexPosition[7] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z + getDepth(), 1), Colors::Blue);
    // TOP
    this->vertexPosition[8] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[9] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[10] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z, 1), Colors::Blue);
    this->vertexPosition[11] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z, 1), Colors::Blue);
    // BACK
    this->vertexPosition[12] = VertexPositionColor(XMVectorSet(x + getWidth(), y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[13] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[14] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[15] = VertexPositionColor(XMVectorSet(x, y, z + getDepth(), 1), Colors::Blue);
    // LEFT
    this->vertexPosition[16] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[17] = VertexPositionColor(XMVectorSet(x, y + getHeight(), z, 1), Colors::Blue);
    this->vertexPosition[18] = VertexPositionColor(XMVectorSet(x, y, z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[19] = VertexPositionColor(XMVectorSet(x, y, z, 1), Colors::Blue);
    // BOTTOM
    this->vertexPosition[20] = VertexPositionColor(XMVectorSet(x, y, z, 1), Colors::Blue);
    this->vertexPosition[21] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z, 1), Colors::Blue);
    this->vertexPosition[22] = VertexPositionColor(XMVectorSet(x, y, z + getDepth(), 1), Colors::Blue);
    this->vertexPosition[23] = VertexPositionColor(XMVectorSet(x + getWidth(), y, z + getDepth(), 1), Colors::Blue);

    This is how I calculate and apply the world transforms.

    For the corner points:

    XMMATRIX scale = XMMatrixScaling(0.01f, 0.01f, 0.01f);
    XMMATRIX trans = XMMatrixTranslation(XMVectorGetX(cornerPoint), XMVectorGetY(cornerPoint), XMVectorGetZ(cornerPoint));
    g_pEffectPositionNormal->SetWorld(scale * trans);
    
    g_pBall->Draw(g_pEffectPositionNormal, g_pInputLayoutPositionNormal);

    The cornerPoint is a vector with the position for one of the corners of the bounding box.
     

    And my rectangle:

    g_pEffectPositionNormal->SetWorld(
        body->getOrientation() *
        XMMatrixTranslation(XMVectorGetX(body->getPos()), XMVectorGetY(body->getPos()), XMVectorGetZ(body->getPos())));
    
    g_pEffectPositionNormal->Apply(pd3dImmediateContext);
    pd3dImmediateContext->IASetInputLayout(g_pInputLayoutPositionNormal);
    
    
    g_pPrimitiveBatchPositionColor->Begin();
    g_pPrimitiveBatchPositionColor->Draw(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP, body->getVertexPosition(), 24);
    g_pPrimitiveBatchPositionColor->End();

    where getPos() returns the center-point position of my rectangle.

    Do you see anything obviously wrong with this?
    Thanks

     


  21. I am currently trying to dive into 2D programming with DirectX, specifically building a tile-based sidescroller. I have worked through some chapters of a book (Advanced 2D Game Development), but have always tried to create something on my own.
    Now I am kind of stuck with the camera work in 2D.
    I realize that there are different matrices to transform objects to world space, then world space to view space, and then to projection (camera) space. Do I really need all of those for a 2D game?

    I have a camera class (from the book) and was trying to use it to roam around (I am currently drawing a grid with D3DXLine, bigger than the screen, and I would like to float over it with the camera), but it's not working:
     
       
     Camera::Camera()
        {
         p_position = D3DXVECTOR3(0.0f, 0.0f, 10.0f);
         p_upDir = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
        
         float aspectRatio = (float)g_engine->getWindowWidth() / (float)g_engine->getWindowHeight();
        
         this->setPerspective(3.14159f / 4, aspectRatio, 1.0f, 2000.0f);
        }
        
        Camera::~Camera() { }
        
        void Camera::setPerspective(float fov, float aspectRatio, float nearPlane, float farPlane)
        {
         this->setFOV(fov);
         this->setAspectRatio(aspectRatio);
         this->setNearPlane(nearPlane);
         this->setFarPlane(farPlane);
        }
        
        void Camera::setPosition(float x, float y, float z)
        {
         this->p_position.x = x;
         this->p_position.y = y;
         this->p_position.z = z;
        }
        
        void Camera::Update()
        {
         D3DXMatrixPerspectiveFovLH(&this->p_matrixProj, this->getFOV(), this->getAspectRatio(), this->getNearPlane(), this->getFarPlane());
         g_engine->getDevice()->SetTransform(D3DTS_PROJECTION, &this->p_matrixProj);
        
         D3DXMatrixLookAtLH(&this->p_matrixView, &this->p_position, &this->p_target, &this->p_upDir);
         g_engine->getDevice()->SetTransform(D3DTS_VIEW, &this->p_matrixView);
        }
    
     
    I am calling Update() in every game_update().

    Now I am wondering why there is no SetTransform(D3DTS_WORLD, ...) corresponding to the camera movement. Is it just missing, so I have to implement it myself, or does it need to be set elsewhere? I am kind of lost as to what the normal setup for such a thing looks like, i.e. how I can move around the screen freely (with my mouse, for example).

    At the moment I am just altering the camera position, but nothing happens.
     
    This is an example of how I set up a sprite's transform before it gets drawn:
     
        void Sprite::transform()
         {
         D3DXMATRIX mat;
         D3DXVECTOR2 scale((float)this->_scaling, (float)this->_scaling);
         D3DXVECTOR2 center((float)(this->_width*this->_scaling)/2, (float)(this->_height*this->_scaling)/2);
         D3DXVECTOR2 trans((float)this->getX(), (float)this->getY());
         D3DXMatrixTransformation2D(&mat, NULL, 0, &scale, &center, (float)this->_rotation, &trans);
         g_engine->getSpriteHandler()->SetTransform(&mat);
         }
     
    If you need any more code, please let me know!
    Thanks a lot!