Everything posted by trojanfoe

  1. trojanfoe

    GIMP vs Adobe

    For pixel art, look at Graphics Gale. It's free. I really dig Marmoset Hexels though.
  2. I have been doing C++/DirectX 11 development for a few months, and this is greatly helped by DirectXTK and the VS templates provided by Chuck et al. However, I was a bit disappointed that Xbox UWP apps using DirectX 11 only get shader model 4, which could cramp my style if I wanted to play with compute shaders. In order to get SM5 on Xbox/UWP you have to go DirectX 12. Having said that, there is plenty to work with, resource-wise, when going with DirectX 11.
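The feature-level/shader-model relationship mentioned above can be sketched as a small helper. The numeric values mirror the D3D_FEATURE_LEVEL enum in d3dcommon.h, and the mapping is a summary for illustration, not an official API table:

```cpp
#include <cstdint>
#include <string>

// Numeric values mirror D3D_FEATURE_LEVEL in d3dcommon.h.
constexpr uint32_t FL_10_0 = 0xa000;
constexpr uint32_t FL_10_1 = 0xa100;
constexpr uint32_t FL_11_0 = 0xb000;

// Highest shader model generally available at a given feature level
// (illustrative summary; SM5, and therefore full compute shader
// support, requires feature level 11_0 or above).
std::string HighestShaderModel(uint32_t featureLevel) {
    if (featureLevel >= FL_11_0) return "5.x";
    if (featureLevel >= FL_10_1) return "4.1";
    if (featureLevel >= FL_10_0) return "4.0";
    return "2.x";  // 9_x levels compile against 4_0_level_9_x profiles
}
```

On a real device you would compare the result of ID3D11Device::GetFeatureLevel() against D3D_FEATURE_LEVEL_11_0 before relying on compute shaders.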
  3. Isn't the idea that UWP apps are "sandboxed", like Mac apps and mobile apps, in order to provide more security? In that case they won't have access to anything external, only content in their own bundle. I have some experience with Mac sandboxed apps but have only had a cursory look at UWP apps, as I am considering them as a viable target platform.
  4. trojanfoe

    Unity SetQualityLevel persistence

    Option 5 sounds best to me, if my vote counts.
  5. I would personally not bother with the single-header file implementation and instead move the implementation into .cpp files. You will end up with cleaner looking code and avoid your issue altogether.
  6. trojanfoe

    Migrating from Win32 (DX) to UWP

    Looking at the UWP documentation, it seems the claim that you must use DirectX 12 in order to get full access to the GPU is not true at all. That's very good news, as it means you can use DirectX 11 and provide Win32 binaries targeting Win7 (still 35% market share?) as well as UWP apps targeting Win10/Xbox One. Can anyone confirm this?
  7. trojanfoe

    Migrating from Win32 (DX) to UWP

    I am also interested in targeting UWP but have never developed for it. You will get a flavour of what's involved if you install the Direct3D Game Templates for VS, create a solution with a "UWP DirectX 11 (or 12) DR C++/WinRT" project and a "Win32" version, and look at the implementation in Main.cpp and DeviceResources.cpp (Game.cpp is 99% the same in both, which is great). From what I can see you won't have to learn much about UWP and WinRT as that's taken care of by the template (WinRT is the Windows Runtime subset of Win32 API calls that's supported on all Win10 platforms). Also check out this video about C++/WinRT from Microsoft, as it means you can write in ISO C++ and not C++/CX, and that's a good thing. Something else I found out recently is that if you want more of the GPU from your UWP game on Xbox One then it needs to be DirectX 12, which could be a show-stopper for some.
  8. I have been trying to create a BlendState for my UI text sprites so that they are both alpha-blended (so you can see them) and invert the pixel they are rendered over (again, so you can see them). In order to get alpha blending you would need:

        SrcBlend = SRC_ALPHA
        DestBlend = INV_SRC_ALPHA

    and in order to have inverted colours you would need something like:

        SrcBlend = INV_DEST_COLOR
        DestBlend = INV_SRC_COLOR

    and you can't have both. So I have come to the conclusion that it's not possible; am I right?
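Fixed-function blending evaluates exactly one equation per render target, out = src * SrcBlend + dst * DestBlend, which is why the two factor pairs above cannot be combined in a single blend state. A minimal sketch of the two equations, per colour channel, in plain C++ (purely illustrative, not D3D11 API code):

```cpp
// Fixed-function blending: out = src * SrcBlend + dst * DestBlend,
// evaluated independently per channel.

// SRC_ALPHA / INV_SRC_ALPHA: classic alpha blending.
float AlphaBlend(float src, float srcAlpha, float dst) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}

// INV_DEST_COLOR / INV_SRC_COLOR: the "invert" combination from the post.
float InvertBlend(float src, float dst) {
    return src * (1.0f - dst) + dst * (1.0f - src);
}
```

Since a blend state holds exactly one (SrcBlend, DestBlend) pair per render target, a single draw can use one equation or the other, not both; folding part of the maths into the pixel shader output (as attempted later in the thread) is the usual workaround.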
  9. Yep, that was my next step, however I thought I'd just ask to see if it was possible.
  10. OK, so that's not working. This is what it looks like (text is bottom left): Here is the blend state creation code:

        D3D11_BLEND_DESC blendDesc = {};
        blendDesc.RenderTarget[0].BlendEnable = TRUE;
        blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
        blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
        blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
        blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_SRC_ALPHA;
        blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;

        ThrowIfFailed(
            d3dDevice->CreateBlendState(&blendDesc, s_invertedBlendState.GetAddressOf()),
            "create Text inverted blend state"
        );

    and here is the pixel shader:

        Texture2D txDiffuse : register(t0);
        SamplerState samp : register(s0);

        float4 main(PS_IN_PosColTex input) : SV_TARGET
        {
            float4 color = txDiffuse.Sample(samp, input.tex);
            return float4(color.rgb * color.a + 1 - color.a, color.a - 1);
        }

    However, on screen the text has black boxes instead of the white in the screenshot. I use a stack of Scene objects, with the UI Scene on top. In the render loop I clear a render target to transparent, each Scene renders its content onto the render target, and finally the render target is written to the window render target (this is for future expansion where post-processing is expected). Would that have any bearing on this?
  11. Whoa! That's amazing. I'll try this tonight and get back to you. Many thanks!
  12. Yes. Basically I want to see the text regardless of the colour of the pixel behind it (this is only for debug text, as I don't think it would look very good in production).
  13. I had a problem with the VS graphics debugger crashing when I didn't take a snapshot. If I take one, it's OK (this is using VS2017 with the latest update). How do people rate Nsight against the VS graphics debugger or RenderDoc?
  14. OK, fixed it. Beginner's mistake: I had forgotten to set the viewport of the window before rendering from the render texture to the window.
  15. Hi there, I am rendering my game to render textures but I am having difficulty figuring out how to scale it to fill the window. The window looks like this: and the render texture looks like this (it's the window resolution downscaled by 4): (I implemented a screenshot function for the render texture, which has proved very useful in getting this working so far.) My vertex shader is the classic "draw a fullscreen triangle without binding a vertex or index buffer", as seen many times on this site:

        PS_IN_PosTex main(uint id : SV_VertexID)
        {
            PS_IN_PosTex output;
            output.tex = float2((id << 1) & 2, id & 2);
            output.pos = float4(output.tex * float2(2, -2) + float2(-1, 1), 0, 1);
            return output;
        }

    and the pixel shader is simply:

        Texture2D txDiffuse : register(t0);
        SamplerState samp : register(s0);

        float4 main(PS_IN_PosTex input) : SV_TARGET
        {
            return txDiffuse.Sample(samp, input.tex);
        }

    Can someone please give me a clue as to how to scale this correctly? Many thanks, Andy
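As the earlier "fixed it" post notes, the missing step was resetting the viewport to the window's dimensions before the fullscreen-triangle pass. A sketch under that assumption (the Viewport struct mirrors D3D11_VIEWPORT's fields; FullWindowViewport is a hypothetical helper, not a D3D11 API):

```cpp
// Mirrors the fields of D3D11_VIEWPORT.
struct Viewport {
    float TopLeftX, TopLeftY, Width, Height, MinDepth, MaxDepth;
};

// Hypothetical helper: a viewport covering the entire window, so the
// fullscreen triangle maps the render texture over the whole back buffer.
Viewport FullWindowViewport(unsigned windowWidth, unsigned windowHeight) {
    return { 0.0f, 0.0f,
             static_cast<float>(windowWidth), static_cast<float>(windowHeight),
             0.0f, 1.0f };
}
```

With real D3D11 this would be followed by context->RSSetViewports(1, &vp) before the draw; if the viewport is still sized to the (4x smaller) render texture, the fullscreen triangle only fills that corner of the window.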
  16. Hi there, I have an issue with the SpriteRenderer I have developed. I seem to be getting different alpha blending from one run to the next. I am using DirectXTK, have followed the advice in the wiki, and have converted all my sprite images to pre-multiplied alpha using texconv.exe:

        texconv -pow2 -pmalpha -m 1 -f BC3_UNORM -dx10 -y -o Tests\Resources\Images Tests\ResourceInput\Images\*.png

    and here is the code that applies the blend states (using the DirectXTK CommonStates class):

        ID3D11BlendState* blendState = premultAlphas ? m_commonStates->AlphaBlend() : m_commonStates->NonPremultiplied();
        m_deviceContext->OMSetBlendState(blendState, nullptr, 0xffffffff);

    Can anyone think of what I am doing wrong?
  17. Hey thanks for the information, CortexDragon.
  18. Hi CortexDragon, you originally had lots of interesting looking information in your reply about how various techniques could be used to do this properly, but you edited it away. Why was that?
  19. Hey thanks for the reply. I seem to be getting pretty good results by sorting back-to-front and then by texture, so I will just dump the depth state stuff altogether.
  20. OK, so it looks like it's the depth states that are interfering with it. If I turn off the depth tests and draw back-to-front by z-order, I get the desired results. So it looks like I have more to learn.
  21. Hi there, this is my first post in what looks to be a very interesting forum. I am using DirectXTK to put together my 2D game engine but would like to use the GPU depth buffer in order to avoid sorting back-to-front on the CPU and I think I also want to use GPU instancing, so can I do that with SpriteBatch or am I looking at implementing my own sprite rendering? Thanks in advance!
  22. Yeah, I already do that - I keep position, origin, scale and rotation, plus a transform and inverse transform, both of the latter with dirty flags. The matrices are calculated in their "get" methods if their dirty flags are set (I think we are on the same page). At the moment I can use SpriteBatch in a "flat" scene graph by simply providing the position, origin, etc. to its Draw() method, but in order to get a hierarchical scene graph working I would need to decompose the world transform back to position, origin, etc., and that's where my maths breaks down. However, decomposing seems like a waste, given the vertex shader loves matrices so much anyway, so I am thinking: 1) For SM5+ hardware, I will pass the transform and other stuff in a structured buffer. 2) For < SM5, I think I am looking at passing the transform in a per-object constant buffer. I am currently trying to learn how to do that.
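For option 2), the C++ side of a per-object constant buffer is mostly a layout question: HLSL cbuffers pack into 16-byte registers, and D3D11 requires constant buffer sizes to be multiples of 16 bytes. A sketch of the struct (PerObjectCB is an illustrative name, not from DirectXTK or any library):

```cpp
#include <cstddef>

// Per-object data uploaded once per draw; matches an HLSL declaration like
//   cbuffer PerObject : register(b0) { float4x4 world; };
// Note HLSL defaults to column-major matrix storage, so either transpose
// the matrix before upload or declare it row_major in the shader.
struct alignas(16) PerObjectCB {
    float world[4][4];
};

static_assert(sizeof(PerObjectCB) % 16 == 0,
              "D3D11 constant buffers must be a multiple of 16 bytes");
```

The buffer itself would be created with D3D11_BIND_CONSTANT_BUFFER and refreshed per object with UpdateSubresource, or with Map using D3D11_MAP_WRITE_DISCARD on a D3D11_USAGE_DYNAMIC buffer.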
  23. Hey thanks - already on that one. The major issue I currently have with SpriteBatch is that I want to pass it a world transform rather than position, origin, scale and rotation, as the transform is stored in my scene graph entities and is accumulated as the scene graph is traversed (i.e. multiplied with parent transform). At the moment I am having to decompose the transform back to position, origin, scale and rotation, however I cannot get the maths right for it to work. Passing transforms to the vertex shader on a per-object basis looks tricky anyway, unless you want to use structured buffers (SM5+ ?), so I think I am looking at a somewhat high-end sprite renderer. Oh well, it's fun learning as I go.
  24. I hope this is the right place to ask questions about DirectXTK which aren't really about graphics; if not, please let me know a better place. Can anyone tell me why I cannot do this:

        DirectX::SimpleMath::Rectangle rectangle = {...};
        RECT rect = rectangle;

    or

        RECT rect = static_cast<RECT>(rectangle);

    or

        const RECT rect(m_textureRect);

    despite Rectangle having the following operator RECT:

        operator RECT()
        {
            RECT rct;
            rct.left = x;
            rct.top = y;
            rct.right = (x + width);
            rct.bottom = (y + height);
            return rct;
        }

    VS2017 tells me:

        error C2440: 'initializing': cannot convert from 'const DirectX::SimpleMath::Rectangle' to 'const RECT'

    Thanks in advance
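One common cause of C2440 in this situation is const-ness: a conversion operator declared without const cannot be called on a const object (though the follow-up post notes that hacking a const in did not help in this particular case). A minimal standalone sketch with simplified stand-in types, not the real Windows RECT or DirectXTK Rectangle definitions:

```cpp
// Simplified stand-ins for RECT and DirectX::SimpleMath::Rectangle.
struct Rect { long left, top, right, bottom; };

struct Rectangle {
    long x, y, width, height;

    // const-qualified, so it can be invoked on a const Rectangle too;
    // without the const, "Rect r = someConstRectangle;" fails to compile
    // with exactly this kind of cannot-convert error.
    operator Rect() const {
        return { x, y, x + width, y + height };
    }
};
```

The same conversion can also be wrapped in a free helper function, which is the workaround the follow-up post ends up using.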
  25. I thought the same and hacked a const in and it made no difference. Doesn't really matter now - I've just added a conversion method to my Util class, however I was a bit stumped as to why it didn't work and wondered if it was a well known issue in this forum. Thanks for your reply.