DvDmanDT

DX11: Should I or shouldn't I port my XNA 3.1 game/engine/framework?


Hi everyone.

For about a year and a half now, I have been developing games using XNA 3.1. A few months ago, I decided to take everything those games/demos had in common and turn it into an engine/framework, which I call LilEngine. It handles all the 'boring' stuff and leaves game logic as well as rendering to the actual game. Works great.

The problem is that I want some stuff from .NET 4, in particular the DLR for scripting (IronPython) and the Task Parallel Library to ease multithreading. As far as I know, developing XNA 3.1 applications and games is not supported in VS 2010.
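To make it concrete, this is roughly the kind of thing I'm after. A minimal sketch: Entity and the script contents are made-up placeholders, but Python.CreateEngine and Parallel.For are the actual IronPython hosting and .NET 4 APIs as I understand them.

[code]
using System.Threading.Tasks;            // .NET 4 Task Parallel Library
using IronPython.Hosting;                // DLR-hosted IronPython
using Microsoft.Scripting.Hosting;

// Placeholder for whatever game-object type the engine uses.
public class Entity { public virtual void Update(float dt) { } }

public class ScriptedUpdater
{
    private readonly ScriptEngine engine = Python.CreateEngine();

    // Run a gameplay script against a game object exposed as 'entity'.
    public void RunScript(string code, Entity entity)
    {
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("entity", entity);
        engine.Execute(code, scope);
    }

    // Update all entities across the available cores.
    public void UpdateAll(Entity[] entities, float dt)
    {
        Parallel.For(0, entities.Length, i => entities[i].Update(dt));
    }
}
[/code]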

So I'm considering an upgrade. I don't explicitly [i]need[/i] those things, but they would be nice to have. Do you think I should upgrade at all? If yes, to XNA 4 or SlimDX? It should be noted that I only care about Windows, not the other XNA platforms.


I expect upgrading to XNA 4 to be fairly painless. It has all the math classes/structs and uses the same content pipeline architecture.

SlimDX, on the other hand, would allow me to use newer features (DX11) and possibly also to integrate unmanaged components such as an existing renderer, since SlimDX exposes the underlying interface pointers.
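For example, something like this should work (a sketch: NativeRenderer.dll and its init export are hypothetical, but ComPointer is SlimDX's real escape hatch to the raw COM interface):

[code]
using System;
using System.Runtime.InteropServices;
using SlimDX.Direct3D11;

static class NativeInterop
{
    // Hypothetical entry point of an existing unmanaged renderer.
    [DllImport("NativeRenderer.dll")]
    private static extern void NativeRenderer_Init(IntPtr d3dDevice);

    public static void HandOff(Device device)
    {
        // Every SlimDX object derives from ComObject, which exposes
        // the underlying interface pointer (here a raw ID3D11Device*).
        IntPtr nativeDevice = device.ComPointer;
        NativeRenderer_Init(nativeDevice);
    }
}
[/code]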

I've been pretty seriously irritated with XNA 4. The profiles mechanism is highly intrusive, because it artificially forces you in software to one of two crippled specifications. On the low end is "Reach" which actually means WinPhone7 and has all sorts of restrictions based on the mobile GPU. On the other side you have HiDef, which means Xbox and refuses to even start on a PC that doesn't have a DirectX 10+ card. You have to pick one of these two, regardless of what features you're actually using.
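For those who haven't hit it yet: you have to commit to a profile before the device is created, so the best you can do on PC is probe the adapter and fall back. A minimal sketch using the XNA 4 API:

[code]
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    private readonly GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // HiDef refuses to start on anything below a Direct3D 10 card,
        // so probe the default adapter and fall back to Reach.
        bool hiDef = GraphicsAdapter.DefaultAdapter
                         .IsProfileSupported(GraphicsProfile.HiDef);
        graphics.GraphicsProfile = hiDef ? GraphicsProfile.HiDef
                                         : GraphicsProfile.Reach;
    }
}
[/code]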

Also, the GraphicsDevice seems entirely new to me (I used XNA 2 and 3.1 for about 1.5 years). I tried to do a simple project in XNA 4 because it would be quicker and easier than doing it with native C++ and DX.

Has anyone tried porting an XNA game to SlimDX? At the moment, I'm using mostly 2D features such as SpriteBatch and SpriteFont.

How much abstraction does XNA really offer? Are there any libraries for SlimDX that offer some of the features XNA provides?

If you are already on XNA, I don't see much point in porting to SlimDX. Porting 3.1 to 4.0 can be a bit of a hassle, but it is worth it IMHO (they made some fairly major changes to how you draw primitives and handle render states). For the game I ported, 4.0 fixed a lot of the sound issues we had with the game's extensive .mp3 music.
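To give a flavour of the render state changes (from memory, so treat it as a sketch): the individual RenderState flags were replaced with grouped, immutable state objects.

[code]
using Microsoft.Xna.Framework.Graphics;

static class StatePorting
{
    static void EnableAlphaBlending(GraphicsDevice device)
    {
        // XNA 3.1 toggled individual flags:
        //   device.RenderState.AlphaBlendEnable = true;
        //   device.RenderState.SourceBlend = Blend.SourceAlpha;
        //   device.RenderState.DestinationBlend = Blend.InverseSourceAlpha;

        // XNA 4.0 sets whole state objects instead:
        device.BlendState = BlendState.AlphaBlend;
        device.DepthStencilState = DepthStencilState.Default;
        device.RasterizerState = RasterizerState.CullCounterClockwise;
    }
}
[/code]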

[quote name='Promit Roy' timestamp='1306507244' post='4816439']
I've been pretty seriously irritated with XNA 4. The profiles mechanism is highly intrusive, because it artificially forces you in software to one of two crippled specifications. On the low end is "Reach" which actually means WinPhone7 and has all sorts of restrictions based on the mobile GPU. On the other side you have HiDef, which means Xbox and refuses to even start on a PC that doesn't have a DirectX 10+ card. You have to pick one of these two, regardless of what features you're actually using.
[/quote]
Yup, that's the one thing that really upsets me about XNA 4.

I'm also annoyed by their reasoning for removing almost all traces of PC-specific functionality, such as replacing the D3DX-backed Texture2D.FromFile(), which was light-years ahead of the crippled Texture2D.FromStream(), and Effect.FromFile(), which freed you from the content framework for shaders and let you build custom shaders at runtime. Their public reasoning was that they didn't like having PC-only features that developers on their other platforms had no use for, since those features were apparently "confusing" to have around. Yet those other platforms have a plethora of features and functionality the PC has no access to. One of the most common complaints I see regarding XNA is the issue of the networking libraries in XNA on Windows.
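To show what that trade looks like in practice (a sketch; the path handling is made up): one D3DX-backed call in 3.1 versus the stream workaround in 4.0, which only understands a few formats.

[code]
using System.IO;
using Microsoft.Xna.Framework.Graphics;

static class TextureLoading
{
    static Texture2D Load(GraphicsDevice device, string path)
    {
        // XNA 3.1, backed by D3DX (DDS, TGA, BMP, mipmaps, ...):
        //   return Texture2D.FromFile(device, path);

        // XNA 4.0: FromStream only handles .png/.jpg/.gif, generates
        // no mipmaps, and returns non-premultiplied alpha.
        using (FileStream stream = File.OpenRead(path))
            return Texture2D.FromStream(device, stream);
    }
}
[/code]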

It's easy to see how Microsoft views the PC version of the framework. I wouldn't be surprised if they eventually stopped supporting the PC at all.

Yeah, the focus on Windows Phone/Xbox in the 4.0 release has kind of irked me too. If you glance at the revamped App Hub or the XNA download page, you'd get the impression XNA isn't even geared toward the PC.

I don't think the removal of Texture2D.FromFile() and Effect.FromFile() would have been nearly as bad if they had at least included the content pipeline in the redistributable. But that doesn't look like it'll happen soon (or ever).

@DvDmanDT

I agree with EJH; I like the organization of 4.0 better than 3.1's. If you're already heavily invested in XNA (and happy with it), it makes sense to stick with it. Is it really worth spending a great deal of time porting to SlimDX for D3D11 capability, on the prospect that maybe you'll want some feature you've so far been fine doing without? Do the time costs outweigh the benefits? If you're primarily using sprite batches and doing 2D rendering, probably not, since you're likely to end up with similar results regardless of which managed D3D framework you're working with.

[url="http://www.nelxon.com/blog/xna-3-1-to-xna-4-0-cheatsheet/"]The XNA 3.1-4.0 Cheat Sheet[/url] might help if you decide to upgrade.

[quote name='Starnick' timestamp='1306542274' post='4816630']I don't think the removal of Texture2D.FromFile() and Effect.FromFile() would have been nearly as bad if they had at least included the content pipeline in the redistributable. But that doesn't look like it'll happen soon (or ever).[/quote]
Or something equivalent. The fact that their official suggestion for recovering the functionality lost in this switch is to invoke the content pipeline is almost insulting. To invoke the content pipeline on an end-user machine, they need the full .NET Framework 4 and the full XNA Framework, and in order to install the latter, they need to install Visual C# Express. It's a mind-numbing amount of baggage just to get back a few lost features, which would have been better served by simply leaving the old functionality in, or at least moving it into a "WindowsHelper" class or something if they wanted to keep the basic API clean across platforms.

[quote name='DvDmanDT' timestamp='1306508036' post='4816442']
Has anyone tried porting an XNA game to SlimDX? At the moment, I'm using mostly 2D features such as SpriteBatch and SpriteFont.

How much abstraction does XNA really offer? Are there any libraries for SlimDX that offer some of the features XNA provides?
[/quote]

This is what I am doing at the moment: porting from XNA 3.1 to SlimDX (D3D11). I use a large proportion of the 3D features, though, so it is likely quite a bit more effort than for a 2D-only game.
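For anyone weighing the same move, this is roughly the device/swap-chain setup you take on once you leave XNA's Game class (a sketch against the SlimDX D3D11 API; the resolution and window handle are placeholders, and error handling is omitted):

[code]
using System;
using SlimDX;
using SlimDX.DXGI;
using SlimDX.Direct3D11;
using Device = SlimDX.Direct3D11.Device;

static class D3D11Bootstrap
{
    public static void Create(IntPtr windowHandle,
                              out Device device, out SwapChain swapChain)
    {
        // Equivalent of what GraphicsDeviceManager does behind the scenes.
        var desc = new SwapChainDescription
        {
            BufferCount = 1,
            ModeDescription = new ModeDescription(1280, 720,
                new Rational(60, 1), Format.R8G8B8A8_UNorm),
            IsWindowed = true,
            OutputHandle = windowHandle,
            SampleDescription = new SampleDescription(1, 0),
            SwapEffect = SwapEffect.Discard,
            Usage = Usage.RenderTargetOutput
        };

        Device.CreateWithSwapChain(DriverType.Hardware,
            DeviceCreationFlags.None, desc, out device, out swapChain);
    }
}
[/code]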

The biggest chunk of code to port is the replacement for the content pipeline, i.e. a similar binary serializer, build system, etc. But I am quite lucky, since I had already written replacements for a bunch of the importers and processors due to limitations in the XNA built-in ones (e.g. the font processor, texture importer, and FBX importer).


My biggest reason is probably the direction XNA seems to be heading in compared to the native API, but also things like DX11 features, 64-bit support (in the content editor), a multithreaded content pipeline, etc.


David
