DX11 Should or shouldn't I port my XNA 3.1 game/engine/framework?

This topic is 2396 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi everyone.

For about a year and a half now, I have been developing games using XNA 3.1. A few months ago, I decided to take everything those games/demos had in common and turn it into an engine/framework, which I call LilEngine. It handles all the 'boring' stuff and leaves game logic as well as rendering to the actual game. Works great.

The problem is that I want some stuff from .NET 4, in particular the DLR for scripting (IronPython) and the Parallel tools to ease multithreading. As far as I know, developing XNA 3.1 applications and games is not supported in VS 2010.
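For reference, a minimal sketch of the two .NET 4 features in question. The script text and variable names here are made up for illustration, and it assumes the IronPython 2.7 assemblies are referenced:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using IronPython.Hosting;            // DLR scripting host
using Microsoft.Scripting.Hosting;

class Net4FeaturesSketch
{
    static void Main()
    {
        // DLR: run a Python snippet against shared state.
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("difficulty", 2);
        engine.Execute("speed = 10 * difficulty", scope);
        int speed = scope.GetVariable<int>("speed");   // 20

        // Parallel tools: spread per-entity updates across cores.
        var entities = new List<string> { "a", "b", "c" };
        Parallel.For(0, entities.Count, i =>
        {
            // per-entity update work would go here
        });
    }
}
```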

So I'm considering an upgrade. I don't explicitly [i]need[/i] those things, but they would be nice to have. Do you think I even should upgrade? If yes, to XNA 4 or SlimDX? It should be noted that I only care for Windows and not the other XNA platforms.


I expect upgrading to XNA 4 to be fairly pain-free. It has all the same math classes/structs and uses the same content pipeline architecture.

SlimDX, on the other hand, would allow me to use newer features (DX11) and possibly also to integrate unmanaged components such as an existing renderer, since SlimDX exposes the interface pointers.

I've been pretty seriously irritated with XNA 4. The profiles mechanism is highly intrusive, because it artificially forces you in software to one of two crippled specifications. On the low end is "Reach" which actually means WinPhone7 and has all sorts of restrictions based on the mobile GPU. On the other side you have HiDef, which means Xbox and refuses to even start on a PC that doesn't have a DirectX 10+ card. You have to pick one of these two, regardless of what features you're actually using.
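For anyone unfamiliar with the mechanism: the profile is chosen once, up front, on the GraphicsDeviceManager, regardless of which features the game actually uses. A minimal sketch (class name hypothetical):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // Reach: caps the device at roughly Windows Phone 7 / SM 2.0-class
        // hardware limits, even on a high-end PC GPU.
        // HiDef: requires a DirectX 10-class card; on lesser hardware the
        // game refuses to start.
        graphics.GraphicsProfile = GraphicsProfile.HiDef;
    }
}
```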

Also, the GraphicsDevice seems entirely new to me (I used XNA 2 and 3.1 for about 1.5 years). I tried to do a simple project in XNA 4 because it would be quicker and easier than doing it with native C++ and DX.

Has anyone tried porting an XNA game to SlimDX? At the moment, I'm using mostly 2D features such as SpriteBatch and SpriteFont.

How much abstraction does XNA really offer? Are there any libraries for SlimDX which offer some of the features XNA provides?

If you are already in XNA I don't see much point in porting to SlimDX. Porting 3.1 to 4.0 can be a bit of a hassle but it is worth it imho (they made some fairly major changes to how you draw primitives and handle rendering states). For the game I ported, 4.0 fixed a lot of sound issues we had with extensive .mp3 music in the game.
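To illustrate the render-state changes mentioned above: XNA 4.0 replaced the mutable RenderState property bag with immutable state objects that are assigned as a whole. A sketch, not a complete game:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class PortedGame : Game
{
    protected override void Draw(GameTime gameTime)
    {
        // XNA 3.1 style (no longer compiles in 4.0):
        //   GraphicsDevice.RenderState.CullMode = CullMode.None;
        //   GraphicsDevice.RenderState.AlphaBlendEnable = true;

        // XNA 4.0: assign whole immutable state objects instead.
        GraphicsDevice.RasterizerState = RasterizerState.CullNone;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;

        base.Draw(gameTime);
    }
}
```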

[quote name='Promit Roy' timestamp='1306507244' post='4816439']
I've been pretty seriously irritated with XNA 4. The profiles mechanism is highly intrusive, because it artificially forces you in software to one of two crippled specifications. On the low end is "Reach" which actually means WinPhone7 and has all sorts of restrictions based on the mobile GPU. On the other side you have HiDef, which means Xbox and refuses to even start on a PC that doesn't have a DirectX 10+ card. You have to pick one of these two, regardless of what features you're actually using.
[/quote]
Yup, that's the one thing that really upsets me about XNA 4.

I'm also annoyed by their reasoning for removing almost every trace of PC functionality: the D3DX-backed Texture2D.FromFile(), which was light-years ahead of the crippled Texture2D.FromStream() that replaced it, and Effect.FromFile(), which freed you from the content framework for shaders and let you write custom runtime shaders. Their public reasoning was that they didn't like having PC-only features that developers on their other platforms had no use for, features which were apparently "confusing" to have around. Yet those other platforms have a plethora of features and functionality the PC has no access to. One of the most common complaints I see regarding XNA is the issue of the networking libraries in XNA on Windows.
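To illustrate the difference being complained about, here is roughly what texture loading looks like on either side of the change (a sketch; the file path is made up). One known gotcha: unlike the content pipeline, FromStream does not premultiply alpha, so transparent PNGs can render with fringes unless you fix the data up yourself.

```csharp
using System.IO;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class TextureLoadingGame : Game
{
    Texture2D texture;

    protected override void LoadContent()
    {
        // XNA 3.1 (removed in 4.0):
        //   texture = Texture2D.FromFile(GraphicsDevice, "image.png");

        // XNA 4.0: only the stream overload remains, and it does not
        // premultiply alpha the way the content pipeline does.
        using (FileStream stream = File.OpenRead("image.png"))
        {
            texture = Texture2D.FromStream(GraphicsDevice, stream);
        }
    }
}
```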

It's easy to see where Microsoft views the PC version of the framework. I wouldn't be surprised if they eventually stopped supporting the PC at all.

Yeah, the focus on Windows Phone/Xbox with 4.0's release has kind of irked me. If you glance at the revamped App Hub or the XNA download page, you'd get the impression XNA isn't even geared for the PC.

I don't think the removal of Texture.FromFile() and Effect.FromFile() would have been nearly as bad if they had at least included the content pipeline in the redistributable. But that doesn't look like it'll happen soon (or ever).

@DvDmanDT

I agree with EJH; I like the organization of 4.0 better than 3.1. If you're already heavily invested in XNA (and happy with it), it makes sense to stick with it. Is it really worth spending a great deal of time porting to SlimDX for D3D11 capability, on the off chance you'll want some feature you've so far been fine without? Do the time costs outweigh the benefits? If you're primarily using sprite batches and doing 2D rendering, probably not, since you'll end up with similar results regardless of which managed D3D framework you use.

[url="http://www.nelxon.com/blog/xna-3-1-to-xna-4-0-cheatsheet/"]The XNA 3.1-4.0 Cheat Sheet[/url] might help if you decide to upgrade.

[quote name='Starnick' timestamp='1306542274' post='4816630']I don't think the removal of Texture.FromFile() and Effect.FromFile() would have been nearly as bad if they had at least included the content pipeline in the redistributable. But that doesn't look like it'll happen soon (or ever).[/quote]
Or something equivalent. The fact that their official suggestion for functionality lost by this switch is to invoke the content pipeline is almost insulting. In order to invoke the content pipeline on an end user's machine, they need the full .NET Framework 4 and the full XNA Framework, and in order to install that, they need to install Visual C# Express. It's a mind-numbing amount of baggage just to regain a few lost features, which would have been better served by simply leaving the old functionality in, or at least moving it into a "WindowsHelper" class or something, if they wanted to keep the basic API clean between platforms.

[quote name='DvDmanDT' timestamp='1306508036' post='4816442']
Has anyone tried porting an XNA game to SlimDX? At the moment, I'm using mostly 2D features such as SpriteBatch and SpriteFont.

How much abstraction does XNA really offer? Are there any libraries for SlimDX which offer some of the features XNA provides?
[/quote]

This is what I am doing at the moment: porting from XNA 3.1 to SlimDX (D3D11). I use a large proportion of the 3D features, though, so it is likely quite a bit more effort than for a 2D-only game.

The biggest chunk of code to port is the replacement for the content pipeline, i.e. a similar binary serializer, build system, etc. But I am quite lucky, since I had already written replacements for a bunch of the importers and processors due to limitations in the XNA built-in ones (e.g. the font processor, texture importer, and FBX importer).
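As an illustration only (this is not the poster's actual code), the binary-serializer half of such a pipeline replacement boils down to paired Write/Read methods over BinaryWriter/BinaryReader; the MeshHeader type here is hypothetical, and a real replacement would be far more involved (type readers, shared resources, versioning, etc.):

```csharp
using System.IO;

// Hypothetical minimal asset record for a custom content format.
public struct MeshHeader
{
    public int VertexCount;
    public int IndexCount;

    // Serialize the fields in a fixed order.
    public void Write(BinaryWriter writer)
    {
        writer.Write(VertexCount);
        writer.Write(IndexCount);
    }

    // Deserialize in the exact same order.
    public static MeshHeader Read(BinaryReader reader)
    {
        return new MeshHeader
        {
            VertexCount = reader.ReadInt32(),
            IndexCount = reader.ReadInt32()
        };
    }
}
```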


My biggest reason is probably the direction XNA seems to be heading in compared to the native API, but also things like DX11 features, 64-bit support (in the content editor), and a multithreaded content pipeline.


David


