

Platform-agnostic renderer


51 replies to this topic

#21 swiftcoder   Senior Moderators   -  Reputation: 10360


Posted 07 September 2012 - 07:03 AM

True, but I think the design idea still stands. If you have a parent reference in every class, you can always navigate up the hierarchy if your platform requires something to carry out its functionality.

E.g. your ResourceLoader class might need to obtain a pointer to the X11 Display struct in order to create or load an image (e.g. via XCreatePixmap()).

[see: Single Responsibility Principle, Separation of Concerns]

If your ResourceLoader can't fully encapsulate the process of loading resources, then you need to rethink your design. Of course it may need a reference to the underlying device, but it should never have to go searching for it - you should pass a reference to the device into the constructor.
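A minimal C# sketch of that idea (IGraphicsDevice, ResourceLoader, and their members are hypothetical names, not from any particular engine): the loader receives its device dependency through its constructor rather than searching a parent hierarchy for it.

```csharp
using System;

// Hypothetical sketch: the device dependency is injected up front, so the
// loader never has to walk a parent chain to find it.
public interface IGraphicsDevice
{
    // A platform-specific handle (e.g. an X11 Display*) hides behind this.
    IntPtr NativeHandle { get; }
}

public sealed class ResourceLoader
{
    private readonly IGraphicsDevice device;

    public ResourceLoader(IGraphicsDevice device)
    {
        // The dependency is explicit and mandatory.
        this.device = device ?? throw new ArgumentNullException(nameof(device));
    }

    public string Describe(string path)
    {
        // A real loader would call into the device here (e.g. XCreatePixmap).
        return $"loading {path} on device 0x{device.NativeHandle.ToInt64():x}";
    }
}
```

The point is only the shape of the dependency: the constructor makes the requirement visible at the one place the object is created, instead of scattering hierarchy lookups through the loading code.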

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]



#22 clb   Members   -  Reputation: 1786


Posted 07 September 2012 - 07:20 AM

+1 for ignoring any design decision Ogre3D did.

As for the discussion "should a mesh draw itself, i.e. should I have a function Mesh::Draw()?" the answer is strongly no. In a typical 3D scene, you never do anything just once. You never just draw a single mesh, you never just update the physics of a single rigid body, and so on. The most important design decision for a performant 3D engine is to allow batch, batch, batch! This means that you will need a centralized renderer that can control the render process, query the renderable objects, sort by state changes, and submit render calls as optimally as possible. The information required to do that is scene-wide, and inside a single Mesh::Draw() function you can't (or shouldn't) make these state optimization decisions. It's the renderer's role to know how to draw a mesh, and it's also the renderer's role to know how to do that as efficiently as possible with respect to all other content in the scene.
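A minimal sketch of that centralized approach in C# (all type names here are illustrative, not from any particular engine): the renderer collects every renderable submitted for the frame, then sorts by a state key (shader first, then material, then depth) before issuing any draw calls.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative types: a renderable carries the state the renderer sorts by.
public sealed class Renderable
{
    public int ShaderId;
    public int MaterialId;
    public float Depth;
}

public sealed class SceneRenderer
{
    private readonly List<Renderable> frameQueue = new List<Renderable>();

    public void Submit(Renderable r) => frameQueue.Add(r);

    public IEnumerable<Renderable> SortedForDraw()
    {
        // Sort by the most expensive state change first. This is the
        // scene-wide decision a Mesh.Draw() could never make on its own.
        return frameQueue.OrderBy(r => r.ShaderId)
                         .ThenBy(r => r.MaterialId)
                         .ThenBy(r => r.Depth);
    }
}
```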
Me+PC=clb.demon.fi | C++ Math and Geometry library: MathGeoLib, test it live! | C++ Game Networking: kNet | 2D Bin Packing: RectangleBinPack | Use gcc/clang/emcc from VS: vs-tool | Resume+Portfolio | gfxapi, test it live!

#23 pmvstrm   Members   -  Reputation: 122


Posted 07 September 2012 - 10:45 AM

Hmm,

I think the Unreal Engine approach is decent.
The user can specify at runtime, in a config file, which renderer-driver *.DLL (Win32/Win64) or *.so (Linux/Unix/Mac) should be used.
You interface with an abstract render-driver proxy class and let the driver do its thing. Simple but effective and elegant (just my 2 cents).
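A rough C# sketch of that proxy idea, under the assumption of a factory keyed by the config value (all names here are illustrative; a real Unreal-style setup would load the driver from a DLL/SO instead of registering it in code):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: the game talks only to IRenderDriver, and a factory
// picks the concrete driver named in a config file.
public interface IRenderDriver
{
    string Name { get; }
    void DrawFrame();
}

public sealed class NullDriver : IRenderDriver
{
    public string Name => "null";
    public void DrawFrame() { /* no-op stand-in for a real D3D/GL driver */ }
}

public static class DriverFactory
{
    private static readonly Dictionary<string, Func<IRenderDriver>> drivers =
        new Dictionary<string, Func<IRenderDriver>>(StringComparer.OrdinalIgnoreCase)
        {
            ["null"] = () => new NullDriver(),
            // A real engine would register entries here that load renderer
            // DLLs/SOs and hand back a proxy over them.
        };

    public static IRenderDriver Create(string configName) => drivers[configName]();
}
```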

#24 ATC   Members   -  Reputation: 551


Posted 07 September 2012 - 06:49 PM

Here's how I'm currently doing things, thanks to some of the brilliant suggestions I've received here from our community's most senior members... :)

Pseudo-code:

[source lang="csharp"]
public class RenderOp
{
    public string[] CmdString { get; set; }
    public MeshData Mesh { get; set; }
    public Material Material { get; set; }
}

public class RenderOpBatch
{
    // Pseudo-implementation not shown to save space
}

public abstract class Renderer
{
    RenderOpBatch currentBatch;
    Queue<RenderOpBatch> RenderBatches = new Queue<RenderOpBatch>();

    public virtual void StartRenderBatch()
    {
        currentBatch = new RenderOpBatch();
        RenderBatches.Enqueue(currentBatch);
    }

    public virtual void QueueOp(RenderOp op)
    {
        currentBatch.Add(op);
    }

    public abstract void FlushAllBatches();

    // blah, blah, blah... you get the idea :)
}
[/source]

That's not truly how I've implemented it, but just a pseudo-code expression of the idea. For instance, I don't really use Queue<T>, I use a custom collection type that allows me to choose LIFO, FIFO or custom sorting of batches and all sorts of stuff. Any comments/criticisms/suggestions concerning this concept?
_______________________________________________________________________________
CEO & Lead Developer at ATCWARE™
"Project X-1"; a 100% managed, platform-agnostic game & simulation engine


Please visit our new forums and help us test them and break the ice!
___________________________________________________________________________________

#25 ATC   Members   -  Reputation: 551


Posted 07 September 2012 - 08:25 PM

Ok... here's another thing I'm trying to work out: vertex types and input layouts (the OpenGL counterpart of a D3D input layout is the set of vertex attribute bindings set up with glVertexAttribPointer, usually captured in a vertex array object, not a "usage hint")...

I need to design a sub-system through which new vertex structures can be implemented beyond the common ones the engine will already offer, and the ones I do offer need to adhere to a clean, consistent format. It needs to be written so that the same data can be used to create a D3D "input layout" or the equivalent OpenGL attribute bindings on-the-fly as the vertex data is pushed to the renderer. It's hard for me to decide on things sometimes because I'm not sure which parts/features of D3D and OpenGL are so seldom used that they can just be cut out, and which ones people are going to be pissed about if I don't let them have... Anyway, some of the ideas I have are:

The first thing to consider is how we designate what fields of a vertex are for (and how big they are). We could, like DirectX, just use a string (e.g., "POSITION"). Or we could use some type of enumeration, like this:

[source lang="csharp"]
public enum VertexElements
{
    NULL     = 0x0000,
    POSITION = 0x0001,
    COLOR    = 0x0002,
    TEXCOORD = 0x0004,
    NORMAL   = 0x0008,
    BINORMAL = 0x0010,
    TANGENT  = 0x0020
}
[/source]

What might the pros/cons of each method be? And what would be a good way to represent the size of vertex fields without using a platform-specific enumeration like SlimDX's "Format" enum? Or is there yet another unthought-of way of doing this that would be superior to both?

Next, what would be the best way to implement a cohesive vertex typing system that can be broken-down and understood by virtually any type of renderer? I have some thoughts already, and I'll show you what ideas I'm toying with:

1) A common interface all vertex structures inherit from. For example:

[source lang="csharp"]
// Note: [StructLayout(LayoutKind.Sequential)] cannot be applied to an
// interface; it belongs on the concrete vertex structs that implement this.
public interface IVertex
{
    int SizeInBytes { get; }
    VertexElements[] Elements { get; }
    byte[] ToBytes();
}
[/source]

All vertex types would implement that interface if such an approach were used, and they would have to return a static value which is not part of the memory of an actual vertex instance on the stack (as that would throw the layout off).

2) Create a new struct/class (e.g., "VertexDescription") that houses a nice description of a vertex-type and tells you what's in its guts. The essence of it might look like this (incomplete example):

[source lang="csharp"]
public class VertexLayout
{
    int sizeInBytes;
    VertexElements[] elements;
}
[/source]

In addition to this structure, perhaps it might be an idea to implement a new enumeration type which replaces platform-specific enumerations like SlimDX's "Format" but offers the same data in a new way; potentially even giving the size in bytes of an element as its own numerical value!?

[source lang="csharp"]
public enum ElementFormat
{
    byteX1  = 1,
    byteX2  = 2,
    byteX3  = 3,
    byteX4  = 4,
    shortX1 = 2, // same underlying value as byteX2; see the edit below
    shortX2 = 4,
    // ...and so on...
}
[/source]

Anyway, I hope the wisdom of the community can once again offer me some excellent ideas!

EDIT: The idea of assigning the enum values of "ElementFormat" the size of the element in bytes actually won't work, because C# enum members are just named numeric values: byteX2 and shortX1 would share the underlying value 2 and be indistinguishable. My bad, didn't think about that. Please disregard that erroneous idea.
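One way around that, offered as a sketch rather than anything from the thread: keep the format as an ordinary enum and derive the byte size from it in a descriptor struct, so two formats that happen to share a size remain distinct enum values. All names below are illustrative:

```csharp
// Illustrative sketch: the size lives in a lookup keyed by the format,
// not in the enum's underlying value, so Byte4 and Short2 (both 4 bytes)
// remain distinguishable members.
public enum ElementSemantic { Position, Color, TexCoord, Normal }

public enum ElementFormat { Byte4, Short2, Float2, Float3 }

public readonly struct VertexElement
{
    public readonly ElementSemantic Semantic;
    public readonly ElementFormat Format;

    public VertexElement(ElementSemantic semantic, ElementFormat format)
    {
        Semantic = semantic;
        Format = format;
    }

    // Derived, not stored: the mapping is total over the enum.
    public int SizeInBytes => Format switch
    {
        ElementFormat.Byte4  => 4,
        ElementFormat.Short2 => 4,
        ElementFormat.Float2 => 8,
        ElementFormat.Float3 => 12,
        _ => 0
    };
}
```

A platform-specific renderer can then translate each (semantic, format) pair into a D3D input-element description or an OpenGL attribute pointer without the engine-level types ever naming a platform enum.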

Edited by ATC, 07 September 2012 - 10:31 PM.


#26 pmvstrm   Members   -  Reputation: 122


Posted 08 September 2012 - 07:43 AM

Hi ATC,

Here's how I'm currently doing things, thanks to some of the brilliant suggestions I've received here from our community's most senior members... :)

Yeah, there are some amazing people out there :)


Pseudo-code:

[source lang="csharp"]
public class RenderOp
{
    public string[] CmdString { get; set; }
    public MeshData Mesh { get; set; }
    public Material Material { get; set; }
}

public class RenderOpBatch
{
    // Pseudo-implementation not shown to save space
}

public abstract class Renderer
{
    RenderOpBatch currentBatch;
    Queue<RenderOpBatch> RenderBatches = new Queue<RenderOpBatch>();

    public virtual void StartRenderBatch()
    {
        currentBatch = new RenderOpBatch();
        RenderBatches.Enqueue(currentBatch);
    }

    public virtual void QueueOp(RenderOp op)
    {
        currentBatch.Add(op);
    }

    public abstract void FlushAllBatches();

    // blah, blah, blah... you get the idea :)
}
[/source]


I think this is pretty straightforward, but I think you could replace your render queue with task-based scheduling (I recommend the free open-source version of Intel Threading Building Blocks). Your render queue could then be spread across multiple cores. TBB gives you namespaces and OOP classes instead of native POSIX or Win32 threads, so it is much easier, and it is cross-platform and works with the Intel, Visual C/C++, and GNU GCC compilers.

Peter

#27 ATC   Members   -  Reputation: 551


Posted 08 September 2012 - 10:28 AM

I think this is pretty straightforward, but I think you could replace your render queue with task-based scheduling (I recommend the free open-source version of Intel Threading Building Blocks). Your render queue could then be spread across multiple cores. TBB gives you namespaces and OOP classes instead of native POSIX or Win32 threads, so it is much easier, and it is cross-platform and works with the Intel, Visual C/C++, and GNU GCC compilers.

Peter


To be implemented in the D3D11-specific renderer implementation and its OpenGL counterparts. :-)

The engine already contains a robust and "battle-proven" sub-library I call the "HAI" (Hardware Abstraction Interface). It can pull all of the important information about a machine's graphics hardware from DXGI or OpenGL, and it also finds (among other things) the number of CPU cores, total physical memory (RAM), HDD space, logical drives, etc. My D3D11 renderer implementation will of course use that to allocate rendering tasks to dynamically-generated threads, the number of which is chosen to suit the CPU speed and core count.

#28 pmvstrm   Members   -  Reputation: 122


Posted 08 September 2012 - 11:30 AM

To be implemented in the D3D11-specific renderer implementation and its OpenGL counterparts. :-)

The engine already contains a robust and "battle-proven" sub-library I call the "HAI" (Hardware Abstraction Interface). It can pull all of the important information about a machine's graphics hardware from DXGI or OpenGL, and it also finds (among other things) the number of CPU cores, total physical memory (RAM), HDD space, logical drives, etc. My D3D11 renderer implementation will of course use that to allocate rendering tasks to dynamically-generated threads, the number of which is chosen to suit the CPU speed and core count.


Wow, I imagine that was a lot of work and many ifdefs. (-;

I think the right solution is always whatever fits your needs, and if you have self-made code that you understand, you get even more productive instead of learning lots of different tools and version changes.

Peter

#29 ATC   Members   -  Reputation: 551


Posted 08 September 2012 - 12:07 PM

Wow, I imagine that was a lot of work and many ifdefs. (-;

I think the right solution is always whatever fits your needs, and if you have self-made code that you understand, you get even more productive instead of learning lots of different tools and version changes.

Peter


It might surprise you, then, to hear there are actually very few ifdefs in the entire code-base. Furthermore, the engine can switch between DirectX versions, OpenGL versions and entire rendering APIs (e.g., DirectX to OpenGL) while it runs. :-)

Most of the #ifs and #elses have only to do with Debug vs Release builds and handling exceptions; often in resource disposal code (no sense in throwing an exception in a release build if the application can continue or is shutting down, for example).

Edited by ATC, 08 September 2012 - 12:11 PM.


#30 ATC   Members   -  Reputation: 551


Posted 08 September 2012 - 11:19 PM

Well, I think I got the vertex type paradox solved in a way that works great and seems perfect. Things are looking good thanks to you guys' input and brilliant advice.

#31 ATC   Members   -  Reputation: 551


Posted 08 September 2012 - 11:48 PM

Anyone know where I can find a full list of valid SlimDX/D3D10 InputLayout strings (e.g., "POSITION", "COLOR", "NORMAL", etc)?

#32 krippy2k8   Members   -  Reputation: 646


Posted 09 September 2012 - 02:24 AM

Anyone know where I can find a full list of valid SlimDX/D3D10 InputLayout strings (e.g., "POSITION", "COLOR", "NORMAL", etc)?


On MSDN

#33 ATC   Members   -  Reputation: 551


Posted 09 September 2012 - 02:26 AM

Thanks krippy! I'd just come across that page and realized it had what I wanted! :)

#34 phantom   Moderators   -  Reputation: 7553


Posted 09 September 2012 - 03:44 AM

At this point I'd like to say: don't bother with D3D10 support; use D3D11. Even if your card doesn't support the full D3D11 feature set, you can use 'feature levels' to create a D3D10-class device.

The D3D11 API replaces and improves upon the D3D10 one and renders the latter redundant :)

Also, when it comes to semantics: unlike D3D9, D3D11 allows you to define your own; they are basically free-form strings now (aside from the system-value semantics).

#35 ATC   Members   -  Reputation: 551


Posted 11 September 2012 - 10:30 PM

Thanks again to everyone for the incredibly helpful insights, advice, examples and information. Things have progressed very far very fast, and I hope to be rendering this engine's first scenes very soon!

I'm most likely going to call for some of the community's brilliant guidance on a very robust shader/material and lighting system. But I'm going to spend some time working out the design the best I can and come prepared with good questions and examples.

#36 ATC   Members   -  Reputation: 551


Posted 12 September 2012 - 12:41 AM

Alright, I could use some inspiration about the best way to design my shading/lighting/material system in a clean, platform agnostic way.

What sort of model/hierarchy might be the best approach for the engine, keeping in mind extensibility and allowing users of the engine to create custom shaders or modify existing ones? What key differences in OpenGL and D3D shading might cause problems, and how might be the best way to get around them?

One thing I'm finding a bit tricky is how I'm going to adhere to the data-driven design concept while still allowing user code to specify how a shader, say for DirectX, chooses its technique and desired passes, applies render states, blends passes, sets variables and transformations, works dynamically with varying numbers and types of lights, etc. And how might we design our system to keep all the platform-specific stuff separate from the base implementations?

This is an area of graphics programming I'm not terribly skilled with, and I'm sure someone with more skill and experience can help me get the right ideas and write some nice code befitting the level of quality this project requires.

#37 ATC   Members   -  Reputation: 551


Posted 12 September 2012 - 11:13 AM

I really need some sort of platform-independent and elegantly-designed system which allows for "shader setups"... differing materials, lighting, etc., and setting variables on a shader. So far I've tried a few things but have hit dead-end designs. For instance, I haven't really figured out a good way that a "RenderOp", when running on the D3D10 or D3D11 renderer, can specify one or more techniques to use and which passes within it to use/not use; and furthermore how to set up render states and properties on the device as required...

Might it be a decent idea to implement the "Material" type like this:

[source lang="csharp"]
public class Material
{
    public List<ShaderVariable> Variables { get; set; }
}
[/source]

...and then in the actual "Renderer", we specify a shader "globally" for a render batch and use the Material to set variables (textures, lighting parameters, etc.) on it? It would seem to follow that there could be some sort of "SetGlobalTransforms" method that operates on the active shader instance and, for example, sets the "World" matrix of an entire model, eliminating the need to set it over and over on the individual meshes that make up the model?

Edited by ATC, 12 September 2012 - 11:33 AM.


#38 swiftcoder   Senior Moderators   -  Reputation: 10360


Posted 12 September 2012 - 11:32 AM

For instance, I haven't really figured out a good way that a "RenderOp", when running on the D3D10 or D3D11 renderer, can specify one or more techniques to use and which passes within it to use/not use; and furthermore to setup render states and properties on the device as required...

I tend to view those as much earlier concerns. By the time you are specifying a RenderOp, you should already have a renderer-specific shader that you know will execute in this environment.

Also keep in mind that a 'RenderOp' is conceptually a single render command - it shouldn't include multiple passes or techniques.

For this reason, I am not a big fan of the 'effects framework' often used alongside D3D. You are better off rolling your own technique/pass system in the front end, one that passes the minimal set of shaders/data to the renderer for each RenderOp and generates a separate RenderOp for each pass/technique.
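That front-end idea can be sketched as follows (all type names are illustrative, not an actual engine API): a technique is just an ordered list of passes, and an expansion step emits one RenderOp per pass, so the renderer itself never sees techniques at all.

```csharp
using System.Collections.Generic;

// Illustrative sketch of a front-end technique/pass system.
public sealed class Pass
{
    public int ShaderId;      // renderer-specific shader, already resolved
    public int StateBlockId;  // blend/depth/raster state for this pass
}

public sealed class Technique
{
    public List<Pass> Passes = new List<Pass>();
}

public sealed class RenderOp
{
    public int ShaderId;
    public int StateBlockId;
    public int MeshId;
}

public static class TechniqueExpander
{
    // One RenderOp per pass: the renderer receives a flat stream of
    // single commands it is free to sort and batch.
    public static List<RenderOp> Expand(Technique technique, int meshId)
    {
        var ops = new List<RenderOp>();
        foreach (var pass in technique.Passes)
            ops.Add(new RenderOp
            {
                ShaderId = pass.ShaderId,
                StateBlockId = pass.StateBlockId,
                MeshId = meshId
            });
        return ops;
    }
}
```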



#39 ATC   Members   -  Reputation: 551


Posted 12 September 2012 - 11:42 AM

I tend to view those as much earlier concerns. By the time you are specifying a RenderOp, you should already have a renderer-specific shader that you know will execute in this environment.

Also keep in mind that a 'RenderOp' is conceptually a single render command - it shouldn't include multiple passes or techniques.

For this reason, I am not a big fan of the 'effects framework' often used alongside D3D. You are better off rolling your own technique/pass system in the front end, one that passes the minimal set of shaders/data to the renderer for each RenderOp and generates a separate RenderOp for each pass/technique.


I see. I feared I had screwed up a bit by not implementing this earlier. Definitely going to require some going back and refactoring/rewriting a few things. Thankfully, I have a pretty decent codebase that won't be terribly hard to edit and fix up.

Is there any more you can tell me about this, by any chance? A bare-bones, pseudo-code example of how a shading engine's key components and hierarchy might fit together, and how to use the "RenderOp" type with it properly? This is the first data-driven rendering system I've implemented, so it's taking some getting used to and changing the way I think about things.

Also keep in mind that a 'RenderOp' is conceptually a single render command - it shouldn't include multiple passes or techniques.


^^^ There is a lot of wisdom and insight in that remark which is just now hitting me. Treating it that way could potentially make the design a lot simpler and more efficient. But I'm wondering if it might be a bit "wasteful" to make numerous draw calls to render out techniques with multiple passes when I could just iterate through and apply them in one batch... If you could elaborate on this in particular it would be quite helpful!

What also troubles me is how different DirectX vs OpenGL shaders are... I have no real experience with shaders in OpenGL, so I'm scared to let too much DirectX influence rub off on the design and screw myself when I have to implement the OpenGL-specific side of things...

Edited by ATC, 12 September 2012 - 11:47 AM.


#40 swiftcoder   Senior Moderators   -  Reputation: 10360


Posted 12 September 2012 - 12:29 PM

Treating it that way could potentially make the design a lot simpler and more efficient. But I'm wondering if it might be a bit "wasteful" to make numerous draw calls to render out techniques with multiple passes when I could just iterate through and apply them in one batch... If you could elaborate on this in particular it would be quite helpful!

The effects framework just does the same loop for you, under the hood, rendering one pass/technique at a time. So there isn't any performance cost to doing the loop yourself.

And once you start to sort your RenderOps to minimise on state changes, you may be able to increase performance significantly. The effects framework doesn't have enough information available to interleave rendering operations from different effects.
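A small illustration of that cost model (a hypothetical helper, not a real API): count how many shader binds a given submission order incurs. Sorting the same ops by shader reduces the bind count, which is exactly the interleaving an effects framework cannot do across different effects.

```csharp
using System.Collections.Generic;

public static class StateChangeCounter
{
    // Counts how many times the bound shader would change while walking
    // the ops in order: each change is a state-switch cost on the device.
    public static int ShaderBinds(IEnumerable<int> shaderIds)
    {
        int binds = 0, current = -1;
        foreach (int id in shaderIds)
        {
            if (id != current)
            {
                binds++;
                current = id;
            }
        }
        return binds;
    }
}
```

For example, the unsorted order [1, 2, 1, 2] incurs 4 binds, while the sorted order [1, 1, 2, 2] incurs only 2.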

What also troubles me is how different DirectX vs OpenGL shaders are... I have no real experience with shaders in OpenGL, so I'm scared to let too much DirectX influence rub off on the design and screw myself when I have to implement the OpenGL-specific side of things...

If you ignore the effects framework (which OpenGL doesn't have), then there really aren't that many differences.

OpenCL vs Compute shaders is a different topic, but that may or may not affect you currently.





