phantom

Member Since 15 Dec 2001

#4988051 starting game engine

Posted by phantom on 08 October 2012 - 11:39 AM

I will have to disagree with the notion that one should NEVER write an engine... Tell that to the developers of Unity, Unreal, etc...


They would probably agree, as all of them started off life as a game, only to later end up being pulled apart into a reusable engine...

Which is the point every 'do not write an engine' argument makes: you write a game, you learn how to put something together, you pull out the bits you want for your NEXT game and repeat... over time, with refactoring and change, an engine forms.

Having little test demos or complex scenes is not the same thing; we had those, but the in-house engine developed by experienced people wasn't any good when it ran head-first into a REAL game.


#4987931 Best way to manage render targets, buffers, etc...?

Posted by phantom on 08 October 2012 - 04:03 AM

Sometimes writing a good free-standing engine can be incredibly profitable; both financially and through productivity of actual game development.


Unfortunately, unless you are driving that engine development via a game, you are not likely to end up with a good free-standing engine; I've seen what happens when a group of people, who have experience with making games, try to do just that... hell, I've been involved in the various rebuilds/rewrites/fixes over the last year to get the thing from 'we can drive a car around a test level' to 'we can run a game at 60fps across 3 platforms' - some bits of the code are simply not the same any more, because they weren't driven by a need.

All, and I mean ALL, the best engines have been driven by a game to develop and test them.
It's a simple fact of life.


#4987362 Passing shader parameters from a scene node to the renderer

Posted by phantom on 06 October 2012 - 05:32 AM

I might have done a poor job of expressing the idea in pseudo-code (I did it in all of 2 mins) and maybe another poor job of explaining how my renderer currently works. I'm also using C#, so that's another key difference. But I'm pretty satisfied with how I have things set up now. In C# references are like "smart pointers" behind the scenes, so the strongly-typed interface design I'm using now is very efficient and the Renderer can piece things together quickly and pump out very complex scenes from simple code. It can accept new types of vertex data it's never seen before, dynamically generate input elements and input layouts for D3D, handle "foreign" types of meshes and formats, and more. However, I think my "Shader" and "Material" implementations are a bit weak and need revisiting...


Having read quite a few of your posts I'm aware that you are using C#; I'm also more than aware of how C# works, so no need to explain that ;)

However, anywhere the runtime has to make a choice based on virtual type information isn't going to be 'very efficient' - this is not a commentary on C# or the .Net JIT, as the same thing applies to making a virtual call in C++; there is going to be overhead where the runtime has to figure out just where it is going to jump to and what it is going to execute (and with it come the associated cache misses and the like).

This is not to say your design won't work and won't serve you well, but don't think that interfaces and virtual calls aren't costing you anything; this is why AAA-class renderers avoid virtual calls and precook data and types as much as possible.
The same goes for dealing with any type of data; it might seem like a good idea, but the cost of doing it (both design- and runtime-wise) might not seem worth it when you realise just how little you end up using such flexibility at run time.
(A good engine can of course deal with any vertex data layout it is passed, but the layout will be a known quantity at load time and fixed, rather than having to care about dynamic setups. Even runtime-generated data will end up fixed and won't have to be figured out on a per-frame basis.)

The point is you should be making very very few choices at the sharp end of a renderer; your data should be pre-cooked as much as you can, any choices should be simple (virtual calls != simple) and your data well laid out.
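To make the 'pre-cooked' idea concrete, here is a minimal sketch; all of the types and Context methods are made up for illustration and aren't any particular engine's API. By submit time every draw is plain data, the list is already sorted, and the inner loop makes no virtual calls and takes no layout decisions.

[source lang="cpp"]
#include <cstdint>
#include <cstddef>

// Illustrative handle types - in a real engine these would be resolved at load time.
using ShaderHandle = std::uint32_t;
using LayoutHandle = std::uint32_t;
using BufferHandle = std::uint32_t;

// Everything the draw needs, pre-cooked into plain data - no virtual calls,
// no per-frame layout discovery.
struct DrawCommand
{
    std::uint32_t sortKey;       // built from shader/material/depth to avoid A-B-A-B state changes
    ShaderHandle  shader;
    LayoutHandle  inputLayout;   // vertex layout fixed when the mesh was loaded
    BufferHandle  vertexBuffer;
    BufferHandle  indexBuffer;
    std::uint32_t indexCount;
};

// Stand-in for the API context; the real thing would wrap D3D/GL calls.
struct Context
{
    void setShader(ShaderHandle) {}
    void setInputLayout(LayoutHandle) {}
    void setBuffers(BufferHandle, BufferHandle) {}
    void drawIndexed(std::uint32_t) {}
};

// The 'sharp end': the list is already sorted by sortKey, so we just walk it.
void submit(Context& ctx, const DrawCommand* cmds, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        const DrawCommand& dc = cmds[i];
        ctx.setShader(dc.shader);
        ctx.setInputLayout(dc.inputLayout);
        ctx.setBuffers(dc.vertexBuffer, dc.indexBuffer);
        ctx.drawIndexed(dc.indexCount);
    }
}
[/source]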

A lot of the work you're talking about I defer to a "SceneManager"; e.g., determining what's on/off camera, finding LOD levels and mip-map levels, etc.


I guess the split of work depends on how you design the system.
The system above is the middle section of a sandwich.

The 'scene manager' does exist; it contains the scenes and maintains information about what goes into each scene, cameras etc, but it doesn't do any direct work on the scene.
It is queried by the renderer, and for each scene (and thus each camera, due to how our renderer works) it compiles a list of potentially visible objects which are then processed (more or less) as detailed above.
The main difference from the above is that we currently don't support instanced draw calls (and I don't understand why; the choice was made before I moved to the department), so once the vis-list is completed we have a list of per-scene draw calls.
These draw calls are then executed on the deferred contexts before being pushed to the main context for final rendering.

Scene Manager ===> Renderer ===> 'device' via contexts

So the renderer acts as a compiler of data and a feeding system; the scene manager simply holds the scene and makes no choices about what is sent where. (LOD levels etc are decided, currently, per-object at vis-test time; i.e. you pass the vis-test, we do a distance test to see which LOD should be rendered, and that is the draw call created.)
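As a very rough sketch of that sandwich (all names invented for illustration; the real thing obviously does proper frustum/occlusion testing and fills out real draw data rather than just a LOD index):

[source lang="cpp"]
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct DrawCall { int meshLod; /* plus buffers, material, etc. */ };

struct SceneObject
{
    Vec3  position;
    float lodDistance;   // past this distance, drop to the low LOD
};

// The scene manager just holds objects and answers queries - it makes no draw decisions.
struct SceneManager
{
    std::vector<SceneObject> objects;

    // In reality this would be a frustum/occlusion test per camera.
    std::vector<const SceneObject*> potentiallyVisible(const Vec3& /*camera*/) const
    {
        std::vector<const SceneObject*> visList;
        for (const SceneObject& obj : objects)
            visList.push_back(&obj);
        return visList;
    }
};

// The renderer turns the vis-list into per-scene draw calls (no instancing here,
// matching the setup described above); these would then go to deferred contexts.
std::vector<DrawCall> buildDrawCalls(const SceneManager& scene, const Vec3& camera)
{
    std::vector<DrawCall> drawCalls;
    for (const SceneObject* obj : scene.potentiallyVisible(camera))
    {
        float dx = obj->position.x - camera.x;
        float dy = obj->position.y - camera.y;
        float dz = obj->position.z - camera.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

        DrawCall dc;
        dc.meshLod = (dist > obj->lodDistance) ? 1 : 0;  // pass vis-test, then pick LOD
        drawCalls.push_back(dc);
    }
    return drawCalls;
}
[/source]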

If you have any suggestions on how I could improve this system I'm all ears. I could especially use more work in my "Shader" and "Material" implementations as I said earlier. For instance, the right time/place to bind variable values to the underlying effect code the "Shader" class wraps. For instance, should a "RenderOp" contain shader variables or should they be bound to the shader prior to pushing an op to the Renderer?


Shaders should take their values from a combination of the material being used and runtime data; aside from sorting your draw calls, shaders really shouldn't factor in that much, and even then they are only useful to know about for building a 'sort key' to ensure you aren't doing things like 'A-B-A-B' drawing.
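One common way to build such a sort key is sketched below; the field widths are arbitrary and the function name is made up. Pack shader, then material, then depth into a single integer, sort the draw calls on it, and A-B-A-B state thrashing disappears by construction.

[source lang="cpp"]
#include <cstdint>

// Sketch only: 16 bits of shader id, 16 bits of material id, 32 bits of depth.
inline std::uint64_t makeSortKey(std::uint16_t shaderId,
                                 std::uint16_t materialId,
                                 std::uint32_t depthBits)
{
    return (static_cast<std::uint64_t>(shaderId)   << 48) |
           (static_cast<std::uint64_t>(materialId) << 32) |
            static_cast<std::uint64_t>(depthBits);
}
[/source]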

Material data itself should mostly be static, more than likely pushed into a buffer of some sort (a constant buffer on D3D, or the OpenGL equivalent) and never touched; just rebound each time the draw call is about to happen.

Per-instance data is, as you've got, compiled into a buffer and attached at draw time. The buffer is going to be per-material (well, depending on the material it might have a few per-instance buffers for different techniques of drawing; for example, with a shadow map pass and a colour pass you might want two per-instance buffers: one for the colour pass only and one which is shared between both passes, depending on shader/data requirements). This would be configured via data and split as needed at runtime (see the copying example from my earlier post for data routing).


#4986538 Poll: There's a big piece of code in need of a rewrite, do you create a n...

Posted by phantom on 03 October 2012 - 03:00 PM

As others have said, it depends.

If it's a clean-up, then in place.
If it's a top-down performance rewrite (such as C++ => intrinsic-based vectorised code) then, depending on the size, either in place or I'll copy the old code out to Notepad++ on another monitor and replace.

If it's a self-contained function then I'll keep the old function and write a new one with a name which indicates the difference, and direct the code there (such as a recent 'calculateSHParameters' which became 'calculateSHParametersSIMD' once I rebuilt it).


#4986532 Communicating with Programmers

Posted by phantom on 03 October 2012 - 02:49 PM

or is this goal achievable through knowledge of programming jargon alone?


I spent 20 mins moaning about this earlier; if you do not have a solid grasp of what something means, do not try to use it.

I work as a rendering programmer, which means I have to interact with artists a fair amount, and while I might well shout and swear about them from time to time due to frustrations with the way they do things (like large amounts of polys under the ground which you'll never see...), the thing which annoys me the most is when they hear a new term and suddenly start trying to use it everywhere; I've taken to assuming they are using the term incorrectly (until I know otherwise), because if I take them at their word I find it slows down my bug fixing etc.

(QA are the second biggest offenders here; so many bug reports of 'z-fighting' or 'light probe issue' when the problem wasn't either of those things!)

I guess the best way to ask is: has there ever been something you wished a creative member of your team knew, be it a language, program, whatever, that may have facilitated development or at least communication?


Frankly: English.
If it helps, then rough drawings/sketches.

If you are talking to a rendering programmer then assume we have a working knowledge of things such as vertices, UV channels etc which at least matches your own, but don't try to frame things only in the context of 3ds Max and Maya, because not all of us know those programs.


#4985393 Should I give up?

Posted by phantom on 30 September 2012 - 10:30 AM

In most games the game data will end up MUCH bigger than any runtime/exes anyway, so it's generally a non-issue size-wise.


#4985208 Passing shader parameters from a scene node to the renderer

Posted by phantom on 29 September 2012 - 06:58 PM

Put simply; unless it is per-instance data the model shouldn't care.

When you create your model it gets a reference to a material structure (assuming C++, this could simply be a Material class pointer), and when the draw call is set up the pointer is placed into the structure.

Internally the material will hold all the data needed to work which isn't set up per-instance (colour data, textures which are the same for all instances of the model, etc.).

So, for example, if you had a 'house' material, all any object which uses it needs to do is indicate to the renderer that it uses that material, and that's all. It doesn't need to care about textures, parameters or any of that stuff. If you need a house which has a different texture then you'd create a different material (so you might have wood_house and brick_house materials) with the same shaders but different textures; internally you deal with not duplicating the shader setting and only change the textures between draw calls.

Per-instance data is a little trickier and it does depend somewhat on how your engine is set up.

For example, if you have a simple update-draw loop on a single thread then you can get away with passing a chunk of memory, along with the rest of the draw data, which contains the per-instance data. This could be something as simple as an array of [handle, DataType], where 'handle' is something you asked the material for, for a per-instance parameter, and 'DataType' could well be a union with a 'type' flag so that the same data could be used for float4 parameters and texture data (or even a 'known' constant block layout if you wanted to expose it).

A solution for a threading setup would be more involved as you'd need a 'game' and 'renderer' side abstraction and then data gets passed across via a message queue from one to the other and stored locally.

The key point really is that the material needs to have a concept of 'per-instance' data and each mesh instance needs to store its own copy of that data somehow. The handle-based system is probably simplest.

[source lang="cpp"]
struct InstanceData
{
    HandleType handle;
    DataType type;
    union
    {
        Float4 values;
        Texture * texture;
    };
};

class Model
{
    Material * myMaterial;
    InstanceData myInstanceData[1];

    void Creation()
    {
        // get all resources here somehow, including myMaterial
        myInstanceData[0].handle = myMaterial->getParameter("foo");
        myInstanceData[0].type = DataType::TextureData;
        myInstanceData[0].texture = textureSystem.getNameTexture("cow");
    }

    DrawCall createDrawCall()
    {
        DrawCall dc;
        // fill out general draw details
        dc.instanceDataPointer = myInstanceData;
        return dc;
    }
};
[/source]

So when the object is created it gets a handle to the 'foo' parameter and sets the data to a pointer to the texture named 'cow'.

Once the draw call is created this per-instance data is added to it, and in the renderer it is checked and the resulting data is set on the correct variables for the draw.
(Internally the material knows how much per-instance data to expect; at creation time this would be queried to size the array correctly, instead of the hard-coded example above.)

(Note: This is only a rough sketch of the idea. It needs fleshing out but I hope you get the idea).
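To round out the sketch, this is roughly what the renderer side might do with the attached per-instance data. It continues the example above; applyFloat4/applyTexture/instanceDataCount and DataType::Float4Data are made-up names for illustration, not anything prescriptive.

[source lang="cpp"]
// Renderer-side application of the per-instance data attached to a draw call.
void applyInstanceData(Material& material, const DrawCall& dc)
{
    const int count = material.instanceDataCount();   // material knows how much data to expect
    for (int i = 0; i < count; ++i)
    {
        const InstanceData& data = dc.instanceDataPointer[i];
        switch (data.type)
        {
        case DataType::Float4Data:
            material.applyFloat4(data.handle, data.values);
            break;
        case DataType::TextureData:
            material.applyTexture(data.handle, data.texture);
            break;
        }
    }
    // after this the material's buffers/textures are bound and the draw can be issued
}
[/source]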


#4984039 Rendering concept

Posted by phantom on 26 September 2012 - 12:07 PM

Also, as someone else mentioned, do not store entire objects in memory within arrays, lists, collections, etc... Store only the pointer to it. Otherwise, you're duplicating (potentially) a LOT of data; plus, modifying one instance in one set of memory will not reflect the changes in another... and any type of sorting/moving operation requires a LOT of extra work for the CPU. Remember that the CPU moves data fastest when it comes in chunks that match the native size of the registers (e.g., 32-bit chunks for x86, 64-bit chunks for x64). So storing the pointers to objects in your array/list/collection can be a marked optimization, as pointers will naturally match the optimal native data size.


However, you have to be careful with this kind of thinking.

Pointers are basically cache-miss factories, and missing the cache is one of the worst things you can do, as the CPU has to stall while it goes off to fetch data from memory. Amusingly, as CPUs have gotten faster that stall has gotten worse; it can take hundreds of cycles before your data has been fetched, which is all time the CPU is left twiddling its thumbs.

For certain operations it is BETTER to pack memory into contiguous chunks and access them in one direction from start to end; the pre-fetcher in the CPU will hide some or all of the latency of the memory fetches and you'll basically be pulling data from cache as you go. Of course you also have to pay attention to data layout, but it can be worth it.

So, if I was writing a command list system for feeding a graphics API would I embed the mesh into the list? No.
But would I embed the translation matrix? Yes, I would, as I'm going to want to read it anyway, and going via a pointer is just accessing memory for the sake of accessing memory.
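As a rough illustration of that trade-off (all names made up): the matrix is embedded by value because the submit loop always reads it, while the mesh stays behind a pointer because its contents are never walked here.

[source lang="cpp"]
#include <vector>

struct Matrix4 { float m[16]; };
struct Mesh;                              // big resource, lives elsewhere

struct RenderCommand
{
    Matrix4     world;   // embedded by value: read on every submit, no extra indirection
    const Mesh* mesh;    // kept as a pointer: we never walk its contents here
};

void uploadMatrix(const Matrix4&) {}      // stand-ins for the real API calls
void drawMesh(const Mesh*) {}

// One contiguous array walked front to back - the prefetcher can keep up and
// the matrices come straight out of cache instead of via a pointer chase.
void submit(const std::vector<RenderCommand>& commands)
{
    for (const RenderCommand& cmd : commands)
    {
        uploadMatrix(cmd.world);
        drawMesh(cmd.mesh);
    }
}
[/source]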

Data layout and cache friendliness are two of the most important concepts going; saving some bytes by using a pointer instead of a real object might seem clever, but often it can result in slowing things down rather than speeding them up.

Learn how the system works and avoid "typical C++ bullshit" (google that, it's a good read) if you want to do things quickly.


#4983795 OpenGL and unified address space

Posted by phantom on 25 September 2012 - 05:16 PM

Until recently, I'm 99.999999% sure that wasn't the case; at system start-up a chunk of memory was reserved for the integrated GPU and that was all it could see.

If you wanted to copy something to GPU controlled memory then it was copied across from 'system' ram to 'graphics' ram.

That's why AMD's pinning extension is a pretty big deal, as it allows that zero-copy stuff to work; but the GPU doesn't 'own' the memory, it is just locked so it can't be paged out from its physical address. The GPU itself is still just seeing a physical address, albeit one outside of its normal address range.


#4983767 OpenGL and unified address space

Posted by phantom on 25 September 2012 - 04:07 PM

As an aside, integrated GPUs have done this for years.


In fact they haven't; CPUs and GPUs, even integrated ones, use their own address spaces to access the physical memory. What 0x00F4567380 refers to as far as the CPU is concerned is different to what the GPU sees.

AMD's Trinity APUs are, afaik, the first CPU+GPU combo where both parts can access the same memory without requiring a driver to do address translation in any form, but it won't be until 2013 that they will be using the same memory controller.

As for the second part of the question; AMD do have some extensions which allow you to 'pin' memory so that it can't be swapped out (GPUs currently can't handle paging in/out of memory so any pages shared must be resident) and thus freely accessed by both CPU and GPU parts - however this is really only useful in the context of an APU otherwise the GPU would be accessing via the PCIe bus which would be a tad on the slow side.

Now, once both the CPU and GPU share the MMU and can respond to page faults accordingly such pinning won't be required, but that's not due until the 2013 time frame from AMD.


#4983253 Is XNA dying and MS forcing to C++?

Posted by phantom on 24 September 2012 - 09:56 AM

The source engine didn't come out until 2003 when it was stolen by hackers, I have no idea where you got 2001.


Sorry, that was a brain-fart on my part : I meant Half-Life Engine.
(Although Source is just the evolved HL Engine but that's just splitting hairs ;))

Valve and iD have always been open source friendly; some of their early games were OpenGL because they wanted to port them some day. Just because early DirectX was garbage doesn't mean it wasn't heavily used. As I recall, most OpenGL games made back then were also D3D, if need be. The few years where the infighting did hold it back are over with now. It only lasted a couple of years and I feel the vendors learned a lot from it.


I don't quite know where you get the idea they were Open Source friendly from; while iD have indeed released older engines as open source, for some time they were closed to the point where you weren't allowed to use their tools to create levels for your own game. Valve have no Open Source pedigree that I can think of; they might put out some papers, but I can't ever recall them throwing source out into the open.

iD's games started off life as software rendered; the move to OpenGL came later, with GLQuake being released around the time of Win95 and the rise of the Voodoo cards. Back then it was OpenGL or Glide: D3D was about but not widely used, although MS were trying to push it. Valve got involved due to their licensing of the OpenGL-based Quake engine (which then got some of the Quake 2 upgrades), which is why HL was primarily a software/GL-based engine at the time. This had nothing to do with wanting to port things; this was just the state of technology at the time.

Some games did gain D3D support; HL and UT did indeed have D3D renderers, but for a long time they were rubbish when compared to the OpenGL-based ones - slower and lower image quality tended to be their hallmarks, while OpenGL, and indeed the Glide-based renderers, were much nicer to look at. (I accidentally used D3D to render CS once... I very quickly went back to OpenGL as it was simply better at the time.)

The ARB infighting lasted long enough to scupper at least 2 OpenGL versions and slow development down, which is what has ultimately done OpenGL in when it comes to AAA development cycles - in a world where D3D is king and the better API (in the form of D3D11), the desire to use OpenGL is low unless you want to port, and then it's saner to put a layer over your API and target D3D on Windows anyway, as that's where the stability and speed remain (as well as the tools).

OpenGL will always be around regardless of its slight inferiority to DirectX, all because it runs on almost everything. Yes, I, an OpenGL fan, admit DirectX is slightly better... just slightly.


OpenGL needs to stay around and it needs to be strong in order to force development; if not, things will stagnate. Right now it is still lacking the latter (being able to run on the minority platforms of Linux and OSX isn't, in the AAA sense, much of a draw; if we can drop XP/DX9 support while that market is still 10%+, then the sub-10% combined apparent market of Linux and OSX has no real draw given the engineering cost).

I used OpenGL for a good 8 years before switching; up until DX9 it really was a case of 'whatever floats your boat' API-wise - but with the GL3.0 fuck-up I gave up on the ARB being able to do something sensible with OpenGL, and thus made my move to D3D land where things are nicer. Given I've always just run a Windows system (well, I did play with Linux for a while and concluded 'meh' some years back), there was no great loss.

By the way I love OpenGL|ES, I always assumed it was openGL. My bad on that.


Well, as I said, OpenGL|ES makes sense - it is the saner, stripped down, relative.
Hell, if OpenGL Longs Peak had happened then chances are I'd still be using OpenGL myself... but it didn't, and because of that I still refuse to trust that the ARB will do something sane any time soon, and thus I just watch OpenGL stagger forward from a distance...


#4983165 Is XNA dying and MS forcing to C++?

Posted by phantom on 24 September 2012 - 04:01 AM

I like c++, yet for WP7 and Xbox-Indie I'd be forced to one particular language, although there is no reason something else wouldn't run.


Correct, you can use C++ on the XBox... if you are willing to pay all the certification fees to make sure your game doesn't do something it's not meant to do.

Using .Net let MS sandbox XNA apps so they couldn't run riot over your console (consoles generally being completely locked-down devices), and gave people a chance to release games on it. Now, I think some of it was very broken (for example, most of the floating-point power of a 360's CPU is in the vector units and their SIMD operations, yet last I checked this wasn't exposed), but it is still a chance you wouldn't get without dropping thousands of dollars to have things tested etc, which raises a massive barrier to entry.

Welcome to the reality of wanting to run untrusted code on what is effectively a 'trusted' platform - you either get things signed or play in the sandbox.


#4981850 Why are most games not using hardware tessellation?

Posted by phantom on 19 September 2012 - 04:17 PM

One reason could be that the industry still makes games primarily for Xbox 360 and PS3.


Others have commented but I'll throw in my voice to this too.

While the internals of the renderer in our shiny new engine are designed/arranged in a D3D11 fashion, it was only November of last year that we were given permission by management to drop/rip out the Windows DX9 path and only support the DX11 (+ feature levels), X360 and PS3 paths for the game we are primarily working with.

Even then, however, we don't have any support in place for compute shaders, tessellation, geo-shaders or any other 'post-DX9 hardware functionality' (cbuffers etc are of course used internally, but that's an implementation detail and nothing more).

Post-release we do have plans to add these things, as the game teams require them, but right now it's basically DX9 features on the DX11 API (not that the game supports DX9 hardware, but, ya know, details ;))

(side note: I had, however, considered hacking in PN Triangle/Phong triangle support for a few materials, unfortunately workloads haven't allowed it as yet and I'm not sure management would like me sneaking it in via a hidden command line option anyway ;))


#4981830 Is XNA dying and MS forcing to C++?

Posted by phantom on 19 September 2012 - 03:14 PM

This is the reason why C++ is strongly recommended these days by MSFT.


It is?

Huh... news to me; they have done a lot of work to improve their C++ support and include it in the WinRT work for Win8 to make it seamless, but nowhere have I seen 'we recommend C++ over any other .Net language' - if anything, everything I've seen has been along the lines of 'use whatever you are happy with - the support is the same across the board'. If anything, MS have been working to bring C++ development up to the .Net standard tools-, library- and API-wise, so that you can develop your "metro" apps with any language pretty seamlessly.

On XNA;
XNA's trouble was that it was really mired in DX9-style development with no real path forward to DX11 platforms and features, so it 'going away' should be no surprise to anyone.

Nor is the 'death of XNA' a good reason to go to C++ - you'll quickly realise how much you've lost, support-library-wise, by doing that.

Note: I'm not saying 'don't learn C++', but as a reason that is a pretty poor one.


#4981084 C# and C++

Posted by phantom on 17 September 2012 - 06:54 PM

C++ : A Cautionary Tale.

For the past couple of days at work I've been optimising/vectorising some C++ code - something I'm in fact normally very, very good at (low-level stuff is a speciality of mine). Last week was a long one (~52h, or close to 7 days' worth of hours in 5 days) due to us being in the closing weeks of the project, so I was a little tired and thus didn't check the code in, as I wanted to test it when I was fresh.

I tested the code; it worked on two platforms, but the rewritten SPU code didn't do as expected.
The code compiled - the code ran - but the code clearly wasn't correct.

Many hours of debugging later I discovered the two problems:
1. a difference in the API for SPU/PS3 vs PC/XBox
2. I'd made a mistake with some pointers.

Basically, when I started writing the code on the Friday I started with float*, so all my sizes, offsets and pointers were in terms of floats.

Later, however, I swapped to a vector format, 4 floats wide, but in my tiredness failed to update all my pointer offsets correctly.

The compiler accepted the code.
The SPU ran the code.
The code even produced some output without crashing for a good while.
But the code was wrong.

This is the kind of thing you get to work with when it comes to C++: you might not be working with SPUs, but it is oh so easy to make mistakes which function correctly 999 times out of 1000 and then, on the 1000th try, blow up in your face.
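A boiled-down example of that class of mistake (not the actual SPU code, just an illustration): offsets worked out in floats, then reused unchanged after the pointer type became four floats wide.

[source lang="cpp"]
struct Vec4 { float x, y, z, w; };

void processBlock(float* data, int blockIndex, int floatsPerBlock)
{
    float* block = data + blockIndex * floatsPerBlock;   // offset in floats: fine
    // ... work on block ...
    (void)block;
}

void processBlockVectorised(Vec4* data, int blockIndex, int floatsPerBlock)
{
    // BUG: pointer arithmetic now advances in Vec4s (16 bytes), so reusing the
    // float-based offset walks 4x too far. It compiles, it runs, and the output
    // is just quietly wrong.
    Vec4* block = data + blockIndex * floatsPerBlock;
    // Correct would be: data + blockIndex * (floatsPerBlock / 4);
    (void)block;
}
[/source]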

An older tale, from a good 8 or 9 years back: I was working with a friend on some rendering concepts for a game. The code ran fine on my PC, but didn't work properly on his. About a day later I finally found out why: an uninitialised variable. On my machine it would always spin up as 'true'; on my friend's it seemed to be 'false' 99% of the time.

When learning to program you don't want something which trips you up like this, causing random and hard-to-explain bugs. I'm an experienced programmer and, despite the tale above, very good at it, but even I trip up from time to time (I also write code which functions flawlessly, but that doesn't make for a good cautionary tale *chuckles*).

Reasons such as this are why C# and Python are recommended - they let you learn to reason as a programmer BEFORE you have to deal with the subtle and hard to find issues that C++ brings to the table.

And it's not like starting with a language which shields you from the crazy makes you any worse off in the long run; I started with BASIC and was happy there for many years before taking the leap into 68000 assembly, but at that point I had a strong understanding of programming, which meant the assembly was easier to follow and understand.



