I've just been familiarising myself with SSE and I can mostly see what's going on now. I can think of a couple of areas where my code might benefit from it too, so that's good.
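For anyone else just getting started, here's roughly the kind of thing SSE buys you. The function and arrays here are made up, purely to show four floats being processed per instruction:

```cpp
#include <xmmintrin.h> // SSE intrinsics

// Add two float arrays four elements at a time.
// Assumes n is a multiple of 4; a real version would handle the remainder.
void addArrays(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);            // load 4 floats (unaligned)
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // 4 adds in one instruction
    }
}
```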
I'd love to have a look at the code of some of the AAA games out at the moment, especially the likes of COD. I guess part of the fun of game development for me is trying to work out how things are done and then doing my own interpretation. I generally only buy a game if I want to see how it works - I spend most of my time in corners looking at detail or at how they've done shadows, etc. I bought the PC version of Crysis a while back and I've never actually played the game; I spent all my time in the sandbox.
So I re-ask my original question: what metrics are we using for defining "complicated"?
I spent a good half an hour or so trying to work out why it wasn't building out of the box in Visual Studio 2005, and some of the files I found myself in were heavy on asm and used lots of SSE (I guess) or other SIMD stuff I've never come across.
I've just had another good look through, and along with the asm, they use very short variable names, which, to me, always makes things look more complicated. I guess the question I was really asking was: are AAA games flooded with asm and things like that?
Things under BT_USE_NEON appear to use lots of calls I've never heard of; I guess it was just unfamiliarity that fazed me. I'm back to being unfazed for my own project.
With physics engines like Bullet, can you apply your own calculations to the resulting positions of rigid bodies, etc.? My game is loosely based around snowboarding, and whilst I'm sure I could easily model a board sliding down a slope, it gets a lot more complex when you consider that being on an edge has different physics properties to being flat on the snow.
For a few days I've been weighing up the pros and cons of writing my own physics versus using something like Bullet. If I do my own, it'll obviously get pretty complex, but if I can't model the different parts of the snowboard in a middleware physics engine, I might have to consider my own cut-down version.
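From a skim of Bullet's docs, one plausible starting point for the edge-vs-flat problem might be anisotropic friction: low friction along the board's long axis, high across it. The types below are real Bullet classes, but the function names around them and the values are just my untested sketch:

```cpp
#include <btBulletDynamicsCommon.h>

// Sketch only: treat the board as one rigid body and let friction differ per
// local axis, so it slides along its length but grips across the edge.
// The values are placeholders, not tuned.
void configureBoard(btRigidBody* board)
{
    // x = along the board (slides easily), z = across the board (grips)
    board->setAnisotropicFriction(btVector3(0.05f, 1.0f, 0.9f));
}

// You can also layer your own model on top each frame, before stepping the world:
void applyEdgeForces(btRigidBody* board, const btVector3& carveForce)
{
    board->applyCentralForce(carveForce); // e.g. output of your own edge-carving model
}
```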
On a simpler note, I personally find these things irritating in game visuals:
Polygon joins: if you don't disguise joins, they can completely ruin a scene. A boulder on a terrain needs either foliage hiding the joins or some clever texturing.
Too much bloom: I think this can make a scene look too 'bloomy'. It's almost like the soft focus on a film when they want to make someone look prettier than they are.
Tearing: this is a huge no-no for me. I refuse to play a game with it, and it frustrates me that developers have obviously put too much in and still release with tearing instead of pulling things back.
Badly coloured smoke effects: smoke that just doesn't match the scenery colour-wise is inexcusable.
Badly rendered smoke effects: in the real world, smoke doesn't have hard edges.
Badly rendered billboards: if you're using billboards to cheat, use them sparingly, otherwise this can look more cheap than realistic.
Collision: a person walking into a wall and continuing to walk just looks wrong, along with artefacts sticking into/out of things.
Add chaff: scattering a few little rocks here and there and using decals can greatly help realism.
Texturing: I'm more pleased to see cleverly placed textures than hi-res ones, i.e. keep repeating textures to a minimum.
I think in general, if an effect doesn't look good, e.g. billboarded grass, rethink it or take it out.
But to answer your question, I'd try to keep it to 20-50%. You need some room for characters, special effects, and post-processing.
The budget always varies from game to game. You often can't lock down your budgets until you've implemented the whole workload and then started to optimise, cut back and balance everything together... Or you rely on experience from previous (similar) games to set your starting budgets.
Maybe you want 30% for characters, maybe 10%. Maybe 50% for post, maybe 10% :-/
Often I've seen environments and characters combined at ~25% and post at 50%, but on other games that could be flipped.
What kind of game is it, what camera angles/distances, and what else needs to be drawn?
It's a snowboarding/skiing game. I'll need to draw other static objects like instanced trees, huts, jumps, etc., one main skinned character, probably a maximum of 2 or 3 other close characters, plus up to a dozen or so other lower-detail skinned characters. Draw distance is fairly crucial and needs to be, at times, as far as the eye can see. I've developed a crude dynamic PVS (potentially visible set) method which I need to redevelop, but it works OK for now.
I'm currently rendering at an average of around 2ms, up to an absolute maximum of 5ms, to draw the entire 4096x4096 terrain with all texturing. Based on the comments, that feels like a pretty good start.
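For context, assuming a 60 fps target: the whole frame budget is about 16.7 ms, so 2-5 ms for the terrain works out at roughly 12-30% of it.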
I'm just wondering what best practice is with chunked terrains and component entity systems. Should each chunk be an entity in itself, or should the whole terrain be one entity?
I'm trying to streamline and refactor my rendering process and, indeed, design the parts of it that haven't been built yet. I have a sandbox project that doesn't use my new architecture; it contains my terrain system, which is honed and extremely efficient. Porting it to my new game engine is throwing up lots of design decisions. My rendering engine essentially works sequentially through a vector of render 'tokens', which lets me sort on various criteria. My main question is: 'should' my terrain parts (i.e. chunks) be just more render tokens in the list, or should I keep my specialised terrain rendering code in its own render method?
It feels wrong to keep it in its own method, but its quadtree and relational nature doesn't really lend itself to a linear list of completely unrelated render tokens. My set-up is essentially like this at the moment (a rough sketch of the token side follows the list):
Entity (contains standard orientation data)
---> RenderableComponent (there is a link to this object in each 'RenderToken' which just holds the sortable key, the material and link)
---> SkeletonComponent (this is just the skeleton data)
---> MeshComponent (this is just the mesh data)
---> AnimatorComponent (this builds skeletal poses based on animation data)
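Roughly, the token side looks like this. Heavily simplified; Material and RenderableComponent stand in for my real classes, and the real sort key packs more state:

```cpp
#include <algorithm>
#include <vector>

struct Material;            // stands in for the real material class
struct RenderableComponent; // the component each token links back to

struct RenderToken
{
    unsigned long long   sortKey;    // packs material/depth/state for sorting
    Material*            material;
    RenderableComponent* renderable; // link back to the entity's component
};

bool byKey(const RenderToken& a, const RenderToken& b)
{
    return a.sortKey < b.sortKey;
}

void render(std::vector<RenderToken>& tokens)
{
    std::sort(tokens.begin(), tokens.end(), byKey); // sort on the packed key
    // ...then walk the list linearly, issuing draws per token...
}
```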
My entity system doesn't really have 'systems' that control the entities/components; rather, the functionality exists within the components themselves. This was an early design decision that I quite like, but it's easily changeable.
So how would you go about moving a quadtree-based terrain system into this architecture? Keep it a self-contained terrain system or integrate it into the pipeline properly?
I do my comp/ent system this way: my entity holds a map of components keyed on the component enum type, which means you don't have to loop through components to find the one you want. Works really well for me. I guess if you wanted more than one component of a similar type, you could hold a vector of components in the map; it wouldn't take much to change.
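Stripped right down, it looks something like this (ownership and cleanup omitted, names illustrative):

```cpp
#include <map>

enum ComponentType { CT_RENDERABLE, CT_SKELETON, CT_MESH, CT_ANIMATOR };

struct Component { virtual ~Component() {} };

struct Entity
{
    std::map<ComponentType, Component*> components; // keyed lookup, no looping

    Component* get(ComponentType type)
    {
        std::map<ComponentType, Component*>::iterator it = components.find(type);
        return it != components.end() ? it->second : 0;
    }
};
```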
I skimmed through your post, mostly interested in how you write. It's very Andre Lamothe and will be great for beginners to game dev and C++. Once you get your code rock solid and you're more confident that what you've written, while not necessarily the 'right' way to do things, is perfectly acceptable, you should consider writing a full blog or something - you have a talent for writing.
Whilst skipping very quickly through the code, I noticed that your scenes implement IEvent. Conceptually, that doesn't really make sense: I would assume scenes can accept IEvents, but they aren't events themselves. I would consider renaming this particular interface to IEventHandler or something similar. It's the IEventHandler that would accept objects implementing IEvent, and it would likely impose a HandleEvent(IEvent event) type method.
It's a cosmetic thing, but it might be a bit confusing for beginners who are just picking up OO concepts or C++.
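To illustrate what I mean in minimal form (names are just suggestions):

```cpp
struct IEvent
{
    virtual ~IEvent() {}
};

struct IEventHandler
{
    virtual ~IEventHandler() {}
    virtual void HandleEvent(const IEvent& event) = 0;
};

// A Scene would then be a handler of events rather than an event itself:
// class Scene : public IEventHandler { /* implements HandleEvent(...) */ };
```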
If your character/camera doesn't move too quickly and you're talking about objects that are a very long way off, you can always render the distant objects' lowest LODs to a render target once every x frames and draw that as a billboard. I haven't tried this myself, but I've heard of it being done.
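As I say, I haven't tried it, but the D3D9 plumbing would presumably look roughly like this. Untested outline: the texture size and refresh interval are guesses, and error handling is omitted:

```cpp
#include <d3d9.h>

// Re-render a distant object's lowest LOD into a small texture every N
// frames, then draw that texture as a camera-facing quad (an "impostor").
IDirect3DTexture9* createImpostorTexture(IDirect3DDevice9* device)
{
    IDirect3DTexture9* tex = NULL;
    device->CreateTexture(256, 256, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, NULL);
    return tex;
}

void refreshImpostor(IDirect3DDevice9* device, IDirect3DTexture9* tex, int frame)
{
    if (frame % 30 != 0) return;          // refresh interval is a guess

    IDirect3DSurface9* target = NULL;
    IDirect3DSurface9* previous = NULL;
    tex->GetSurfaceLevel(0, &target);
    device->GetRenderTarget(0, &previous);

    device->SetRenderTarget(0, target);
    device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
    // ...draw the object's lowest LOD here, from the current camera angle...
    device->SetRenderTarget(0, previous);

    target->Release();
    previous->Release();
}
// Each frame, draw a camera-facing quad textured with the impostor texture.
```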
I wrote my first game on the ZX Spectrum some 29 years ago. My friends were busy swapping cassette games to try and beat each other's scores, and I would borrow them to amaze myself at how they were programmed.
I then went on to write my first "3D game" on the Sinclair QL, a copy of a golf game called Leaderboard. I never got past the 1st hole, but it taught me all about trigonometry - no graphics libraries in those days, and sin/cos were better done as lookup tables back then rather than calculated (if you had the spare memory).
I then wrote several games on the Amiga, even publishing one on licenseware which had great magazine write ups (still got the Amiga Format mag somewhere).
Then onto PCs with a very early version of DirectX, which was mind-blowing compared to what I'd used before. Always self-taught in the early days, but now I just seek the assistance of the friendly experts on here. I work in an investment bank for my day job and it's nowhere near as much fun.
41 now and still tinkering with my engine, enjoying it as much as, if not more than, when I was burying my head in the Amiga Hardware Reference Manual trying to get a sideways scroller working in DevPac assembly in the early nineties.
For those younguns among you who think it's just a phase you'll grow out of.... It ain't!
Another quick question on this: is this how sprites are also usually done? Or would you generally use point sprites? I'm using DX9.
The reason I'm asking is that for my level editor (which will just use the C++ game engine from a C# front end), I'm going to need to overlay sprite-type icons on objects/zones, etc. This way seems pretty convenient to me.
Although I guess this kind of changes the title to non-static graphics!
When you want to show something like a health bar, a map, a score or the number of lives, etc., I thought you might have to set up a specific view/projection matrix, but I just 'accidentally' did it by specifying the graphic's bounds directly in post-projection space, e.g. the whole screen runs from -1,-1 to 1,1 in x and y. In the vertex shader I just don't transform by a matrix, and it works.
Is this how it's normally done anyway? It seems quicker than doing it with matrices.
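In case it helps anyone, the gist of what I did (the coordinates and vertex layout here are illustrative, not my actual code):

```cpp
// Vertices specified directly in clip space; no matrix transform needed.
// The full screen runs from (-1,-1) to (1,1) in x and y.
struct ScreenVertex
{
    float x, y, z, w; // already in clip space, w = 1
    float u, v;       // texture coordinates
};

// e.g. a bar near the top-left corner (values purely illustrative):
ScreenVertex bar[4] =
{
    { -0.95f, 0.95f, 0.0f, 1.0f, 0.0f, 0.0f },
    { -0.55f, 0.95f, 0.0f, 1.0f, 1.0f, 0.0f },
    { -0.95f, 0.90f, 0.0f, 1.0f, 0.0f, 1.0f },
    { -0.55f, 0.90f, 0.0f, 1.0f, 1.0f, 1.0f },
};

// The vertex shader then just passes the position through untransformed:
//   output.pos = input.pos;  // no world/view/projection multiply
```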
I think it depends on the texture coordinates. If you create an organic shape in 3ds Max and UV-unwrap it correctly, vertices should be shared.
For the cube example, it may be that 3ds Max creates it with different texture coordinates on each face, meaning you'll need duplicated vertices (unless you're using an atlas), and that's what it'll export.
The offset thing: yes, that just means that for each consecutive group of 3 indices in the <p> element, the first is the vertex lookup, the second is the normal lookup and the third is the texture coordinate lookup.
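So a decode loop looks something like this, assuming exactly those three inputs in that offset order:

```cpp
#include <cstddef>
#include <vector>

// Decode a COLLADA <p> element where each vertex is described by 3 indices:
// offset 0 = position, offset 1 = normal, offset 2 = texture coordinate.
void decodeIndices(const std::vector<int>& p)
{
    for (std::size_t i = 0; i + 2 < p.size(); i += 3)
    {
        int posIndex  = p[i];     // index into the position source
        int normIndex = p[i + 1]; // index into the normal source
        int uvIndex   = p[i + 2]; // index into the texcoord source
        // ...assemble the final vertex from the three source arrays...
    }
}
```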