Hodgman

Member Since 14 Feb 2007

#5070100 DWORD instead of std::bitset fails

Posted by Hodgman on 15 June 2013 - 11:38 PM

Though even more off-topic, do you have any suggestion on how to improve that, maybe some completely different branch-less approach for checking if a state was already applied before?

I assume that actually applying/submitting the states that make it past that continue statement pretty much has to involve branching, but sure, you could eliminate that if-continue branch like this:

const BaseState* toApply[MAX_STATES];
int toApplyCount = 0;
for(...)
{
	toApply[toApplyCount] = pState;
	u64 mask = (1ULL << type);
	int isValid = (bPreviousApplied & mask) ? 1 : 0; //should compile to a conditional move, not a branch
	toApplyCount += isValid;
}
for(int i=0; i!=toApplyCount; ++i)
{
	toApply[i]->Apply();
}

n.b. the pattern of if((bitfield & mask) == mask) is only required if the mask contains more than one bit and you need to check that every bit in the mask is set. If mask only has one bit set, you can just use if(bitfield & mask). But I guess that only matters if you're doing the bitfield logic by hand instead of using std::bitset ;)
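For example, a small illustration (assuming a plain 64-bit integer bitfield, matching the u64 type used above):

#include <cstdint>
typedef uint64_t u64;

bool TestMasks(u64 bitfield)
{
	const u64 single = 1ULL << 5;                 // mask with one bit set
	const u64 multi  = (1ULL << 5) | (1ULL << 9); // mask with two bits set

	bool singleSet  = (bitfield & single) != 0;     // enough when the mask only has one bit
	bool anyOfMulti = (bitfield & multi)  != 0;     // true if *any* bit of the mask is set
	bool allOfMulti = (bitfield & multi)  == multi; // true only if *every* bit of the mask is set

	return singleSet && anyOfMulti && allOfMulti;
}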




#5069739 DWORD instead of std::bitset fails

Posted by Hodgman on 14 June 2013 - 06:28 AM

Yes, "1<<x" is dangerous, because "1" is an "int". If "int" is 32 bits, then if x is 32 or higher, you're gonna have a bad time.
Make sure that "1" is cast to the appropriate type before shifting, as above.
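For example, a small sketch of the difference (the names are just for illustration):

#include <cstdint>

uint64_t MakeMask(uint32_t x)
{
	//uint64_t bad = 1 << x;           // "1" is an int: undefined behaviour once x >= 32
	uint64_t ok1 = uint64_t(1) << x;   // promote to 64 bits before shifting
	uint64_t ok2 = 1ULL << x;          // or just use a 64-bit literal
	return ok1 | ok2;
}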


#5069689 Writing Game Engine!

Posted by Hodgman on 14 June 2013 - 01:28 AM

A generic engine is too nebulous a concept; it can be so many different things that, in isolation, it will never be done. There are always more things it could be doing, or better ways in which it could be doing the things it already does.

 

IMHO, an engine needs to be driven by a customer. id Tech is always driven by a new Quake/Doom/etc, Unreal Tech is always driven by a new Unreal/Gears/etc, CryEngine is driven by a new FarCry/Crysis, and so on.

Gamebryo failed because they never ate their own dog food (they made/sold a crappy engine, without ever making their own games on it).

The "dog food test" should always be applied to an engine, so I'd avoid others like BigWorld, etc, as they failed to make a game on their own tech...

You have to eat your own dog food.

 

Pick a game you want to make, figure out what features that game needs from an engine, and then develop your engine to support those features!

Also, the only way that you can test whether you've done a good job is to actually make the game! So, you should always make the game in parallel with the engine. Don't finish the engine first and then make the game, because that just won't work. You will find problems with the engine once you try to use it, and you will have to re-do engine work while making the game, so make them side-by-side, so that they can both influence each other.

 

Then if you want to keep making the engine, make another game using the same process!

 

 


The two are mutually exclusive in my opinion. You either write what you need to make a specific game work, or you write an engine that allows you to make several games once that engine is complete. Otherwise you'll quickly lose focus and never finish what you started.
I started writing an engine 5 years ago and I haven't finished yet.

I think you just disproved your own point. You're making an engine that has no specific finish line, no goal, no hard list of requirements to fulfil, and no way of evaluating/testing its merit... so you've not finished ;)

Having an explicit purpose for the engine (it must support the needs of this game) gives it a solid focus, and a framework for evaluation.




#5069681 graphics specialization

Posted by Hodgman on 14 June 2013 - 12:45 AM

My #1 question is, for anyone who's working in the industry, how much demand do you foresee for custom graphics work in the future?

I describe myself as a "graphics programmer" on LinkedIn, and I get approached by recruiters on there for graphics programming jobs about once a month with decent salaries on offer (relocation required though -- Europe, North America, Asia).
I quit my last job as a graphics programmer about a year ago, and that company hasn't been able to find a candidate to replace me yet, so I still do occasional contract work for them.
So, in my experience, we are in good demand right now.
 

Given the budget busting nature of AAA games and the widespread availability of (from what I can tell) solid middleware, are studios likely to just use existing game engines instead of doing their own in-house rendering work? I guess I don't have a clear sense of what a graphics specialist would do in a studio that licenses an existing engine, or if one is needed at all.

There are a few kinds of roles that a graphics programmer can fill.
1) Interacting with the underlying graphics API for each platform, and building cross-platform abstractions on top of them -- this is done within the engine.
2) Building the general rendering framework, either with the raw APIs, or with a cross-platform abstraction above. Lighting systems, deferred rendering, generic post-processing chains, etc...
3) Game specific rendering requirements. e.g. motion blur on a particular character, flame jets for some specific attack, "distortion" and smoke over a spawning animation, etc...
 
Generally #1 is done by the engine team and #3 is done by the game team. #2 could be done by either, depending on the project.
Any work done by the game-side graphics programmers will likely be done using the cross-platform API provided by the engine, rather than the underlying raw APIs (GL/D3D/etc).
 
I don't think that any two games that I've worked on have ever used the exact same lighting and post-processing setup. Generally things are tweaked specifically for the needs of each game, with the engine acting as a starting point and a flexible framework for making these changes.
 

Assuming that there will still be need for graphics guys going forward, what do you guys think is the quickest path to becoming able to make useful contributions to a game? Should I try and see what I can contribute to an open source game engine? Make a software rasterizer from scratch to learn all the fundamentals?
 
This field seems to move really quickly, so I don't want to spend a lot of time learning all kinds of overly specific stuff that's actually been obsolete in industry practice for several years. How do I avoid that?

It's hard to say... When Quake blew everyone out of the water in 1996 with their efficient 6DOF software rasterizer, which used BSP for polygon sorting to avoid the need for a z-buffer, they were implementing an idea that was published in 1969. You still see the same themes now, e.g. Splinter Cell Conviction came out in 2010, using a "brand new" occlusion culling technique, which was first published in 1993.
Software rasterization was always a kind of rite of passage for graphics programmers, but just a toy to learn the basics, seeing as we all use hardware rasterizers now. But these days it has regained a bit of popularity, with people using software rasterizers this decade to do occlusion rasterization on the CPU in order to reduce the work that's sent to the GPU. The BRDF papers I've been reading lately span the last three decades...
The above examples just go to show that, like fashion, ideas in this field seem to routinely fall out of favour and later be rediscovered again ;)
 
Personally, I quite enjoyed using the Horde3D engine as a graphics programming playground. It does all the OpenGL work for you, but requires you to write your own shaders. It also lets you modify the rendering pipeline, so that you can configure different post-processing effects, or change the lighting pipeline from deferred rendering, to light-pre-pass, to forward shading, to inferred shading, etc... This is good practice for tasks #2 and #3 above.
To gain experience in task #1, you've got to make a small framework (like Horde3D) using some API. Bonus points if you port it to more than one API (e.g. a D3D and a GL version). Generally, if you've learned one graphics API, then learning a 2nd (3rd, 4th) will be easy, as they all embody the same ideas, but in different ways.
 

Sorry if this is a silly question, but why do shaders in particular need to be made fresh for each game?

On current-gen consoles or mobile platforms, you don't really have enough performance to spare to use generic techniques and achieve "next gen" visuals. Everything is a constant trade-off between features, quality, execution time (performance) and effort (time to implement). The approximations or assumptions that are valid for one game might not hold for another game.
Pixel shaders in particular are the most important "inner loop" out of all your code -- they can be executed millions of times per frame (1280*720 is almost 1M, and each pixel on the screen will be affected by many different passes in a modern renderer).


#5069668 Screen Space Ambient Occlusion?

Posted by Hodgman on 13 June 2013 - 11:25 PM

You draw a full-screen quad, rendering to a colour buffer, using the depth-map as input, and using a shader that computes SSAO...

This shader takes a bunch of depth samples for each pixel, and uses them to calculate occlusion.
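As a rough sketch, in D3D9 terms, of how that pass could be set up (all the resource names here are hypothetical; the interesting work happens inside the SSAO pixel shader itself):

#include <d3d9.h>

// Full-screen SSAO pass: render into an AO colour target, reading the depth texture.
void DrawSsaoPass(IDirect3DDevice9* device,
                  IDirect3DSurface9* aoTarget,       // colour buffer that receives the AO term
                  IDirect3DTexture9* depthTexture,   // depth written in an earlier pass
                  IDirect3DPixelShader9* ssaoShader, // computes occlusion from nearby depth samples
                  float width, float height)
{
	device->SetRenderTarget(0, aoTarget);
	device->SetTexture(0, depthTexture);
	device->SetPixelShader(ssaoShader);

	// Pre-transformed full-screen quad (XYZRHW), so no vertex shader is required.
	struct Vertex { float x, y, z, rhw, u, v; };
	Vertex quad[4] = {
		{ -0.5f,         -0.5f,          0, 1, 0, 0 },
		{ width - 0.5f,  -0.5f,          0, 1, 1, 0 },
		{ -0.5f,         height - 0.5f,  0, 1, 0, 1 },
		{ width - 0.5f,  height - 0.5f,  0, 1, 1, 1 },
	};
	device->SetVertexShader(NULL);
	device->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
	device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(Vertex));
}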

 

Is there any particular SSAO tutorial or article that you're using?




#5069306 [c++] Abstract mesh geometry data

Posted by Hodgman on 12 June 2013 - 11:02 PM

In my engine:

A mesh contains an array of buffers (containing vertex/index data), an array of stream formats and an array of sub-meshes.

A stream format describes which buffers need to be bound to which slots, and what buffer-index/format/offset/stride to use for each attribute.

A sub-mesh points to a stream-format and a material, and contains a draw-call (arguments to DrawIndexedPrimitive, etc).

 

This lets any kind of vertex layout be used, where a mesh might have position/normal/texcoords interleaved in one buffer, another might have position/normal each in their own buffer, and another might have positions and normals in the one buffer but packed end-to-end rather than interleaved.
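As a rough sketch of that layout (simplified, not the engine's actual types):

#include <cstdint>
#include <vector>

struct Buffer { std::vector<uint8_t> data; };        // raw vertex or index bytes

struct AttributeDesc                                 // one vertex attribute (position, normal, UV, ...)
{
	int      bufferIndex;                            // which Buffer the attribute lives in
	uint32_t format;                                 // e.g. float3, half4, ...
	uint32_t offset, stride;                         // where it starts and how far apart elements are
};

struct StreamFormat                                  // which buffers bind to which slots, and how to read them
{
	std::vector<int>           bufferSlots;          // buffer index bound to each input slot
	std::vector<AttributeDesc> attributes;
};

struct DrawCall { uint32_t firstIndex, indexCount, baseVertex; }; // arguments to DrawIndexedPrimitive etc.

struct SubMesh
{
	int      streamFormat;                           // index into Mesh::streamFormats
	int      material;                               // handle/index of a material
	DrawCall draw;
};

struct Mesh
{
	std::vector<Buffer>       buffers;
	std::vector<StreamFormat> streamFormats;
	std::vector<SubMesh>      subMeshes;
};

Interleaved vs. non-interleaved layouts then just become different AttributeDesc offsets/strides over the same buffers.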




#5068641 Game Development Studios needed

Posted by Hodgman on 10 June 2013 - 05:29 AM

Looking through projects on Kickstarter, many of them seem to match the format of my own idea, and yet their development goal is set at approx. $50k (I am particularly talking about these guys: http://www.kickstarter.com/projects/522716131/legends-of-dawn/posts - they already have one title behind them). Is it because they are a complete team already? If so, wouldn't it be possible to hire a studio for, let's say, +25%, $70K (or even less)?

Look at the reasons they give for asking for your kick-starter money:
 

It's important for us to stay independent since we would like to maintain full control over the game and have freedom to continue to develop the world in a way we think is the best.
Our funds dried up just before our last two steps needed to be able to self-release the game:
 
1. We need to purchase and integrate VOs
2. We have to pay for a few plug-in licences that we integrated in our Dreamatrix Engine
 
We've allowed ourselves max 2 months for completing the game and we're confident that we can deliver a complete product by that date.

So: they've only got two months of work left on the project, they've been self-funded so far, and they're independent, which means that their staff are likely working "for free" at the moment (being an owner of the studio will be their pay-cheque). Also, they only want the $25K to pay their voice-acting bill and a few small licensing fees.
This is a typical "kick-finisher" project ;)
 
Then look at their stretch goals, for if they go over their requested $25K:

$80,000 -- Mod Tools

If the Kickstarter campaign reaches $80,000 we will take our in-house rpg ready tool to the next level and open it to the community. You will be able to create your own worlds and share them with the community. 
$150,000 -- Multiplayer
Should we hit $150,000, we'll be able to offer multiplayer after the main release of the game.
... For multiplayer we anticipate additional 4 months of work

So 80-25 == $55k to polish their development tools enough for public release, and 150-80 == $70k to add multiplayer over 4 months, at $17.5k per month.

Dreamatrix is an independent ship with crew of 8 full timers (and a number of freelancers)

I would guess this means that there are 8 people who are currently not being paid a wage at all, but are shared owners in the company, and a small number of contractors to perform jobs they can't do themselves (like the voice acting that they want money for).
 
In a normal situation, salaries are going to be one of your biggest expenses. The average game developer salary is $80K a year -- so that $17.5K a month is enough to pay ~3 people's wages.
Under normal circumstances, you'd want over $50K per month to hire a team like theirs, and you'd need to hire them for at least a year to produce a game.

Then, a large/AAA console game, like Skyrim, etc, will have a team that is 10x bigger than their team (so 10x more expensive).




#5068639 Downsampling filters for mipmaps

Posted by Hodgman on 10 June 2013 - 05:03 AM

The nVidia texture tools codebase contains a few different filters implemented on the CPU. The last time I implemented a CPU mip-generation tool (offline though, not real-time), I used their Kaiser filter as the default choice.
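For comparison, the simplest possible option is a 2x2 box filter; a minimal sketch below (a Kaiser filter replaces the plain 2x2 average with a wider weighted kernel, and a real tool also handles odd sizes, sRGB, alpha-weighting, etc):

#include <cstdint>
#include <vector>

// Downsample an RGBA8 image to half size with a 2x2 box filter (width/height assumed even).
std::vector<uint8_t> BoxDownsample(const std::vector<uint8_t>& src, int width, int height)
{
	int w2 = width / 2, h2 = height / 2;
	std::vector<uint8_t> dst(w2 * h2 * 4);
	for (int y = 0; y < h2; ++y)
	for (int x = 0; x < w2; ++x)
	for (int c = 0; c < 4; ++c)
	{
		int a = src[((y*2  ) * width + (x*2  )) * 4 + c];
		int b = src[((y*2  ) * width + (x*2+1)) * 4 + c];
		int d = src[((y*2+1) * width + (x*2  )) * 4 + c];
		int e = src[((y*2+1) * width + (x*2+1)) * 4 + c];
		dst[(y * w2 + x) * 4 + c] = uint8_t((a + b + d + e + 2) / 4); // rounded average of the 2x2 block
	}
	return dst;
}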




#5068628 Game Development Studios needed

Posted by Hodgman on 10 June 2013 - 04:13 AM

I doubt many people here, if anyone, have experience in hiring a studio to implement a big game concept (like an RPG), because it's quite a large and expensive endeavour!

I've been on the other side - working at a studio that is approached by a client who wants a quote on their game idea. This process is quite involved, so it requires the studio to have a lot of trust in the client's ability to actually produce the cash in the end. We would draw up some draft designs and plans from their concepts, which are used to make a draft schedule, budget and milestones, which the client then haggles over... and/or accepts and "greenlights" the project, signing all the legal contracts agreeing to certain payments being made in exchange for each of the milestones.

For any kind of game that's comparable to the average console game that you find on a shelf, you'd be looking at around $1-10M...

That all said, I work with an outsourcing group who specialize in putting together game teams, either for small parts of games or whole projects... But as with any studio, the right amount of money has to be there ;)


#5068554 What tone-mapping technique are you using?

Posted by Hodgman on 09 June 2013 - 10:03 PM

Thanks for the replies, links and app suggestions :D

How do you define looking good ?

When I put real-world brightness values into my existing code (where a patch of one object might be 10000x brighter than another due to attenuation), there was a lot of clipping to white and/or complete loss of detail. When I look at the same kind of scene in real life, there isn't a corresponding loss of detail; I can still perceive the scene fine (except for the light sources themselves, which saturate to white, bloom out and burn into my retina ;)).

...but yes, this is a very subjective thing: I want to use physical values and end up with a rendition that's artistically appealing.
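For reference, the simplest curve that avoids hard clipping is the classic Reinhard operator; a tiny sketch below (just to illustrate the range-compression idea, not a recommendation, and the exposure parameter is a hypothetical artist control):

// Map an unbounded scene luminance into [0,1).
float ReinhardToneMap(float sceneLuminance, float exposure)
{
	float l = sceneLuminance * exposure;
	return l / (1.0f + l); // inputs 10000x brighter still land just below 1.0 instead of clipping
}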




#5068548 Stencil shadow volumes - a thing of the past?

Posted by Hodgman on 09 June 2013 - 09:26 PM

I've still used them in current-gen console games in 2012/2013, but yes, they're becoming more and more rare. If you actually do want pixel-perfect crisp edges, rather than soft shadows, then they're perfect. If you want soft edges though, then shadow-mapping gives you a lot more options.

To soften my stencil shadow edges, I ended up using a screen-space tile-culled gaussian-ish separated bilateral blur pass (that's a mouthful!)...

 

One of their downsides that often isn't mentioned is that they're a fill-rate hog. Often tutorials explain how to extrude the volume "to infinity", so there is no limit to how far a shadow can be cast. The issue is that when looking at a volume from side-on, a large and complex object can fill your screen with many layers of polygons that have to be rasterized. e.g. bad mspaint illustration:

[illustration: a shadow volume seen side-on, filling the screen with many overlapping layers of polygons]

There's no real shader processing going on, and the bandwidth is minimal, but nonetheless the overdraw can still be quite inefficient when used on large, complex scenes.

 

 

but issues involving z-fighting and the per-polygon nature of these calculations always ensure self-shadowing issues crop up

I'm not sure why you're having so many self-shadowing issues. On my last project we used them specifically for character self-shadowing, and used a different technique for character-on-world shadows!

Z-Fighting is common to most shadow-mapping techniques as well. The usual advice and tricks apply, like setting your near plane as far away as possible, or using polygon offsets and z-bias values...
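e.g. on D3D9 those bias states look something like this (the numbers are hypothetical and need tuning per scene; both states take a float reinterpreted as a DWORD):

#include <d3d9.h>

void ApplyShadowDepthBias(IDirect3DDevice9* device)
{
	float depthBias      = 0.00002f; // constant bias, in normalized depth units
	float slopeScaleBias = 1.5f;     // extra bias for polygons at glancing angles
	device->SetRenderState(D3DRS_DEPTHBIAS,           *(DWORD*)&depthBias);
	device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, *(DWORD*)&slopeScaleBias);
}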




#5068546 HW accel Matrices on android (NDK)

Posted by Hodgman on 09 June 2013 - 09:06 PM

for example, on armv7 it uses the NEON extension for matrix multiply

NEON is a SIMD instruction set, which is why I thought that's what you were asking for :P
That stackoverflow discussion is talking about math libraries that have been ported to use NEON intrinsics (and SSE, AltiVec, etc).

 

Math code that's hand-written in assembly won't be any better/worse than math code that's carefully written in C/C++. I'd recommend just using a higher level language and looking at the assembly output from your compiler to double-check that it's doing an OK job (and if it's not, don't resort to writing asm yourself; tweak the high level code so that the compiler performs better).

 

To use CPU-specific assembly instructions (like NEON ones) from high-level code, compilers provide "intrinsic functions". E.g. GCC's ones for NEON are here.
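For example, a small sketch of a four-wide multiply-add using the NEON intrinsics from <arm_neon.h> (the function name is just for illustration):

#include <arm_neon.h>

// result = a * b + c, four floats at a time.
void MulAdd4(const float* a, const float* b, const float* c, float* result)
{
	float32x4_t va = vld1q_f32(a);          // load 4 floats
	float32x4_t vb = vld1q_f32(b);
	float32x4_t vc = vld1q_f32(c);
	float32x4_t r  = vmlaq_f32(vc, va, vb); // r = vc + va*vb
	vst1q_f32(result, r);                   // store 4 floats
}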




#5068391 How come an inverse of a transformation will turn world into object?

Posted by Hodgman on 09 June 2013 - 04:47 AM

A "world" matrix is usually short for an "object to world" transform matrix (given an object-space point, it will transform it to a world-space point).

If you invert it, you get a "world to object" transform matrix (given a world-space point, it will transform it to an object-space point).

 

Same with other matrices:

* The "view" matrix is the "world to view" matrix. Inverting it gives you the "view to world" matrix.

* If you have an object that represents a camera, then that object's "world" matrix (i.e. its "object to world" matrix) is the inverse of the "view" matrix (because "view-space" is just the "object-space" of the camera).

* The projection matrix is the "view to projection" matrix. Inverting it gives you the "projection to view" matrix.
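e.g. with D3DX, those relationships are just matrix inversions (a small sketch; the parameter names are only for illustration):

#include <d3dx9.h>

void InverseExamples(const D3DXMATRIX& objectToWorld, const D3DXMATRIX& cameraWorld)
{
	D3DXMATRIX worldToObject;
	D3DXMatrixInverse(&worldToObject, NULL, &objectToWorld); // "world" matrix inverted: world -> object

	D3DXMATRIX view;
	D3DXMatrixInverse(&view, NULL, &cameraWorld);            // view matrix == inverse of the camera's world matrix
}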




#5068374 HW accel Matrices on android (NDK)

Posted by Hodgman on 09 June 2013 - 12:52 AM

I assume that by hardware acceleration he means that they use SIMD.

 

There's some answers over here: http://stackoverflow.com/questions/981787/good-portable-simd-library




#5068372 why 'sampler_state' can only be used with effect

Posted by Hodgman on 09 June 2013 - 12:43 AM

The entries within a sampler state structure need to be passed to the IDirect3DDevice9::SetSamplerState function.
 
When you use the FX framework, it does this for you. You can set many kinds of render-states in your techniques, and the FX framework will pass those values on to the device.
 
When you're just using raw shaders (not "Effects"), then the shader compiler just produces binary blobs of shader code, which you use to construct shader objects, which you pass to IDirect3DDevice9::SetPixelShader/SetVertexShader. These raw shader objects do not contain any of the information contained in the sampler-state, technique, pass, etc blocks, which the FX framework uses.

 

When calling SetPixelShader/SetVertexShader, only the current shader program code is changed, they don't/can't set any render-states. So, you must make the calls to SetSamplerState and SetRenderState yourself (which is what the Effects Framework does).
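For example, roughly what the FX framework would do for a typical sampler_state block on sampler 0 (the filter/address values here are just illustrative):

#include <d3d9.h>

void ApplySamplerStates(IDirect3DDevice9* device)
{
	device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
	device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
	device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
	device->SetSamplerState(0, D3DSAMP_ADDRESSU,  D3DTADDRESS_WRAP);
	device->SetSamplerState(0, D3DSAMP_ADDRESSV,  D3DTADDRESS_WRAP);
}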

 

In my engine, I use my own syntax for declaring sampler states, which I parse/extract from the shader code files myself while compiling them, and then later pass to SetSamplerState.





