

C0lumbo

Member Since 02 Nov 2012

#5136346 Optimization in Games

Posted by C0lumbo on 04 March 2014 - 12:02 PM

The most important thing when doing optimisation is to take the time to measure accurately. It annoys me when people optimise without measuring first, because then they have no idea whether the optimisation has any real-world effect.

 

I'm coming at this from an engineering perspective, but the same applies to any other form of optimisation. It's perhaps harder with game design, and although analytics help, you have to be particularly careful when interpreting your measurements.




#5136252 Large textures are really slow...

Posted by C0lumbo on 04 March 2014 - 12:24 AM

I think that tanzanite7 is getting frustrated because he's essentially asking the question:

 

"In a situation where HSR and early-z are not an option, is discard still bad?"

 

And he's getting a lot of answers about HSR and early-z.

 

TBH, I'd have to measure, but one possible reason that discard might be worse than the alpha blend in this scenario is that discard adds a dynamic branch to the shader (by dynamic, I mean that some fragments within a 2x2 quad might take the discard path while others don't). However, I'd imagine the relative costs are very hardware and shader dependent, and I wouldn't be surprised if you could create a shader where discard clearly improves performance (e.g. if the shader was doing complex per-pixel deferred lighting operations). Where you're just blending a simple texture onto the screen, though, I think the cost of the discard probably outweighs the cost of the alpha blend.
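
For illustration, here's a minimal GLSL-style sketch of the discard variant (the uniform/varying names are just placeholders, not from the thread):

#version 330 core
uniform sampler2D uDiffuse;
in vec2 vTexCoord;
out vec4 fragColour;

void main()
{
    vec4 colour = texture(uDiffuse, vTexCoord);

    // This 'if' is the dynamic branch: some fragments in a 2x2 quad may discard
    // while their neighbours carry on, so the quad can diverge.
    if (colour.a < 0.5)
        discard;

    fragColour = colour;
}

// The alpha-blend variant is the same shader minus the if/discard, with
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) hiding the transparent texels instead.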




#5134340 How to get good fast?

Posted by C0lumbo on 25 February 2014 - 12:00 AM

Here is a how-to guide to learn programming in just 21 days: http://abstrusegoose.com/249




#5133884 Legal risk in editors, and giving players freedom to edit

Posted by C0lumbo on 23 February 2014 - 10:34 AM

I am not a lawyer, but I think that as long as you are not hosting the content, then there is no problem. e.g. Adobe can hardly be held responsible for what people make in Photoshop, be it copyright infringement or even a criminal act (although, interestingly, Photoshop supposedly includes code that attempts to catch currency forgers).

 

As an example, there are plenty of sports games without licenses that let you modify the names of teams and players. I believe this is perfectly legal, even though it's obvious the feature is there to let the player base use licensed names. Such games often provide ways for players to save and share their modified team rosters, and this too is perfectly legal. However, if the method of sharing the rosters involved the developer's website/servers in any way, then most likely the developer would get sued. i.e. Let your users save the data as a file and share it through their own websites and you're fine; provide the service that lets users share the data and you're in trouble.

 

Here is my opinion on your specific questions:

 

-I guess allowing them to change 3D models and textures is not a good idea. AFAIK, I'm responsible for whatever content they create. Right? You are not responsible

-What about smaller things like names, color templates, etc. You are not responsible

-What about a map editor? Edit the heightmap/tiles, place the built-in objects etc. You would only be responsible if you run some infrastructure that allows players to share the maps

-Is it any different when the game is single-player/online multiplayer? If you are hosting multiplayer servers then you would be responsible for making sure user created content is appropriate. If you're not hosting the user created content then you must still make an effort to warn players that they may (i.e. will definitely) be exposed to obscene content in multiplayer play.

-What if they create a mod for your game? Are you still responsible for its content? You are not responsible unless you host the mods on your website or something

-What if I don't explicitly allow them to change things like textures, sounds etc, but they can change the files in the game's data folder? You are not responsible

-What if I do protect the files with checksums or something, but someone hacks it? You are not responsible, but if you ship something inappropriate which can be accessed through hacking then you are responsible (e.g. Hot Coffee, or that guy who padded a console DVD with South Park episodes instead of with zeroes; it was inaccessible except by hacking, but they still had to recall all the DVDs)

 

Again, I am not a lawyer; I may be wrong about any of this.

 

Edit: I suppose the City of Heroes vs Marvel case is relevant: http://en.wikipedia.org/wiki/City_of_Heroes




#5133769 Handling Post Processing after Deferred Shading

Posted by C0lumbo on 23 February 2014 - 02:21 AM

Usually the approach is to have 2 render targets and 'ping pong' between them. Let's say the result of your deferred rendering scene ends up in RT1.

 

Post Effect A renders to RT2 using RT1 as a texture.

Post Effect B renders to RT1 using RT2 as a texture.

Post Effect C renders to RT2 using RT1 as a texture.

 

etc.

 

That's the basic approach, although the details can get a bit messy. e.g. You might want to reuse some of your G-buffer render targets rather than allocating new ones. You might want to do some of your post effects at reduced resolution. You might want to combine some post effects into single passes to better balance ALU and bandwidth usage in your shaders.
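
In pseudo-code, the ping-pong loop looks something like this (the names are placeholders):

//Pseudo-code
RenderTarget* rt[2] = { &RT1, &RT2 };
int src = 0;                                     // the deferred scene result starts in RT1
for (int i = 0; i < numPostEffects; ++i)
{
    int dst = 1 - src;                           // write to whichever target we didn't just read
    SetRenderTarget(rt[dst]);
    DrawFullScreenPass(postEffect[i], rt[src]);  // previous result bound as a texture
    src = dst;                                   // this pass's output is the next pass's input
}
// rt[src] now holds the final image; copy/resolve it to the back buffer.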




#5133020 Checkerboard algorithm?

Posted by C0lumbo on 20 February 2014 - 01:16 PM

Try this:

 

//Pseudo-code
Colour checkerboardColour(Vec p, float fScale)
{
    // Use floor() rather than a plain (int) cast so the pattern doesn't break around negative coordinates
    int ix = (int)floorf(p.x / fScale);
    int iy = (int)floorf(p.y / fScale);
    return ((ix + iy) & 1) == 0 ? white : black;
}




#5132155 Using OpenGL for particle systems...

Posted by C0lumbo on 17 February 2014 - 04:32 PM

If you were just trying to implement a particle system for the sake of getting it done, I'd recommend you keep it simple: generate all four vertices on the CPU and submit them as indexed triangles. In a simple 2D title, neither of the more complex approaches is likely to make any material difference to the final performance. However, it sounds like pushing yourself to learn new techniques is a big part of why you want to do it.

 

I think instancing doesn't bring much to the table for a particle system. Per instance, you'll need to send position, size, rotation and texture coordinates, and it probably won't end up measurably faster than generating and sending all four vertices.

 

If I understand what you mean by the geometry shader/transform feedback approach, you're talking about updating your particles in the vertex shader via transform feedback, and then using the geometry shader at render time to expand each particle point into a quad. That sounds much more worthwhile, both in terms of learning about interesting parts of the pipeline and for getting some impressive particle throughput if you want to reuse the system for future demos.
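
As a rough GLSL sketch of the rendering half of that idea, a geometry shader along these lines expands each particle point into a screen-facing quad (the attribute names are placeholders, and it assumes the vertex shader outputs a view-space position):

#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in float vSize[];           // per-particle size passed through from the vertex shader
out vec2 gTexCoord;
uniform mat4 uProjection;

void main()
{
    vec4 centre = gl_in[0].gl_Position;          // particle centre in view space
    vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        gl_Position = uProjection * (centre + vec4(corners[i] * vSize[0], 0.0, 0.0));
        gTexCoord = corners[i] * 0.5 + 0.5;      // map corner offsets to 0..1 texture coords
        EmitVertex();
    }
    EndPrimitive();                              // the 4 strip vertices form the quad
}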

 

I'd still recommend starting with the CPU approach first, though: that way you have a reference point to check that your fancy rendering path is producing correct results, a fallback path for older graphics cards that you can use in your game, and a baseline to measure the speed boost against (in a job interview it's always nice to have solid performance numbers when you talk about optimisations you made).




#5131776 Making a texture render to itself

Posted by C0lumbo on 16 February 2014 - 01:15 PM

I'd suspect that the driver is detecting the potentially trouble-causing situation and, behind your back, making a copy of the render target to use as the source texture. If that's the case, you get correct rendering results on your machine, but there's no guarantee it'll work on any other driver/hardware, and the copy means you get sub-optimal performance.

 

Just a guess though.




#5130977 Power of normal mapping and texture formats?

Posted by C0lumbo on 13 February 2014 - 01:03 AM

Thanks, I'll get into the links.
For diffuse maps I'll play around with DXT3 and 5 (not sure yet what 5 brings compared to 3); with this I'll include the alpha map in the diffuse texture to save an extra texture for blended materials. I'm using IDirect3DTexture9 objects (and D3DX) so this should work fine.

For normal maps I'll stay with the DDS format for now, to keep things structured/standardised, but without DXT compression.

 

If you're not sure which to use, use DXT5 instead of DXT3; I can't think of any real-life situation where DXT3 is the better choice (DXT3 vs DXT5 is often described as a choice between sharper and smoother, but this is nonsense IMO). DXT3 stores alpha as explicit 4-bit values per texel, while DXT5 interpolates alpha between two 8-bit endpoints per block, which handles smooth alpha gradients far better. Unless you specifically want only 16 different shades of alpha for some reason, use DXT5.




#5130023 Aspect ratio vs. display ratio

Posted by C0lumbo on 09 February 2014 - 12:44 AM

So there are two separate issues:

 

1. There's lots of different aspect ratios and resolutions to handle.

2. Some resolutions have aspect ratios that don't match the display's physical aspect ratio, resulting in pixels that aren't square. In that situation, if you rendered a quad with a circle texture on it and rotated it around, it would squash and stretch as it rotated.

 

It sounds to me like the OP is happy with his solutions for issue #1 (display more where possible, or use pillarboxing/letterboxing if you have to), but is mainly concerned with issue #2.

 

IMO, you can just ignore issue #2. I think pretty much all machines offer resolutions with square pixels, and if your users pick one that doesn't match, then that's their lookout.
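
For what it's worth, here's a rough sketch of the pillarbox/letterbox viewport calculation for issue #1 (the function and variable names are my own):

// Fit the game's target aspect ratio inside the screen, adding vertical bars
// (pillarbox) or horizontal bars (letterbox) as required.
void CalcViewport(int screenW, int screenH, float targetAspect,
                  int* outX, int* outY, int* outW, int* outH)
{
    float screenAspect = (float)screenW / (float)screenH;
    if (screenAspect > targetAspect)
    {
        // Screen is wider than the game: bars on the left and right.
        *outH = screenH;
        *outW = (int)(screenH * targetAspect + 0.5f);
    }
    else
    {
        // Screen is taller than the game: bars on the top and bottom.
        *outW = screenW;
        *outH = (int)(screenW / targetAspect + 0.5f);
    }
    *outX = (screenW - *outW) / 2;
    *outY = (screenH - *outH) / 2;
}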




#5129932 Power of normal mapping and texture formats?

Posted by C0lumbo on 08 February 2014 - 04:31 PM


- How about texture compression?

(my normal map tool, Shadermap, has possibilities to save the maps as "DDS DXT 1 / 3 or 5" for example)

- What exactly is DDS DXT?

 

This is a really, really good article explaining the BC formats, starting with DXT1-5 and then going on to talk about the newer DX11-level ones, which you can't use but are still worth knowing about. http://www.reedbeta.com/blog/2012/02/12/understanding-bcn-texture-compression-formats/

 

My rule of thumb is to always use DXT compression for diffuse maps, and only roll it back to bigger formats if it looks really bad, which for textures destined to be used on 3D models is really quite rare.

 

Normal maps are trickier. As the article suggests, using DXT5 with X encoded in the RGB, Y encoded in the alpha and Z reconstructed in the shader seems reasonable. Note that the increased complexity of the shader as you reconstruct the normal won't necessarily mean the shader is slower than an 8888 solution: the fact that you'll be fetching 1/4 of the data might* more than compensate for the extra calculations the shader has to do. If you don't want to invest the time in modifying your shaders so you can use DXT5, then use a 16-bit texture format like 565, as it'll halve your texture footprint for pretty minimal effort.

 

*I say might. I strongly suspect it will be a win on most graphics cards, but a loss on some others. Graphics programming sucks sometimes.
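
As a rough GLSL-style sketch of that reconstruction (the sampler and variable names are mine, not from the article), assuming X was packed into the green channel and Y into the alpha channel before DXT5 compression:

#version 330 core
uniform sampler2D uNormalMap;
in vec2 vTexCoord;
out vec4 fragColour;

void main()
{
    vec4 s = texture(uNormalMap, vTexCoord);
    vec2 xy = vec2(s.g, s.a) * 2.0 - 1.0;         // unpack X and Y back to the -1..1 range
    float z = sqrt(max(0.0, 1.0 - dot(xy, xy)));  // reconstruct Z assuming a unit-length normal
    vec3 normal = vec3(xy, z);
    fragColour = vec4(normal * 0.5 + 0.5, 1.0);   // just visualising the normal here
}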




#5128138 Polygon based terrain LOD

Posted by C0lumbo on 02 February 2014 - 06:05 AM

There's a fairly old series of articles by Jonathan Blow about doing LOD for environmental triangle soups. It doesn't sound like it's simple to implement.

 

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%201/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%202/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%203/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%204/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%205/

 

Parts 1 and 2 talk about heightfield LOD, then it gets into the more general 3D model case in parts 3, 4 and 5.




#5126261 Economics problem

Posted by C0lumbo on 25 January 2014 - 12:19 AM

I would say your equation should be: F / (T + C) rather than F / (T * C)

 

You then need to work out T in terms of money. As a starting point, why not use the person's hourly wage (or per-second wage, if your time is in seconds)? Maybe you could scale it according to some personality trait that represents how much they value their free time.
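
Something along these lines, where the names are made up and the personality scaling is my own assumption:

//Pseudo-code
float ScoreOption(float fun, float timeHours, float moneyCost,
                  float hourlyWage, float freeTimeValue) // freeTimeValue: personality scale, e.g. 0.5 - 2.0
{
    float timeAsMoney = timeHours * hourlyWage * freeTimeValue; // convert T into money
    return fun / (timeAsMoney + moneyCost);                     // F / (T + C), with T expressed as money
}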




#5125982 Structure of Development Costs for games

Posted by C0lumbo on 23 January 2014 - 03:56 PM

If you're trying to do back-of-the-envelope calculations, then you can do a lot worse than calculating one man-month as $10,000 (http://www.altdevblogaday.com/2011/11/13/10000-is-the-magic-number/). This (in theory) covers the cost of hardware, software, wages, taxes, office rent, insurance, utilities, pensions, etc.

 

So if you hear of a game that cost $1,000,000 and was developed over the course of 10 months, then you can guesstimate that the team size was about 10 people.

 

Of course, when you're doing it as an indie, you're going to be cutting out as many of those costs as possible, and $1,000,000 seems like an impossibly huge budget; but then you're probably using free software, working from home, not paying yourself a proper salary/pension, etc.




#5125251 What is the general practice to deal with animationsets?

Posted by C0lumbo on 21 January 2014 - 12:01 AM

There's nothing wrong with loading everything up at game start, provided you have a reasonably small set of things to load, so that you're not making the user wait too long and you're not exceeding the memory budget of your target platform.

 

Packing multiple assets into a single file is often a more efficient way to load things than having lots of separate loose files, particularly if you're using a push model of loading, so you get very sequential access as you read through the file (I don't know the details of the .x format).
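
As an illustration of the idea (this is not the .x format, just a made-up layout), a simple pack file might look something like this:

#include <stdint.h>

// One table-of-contents record per asset in the pack.
struct PackEntry
{
    char     name[64];   // asset name, e.g. "hero.mesh" or "hero_run.anim"
    uint32_t offset;     // byte offset of this asset's data within the pack file
    uint32_t size;       // size of this asset's data in bytes
};

// Pack file layout: header, then the table of contents, then the asset data
// packed back-to-back so loading the whole file is one long sequential read.
struct PackHeader
{
    uint32_t numEntries; // followed by numEntries PackEntry records
};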

 

The majority of big games, particularly ones with large amounts of cutscenes, would have at least some of their characters' animations loaded on demand. For most indie games, packing the character and its animations up together and loading the whole lot at boot or at game start seems a pretty sensible choice.





