

C0lumbo

Member Since 02 Nov 2012
Offline Last Active Today, 06:39 AM

#5139926 Getting a direction constant from normalized vector

Posted by C0lumbo on 18 March 2014 - 02:32 AM

People are probably correct in suggesting that you shouldn't need this function at all, but to implement it I would perhaps do something like:

 

float fThreshold = cos(PI / 8);

if (x > fThreshold)
    return East;
else if (x < -fThreshold)
    return West;
else if (y > fThreshold)
    return North;
else if (y < -fThreshold)
    return South;
else if (x > 0 && y > 0)
    return NorthEast;
else if (x > 0 && y < 0)
    return SouthEast;
else if (x < 0 && y > 0)
    return NorthWest;
else
    return SouthWest;

 

(This assumes positive x is east and positive y is north.)
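If you'd rather avoid the threshold chain, an equivalent trick is to quantize the angle from atan2 into eight sectors. A minimal sketch of that alternative (the direction names match the code above, but `directionFromVector` and the enum ordering are my own assumptions, chosen so sectors run counter-clockwise from East):

```cpp
#include <cmath>

enum Direction { East, NorthEast, North, NorthWest, West, SouthWest, South, SouthEast };

// Quantize the vector's angle into one of 8 compass sectors.
// std::atan2 returns [-pi, pi] with 0 pointing East; dividing by pi/4 and
// rounding to the nearest integer picks the nearest eighth-turn (-4..4),
// which is then wrapped into 0..7.
Direction directionFromVector(float x, float y)
{
    const float PI = 3.14159265358979f;
    int sector = (int)std::lround(std::atan2(y, x) / (PI / 4.0f));
    return (Direction)((sector + 8) % 8);  // wrap -4..4 into 0..7
}
```

This version also handles the exact-zero cases (e.g. x == 0, y == 0.7) that fall through the threshold chain.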




#5139643 [TEXTURE] BC7

Posted by C0lumbo on 17 March 2014 - 12:31 AM

BC7 is pretty great, incredibly versatile, and probably the best choice in a wide variety of situations. Chris_F points out the exceptions well, but I'd emphasize that when he says "For some textures BC7 will be too large, so for that you will want BC1", he's not just talking about the amount of memory available: there's a very real performance benefit from reducing the amount of texture data that has to be thrown around, so I don't think BC1 will disappear too quickly.

 

However, you touched upon its main disadvantage in your original question. BC7's versatility is largely down to the fact that you can choose from 8 different encoding schemes at a per-block level, and many of those schemes have multiple partition tables and other options. This means an exhaustive BC7 encoder has to do similar work to a BC1 encoder, but hundreds of times over. The result is that the time required to generate a decent-quality BC7 compressed version of a texture is huge; it's very slow even with a compute shader doing the heavy lifting.

 

That said, I have no doubt that those developing exclusively for hardware that supports BC7 (not very many people on PC yet!) will end up with the majority of their textures as BC7, especially if someone puts together a faster encoder with clever heuristics to reduce the search space.




#5136346 Optimization in Games

Posted by C0lumbo on 04 March 2014 - 12:02 PM

The most important thing when doing optimisation is to take the time to measure accurately. It annoys me when people optimise without measuring first, because then they have no idea whether the optimisation has any real-world effect.

 

I'm coming from an engineering perspective, but the same applies to any other form of optimisation. It's harder with game design, perhaps, and although analytics help, you have to be particularly careful when interpreting your measurements.




#5136252 Large textures are really slow...

Posted by C0lumbo on 04 March 2014 - 12:24 AM

I think that tanzanite7 is getting frustrated because he's essentially asking the question:

 

"In a situation where HSR and early-z are not an option, is discard still bad?"

 

And he's getting a lot of answers about HSR and early-z.

 

TBH, I'd have to measure, but one possible reason discard might be worse than the alpha blend in this scenario is that discard adds a dynamic branch to the shader (by dynamic, I mean that some fragments within a 2x2 quad might take the discard path while others don't). That said, the relative costs are very hardware- and shader-dependent; I wouldn't be surprised if you could create a shader where discard clearly improves performance (e.g. one doing complex per-pixel deferred lighting). But where you're just blending a simple texture onto the screen, the cost of the discard probably outweighs the cost of the alpha blend.




#5134340 How to get good fast?

Posted by C0lumbo on 25 February 2014 - 12:00 AM

Here is a how-to guide to learn programming in just 21 days: http://abstrusegoose.com/249




#5133884 Legal risk in editors, and giving players freedom to edit

Posted by C0lumbo on 23 February 2014 - 10:34 AM

I am not a lawyer, but I think that as long as you are not hosting the content, there is no problem. e.g. Adobe can hardly be held responsible for what people make in Photoshop, be it copyright infringement or even a criminal act (although, interestingly, Photoshop supposedly includes code that attempts to catch currency forgers).

 

As an example, there are plenty of sports games without licenses that let you modify the names of teams and players. I believe this is perfectly legal, even though it's obvious the feature is there to let the player base use licensed names. Such games often give players ways to save and share their modified team rosters, which is also perfectly legal. However, if the method of sharing the rosters involved the developer's website/servers in any way, then the developer would most likely get sued. i.e. Let your users save the data as a file and share it through their own websites and you're fine; provide the service that lets users share the data and you're in trouble.

 

Here is my opinion on your specific questions:

 

-I guess allowing them to change 3D models and textures is not a good idea. AFAIK, I'm responsible for whatever content they create. Right? You are not responsible

-What about smaller things like names, color templates, etc. You are not responsible

-What about a map editor? Edit the heightmap/tiles, place the built-in objects etc. You would only be responsible if you run some infrastructure that allows players to share the maps

-Is it any different when the game is single-player/online multiplayer? If you are hosting multiplayer servers then you would be responsible for making sure user created content is appropriate. If you're not hosting the user created content then you must still make an effort to warn players that they may (i.e. will definitely) be exposed to obscene content in multiplayer play.

-What if they create a mod for your game? Are you still responsible for its content? You are not responsible unless you host the mods on your website or something

-What if I don't explicitly allow them to change things like textures, sounds etc, but they can change the files in the game's data folder? You are not responsible

-What if I do protect the files with checksums or something, but someone hacks it? You are not responsible, but if you ship something inappropriate which can be accessed through hacking then you are responsible (e.g. Hot Coffee, or that guy that padded a console DVD with South Park episodes instead of with zeroes, it was inaccessible except by hacking, but they still had to recall all the DVDs)

 

Again I am not a lawyer, I may be wrong about any of this.

 

Edit: I suppose the City of Heroes vs Marvel case is relevant: http://en.wikipedia.org/wiki/City_of_Heroes




#5133769 Handling Post Processing after Deferred Shading

Posted by C0lumbo on 23 February 2014 - 02:21 AM

Usually the approach is to have 2 render targets and 'ping pong' between them. Let's say the result of your deferred rendering scene ends up in RT1.

 

Post Effect A renders to RT2 using RT1 as a texture.

Post Effect B renders to RT1 using RT2 as a texture.

Post Effect C renders to RT2 using RT1 as a texture.

 

etc.

 

That's the basic approach, although the details can get a bit messy. e.g. You might want to reuse some of your G-buffer render targets rather than allocating new ones. You might want to do some of your post effects at reduced resolution. You might want to combine some post effects into single passes to better balance ALU and bandwidth usage in your shaders.
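The ping-pong bookkeeping above boils down to swapping two pointers after each pass. A minimal sketch of just that scheduling logic (no actual GPU work; `RenderTarget`, `applyEffect`, and `runPostChain` are hypothetical placeholders):

```cpp
#include <utility>

struct RenderTarget { int id; };

// Placeholder for "render a full-screen post effect reading src, writing dst".
// A real implementation would bind dst as the render target and src as a texture.
void applyEffect(const RenderTarget& src, RenderTarget& dst)
{
    (void)src; (void)dst;  // GPU work elided in this sketch
}

// Run n post effects, ping-ponging between rt1 and rt2. The deferred scene
// result is assumed to start in rt1; returns the target holding the final image.
RenderTarget* runPostChain(RenderTarget& rt1, RenderTarget& rt2, int n)
{
    RenderTarget* src = &rt1;
    RenderTarget* dst = &rt2;
    for (int i = 0; i < n; ++i) {
        applyEffect(*src, *dst);
        std::swap(src, dst);  // last write becomes next pass's read
    }
    return src;  // src now points at the last target written
}
```

With three effects starting from RT1, the chain reads/writes RT1→RT2, RT2→RT1, RT1→RT2, so the final image ends up in RT2.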




#5133020 Checkerboard algorithm?

Posted by C0lumbo on 20 February 2014 - 01:16 PM

Try this:

 

//Pseudo-code
Colour checkerboardColour(Vec p, float fScale)
{
    return ((int)(p.x / fScale) + (int)(p.y / fScale)) % 2 == 0 ? white : black;
}
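One wrinkle with the cast-to-int version: truncation rounds toward zero, so negative coordinates get a double-width band at the origin. A floor-based variant avoids that (a sketch returning 0/1 in place of the colour names, with `checkerboardColour` as a hypothetical helper):

```cpp
#include <cmath>

// 0 = white, 1 = black. std::floor rounds toward negative infinity, so the
// pattern continues correctly across negative coordinates (a plain (int)
// cast truncates toward zero, which doubles the band at the origin).
int checkerboardColour(float x, float y, float fScale)
{
    long cx = (long)std::floor(x / fScale);
    long cy = (long)std::floor(y / fScale);
    return (int)((cx + cy) & 1);  // parity of the cell index; & avoids negative %
}
```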




#5132155 Using OpenGL for particle systems...

Posted by C0lumbo on 17 February 2014 - 04:32 PM

If you were just trying to implement a particle system for the sake of getting it done, I'd recommend you keep it simple, and generate all four vertices on the CPU and submit as indexed triangles. In a simple 2D title, neither of the more complex approaches are likely to make any material difference to the final performance. However, it sounds like pushing yourself to learn new techniques is a big part of why you want to do it.

 

I think instancing doesn't bring much to the table for a particle system. Per instance, you'll need to send position, size, rotation, texture coordinates and it probably won't end up measurably faster than generating and sending all four vertices.

 

If I understand what you mean by the geometry shader/transform feedback approach, you're talking about using the vertex shader to update your particles using transform feedback and the geometry shader when rendering to expand a single particle into a quad. That sounds much more worthwhile both in terms of learning about interesting parts of the pipeline, and for getting some impressive particle throughput if you want to reuse the system for future demos.

 

I'd recommend you still start with the CPU approach first, though. That way you have a reference point to check that your fancy rendering path is producing correct results, a fallback path for older graphics cards that you can use for your game, and a baseline to measure the speed boost against (in a job interview it's always nice to have solid performance numbers when talking about optimisations you made).




#5131776 Making a texture render to itself

Posted by C0lumbo on 16 February 2014 - 01:15 PM

I'd suspect that the driver is detecting the potentially trouble-causing situation and, behind your back, making a copy of the render target to use as the source texture. If that's the case, you get correct rendering results on your machine, but there's no guarantee it'll work on any other driver/hardware, and the copy means you get sub-optimal performance.

 

Just a guess though.




#5130977 Power of normal mapping and texture formats?

Posted by C0lumbo on 13 February 2014 - 01:03 AM

Thanks, I'll get into the links.
For diffuse maps I'll play around with DXT-3 and 5 (not sure yet what 5 brings compared to 3), with this I'll include the alpha map in the diffuse texture to save an extra texture for blended materials. I'm fully using directx3dtexture9 objects (and d3dx) so this should work fine.

On normal mapping for now I'll stay with DDS format for keeping it structured/ standardized, but without DXT compression.

 

If you're not sure which to use, use DXT-5 rather than DXT-3; I can't think of any real-life situation where DXT-3 is the better choice (DXT3 vs DXT5 is often framed as a choice between sharper and smoother, but that's nonsense IMO). Unless you specifically want only 16 distinct shades of alpha for some reason, use DXT5.




#5130023 Aspect ratio vs. display ratio

Posted by C0lumbo on 09 February 2014 - 12:44 AM

So there are two separate issues:

 

1. There's lots of different aspect ratios and resolutions to handle.

2. Some resolutions have pixel aspect ratios which don't match the display's physical aspect ratio, resulting in pixels that aren't square. In that situation if you rendered a quad with a circle texture on it and rotated it around, it would squash and stretch as it rotated.

 

It sounds to me like the OP is happy with his solutions for problem #1 (display more where possible, or use pillarboxing/letterboxing if you have to), but is mainly concerned with issue #2.

 

IMO, you can just ignore issue #2. I think pretty much all machines offer resolutions with square pixels, and if your users pick one that doesn't match, then that's their lookout.




#5129932 Power of normal mapping and texture formats?

Posted by C0lumbo on 08 February 2014 - 04:31 PM


- How about texture compression?

(my normal map tool, Shadermap, has possibilities to save the maps as "DDS DXT 1 / 3 or 5" for example)

- What exactly is DDS DXT?

 

This is a really, really good article explaining the BC formats, starting with DXT1-5 and then going on to the newer DX11-level ones, which you can't use yet but are still worth knowing about: http://www.reedbeta.com/blog/2012/02/12/understanding-bcn-texture-compression-formats/

 

My rule of thumb is to always use DXT compression for diffuse maps, and only fall back to bigger formats if it looks really bad, which for textures destined for 3D models is really quite rare.

 

Normal maps are trickier. As the article suggests, using DXT5 with X encoded in the RGB, Y encoded in the alpha, and Z reconstructed seems reasonable. Note that the increased shader complexity from reconstructing the normal won't necessarily make the shader slower than an 8888 solution: fetching 1/4 of the data might* more than compensate for the extra calculations the shader has to do. If you don't want to invest the time in modifying your shaders to use DXT5, then use a 16-bit texture format like 565, as it'll halve your texture footprint for pretty minimal effort.

 

*I say might: I strongly suspect it will be a win on most graphics cards, but a loss on some others. Graphics programming sucks sometimes.
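For reference, the reconstruction works because the normal is unit length, so z = sqrt(1 - x² - y²). A CPU-side sketch of the decode (shader code would be analogous; `decodeNormal` and the 0..255 → [-1, 1] mapping are assumptions about how the channels were encoded):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Decode a DXT5-swizzled normal: X taken from the RGB channels, Y from the
// alpha channel, each stored as a byte 0..255 mapped to [-1, 1]; Z is
// reconstructed from the unit-length constraint.
Vec3 decodeNormal(unsigned char xByte, unsigned char yByte)
{
    float x = xByte / 255.0f * 2.0f - 1.0f;
    float y = yByte / 255.0f * 2.0f - 1.0f;
    // Clamp guards against quantization pushing x*x + y*y slightly above 1.
    float zSq = 1.0f - x * x - y * y;
    float z = std::sqrt(zSq > 0.0f ? zSq : 0.0f);
    return Vec3{x, y, z};  // tangent-space normals point outward, so z >= 0
}
```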




#5128138 Polygon based terrain LOD

Posted by C0lumbo on 02 February 2014 - 06:05 AM

There's a fairly old series of articles by Jonathan Blow about doing LOD for environmental triangle soups. It doesn't sound simple to implement.

 

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%201/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%202/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%203/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%204/

http://number-none.com/product/Unified%20Rendering%20LOD,%20Part%205/

 

Parts 1 and 2 talk about heightfield LOD, then it gets into the more general 3D model cases in parts 3, 4 and 5.




#5126261 Economics problem

Posted by C0lumbo on 25 January 2014 - 12:19 AM

I would say your equation should be: F / (T + C) rather than F / (T * C)

 

You then need to work out T in terms of money. As a starting point, why not use the person's hourly wage (or per-second wage, if your time is in seconds)? Maybe you could scale it according to some personality trait that represents how much they value their free time.
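As a worked sketch of that suggestion, with T converted to money via an hourly wage (the function name, parameters, and numbers are all hypothetical):

```cpp
// Desirability of an action: fun divided by its total cost, where the time
// spent is converted to money at the actor's hourly wage, i.e.
// F / (T + C) with T expressed in currency rather than F / (T * C).
float desirability(float fun, float timeHours, float moneyCost, float hourlyWage)
{
    return fun / (timeHours * hourlyWage + moneyCost);
}
```

e.g. an action worth 100 fun that takes 2 hours at a 5/hour wage plus 10 in cash costs 20 in total, giving a desirability of 5.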





