

#5180053 how to balance my game?

Posted by C0lumbo on 13 September 2014 - 06:45 AM

If you have implemented AI players already, then one option might be to let your AI do a big chunk of your testing for you.


Have them play thousands of matches overnight and see which tank type wins the most games. Once you've got the stats balanced for the AI, you'll still need to do a ton of testing with humans, because your results may be biased by quirks of the way the AI plays, but it's still probably a decent starting point.
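As a sketch of the overnight-stats idea (all names are made up, and the fixed win probability stands in for a real AI-vs-AI match loop):

```cpp
#include <array>
#include <random>

// Hypothetical sketch: pit two tank types against each other thousands of
// times and tally wins per type. In a real setup, the bernoulli draw would
// be replaced by actually running an AI-vs-AI match to completion.
std::array<int, 2> simulateMatches(int numMatches, double typeAWinChance,
                                   unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution typeAWins(typeAWinChance);
    std::array<int, 2> wins = {0, 0};
    for (int i = 0; i < numMatches; ++i) {
        wins[typeAWins(rng) ? 0 : 1]++;
    }
    return wins;
}
```

If one type's win rate sits well away from 50% over thousands of matches, that's the stat to tweak first before moving on to human playtests.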

#5174791 What game companies hire remote programmers?

Posted by C0lumbo on 19 August 2014 - 12:17 PM

In general, it's very difficult to find permanent positions that are remote. The fully telecommuting companies I know of are Slightly Mad Studios (http://www.slightlymadstudios.com/careers.html) and Boomzap (http://www.boomzap.com/jobs/), both of which look like they're hiring at the moment.


I think most telecommuting/partial-telecommuting positions you'll find are going to be contract-only, and people hiring contract coders typically expect a high level of experience. I don't know where contract coders go to find work, though. The other common route to telecommuting is to be on the inside somehow (e.g. after working in-house at a company for a while, switching some or all of your hours to working from home for whatever reason).

#5174049 Checkstyle

Posted by C0lumbo on 16 August 2014 - 01:16 AM

The correct solution would have been to spread the error message over multiple lines using either the \ character or adjacent string literal concatenation (http://stackoverflow.com/questions/1135841/c-multiline-string-literal), no?




MyLogFunction("Really long string blah blah");


Could be


MyLogFunction("Really long string "
              "blah blah");


I don't have a problem with long lines, particularly for a logging message, but I believe that blind programmers quite like shorter line lengths, so it's not just petty bureaucracy to have a limit.


Damn, that was a po-faced reply. Have a vote up because I liked the story.

#5169151 What is a uber shader?

Posted by C0lumbo on 25 July 2014 - 11:39 AM

I'm wondering what an uber shader is. My first impression is that it means putting everything in one shader and choosing the path dynamically. Do the shaders in Unreal Engine count as uber shaders? They have plenty of branches in their shader code, but most of them are based on static conditions defined by macros during shader compilation, not at run time. Do these kinds of shaders fall into the category of uber shaders? Thanks


I think it's a slightly fuzzy term without a strict definition.


I'd say that if your shader code is handling lots of different rendering scenarios, then it's an uber-shader. Whether those different scenarios are implemented with run-time or compile-time switches doesn't matter, IMO; it's still an uber shader.


I think that sometimes when people talk about uber-shaders, they mean just the scenario where there is pretty much just one shader used by the entire title (maybe with the exception that 2D/post-processing might use something different). Other times it's more broad, and a title might be said to have a skinning uber shader and an environment uber shader, etc.
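For the compile-time flavour, a common implementation is to generate variants from one master source by prepending #defines before handing the text to the shader compiler. A minimal sketch (function and macro names are made up):

```cpp
#include <string>

// Sketch: produce a compile-time uber-shader variant by prepending
// #defines to a single master shader source. The master source would
// use #ifdef SKINNING / #ifdef NORMAL_MAP blocks internally.
std::string buildShaderVariant(const std::string& masterSource,
                               bool skinned, bool normalMapped) {
    std::string prefix;
    if (skinned)      prefix += "#define SKINNING 1\n";
    if (normalMapped) prefix += "#define NORMAL_MAP 1\n";
    return prefix + masterSource;
}
```

Each distinct combination of flags yields a separately compiled shader, which is why engines often cache variants rather than compiling on demand.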

#5166558 Z-buffer with different projection matrix

Posted by C0lumbo on 13 July 2014 - 06:24 AM

This article http://www.gamasutra.com/view/feature/131393/a_realtime_procedural_universe_.php?page=1 talks about how to manage the sort of situation where you're dealing with enormous variations in scale. Basically, it's just identifying objects that are very far away and adjusting both their position and scale so they fit into a single projection matrix.
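A minimal sketch of that position-and-scale adjustment (names are mine; it assumes the position is already expressed relative to the camera). Pulling an object in to a fixed distance and shrinking it by the same factor keeps its on-screen size unchanged:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch of the trick from the article: an object far beyond the usable
// depth range is moved in to maxDist along its direction from the camera
// and scaled down by the same factor, so it subtends the same angle on
// screen but now fits in an ordinary projection's depth range.
void fitIntoDepthRange(Vec3& positionFromCamera, float& scale, float maxDist) {
    float dist = std::sqrt(positionFromCamera.x * positionFromCamera.x +
                           positionFromCamera.y * positionFromCamera.y +
                           positionFromCamera.z * positionFromCamera.z);
    if (dist > maxDist) {
        float k = maxDist / dist;
        positionFromCamera.x *= k;
        positionFromCamera.y *= k;
        positionFromCamera.z *= k;
        scale *= k;
    }
}
```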


In the context of space, it works fine because the objects are quite well separated by lots of, erm, space. Not sure what your scene is like to necessitate different projection matrices.

#5165954 Calculate if angle between two triangles connected by edge

Posted by C0lumbo on 10 July 2014 - 12:37 AM

I'm a bit rusty on 3D-related math, but I looked at the dot product and I don't think that works, since I believe it can only be used to find an angle between 0 and PI, whereas I want between 0 and 2*PI.


I think your best bet is to use the dot product:


If they're coplanar (the normals point the same way), the dot product will be 1.0.

If the triangles are at 90 degrees to each other, the dot product will be 0.0.

If the triangles are folded right over and face each other, the dot product will be -1.0.


You can get the angle by doing acosf of the dot product.


The limitation of using the dot product is that you can't work out the direction of the fold. 90 degrees either way looks the same. The trick would be to use some other technique to work out the fold direction. I can't think of a particularly elegant way to calculate the direction: One idea is to take the normal of your first triangle and dot product it with one of the edge directions from your second triangle (just not the shared edge!). The result will be a positive or negative number depending on which direction the fold is.
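A sketch of the whole thing (the tiny vector helpers are mine; for the direction test I dot triangle A's normal against the offset to triangle B's non-shared vertex, a close cousin of the edge-direction idea above):

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Signed fold angle between two triangles sharing an edge. Inputs are the
// two unit normals, any vertex of triangle A, and the vertex of triangle B
// that is NOT on the shared edge.
float foldAngle(V3 normalA, V3 normalB, V3 pointOnA, V3 nonSharedVertexB) {
    float d = dot(normalA, normalB);
    if (d > 1.0f)  d = 1.0f;     // clamp: float error can push |d| past 1,
    if (d < -1.0f) d = -1.0f;    // which would make acos return NaN
    float angle = std::acos(d);  // 0..PI, fold direction still unknown
    // Positive if B's free vertex lies on the front side of A's plane.
    float side = dot(normalA, sub(nonSharedVertexB, pointOnA));
    return (side >= 0.0f) ? angle : -angle;
}
```

Mapping the signed result into 0..2*PI (if that range is preferred) is then just adding 2*PI to negative angles.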

#5165063 glDrawElements without indices?

Posted by C0lumbo on 06 July 2014 - 10:07 AM

Is glDrawArrays what you're after?


"Another option would be to have one single index buffer on the GPU that serves all the geometries that I need to render, but that is still taking some memory on the GPU." - This is an option I use for the common case of quads, but for separate triangles, glDrawArrays should be better I think.

#5162605 Question about depth bias.

Posted by C0lumbo on 24 June 2014 - 12:40 PM

I usually end up organising my rendering code so that it happens like this:


Render Opaque Stuff (environments, characters, etc)

Turn z-write off, z-bias on

Render depth-biased decal stuff (blob shadows/bullet holes, etc)

Turn z-write back on, z-bias off

Render 1-bit transparent stuff (foliage, etc)

Turn z-write off

Render 8-bit transparent stuff (particles, etc)


By turning off the z-write while you render your decal stuff, you can stack lots of decals without having to increment the depth bias between each layer.


Edit: Also, I almost always z-bias by creating a new projection matrix with a minutely narrower field of view. I find it tends to behave more consistently across different platforms.
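As a sketch of doing the bias in the projection matrix: the variant below scales the matrix's z-mapping term rather than narrowing the field of view (a closely related trick, not necessarily the exact one described), assuming an OpenGL-style column-vector convention where view-space z is negative in front of the camera:

```cpp
// Only the two projection-matrix terms that produce clip-space z matter
// for the bias, so the sketch carries just those.
struct Proj { float m22, m23; };

Proj makePerspectiveZ(float n, float f) {
    return { (f + n) / (n - f), 2.0f * f * n / (n - f) };
}

// NDC depth of a view-space z (negative in front of the camera):
// clipZ / clipW with clipW = -viewZ.
float ndcDepth(const Proj& p, float viewZ) {
    float clipZ = p.m22 * viewZ + p.m23;
    float clipW = -viewZ;
    return clipZ / clipW;
}

// Shrinking |m22| slightly moves every depth a little toward the camera,
// which is exactly what a decal layer needs to win the z-test.
Proj applyDepthBias(Proj p, float epsilon) {
    p.m22 *= 1.0f - epsilon;
    return p;
}
```

Because the bias is proportional rather than a fixed offset, it scales naturally with depth, which is part of why matrix-based biasing tends to be more predictable across platforms than fixed-function depth-bias state.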

#5160434 Fast distance based sort

Posted by C0lumbo on 14 June 2014 - 12:12 AM

It's perhaps more correct to calculate the z depth than the raw distance (or distance squared, as Pink Horror points out). It's more or less the same cost: just dot the particle position with the z-row of your view matrix (or the z-column, if you're using row-vector conventions). As an advantage, it opens up the opportunity to quickly discard anything on the wrong side of your near and far planes, which saves you from submitting useless data to the GPU.
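A sketch of that depth extraction, assuming column-vector matrix conventions (where it's the third row of the view matrix that produces view-space z); names are mine:

```cpp
// Third row of a column-vector view matrix (the row that produces view z).
struct ViewRowZ { float x, y, z, w; };

// View-space depth of a world-space point (w = 1 implied). The negation
// stores depth positive-into-the-screen, since OpenGL-style view space
// has z negative in front of the camera.
float particleDepth(const ViewRowZ& r, float px, float py, float pz) {
    return -(r.x * px + r.y * py + r.z * pz + r.w);
}

// Anything outside [near, far] can be culled before sorting/submission.
bool insideDepthRange(float depth, float nearPlane, float farPlane) {
    return depth >= nearPlane && depth <= farPlane;
}
```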


I think your choice of sort algorithm will be important. Radix sorts are awesome, so you could definitely do worse than go with one. You perhaps don't need super-high precision in the sort, so you could speed it up further by shortening the z-depth to only 16 bits, meaning you only need two passes.


Alternatively, depending on the approach you've used, you might be expecting your data to be nearly sorted (if there's coherency from the previous frame). If that's the case, then pick a sort algorithm that performs well for nearly sorted data.
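The two-pass 16-bit idea can be sketched like this (names are made up): one stable counting pass per byte, low byte first, yields a full 16-bit sort.

```cpp
#include <cstdint>
#include <vector>

struct Particle { uint16_t depthKey; int id; };

// Two-pass radix sort on a 16-bit quantised depth key, ascending.
// Each pass is a stable counting sort on one byte; because the passes are
// stable and run low byte first, two 8-bit passes sort the full key.
void radixSort16(std::vector<Particle>& particles) {
    std::vector<Particle> temp(particles.size());
    for (int shift = 0; shift <= 8; shift += 8) {
        int count[257] = {0};
        for (const Particle& p : particles)
            count[((p.depthKey >> shift) & 0xFF) + 1]++;
        for (int i = 1; i < 257; ++i)
            count[i] += count[i - 1];          // bucket start offsets
        for (const Particle& p : particles)
            temp[count[(p.depthKey >> shift) & 0xFF]++] = p;
        particles.swap(temp);
    }
}
```

Because each pass is a fixed two sweeps over the data, the cost is O(n) regardless of the initial ordering, which suits the worst case; a nearly-sorted-aware algorithm like insertion sort only wins when frame-to-frame coherency is high.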

#5157112 2D Mobile Game Like Crash Banticoot

Posted by C0lumbo on 31 May 2014 - 02:12 AM

My advice is to make sure the controls are touch-screen friendly.


Your game will be a lot more fun if you can restrict your controls to a move left button and move right button for the left thumb, and an absolute maximum of two action buttons for your right thumb (jump and attack presumably).


IMO, any more than that, and you're not designing a platformer for the target system properly.

#5155916 SpriteBatch billboards in a 3D slow on mobile device

Posted by C0lumbo on 25 May 2014 - 12:43 PM

I would think it's most likely that you are fill-rate bound. In the absence of a GPU profiler, the easiest way to confirm whether or not you are fill-rate bound is to set up a scissor rectangle so that only a small area of the screen is visible. For your particular simple case, maybe just make the particles smaller instead of adding a scissor rectangle.


If it's not the fill rate, maybe it's the cost of the vertex processing.

#5155245 deferred shading question

Posted by C0lumbo on 22 May 2014 - 12:25 PM

Thanks for all this, I've another question: will I still have to render objects twice, once for the G-buffers and once normally?


No. Under the deferred rendering system described by Ashaman73, you render your objects once to fill the G-buffers, then the post-processing steps are all done as 2D passes.

#5154830 why the alphablend is a better choice than alphatest to implement transparent...

Posted by C0lumbo on 20 May 2014 - 10:09 AM

discard is GLSL (the OpenGL shading language) / clip is HLSL (the DirectX shading language) -> both refer to the clipping operation that can be used for alpha testing.


The reason alpha blend is "sometimes" the better choice is that, for example, PowerVR GPUs use so-called tile-based deferred rendering. The GPU collects triangle data and at some point executes pixel processing. But before going there, PowerVR chips run an additional optimization stage (this is what the "deferred" part refers to) that determines which parts of the tile should actually be drawn, so pixels aren't shaded multiple times for no reason, aka overdraw (overdraw mostly refers to redundant multiple framebuffer writes, but shading is also part of the problem).

When you use clipping operations, the GPU can no longer do this optimization. Note that on every GPU this costs performance, because early-Z is defeated: you can't determine which pixels should be culled before going through the pixel pipeline. But on PowerVR chips it's even more of a problem due to the aforementioned pixel-overlap determination stage.


Why exactly alpha blend is faster in this case I'm not quite sure. My guess is that the blend operation in a tile-based rendering environment is fairly fast, since you don't blend into the actual framebuffer but into the small on-chip memory that holds the tile, which may still be faster than opaque rendering without the hidden-surface-removal stage.


I hope I got all of this right, since I'm still in the process of learning, so if I made a mistake someone please correct me.


Actually, I think that any sort of blending is just as damaging to the hidden-surface-removal techniques used by PowerVR chips as discard is.


I believe the key reason alpha blending can be more efficient than alpha testing is to do with when the z-test/z-write is done.


With alpha blending, the z-testing and any z-write can be done before the fragment shader is run.

With alpha testing, the z-write can't be done until after the fragment shader has run, because the fragment shader needs to execute before the GPU knows whether the z-write is needed. I believe the GPU will defer both the z-test and the z-write until after the fragment shader, which means a lot more fragments have to be processed with alpha test on. That can be quite damaging for chips with relatively poor fill rates.

#5154487 Collision Detection (2D & 3D)

Posted by C0lumbo on 18 May 2014 - 02:38 PM

This book is pretty good (if you want an entire book devoted to the subject of collision detection)



#5153839 Projection Mapping?

Posted by C0lumbo on 15 May 2014 - 03:21 PM

Yes, it sounds quite simple from the tech standpoint really; they've just found an effective way to get the look they wanted:


- Create high poly 3D scene

- Render it out nicely in Max/Maya

- Touch it up in Photoshop to give it that painted look

- Apply the textures back onto a low poly mesh to give just enough 3D-ness to fool the eye at the controlled camera angles.