
C0lumbo

Member Since 02 Nov 2012
Offline Last Active Today, 12:58 PM

#5166558 Z-buffer with different projection matrix

Posted by C0lumbo on 13 July 2014 - 06:24 AM

This article http://www.gamasutra.com/view/feature/131393/a_realtime_procedural_universe_.php?page=1 talks about how to manage the sort of situation where you're dealing with enormous variations in scale. Basically, it's just identifying objects that are very far away and adjusting both their position and scale so they fit into a single projection matrix.

 

In the context of space, it works fine because the objects are quite well separated by lots of, erm, space. I'm not sure what your scene is like that it necessitates different projection matrices.




#5165954 Calculate if angle between two triangles connected by edge

Posted by C0lumbo on 10 July 2014 - 12:37 AM


I'm a bit rusty on 3D-related math, but I looked at the dot product and I don't think that works, since I believe it can only be used to find an angle between 0 and PI, whereas I want between 0 and 2*PI.

 

I think your best bet is to use the dot product:

 

If they're coplanar (the normals point the same way), the dot product will be 1.0.

If the triangles are at 90 degrees to each other, the dot product will be 0.0.

If the triangles are folded right over and face each other, the dot product will be -1.0.

 

You can get the angle by taking the acosf of the dot product.

 

The limitation of using the dot product is that you can't work out the direction of the fold: 90 degrees either way looks the same. The trick is to use some other technique to work out the fold direction. I can't think of a particularly elegant way to calculate it: one idea is to take the normal of your first triangle and dot it with one of the edge directions of your second triangle (just not the shared edge!). The result will be positive or negative depending on which direction the fold is.
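To make that concrete, here's a minimal sketch (signedFoldAngle is a hypothetical helper, and it assumes both triangles are wound consistently so their normals agree when the triangles are coplanar):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Signed fold angle across the shared edge a-b. c is the third vertex of the
// first triangle, d the third vertex of the second. Returns 0 when coplanar,
// +/- the fold angle depending on direction; add 2*PI to negative results if
// you want the 0..2*PI range.
double signedFoldAngle(Vec3 a, Vec3 b, Vec3 c, Vec3 d)
{
    Vec3 n1 = normalize(cross(sub(b, a), sub(c, a))); // normal of (a, b, c)
    Vec3 n2 = normalize(cross(sub(d, a), sub(b, a))); // normal of (a, d, b)
    double cosAngle = dot(n1, n2);
    if (cosAngle > 1.0) cosAngle = 1.0;   // guard acos against rounding
    if (cosAngle < -1.0) cosAngle = -1.0;
    double angle = std::acos(cosAngle);   // unsigned angle, 0..PI
    // Fold direction: dot the first normal with the direction from the shared
    // edge towards the second triangle's far vertex (not the shared edge!).
    return dot(n1, sub(d, a)) < 0.0 ? -angle : angle;
}
```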




#5165063 glDrawElements without indices?

Posted by C0lumbo on 06 July 2014 - 10:07 AM

Is glDrawArrays what you're after?

 

"Another option would be to have one single index buffer on the GPU that serves all the geometries that I need to render, but that is still taking some memory on the GPU." - This is an option I use for the common case of quads, but for separate triangles, glDrawArrays should be better I think.




#5162605 Question about depth bias.

Posted by C0lumbo on 24 June 2014 - 12:40 PM

I usually end up organising my rendering code so that it happens like this:

 

Render Opaque Stuff (environments, characters, etc)

Turn z-write off, z-bias on

Render depth-biased decal stuff (blob shadows/bullet holes, etc)

Turn z-write back on, z-bias off

Render 1-bit transparent stuff (foliage, etc)

Turn z-write off

Render 8-bit transparent stuff (particles, etc)

 

By turning off the z-write while you render your decal stuff, you can stack lots of decals without having to increment the depth bias between each layer.

 

Edit: Also, I almost always z-bias by creating a new projection matrix that has a minutely narrower field of view. I find that approach tends to behave more consistently across different platforms.




#5160434 Fast distance based sort

Posted by C0lumbo on 14 June 2014 - 12:12 AM

It's perhaps more correct to calculate the z depth than the raw distance (or distance squared, as Pink Horror points out). It's more or less the same cost (just do a dot product of the particle position with either the z-column or z-row (I forget which) of your view matrix). As an advantage, it opens up the opportunity to quickly discard anything on the wrong side of your near and far planes, saving you from submitting useless data to the GPU.
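A sketch of that dot product, assuming a row-major view matrix with the translation in the fourth column, so the third row gives view-space z (with column-major storage you'd pick elements 2, 6, 10, 14 instead; under OpenGL conventions visible depths come out negative):

```cpp
// View-space depth of a point without a full matrix multiply: dot the
// position (with w = 1) against the z row of the view matrix.
float viewSpaceDepth(const float view[16], float x, float y, float z)
{
    return view[8] * x + view[9] * y + view[10] * z + view[11];
}

// Trivial rejection against the near/far planes before sorting/submitting.
bool depthInRange(float depth, float nearZ, float farZ)
{
    return depth >= nearZ && depth <= farZ;
}
```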

 

I think your choice of sort algorithm will be important. Radix sorts are awesome, so you could definitely do worse than go with one of those. You perhaps don't need super-high precision in the sort, so you could speed it up further by shortening the z-depth to only 16 bits, meaning you need just two passes.
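A sketch of the two-pass idea, assuming the depths have already been quantised to 16-bit keys (a counting sort on the low byte, then the high byte; it sorts indices so the particle data itself never moves):

```cpp
#include <cstdint>
#include <vector>

// Two-pass LSD radix sort on 16-bit keys, 8 bits per pass. Stable and O(n)
// per pass, so well suited to sorting thousands of particles every frame.
void radixSort16(const std::vector<uint16_t>& keys, std::vector<uint32_t>& indices)
{
    std::vector<uint32_t> temp(indices.size());
    for (int shift = 0; shift < 16; shift += 8) {
        uint32_t count[257] = {0};
        for (uint32_t i : indices)                 // histogram this byte
            ++count[((keys[i] >> shift) & 0xFF) + 1];
        for (int b = 0; b < 256; ++b)              // prefix sum -> bucket starts
            count[b + 1] += count[b];
        for (uint32_t i : indices)                 // stable scatter
            temp[count[(keys[i] >> shift) & 0xFF]++] = i;
        indices.swap(temp);
    }
}
```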

 

Alternatively, depending on the approach you've used, you might be expecting your data to be nearly sorted (if there's coherency from the previous frame). If that's the case, then pick a sort algorithm that performs well for nearly sorted data.




#5157112 2D Mobile Game Like Crash Banticoot

Posted by C0lumbo on 31 May 2014 - 02:12 AM

My advice is to make sure the controls are touch-screen friendly.

 

Your game will be a lot more fun if you can restrict your controls to a move left button and move right button for the left thumb, and an absolute maximum of two action buttons for your right thumb (jump and attack presumably).

 

IMO, any more than that, and you're not designing a platformer for the target system properly.




#5155916 SpriteBatch billboards in a 3D slow on mobile device

Posted by C0lumbo on 25 May 2014 - 12:43 PM

I would think it's most likely that you are fill-rate bound. In the absence of a GPU profiler, the easiest way to confirm whether or not you are fill-rate bound is to set up a scissor rectangle so that only a small area of the screen is visible. For your particular simple case, maybe just make the particles smaller instead of adding a scissor rectangle.

 

If it's not the fill rate, maybe it's the cost of the vertex processing.




#5155245 deferred shading question

Posted by C0lumbo on 22 May 2014 - 12:25 PM

Thanks for all this, I've another question. Will I still have to render objects twice? Once for the g-buffers and once normally?

 

No, under the deferred rendering system described by Ashaman73, you render your objects once to fill the gbuffers, then the post-processing steps are all done as 2D passes.




#5154830 why the alphablend is a better choice than alphatest to implement transparent...

Posted by C0lumbo on 20 May 2014 - 10:09 AM

discard is GLSL (the OpenGL-based shading language) / clip is HLSL (the DirectX-based shading language) -> both refer to the clipping operation that can be used for alpha testing.

The reason why alpha blend is "sometimes" the better choice is that, for example, PowerVR GPUs use so-called "deferred tile-based rendering". The GPU collects triangle data and at some point executes pixel processing. But before going there, PowerVR chips run an additional optimization stage (this is what the "deferred" part refers to) that determines which parts of the tile should actually be drawn, so pixels aren't shaded multiple times for no reason, aka overdraw (overdraw mostly refers to the redundant multiple framebuffer writes, but shading is also part of the problem).

When using clipping operations, the GPU can no longer perform this optimization. Note that on every GPU this causes a performance reduction because of early-Z, since you can't determine which pixels should be culled before going through the pixel pipeline. But on PowerVR chips it is even more of a problem due to the aforementioned pixel-overlap determination stage.

Why exactly alpha blend is faster in this case I'm not quite sure. My guess is that the blend operation in a tile-based rendering environment is fairly fast, since you don't blend into the actual framebuffer but into the small on-chip memory that holds the tile, which may still be faster than opaque rendering without the hidden-surface-removal stage.

I hope I got all of this right since I'm still in the process of learning, so if I made a mistake please correct me :)

 

Actually, I think that any sort of blending is just as damaging to the hidden-surface-removal techniques used by PowerVR chips as discard is.

 

I believe the key reason alpha blending can be more efficient than alpha testing is to do with when the z-test/z-write is done.

 

With alpha blending, the z-testing and any z-write can be done before the fragment shader is run.

With alpha testing, the z-write can't be done until after the fragment shader is run, because the fragment shader needs to be executed before the GPU knows whether the z-write is needed or not. I believe the GPU will defer both the z-test and the z-write until after the fragment shader, which means that a lot more fragments have to be processed with alpha test on, which can be quite damaging for chips with relatively poor fill-rates.




#5154487 Collision Detection (2D & 3D)

Posted by C0lumbo on 18 May 2014 - 02:38 PM

This book is pretty good (if you want an entire book devoted to the subject of collision detection)

 

http://www.amazon.com/exec/obidos/tg/detail/-/1558607323?tag=realtimecolli-20




#5153839 Projection Mapping?

Posted by C0lumbo on 15 May 2014 - 03:21 PM

Yes, sounds quite simple really from the tech standpoint, they've just found an effective way to get the look they wanted:

 

- Create high poly 3D scene

- Render it out nicely in Max/Maya

- Touch it up in Photoshop to give it that painted look

- Apply the textures back onto a low poly mesh to give just enough 3D-ness to fool the eye at the controlled camera angles.




#5153722 OpenGL png transparency

Posted by C0lumbo on 15 May 2014 - 01:23 AM

I'm a little suspicious about this snippet:

 

FREE_IMAGE_FORMAT formato = FreeImage_GetFileType(imageName, 0);
FIBITMAP* imagen = FreeImage_Load(formato, imageName);

FIBITMAP* temp = imagen;
imagen = FreeImage_ConvertTo32Bits(imagen);
FreeImage_Unload(temp);

 

If your .png is authored with an alpha channel and is loaded correctly, then it should already be a 32-bit image. I wonder if it's only 24-bit and FreeImage_ConvertTo32Bits is adding a fully opaque alpha channel for you.

 

Also, could you post your fragment shader, please? The line "I should note as well that if I change the Fragment shader color value to vec4 (it's currently set to a vec3) and include an alpha color (set to anything between 0.1 and 1)" sounds a bit wrong: you should probably be dealing with vec4s when you're manipulating RGBA colours.

 

I suppose that to narrow down whether it's an image loading or a rendering problem, you could experiment by modifying the "textura[j*4+3]= pixels[j*4+3];" line to "textura[j*4+3]= 128;" and seeing if that gives you a semi-transparent image. If so, then you know it's your image loading; otherwise it's your rendering. (Actually, maybe it's textura[j*4+0] that represents alpha; I can never remember which order these things are supposed to be in.)




#5153218 Texturing issue

Posted by C0lumbo on 12 May 2014 - 11:05 PM

This problem is a real pain. Here's a couple of articles that address it. Basically you need to be able to mess around with the q coordinate.

 

http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/

https://home.xyzw.us/~cass/qcoord/




#5151851 wrapping sphere around frustum

Posted by C0lumbo on 06 May 2014 - 11:43 AM

Yes, you will need to rebuild it every frame (or at least every time that the camera changes, which is going to be most frames probably).

 

Given that the bounding sphere is going to be used for quick trivial rejection, I'd recommend taking the once-per-frame cost of transforming the sphere from view space to world space, as that'll be more convenient (i.e. faster) to use when you do your many hundreds of trivial rejection tests.
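A sketch of that once-per-frame transform, assuming you have the camera's world-space basis vectors and position to hand (this is the inverse of the view transform; the radius is unchanged because the transform is rigid):

```cpp
struct Vec3 { float x, y, z; };

// Move a view-space sphere centre into world space: start at the camera
// position and walk along the camera's basis vectors by the view-space
// coordinates of the centre.
Vec3 sphereCentreToWorld(Vec3 camPos, Vec3 right, Vec3 up, Vec3 forward, Vec3 c)
{
    return { camPos.x + right.x * c.x + up.x * c.y + forward.x * c.z,
             camPos.y + right.y * c.x + up.y * c.y + forward.y * c.z,
             camPos.z + right.z * c.x + up.z * c.y + forward.z * c.z };
}
```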




#5149866 Alpha blending with a color that has no RGB components

Posted by C0lumbo on 27 April 2014 - 08:28 AM

PNG is stupid in that its authors assumed that the RGB values of transparent pixels aren't required.
If an artist tries to have pixels with a particular colour value but a zero alpha value, PNG jumps in and turns their colours into garbage.
AFAIK, the RGB values of invisible pixels in a PNG file are *undefined*; they're just garbage.

If you need this feature, which games very often do, then PNG isn't a suitable file format...
I personally deal with this problem by using a pair of non-transparent files: one contains the RGB values, and the other contains the alpha values in its R channel... :-(

[edit] If you don't need artist-authored values and are OK with just a constant colour, then you can do this yourself after loading the PNG and before passing the pixel data to GL. Just loop over each pixel, check if its alpha is zero, and if so, set its RGB to your constant.

 

For some time, I also blamed the .png file format for this problem, but apparently it's not a deficiency of .png, but a deficiency of Photoshop's png exporter.
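Whichever tool is to blame, the constant-colour fixup from the [edit] above can be sketched like this (assuming tightly packed 8-bit RGBA pixels):

```cpp
#include <cstddef>
#include <cstdint>

// After loading, overwrite the RGB of fully transparent pixels with a
// constant colour so bilinear filtering doesn't bleed garbage at the edges.
void fillTransparentPixels(uint8_t* rgba, std::size_t pixelCount,
                           uint8_t r, uint8_t g, uint8_t b)
{
    for (std::size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = rgba + i * 4;
        if (p[3] == 0) { p[0] = r; p[1] = g; p[2] = b; }
    }
}
```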





