B_old

Compact World Space normal storage in g-buffer

7 posts in this topic

You could try storing just x and y in 16-bit channels and reconstructing z. But you need to steal one bit of information from another channel to get good results. If you have a spare bit to use, this would be an easy and fast encode/decode scheme with OK quality. Edited by kalle_h
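
A minimal Python sketch of the scheme (a CPU stand-in for the shader code; the function names are illustrative):

```python
import math

def encode_xy_sign(n):
    """Store n.x and n.y in the two 16-bit channels; the z sign is the
    one 'stolen' bit that has to live in some other channel."""
    sign_bit = 1 if n[2] >= 0.0 else 0
    return (n[0], n[1]), sign_bit

def decode_xy_sign(xy, sign_bit):
    """Reconstruct z from the unit-length constraint x^2 + y^2 + z^2 = 1."""
    x, y = xy
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z if sign_bit else -z)
```

Without the stolen sign bit, any normal pointing away from the assumed hemisphere decodes with the wrong z, which is why the extra bit matters in world space.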
[quote]
Spherical coordinates will give you pretty good precision in world space, especially in 16-bit.
[/quote]

Edit: ahhh... misread spherical as spheremap, my fault. The following statement is only valid for the spheremap transformation; spherical coordinates should work as suggested.

I see one issue: blind spots. From my experience with all the compression algorithms on Aras' page, you will encounter a blind spot: a single vector that suffers in compression quality when approximated, resulting in ugly artifacts. As an example, take a look at method #4, first algorithm. You will see that the normals (0,0,1) and (0,0,-1) have the same encoding, therefore the decoding of one of these vectors will be wrong. This seems like just one case, but the artifacts I experienced were really obvious.

This is not an issue in view space, because you can adjust the algorithm to hide the blind vector (pointing into the screen) so that it is practically never compressed. But when you use world space, there will be spots where the blind vector appears on visible geometry, and the lighting artifacts will be clearly visible.
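
The ambiguity can be reproduced in a small Python sketch of a spheremap-style encoding (modeled on the first algorithm of method #4 on Aras' page; on a GPU, normalize of a zero vector is undefined, and the fallback to (0, 0) used here is exactly what exposes the blind spot):

```python
import math

def encode_spheremap(n):
    """Spheremap-style encode: enc = normalize(n.xy) * sqrt(-n.z*0.5 + 0.5)."""
    x, y, z = n
    l = math.hypot(x, y)
    nx, ny = (x / l, y / l) if l > 0.0 else (0.0, 0.0)
    s = math.sqrt(max(0.0, -z * 0.5 + 0.5))
    return (nx * s, ny * s)

def decode_spheremap(enc):
    """Inverse: n.z = -(2*dot(enc,enc) - 1), n.xy = normalize(enc) * sqrt(1 - z^2)."""
    ex, ey = enc
    z = -(2.0 * (ex * ex + ey * ey) - 1.0)
    l = math.hypot(ex, ey)
    s = math.sqrt(max(0.0, 1.0 - z * z))
    return (ex / l * s, ey / l * s, z) if l > 0.0 else (0.0, 0.0, z)

# Both poles collapse to the same encoding, so one of them decodes wrongly:
assert encode_spheremap((0.0, 0.0, 1.0)) == encode_spheremap((0.0, 0.0, -1.0))
assert decode_spheremap(encode_spheremap((0.0, 0.0, -1.0))) == (0.0, 0.0, 1.0)
```

In view space the blind vector can be aimed into the screen where it is almost never stored; in world space it can land on visible geometry.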

Edited by Ashaman73
[quote name='kalle_h']
You could try storing just x and y in 16-bit channels and reconstructing z. But you need to steal one bit of information from another channel to get good results. If you have a spare bit to use, this would be an easy and fast encode/decode scheme with OK quality.
[/quote]

I thought about this as well. I prefer to have the normals in a single render target, though. This means I have to cram x, y, and the z sign into a single channel. I have not tried it yet, but it should work. I wonder how it will perform.

[quote]
Spherical coordinates will give you pretty good precision in world space, especially in 16-bit. However you'll eat up some ALU cycles on trig instructions. Another option is to simply transform from world space to view space, compress using spheremap, and decompress/transform back to world space when sampling your G-Buffer. This isn't always as bad as you think, especially on GPUs with oodles of ALU to spare. There's also best-fit normals, but I'm not a big fan of having to use a lookup texture.
[/quote]

I overlooked the spherical coordinates approach. It seems like a good option to me.

Until now I have transformed my normals to world space when I needed them, and of course I could leave it that way. But I feel a bit uncomfortable about it, because then it seems preferable to do the typical lighting calculations in view space and only transform to world space when there is no other option. Maybe I could mix both, because I reconstruct both the world-space and view-space positions from exactly the same source. But generally this is exactly what I wanted to avoid in the first place, assuming it would be a bit clearer to do everything in the same space. It depends a bit on how expensive the spherical coordinates really are.

Just looking at world space normals, which do you think is faster: decoding a WS normal stored in spherical coordinates, or decoding a VS normal stored with the spheremap transform and transforming it to world space?
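
For reference, the spherical-coordinates path can be sketched like this in Python (a scalar stand-in for the shader code; the decode cost is essentially two sincos pairs of trig, which the spheremap-plus-transform alternative trades for a sqrt and a 3x3 view-to-world matrix multiply):

```python
import math

def encode_spherical(n):
    """Two angles mapped to [0, 1] for a 16-bit two-channel target."""
    x, y, z = n
    theta = math.atan2(y, x)                 # azimuth in [-pi, pi]
    phi = math.acos(max(-1.0, min(1.0, z)))  # inclination in [0, pi]
    return (theta / (2.0 * math.pi) + 0.5, phi / math.pi)

def decode_spherical(enc):
    """Decode: rebuild the unit vector from the two stored angles."""
    theta = (enc[0] - 0.5) * 2.0 * math.pi
    phi = enc[1] * math.pi
    sp = math.sin(phi)
    return (math.cos(theta) * sp, math.sin(theta) * sp, math.cos(phi))
```

Unlike the spheremap transform, this mapping has no direction where two distinct normals share an encoding, which is what makes it usable directly in world space.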

 

I also wonder how Epic gets away with it in their demo. Maybe it's really not that apparent with proper geometry/materials. Or do you think the remark "Gaussian Specular for less aliasing" has anything to do with it?

 

Thanks for the replies, that gives me some options to try!

Edited by B_old

[quote name='B_old' timestamp='1357719213' post='5019396']
I thought about this as well. I prefer to have the normals in a single render target though. This means I have to cram the x,y and the z sign into a single channel. I have not tried it yet, but it should work. Wonder how it will perform.
[/quote]

This depends on how you use this channel. If you encode the sign with other data, you could get into trouble when you need to interpolate that data (e.g. downscaling, blending, etc.).

I for one use 16-bit floating point targets, compressing the normal into 2 channels using the methods mentioned on Aras' page. This is done in view space, therefore I have no trouble hiding the z-pole (blind spot) vector by pointing it into the screen, which gets rid of the lighting artifacts. I need world-space normals very seldom, so doing all calculations in view space is the best solution for me.

Benefits of doing everything in view space:

1. Position reconstruction is very fast (depth + screen position, no transformation necessary).

2. Many screen-space methods work very well in view space (SSAO, normal mapping).

3. Lights can be transformed to view space before uploading them to the GPU, so there is no additional overhead for using view space here.

4. View space is tightly packed around the (0,0,0) center, which has the benefit of high floating-point precision.
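
Point 1 can be sketched as follows, assuming a standard perspective projection where clip.w = view.z; `proj_11` and `proj_22` stand for the [0][0] and [1][1] entries of the projection matrix (the cot(fov/2) terms) and the names are illustrative:

```python
def reconstruct_view_pos(ndc_x, ndc_y, view_z, proj_11, proj_22):
    """View-space position from the depth buffer and screen position:
    two divides and two multiplies, no full matrix transform needed."""
    return (ndc_x / proj_11 * view_z, ndc_y / proj_22 * view_z, view_z)

# With a 90-degree FOV (proj_11 = proj_22 = 1), a point at view-space
# (2, 3, 10) projects to NDC (0.2, 0.3) and reconstructs exactly:
assert reconstruct_view_pos(0.2, 0.3, 10.0, 1.0, 1.0) == (2.0, 3.0, 10.0)
```

Reconstructing a world-space position instead requires an additional inverse-view transform per pixel, which is the "almost as fast" gap discussed below.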

[quote name='Ashaman73']
Benefits of doing everything in view space:

1. Position reconstruction is very fast (depth + screen position, no transformation necessary).

2. Many screen-space methods work very well in view space (SSAO, normal mapping).

3. Lights can be transformed to view space before uploading them to the GPU, so there is no additional overhead for using view space here.

4. View space is tightly packed around the (0,0,0) center, which has the benefit of high floating-point precision.
[/quote]

1. It is almost as fast in world space.

2., 3. No issues here in world space, I assume.

4. A very interesting point I had never thought about.


[quote name='Ashaman73' timestamp='1357736141' post='5019432']
I for one use 16 bit floating point targets compressing the normal into 2 channels using the methods mentioned on aras page.
[/quote]

 

I also used 16-bit floating point with stereographic projection, but I encountered some bad geometry which produced NaN/inf values in the render target, and these messed up all the lighting calculations and then the luminance calculation. I wasn't able to find the geometry causing the problem, so I switched back to a 16-bit unorm format, which doesn't have the NaN/inf problem.
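
The NaN/inf source is easy to see in a scalar sketch of a stereographic encode (method #2 on Aras' page, with the scale factor omitted): near z = -1 the divisor goes to zero, so the encoded values overflow a 16-bit float channel long before the exact pole is reached.

```python
import math

FP16_MAX = 65504.0  # largest finite half-float value

def encode_stereographic(n):
    """Stereographic encode: enc = n.xy / (n.z + 1). The (0, 0, -1) pole
    divides by zero; normals near it overflow an fp16 render target."""
    x, y, z = n
    d = z + 1.0
    return (x / d, y / d)

# A normal very close to the pole already exceeds the fp16 range,
# so an fp16 target stores inf and poisons the later passes:
z = -1.0 + 1e-12
enc = encode_stereographic((math.sqrt(1.0 - z * z), 0.0, z))
assert abs(enc[0]) > FP16_MAX
```

A unorm target, as described above, clamps to [0, 1] on write and therefore can never hold an inf or NaN, at the cost of fixed-point precision.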

 

Cheers!

