

Krypt0n

Member Since 07 Mar 2009

#5229240 [SSAO] Using a 32 bit normal-depth map

Posted by Krypt0n on 15 May 2015 - 06:46 PM

there are quite a few points to this.

1. your vertex normal can be anything you want; it's actually very unlikely that a vertex normal used for lighting aligns with the face normal.

2. with normal maps it varies even more.

3. in an orthographic view you would be correct about the facing, but in a perspective view you can stand in a room and see 5 sides of a cube, so you're clearly seeing objects spanning more than 180 degrees.


#5229079 D3D11 asynchronous buffer updates

Posted by Krypt0n on 15 May 2015 - 12:20 AM

yes, you understand it correctly, and yes, UpdateSubresource is indeed slow. the problem is that nobody on the driver side knows what you're going to do on the application side; maybe you won't call UpdateSubresource at all, maybe you'll call it for a gigabyte of data, so it's hard to know how much cache to allocate. obviously 1GB of cache would be bad in your case where you maybe have 16 threads...

a better approach would probably be to use staging resources. you map those in the main thread and park them in some container for the other threads to take; once the other threads are done, they can somehow signal the main thread that it should unmap, CopyResource (or CopySubresourceRegion) and map them again (see the sketch after this list). this way you have full control over the memory:
- you can allocate the buffers however you want/need.
- you can chunk the copies to avoid peaks in the frame (e.g. limit to n copies/frame even if the copy requests come in irregularly)
- you can react to running out of budget (e.g. no free staging texture -> put the thread to sleep)
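
a minimal sketch of that flow, assuming one staging buffer and a single worker thread ('gpuBuffer' is a D3D11_USAGE_DEFAULT resource of the same size, 'context' is the immediate context; error handling and the actual signaling mechanism are omitted):

#include <d3d11.h>

// create the staging buffer once (staging resources must have BindFlags = 0)
ID3D11Buffer* CreateStagingBuffer(ID3D11Device* device, UINT byteWidth)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteWidth;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    ID3D11Buffer* staging = nullptr;
    device->CreateBuffer(&desc, nullptr, &staging);
    return staging;
}

// main thread: map and park the pointer for a worker thread
void* MapForWorker(ID3D11DeviceContext* context, ID3D11Buffer* staging)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(staging, 0, D3D11_MAP_WRITE, 0, &mapped);
    return mapped.pData; // the worker fills this, then signals the main thread
}

// main thread, after the worker signaled completion
void FinishUpload(ID3D11DeviceContext* context, ID3D11Buffer* staging, ID3D11Buffer* gpuBuffer)
{
    context->Unmap(staging, 0);
    context->CopyResource(gpuBuffer, staging); // staging -> default resource
    // the staging buffer can now be mapped again for the next batch
}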

overall a better approach, from my experience.


#5228882 Want to create a cloud in 3d array

Posted by Krypt0n on 13 May 2015 - 08:14 PM

the easiest way would be to draw a sprite for every cell in the 3D array that is not 0 (the density would be the alpha value of the sprite).
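
a minimal sketch of the loop, assuming a hypothetical DrawBillboard() helper that renders a camera-facing sprite at the given position with the given alpha:

const int DIM = 32; // size of the 3D array, just as an example

float density[DIM][DIM][DIM]; // values in [0, 1]

void DrawBillboard(float x, float y, float z, float alpha); // supplied by your renderer

void DrawCloud()
{
    for (int z = 0; z < DIM; ++z)
        for (int y = 0; y < DIM; ++y)
            for (int x = 0; x < DIM; ++x)
                if (density[z][y][x] > 0.0f)
                    DrawBillboard((float)x, (float)y, (float)z, density[z][y][x]);
}

for correct alpha blending you'd also want to draw the sprites sorted back to front.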


#5228881 Sorting draw calls by distance

Posted by Krypt0n on 13 May 2015 - 08:05 PM

> it's kind of like
> fraction<<exponent
>
> if you adjust the exponent to n by
> exponent += n-exponent;
> you need to adjust the fraction accordingly
> fraction >>= n-exponent
>
> I couldn't really follow what you did here.

you can write a float like
n = pow(2,exponent) * fraction;
e.g.
10 = pow(2,3) * 1.25
now imagine the biggest exponent of all your floats is 4 and you want to adjust all the other floats to it, which means
10 = pow(2,3 + (4-3) ) * ( 1.25 / pow(2,4-3) )
on the exponent side you extend it by 1 and you compensate on the fraction side by the same amount

10 = pow(2,4) * 0.625
with integers, a division by a power of 2 is actually just a shift right, so the formula above ends up being

10 = pow(2,old_exponent + (new_exponent-old_exponent) ) * (fraction >> (new_exponent-old_exponent) )
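
as a small aside, a sketch of how to pull those parts out of an IEEE-754 float in C++ (assuming the usual 32-bit layout):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    float f = 10.0f; // 10.0 = pow(2,3) * 1.25
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits)); // safe type-punning

    uint32_t sign     = bits >> 31;
    int32_t  exponent = (int32_t)((bits >> 23) & 0xff) - 127; // stored with a bias of 127
    uint32_t fraction = bits & 0x7fffff; // 23 bits, the leading 1 is implicit

    // prints: sign 0, exponent 3, fraction 0x200000 (= 0.25 * 2^23)
    std::printf("sign %u, exponent %d, fraction 0x%06x\n",
                (unsigned)sign, (int)exponent, (unsigned)fraction);
    return 0;
}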
 

you can simply sort floats as if they were ints, because the exponent is in the upper bits, meaning a higher exponent will result in a higher number if you interpret the bits as an integer (as long as the floats are positive).

> As you suggested earlier, I want to only sort by fraction to maximize the amount of information I have for depth. That is, given N bits of space in the sort key, it seems like you would get better precision by having all those N bits be only for the fraction. That would only work if you can normalize the floats to have the same exponent.

you will most likely get less data density.

imagine your values are in the range 0 to 11. the highest exponent is pow(2,4) (treating the fraction as being in [0,1)); now you normalize all values to pow(2,4) and keep only the top 3 bits of the fraction, and you will get the values 0, 1, 2, 3, 4, 5:
0 and 1 will map to 0
2 and 3 will map to 1
...
10 and 11 will map to 5

which means 6 and 7 will stay unused.

if you normalize by the biggest number (11) instead of by the exponent of the biggest number (effectively 16), you will cover the whole 3-bit range, 0 to 7:
n * 7 / (11-0)

 

> This doesn't work when normalizing to [0,1] because like I said 0.1 and 0.2 have the same fraction bits. However I think if I normalize to something like [1,2) then it works because for all decimal values where the value in front of the decimal is 1, you have fixed exponent bits in the float:
>
> decimal floating point | float exp bits | float fraction bits:
> 1.0      | 01111111 | 1 .00000000000000000000000
> 1.1      | 01111111 | 1 .00011001100110011001100
> 1.2      | 01111111 | 1 .00110011001100110011001
> 1.7      | 01111111 | 1 .10110011001100110011001
> 1.456789 | 01111111 | 1 .01110100111100000001111
>
> ^ now I can ignore exp bits

if you do the normalization by the biggest number and remap (note you have to subtract the minimum first, so 1.0 maps to 0):

bits = 3
range = (1<<bits)-1
n = (int) ((float_value - 1.0f) * range / (1.7f - 1.0f));

you end up with

1.0 | 000
1.1 | 001
1.2 | 010
1.7 | 111
1.456789 | 100




#5228859 Sorting draw calls by distance

Posted by Krypt0n on 13 May 2015 - 04:50 PM

there are many things to the question you ask :D

1. about floats:
floats consist of a sign bit, an 8-bit exponent and a fraction part.
the exponent is always adjusted so that the highest bit of the fraction is 1 (and therefore does not need to be stored).
so if you want to sort just based on the fraction, you'd need to adjust the exponent to be the same for all floats: find the maximum exponent and adjust all the other floats to it.

it's kind of like
fraction<<exponent

if you adjust the exponent to n by
exponent += n-exponent;
you need to adjust the fraction accordingly
fraction >>= n-exponent

so
fraction<<exponent will still represent the same number (but the fraction has lost some lower bits and is therefore less accurate)

2.
you actually don't need to do that: as long as the floats are positive, you can simply sort them as if they were ints, because the exponent is in the upper bits, so a higher exponent results in a higher number when interpreted as an integer. if the exponents of two numbers are the same, only the fraction differs, which then acts like the 'int' part and also gives the proper ordering. (see the sketch below)
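
a minimal sketch of that trick, assuming all sort keys are positive floats:

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// reinterpret a positive float's bits as an unsigned int; for positive
// IEEE-754 values, integer ordering of the bits matches float ordering
uint32_t FloatToSortKey(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return bits;
}

int main()
{
    std::vector<float> depths = { 10.0f, 0.5f, 3.25f };
    std::sort(depths.begin(), depths.end(),
              [](float a, float b) { return FloatToSortKey(a) < FloatToSortKey(b); });
    return 0; // depths is now { 0.5, 3.25, 10.0 }
}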

3.
and you are right: by normalizing it's meant that you find the min and max of your floats and map them all to 0 to 1
new_float = (old_float - min) / (max - min)

and if you want to use 8 bits, then yes, just multiply by 255 and cast to int (see the sketch below).
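
as a sketch (the function name is just illustrative):

#include <cstdint>

// map a depth value into an 8-bit sort key, as described above
uint8_t DepthToKey(float depth, float minDepth, float maxDepth)
{
    float normalized = (depth - minDepth) / (maxDepth - minDepth); // 0..1
    return (uint8_t)(normalized * 255.0f);
}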


#5228855 [SSAO] Using a 32 bit normal-depth map

Posted by Krypt0n on 13 May 2015 - 04:20 PM

	n.z = -1.0f * sqrt( 1.0f - ( xy.x * xy.x + xy.y * xy.y ) ) ;
shouldn't you rather use n.xy instead of xy? at that point xy still holds the raw [0,1] sample, while n.xy holds the unpacked [-1,1] values.

the more common way to write
n.xy = ( xy - 0.5f ) * 2.0f ;
is
n.xy = xy * 2.0f - 1.0f;
as it will be one instruction (mad) instead of two (sub, mul)


the same applies in
		neighborNormalDepth.xy = 2.0f * ( nSamp.xy - 0.5f ) ;
		neighborNormalDepth.z = -1.0f * sqrt( 1.0f - ( nSamp.r * nSamp.r + nSamp.g * nSamp.g ) ) ;
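
putting both points together, the corrected lines would look something like this (a sketch of the intent; dot() just folds the two squared terms into one call):

		n.xy = xy * 2.0f - 1.0f;
		n.z = -sqrt( 1.0f - dot( n.xy, n.xy ) );

		neighborNormalDepth.xy = nSamp.xy * 2.0f - 1.0f;
		neighborNormalDepth.z = -sqrt( 1.0f - dot( neighborNormalDepth.xy, neighborNormalDepth.xy ) );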



#5228166 Generating a sine-based tiled procedural bump map

Posted by Krypt0n on 09 May 2015 - 01:38 PM

the function in the slide looks wrong: rand() should not be called per pixel, it should be a set of random points that you initialize before looping over the pixels.

something like this:
 
#include <cmath>
#include <cstdlib>
#include <ctime>

const unsigned int texWidth = 512;
const unsigned int texHeight = 512;
const unsigned int N = 100;

srand(static_cast<unsigned int>(time(NULL))); // seed before generating the points

float2 Points[N];
float Divisor[N];
for (unsigned int a = 0; a < N; a++)
{
	Points[a] = float2(rand() % texWidth, rand() % texHeight);
	// precomputed per point as well; rand() is normalized to [0,1] here
	// (my assumption about what the slide intends)
	Divisor[a] = 2.08f + 5.0f * (rand() / (float)RAND_MAX);
}

unsigned char *data = new unsigned char[texWidth * texHeight];
for (unsigned int y = 0; y < texHeight; ++y)
{
	for (unsigned int x = 0; x < texWidth; ++x)
	{
		float sum = 0;
		for (unsigned int i = 0; i < N; ++i)
		{
			float dx = (float)x - Points[i].x;
			float dy = (float)y - Points[i].y;
			sum += sinf(sqrtf(dx * dx + dy * dy) / Divisor[i]);
		}
		// sum / N is in [-1, 1]; remap to [0, 255]
		data[y * texWidth + x] = static_cast<unsigned char>(((sum / N) * 0.5f + 0.5f) * 255.0f);
	}
}
edit: the division by "rand()" needs the same treatment, of course; it's precomputed per point above.


#5228110 Back Face Culling idea

Posted by Krypt0n on 09 May 2015 - 08:13 AM

I think the arrow in the 2nd picture should point in the opposite direction (it's eyepos - vertexpos), but otherwise a good visualization of the idea :)


#5228104 Is it possible to declare single-bit-variables/struct?

Posted by Krypt0n on 09 May 2015 - 07:46 AM

in C++ you can declare single-bit variables; you need to use the bit-field specifier.
struct CarProperties
{
  uint8_t EngineRunning:1;
  uint8_t OutOfFuel:1;
  uint8_t Burning:1;
  uint8_t WantedByPolice:1;
  uint8_t Electric:1;
  uint8_t TwoSeater:1;
  uint8_t Airbags:1;
  uint8_t DrivenByThePope:1;
};
in that way you declare that a variable should be 1 bit; the compiler can then decide how it will handle it. so what you declare != what actually happens.

in the sample above, with Visual Studio, sizeof(CarProperties) will most likely be 1, hence every declared variable will occupy one bit as you wanted. however, a different compiler could just as well allocate 8 bytes per uint8_t due to alignment preferences of the underlying platform. also, if you put that struct on your stack, every flag might get a different size, a different location etc., if the compiler can deduce that you never actually use it as a struct (because you just wanted a simple, human-readable flag field and don't worry about memory size).

the actual bit mangling might happen at the very last moment, when you actually write the bit-field struct into an array.

if you rely on sizes like that, you should urgently place asserts to ensure it won't behave differently:
static_assert(sizeof(CarProperties) == 1, "CarProperties is expected to pack into one byte");
if something changes (compiler version, compilation flags etc.), this might also change without you noticing and cause tons of trouble, e.g. if you write it out like
fwrite(&Car1, 1, sizeof(CarProperties), myFile);



#5228017 Back Face Culling idea

Posted by Krypt0n on 08 May 2015 - 04:05 PM

this optimization was actually common when you were doing your own software rasterization, but nowadays only some specialized libs on consoles do it, for rare cases; in most cases you are not vertex but pixel bound (if not, then you're probably doing something wrong ;) ).

some libraries also calculate the coverage of triangles and reject those that touch no pixel or have no area in screen space.

while a GPU can process bazillions of vertices and fragments in parallel, triangle processing is a very serial process. if you are unlucky, your GPU might have bubbles in the pipeline because the triangle setup is working on back-facing triangles, not feeding the fragment units and not freeing enough buffer space for new vertices to be processed. e.g. if you have a sphere of 1 million triangles on screen and you rotate it so that the back faces are processed first, there will likely be bubbles; if you rotate it so that the front faces are processed first, the back-face processing later on might be hidden, as the fragment units may still be busy with the front faces.


an optimization of your idea is called "clustered backface culling": you sort your triangles into bins based on orientation, and later you just check the orientation of a bin and can reject a whole bunch of faces in one go. of course that's an approximation and there will be some back faces left, but getting 80% of them culled with 1% of the work is a good trade-off. (a sketch follows below the link)

http://zach.in.tu-clausthal.de/teaching/cg1_0607/literatur/clustered_backface_culling.pdf
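
a minimal sketch of the per-bin test, assuming a directional/orthographic view direction and bins that store their face normals as a cone (the names are illustrative; a perspective camera needs a slightly more conservative test that accounts for the eye position):

#include <cmath>
#include <vector>

struct float3 { float x, y, z; };

float dot(const float3 &a, const float3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// a bin of triangles whose face normals all lie within one cone
struct Cluster
{
    float3 coneAxis;         // unit-length average normal of the bin
    float  coneCosHalfAngle; // cos of the cone half-angle
    std::vector<int> triangles;
};

// with a view direction v (pointing into the scene), a face is back-facing
// when dot(n, v) > 0; the whole cone is back-facing when the angle between
// v and the axis is below 90 degrees minus the half-angle, i.e.
// dot(axis, v) > sin(halfAngle)
bool ClusterIsBackfacing(const Cluster &c, const float3 &viewDir)
{
    float sinHalfAngle = std::sqrt(1.0f - c.coneCosHalfAngle * c.coneCosHalfAngle);
    return dot(c.coneAxis, viewDir) > sinHalfAngle;
}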


#5227552 Question about Frac() in GLSL shader

Posted by Krypt0n on 06 May 2015 - 12:03 PM

you worry too much about it.

textures never reach 1.0. take for example point filtering (nearest sampling) and a texture of 256 texels: to sample from it you pass UVs from the shader to the texture unit, and the texture unit scales the value by the texture size, so in our case

uv*=256;

now it indexes into our texture

color = texture[ (int)uv.x];

as our texture is 256 texels in size, valid values are 0-255 (inclusive).

if uv.x is originally 1.0f, it will result in 256.0f, and casting to int won't change that, of course. we would have an overflow, thus the texture unit actually does

color = texture[ ((int)uv.x) % 256 ];

and it makes sense: imagine you address your first texel using 0.0f and have uv wrapping enabled. if 1.0 were the right-most corner of the texture, then wrapping around would mean that the next time you reach the first texel is 1.0000001f; that would make the first wrap 1.0 in size (1.0 - 0.0) and the second (2.0 - 1.0000001) = 0.9999999

 

so, to wrap textures around, you simply need to use

dest_u = frac(src_u);

but it might not be correct to do it in the vertex shader: if a triangle crosses the border, you'd need to frac per pixel.

if triangles don't cross the border, then you could just as well pre-frac the values on the CPU side (see the sketch below).
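
a CPU-side sketch of the same addressing math (the helper name is just illustrative):

#include <cmath>

// nearest-neighbor texel index with wrap addressing, as described above
int WrapTexelIndex(float u, int textureSize)
{
    float fracU = u - std::floor(u);  // frac(): keeps the result in [0, 1)
    int index = (int)(fracU * textureSize);
    return index % textureSize;       // guards against fracU * size rounding up
}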




#5227519 How much video memory do I really have?

Posted by Krypt0n on 06 May 2015 - 09:21 AM

are those with or without mipmaps? if you are mipmapping (which I'd expect for a terrain texture, otherwise you'd have noise all over the place and it might render slowly), you'd need to add another 300MB (a full mip chain costs about a third extra).

some formats might also not be natively supported, e.g. L8 might actually be stored as RGBA8 with just R used.

it also depends on whether your textures are 'dynamic' etc.; the driver might create some temporary copies to avoid stalling rendering while the CPU is filling the buffer for the next frame.

also, the driver allocates some buffers, maybe 256MB, for random stuff:
- page tables for memory layouts
- shader binaries
- some framebuffers (in 1080p it's 8MB per buffer)
..

windows might allocate some buffers too, e.g.
- if you alt+tab to another window, it should not fail just because the editor uses all the VMem
- some textures are needed (e.g. the desktop background etc.)
- if you open your task manager and check the extended performance view, you'll see there are probably thousands of 'handles'; while those can be anything (windows, fonts, files etc.), if just a few of them are gfx resources (fonts, DIBs etc.) it can easily accumulate
- (in general, windows is quite crazy about allocating a big chunk of resources right at the start; check how much main memory is gone after booting, that's roughly the percentage of vmem that is probably also gone)

textures also might have some overhead: with mips there might be some alignment, but also some extra space for compression information...
the driver might also divide the memory into areas for management reasons (like 128MB for VBs even if you just use 1KB).

in short, NO WAY can you expect to get all you've paid for ;). I always count on 512MB less.

the best way to test it is probably to create a tiny application that allocates 256x256 RGBA32 textures until an allocation fails. place it on the desktop, reboot, start it, and you'll see your peak budget. (see the sketch below)
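
a minimal sketch of that probe in D3D11 (note that modern drivers can overcommit and page resources out, so treat the number as an estimate):

#include <d3d11.h>
#include <vector>

size_t ProbeVideoMemory(ID3D11Device *device)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 256;
    desc.Height = 256;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // 32 bits per texel -> 256KB each
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    std::vector<ID3D11Texture2D*> textures;
    ID3D11Texture2D *tex = nullptr;
    while (SUCCEEDED(device->CreateTexture2D(&desc, nullptr, &tex)))
        textures.push_back(tex);

    size_t bytes = textures.size() * 256 * 256 * 4;
    for (ID3D11Texture2D *t : textures)
        t->Release();
    return bytes;
}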


#5227315 Triangle is not visible - PathTracing

Posted by Krypt0n on 05 May 2015 - 09:01 AM

I don't have another guess, and guessing is probably the most unproductive way for you to debug it ;)

you're at the beginning now, and while the beginning might seem like the most difficult part, later on tiny implementation issues can lead to the weirdest results that nobody has seen before, and you won't get any guesses at all.

so I'll rather share the way I usually debug this.

my way to debug tracers is to add a pixel-picker function: you click on some problematic area and it calls a pick function with your breakpoint inside. it also comes in handy to have a mouse-over function that prints the color to the console; sometimes there is just a single wrong pixel and it's not trivial to click on it ;) (a sketch follows at the end of this post)

once you trap into that breakpoint, you can step through the functions and validate the outcome against your knowledge. e.g. here you know you've clicked on the triangle, so the intersection result should be 'true', right? you can step further to the secondary rays, edit the ray direction in your debugger to actually point at the light source (as you know where it is), and check whether you hit it. once that is working, you need to check whether the result is processed (aka shading) as you'd expect.

I know, this is a bit of a dive into it all, but it's a very deterministic way to find the bugs (there might be more than one, and hunting them by random guesses might not show a visible result, so you won't even know 100% whether you've fixed an issue and whether a previously checked part is now working).
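
a minimal sketch of the pixel picker, with illustrative names (TracePixel() stands in for whatever your per-pixel entry point is):

#include <cstdio>

struct Color { float r, g, b; };

// stub: in the real tracer this would share code with the main render loop
Color TracePixel(int x, int y)
{
    return Color{ 0.0f, 0.0f, 0.0f };
}

// call this from the UI on mouse click: it re-traces exactly one pixel,
// so a breakpoint inside TracePixel() lands on the problematic ray
void OnPixelPicked(int x, int y)
{
    Color c = TracePixel(x, y);
    std::printf("pixel (%d, %d) = %.4f %.4f %.4f\n", x, y, c.r, c.g, c.b);
}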




#5226196 So, what happened to god games ?

Posted by Krypt0n on 28 April 2015 - 10:29 PM

I had a thought about it last night: what do gods actually do?

nobody ever saw a god doing anything (I mean, provably by science); what gods do is affect you in a way that would otherwise be "luck", "destiny" etc.

maybe, instead of having a hand and affecting the world (B&W), a god could really 'tweak' the outcome of actions. e.g. someone prays for sun so he/she can go to the beach, someone else prays for rain for the crops. 4 players at a poker table pray for luck. someone prays that her husband comes back safe from the hunt.

as a god, you could spend your "power" points on this, and you could gain those power points by having people believe in you, which is really a hard thing if you cannot manifest but only influence things. and it's not just about serving people: you could just as well arrange for a whorehouse to collapse; very religious people would "know" that god punished those unholy persons, their belief would rise, and you'd get power points.

 

now the gameplay goal idea (I probably shouldn't share this, because you're all gonna steal it, right? :D)

how about having a starting setup and an outcome after e.g. 100 years that you have to bring about? e.g. all your people live on an island with a volcano, and they will go extinct if they don't settle over to the neighboring island without a volcano. again, you cannot just pick some up and throw them over; you need to affect their lives in the subtle "luck" way.

if you could move back and forth in the simulation, like in a recorded timeline, and see how actions in the past propagate...

possible paths

- you tweak the fishing 'luck' to be better the closer it is to the other island; eventually some fishermen might set up a base on the other island, maybe already preparing the fish there, maybe workers move over, women stay for longer, they build a hut, a house, done *yay*

- you could give one guy all the luck in winning big money; eventually he'd want some big place for himself, and seeing the island in the distance he might get the big insane idea to make it his island, done *yay*

(just came to my mind: do you guys remember the Star Trek Voyager episode with the time-jumping ship captain who was destroying planets, civilizations, moons etc. to save his species, and his wife, from an accident that had happened? that was kind of a nice setting, and like a god game)




#5226129 So, what happened to god games ?

Posted by Krypt0n on 28 April 2015 - 12:51 PM

the best part of god games is actually developing them, isn't it?

 

I had tons of fun programming Conway's Game of Life. my first implementation was on a 1MHz Atari in Atari BASIC; it ran at 1fps at best, but I watched every single frame of how that simple logic develops into complex interactions.

some time later I read an article about simulations of fish (you know, big fish eat smaller fish... which eat smaller fish... like there will never be a balance between hunters and...

later I stumbled on an article about Braitenberg vehicles: how totally simple rules created not just interaction but also psychological behaviour that mapped very accurately to 'love', 'fear' etc., and to behaviour like mating between ducks... so I programmed a simulation of those organisms. sadly, that got stuck right at the place where most of the papers ended as well.

 

when I read about Black & White in game magazines as a kid (and I think that was like 1-2 years before the game was due to ship): every little "being" had a daily life, had feelings and needs, and they were praying to you, and you could decide whether to help them or maybe even punish them. sometimes several prayers were contradictory (at least that's what the magazine said) and you as a god would need to make decisions.

yes, yet again I started to create a simulation like that.

 

but when it comes to playing? hmm, I'm not sure what would be tempting about a god game. most I've seen added some kind of war system to provide a goal, but really, that's what usually rather put me off.

but maybe that's the problem, and why god games are rather rare. it's kind of like voxel games, which existed long before Minecraft (in 2D and 3D) but never had a real goal. Minecraft gave you not just a sandbox but a purpose: prepare for the night, defend yourself, etc.

 

 

god games could be crappy-looking 2D top-down simulations if you had some kind of motivation. the micromanagement isn't such a big problem, I guess; it becomes a problem in Black & White because there is a story and a lot of emphasis on it. but you should be able to be a god without listening to any prayers.

 

 

now I feel like programming a god sandbox simulation :)





