
Tournicoti

Member Since 30 Aug 2009
Offline Last Active Jun 02 2014 12:05 AM

#4982232 Self-shadowing on curved shapes problem

Posted by Tournicoti on 20 September 2012 - 10:24 PM

Instead of using if statements while looping through split indices, get the correct split index by adding up the results of the conditional tests for all the splits. This does away with the need for branching.
...


Thanks!


So here's my latest version [EDITED]:

#define CSM_MAXSPLITS 8


SamplerState shadowMapSampler
{
	 Filter = MIN_MAG_MIP_POINT;
	 AddressU = BORDER;
	 AddressV = BORDER;
	 BorderColor = float4(1.0f,1.0f,1.0f,1.0f);
};

float3 sampleColorCSM(in float posProjZ,in float3 posWorld,in float3 normalWorld)
{
	 float3 uv;
	 float bias;
	 uint split=0;
	 float4 posLight;

	 if (posProjZ<g_CSM_depths[0].x || posProjZ>g_CSM_depths[g_CSM_nbSplits].x) return float3(1.0f,1.0f,1.0f);

	 [unroll (CSM_MAXSPLITS)]
	 for (uint i=1;i<=g_CSM_nbSplits;i++)
		  split+=posProjZ>g_CSM_depths[i].x;

	 posLight=mul(float4(posWorld,1.0f),g_CSM_VP[split]);
	 posLight/=posLight.w;

	 bias=dot(normalWorld,-g_vDirectionalLightDirection);
	 bias=clamp(g_CSM_depths[split].y*sqrt(1.0f-bias*bias)/bias,g_CSM_depths[split].y,g_CSM_depths[split].z);

	 uv=float3(posLight.xy*float2(0.5f,-0.5f)+0.5f,split);

	 return (g_CSMMaps.Sample(shadowMapSampler,uv).x+bias>posLight.z)*g_ColorCSMMaps.Sample(shadowMapSampler,uv).xyz;
}

float sampleCSM(in float posProjZ,in float3 posWorld,in float3 normalWorld)
{
	 float3 uv;
	 float bias;
	 uint split=0;
	 float4 posLight;

	 if (posProjZ<g_CSM_depths[0].x || posProjZ>g_CSM_depths[g_CSM_nbSplits].x) return 1.0f;

	 [unroll (CSM_MAXSPLITS)]
	 for (uint i=1;i<=g_CSM_nbSplits;i++)
		  split+=posProjZ>g_CSM_depths[i].x;

	 posLight=mul(float4(posWorld,1.0f),g_CSM_VP[split]);
	 posLight/=posLight.w;

	 bias=dot(normalWorld,-g_vDirectionalLightDirection);
	 bias=clamp(g_CSM_depths[split].y*sqrt(1.0f-bias*bias)/bias,g_CSM_depths[split].y,g_CSM_depths[split].z);

	 uv=float3(posLight.xy*float2(0.5f,-0.5f)+0.5f,split);

	 return g_CSMMaps.Sample(shadowMapSampler,uv).x+bias>posLight.z;
}

g_CSM_depths[split].x is the minimal depth of split #split; g_CSM_depths[split+1].x is its maximal depth.
g_CSM_depths[split].y is the minimal depth bias of split #split (removes acne on flat surfaces).
g_CSM_depths[split].z is the maximal depth bias of split #split (removes acne on bumpy surfaces).
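The branchless split selection and the slope-scaled bias clamp can be sketched in Python (this is an illustration, not the poster's code; `depths` stands in for the `g_CSM_depths[i].x` boundaries, and `min_bias`/`max_bias` for the `.y`/`.z` components):

```python
import math

def pick_split(pos_proj_z, depths):
    """Branchless cascade selection: sum the boolean split tests
    instead of branching, as in the HLSL [unroll] loop.
    depths[0] is the near bound, depths[-1] the far bound; out-of-range
    depths are assumed to have been rejected beforehand."""
    return sum(pos_proj_z > d for d in depths[1:])

def slope_scaled_bias(cos_theta, min_bias, max_bias):
    """bias = min_bias * tan(theta), clamped to [min_bias, max_bias],
    where cos_theta = dot(normal, -lightDir)."""
    tan_theta = math.sqrt(1.0 - cos_theta * cos_theta) / cos_theta
    return min(max(min_bias * tan_theta, min_bias), max_bias)

depths = [0.0, 10.0, 30.0, 100.0]   # hypothetical split boundaries
print(pick_split(5.0, depths))       # -> 0 (first cascade)
print(pick_split(50.0, depths))      # -> 2 (last cascade)
```

Note that for an in-range depth the comparison against the far bound is always false, so summing over all boundaries past the first still yields the right index.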

Thanks for reading!


#4981993 Self-shadowing on curved shapes problem

Posted by Tournicoti on 20 September 2012 - 05:53 AM

Thanks, it solved the problem!

If you want to have a look at my HLSL colored CSM sampling functions that use it:

#define CSM_MAXSPLITS 8

SamplerState shadowMapSampler
{
	Filter = MIN_MAG_MIP_POINT;
	AddressU = BORDER;
	AddressV = BORDER;
	BorderColor = float4(1.0f,1.0f,1.0f,1.0f);
};

bool getSplitUV(in float posProjZ,in float3 posWorld,out uint split,out float2 uv,out float posLightZ)
{
	split=1;
	uv=0;
	posLightZ=0;

	[unroll (CSM_MAXSPLITS)]
	for (;split<=g_CSM_nbSplits;split++)
		if (posProjZ<g_CSM_depths[split].x) break;

	split--;
	if (split==g_CSM_nbSplits) return false;

	float4 posLight=mul(float4(posWorld,1.0f),g_CSM_VP[split]);
	posLight/=posLight.w;
	posLightZ=posLight.z;

	uv=posLight.xy*float2(0.5f,-0.5f)+0.5f;

	return true;
}

float3 sampleColorCSM(in float posProjZ,in float3 posWorld,in float3 normalWorld)
{
	uint split;
	float2 uv;
	float posLightZ,factor;

	if (getSplitUV(posProjZ,posWorld,split,uv,posLightZ))
	{
		factor=saturate(dot(normalWorld,g_vDirectionalLightDirection));
		factor=saturate(sqrt(1.0f-factor*factor)/factor)*g_CSM_depths[split].y;
		factor=(g_CSMMaps.Sample(shadowMapSampler,float3(uv,split)).x+factor<posLightZ) ? 0.0f : 1.0f;
	}
	else
		factor=1.0f;

	return factor*g_ColorCSMMaps.Sample(shadowMapSampler,float3(uv,split)).xyz;
}

PS:
g_CSM_depths[split].x is the minimal depth of split #split; g_CSM_depths[split+1].x is its maximal depth.
g_CSM_depths[split].y is the depth bias of split #split.
Any suggestion or improvement is welcome!


#4981612 Self-shadowing on curved shapes problem

Posted by Tournicoti on 19 September 2012 - 03:52 AM

Hello

I have a self-shadowing problem that I can't solve by changing the depth bias:
[attached image: sm.PNG]

I use (colored) cascaded shadow maps, each map has its own (constant) depth bias.

I wonder if there's a way to use partial derivatives (ddx, ddy) to adjust the depth bias per pixel?

Thank you for any suggestion or help!

Nico


#4978378 What is the guy called with all the money? boss?

Posted by Tournicoti on 09 September 2012 - 02:11 PM

Easy one: the guy with all the money is called "Scrooge McDuck".
Can I have a candy?


#4976030 Chess AI with Neural Networks

Posted by Tournicoti on 03 September 2012 - 05:57 AM

Generally, neural networks are poor candidates for finite-state problems. There are specific algorithms for these problems that work much better.

NNs are well suited for:
- pattern recognition (generalization, classification)
- problems that can't be defined precisely, or that can evolve in an undefined way
- automatic learning (with a utility-based learning algorithm, for instance) for relatively simple tasks (competitive learning is interesting)

They can't be used in programs that must be formally proven: they are black boxes, and their behaviour is largely unpredictable.
Encoding inputs and outputs, setting network parameters (learning rates, how many layers, how many nodes per layer, etc.) is awfully difficult, and it often requires a very good understanding of how NNs work (and patience...).

So I'd suggest avoiding them whenever possible.


#4965957 What are the near and far plane used for?

Posted by Tournicoti on 03 August 2012 - 03:55 PM

The near and far planes bound the depth range you want displayed in your scene: everything you see lies within this range. The near plane is the minimal depth being displayed, and the far plane is the maximal depth you'll see. Maybe you can have a look at how the projection matrix is built?

Edit: These depths are expressed in view space. In clip space they are mapped from 0.0 (near-plane depth) to 1.0 (far-plane depth) by the projection matrix.
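As an illustration of that mapping (a sketch, not the poster's code), here is the standard D3D-style conversion of a view-space depth in [n, f] to the [0, 1] range after the perspective divide:

```python
def clip_depth(z_view, n, f):
    """Map a view-space depth in [n, f] to [0, 1] as a D3D-style
    projection matrix does after the perspective divide.
    depth = f * (z - n) / (z * (f - n)), which is non-linear in z."""
    return (f * (z_view - n)) / (z_view * (f - n))

print(clip_depth(0.1, 0.1, 100.0))    # near plane -> 0.0
print(clip_depth(100.0, 0.1, 100.0))  # far plane  -> 1.0
```

The non-linearity is worth noticing: with n = 0.1 and f = 100, a point only 1 unit from the camera already maps to a depth above 0.9.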


#4964870 Very odd problem with collision detection

Posted by Tournicoti on 31 July 2012 - 08:29 AM

Glad I can help you!

My idea is: "I assume at first there is no collision, and then if I find any positive case in the loop, there is a collision (and I can exit the loop right away)."
The problem is that you set collision to false inside your loop whenever you encounter a negative case, but it must be set to false only once, before the loop.

The second loop works because of the "break" statement

Even without the 'break' it would work. It's just that at that point we already know there is a collision, so we can skip the rest of the enumeration.
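A minimal Python sketch of that pattern (hypothetical data; the point is that the flag is initialized once, before the loop, and never reset inside it):

```python
def has_collision(mouse_position, collision_positions, tile_size=32):
    """Assume no collision, then set the flag on the first positive
    case and stop searching; the flag is never reset in the loop."""
    collision = False
    for pos in collision_positions:
        if mouse_position * tile_size == pos:
            collision = True
            break  # optional: we already know the answer
    return collision

print(has_collision(2, [32, 64, 96]))  # -> True  (2 * 32 == 64)
print(has_collision(5, [32, 64, 96]))  # -> False
```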


#4964854 Very odd problem with collision detection

Posted by Tournicoti on 31 July 2012 - 07:38 AM

Hello
collision = false;
for (int i = 0; i < Map.CollisionPosition.Count; i++)
	if (MousePosition * 32 == Map.CollisionPosition[i])
	{
		collision = true;
		break;
	}
Isn't that what you're trying to do?


#4933146 Recurrent neural network with bias node?

Posted by Tournicoti on 20 April 2012 - 05:05 AM

Hello domokato!

Hi guys,

I'm trying to use a genetic algorithm to train a recurrent neural network. I kind of understand what bias nodes are for in feed forward networks. Do I need one for a recurrent neural network?

Thanks


Yes, for the same reason as in feedforward networks: it shifts the activation function along the x axis.

For example, if I have a 2-input unit with one recurrent loop (on its output), the input vector at time n is [input1(n), input2(n), output(n-1), 1], so there are 4 weights too.
The output is then computed as in a feedforward network.

Nico

EDIT :

About bias :

I take this activation function
bool f(float v)
{
	return v>0.0f;
}
and I take a unit with 2 boolean inputs with weights equal to 1.

With a bias of 0, the unit performs an OR (input1+input2>0.0).
With a bias of -1, the unit performs an AND (input1+input2-1>0.0).

My point is that a bias must (or at least should) be added to any unit that performs a linear combination of its inputs.
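The OR/AND example can be checked directly; here is a Python translation of the threshold unit above:

```python
def unit(i1, i2, bias):
    """Threshold unit with both input weights equal to 1:
    fires when i1 + i2 + bias > 0."""
    return i1 + i2 + bias > 0.0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
# bias = 0  -> OR over the boolean inputs
print([unit(a, b, 0.0) for a, b in cases])   # -> [False, True, True, True]
# bias = -1 -> AND over the boolean inputs
print([unit(a, b, -1.0) for a, b in cases])  # -> [False, False, False, True]
```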


#4925094 Neural Network Genome Help Please :'(

Posted by Tournicoti on 25 March 2012 - 06:38 AM

Hello Gen

Is it possible to consider the list of weights as the genome itself?
Then you can alter and combine these lists to get new, altered or combined genomes.
Honestly I don't know how best to combine two genomes here, but I would first try some kind of average of the genomes.

Good luck!
Nico


#4923411 Neural Network Math Help ? :)

Posted by Tournicoti on 19 March 2012 - 01:34 PM

Hello CryoGenesis
I'm glad I can help :)
NB: it's recommended to 'normalize' your input values so that abs(input) < 1.
Otherwise you will feed huge values into the learning rule, and the weights will oscillate indefinitely instead of stabilizing.

For example, if you know the min and max of the values you provide to the ANN, you can apply something like this to each input:
i' = (i - min) / (max - min)
and provide i' instead of i.
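A minimal sketch of that rescaling (applied here to a whole list of hypothetical values):

```python
def normalize(values):
    """Min-max rescaling: i' = (i - min) / (max - min),
    so every value lands in [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(normalize([10.0, 15.0, 20.0]))  # -> [0.0, 0.5, 1.0]
```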

Bye !
Nico


#4923130 Neural Network Math Help ? :)

Posted by Tournicoti on 18 March 2012 - 03:46 PM

An example, maybe? Let's say I want to approximate a 2-parameter function (of x and y) with a single node.
I just have to add a third input component set to 1.0.
The node then has 2+1 inputs (so 3 weights too), and the input vector is [1.0, x, y],
so the integration is W0*1 + W1*x + W2*y.

So the code of a node 'without bias' is fine as it is: just add an extra 1.0 to its input, and you have a node with a bias.
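A quick Python sketch of that augmented-input trick (the weight values here are hypothetical):

```python
def node(weights, inputs):
    """Weighted sum over the augmented input [1.0, x, y, ...]:
    weights[0] acts as the bias and is learned like any other weight."""
    augmented = [1.0] + list(inputs)
    return sum(w * i for w, i in zip(weights, augmented))

# W0*1 + W1*x + W2*y with hypothetical weights [0.5, 2.0, -1.0]
print(node([0.5, 2.0, -1.0], [3.0, 4.0]))  # -> 0.5 + 6.0 - 4.0 = 2.5
```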


#4923120 Neural Network Math Help ? :)

Posted by Tournicoti on 18 March 2012 - 03:15 PM

(this is not activation += 1 but activation += W0)

I think you can add a bias to your nodes without modifying your code too much.
Add an extra component to your input array and set it to 1.0 at the start of the program. Your input vectors are now [1.0, i1, i2, ..., in].
Then W0, i.e. the weight associated with the constant input value 1.0, will evolve like any other weight.


#4923107 Neural Network Math Help ? :)

Posted by Tournicoti on 18 March 2012 - 02:37 PM

Whats the point of Bias?
Is it needed for the neural network to function?


The point of the bias is to shift the activation function along the x axis (and it can be treated as a constant input in the implementation, as I suggested).
It's needed for practical purposes: if you don't use a bias, the function you are approximating (i.e. the problem you are solving) must pass through (0, f(0)), where f is the activation function you chose; otherwise the net won't converge. With a bias you no longer have this limitation.

Oh and does the bias have to be used for every node or just the input nodes?


The bias has to be used with any node that performs signal integration, so typically all the nodes except the input ones (since those are just 'slots' that feed input into the net).


#4923101 Neural Network Math Help ? :)

Posted by Tournicoti on 18 March 2012 - 02:04 PM

The output of the neural network is always between 0.4 - 0.6.


...
(i1*W1 + i2 + W2 ... In * Wn)


... or maybe it's because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with constant input 1.

A simple way to add a bias is to add a 1.0 component to the input vector.



