

Tournicoti

Member Since 30 Aug 2009
Offline Last Active Apr 28 2015 04:52 AM

#5072522 Extract corners of projection matrix...

Posted by Tournicoti on 24 June 2013 - 10:40 AM

 

Quote: "The resulting point is now in view space. At this point you can ignore the w component of the vector and multiply the resulting vector by the view matrix."

I would say "multiply the resulting vector by the inverse of the view matrix".




#5072408 Extract corners of projection matrix...

Posted by Tournicoti on 24 June 2013 - 12:35 AM

Hello

 

Yes, if you transform the unit cube by (Mview*Mprojection)^-1, you'll get the frustum corners in world space (that is with row-major matrices; (Mprojection*Mview)^-1 otherwise).

 

EDIT: when I said row-major matrices, I meant row vectors with pre-multiplication (vector * matrix).

So, (Mview*Mprojection)^-1 in DirectX.

And, AFAIK, (Mprojection*Mview)^-1 in OpenGL.
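
Here is a minimal D3DX sketch of that (my addition, not from the original post), assuming view and projection hold your D3DXMATRIX view and projection matrices; note that in Direct3D the clip-space "unit cube" has z in [0, 1]:

D3DXMATRIX viewProj = view * projection;   // row vectors: Mview * Mprojection
D3DXMATRIX invViewProj;
D3DXMatrixInverse(&invViewProj, NULL, &viewProj);

D3DXVECTOR3 corners[8];
int c = 0;
for (int x = -1; x <= 1; x += 2)
    for (int y = -1; y <= 1; y += 2)
        for (int z = 0; z <= 1; ++z)       // z in [0, 1] in Direct3D clip space
        {
            D3DXVECTOR3 clipCorner((float)x, (float)y, (float)z);
            // TransformCoord also performs the w-division, which is needed here
            D3DXVec3TransformCoord(&corners[c++], &clipCorner, &invViewProj);
        }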




#5070485 Homogenous space

Posted by Tournicoti on 17 June 2013 - 11:57 AM

What I tried to say is that if there is at least one perspective projection matrix in your transformation, the w-division is needed (edit: because w is no longer 0 or 1!).

With projective spaces, the transformation is not just a matter of multiplying matrices: you also divide the homogeneous vector by w at the end to get the equivalent 3D Cartesian vector (x, y, z).
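
For example, a minimal D3DX sketch of that divide (my addition), assuming worldViewProj contains a perspective projection:

D3DXVECTOR3 worldPoint(1.0f, 2.0f, 3.0f);   // some point (hypothetical value)
D3DXMATRIX  worldViewProj;                  // assumed to be world * view * projection (row vectors)
// ... build worldViewProj here ...

D3DXVECTOR4 clip;
D3DXVec3Transform(&clip, &worldPoint, &worldViewProj);   // multiplies (x, y, z, 1) by the matrix; w is no longer 1

D3DXVECTOR3 cartesian(clip.x / clip.w,      // the w-division gives back the
                      clip.y / clip.w,      // equivalent 3D Cartesian vector
                      clip.z / clip.w);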




#5070476 Homogenous space

Posted by Tournicoti on 17 June 2013 - 11:37 AM

Hello

 

The only two things you need to know about homogeneous coordinates:

  • w=0 for direction 4D vectors (e.g. normals)
  • w=1 for position 4D vectors (e.g. vertex positions)

When rendering, the GPU does the w-division itself; you don't need to do it in this context.

If you do calculations on 4D vectors yourself, be sure to do the w-division at the end whenever your transformation includes (at least) one perspective projection matrix.
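
A minimal D3DX sketch of the difference (my addition): the same matrix applied to a position (w = 1) and to a direction (w = 0); any translation in the matrix moves the position but leaves the direction unchanged:

D3DXMATRIX world;                         // assumed: some rotation * translation
D3DXVECTOR3 position(1.0f, 2.0f, 3.0f);   // a vertex position
D3DXVECTOR3 direction(0.0f, 1.0f, 0.0f);  // a normal / direction

D3DXVECTOR3 movedPosition, movedDirection;
D3DXVec3TransformCoord (&movedPosition,  &position,  &world);   // treats the vector as (x, y, z, 1)
D3DXVec3TransformNormal(&movedDirection, &direction, &world);   // treats the vector as (x, y, z, 0)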

 

Hope it's clear.




#5060158 [C#] Need help with Hopfield ANN

Posted by Tournicoti on 07 May 2013 - 06:57 PM

Not sure if you did this in your initialization, but for the weight matrix:

 

W[i][i] = 0

W[i][j] = W[j][i]

... is generally used: it ensures the network will reach a stable state (no self-connections, and symmetric weights).

 

 

 

The capacity of a Hopfield network is approximately nbPatterns = 0.138 * nbNodes, so 4 nodes is too small; maybe try 10x10 nodes (about 13 patterns in terms of capacity).

 

I noticed you have implemented a synchronous update of the network state. That is entirely possible, even though the original model was asynchronous and stochastic.

 

Hope it can help!

 

If you still have problems with your implementation, I can post some pseudo-code if necessary
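
In the meantime, here is a minimal sketch of a Hebbian weight initialisation that satisfies the constraints above (my own C++-style sketch, not the thread's C# code; the stored patterns are assumed to be encoded with -1 / +1):

#include <cstddef>
#include <vector>

// patterns[p][i] is the state (-1 or +1) of node i in stored pattern p
std::vector<std::vector<float>> buildWeights(const std::vector<std::vector<int>> &patterns,
                                             int nbNodes)
{
    std::vector<std::vector<float>> w(nbNodes, std::vector<float>(nbNodes, 0.0f));

    for (int i = 0; i < nbNodes; ++i)
        for (int j = i + 1; j < nbNodes; ++j)       // fill the upper triangle only...
        {
            float sum = 0.0f;
            for (std::size_t p = 0; p < patterns.size(); ++p)
                sum += (float)(patterns[p][i] * patterns[p][j]);   // Hebbian term
            w[i][j] = w[j][i] = sum;                // ...and mirror it: W[i][j] = W[j][i]
        }                                           // the diagonal stays at 0: W[i][i] = 0
    return w;
}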




#5044314 Eight, Nine, Ten...

Posted by Tournicoti on 18 March 2013 - 12:55 PM

Obviously, the coder has 8 fingers on each hand.




#5038864 funny idea for an ann game

Posted by Tournicoti on 03 March 2013 - 04:43 PM

I believe he wants to use an ANN for all of the game's logic.

 

Quote: "I don't believe he understands the point (if any) of ANNs."

So what's the point (if any) of ANNs?




#5038486 video and ann

Posted by Tournicoti on 02 March 2013 - 01:12 PM

Sorry, I don't know enough about this HTM model, but a McCulloch & Pitts based neuron is essentially a hyperplane: it fires when the weighted sum w1*x1 + ... + wn*xn + bias is above zero, so its decision boundary can be visualized as a line in 2D or a plane in 3D in a visual application.




#5037832 MADE A SIMPLE NEURAL NETWORK, BUT NOT SURE IF I DID IT RIGHT

Posted by Tournicoti on 28 February 2013 - 06:56 PM

I would like to add another little thing :

 

What I described is the Perceptron as it was designed by Frank Rosenblatt (the original version).

However, there's another (very slightly different!) version in which -1 and 1 are used instead of 0 and 1 to encode booleans.

 

 

In terms of learning efficiency it's better because:

  • with 0 and 1, a weight evolves only when deltaOutput is not zero and its input is not zero.
  • with -1 and 1, the weights always evolve when deltaOutput is not zero.

 

And it takes almost nothing to change (see the sketch after this list):

  • change the input encoding (no longer 0 and 1, but -1 and 1)
  • change the activation function: sum > 0.0f ? 1.0f : -1.0f
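
A minimal sketch of that bipolar activation (my addition), written as a free function; the weighted sum is unchanged, only the input encoding and the output switch from 0/1 to -1/1:

#include <cstddef>
#include <vector>

// inputs are assumed to be encoded as -1.0f / 1.0f (plus the bias input, which stays 1.0f)
float computeBipolarOutput(const std::vector<float> &inputs,
                           const std::vector<float> &weights)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i)   // dot product of inputs and weights
        sum += inputs[i] * weights[i];
    return sum > 0.0f ? 1.0f : -1.0f;                 // -1/1 instead of 0/1
}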

Bye




#5037171 MADE A SIMPLE NEURAL NETWORK, BUT NOT SURE IF I DID IT RIGHT

Posted by Tournicoti on 27 February 2013 - 08:52 AM

The learning rate is just a user-defined constant here. It's usually a value between 0 and 1.

It's like defining the 'step' of the adaptation.

 

I would suggest trying low values first (0.2? even less?), because inputs[i]*deltaOutput is a coarse value here: -1, 0 or 1, so each non-zero update moves a weight by the full learning rate.

 

Afterwards, the goal is to increase the learning rate so that learning is quicker while staying accurate enough for the wanted approximation.

 

Hope I'm still clear (sorry for my English, I'm getting tired).

 

Bye




#5037128 MADE A SIMPLE NEURAL NETWORK, BUT NOT SURE IF I DID IT RIGHT

Posted by Tournicoti on 27 February 2013 - 07:09 AM

Hello

This seems to be a good start for me smile.png

In fact you are precisely describing (or wanting?) the Perceptron model.

 

This is pseudo-code :

#include <cstdlib>   // std::rand, RAND_MAX
#include <vector>

class Neuron
{
public:
    std::vector<float> inputs;   // input values (true is 1.0f, false is 0.0f); could also reference an external array
    std::vector<float> weights;

    explicit Neuron(int nbInputs)
        : inputs(nbInputs + 1)   // +1 is for including the bias
        , weights(nbInputs + 1)
    {
        // fill the weights array with random float values between -1 and 1
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;

        // set 1.0f at the position corresponding to the bias in the inputs array (here: the last position)
        inputs.back() = 1.0f;
    }

    float computeOutput() const
    {
        float sum = 0.0f;                               // dot product of inputs and weights
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += inputs[i] * weights[i];
        return sum > 0.0f ? 1.0f : 0.0f;                // step activation
    }

    void learn(float desiredOutput, float learningRate) // adaptation of the weights
    {
        float output = computeOutput();
        float deltaOutput = desiredOutput - output;     // so -1.0f, 0.0f or 1.0f

        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] += learningRate * inputs[i] * deltaOutput;
    }
};

 

Hope it makes sense.

 

For an OR, it should converge to weights like w1 = w2 = lambda and wBias = 0, with lambda > 0.
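
And as a hypothetical usage sketch (my addition, assuming the Neuron class above is in the same file): a small training loop on the OR truth table, with a learning rate of 0.2:

#include <cstdio>

int main()
{
    Neuron n(2);

    // x1, x2, desired output (OR truth table)
    const float patterns[4][3] = { {0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1} };

    for (int epoch = 0; epoch < 100; ++epoch)   // a few passes over the 4 patterns
        for (int p = 0; p < 4; ++p)
        {
            n.inputs[0] = patterns[p][0];
            n.inputs[1] = patterns[p][1];
            n.learn(patterns[p][2], 0.2f);
        }

    for (int p = 0; p < 4; ++p)                 // check what the neuron has learned
    {
        n.inputs[0] = patterns[p][0];
        n.inputs[1] = patterns[p][1];
        std::printf("%g OR %g -> %g\n", patterns[p][0], patterns[p][1], n.computeOutput());
    }
    return 0;
}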

 

Good luck




#5036283 Finding a point inside a Pyramid

Posted by Tournicoti on 25 February 2013 - 03:19 AM

Here is some code to show how it could be implemented:

class PlaneD // a class that describes a plane, with a normal and a distance
{
public:
    D3DXVECTOR3 normal;
    float distance;


    PlaneD(const D3DXVECTOR3 & n, const D3DXVECTOR3 & p) // a constructor, given the normal and a point of the plane
        :normal(n)
        ,distance(D3DXVec3Dot(&n, &p))
    {}
};


// ... and 2 functions I use to check if a point is 'behind' a plane:

float computePointPlaneDistance(const PlaneD & plane, const D3DXVECTOR3 & point)
{
    return D3DXVec3Dot(&plane.normal, &point) - plane.distance;
}

bool pointBehindPlane(const D3DXVECTOR3 & point, const PlaneD & plane)
{
    return computePointPlaneDistance(plane, point) <= 0.0f;
}

As you can see I use the D3DX API. You can just replace D3DXVECTOR3 and D3DXVec3Dot with your own code.

Hope it helps.

EDIT: Be sure to use a normalized vector for the plane's normal, otherwise the distance calculations will be wrong.
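
As a hypothetical follow-up sketch (my addition) built on the functions above: a point is inside the pyramid, or any convex volume, when it is 'behind' every bounding plane (all the normals are assumed to point outwards):

bool pointInsideConvexVolume(const D3DXVECTOR3 & point, const PlaneD * planes, int planeCount)
{
    for (int i = 0; i < planeCount; ++i)
        if (!pointBehindPlane(point, planes[i]))
            return false;   // in front of at least one plane: outside
    return true;            // 'behind' every plane: inside the convex volume
}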




#5036255 Finding a point inside a Pyramid

Posted by Tournicoti on 24 February 2013 - 09:49 PM

A pyramid is a convex volume, so if you can check your point against the 4 planes defining your pyramid, it's done!

So, just compute the 4 plane equations and check if your point is 'below' (in terms of half-spaces) all of the planes.

Good luck




#5035410 Rotation of a Bounding Box around a point that is not the objects centre

Posted by Tournicoti on 22 February 2013 - 08:23 AM

It's possible to decompose this rotation around a fixed point into 3 transformations (see the D3DX sketch after the list):

 

- first a translation, so that the fixed point becomes the origin

- then a 'standard' rotation (around the origin)

- then the inverse of the translation, to move the box back
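
A minimal D3DX sketch of those 3 steps (my addition; row-vector convention as in Direct3D, rotating around the Y axis as an example):

D3DXVECTOR3 pivot(2.0f, 0.0f, 5.0f);   // the fixed point (hypothetical value)
float angle = D3DX_PI / 4.0f;          // the rotation angle

D3DXMATRIX toOrigin, rotation, back;
D3DXMatrixTranslation(&toOrigin, -pivot.x, -pivot.y, -pivot.z);   // 1. move the fixed point to the origin
D3DXMatrixRotationY(&rotation, angle);                            // 2. 'standard' rotation around the origin
D3DXMatrixTranslation(&back, pivot.x, pivot.y, pivot.z);          // 3. move back

D3DXMATRIX rotateAroundPivot = toOrigin * rotation * back;        // apply in this order (row vectors)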

 

Hope it can help.




#4982654 Self-shadowing on curved shapes problem

Posted by Tournicoti on 22 September 2012 - 07:39 AM

Maybe I should be more specific about how it works.

There are 2 maps of the same dimensions:
  • a depth map, like in standard shadow mapping
  • a color map that stores the filtering colors
Generation of the shadow map:
  • the color map is filled with (1,1,1)
  • the opaque geometry is rendered to the depth map only
  • the transparent geometry is alpha-blended onto the color map, reading the depth map.
Scene rendering with colored shadows:

Is the point shadowed (depth test)?
  • Yes: return (0,0,0)
  • No: sample the color map.
This filter is then multiplied by the light color to get the filtered light color that can be used in the lighting calculations afterwards.
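
A minimal sketch of that per-point decision (my addition; in the real renderer this happens in the pixel shader, here written as a plain function with the sampled values passed in):

// pointDepth  : depth of the shaded point, as seen from the light
// storedDepth : value read from the depth map
// storedColor : value read from the color map ((1,1,1) where nothing transparent was drawn)
D3DXVECTOR3 filteredLightColor(const D3DXVECTOR3 & lightColor,
                               float pointDepth, float storedDepth,
                               const D3DXVECTOR3 & storedColor)
{
    if (pointDepth > storedDepth)               // depth test: occluded by opaque geometry
        return D3DXVECTOR3(0.0f, 0.0f, 0.0f);   // fully shadowed
    return D3DXVECTOR3(lightColor.x * storedColor.x,   // otherwise the light is
                       lightColor.y * storedColor.y,   // filtered by the color map
                       lightColor.z * storedColor.z);
}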


NB:
Since this shadow map stores the depth and the color filter of a light ray reaching opaque geometry, it can be used on opaque geometry only.
I still use standard shadow mapping on transparent geometry, taking only the depth data of this shadow map into account.


Thank you so much for the help I got!


(Attached screenshot: alwaysbetterthanblabla.PNG)



