
Spencer

Member
  • Content count

    194

Community Reputation

126 Neutral

About Spencer

  • Rank
    Member
  1. Reinforcement learning

    Here is another good reference, I think. --Spencer EDIT: had some problems with the link thingy
  2. Quote: Original post by cwhite
     Quote: Original post by uutee
     1) In multilayer networks, does EVERY neuron (excluding input neurons) have a link to the bias neuron, whose value is always -1?
     No, the topology can be arbitrary. Bias neurons are recommended, though.
     Quote: For most nontrivial problems they are needed. NNs without them can only learn problems that are linearly separable. For example the XOR function can't be approximated without bias neurons...
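For context on the XOR example in the quote, here is a hard-wired sketch (the weights and thresholds are illustrative choices of mine, not from the thread; step activations assumed):

```cpp
#include <cassert>

// A 2-2-1 network with step activations and hand-picked weights that
// computes XOR. The bias terms (-0.5, -1.5, -0.5) set each unit's
// firing threshold; this particular construction depends on them.
static int step(float x) { return x > 0.0f ? 1 : 0; }

int xorNet(int a, int b) {
    int h1 = step(a + b - 0.5f);  // fires when at least one input is 1
    int h2 = step(a + b - 1.5f);  // fires only when both inputs are 1
    return step(h1 - h2 - 0.5f);  // "h1 AND NOT h2"
}
```

With these thresholds, xorNet(0,0) and xorNet(1,1) give 0 while the mixed inputs give 1.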
  3. Reductionism and intelligence

    If there were such a thing as free will (which I personally am very sceptical about) I would definitely go for 3. But due to my scepticism I can't argue about it :)
  4. dude...thanks :) And thanks to all who replied!
  5. Hi! I thought the LUT for the normals was located in an external file that had to be downloaded from ID or something...are they actually provided in the file? If so, where in the file? If they are, all my problems are solved ;) I will test your suggestion for optimization as soon as I get the normals right. And as for the "doesn't work like I want it to": it is only the normals that end up wrong. Everything else is fine... Thanks for the replies.
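For reference, a sketch of how the lookup could work once the table is in hand. Assumptions: each compressed MD2 vertex carries a one-byte index into a 162-entry table of precomputed unit normals, and the full table lives in `anorms.h` of the Quake II source release rather than inside the .md2 file; the three entries below are reproduced from memory, so verify them against that header.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// First few entries of the precomputed normal table (from memory;
// the authoritative 162-entry list is anorms.h in the Quake II source).
static const float kAnorms[][3] = {
    {-0.525731f, 0.000000f, 0.850651f},
    {-0.442863f, 0.238856f, 0.864188f},
    {-0.295242f, 0.000000f, 0.955423f},
};

// Assumed on-disk vertex layout for an MD2 frame vertex.
struct Md2Vertex {
    uint8_t vertex[3];        // compressed position
    uint8_t lightNormalIndex; // index into the normal table
};

// Fetch the precomputed normal instead of averaging face normals.
void lookupNormal(const Md2Vertex& v, float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = kAnorms[v.lightNormalIndex][i];
}
```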
  6. Quote: Original post by Samurai Jack
     Indeed, STL may make your code huge if you use it. As for myself, I use STL whenever I have to, but it is far better to avoid it whenever possible. What I mean is, if you know the number of entries (vertices), you don't need a resizable buffer. So if you're sure that your mesh has 1200 vertices and 500 faces you don't need a dynamic buffer. D3DXVECTOR3 *pVertices = new D3DXVECTOR3[numVertices]; When using OpenGL, I suppose, in the end you will have: NumVertices = NumFaces*3; NumNormals = NumVertices;

     Sure thing, however optimization will be done _after_ the stuff works...

     Quote: As a debug approach, I would suggest that you compute normals before duplicating the vertices, and render. Then try to compute normals after your whole mesh is generated. Then you will be able to compare the visual results. Maybe they aren't so dramatic? I'm only guessing.

     I did that before I decided to use glDrawArrays, and it looked right. The difference is huge...

     Quote: A good place for algebra (matrices, vectors, flat/smooth normals, quaternions) is Euclidean Space by Martin Baker. Try to enter martin baker math into Google and check some results. I'm sure it will help you out!

     Thanks for the tips...but I am sure the math is not the problem.

     Quote: As for math libraries, you could use the DirectX math libraries, they are awesome - and besides, they are in the 9th release, they are tested and approved. An alternative is the nvidia math library on developer.nvidia.com.

     I am developing on a Linux box so the DirectX lib is not an option ;)
  7. Hi! Hope this is the right forum for my question. Anyhow, in my .md2 model loader I want to convert the data structure to be able to use glDrawArrays. This involves duplicating the vertices and the texture coordinates. So far no problems. But when I want to calculate the normals it doesn't work like I want it to. What I do is: first I create all the vertices, as you would probably normally do, by reading the vertex indices from the .md2 file, and then I go through the triangle list and duplicate each vertex to the positions where it is used. I do the same thing when I calculate the normals. In my mind this would give the same normals for duplicated points, but this does not seem to be the case. To explain further, I will paste the relevant part of my code:

     Point *p;
     vector<Point*> tmpPoints;
     vector<Vector*> tmpNormals;
     float x, y, z;

     for(int i = 0; i < m->nrOfFrames; i++){
         // build the actual vertices
         for(int j = 0; j < header->numVertices; j++){
             y = frame->vertices[j].vertex[2] * frame->scale[2] + frame->translate[2];
             x = frame->vertices[j].vertex[0] * frame->scale[0] + frame->translate[0];
             z = frame->vertices[j].vertex[1] * frame->scale[1] + frame->translate[1];
             p = new Point(x, y, z);
             tmpPoints.push_back(p); // store the points temporarily
         }

         Vector *triNorm, *uv, *vv;
         int cnt = 0;

         // calculate a normal for each triangle surface
         for(int tris = 0; tris < header->numTriangles; tris++){
             uv = new Vector(tmpPoints[triangles[tris].vrtxIndices[0]],
                             tmpPoints[triangles[tris].vrtxIndices[1]]);
             vv = new Vector(tmpPoints[triangles[tris].vrtxIndices[0]],
                             tmpPoints[triangles[tris].vrtxIndices[2]]);
             m->frames[i].setTriangleNormal(uv->crossProduct(*vv), tris);
             delete uv;
             delete vv;
         }

         // use surface normals to calculate vertex normals
         for(int v = 0; v < header->numVertices; v++){
             triNorm = new Vector(0, 0, 0);
             for(int v2 = 0; v2 < header->numTriangles; v2++){
                 if((triangles[v2].vrtxIndices[0] == v) ||
                    (triangles[v2].vrtxIndices[1] == v) ||
                    (triangles[v2].vrtxIndices[2] == v)){
                     (*triNorm) += *(m->frames[i].getTriangleNormal(v2));
                     cnt++;
                 }
             }
             triNorm->divideByScalar((float)cnt);
             triNorm->normalize();
             cnt = 0;
             tmpNormals.push_back(triNorm); // store normals temporarily
         }

         // duplicate the vertices and normals
         for(int tr = 0; tr < header->numTriangles; tr++){
             m->frames[i].setPoint(*tmpPoints[triangles[tr].vrtxIndices[0]], tr*3+0);
             m->frames[i].setPoint(*tmpPoints[triangles[tr].vrtxIndices[1]], tr*3+1);
             m->frames[i].setPoint(*tmpPoints[triangles[tr].vrtxIndices[2]], tr*3+2);
             m->frames[i].setNormal(*tmpNormals[triangles[tr].vrtxIndices[0]], tr*3+0);
             m->frames[i].setNormal(*tmpNormals[triangles[tr].vrtxIndices[1]], tr*3+1);
             m->frames[i].setNormal(*tmpNormals[triangles[tr].vrtxIndices[2]], tr*3+2);
         }

         tmpPoints.clear();
         tmpNormals.clear();
         // advance to the next frame (the original "((char*)frame) += sizeOfFrame;"
         // is a compiler extension; this form is standard)
         frame = reinterpret_cast<decltype(frame)>(reinterpret_cast<char*>(frame) + sizeOfFrame);
     }

     Is my problem just a bug somewhere or "is my logic flawed"? Peace :) --Spencer
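As an aside on the vertex-normal step: the inner search over all triangles for every vertex is O(vertices x triangles). A common alternative (a sketch with minimal made-up types, not the Point/Vector classes from the post) accumulates each face normal onto its three corners in a single pass, then normalizes:

```cpp
#include <cmath>
#include <vector>

// Illustrative minimal types; not the poster's Point/Vector classes.
struct Vec3 { float x, y, z; };
struct Tri  { int idx[3]; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// One pass over the triangles instead of scanning every triangle per
// vertex: add each face normal to its three corners, then normalize.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& verts,
                                const std::vector<Tri>& tris) {
    std::vector<Vec3> normals(verts.size(), Vec3{0, 0, 0});
    for (const Tri& t : tris) {
        Vec3 u = sub(verts[t.idx[1]], verts[t.idx[0]]);
        Vec3 v = sub(verts[t.idx[2]], verts[t.idx[0]]);
        Vec3 n = cross(u, v);
        for (int c = 0; c < 3; ++c) {
            normals[t.idx[c]].x += n.x;
            normals[t.idx[c]].y += n.y;
            normals[t.idx[c]].z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```

Duplicating afterwards then just copies normals[index] for every corner, so duplicated vertices get identical normals by construction.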
  8. ATI drivers in FC3

    hi...actually I think you are being a bit unfair to ATI. I have a Radeon 9600 and ever since I first installed the drivers (ver 3.10.something iirc) I have had no problems other than the one time I didn't RTFM well enough. As a matter of fact I had some problems installing NVIDIA drivers on one computer. As for stability, I have had no issues whatsoever with my card or drivers. What I am trying to say here is that I _think_ people only hear that ATI drivers will mess up both your head and your computer and decide that ATI sucks without trying for themselves. And, with the new drivers coming soon (yeah I know, we've heard it before ;)) with xorg 6.8 support and GLSL and stuff, I really think ATI is back in the Linux "market". Just my 2 coins to the discussion peace --Spencer
  9. Yes, I was thinking about that, and perhaps skipping the octree and using some other grid-like structure of some sort..maybe...and culling the grids.... Thanks a bunch
  10. Hello, I am currently implementing some map and terrain classes in my game. But I have trouble with efficient rendering of the landscape. What I have is a very simple map format which only states the height and width of the map in number of tiles (sort of quads split in two to make triangles), the size of the quads and an image name containing the heightmap....not good but works for now. Furthermore, my thought now is to build all the "tiles" in my map and insert them in my octree so I can cull them against the view frustum. Then I get a list of tiles that are visible each frame and I use glDrawArrays (sorry for the specifics, but you get the idea) to render the triangles. However, the results I get now are crappy, and although I think much of the overhead comes from my data structures right now, I was wondering if there are any other good ways of doing this? That is, store the map smartly and be able to decide quickly each frame which tiles are visible.. thanks a bunch
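Since the tiles sit on a regular 2D grid, one option that avoids a tree entirely is to clamp the frustum's axis-aligned bounding box to a tile index range and draw only that window. A sketch (all names and the AABB inputs are assumptions, not from the post):

```cpp
#include <algorithm>
#include <cassert>

// Sketch: tiles laid out on a regular grid of tileSize world units.
// frustumMinX..frustumMaxX and frustumMinZ..frustumMaxZ would come from
// the view frustum's axis-aligned bounding box (illustrative names).
struct TileRange { int x0, x1, z0, z1; };

TileRange visibleTiles(float frustumMinX, float frustumMaxX,
                       float frustumMinZ, float frustumMaxZ,
                       float tileSize, int mapWidth, int mapHeight) {
    TileRange r;
    r.x0 = std::max(0, (int)(frustumMinX / tileSize));
    r.x1 = std::min(mapWidth  - 1, (int)(frustumMaxX / tileSize));
    r.z0 = std::max(0, (int)(frustumMinZ / tileSize));
    r.z1 = std::min(mapHeight - 1, (int)(frustumMaxZ / tileSize));
    return r; // render tiles [x0..x1] x [z0..z1], e.g. with glDrawArrays
}
```

This is conservative (the AABB overestimates the frustum) but the per-frame cost is constant instead of one test per tile.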
  11. Hi all. Long time since my last post here...but I guess that doesn't matter..hmm ;) Anyhow, I am working on a small soccer simulation where two teams are supposed to learn how to play soccer against each other. Right now they are learning low-level functions like passing, shooting, intercepting etc. However, I have _big_ problems with this. For the passing skill, I train a neural network to output an angle and a force, both in the interval [0,1]. The angle is the modification in which to shoot the ball relative to the ball holder (used as -PI+nnAngle*2PI). The force is a multiplier for the ball's maximum speed...quite simple really. The input I use for this is a tuple with the relative angle to the receiver, the distance to the receiver, the change in angle in the next time step and the change in distance in the next time step. This way I get rotationally invariant data and the collection of training examples is simplified. It seems to me like there would be a strong correlation between the input and the output and thus no problem for a NN to approximate such a function. But no, my results are really crappy and the output is almost "random".... I have tried various implementations of neural nets so I don't think it's in the code either.... EDIT: I am using a feedforward network and training it with backprop.. Any and all suggestions, questions and feedback are very appreciated :) thank you
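The output mapping described in the post can be sketched like this (a minimal illustration; the names `nnAngle`, `nnForce` and `maxBallSpeed` are my stand-ins for the two network outputs in [0,1] and the game's ball speed cap):

```cpp
#include <cassert>
#include <cmath>

const float PI = 3.14159265358979f;

// Decode the two sigmoid outputs (each in [0,1]) into game quantities:
// an angle offset relative to the ball holder and a speed multiplier.
struct PassCommand { float angle; float force; };

PassCommand decodeOutputs(float nnAngle, float nnForce, float maxBallSpeed) {
    PassCommand cmd;
    cmd.angle = -PI + nnAngle * 2.0f * PI; // -PI + nnAngle*2PI, as in the post
    cmd.force = nnForce * maxBallSpeed;    // fraction of the ball's maximum speed
    return cmd;
}
```

So nnAngle = 0.5 decodes to an offset of 0 (shoot straight at the nominal direction), and the extremes 0 and 1 both map to a half-turn.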