
# carangil

Member Since 26 May 2004
Last Active Aug 25 2014 09:18 AM

### #4979008 Random location of space objects how to keep a specified min distance?

Posted on 11 September 2012 - 01:10 PM

> Some thoughts about my terrain generation (large objects). I subdivide the area into cells (plus some spacing between cells). Each cell can contain one randomly placed object; the spacing between the objects guarantees that you don't get overlapping objects. This is similar to the AABB approach. Even better would be Voronoi cell generation, which will break up patterns more. You can place your objects randomly within the boundaries of a Voronoi cell to avoid intersection testing.

I like this idea, and it can be made very simple and fast.

Suppose the minimum distance you want is 'd': no two objects closer than 'd' from each other. Store created objects on a sparse grid discretized at intervals of 'd'. A hashtable can handle the sparse grid in both O(n) storage and O(n) time, where n is the number of objects.

```
ht = new HashTable();   // key is a triplet of integer grid coordinates, value is the stored object

for (a = 0; a < numobjects; a++)
{
    vec3 objectpos;
    int x, y, z;
    bool reject = true;

    while (reject)
    {
        objectpos = random_position_in_3d();

        // quantize down to discrete grid coordinates, which are just increments of 'd'
        // (use floor so negative positions land in the right cell)
        x = (int)floor(objectpos.x / d);
        y = (int)floor(objectpos.y / d);
        z = (int)floor(objectpos.z / d);

        reject = false;
        // now check this spot in the grid, and all 26 immediate neighbors
        for (i = -1; i <= 1; i++)
        {
            for (j = -1; j <= 1; j++)
            {
                for (k = -1; k <= 1; k++)
                {
                    obj2 = ht.get(new int_triplet(x+i, y+j, z+k));
                    if (obj2 != null)  // if there's an object here, it may be within 'd' of the new one
                    {
                        if (distance(objectpos, obj2.pos) < d)
                            reject = true;
                    }
                }
            }
        }
    } // loop back and pick a new position if we rejected
    // rejection should be a rare occurrence, unless the RNG is bad OR you are
    // putting in so many objects that the grid is filling up

    someobject obj = new someobject(objectpos, "some star");
    ht.put(new int_triplet(x, y, z), obj);
}
```

Note the above will only work well if the space is very sparse and you want random points with a respected minimum distance. If you want something more uniformly spaced, use the Poisson-disc sampling mentioned above.

edit: adjust the code a bit
edit: give up getting the braces to line up; they really do look straight in the editor!

### #4978724 [noob] How to Draw from .OBJ in memory?

Posted on 10 September 2012 - 04:56 PM

> I thought I was rendering from a vertex array?

You are rendering from an array of vertices, but that's not an OpenGL Vertex Array. If you arrange the data just right, you can call glVertexPointer, glNormalPointer, and glTexCoordPointer and just hand each one a pointer to your data. Then you call glDrawElements or glDrawArrays, depending on what you really want to do. This will speed things up because instead of individual calls to glVertex3f, it's all done internally. Later, the next step you can take is to allocate a VBO and copy the data to the graphics card. You will keep the calls to glVertexPointer, etc., except their pointers become offsets into the VBO.

I was going to say make sure you subtract 1 from all the indices, because OBJ files start counting at '1' and C starts at '0'. But it looks like you already did that.
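For reference, that 1-based to 0-based conversion can be sketched like this (the `parse_face_vertex` helper and its signature are mine, for illustration only):

```c
#include <stdio.h>

/* Parse one "v/vt/vn" vertex spec from an OBJ "f" line and convert the
 * 1-based OBJ indices to 0-based C array indices.
 * Returns the number of indices parsed. */
int parse_face_vertex(const char *spec, int *v, int *vt, int *vn)
{
    int n = sscanf(spec, "%d/%d/%d", v, vt, vn);
    if (n >= 1) *v  -= 1;   /* OBJ counts from 1, C counts from 0 */
    if (n >= 2) *vt -= 1;
    if (n >= 3) *vn -= 1;
    return n;
}
```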

Try rendering an OBJ file with just a single face in it. If that works, the rest should follow.

And instead of rendering, just printf what 'it' thinks the data is. You will probably find that you did something slightly wrong when you cut up the strings you read from the file. Can you post one of your obj files?

### #4976559 [noob] How to Draw from .OBJ in memory?

Posted on 04 September 2012 - 02:10 PM

I see a couple of problems here:

1. The glNormal3f should be put before the call to glVertex3f, otherwise all the normals will be off by 1.
2. For each triangle you are only pushing 1 vertex. You wrote:

```// this vertex is the said triangle, verts attached
vpos.x = mesh.vertices[ mesh.triangleList[i].Vertex[0] ].x;
vpos.y = mesh.vertices[ mesh.triangleList[i].Vertex[1] ].y;
vpos.z = mesh.vertices[ mesh.triangleList[i].Vertex[2] ].z;
// this vertex is the said triangle, normals attached
npos.x = mesh.normals[ mesh.triangleList[i].Normal[0] ].x;
npos.y = mesh.normals[ mesh.triangleList[i].Normal[1] ].y;
npos.z = mesh.normals[ mesh.triangleList[i].Normal[2] ].z;

glVertex3f(vpos.x, vpos.y, vpos.z);
//glNormal3f(npos.x, npos.y, npos.z);
```

and I think you meant:
```
for (j = 0; j < 3; j++)
{
    // this vertex is the said triangle, verts attached
    vpos.x = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].x;
    vpos.y = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].y;
    vpos.z = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].z;
    // this vertex is the said triangle, normals attached
    npos.x = mesh.normals[ mesh.triangleList[i].Normal[j] ].x;
    npos.y = mesh.normals[ mesh.triangleList[i].Normal[j] ].y;
    npos.z = mesh.normals[ mesh.triangleList[i].Normal[j] ].z;
    glNormal3f(npos.x, npos.y, npos.z);
    glVertex3f(vpos.x, vpos.y, vpos.z);
}
```

What you did was take the x coord of the 1st vertex, the y coord of the 2nd vertex, and the z coord of the 3rd vertex!

Also, BTW, as soon as you get this rendering correctly, the first thing you should do is switch to rendering with a VBO or, at a minimum, a vertex array.
The format you already have is kind of similar to a vertex array; you just need to rearrange the data a little bit.

edit:

What I do when I load an obj file: In many cases, obj files use the same vertex/normal/texcoord index, i.e. 1/1/1, 4/4/4, 10/10/10, etc. Not always! Sometimes a model will re-use texcoord indices, resulting in mixed specs like 10/5/10, 11/6/11, etc. OpenGL vertex arrays / VBOs don't allow mismatched vertices like this. So when I load an OBJ file, I build a map from 'obj file' vertex spec to OpenGL vertex index, copying data when necessary. So 1/1/1 and 1/1/2 are two completely different vertices in GL.
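A minimal sketch of that remapping, using a linear search instead of a real map for brevity (all names here are illustrative, not from the post):

```c
/* One OBJ-style vertex spec: position/texcoord/normal indices. */
typedef struct { int v, vt, vn; } VertexSpec;

#define MAX_VERTS 1024
static VertexSpec gl_verts[MAX_VERTS];  /* unique specs, in GL vertex order */
static int gl_vert_count = 0;

/* Map an OBJ vertex spec to a GL vertex index, appending a new GL vertex
 * the first time an unseen v/vt/vn combination appears. */
int gl_index_for(VertexSpec s)
{
    for (int i = 0; i < gl_vert_count; i++)
        if (gl_verts[i].v == s.v && gl_verts[i].vt == s.vt && gl_verts[i].vn == s.vn)
            return i;
    gl_verts[gl_vert_count] = s;
    return gl_vert_count++;
}
```

With this, 1/1/1 and 1/1/2 land on two different GL indices, and the per-vertex arrays handed to glVertexPointer etc. are filled from the list of unique specs.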

### #4974879 [ Ray Tracing ] Cam distance doesnt seem to affect traced scene as I`d expect

Posted on 30 August 2012 - 12:59 PM

By camera position, you mean the focal point, right? If you are leaving the image plane (world window, or the 'film of the virtual camera') at z=0, and moving the point you are projecting through (the focal point / camera position / 'lens of the virtual camera') what you are really doing is changing the focal length of the virtual camera: You are zooming in and out. This is like holding the back of the camera steady, and extending the lens out.

What you need to do is move the focal point AND the image plane together. This will move the camera and keep the focal length the same. Moving forward and back is the easiest case: just add/subtract from z. But when you rotate the camera, you will need to rotate the image plane with it, and then you will need to break out the matrix math.

Alternatively, you can keep the camera position and 'world window' stationary, and just use a matrix to translate and rotate the world around the camera, similar to how OpenGL has a 'stationary' camera looking down -z while you rotate the world around it. It's probably the easiest to deal with. There are bazillions of optimized matrix libraries out there that you can use for this.

### #4972436 Why do I need GLSL?

Posted on 22 August 2012 - 06:38 PM

By 'normal' I assume you mean the fixed-function pipeline.

Here's a good example: Steep Parallax Mapping:

http://graphics.cs.brown.edu/games/SteepParallax/

They show 4 images. Texture mapping is done in basic fixed-function opengl. You can also do some normal mapping in fixed-function opengl extensions. Even per-pixel normal mapping through the dot-product texture combiner. If you set the right opengl options and parameters, you can do some pretty cool looking stuff.

But the last two, parallax mapping and steep parallax mapping, require very specific per-pixel operations. These operations are written in GLSL. Before GLSL, everything you needed OpenGL to do had to have been thought of beforehand and built into OpenGL. Examples are things like turning lights on and off, adjusting fog, etc. It's all pre-made, and all you do is turn it on or off and set some parameters. GLSL lets you ADD and invent NEW things the creators of OpenGL never thought of.

### #4963404 Is widescreen only resolution ok?

Posted on 26 July 2012 - 01:07 PM

The best option is to let the user run at any (reasonable) aspect ratio. Letterboxing is the easiest and should always work. Assuming everyone has a wide screen isn't safe; for instance my main PC at home still has a 4:3 monitor, and I'm a developer. Why keep the dinosaur screen? It's a good size, bright colors, and it hasn't blown up yet!
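A letterbox fit can be computed in a few lines (a sketch; the function name and rounding choice are mine):

```c
/* Compute the largest centered viewport with the given target aspect
 * ratio that fits inside a win_w x win_h window, adding black bars
 * (letterbox or pillarbox) on the leftover sides. */
void letterbox(int win_w, int win_h, double aspect,
               int *x, int *y, int *w, int *h)
{
    if ((double)win_w / win_h > aspect) {
        /* window wider than target: pillarbox (bars left/right) */
        *h = win_h;
        *w = (int)(win_h * aspect + 0.5);
    } else {
        /* window taller than target: letterbox (bars top/bottom) */
        *w = win_w;
        *h = (int)(win_w / aspect + 0.5);
    }
    *x = (win_w - *w) / 2;
    *y = (win_h - *h) / 2;
}
```

Hand the result to glViewport (or your API's equivalent) and a 16:9 game renders undistorted on my 4:3 dinosaur.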

Posted on 26 March 2012 - 02:23 PM

I'm watching this thread with great interest, as random-level generation has been an interest of mine for a while. An issue I've always dealt with is that it's sometimes difficult to guarantee a particular level is solvable. If you randomly place locked doors, keys, passageways, etc., it becomes very easy to make an unwinnable scenario. I came up with this solution, but I haven't implemented it yet: generate the world one room at a time.

Take what Ashaman73 said, and come up with a theme and goal. Let's say the goal of this level is to retrieve object X and then exit the level a different way than you came in. Instead of generating a whole level with object X in it, just generate a room. The player looks around and finds a door. He opens it, and it leads to another room, passageway, etc. As the player moves around, the level is generated just one step ahead according to a few simple rules:

1. First, there's a list of objects that need to be found, along with a probability representing how hard/easy the object is to find. The game starts with Object X, and the exit on the list.

2. When the player attempts to open a door, if there are any other unexplored paths in the game (doors that have been generated but not yet opened), there is a probability this door is locked. If it's unlocked, a room behind the door is generated. If it is locked, a key to this door is added to the list of objects to be found. (or see rule 5)

3. When generating a new room, if there are objects on the list that need to be found, a die is rolled and maybe one of those objects is placed in this room. Also, there is a probability the player will find a random key that doesn't belong to a specific door yet.

4. When generating a new room, it may contain a random number of doors to other adjoining rooms. If there are objects that still need to be found, and they are necessary to win this level, and they haven't been placed in this room, and the player has opened the last unexplored door, then this room MUST contain at least one more door other than the one the player entered through.

5. When generating a door, if it is locked, and the player has previously found a random key, randomly decide if this is the right key or not.

I think that from the player's point of view, a level generated on-the-fly as above will always be solvable AND will be indistinguishable from one generated all at once ahead of time. There are also a few advantages to this kind of scenario; for instance, a player can get a 'lucky' powerup (or unlucky punishment) that changes the probabilities and makes the level harder or easier to win. A punished player will only find the key after exploring room after room; a rewarded player will find it very soon. It also has this extra feature: in a pre-generated world, if the player looks everywhere but for some reason completely misses the one spot the key is in, he'll never find it and will become frustrated. In an on-the-fly world, the player will eventually find it, no matter where he is looking or which direction he took at the beginning of the maze.
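Rule 2's key invariant (never lock the last unexplored path) can be sketched like so; the function name and RNG hookup are mine, not from the rules above:

```c
#include <stdlib.h>

/* Decide whether a door the player is opening should be locked.
 * unexplored_paths counts doors that have been generated but not yet
 * opened, NOT including this one. Locking is only allowed when another
 * path remains, so the player can never be walled off. */
int door_is_locked(int unexplored_paths, double lock_probability)
{
    if (unexplored_paths == 0)
        return 0;   /* last way forward: must be open */
    return ((double)rand() / RAND_MAX) < lock_probability;
}
```

When this returns 1, the matching key (or rule 5's "is this the right random key" roll) goes onto the objects-to-be-found list.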

### #4923464 How long would it take to make a decent full fps? (One person team..)

Posted on 19 March 2012 - 06:13 PM

> Looks like ill never be able to achieve this... Moving on

One thing to keep in mind is that, today, many things are so much easier than they used to be. I can't write a AAA (or even A or B) game title myself, but that doesn't keep me from working on my game projects. It just means I need to target a different audience. You don't need mocap animation or huge high-poly models. Modern versions of Wolfenstein or Doom clones are attainable as a one-man operation, if you leverage the right resources:

Sound and Music: In the DOS days, playing sound meant writing your own sound driver! Grab an OGG library and OpenAL, and in a few days you can have high-quality music playing with 3D positional sound. Online you can find decent Creative Commons music tracks and sound effects. (Or record your own sound effects.)

Graphics Rendering: If you know OpenGL you can have a wolf3d or doom-looking maze running quite quickly. You don't have to write the software rasterizer, or figure out the assembly language for optimal texturing. It's all done for you. You don't even have to have very aggressive hidden surface removal. You mostly just have to focus on the actual gameplay logic.

Graphics assets: Use sprites for the enemies like they did back then. High resolution sprites and high-res textures are easy today. Want a hi-res brick wall texture? No fooling around with PC-Paintbrush; just take your digital camera out and go up to a building. Use a digital video camera and record your friends running around and doing fight moves against a green screen. With hardware you already have and software you can get online for free, in a couple of days you can have a library of sprite animations for a couple of different character models. This would have taken expensive specialized hardware back then and who knows how much time. I read somewhere many of the enemies in Doom were digitized photos of clay models they built. Go get some action figures, and use stop-motion animation, lol.

The point is, a lot of the hard problems are solved, especially if you use an already written engine. Creativity is key. If the game is fun, no one will care the whole world is made of low res cubes. Look at Minecraft!

### #4901755 Bots appearing algorithm

Posted on 11 January 2012 - 03:14 PM

I think I know what he wants: something like in the GTA games, where there is the illusion of random cars driving all over the city. Actually, only the cars within a certain radius of the player exist. My 'pet' project game is an asteroids clone in open space, and it required the same illusion. Basically there are a few parameters, and a few rules:

(For me the 'object' is an asteroid. For you its cars.)

Parameters:

- object space radius: maximum distance from the player at which these objects are tracked
- minimum object count: fewest objects to have within the object space radius
- maximum object count: most objects to have within the object space radius

Rules:

Initial state: When the game starts, for some random number R between the minimum and maximum object counts, place R objects at random locations within the object space radius. Each object is moving in a random direction. In free space this was easy; for cars, you'll have to restrict each one to a particular road, moving in a random direction along that road.

Object leave rule: When updating an object's position, if it falls outside of the object space radius, delete it. If this drops the object count below the minimum, immediately execute the object create rule.

Object create rule: Every so often (random timer, etc) if the number of objects within the object space radius is less than the maximum, create an object on a random point along the object space radius circle (for me in space it was a sphere).

A few notes:

- If you literally implement the object leave rule, the player could be looking at an object while moving away from it backwards. When the object pops out of existence, if the player moves back towards where it was, it's gone. Instead, have an object render distance that is somewhat smaller than the object space radius. That way, if an object disappears from the player's view for a moment, it might still exist when he goes back towards it.

- If the player is not moving (or is moving significantly slower than these objects), then when creating an object it's best to create one that is moving towards the player, because any object created on the boundary moving away from the player leaves the radius almost immediately. (When the player is moving faster than the average speed of these objects, it's best to create a mix of objects moving towards and away from the player, because the player can easily overtake newly created objects and keep them in the radius.)

Hope this helps!

Oh, and please let the random cars occasionally crash into each other! I want angry road-raged drivers in your virtual city. The NPC drivers in Vice City were too nice to each other.

### #4886974 Too many OcTrees!

Posted on 23 November 2011 - 01:06 PM

> I currently have 2 octrees that work on the server, one for calculating the potentially visible set (PVS) that stores meshes and a finer grained one for physics that stores 'collides'.

Is there any reason why you can't use the same tree for both? You can have a very fine grained octree for physics and just not traverse it all the way down when you are determining PVS. Are the trees the 'same' except for depth, or are the splits in completely different places?

### #4883329 can you assign an enum to a memory address of a shader?

Posted on 12 November 2011 - 06:45 PM

```
enum shader_type
{
    phong = 0,
    normal,
    bumpmapped,
};
```

Then throughout your code you can just pass around the enum. You can even write the enum to disk (since it's just an int), but I would be careful, because if you add/remove things from the enum list as the code matures, your values might change. It might be safer to do:

```
#define SHADER_PHONG 0
```

Note, my style is a bit 'C' but works for C++ too. If you really want to use an STL map, you can use it instead of an array. But an array is the simplest way to map integers to things, IMHO.
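A sketch of the array approach (the handle values and the `GLuint` stand-in are placeholders, not real shader objects from glCreateProgram):

```c
enum shader_type { phong = 0, normal, bumpmapped, shader_count };

typedef unsigned int GLuint;   /* stand-in for the GL typedef */

/* One slot per enum value, filled at load time with the program
 * handles the GL gives you (placeholder numbers in the test below). */
static GLuint shader_ids[shader_count];

GLuint shader_for(enum shader_type t)
{
    return shader_ids[t];
}
```

The `shader_count` sentinel at the end of the enum sizes the array automatically, so adding a shader to the enum grows the array with it.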

### #4878956 remove duplicate vertices

Posted on 31 October 2011 - 12:18 PM

Removing duplicate vertices efficiently can be tricky. If the vertices are EXACTLY the same, you can hash them. If you don't trust hashing float values (which you shouldn't), you can discretize the values into integers and then hash. But even that can get you in trouble. What you really need to do is compute the distance from each point to every other point, and fuse any vertices closer than a constant error threshold. But that's O(n^2). What do you do?

What I do is use a custom 'hashtable' for this, with a special 'hash' function, which is just: spatial_hash(float x, float y, float z) = (x+y+z). What does this do? If two points are close in space, then their x, y and z values are also close, and so are their spatial_hash values. For instance, if for two points x and y are the same but z differs by .1 units, then the hashes will only differ by .1 units. It's easy to see that for any given distance D that you want to use as the error threshold, there is a worst-case maximum spatial_hash delta. So now we have a function where, if two points in three dimensions are close together, their hash values (which are now one-dimensional) are close together.

So now you arrange them in buckets, just like a hashtable. What is the bucket size? Each bucket should be the width of that worst-case spatial_hash delta. This guarantees that for a given point, once you figure out which bucket it belongs to, all the points within the threshold distance D are either in that bucket or one of its immediate neighbors. This makes the problem very close to O(n).

### #4877973 An out-of-frustum experience

Posted on 28 October 2011 - 02:04 PM

Post a pic. I've also wondered what a scene looks like after it's been projected and squashed into homogeneous clip space.

### #4877315 Blending tip/info

Posted on 26 October 2011 - 01:48 PM

> Was testing some fullscreen blending and I noticed that if I had blending on with a fullscreen texture that was fully opaque (alpha = 1.0), blending is faster than if it is partially transparent (alpha = .4). Is there some cap limit like .99 or something where it decides the pixel doesn't need to access the pixel currently in the framebuffer to mix with, and just overwrites it (straight write instead of read, mix, write)?

What card are you using? It seems like a cheap trick some of the integrated (like Intel GMA) cards might try to do. It's an implementation-specific detail. The driver/card can do anything it wants as long as the output is correct.

Does your texture have an alpha channel? If I'm a clever driver trying to squeeze frames out of a cruddy card, and I see the material is set to alpha 1.0, and the texture has no alpha, I won't bother with alpha blending because I know what the result will be... Same if alpha is set to 0.0; I don't even have to draw! (except to depth buffer)

Can you show the code you tested with?

### #4837168 Which is the Best First language to start programming and programming games?

Posted on 18 July 2011 - 09:20 PM

I would say either Python or Processing, because both are simple to start in but rather expressive. Processing is close enough to Java that you can transition when you are ready. But, considering how much of the work in a 3D game is done by the GPU, you can render rather fancy graphics at high frame rates in almost any language. It's only fancy AI and physics that will tie up the CPU, and even physics is now being moved off the CPU through CUDA, PhysX, etc. If you're just learning programming, it's best to start with a simple language where you can learn all the basic concepts of programming.

Still, I find the question of 'what language to start with' a lot harder to answer than it used to be. 20 years ago, the answer was BASIC. 10 years ago, probably Visual BASIC. Today? I find it hard to say VB .NET is a good starting language.
