
JTippetts

Member Since 04 Jul 2003
Offline Last Active Yesterday, 10:53 PM

#5004347 The best way to texture a 3d model?

Posted by JTippetts on 26 November 2012 - 05:19 PM

There are many, many ways of texturing an object, and which you choose depends greatly on your personal skills, the use to which the model will be put, and the overall texture budget you have to work with. Here is a very brief and wholly insufficient list of a few ways you can texture your object:

1) Procedural

Procedural textures can be useful in certain circumstances. Procedurals attempt to mimic the surface or volume characteristics of an object without resorting to explicit UV mapping. If the model is intended for use in a game, you might be limited by shader instruction count and/or complexity, so typically you will avoid procedurals except for things such as water and so forth. If the model is intended for an in-Blender render, though, then you can get a lot of mileage out of procedural, node-based materials and never touch a UV map.

2) Hand-paint directly onto object, or via external paint application such as Gimp

Blender allows you to do texture painting directly on the model. You set up your UV mapping, ensuring that each face has its own unique space in the UV map with no overlap with other faces. Create a texture and go into Texture Paint mode. Paint away, save your texture as an image.

3) Painted and baked from high-poly

A process that is frequently used, especially with characters, is to model your object in very high detail and paint it, then bake out various textures including normal map or displacement map, ambient occlusion, diffuse, specular, etc... to textures that are applied via UV texturing to your lower polygon model. Blender offers this functionality using the Bake menu tab under Render settings (Blender Internal Render engine only; Cycles doesn't offer baking yet).

A good process for this technique is to start with the low-poly object you intend to use in the game. Add a multi-resolution modifier to it and crank the detail up a bit, then sculpt in some details. (If you set up your UV mappings on the low-poly, the multi-res modifier will also subdivide those UV mappings.) You can use texture painting to paint directly on the model in Blender, and you can even paint to multiple textures in order to do a diffuse map and a specular map. Then you can bake all of these to images to load into the game.

4) Modular UV mapping using seamless, pre-made textures.

If you are on a very limited texture budget and cannot afford to have a unique texture for every object, you can create a small set of textures that are seamlessly tiling, then manipulate the UVs of the faces of your object to "snip" pieces of these textures out in creative ways. Multiple objects can share textures this way, greatly saving texture budget.

One of the more instructive polycount threads I have read recently is this one: http://www.polycount.com/forum/showthread.php?t=87797

Not only is it instructive in technique (pay attention to the posts in the thread that discuss the seamless UV mapping trick in 4) but it is also interesting to see one particular artist's progress as she learns multiple techniques and in the end produces a very striking and beautiful scene using all sorts of nifty tricks.

Polycount in general is a highly useful forum to browse if you are starting out, so you might spend some time over there learning things that will blow your mind.

Good luck.


#5004320 Game Entity Organization

Posted by JTippetts on 26 November 2012 - 03:26 PM

The way I prefer to do my renderable components is that each render operation essentially boils down to a material+geometry. The renderer really only needs to know what material (ie textures, blend settings, shaders, etc...) to bind and what geometry (ie quads, quad lists for a 2D game) to draw. As long as the renderable component can supply the correct material and geometry, the renderer should be able to draw it in a single method and just leave it up to the renderable component to do what it needs to do, be it calculating sprite frames and applying texture matrices to the material, setting blend settings, constructing quad-lists from glyph strings, etc...
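A minimal sketch of that material+geometry split (all type and member names here are hypothetical, just to illustrate the shape; a real engine would hold GPU handles instead of strings and floats):

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for real GPU state.
struct Material { std::string shader; int blendMode; };
struct Geometry { std::vector<float> quads; };

// A renderable only has to hand the renderer a material and a geometry.
class RenderableComponent
{
public:
    virtual ~RenderableComponent() {}
    virtual const Material& material() const = 0; // may update sprite frames, texture matrices...
    virtual const Geometry& geometry() const = 0; // may rebuild quad lists from glyph strings...
};

class SpriteComponent : public RenderableComponent
{
public:
    SpriteComponent() : mat_{"sprite", 1}, geo_{{0,0, 1,0, 1,1, 0,1}} {}
    const Material& material() const override { return mat_; }
    const Geometry& geometry() const override { return geo_; }
private:
    Material mat_;
    Geometry geo_;
};

// One draw method serves every component type.
class Renderer
{
public:
    int drawCalls = 0;
    void draw(const RenderableComponent& r)
    {
        // Here you would bind r.material() and submit r.geometry() to the GPU.
        (void)r.material();
        (void)r.geometry();
        ++drawCalls;
    }
};
```

Text glyphs, animated sprites, and tile layers can then all go through the same single draw method.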

uglybdavis: You made me chuckle. Have a +1 cookie to go with the sandwich. :D


#5004064 Code::Blocks "warning: deprecated conversion from string constant to ‘ch...

Posted by JTippetts on 25 November 2012 - 07:11 PM

I use tolua++ to generate Lua bindings and unfortunately, as it currently stands, the generated binding code is filled to the brim with these warnings. They are harmless in the context of what tolua++ generates (ie, the code doesn't do anything "bad"), and it's even possible that a newer version of tolua++ has eliminated them, but for the time being I just use the compiler flag.

You can do it per-project as Bacterius suggested, or you can do it globally. Go to Settings->Global Compiler Settings->Other options and add the -Wno-write-strings there. I actually don't recommend you do it globally, though, since you shouldn't get in the habit of ignoring warnings.


#5004008 breakout game

Posted by JTippetts on 25 November 2012 - 03:46 PM

Your xstep and ystep values together constitute what is called a vector. The word vector has quite a few different but related meanings, but in this context you can think of it as Direction + Magnitude. If you imagine the vector as an arrow, the direction of course is the direction the arrow is pointing, and the magnitude is the arrow's length. If xstep=1 and ystep=1, the resulting vector is (1,1). It can be conceptualized as an arrow that is pointing up and to the right (assuming the origin, (0,0), to be in the lower left corner of the screen) at a 45 degree angle. The magnitude of the vector is approximately 1.414. (The magnitude, or length, of course is calculated by the Pythagorean theorem.) So if you add xstep and ystep to the ball's current position, the result is to move it up and right a distance of 1.414 units.

To get the ball to move in other directions, you merely change the values of xstep and ystep to point the arrow in different directions. xstep=-1, ystep=1 will move it up and to the left. xstep=0, ystep=1 will move it straight up. xstep=1, ystep=0 will move it straight right. And so forth.

Of course, you also need the ball to move at a consistent speed (denoted by the magnitude of the vector) regardless of what direction it is pointing. A vector of (1,1) is not the same length as the vector (0,1), so a ball moving along the vector (1,1) will move faster than along the vector (0,1). To fix this, you need to normalize the vector; ie, convert the vector to what is called a unit vector, or a vector whose magnitude is 1. This is simple enough to do: you simply divide xstep and ystep by the magnitude of the vector.

Once your vector is normalized to unit length, then you can scale it by the ball's speed (by multiplying xstep and ystep by speed) before using it to move the ball. One way of doing this is to encapsulate xstep, ystep and speed into some sort of ball structure, which is far preferable than having xstep and ystep live globally in your program. Then you can just call a method on the ball structure/class to move the ball. Something like this:

#include <cmath> // for std::sqrt

class Ball
{
     public:
     Ball(float x, float y, float vx, float vy, float speed) : x_(x), y_(y), speed_(speed)
     {
          setDirection(vx,vy);
     }
     ~Ball(){}

     void setPosition(float x, float y)
     {
          x_=x;
          y_=y;
     }

     void setDirection(float vx, float vy)
     {
          // Normalize to unit length before storing, so that speed_
          // alone determines how fast the ball moves.
          float len=std::sqrt(vx*vx+vy*vy);
          vx_=vx/len;
          vy_=vy/len;
     }

     void move()
     {
          x_+=vx_*speed_;
          y_+=vy_*speed_;

          // Check to see if it hit the sides, and reflect the vector if so.
          // ScreenWidth and ScreenHeight are assumed to be defined elsewhere.
          if(x_<0 || x_>ScreenWidth) vx_*=-1.0f;
          if(y_<0 || y_>ScreenHeight) vy_*=-1.0f;
     }

     private:
     float x_, y_, speed_, vx_, vy_;
};

This is just a quickie, of course, but it shows how the Ball class encapsulates everything it needs to move. Then in your timer function, instead of explicitly performing the movement and checks there, you can simply call ball.move() to have the ball update itself.


void timer(int value)
{
     ball.move();
     glutPostRedisplay();
     glutTimerFunc(20, timer, 1);
}


There is no reason any of the ball's internal logic should be in timer() itself; that should be safely encapsulated inside Ball, so that the timer function doesn't have to worry about it. The timer function should just be calling logic update methods, and letting the object logic handle itself. It's much cleaner and far more flexible this way.


#5003391 breakout game

Posted by JTippetts on 22 November 2012 - 11:51 PM

I assume you call paddle_left whenever the paddle is supposed to move left? That's why your bricks disappear. As soon as paddle_left is called, the buffer is cleared and only the paddle is drawn. Anything else that was drawn (perhaps during a previous call to draw_scene) is wiped when glClear is called inside paddle_left.

What you are doing here is just... no. There should be one place, and one place only where things are drawn. The proper place is not inside whatever logic you use to make paddles move. You might want to take a break and do some research on basic game loops: how they're structured, what pieces go where, etc... Mixing logic and rendering like you are doing here is just not going to cut it.

Typically, a loop proceeds through a set of tasks in a pre-defined order:

1) Call Time() to get the start of frame time

2) Process any waiting input events, and hand the results off to interested parties. Here you might call functionality to move your paddle, for instance by adding or subtracting a value from its position. Only that, though; no drawing, no buffer clearing. Just input handling.

3) Update logic. How this is done varies; some setups will update logic on a fixed time step (every, say, 1/60th of a second) while others will compare the current Time() against the Time() of the last logic update, and use the difference as a step length. In this stage, only logic is performed: things are moved, decisions are made. No drawing. None whatsoever. Doesn't belong here, either.

4) Update collisions. This process might be interleaved with your physics step, depending on how you are doing your physics. At any rate, here you account for the things that happen when a collision occurs. Break blocks, bounce balls, check for ball out of bounds and decrement ball counter, and so on. Again, no rendering. These stages of the loop don't care what goes on the screen. As Trienco indicated, they should do what they do even if nothing is ever drawn on the screen. Any drawing code in these sections should actually generate compile errors, because there is absolutely no reason these modules should even know what OpenGL is. That's how you avoid problems.

5) Render. Aha! Here we go, now that everything has moved to its new location, we can draw the screen. This part of the loop doesn't care about logic or physics or things moving or input being handled. If you are doing any of that here, you are doing the wrong thing. The only thing this part of the loop cares about is drawing the stuff it's told to draw on the screen. There should be exactly one call to glClear() and one call (after drawing is done) to swap buffers. Any more and you are just going to eradicate work already done. Clear the buffer, draw your game, draw your UI, swap buffers.

6) Repeat.

There are many variations to the loop, but that is the basic gist of it. You might want to read the usual suspects when it comes to game loops (Gaffer's fixed time step article, the Bullet physics canonical game loop, etc...) in order to really understand the processes and how the different sections need to be separated. Make any number of throwaway applications as you need to get this figured out. Without this basic understanding, you are just doomed to repeat your past failures and cause yourself more frustration.
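The steps above can be sketched as a fixed-timestep loop. This is a minimal sketch only; the hook functions are trivial stand-ins for your real input/logic/render code, and Gaffer's article covers the interpolation details this omits:

```cpp
#include <chrono>

// Trivial stand-in hooks; replace with your real input/logic/render code.
static int logicSteps = 0;
static int frames = 0;

void processInput() { /* poll events, hand results to interested parties */ }
void updateLogic(double /*dt*/) { ++logicSteps; } // move things, resolve collisions
void render() { ++frames; }                       // the ONLY place drawing happens
bool quitRequested() { return frames >= 3; }      // quit after a few frames, for demo purposes

void runLoop()
{
    using clock = std::chrono::steady_clock;
    const double step = 1.0 / 60.0;   // fixed logic timestep
    double accumulator = 0.0;
    auto previous = clock::now();

    while (!quitRequested())
    {
        auto current = clock::now();  // 1) start-of-frame time
        accumulator += std::chrono::duration<double>(current - previous).count();
        previous = current;

        processInput();               // 2) input only; no drawing here

        while (accumulator >= step)   // 3-4) logic and collisions, in fixed steps
        {
            updateLogic(step);
            accumulator -= step;
        }

        render();                     // 5) clear, draw everything, swap buffers
    }                                 // 6) repeat
}
```

Notice that nothing in steps 2-4 can touch the screen; only render() draws, which is exactly the separation the breakout code was missing.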


#5003388 vector iterator not dereferencable

Posted by JTippetts on 22 November 2012 - 11:36 PM

You should put the !messageQueue.empty() test before you attempt to dereference top(), in case the queue actually is empty, in which case messageQueue.top().dispatchTime would be dereferencing the end() iterator.
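A sketch of the guard, assuming messageQueue is a std::priority_queue of messages ordered so the earliest dispatchTime is on top (the Message struct here is hypothetical; only the queue and member names come from the post):

```cpp
#include <queue>
#include <vector>

struct Message
{
    double dispatchTime;
    // Reversed comparison makes the priority_queue a min-heap,
    // so top() is always the earliest message.
    bool operator<(const Message& rhs) const { return dispatchTime > rhs.dispatchTime; }
};

// Dispatch every message due at or before 'now'. The empty() test comes
// FIRST in the condition, so top() is never called on an empty queue.
int dispatchDue(std::priority_queue<Message>& messageQueue, double now)
{
    int dispatched = 0;
    while (!messageQueue.empty() && messageQueue.top().dispatchTime <= now)
    {
        // ...handle messageQueue.top() here...
        messageQueue.pop();
        ++dispatched;
    }
    return dispatched;
}
```

Because && short-circuits, the top() call on the right is never evaluated when the queue is empty.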


#5002571 open gl game

Posted by JTippetts on 19 November 2012 - 10:13 PM

You're kind of all over the place, aren't you?

Your posting history has a lot of threads about Pong, Breakout, Space Invaders, etc... So why not keep trying to figure out one of those? And might I recommend you find an API/framework and stick to it, instead of jumping back and forth from DX9 to OpenGL to Win32 etc... You remind me a little bit of that squirrel on Over The Hedge, the one that's all hyped up on caffeine. Just buckle down and stick with something until you get it, instead of haring off onto something else because it gets difficult.


#5002077 Who to follow on Twitter?

Posted by JTippetts on 18 November 2012 - 12:16 PM

Avoid Twitter, I say. It's pretty easy to get caught up in following folks, reading tweets, checking out blog feeds, etc.... and never actually getting anything done. Trust me, I know. Shut off FB, close Twitter, kill gd.net (as painful as that may be) and open up your IDE instead. Listening to all the grandiose 140-character bits of pith in the world won't make you a better programmer.


#5001696 random midpoint displacement self-intersection problem

Posted by JTippetts on 16 November 2012 - 10:56 PM

I have run into exactly the situation you are describing.

To my knowledge, there is no easy way to prevent it. Fractal functions (either subdivision-based such as midpoint displacement, or implicit such as fractal noise) are not "self aware". There is no easy way for point (x,y) to make adjustments to itself to avoid intersecting with the displacement provided by point (x1,y1). The points are mutually unaware of each other, effectively speaking; at least in this context. The problem is compounded by the fact that the heightmap plane is topologically a planar membrane that is being stretched and forced into self-intersecting forms.

There are some things that you can do to try to mitigate the problem. Here is an image of a distorted spire, formed as a cone displaced in X and Z by a couple of fractal methods:

[image: a spire distorted by fractal displacement, showing self-intersecting geometry]

You can see it has a lot of the deviant behavior you describe, and that behavior only gets worse as the scale of the displacement is increased. In this next one, I calculated a scaling factor as (MaxHeight-PointHeight)^2, and used that to scale the amount of X and Z displacement applied to the geometry. The higher up the spire a point is, the smaller the scale and thus the smaller the displacement.

[image: the same spire with the displacement scaled down toward the peak]

This is a bit of a hack to avoid the weird mountaintops, and you still get plenty of weird folding on lower-elevation peaks and ridges.

Basically, once you try to introduce overhangs and other elements of a 3D nature, then the displaced heightmap plane is no longer a sufficient abstraction to work with, and you need to shift your thinking to a volumetric method. If you think of your ground surface as simply the threshold between solid ground and sky, and don't try to force it into a heightmap representation by displacing a 2D plane, then it doesn't matter how the volume is folded and distorted by the displacement functions; the generated mesh geometry will not be self-intersecting. An example:

[image: volumetric terrain with overhangs, generated without self-intersection]

In places where the threshold of the ground is pushed through itself, it might bore holes, and it still might result in some unrealistic geometry (floating chunks, etc...) but at least the mesh geometry itself isn't weirdly folded and self-intersecting. And the strange formations can be controlled by the parameters of the system, such that floating islands are reduced, if not eliminated altogether.


#5000780 Creating "Chains" of objects with smooth motion.

Posted by JTippetts on 13 November 2012 - 10:12 PM

Correct, the posted videos probably weren't using physics systems. However, in today's world, 2D physics libraries are an extremely cheap drop-in solution, and you can get such a thing up and running literally in moments. I did a small test using Love2D and Box2D, and got a chain swirling around very much like the one in the second video in about 15 minutes. I can post the code, if you like; it's dead simple. The hard work is done by the physics library, all I had to do was tweak a few parameters.

That being said, you could probably also achieve a similar effect by using "canned" animations, by pre-computing animation tracks for each ball in the chain for a particular movement, especially for the Zelda boss, since there isn't the wavy variation that the second one has. I suspect the second one might be canned, too, since those bullet-hell games tended to be pretty tightly timed and patterned.


#5000468 tic tac toe AI

Posted by JTippetts on 12 November 2012 - 11:16 PM

Pretty tough to say what's going on without seeing the rest of the code (such as how the variable player_X is set and when, etc...) But still, it seems as if you are kind of going about this oddly.

You have a board that can hold 'X', 'O' or ' ' (space for empty). To find a randomly selected space for player O, you could do something similar to this:

int Computer::findRandomSpace()
{
     int move=-1;
     do
     {
          move=rand()%9;
     } while(board[move]!=' ');
     return move;
}


Of course, this method will hang if the board is full, so ensure that you test the board first for a win/loss/draw condition. Essentially, you loop until you find an empty space, generating a new random board index each time.

Another way would be to iterate through the board and find all the empty spaces, and add the empties to a vector:

std::vector<int> Computer::getValidMoves()
{
     std::vector<int> moves;
     for(int c=0; c<9; ++c)
     {
          if (board[c]==' ') moves.push_back(c);
     }
     return moves;
}

After calling getValidMoves(), you will have a vector that is either empty (if the board is full), or holds the indices of all empty spaces in the board. Now, you can choose one at random:

int Computer::chooseMove()
{
     std::vector<int> moves=getValidMoves();
     if(moves.size()==0) return -1;   // Board is full
     int length=moves.size();
     return moves[rand() % length];
}

Another benefit of constructing a vector of all valid moves is that when you move on to a smarter AI, you can change the getValidMoves() method to prioritize spaces by how much closer they get you to a win condition. For example, if a space will generate a win condition for the player, it receives the highest priority; if a space will block the other player from a win condition, it gets the next highest priority. And so forth. You can sort the vector based on priority, then in the chooseMove() method, instead of grabbing a random space you grab the space at the back of the list (assuming you sort from lowest priority to highest). That way, you always choose the best move you can make.


#5000247 Despondent

Posted by JTippetts on 12 November 2012 - 08:49 AM

Whoa, there, you need to relax a bit. It sounds like you're overwhelming yourself with too much.

First of all, why are you bothering with source control right now? Source control is great, but when you're just learning it's not necessary. At this moment, you need to learn how to program, and there is simply no need for you to worry about learning to use source control at the same time.

Any time you face complexity, the only thing you can do is to break it down into more simple pieces, and work on them one at a time. Trying to take it all on at once will just lead to frustration.


#5000101 starting over

Posted by JTippetts on 11 November 2012 - 09:50 PM

Make some video games.


#4998601 sfml problem/question regarding VideoMode

Posted by JTippetts on 07 November 2012 - 03:52 PM

What FLeblanc and I mean when we say "native resolution" is that every LCD monitor has a recommended resolution, dependent upon the monitor's size and quality. Using this recommended resolution means that there is a 1:1 correspondence between a "logical pixel" (what resides in the frame buffer) and a physical pixel (the actual bit of screen hardware that displays a pretty color). Choosing a resolution that is not the recommended resolution means that logical pixels can be spread across multiple physical pixels, resulting in blurring and shimmering artifacts as the picture moves.

A game should detect the optimal resolution by default (going with the user's current desktop resolution is probably best, since it will usually be the optimal setting), but present a dropdown box of alternative, allowable modes the player can choose from if they have performance problems or other personal reasons to switch. But my personal opinion is that you shouldn't force a player to a non-optimal fullscreen resolution by default, and any game that does is likely to earn my ire.


#4998463 How to avoid singleton

Posted by JTippetts on 07 November 2012 - 10:51 AM

Why does there need to be a concrete Fireball class anyway? A fireball is just a projectile with an exploding component that does fire damage. Object composition is your friend. By creating concrete classes like that, you limit your options and make a lot more work for yourself in trying to shoehorn everything in. What if you want an Iceball spell? You could create an Iceball class, duplicate some code from Fireball and change some things around, or inherit from a base class... but all of that is a code smell that ultimately leads you into a bad place of spaghetti code that can kill your project. Why not construct a composition system, instead? Break it down into individual behaviors, and add those behaviors to an object as it needs them.
Here is the gist of how I construct a fireball spell in Goblinson Crusoe:

1) Create an Object. An Object is an empty canvas; at this point, it could be anything.
2) Add a projectile component. Now, the object is a thing that moves from one place to another, and when it gets where it's going or hits something along the way, it sends itself a message that it has collided. It listens for logic update messages to know when to move. Lots of spells would share this component: fireball, ice spear, thrown rock, shot arrow, etc...
3) Add an animation component. Now, the projectile has a visual presence in the world, a looping animation of a fireball streaking through the air. Lots of spells would share this component, too.
4) Add an explosion damage payload. This component listens for the notification that it has collided with something, and when it receives such a message it will query the world for all other objects within a specified radius, and deliver a packet of Fire damage to each one. Some spells might use this area-effect payload, others might use a single-target-damage payload. Others might use a heal-target payload, or a heal-area payload. Still others might use a polymorph-other payload. Whatever; this component describes what happens when the thing hits something.
5) Add a Flash component, which also listens for a collided notification and spawns an explosion animation.
6) Finally, add a terminator component that also listens for collided, and sends the object a message to die.

At any step along the way of constructing this object, you can tweak things. Change the damage type to Ice and the animations to iceball and ice explosion. Add a component that spawns additional projectiles for damage debris or ember shards that strike additional enemies. Add a component that spawns 100 little floating daisies to settle out of the fire-seared air onto the scorched earth. Change the projectile component type to tracking rather than straight-line, in order to hunt down and destroy. Change the Flash animation to a nuke going off, for extra visceral effect. You name it. You are not limited to just one generic type of fireball. And there are no singletons to be seen anywhere in that scenario.
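A bare-bones sketch of that messaging/composition idea (all names here are hypothetical; this is not Goblinson Crusoe's actual code, just the shape of the pattern):

```cpp
#include <memory>
#include <string>
#include <vector>

class Object; // forward declaration

// Components react to messages; an object is just a bag of components.
class Component
{
public:
    virtual ~Component() {}
    virtual void onMessage(Object& owner, const std::string& msg) = 0;
};

class Object
{
public:
    void addComponent(std::unique_ptr<Component> c) { components_.push_back(std::move(c)); }
    void sendMessage(const std::string& msg)
    {
        for (auto& c : components_) c->onMessage(*this, msg);
    }
    bool dead = false;
private:
    std::vector<std::unique_ptr<Component>> components_;
};

// Deals area damage when told the projectile has collided.
class ExplosionPayload : public Component
{
public:
    void onMessage(Object&, const std::string& msg) override
    {
        if (msg == "collided") { /* query world for objects in radius, send Fire damage */ }
    }
};

// Kills the object on collision, so the world can clean it up.
class Terminator : public Component
{
public:
    void onMessage(Object& owner, const std::string& msg) override
    {
        if (msg == "collided") owner.dead = true;
    }
};
```

Building the fireball is then just a series of addComponent() calls, and the iceball is the same series with different parameters; no Fireball or Iceball class ever needs to exist.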

