Everything posted by lawnjelly

  1. lawnjelly

    Playing with water physics

    While I've been mostly busy this past month with moving house, I have been making a few models and experimenting with some water physics in Godot. When doing Frogger I had thought about having a section with the frog driving a boat, but didn't really have time. As far as I can see there is no built-in support for water in Godot using Bullet or the internal physics engine, but I figured it couldn't be too hard to hack something together.

    Fixed tick rate physics

    First things first, I needed a good way of getting physics to work in combination with a fixed tick rate and interpolation. I will probably write a separate post on this, but I was successful in this case by completely separating the physics representation from the rendered object representation (the 3d model etc) in the scene graph. This ensured that any interpolation code on the rendering side did not interfere with the physics.

    Buoyancy

    Next step was to turn off gravity and add some kind of manual forces to keep the objects around the waterline. It turns out that you have to be very careful how and when you apply manual forces on every tick, so as not to disrupt the simulation. In Godot I achieved this by using the _integrate_forces function, NOT _physics_process. I found that, rather confusingly, adding forces (particularly angular) within _physics_process borked the simulation; _integrate_forces on the other hand gives you access to the actual physics state. My current code to keep objects at the surface is rather simple: it calculates the y offset (y is up in Godot) of the object from the surface, and applies an impulse in proportion towards sea level. I actually increase this above water to fake some kind of gravity, but I'm sure I will come to a more robust solution for this, such as turning on real gravity when significantly above sea level.

    Righting

    This works reasonably well; however, there is a problem: boats etc roll around in the water and don't stay upright.
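A minimal sketch of the buoyancy step described above might look like the following. The struct, function names and tuning constants are my own illustration, not the actual Godot code (which would apply the impulse inside _integrate_forces):

```cpp
// Minimal buoyancy sketch: push the body back towards sea level with an
// impulse proportional to its vertical offset. All names and constants
// here are illustrative, not real engine API.
struct Body { float y; float velY; float mass; };

const float SEA_LEVEL = 0.0f;
const float BUOYANCY_STIFFNESS = 4.0f;   // tune to taste
const float ABOVE_WATER_SCALE = 2.0f;    // stronger pull when airborne (fake gravity)

void ApplyBuoyancy(Body &b, float tickDelta)
{
    float offset = SEA_LEVEL - b.y;       // positive when below the surface
    float strength = BUOYANCY_STIFFNESS;
    if (b.y > SEA_LEVEL)
        strength *= ABOVE_WATER_SCALE;    // increased impulse above water

    float impulse = offset * strength * tickDelta * b.mass;
    b.velY += impulse / b.mass;           // an impulse changes velocity directly
}
```

An object below the surface receives an upward change in velocity, and one above receives a (stronger) downward one, which matches the "increase this above water for some kind of gravity" hack.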
Clearly there needs to be some kind of way of 'righting' objects. I posted a thread to get advice on this; however, in the end, rather than using a complex and CPU intensive technique to calculate proper buoyancy, I have for now opted for a rather simple hack. I essentially calculate how far an imaginary mast is from pointing vertically up, and calculate an angular velocity towards this. The details are in the thread. This seems to work pretty well, at least in the flat ocean case. I may have to either modify this or opt for a more complex technique for large waves.

    Waves

    It was pretty easy to put in some keyboard control to move a boat or similar about; the next step was to see if I could get some kind of waves in the ocean. It turns out Godot has a simplified method of writing multiplatform shaders, and it was pretty easy to follow a tutorial and get a vertex shader with some moving waves. This was already looking better, but I wanted the boats etc to move with the waves. As there is no way of reading back the results from the shader, I essentially duplicated the shader wave height function in the main code, did some hackery for scaling, and ended up with a function where I could calculate wave height in the game at any location that would match the shader. My first thought was to simply plug the wave height into the existing buoyancy code; however, probably due to a delay between the impulse being applied and having a decent effect, the objects ended up bobbing out of sync with the waves. Due to the velocity being applied, the objects tended to overshoot the surface continually. Really I just wanted the objects to match the surface with loads of friction, so I realised that for a hack I could apply the wave height to the rendered representation, and not apply it to the physics. I tried this out and it seems to basically work (for now). The physics works on a flat ocean, and the waves are added afterwards to what you see.
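Duplicating the shader wave function on the CPU might be sketched like this. The actual wave terms (frequencies, amplitudes, scaling) below are invented for illustration; the point is only that the CPU function must match whatever the vertex shader computes, term for term:

```cpp
#include <cmath>

// Sketch of a CPU-side wave height function mirroring a vertex shader,
// so game code can query the surface height at any world position.
// The specific sine terms are illustrative only.
float WaveHeight(float x, float z, float time)
{
    float h = 0.0f;
    h += 0.5f * sinf(x * 0.4f + time * 1.1f);
    h += 0.3f * sinf(z * 0.7f + time * 1.7f);
    h += 0.2f * sinf((x + z) * 0.2f + time * 0.6f);
    return h; // bounded by the sum of amplitudes (here 1.0)
}
```

The rendered boat can then be offset vertically by WaveHeight at its position each frame, while the physics continues to run against the flat ocean.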
Obviously this is not ideal: there will be collisions in the physics where there would not be in reality due to the wave heights, there is no momentum imparted by the waves, etc. This is all a work in progress. Anyway, these are the results so far; I think it is promising.

    Large Worlds

    Soon I want to experiment with a large world. For a boating game I'd like to be able to travel a long way at speed, and this may cause precision issues if using single precision floats. So I'll see if I can have some kind of scheme where I teleport the player and objects once they move a certain distance from the origin, while keeping the physics and waves consistent. Another alternative is keeping the player at the origin and moving the objects relative to the player, but that might be a nightmare interfacing with the physics engine.
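The teleport scheme could be sketched as periodic origin rebasing. Everything here is hypothetical (my own types and names, not engine API), just to illustrate the idea of shifting everything by the same offset so relative positions, and hence the physics, stay consistent:

```cpp
#include <cmath>
#include <vector>

// Hypothetical origin-rebasing sketch for a large world: when the player
// strays too far from the origin, shift the player and every object back
// by the same offset. The accumulated offset is remembered so things like
// wave phase can be kept consistent after the shift.
struct Vec3 { float x, y, z; };

const float REBASE_DISTANCE = 1000.0f;

void MaybeRebase(Vec3 &player, std::vector<Vec3> &objects, Vec3 &worldOffset)
{
    if (fabsf(player.x) < REBASE_DISTANCE && fabsf(player.z) < REBASE_DISTANCE)
        return; // still close enough to the origin

    Vec3 shift = { player.x, 0.0f, player.z };
    worldOffset.x += shift.x; // total shift, e.g. to keep waves in phase
    worldOffset.z += shift.z;

    player.x -= shift.x;
    player.z -= shift.z;
    for (auto &o : objects)
    {
        o.x -= shift.x;
        o.z -= shift.z;
    }
}
```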
  2. Sorry, I should have worded that better; it is a good alternative solution.
  3. It is an established solution and it's easy to demonstrate working; just code it up. The transform isn't necessary because you are simply interpolating a triangle in both cases... However... Afaik what may be causing the confusion is that, strictly speaking, the texture mapping used in general 3d is not 'physically correct' as you are seeing it. If you use a fish eye projection for a camera and draw a triangle, in theory the texture should also be distorted, but if you render it with standard 3d games hardware it will not be distorted. Only the vertices go through the transform matrix; the fragment shader, afaik, is typically given simple interpolation. This may not be the case in a ray tracer. So you are actually right, in a way, I think.
  4. I had a little trouble conceptualizing this, I kind-of see where you are going but am not convinced it would work. You could try it out though, code it up and compare with the barycentric solution?
  5. You can do something like that; it is essentially doing exactly the same as the very first suggestion (using barycentric coordinates), except in an extremely roundabout fashion (going on a roundabout trip via the GPU). Of course, it depends what the actual use case is, and whether the conversion is rare or needed as a realtime lookup. There are many cases where having a UV -> 3d mapping for the entire texture is useful, rather than e.g. using the barycentric method per point, and using the GPU is an option to create this. In my own use cases I've been fine using the CPU to calculate this conversion texture; however, if you needed to e.g. recalculate it on a per frame basis, the GPU might be an option, bearing in mind the potential for pipeline stalls if you have to read this back.
  6. Well, not quite true (I do inverse UV calculations in 3d paint in quite a few places, for instance; it might also be used in lightmapping, ray tracing etc), but it is probably rarely done by developers relying on 3rd party engines and tools to do this sort of thing for them. To elaborate a little: if the OP is talking about the very specific case of using a uniform transform to get all 3d verts into uv space (such as matching a viewport transform, e.g. 'project from view' in blender), rather than the more usual uv unwrapping or complex projections, then in theory to go from UV space back to 3d space you would simply use the reverse transform (e.g. the inverse matrix). However, in practice, with a UV coordinate you have usually lost the 'depth' value (a degree of freedom, I think this is called?), so even if you got back to the viewport x, y, you would have lost the depth information and thus the 3d position. I think there might also be issues once you are not using a simple orthographic projection (consider a fish eye lens and trying to extrapolate barycentric coordinates outside a central triangle; the relationship between uv and 3d space would break down, I think). If you still had the 3d -> uv matrix and the uv depth value, then it may in some cases be possible to go directly back to a 3d vertex (don't quote me on that, I might be missing something obvious lol). It is similar to any matrix transform in that respect.
  7. This is essentially correct. The 3d position of a particular UV coordinate only makes sense in terms of the triangle (or triangles) that encompass that point. If you pick a UV coordinate outside of the triangles for instance, it doesn't really represent any point in 3d space.
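The barycentric approach discussed above can be sketched as follows: compute the barycentric weights of the UV point within the triangle's UV coordinates, then blend the triangle's 3d positions with the same weights. The helper names and structs are my own, a minimal sketch rather than any particular engine's API:

```cpp
#include <cmath>

// Map a UV coordinate back to 3d: find the barycentric weights of the
// point within the triangle's UV coords, then apply the same weights to
// the 3d vertex positions. Only valid while the UV point lies inside
// (or on the boundary of) the triangle.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

bool UVToPos(Vec2 uv, const Vec2 t[3], const Vec3 p[3], Vec3 &out)
{
    Vec2 v0 = { t[1].x - t[0].x, t[1].y - t[0].y };
    Vec2 v1 = { t[2].x - t[0].x, t[2].y - t[0].y };
    Vec2 v2 = { uv.x - t[0].x, uv.y - t[0].y };

    float d00 = v0.x * v0.x + v0.y * v0.y;
    float d01 = v0.x * v1.x + v0.y * v1.y;
    float d11 = v1.x * v1.x + v1.y * v1.y;
    float d20 = v2.x * v0.x + v2.y * v0.y;
    float d21 = v2.x * v1.x + v2.y * v1.y;

    float denom = d00 * d11 - d01 * d01;
    if (fabsf(denom) < 1e-12f)
        return false;                              // degenerate UV triangle

    float b = (d11 * d20 - d01 * d21) / denom;     // weight of vertex 1
    float c = (d00 * d21 - d01 * d20) / denom;     // weight of vertex 2
    float a = 1.0f - b - c;                        // weight of vertex 0
    if (a < 0.0f || b < 0.0f || c < 0.0f)
        return false;                              // UV point outside triangle

    out.x = a * p[0].x + b * p[1].x + c * p[2].x;
    out.y = a * p[0].y + b * p[1].y + c * p[2].y;
    out.z = a * p[0].z + b * p[1].z + c * p[2].z;
    return true;
}
```

Note the early-out for a point outside the triangles, matching the point made above that such a UV coordinate doesn't represent any point in 3d space.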
  8. Copying and pasting from examples can be okay, but afterwards you should usually go through it to make sure you fully understand what it is doing (of course, we all copy and paste code sometimes!). Yes, when correct, the code should work whatever the speed of the projectile. There are some good tutorials on 2d vector math for games; they will explain it far better than I can. The code that alvaro posted is basically how to normalize a vector, although you should usually do a check for a zero length vector to prevent a divide by zero, which will cause an error. Normalizing (aka unitizing) a vector takes a vector in any direction (except a zero length vector, because the direction is undefined) and resizes it so that the length in that direction is 1.0. It is a very common operation in games.
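The normalize-with-guard idea can be sketched like this (my own minimal Vec2 type, for illustration):

```cpp
#include <cmath>

// Normalizing with a zero-length guard: resize the vector so its length
// is 1.0, returning false (and leaving it untouched) when the length is
// zero, since the direction would be undefined.
struct Vec2 { float x, y; };

bool Normalize(Vec2 &v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y);
    if (len == 0.0f)
        return false;      // avoid divide by zero

    v.x /= len;
    v.y /= len;
    return true;
}
```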
  9. Part of the problem for us is that the code you've shown includes too much irrelevant stuff to easily work out what is happening or intended (for me, at least!), and your explanation of what is intended is not clear. If you are reading the mouse x, y each loop and using the bullet start position each loop to determine the angle, you will get problems. Are you trying to have a bullet with a constant direction? Or are you trying to create a homing missile? For a constant direction you could, for example, RECORD the target position when fired, then calculate the move towards this target each loop, e.g.:

```cpp
// calculate offset from bullet to target
// (where Vector2 is using floats rather than ints)
Vector2 ptOffset = ptTarget - ptBullet;

// make offset length 1
ptOffset.Normalize();

// make length of offset equal to desired move length
ptOffset *= fMoveLength;

// probably have some logic for reaching target
// ....

// move the bullet towards target
ptBullet += ptOffset;
```

A similar approach might also work for a moving target (but usually you'd use something a bit more complex to limit bullet direction changes etc). For a constant direction bullet you could also, probably more appropriately, use this kind of thing:

```cpp
class CBullet
{
public:
    Vector2 m_ptPos;
    Vector2 m_ptVelocity;

    void Fire(const Vector2 &ptStart, const Vector2 &ptTarget)
    {
        m_ptPos = ptStart;
        m_ptVelocity = ptTarget - ptStart;
        m_ptVelocity.Normalize();
        m_ptVelocity *= SPEED;
    }

    void Update()
    {
        m_ptPos += m_ptVelocity;
    }
};
```

It seems like you are using a different x, y target position each time (more like a homing missile):

```cpp
SDL_GetMouseState( &x, &y );
```

In which case any direction for a bullet move should be based on the current bullet position, not the bullet start position.
  10. It just depends how accurate you want to go for. You can estimate a rough centre of mass for each bone by using something like this:

```cpp
// for through bones
for (int b=0; b<32; b++)
{
    Vertex3 ptAverage(0, 0, 0);
    float fTotalWeight = 0.0f;

    for (int n=0; n<nVerts; n++)
    {
        // for through weights
        for (int w=0; w<4; w++)
        {
            if (vert[n].boneID[w] == b)
            {
                ptAverage += vert[n].pos * vert[n].boneWeight[w];
                fTotalWeight += vert[n].boneWeight[w];
            } // if
        } // for w
    } // for n

    ptAverage /= fTotalWeight;
    print("Average position of bone " + b + " is " + ptAverage);
}
```

You can also estimate the orientations of the bones by e.g. making a guess at an orientated bounding box, guesstimating some joint positions, etc.
  11. Just a quick guess (I haven't properly read your code) but it sounds like it might be an integer rounding issue. If your destination is e.g. at 190, 100 and you move 1/100th of the way each time, converted to integers you might either move 1, 1 pixel at a time or 2, 1 (depending on how you round) so you would end up at 200, 100 or 100, 100. If you move faster it will be more accurate (e.g. 19, 10 per move) and slower, less accurate.
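The truncation drift described above can be demonstrated with a tiny sketch (the positions and step sizes are my own illustration). Keeping the position as an integer and truncating each step loses the fractional movement; keeping it as a float and rounding only for display does not:

```cpp
// Illustration of integer truncation drift when moving a fixed fraction
// of the way to a target each step.
int MoveInt(int start, float perStep, int steps)
{
    int pos = start;
    for (int n = 0; n < steps; n++)
        pos += (int)perStep;   // 1.9 truncates to 1 on every step
    return pos;
}

float MoveFloat(float start, float perStep, int steps)
{
    float pos = start;
    for (int n = 0; n < steps; n++)
        pos += perStep;        // the fraction accumulates correctly
    return pos;
}
```

With a per-step move of 1.9 pixels over 100 steps, the integer version ends up at 100 instead of 190, which is exactly the kind of undershoot described above.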
  12. lawnjelly

    Playing with water physics

    Haha yep!! I haven't decided on game type yet, there's a few hungry shark games that do well lol. Can't decide whether to do something modern or ye olde with sailing. I guess I can do both.
  13. The past few days I have been playing with the Godot engine, with a view to using it for some GameDev challenges. Last time, for Tower Defence, I used Unity, but updating it has broken my version, so I am going to try a different 'rapid development' engine. So far I've been very impressed by Godot: it installs down to a small size, isn't bloated and slow with a million files, and it is very easy to change versions of the engine (I compiled it from source to have the latest version as an option). Unfortunately, after working through a couple of small tutorials, I get the impression that Godot suffers from the same frame judder problem I had to deal with in Unity. Let me explain (tipping a hat to Glenn Fiedler's article):

Some of the first games ran on fixed hardware, so timing wasn't a big deal: each 'tick' of the game was a frame that was rendered to the screen. If the screen rendered at 30fps, the game ran at 30fps for everyone. This was used on PCs for a bit, but the problem was that some hardware was faster than others, and some games ran too fast or too slow depending on the PC. Clearly something had to be done to enable games to deal with different speed CPUs and refresh rates.

Delta Time?

The obvious answer was to sample a timer at the beginning of each frame, and use the difference (delta) in time between the current frame and the previous one to decide how far to step the simulation. This is great, except that things like physics can produce different results when given shorter and longer timesteps; for instance, a long pause while jumping due to a hard disk whirring could give enough time for your player to jump into orbit. Physics (and other logic) tends to work best, and be simpler, when given fixed regular intervals. Fixed intervals also make it far easier to get deterministic behaviour, which can be critical in some scenarios (lockstep multiplayer games, recorded gameplay etc).
Fixed Timestep

If you know you want your gameplay to have a 'tick' every 100 milliseconds, you can calculate how many ticks should have completed by the start of any frame:

```cpp
// some globals
int iCurrentTick = 0;

void Update()
{
    // Assuming our timer starts at 0 on level load:
    // (ideally you would use a higher resolution than milliseconds,
    // and watch for overflow)
    int iMS = gettime();

    // ticks required since start of game
    int iTicksRequired = iMS / 100;

    // number of ticks that are needed this frame
    iTicksRequired -= iCurrentTick;

    // do each gameplay / physics tick
    for (int n=0; n<iTicksRequired; n++)
    {
        TickUpdate();
        iCurrentTick++;
    }

    // finally, the frame update
    FrameUpdate();
}
```

Brilliant! Now we have a constant tick rate, and it deals with different frame rates. Providing the tick rate is high enough (say 60fps), the positions when rendered look kind of smooth. This, ladies and gentlemen, is about as far as Unity and Godot typically get.

The Problem

However, there is a problem, and it can be illustrated by taking the tick rate down to something that could be considered 'ridiculous', like 10 or fewer ticks per second. The problem is that frames don't coincide exactly with ticks. At a low tick rate, several frames will be rendered with dynamic objects in the same position before they 'jump' to the next tick position. The same thing happens at high tick rates: if the tick rate does not exactly match the frame rate, you will get some frames that have 1 tick, some with 0 ticks, some with 2. This appears as a 'jitter' effect. You know something is wrong, but you can't put your finger on it.

Semi-Fixed Timestep

Some games attempt to fix this by running as many fixed timesteps as possible within a frame, then a smaller timestep to make up the difference to the delta time. However, this brings with it many of the same problems we were trying to avoid by using a fixed timestep (especially the lack of deterministic behaviour).
Interpolation

The established solution commonly used to deal with both these extremes is to interpolate, usually between the current and previous values for position, rotation etc. Here is some pseudocode:

```cpp
// some globals
int iCurrentTick = 0;

// player
Vector3 m_Pos_previous = (0, 0, 0);
Vector3 m_Pos_current = (0, 0, 0);
Vector3 m_Pos_render = (0, 0, 0);

// called each frame by engine
void Update()
{
    // Assuming our timer starts at 0 on level load:
    // (ideally you would use a higher resolution than milliseconds,
    // and watch for overflow)
    int iMS = gettime();

    // ticks required since start of game
    int iTicksRequired = iMS / 100;

    // remainder
    int iMSLeftOver = iMS % 100;

    // number of ticks that are needed this frame
    iTicksRequired -= iCurrentTick;

    // do each gameplay / physics tick
    for (int n=0; n<iTicksRequired; n++)
    {
        TickUpdate();
        iCurrentTick++;
    }

    // finally, the frame update
    float fInterpolationFraction = iMSLeftOver / 100.0f;
    FrameUpdate(fInterpolationFraction);
}

// just an example
void TickUpdate()
{
    m_Pos_previous = m_Pos_current;
    m_Pos_current.x += 10.0f;
}

// very pseudocodey, just an example of a translate for one object
void FrameUpdate(float fInterpolationFraction)
{
    // where pos is a Vector3 translate
    m_Pos_render = m_Pos_previous
        + ((m_Pos_current - m_Pos_previous) * fInterpolationFraction);
}
```

The more astute among you will notice that if we interpolate between the previous and current positions, we are actually interpolating *back in time*: in fact, we are going back by exactly 1 tick. This results in smooth movement between positions, at the cost of a 1 tick delay. 'This delay is unacceptable!' you may be thinking. However, the chances are that many of the games you have played have had this delay, and you have not noticed. In practice, fast twitch games can set their tick rate higher to be more responsive; games where this isn't so important (e.g. RTS games) can reduce processing by dropping the tick rate.
My Tower Defence game runs at 10 ticks per second, for instance, and many networked multiplayer games have low update rates and rely on interpolation and extrapolation. I should also mention that some games attempt to deal with the 'fraction' by extrapolating into the future rather than interpolating back a tick. However, this can bring in a new set of problems, such as lerping into colliding situations, and snapping.

Multiple Tick Rates

Something which doesn't get mentioned much is that you can extend this concept and have different tick rates for different systems. You could, for example, run your physics at 30tps (ticks per second) and your AI at 10tps (an exact multiple, for simplicity), or use tps to scale down processing for far away objects.

How do I retrofit frame interpolation to an engine that does not support it fully?

With care, is the answer, unfortunately. There appears to be some support for interpolation in Unity for rigid bodies (Rigidbody.interpolation), so this is definitely worth investigating if you can get it to work; I ended up having to support it manually (ref 7) (if you are not using the internal physics, the internal mechanism may not be an option). Many people have had issues dealing with jitter in Godot, and I am as yet not aware of support for interpolation in 3.0 / 3.1, although there is some hope of allowing interpolation from the Bullet physics engine in the future. One option for engine devs is to leave interpolation to the physics engine. This would seem to make a lot of sense (avoiding duplication of data, a global mechanism); however, there are many circumstances where you may not wish to use physics but still want interpolation (short of making everything a kinematic body).
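The multiple tick rates idea can be sketched by running the slower system on an exact sub-multiple of the main tick (the counters and the 3:1 ratio here are my own illustration):

```cpp
// Multiple tick rates sketch: physics runs every tick, AI on every 3rd
// tick, i.e. 30tps physics with 10tps AI. Counters just record how many
// updates of each kind have run.
struct Counters { int physicsTicks = 0; int aiTicks = 0; };

void TickUpdate(Counters &c, int tickNumber)
{
    c.physicsTicks++;          // physics at the full tick rate

    if ((tickNumber % 3) == 0)
        c.aiTicks++;           // AI at one third of the tick rate
}
```

Using an exact multiple keeps the slower system's ticks aligned with the faster one, which is what makes the bookkeeping simple.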
It would be nice to have internal support of some kind, but if this is not available, to support this correctly you should explicitly separate the following:

  1. transform CURRENT (tick)
  2. transform PREVIOUS (tick)
  3. transform RENDER (where to render this frame)

The transform depends on the engine and object, but it will typically be things like translate, rotate and scale which need interpolation. All of these should be accessible from the game code, as they all may be required, particularly 1 and 3: 1 would be used for most gameplay code, and 3 is useful for frame operations like following a player with a camera. The problem that exists today in some engines is that in some situations you may wish to manually move a node (for interpolation), and this in turn throws the physics off etc, so you have to be very careful shoehorning these techniques in.

Delta smoothing

One final point to totally throw you. Consider that typically we have been relying on a delta (difference) in time measured from the start of one frame (as seen by the app) to the start of the next frame (as seen by the app). However, in modern systems the frame is not actually rendered between these two points. The commands are typically issued to a graphics API but may not actually be rendered until some time later (consider the case of triple buffering). As such, the delta we measure is not actually the time difference between the 2 rendered frames; it is the delta between the 2 submitted frames. A dropped frame may, for instance, make very little difference to the delta between the submitted frames, but double the delta between the rendered frames. This is somewhat a 'chicken and egg' problem: we need to know how long the frame will take to render in order to decide what to render, and where; but in order to know how long the frame will take to render, we need to decide what to render, and where!
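One possible mitigation, sketched here purely as an illustration of the general idea rather than a recommended implementation, is to snap the measured delta to the nearest whole multiple of the vsync interval, so that timer jitter around the refresh period doesn't leak into the simulation:

```cpp
#include <cmath>

// Sketch of one delta-smoothing strategy: snap a measured frame delta to
// the nearest whole multiple of the vsync interval, clamped to at least
// one interval.
float SmoothDelta(float measuredDelta, float vsyncInterval)
{
    float multiples = measuredDelta / vsyncInterval;
    float snapped = floorf(multiples + 0.5f);  // round to nearest multiple
    if (snapped < 1.0f)
        snapped = 1.0f;                        // at least one interval
    return snapped * vsyncInterval;
}
```

A delta of 17.1ms at 60Hz snaps to one interval (16.67ms), while 34.9ms snaps to two, treating it as a dropped frame.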
On top of this, a dropped frame 2 frames ago could cause an artificially high delta in later submitted frames if they are capped to vsync! Luckily, in most cases the solution is to stay well within performance bounds and keep a steady frame rate at the vsync cap. But in any situation where we are on the border of dropping frames (perhaps a high refresh monitor?) it becomes a potential problem. There are various strategies for trying to deal with this, for instance smoothing delta times, or working with multiples of the vsync interval, and I would encourage further reading on this subject (ref 3, 8, 9).

Addendum

Note that the code examples given are pseudocode, and may be best modified for a particular engine. There is often some support for fixed tick rates within engines; for instance, in Unity you may be able to use the following simplified scheme:

```cpp
// unity calls update once per frame
void Update()
{
    // interpolation fraction
    float fFraction = (Time.time - Time.fixedTime) / Time.fixedDeltaTime;

    // my frame update for interpolation
    FrameUpdate(fFraction); // same as earlier examples
}

// unity calls once per tick
void FixedUpdate()
{
    TickUpdate(); // same as earlier examples
}
```

I would also suggest implementing interpolation without engine physics first, because using it with engine physics can introduce subtle bugs, as you may be moving a physics representation inadvertently. I will address this in a future article; in short, one solution is to create completely separate renderable objects and physics objects in the game (different branches of the scene graph), store a reference on the render object to the physics object, and interpolate the render object transform based on the physics object. In Godot you can also visualize the physics colliders in the Debug Menu, which is very helpful.

References: 1 2 3 4 5 6 7 8 9
  14. Is this for a game engine driven by a scripting language, like Godot or Unity? The onus is usually on the user not to have infinite loops or performance sapping code. You can easily check for this with something simple like:

```cpp
bool bQuit = false;
while (!bQuit)
{
    for (int n=0; n<10; n++)
    {
        bQuit = RunVMInstruction();
        if (bQuit)
            break;
    }

    if (TimeTooLong() || TooManyInstructions())
        bQuit = true;
}
```

If a timer check is expensive, surely you can move it out of an inner loop? Perhaps there is not enough information to properly understand why this is a problem, as Wessam says. Afaik, event based timers involving the OS are often wildly inaccurate (+- lots of milliseconds), so you are usually better off doing this kind of thing yourself. Even if calling your OS gettime function were that expensive (even outside a loop), you could also just roughly calibrate the number of VM instructions that would run in a given time, and use the instruction counter.
  15. lawnjelly

    Fixing your Timestep and evaluating Godot

    Yes, I guess wishful thinking lol! I do mention it at the end, and reference 3 gives some discussion of the problem and possible solutions. Some of it may be a result of how close a frame is submitted to vsync, giving stuttered delays. I'm really not an expert on deltaTime variations; perhaps a question on the forum might get you some good answers? There are quite a few more hardware orientated guys here. Running in the editor, as you say, is not a good way to get reliable measures of jitter, for at least 2 reasons afaik. First, all sorts of processing is going on in the editor aside from the game, which can cause stutters and stalls. Second, in my experience vsync doesn't tend to work in windowed mode, or at least not as you would expect it to. Another discussion:
  16. lawnjelly

    Fixing your Timestep and evaluating Godot

    As Septopus says, this probably would have been better as a fresh forum post linking to the blog lol, but we have started so we shall finish! I am intending to write a follow up with some good approaches to dealing with the interaction between interpolation code and internal physics engines, which can be a very tricky subject. There is certainly a lot of info packed into you guys' posts, and there does seem to be some confusion. I realise that I haven't made it clear enough in the blog that interpolation is to be used WITH the tick based update, i.e. the first and second pseudocode samples are intended to be used together. I alluded to this in the comment '// ... gameplay ticks' in order to save space, but I will edit this to longhand in order to avoid confusion. This is great feedback: something that is obvious to me when writing, but not to the reader.

First, my recommendation for anyone dealing with these issues (I cannot stress this enough): for testing, reduce your fixed tick rate, say to 1tps or 2tps. This will make clear any errors that would be hidden by testing at a higher tick rate. If you can't get smooth behaviour at a low tick rate, you will not get smooth behaviour at a high tick rate. Secondly, I would recommend getting interpolation working without engine physics before attempting to get it working with engine physics (if you are using engine physics). Using engine physics with interpolation can be a minefield in itself; I will suggest some good approaches in a further article.

I'll try and work through some of the posts, starting with nucky9's original. Very probably a mistake is basing the tick rate on the refresh rate. Refresh rate may vary on different computers, so any gameplay that you perfect on your development machine may run differently on players' machines, giving different speeds and hard to predict bugs. I would strongly advise fixing your tick rate in advance, probably to something that divides evenly into the common refresh rate of 60fps.
So 10tps, 12, 15, 20, 30 or 60 might be sensible choices. Secondly, be very wary of using single precision floats to deal with time. My example uses integer math up until the calculation of the interpolation fraction, specifically to avoid precision issues. Engine scripting languages may not offer integer data types, so be especially on the lookout for bugs due to precision; consider what happens to the precision when the game has been running a long time and the duration of a tick is small relative to the total game time.

The main problem, as Mussi has spotted, is that while the interpolation code is roughly correct, there is no tick (fixed) update. This is probably my fault, as I didn't make it clear enough that interpolation should be built on top of a tick update scheme. You are, in this quoted code, doing interpolation twice, once here:

```cpp
newPosition += direction * Time.deltaTime * speed;
```

and once here:

```cpp
Vector3 interpolatedPosition = oldPosition + ((newPosition - oldPosition) * fractionalFrame);
```

which gives an incorrect result. The first line is typical of the 'Delta Time' paragraph at the beginning of the blog post. It can work okay in certain situations (for instance, a 2d endless runner camera). If you use this, there's no need to add interpolation on top of it.
The pattern I was explaining is as follows (in pseudocode):

```cpp
// ticks per second
#define TICK_RATE 10

// time taken by each tick, in seconds
#define TICK_TIME (1.0f / (float) TICK_RATE)

// the name of this may vary with engine; in unity it is simply Update()
void Engine_OncePerFrame_Update()
{
    // calculate number of ticks required

    // do 0 to multiple ticks
    for (int n=0; n<iTicksRequired; n++)
    {
        // (move objects with a FIXED delta time, the tick time)
        TickUpdate();
    }

    // interpolation update
    FrameUpdate();
}

// physics / logic / movement / ai
void TickUpdate()
{
    oldPosition = newPosition;
    newPosition.x += (TICK_TIME * speed);
}

// interpolation of rendered objects for this frame
void FrameUpdate()
{
    interpolatedPosition = oldPosition
        + ((newPosition - oldPosition) * fInterpolationFraction);
}
```

Note that the interpolation fraction is the fraction of a TICK, not the fraction of a frame. Note also that this is pseudocode. While you can do all the tick updates yourself, unity provides a FixedUpdate() function which is ticked by the engine, and which you can use in many circumstances instead of doing the ticking yourself:

```cpp
// unity calls update once per frame
void Update()
{
    // interpolation fraction
    float fFraction = (Time.time - Time.fixedTime) / Time.fixedDeltaTime;

    // my frame update for interpolation
    FrameUpdate(fFraction); // as above
}

// unity calls once per tick
void FixedUpdate()
{
    TickUpdate(); // same as above
}
```

So in unity you can sometimes get away with not calculating the number of ticks yourself, and allow the engine to do it. The problem of getting inaccurate timing values from Unity for deltaTime may be a different issue. Essentially, you should be able to simply move something across the screen with:

```cpp
void Update(deltaTime)
{
    pos += deltaTime * speed;
}
```

and get a reasonable result (barring the issue with frame submit / display times mentioned in the 'Delta Smoothing' final paragraph).
If you are still getting bad jitter there, it does suggest an issue with the times being passed.
  17. lawnjelly

    Help with 3d collision question

    Sorry, I didn't notice this was in the beginner section. However, it is a good learning task, so don't be afraid to attempt it. If the complexity (number of triangles) of the lake is low, then a simple point in polygon test will find whether a boat is attempting to move outside the lake. An example source code in c++ is here: However, this will mean the boat stops dead when it reaches an edge instead of sliding, which is not too realistic. Even so, it may be a good first test that you are able to load the lake data correctly. Once this is working, if you understand some programming like arrays etc, I would encourage you to try creating a navigation mesh (navmesh), as these are conceptually easy to understand and very powerful in games; they will allow you to slide boats along edges, and later perform more advanced stuff like pathfinding (maybe in a later game). The first step in all cases is to load your polygons. I am not sure how you are doing this for rendering, but simplifying to the case of triangles, you would end up with:

  1. A list of vertex positions (x, y, z). In the case of a lake you can ignore the height coordinate and do things in 2d. This will usually be either y or z depending on your convention.
  2. A list of triangles, with each triangle defined by 3 vertex indices (which can be used to look up the position in the first list).

Triangles will often share vertices, i.e. the index of a vertex will appear in more than 1 triangle. You can use this to identify neighbouring triangles and to build the navmesh, as once a boat is within a triangle, it can only progress to neighbouring triangles. When running the game, you keep track of which triangle the boat is within. If it crosses one of its edges, it is either moving into a neighbouring triangle (in which case the move is okay), or trying to move into space outside the lake, in which case you can slide it against the edge of the triangle.
Once you have a boat moving on the navmesh, the next step is to shrink the navmesh a little to account for the radius of the boat (so it doesn't poke onto land). You can either do this in blender (easier) or computationally in your game code. This may sound complex; we don't know what level you are at, so it may be easier to start with a simpler game design, e.g. a grid based game. Another alternative is to use a third party physics engine to help, for instance Box2D. If you were using a physics engine you might send it the edges of the lake as static colliders, then place a physics object for the boat within the lake, and push it around with forces.
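The "keep track of which triangle the boat is within" idea above depends on a 2D point-in-triangle test. Here is a minimal sketch (function names are my own, and I'm assuming the lake has been flattened to 2D by dropping the height axis):

```python
def edge_side(p, a, b):
    # Cross product of (b - a) and (p - a): its sign says which side of
    # the directed edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_triangle(p, a, b, c):
    d1 = edge_side(p, a, b)
    d2 = edge_side(p, b, c)
    d3 = edge_side(p, c, a)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    # Inside (or exactly on an edge) if all signs agree; this also works
    # regardless of the triangle's winding order.
    return not (has_neg and has_pos)
```

A nice property for this use case is that when the test fails, the sign values also tell you which edge was crossed, which is exactly what you need to decide whether the boat moved into a neighbouring triangle or hit the lake boundary.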
  18. lawnjelly

    Help with 3d collision question

    I can think of lots of ways of doing this, depending on your game details as TeaTreeTim says. However, quite a cool, cheap way of doing it and colliding the boat against the edges is: take the lake geometry (a bunch of triangles with mostly shared vertices) and build this into a navmesh. Then use a standard navmesh solution to detect moves to neighbouring polys, and prevent moves or slide against boundaries of the lake. You don't mention an engine, so I'm assuming you are writing this all yourself. You can get the poly geometry from blender (e.g. export as an obj and parse the file yourself), but presumably you are loading in the lake geometry already to render it, so you can derive the navmesh from this. If you are using an engine it will probably have a mechanism for reading the geometry. There are also point-in-concave-polygon tests, however you might want something cheaper depending on the number of edges to the lake. You could e.g. grid up the lake and, for each grid square, either list the edges to check (on boundary squares) or record whether the square is totally inside / outside the lake. Also agree that having something more like bezier curves round the edge might be cool, and you may also want to take account of lake depth (boats might not be able to go right up to lake boundaries).
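Deriving the navmesh from the render geometry mostly means finding which triangles neighbour which. A minimal sketch of building that adjacency from an indexed triangle list (the data layout is an assumption, not tied to any particular engine or file format):

```python
def build_adjacency(triangles):
    # triangles: list of (i0, i1, i2) vertex-index tuples.
    # Map each undirected edge (sorted index pair) to the triangles using it;
    # an edge shared by two triangles makes them neighbours.
    edge_to_tris = {}
    for t, (i0, i1, i2) in enumerate(triangles):
        for edge in ((i0, i1), (i1, i2), (i2, i0)):
            edge_to_tris.setdefault(tuple(sorted(edge)), []).append(t)

    neighbours = [[] for _ in triangles]
    for tris in edge_to_tris.values():
        if len(tris) == 2:  # interior edge shared by exactly two triangles
            a, b = tris
            neighbours[a].append(b)
            neighbours[b].append(a)
    return neighbours
```

Edges that end up owned by only one triangle are the lake boundary, which is exactly the set of edges you would slide the boat against.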
  19. lawnjelly

    Exluding Country from Using Software

    Have you thought about therapy?
  20. Or it can be running in a different thread. The audio system often has multiple tiles in its audio buffer, and you fill one or several as they become free, perhaps in a callback. If you don't fill them on time you may get audio glitches as the audio plays tiles that contain old data. The size and number of tiles affect the audio latency (the gap between triggering a sound and hearing it). With a small buffer you can get low latency, but you need to ensure it is filled on time; with a larger buffer there is less need to keep it filled in a timely fashion. If the audio is running on a different thread, the effect of frame rate may rather be to affect the 'granularity' of sound effects playing in the game, unless the audio wrapper and system specifically compensate for this. Typically a game might issue a command like PlaySound(GUNSHOT)... if it is running at 1fps, a bunch of these may be issued at the same time rather than spread out over the second. More likely to be the other way around if anything: if you can query your audio how far through it is, you know how far to advance your game. However, with an accurate general timer this is less likely to be an issue, as most games are designed to be frame rate independent, i.e. they use their general timer to know how far to advance the game. The longer the audio, the greater the possibility of drift between the general purpose timer and the audio play rate, and afaik different sample players are not exact. This is more likely to be an issue in audio / music apps than in games though. Also note that you could in theory change the rate at which audio plays, but this normally gives a change in pitch, which might be noticeable, especially with a varying frame rate. You can also do time stretching on audio, to shift the play rate without changing pitch, but that would probably be a very messy solution to this particular problem.
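The tile size / count vs. latency trade-off above is just arithmetic: a newly written sample can wait behind every queued tile before it is heard. A rough sketch (parameter names are illustrative; real audio APIs expose these values in their own ways):

```python
def audio_latency_ms(num_tiles, frames_per_tile, sample_rate_hz):
    # Worst case: a freshly written sample sits behind all queued tiles.
    return 1000.0 * num_tiles * frames_per_tile / sample_rate_hz

# e.g. 4 tiles of 256 frames at 48 kHz:
#   4 * 256 / 48000 seconds, roughly 21 ms worst-case output latency.
```

Halving the tile size halves the latency but doubles how often the fill callback must run on time, which is the glitch risk described above.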
  21. Fully expected Hodgman / MJP etc. to provide a definitive answer on this, but afaik it may depend on the hardware. See this thread: And this linked page on the graphics pipeline:
  22. lawnjelly

    Scoring Navigation Precision on User Created Courses

    I'm not that familiar with skiing / snowboarding, but as a novice I'd expect to have my score lowered only if I went outside the gated route, i.e. measure the distance from the player to the edges formed by adjacent flags, if he is outside the route. For any map you might be able to precalculate the closest edge on a grid, then do a point-to-line-segment distance calculation. You may have to take account of players cutting corners (e.g. missing out a whole section of the course). Another possibility is simply to have the gates change colour as the player goes through them. If they miss a gate they have to go back and get it to change colour, so there is a time penalty for missing a gate. Also, I read that for slalom there seem to be single red and blue gates, and you have to go through on the appropriate right or left side I think. This is another idea that might be good for some courses.
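The point-to-line-segment distance mentioned above is a standard projection-and-clamp calculation. A minimal 2D sketch (treating each pair of adjacent flags as a segment):

```python
import math

def point_segment_distance(p, a, b):
    # Distance from point p to the segment a-b, all 2D (x, y) tuples.
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        # Degenerate segment: both flags coincide.
        return math.hypot(px - ax, py - ay)
    # Project p onto the infinite line, then clamp to the segment.
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)
```

With the precalculated closest-edge grid suggested above, scoring a frame is then just one lookup and one call to this function.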
  23. Coffee machine
Pizza / chinese delivery
Threat of impending closure
That Joel Test though, using an example of Microsoft as a paragon of software development... the irony, the irony.
  24. lawnjelly

    Economic Development Simulation Game

    I wouldn't worry about things being 'done before'; economic and similar simulations are great subjects for games, because computers do them so well, and I think humans intrinsically tend to find such complex simulations interesting, as the results are often hard to predict and sometimes counter-intuitive. On a related note, I wish politicians would make better use of simulations for examining the impact of policies, instead of essentially plucking them from the air with the view that 'god told them their way is the right way'. For instance, simulations can clearly show the effects of taxing people too little or too much, welfare etc. But as it is we are often governed by those with the weakest brains, sigh lol. As your design seems based on the students making the decisions, rather than AI, I think that to gain the numbers necessary for meaningful results, you might like to have each student make decisions for a large number of different individuals (perhaps in different roles). Of course, then you have the possibility that a student might rig a group of individuals to perform better as a group than they would as individuals... which is artificial, but perhaps gives some interesting twists.