iradicator

Member
  • Content Count: 10
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About iradicator

  • Rank
    Member

Personal Information

  • Interests
    Business
    DevOps
    Education
    Production
    Programming
    QA

  1. I have a sphere casting a shadow on a quad that uses parallax mapping. I was wondering how to offset the shadow-map lookup so that the parallax-displaced fragment receives the correct shadow (a sketch of one possible approach follows below). BTW, what would the equivalent solution be for shadow volumes?
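A minimal sketch of one way to do this, assuming the parallax offset is applied along the quad's geometric normal and that a shadow-map depth lookup is available; the function and parameter names (parallaxShadow, sampleShadowDepth, heightScale) are illustrative, not from any particular engine:

```cpp
// Hypothetical sketch: shift the shadow-map lookup to the surface point that
// parallax mapping pretends the fragment sits at, then project as usual.
#include <glm/glm.hpp>

float parallaxShadow(const glm::vec3& worldPos,     // un-displaced fragment position
                     const glm::vec3& worldNormal,  // geometric normal of the quad
                     float heightSample,            // height-map value at the parallaxed UV
                     float heightScale,
                     const glm::mat4& lightViewProj,
                     float (*sampleShadowDepth)(glm::vec2 uv))
{
    // Push the position down along the normal by the sampled height.
    glm::vec3 displaced = worldPos - worldNormal * (heightSample * heightScale);

    // Project the displaced point into light space and do the usual depth compare.
    glm::vec4 clip = lightViewProj * glm::vec4(displaced, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;
    glm::vec2 uv   = glm::vec2(ndc) * 0.5f + 0.5f;
    float fragDepth = ndc.z * 0.5f + 0.5f;

    const float bias = 0.002f;                       // small bias against shadow acne
    return (fragDepth - bias > sampleShadowDepth(uv)) ? 0.0f : 1.0f;  // 0 = in shadow
}
```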
  2. I'm adding shadow volume support to my engine (graphics backend written in OpenGL 4.5). Depth Pass was working fine, and to handle the case where the camera is placed inside a shadow volume I decided to use Carmack's Reverse (Depth Fail). I'm getting weird errors, and I want to make sure I'm working with a good model (I'm using the Stanford Bunny / Dragon) before I continue debugging my algorithm. Depth Pass (working fine): Depth Fail (artifacts): I then positioned the camera inside the bunny and realized that the artifacts are actually triangles that aren't being culled (they face both the camera and the light source, so they shouldn't be culled). Note: the bunny in this picture is the second bunny, behind the one in the previous picture - the camera is inside the first bunny from the picture above! I added a simple geometry shader to paint front faces green and back faces blue. As you can see, this "patch" looks odd: some of its triangles face the camera and some face away from it, which might explain the artifact... Maybe the model is incorrect? I'm attaching the glTF I'm using (Bunny.gltf). Bunny.gltf I'm using silhouette extrusion to calculate the shadow volumes. I compute the silhouette using triangle adjacency topology (built once when loading the model) and find the edges that run around the light-facing triangles (I use face normals for this test - I found them to yield better results than interpolated vertex normals). A sketch of the silhouette step is included below. Would love to hear your thoughts about this method. Thank you for your help!
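A minimal sketch of the silhouette step described above, assuming triangles with precomputed face normals and centers and a per-edge adjacency list built at load time; the structure and function names are illustrative, not from the engine in question:

```cpp
// Collect the edges that separate light-facing triangles from the rest;
// these are the edges that get extruded away from the light.
#include <vector>
#include <glm/glm.hpp>

struct Edge { int v0, v1; int triA, triB; };   // triB == -1 for boundary edges

std::vector<Edge> findSilhouette(const std::vector<glm::vec3>& faceNormals,
                                 const std::vector<glm::vec3>& faceCenters,
                                 const std::vector<Edge>& edges,
                                 const glm::vec3& lightPos)
{
    // A triangle faces the light if its face normal points toward the light.
    auto facesLight = [&](int tri) {
        return glm::dot(faceNormals[tri], lightPos - faceCenters[tri]) > 0.0f;
    };

    std::vector<Edge> silhouette;
    for (const Edge& e : edges)
    {
        bool a = facesLight(e.triA);
        bool b = (e.triB >= 0) ? facesLight(e.triB) : false;
        if (a != b)                 // edge lies between a lit face and an unlit face
            silhouette.push_back(e);
    }
    return silhouette;
}
```

If neighbouring faces are nearly perpendicular to the light direction, a small epsilon in the dot-product test can keep the silhouette from flickering between frames.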
  3. This is a general question about a player controller on a surface that contains geometry (curved roads / slopes, mountains, etc.) and obstacles (walls). The game should simulate a simple physical model (acceleration, collisions, etc.) and the character should navigate convincingly through the terrain. I'm using Unity, but I think this is a general question about how to design a character controller.

I wrote a simple character controller that uses player input to steer the character in the world. The WASD keys move forward and turn. Since I'm controlling the character directly, I'm using a kinematic object (I don't even use the rigidbody) and move it by setting the transform directly from a model I implemented (I have speed, acceleration, mass, etc.). Why did I write a physical kinematic simulation? I tried to use a rigidbody and apply forces based on the player's input directly, but the control felt a little "swimmy" and was hard to tweak (for example: the character slammed hard and spun out of control even when locking the xz rotation axes, it took a long time to accelerate, etc.).

That worked well during prototyping on a simple plane with no obstacles. Now I have a level with uneven geometry. The problem I have is how to make the player "stick" to the ground while travelling around (the prototype applies movement on the xz plane but doesn't take into account being connected to the floor). Another issue is setting the orientation (up vector) of the player (imagine a vehicle) in a way that looks both smooth and convincing - the vehicle should change its pitch / roll as it navigates slopes. Even in the simple example of a vehicle climbing from a plane onto a road with a constant slope (say 20 deg), the orientation should change in a convincing manner, i.e. the vehicle should not start to "lift the nose" before touching the ramp, nor should it "sink the nose" by colliding into the ramp. Again, this is where the physics engine could come in handy, but when I tried to apply force going up, the vehicle slowed down because of friction. I also have problems with collisions: since I'm moving the character directly by controlling its transform (kinematic), it doesn't play well when the physics engine detects collisions and doesn't want to let the character penetrate a wall. It collides with objects, it just doesn't feel natural.

The real questions here are about the best approaches to designing a character controller (note: the same approach SHOULD also apply to agents using AI steering algorithms, which likewise calculate forces or run a model underneath). 1. How do you move a character? Do you use the physics engine to do the heavy lifting, or do you control the character directly as a kinematic object? 2. If you're using physics, what's the best approach to applying forces? (Yes, it depends on the game, but let's say a realistic physics model with accelerations and forces - and assume animations don't apply root motion, to simplify.) In Unity there are multiple ways to apply force - relative / non-relative, impulse / continuous, etc. 3. If you're not using physics, how do you make sure that collision detection plays nicely with your movement algorithm? How do you make collisions look natural while still giving the player good control? 4. On uneven terrain, how do you make the character (let's assume a vehicle - a car - with no complex animations, so no IK in play) "stick" to the ground while changing its orientation (up vector) in a smooth and convincing manner? (A sketch of one common raycast-based approach is included below.) 5. What's the best way to also allow the player to disconnect from the ground (e.g. jump or fall off platforms)?

For me, rigidbody vs. kinematic is the key question here. I've seen tutorials that use both, but since they were super simple they didn't deal with the problems I mentioned above in depth. I'm wondering what the best approach for the player controller is and would love to hear more points to consider and tips based on your experience. Pseudo code / code samples (in any language / engine) would be much appreciated. Thank you!
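On question 4, a minimal sketch of one common ground-following step for a kinematic controller, assuming a ray-cast service supplied by the physics layer; all names (followGround, raycastDown, GroundHit) are hypothetical:

```cpp
// Snap a kinematic character to the ground and smoothly align its up vector
// with the surface normal, keeping the current heading.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct GroundHit { bool hit; glm::vec3 point; glm::vec3 normal; };

void followGround(glm::vec3& position, glm::quat& orientation, float dt,
                  GroundHit (*raycastDown)(const glm::vec3& origin, float maxDist))
{
    // Cast from slightly above the character so slopes and small steps are caught.
    GroundHit g = raycastDown(position + glm::vec3(0.0f, 0.5f, 0.0f), 2.0f);
    if (!g.hit)
        return;                                   // airborne: let gravity / jumping take over

    // Snap the character onto the surface.
    position.y = g.point.y;

    // Build the target orientation whose up axis matches the ground normal,
    // with the current forward direction projected onto the new ground plane.
    glm::vec3 up      = glm::normalize(g.normal);
    glm::vec3 forward = orientation * glm::vec3(0.0f, 0.0f, 1.0f);
    forward = glm::normalize(forward - up * glm::dot(forward, up));
    glm::vec3 right   = glm::cross(up, forward);
    glm::quat target  = glm::quat_cast(glm::mat3(right, up, forward));

    // Slerp toward the target so pitch / roll changes stay smooth on ramps.
    orientation = glm::slerp(orientation, target, glm::clamp(10.0f * dt, 0.0f, 1.0f));
}
```

Many kinematic controllers pair a step like this with a separate collide-and-slide pass against walls, so the transform is only written after penetrations have been resolved.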
  4. Yes! Thank you for looking into this. Calculating |dp| - |dv| * tcollision doesn't take direction into account. dv can be driving the agents away from each other, yet subtracting |dv| * tcollision treats the entire velocity magnitude as if it were reducing the distance between the agents (which again ignores the direction of the vectors). That's why this calculation isn't clear to me.
  5. Thank you for the explanation. I also tried substituting 'source - target' instead of 'target - source', and I understand that some hackery might be involved to make things "feel right". Any idea what the minSeparation variable stands for?
  6. In Ian Millington's (excellent) book "Artificial Intelligence for Games", pages 87-88, he describes an algorithm to predict whether two (or more) dynamic vehicles will collide and, if so, applies a force to steer them out of the collision. I found a similar source here http://www.imada.sdu.dk/~marco/Teaching/AY2014-2015/DM842/Slides/dm842-p2-lec2.pdf (slides 36-37). I don't understand the algorithm's calculations. The book states that the time to collision equals -(dp . dv) / |dv|^2, but the pseudo code omits the minus sign. Second, I'm not sure what minSeparation stands for. If it is the minimum distance between the vehicles at the estimated "collision time", then it should be |dp + dv * tcollision| and not |dp| - |dv| * tcollision - unless I'm missing something (and the sign is again exactly the opposite of what I would expect). Finally, if we anticipate a collision, I would expect to apply a repulsion force along the direction of the hit at the estimated time, that is -1 * (dp + dv * tcollision), but what is actually applied is dp + dv * tcollision (again missing the -1). Clearly I've got something wrong in my calculations, and I would appreciate it if someone could explain the math (and the semantics of minSeparation) in more detail (a small worked sketch of the closest-approach step is below). Thanks!
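For reference, a small sketch of the closest-approach quantities being discussed, using the convention dp = target position - character position and dv = target velocity - character velocity; this is not the book's pseudo code, just one way to compute the exact values:

```cpp
// Predict when two agents are closest and how far apart they are at that time.
#include <glm/glm.hpp>

struct Prediction { float tCollision; float separation; };

Prediction predictClosestApproach(const glm::vec3& dp, const glm::vec3& dv)
{
    float relSpeed2 = glm::dot(dv, dv);
    if (relSpeed2 < 1e-6f)                    // no relative motion: the distance never shrinks
        return { 0.0f, glm::length(dp) };

    // Time at which |dp + dv * t| is minimal: t = -(dp . dv) / |dv|^2.
    float t = -glm::dot(dp, dv) / relSpeed2;

    // Exact separation at that time. The |dp| - |dv| * t form in the slides is a
    // scalar shortcut that is only exact when the agents head straight at each
    // other, which may be the source of the confusion discussed above.
    float separation = glm::length(dp + dv * t);
    return { t, separation };
}
```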
  7. What does the white chart represent? How is it calculated? Can I use it to set the other parameters (offset + duration) in order to achieve smoother cross-fade results? Apologies if this isn't the right forum for Unity questions - I love GameDev.net and wanted to ask the question here.
  8. iradicator

    Path Planning and Steering behaviors

    Thanks ApochPiQ for the explanation. I'm applying it right now. One additional question, though: when the agent sees a target (e.g. the player) and would like to engage - let's say by attacking - should it plan a path towards the player, get there and then attack, or should it rather use seek / arrive (or another steering behavior) with the player as the target directly? Most of the Unity examples I saw online didn't use any steering behavior: when the agent saw the player, the NavMeshAgent target was simply set to the player, and when the agent got within proximity it attacked. I wondered whether using a steering behavior in this situation is actually better. What do you think?
  9. iradicator

    Path Planning and Steering behaviors

    Thanks for the response, duckflock. Let me rephrase the question: how do you use path planning and steering behaviors at the same time? Should I instruct the agent to follow the waypoints along the path after it is calculated, and apply steering behaviors while it's navigating?
  10. I'm writing a Diablo-like game in Unity and am currently working on the AI system. I want my agents to roam and interact with the world. I recently read the excellent book "Programming Game AI By Example" and I'm having a bit of a challenge understanding how to use both path planning and steering behaviors. Let's say the agent sees the player and decides to seek cover, or to stand between the player and a treasure chest, or even to engage the player while keeping a safe distance and avoiding dynamically created objects (e.g. a fireball spell). Path planning (and Unity's navigation system) easily lets me set a target for the agent, but how do I incorporate steering behaviors at the same time? Path planning helps the agent move to some point, and steering behaviors apply forces to guide it... the two seem to compete with each other, and all the resources I see online explain each topic separately, not in conjunction (a sketch of one common way to combine them is below). I would greatly appreciate it if someone could share their own experience or point me to the right resources. BTW, which path planning and steering behavior packages do you use in your own Unity games? Thanks!
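A minimal sketch of one common division of labour, assuming the planner returns a list of waypoints and a separate local-avoidance behavior already exists; all names are illustrative, not from any particular Unity package:

```cpp
// The planner supplies waypoints; steering (seek plus avoidance) supplies the
// per-frame acceleration toward the current waypoint.
#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

struct Agent { glm::vec3 position; glm::vec3 velocity; float maxAccel; };

glm::vec3 steerAlongPath(Agent& agent,
                         const std::vector<glm::vec3>& waypoints,
                         std::size_t& current,
                         glm::vec3 (*avoidObstacles)(const Agent&))
{
    // Advance to the next waypoint once we are close enough to the current one.
    if (current + 1 < waypoints.size() &&
        glm::distance(agent.position, waypoints[current]) < 0.5f)
        ++current;

    // Seek the current waypoint: accelerate toward it at full strength.
    glm::vec3 toTarget = waypoints[current] - agent.position;
    glm::vec3 seek = glm::normalize(toTarget) * agent.maxAccel;

    // Blend in local avoidance (fireballs, other agents) as a second behavior.
    return seek + avoidObstacles(agent);
}
```

The point of the split is that the planner only decides where the agent should head next; seek / arrive and the avoidance behaviors decide the actual acceleration each frame, so the two layers complement rather than compete with each other.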