Unity Decoupling physics


I'm having a problem with my current game architecture design. I'm using the ECS pattern, but I'm still having trouble decoupling some of the physics code, especially when the player wants to place a block somewhere: the gameplay code needs to cast a ray and, on top of that, check whether the target position is free. At first I cast rays by emitting events and letting the physics system handle them and invoke a callback function. But since the gameplay code now also needs to check whether a position is free, it would have to emit a second event. This method already felt hacky and slow, though better than direct access.
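For context, a minimal sketch of the event/callback flow described above might look like this. Every name here (RaycastRequest, PhysicsSystem, etc.) is illustrative, not an actual engine API:

```python
# Hypothetical sketch of event-based raycasting: gameplay code emits a
# request, the physics system processes it later and invokes a callback.

class RaycastRequest:
    def __init__(self, origin, direction, callback):
        self.origin = origin
        self.direction = direction
        self.callback = callback  # invoked with the hit result


class PhysicsSystem:
    def __init__(self):
        self.pending = []

    def emit(self, request):
        # Gameplay code queues a request instead of calling physics directly.
        self.pending.append(request)

    def update(self):
        # The physics system drains the queue and invokes each callback.
        for req in self.pending:
            hit = self._raycast(req.origin, req.direction)
            req.callback(hit)
        self.pending.clear()

    def _raycast(self, origin, direction):
        return None  # placeholder: a real implementation steps through the scene


# Note the indirection: placing a block needs one round trip for the ray
# hit, then a second event just to ask whether the position is free.
physics = PhysicsSystem()
results = []
physics.emit(RaycastRequest((0, 0), (1, 0), results.append))
physics.update()
```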

 

I'm curious how this is usually handled. I know you can cast a ray from anywhere when using Unity, but I doubt that's the correct approach. How have you handled this kind of problem?


Why are you doubting that this is the correct approach?

The physics system acts as a service in this case and the other systems are consumers.

 

Events or not, your picking system is dependent on the physics system either way, as without it your code would not work.


To add a bit to Madhed's post:

 


this gameplay-related code needs to cast a ray, and aside from that check if that position is free.

 

First: it's not clear how (or whether) you've separated collision detection from physics response. That is, why is the determination that a position is "free" not done by an entity attempting to move?

 


I had casted rays by emitting events, and letting the physics system handle them

 

Given no information on what "entities" or "positions" are, that's a common approach. Detect collisions (or lack thereof) and let physics determine responses.


"player wants to place a block somewhere"

 

i assume you mean in a minecraft type game, as opposed to simple entity movement.

 

 

i personally prefer non-deferred processing.

 

"player wants to place a block somewhere"

 

so you're in the process_input part of the game loop.

 

you need a raycast with collision check that returns space_is_empty: true | false.

 

if space_is_empty, add_block.

 

so your parts are a raycaster, a collision checker, and an add_block routine.

 

" I had casted rays by emitting events, and letting the physics system handle them and invoke a callback function....  This method already felt hacky and slow, but better than direct access."

 

sounds rather hacky and slow. think of all the unnecessary typing and processor cycles involved.

 

the non-deferred way:

 

process_input gets an "addblock" input. calls process_addblock.

 

process_addblock is the controlling code. it calls the raycaster. the raycaster calls the collision checker repeatedly as it steps through the scene, then returns the intersection point to process_addblock. process_addblock tests the intersection point to determine if space_is_empty. if space_is_empty, process_addblock calls the addblock routine.

 

after all that input processing continues with the next input tested for or queued.

 

so your modules are:

process_input

process_addblock

raycaster

collision checker

addblock routine

 

the collision checker can be part of the physics engine, or you can code a simple dedicated raycast and intersection check specifically for the task (often simpler).
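Put together, the non-deferred call chain above could be sketched like this. The dict-based grid "world" and every function name here are assumptions for illustration, not a prescribed implementation:

```python
# Sketch of the non-deferred flow:
# process_input -> process_addblock -> raycaster -> collision checker -> addblock.

world = {}  # maps grid position -> block; a stand-in world representation

def collision_check(pos):
    """dedicated check: is a block already occupying this cell?"""
    return pos in world

def raycast(origin, direction, max_steps=10):
    """step through the grid until we hit a block or run out of range."""
    pos = origin
    for _ in range(max_steps):
        pos = (pos[0] + direction[0], pos[1] + direction[1])
        if collision_check(pos):
            return pos  # intersection point
    return None

def addblock(pos):
    world[pos] = "block"

def process_addblock(origin, direction):
    hit = raycast(origin, direction)
    if hit is None:
        return None  # nothing in range to attach a block to
    # target the cell just in front of the hit
    target = (hit[0] - direction[0], hit[1] - direction[1])
    if not collision_check(target):  # space_is_empty
        addblock(target)
        return target
    return None

def process_input(command, origin, direction):
    if command == "addblock":
        return process_addblock(origin, direction)

# example: with a block at (3, 0), "addblock" looking along +x from the
# origin places a new block in the cell in front of it, (2, 0).
world[(3, 0)] = "block"
placed = process_input("addblock", (0, 0), (1, 0))
```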

Edited by Norman Barrows


Why are you doubting that this is the correct approach?

The physics system acts as a service in this case and the other systems are consumers.

 

Events or not, your picking system is dependent on the physics system either way, as without it your code would not work.

The extra function calls and callback functions add additional overhead and indirection. On the other hand, with this setup you would be able to replace the physics system without changing anything on the gameplay side.

 

First: it's not clear how (or whether) you've separated collision detection from physics response. That is, why is the determination that a position is "free" not done by an entity attempting to move?

Collision detection while moving is done by the physics system; the logic code simply sets a velocity and forgets about it. Preventing an object from moving when there is a block in its way is different from casting a ray, where you need the result instantly to perform a specific action.

 

Given no information on what "entities" or "positions" are, that's a common approach. Detect collisions (or lack thereof) and let physics determine responses.

I should have given more information: the physics system calculates if and where the ray hits, and then invokes a callback function. I didn't think it would be a good idea to let the physics system handle placing blocks.

 

...

Thanks for the suggestion! My problem, however, is that I want to decouple my code as much as possible. If the logic code needs references to the world etc. to cast rays, it will be harder to maintain, I can't swap out the physics system, and the logic code does things it shouldn't do.

Edited by ProtectedMode


I didn't think it would be a good idea to let the physics system handle placing blocks.

 

Sorry. I didn't understand the emphasis of the OP as placing blocks.

 

Not knowing what capabilities your "entities" have, and IF a block is an entity, you can create a block entity (perhaps at a hidden, always "free" location), attempt to move it to the desired position, and, if the entity says "Can't do it," inform the user, and keep the block for later. If the block can be placed, add it to your "world," and create another "for-later-use" block at that hidden location.
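The "spare block" idea above could be sketched roughly as follows. This assumes entities can report whether a move succeeded; the names (BlockEntity, HIDDEN, place_block) are hypothetical:

```python
# Illustrative sketch: keep a spare block entity at a hidden, always-free
# location, try to move it to the target cell, and only commit the
# placement if the entity reports the move succeeded.

HIDDEN = ("hidden", "hidden")  # a location guaranteed to be free


class BlockEntity:
    def __init__(self, world):
        self.world = world
        self.position = HIDDEN

    def try_move(self, target):
        if target in self.world:
            return False  # "Can't do it": cell already occupied
        self.position = target
        return True


def place_block(world, spare, target):
    if spare.try_move(target):
        world[target] = spare      # commit the placed block to the world
        return BlockEntity(world)  # create a new for-later-use spare
    return spare                   # keep the spare; inform the user


world = {}
spare = BlockEntity(world)
spare = place_block(world, spare, (1, 2))  # succeeds: cell was empty
```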

Edited by Buckeye


 

Why are you doubting that this is the correct approach?

The physics system acts as a service in this case and the other systems are consumers.

 

Events or not, your picking system is dependent on the physics system either way, as without it your code would not work.

The extra function calls and callback functions add additional overhead and indirection. On the other hand, with this setup you would be able to replace the physics system without changing anything on the gameplay side.

 

I was actually talking about your Unity example here, and your doubting of their approach.

 

I think the confusion here stems from the fact that you see ECS as a catch-all pattern for game systems design.

The pattern originated because people were trying to find an alternative to the dilemma posed by heavy inheritance trees and the resulting god classes.

 

You still have plenty of different patterns at your disposal for dealing with other aspects of your game.

I would describe your situation as follows:

 

 

Physics system

responsible for maintaining the physical representation of your game: rigid bodies, collision shapes, velocity, rotation.

Maintains and encapsulates physics state, advances physics state in discrete timesteps.

 

Provides an interface to manipulate/query the physics state.

This interface includes components where you can apply forces/torque on single entities and a global view where you can query information about all physics components.

A raycast is just a query in this case: Return to me a list of all colliders/rigid bodies that intersect with this ray.

 

You can still use the interface pattern to abstract away the specific physics implementation.
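One way to sketch that interface pattern, with the raycast as a query method. The PhysicsQuery/GridPhysics names and the grid backend are assumptions for illustration, not a real engine API:

```python
from abc import ABC, abstractmethod


class PhysicsQuery(ABC):
    """Abstract physics service: gameplay code depends only on this."""

    @abstractmethod
    def raycast(self, origin, direction):
        """Return the first occupied position along the ray, or None."""

    @abstractmethod
    def is_free(self, pos):
        """Return True if nothing occupies this position."""


class GridPhysics(PhysicsQuery):
    """Trivial grid-based implementation; a wrapper around Bullet, PhysX,
    or Unity's physics would implement the same interface."""

    def __init__(self, occupied):
        self.occupied = set(occupied)

    def raycast(self, origin, direction, max_steps=10):
        pos = origin
        for _ in range(max_steps):
            pos = (pos[0] + direction[0], pos[1] + direction[1])
            if pos in self.occupied:
                return pos
        return None

    def is_free(self, pos):
        return pos not in self.occupied


# Gameplay code only sees the PhysicsQuery interface, so the concrete
# engine can be swapped without touching the callers.
physics = GridPhysics({(2, 2)})
hit = physics.raycast((0, 2), (1, 0))  # query: what does this ray hit?
free = physics.is_free((1, 2))         # query: is this cell free?
```

Because both the raycast and the "is this position free" check are plain queries on the interface, the block-placing code from the original question needs no events and no knowledge of the world representation.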


If the logic code needs references to the world etc. to cast rays, it will be harder to maintain, I can't switch out the physics system, and the logic code does things it shouldn't do.

 

Madhed's approach divvies things up rather nicely. raycast is simply a query method of the physics engine's API.

 

that way process_addblock, the controlling or logic code, need not know about the world representation used by the physics engine. it just calls a physics routine and gets back a result, from which it computes space_is_empty, and then possibly calls addblock depending on the value of space_is_empty.

 

by making raycast part of the physics API, you can swap physics engines with no change to process_addblock, assuming you use a translation layer (the interface pattern) between your code and the swappable libraries.

 

when re-organizing APIs like this results in such lucid code as the example above, you're pretty much guaranteed to be on the right track as to how things should be organized and what should be part of which API. i.e. if an API change results in godlike simplicity and clarity, you're doing something right - VERY right.

Edited by Norman Barrows
