    01/23/18 04:04 PM

    Designing Player World Interaction in Unreal Engine 4

    Engines and Middleware

    Martin H Hollstein

    Originally posted on Troll Purse development blog.

    Unreal Engine 4 is an awesome game engine, and the Editor is just as good. It ships with a lot of built-in tools for games (especially shooters), and there are some excellent tutorials out there for it. So, here is one more. Today's topic is different methods for programming player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, the ideas translate easily to any game with a similar architecture.

    Interaction via Overlaps

    By far the most common tutorials for player-world interaction use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction, and it leverages classes already provided by the engine. Here is a simple example where overlap events are used to interact with the player:

    Header

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"
    
    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
        GENERATED_BODY()
    
    public:
        // Sets default values for this actor's properties
        AInteractiveActor();
    
        virtual void BeginPlay() override;
    
    protected:
        // Bound to the trigger volume's begin overlap event; enables input for the overlapping player.
        UFUNCTION()
        virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);
    
        // Bound to the trigger volume's end overlap event; disables input for the departing player.
        UFUNCTION()
        virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);
    
        // Executed when the bound "Interact" input action fires while this actor has input enabled.
        UFUNCTION()
        virtual void OnPlayerInputActionReceived();
    
        // Volume the player must overlap before this actor starts listening for input.
        UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
        class UBoxComponent* InteractionTrigger;
    };

    This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character has to overlap the InteractionTrigger component, which pushes the AInteractiveActor onto the input stack for that specific player. When the player then triggers the input action (via a keyboard key press, for example), the code in OnPlayerInputActionReceived executes. Here is the implementation.

    Source

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #include "InteractiveActor.h"
    #include "Components/BoxComponent.h"
    #include "Components/InputComponent.h"
    #include "GameFramework/Pawn.h"
    #include "GameFramework/PlayerController.h"
    
    // Sets default values
    AInteractiveActor::AInteractiveActor()
    {
        PrimaryActorTick.bCanEverTick = true;
    
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
        RootComponent->SetMobility(EComponentMobility::Static);
    
        InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
        InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
        InteractionTrigger->SetMobility(EComponentMobility::Static);
        InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
        InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);
    
        InteractionTrigger->SetupAttachment(RootComponent);
    }
    
    void AInteractiveActor::BeginPlay()
    {
        Super::BeginPlay();
    
        // Create an input component on demand so this actor can bind input actions.
        if (InputComponent == nullptr)
        {
            InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
            InputComponent->RegisterComponent();
            InputComponent->bBlockInput = bBlockInput;
        }
    
        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
    }
    
    void AInteractiveActor::OnPlayerInputActionReceived()
    {
        // This is where the actor's interaction logic executes. Something as simple as a log message works for testing.
    }
    
    void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
    {
        // Only pawns have controllers; ignore overlaps from anything else.
        APawn* Pawn = Cast<APawn>(OtherActor);
        if (Pawn)
        {
            APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
            if (PC)
            {
                // Push this actor onto the player's input stack so the "Interact" binding is live.
                EnableInput(PC);
            }
        }
    }
    
    void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
    {
        APawn* Pawn = Cast<APawn>(OtherActor);
        if (Pawn)
        {
            APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
            if (PC)
            {
                // Pop this actor off the player's input stack once they walk away.
                DisableInput(PC);
            }
        }
    }
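
    For a concrete sense of how a game-specific actor might build on this base class, here is a minimal sketch of a door that swings open and closed when the player interacts with it. ADoorActor, DoorMesh, and the rotation values are illustrative assumptions, not code from the original post.

    // Hypothetical subclass of AInteractiveActor: a door that toggles open/closed
    // when the bound "Interact" action fires. All names below are assumptions.
    
    // DoorActor.h
    #pragma once
    
    #include "CoreMinimal.h"
    #include "InteractiveActor.h"
    #include "DoorActor.generated.h"
    
    UCLASS()
    class GAME_API ADoorActor : public AInteractiveActor
    {
        GENERATED_BODY()
    
    public:
        ADoorActor();
    
    protected:
        virtual void OnPlayerInputActionReceived() override;
    
        // Visual representation of the door; rotated when the player interacts.
        UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
        class UStaticMeshComponent* DoorMesh;
    
        bool bIsOpen;
    };
    
    // DoorActor.cpp
    #include "DoorActor.h"
    #include "Components/StaticMeshComponent.h"
    
    ADoorActor::ADoorActor()
        : bIsOpen(false)
    {
        DoorMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Door Mesh"));
        // The mesh rotates at runtime, so it must be movable even though the root is static.
        DoorMesh->SetMobility(EComponentMobility::Movable);
        DoorMesh->SetupAttachment(RootComponent);
    }
    
    void ADoorActor::OnPlayerInputActionReceived()
    {
        // Toggle the door 90 degrees around the yaw axis each time the action fires.
        bIsOpen = !bIsOpen;
        DoorMesh->SetRelativeRotation(bIsOpen ? FRotator(0.0f, 90.0f, 0.0f) : FRotator::ZeroRotator);
    }

    Because input is only enabled while the player overlaps the trigger, the door can only be toggled from nearby.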

     

    Pros and Cons

    The positives of the collision volume approach are the ease of implementation and the strong decoupling from the rest of the game logic. The negatives are that interaction becomes imprecise over the game space (any overlap of a fairly large box counts as being "in range") and that every interactive object in the scene requires an additional trigger volume.
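
    One way to tighten that broadness, sketched below purely as an assumption on top of the class above, is to cache the overlapping pawn in the begin-overlap handler and only honor the input action when that pawn is roughly facing the actor. IsPawnFacingThisActor is a hypothetical helper, and the 0.5 threshold (about a 60 degree cone) is an arbitrary illustrative value.

    // Hypothetical helper on AInteractiveActor (not part of the original post).
    // Returns true when the pawn's forward vector points within roughly 60 degrees of this actor.
    bool AInteractiveActor::IsPawnFacingThisActor(const APawn* Pawn) const
    {
        if (Pawn == nullptr)
        {
            return false;
        }
    
        const FVector ToActor = (GetActorLocation() - Pawn->GetActorLocation()).GetSafeNormal();
        return FVector::DotProduct(Pawn->GetActorForwardVector(), ToActor) > 0.5f;
    }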

    Interaction via Raytrace

    Another popular method is to ray trace from the player's viewpoint and look for an interactive world item the player can interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. It eliminates the need for an extra collision volume per item and allows for more precise interaction targeting.

    Source

    AInteractiveActor.h

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"
    
    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
        GENERATED_BODY()
    
    public:
        // Called by the player controller when its interaction ray trace hits this actor.
        virtual void OnReceiveInteraction(class APlayerController* PC);
    };

     

    AMyPlayerController.h

    // Fill out your copyright notice in the Description page of Project Settings.
    
    #pragma once
    
    #include "CoreMinimal.h"
    #include "GameFramework/PlayerController.h"
    #include "AMyPlayerController.generated.h"
    
    UCLASS()
    class GAME_API AMyPlayerController : public APlayerController
    {
        GENERATED_BODY()
    
    public:
        AMyPlayerController();
    
        virtual void SetupInputComponent() override;
    
        // Maximum distance (in Unreal units) the interaction ray trace will travel.
        float MaxRayTraceDistance;
    
    private:
        // Returns the interactive actor under the player's viewpoint, or nullptr if there is none.
        class AInteractiveActor* GetInteractiveByCast();
    
        // Bound to the "Interact" input action.
        void OnCastInput();
    };

     

    These header files define the minimum set of functions needed to set up raycast interaction. Note that there are two files here, as two classes need modification to support input. This is more work than the first method using trigger volumes. However, all input binding is now constrained to a single class: either ACharacter or, if you design it differently, APlayerController. Here, the latter was used.

    The logic flow is straightforward. The player points the center of the screen toward an object (ideally a HUD crosshair aids the aim) and presses the input button bound to Interact. From there, OnCastInput() executes. It invokes GetInteractiveByCast(), which returns the first actor hit by the camera ray if it is an AInteractiveActor, or nullptr otherwise. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked on the hit actor. That final function is where inherited classes implement their interaction-specific code.

    The corresponding class definitions are as follows.

    AInteractiveActor.cpp

    void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
    {
        // Nothing in the base class, unless there is logic ALL interactive actors should execute,
        // such as cosmetics (sounds, particle effects, etc.).
    }
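
    As with the overlap approach, the interesting behavior lives in subclasses. Below is a minimal, hypothetical pickup that overrides OnReceiveInteraction; APickupActor and the log message are assumptions for the sketch, not code from the original post.

    // PickupActor.h (hypothetical subclass)
    #pragma once
    
    #include "CoreMinimal.h"
    #include "InteractiveActor.h"
    #include "PickupActor.generated.h"
    
    UCLASS()
    class GAME_API APickupActor : public AInteractiveActor
    {
        GENERATED_BODY()
    
    public:
        virtual void OnReceiveInteraction(class APlayerController* PC) override;
    };
    
    // PickupActor.cpp
    #include "PickupActor.h"
    #include "GameFramework/PlayerController.h"
    
    void APickupActor::OnReceiveInteraction(APlayerController* PC)
    {
        Super::OnReceiveInteraction(PC);
    
        if (PC != nullptr)
        {
            // Game-specific response goes here (e.g. add to an inventory); a log line keeps the sketch simple.
            UE_LOG(LogTemp, Log, TEXT("%s interacted with %s"), *PC->GetName(), *GetName());
            Destroy();
        }
    }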

     

    AMyPlayerController.cpp

    #include "AMyPlayerController.h"
    #include "InteractiveActor.h"
    #include "Components/InputComponent.h"
    
    AMyPlayerController::AMyPlayerController()
    {
        MaxRayTraceDistance = 1000.0f;
    }
    
    void AMyPlayerController::SetupInputComponent()
    {
        Super::SetupInputComponent();
        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
    }
    
    void AMyPlayerController::OnCastInput()
    {
        AInteractiveActor* Interactive = GetInteractiveByCast();
        if (Interactive != nullptr)
        {
            Interactive->OnReceiveInteraction(this);
        }
    }
    
    AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
    {
        // Start the trace at the camera and extend it along the view direction.
        FVector CameraLocation;
        FRotator CameraRotation;
    
        GetPlayerViewPoint(CameraLocation, CameraRotation);
        FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);
    
        // Ignore the controlled pawn so the trace does not hit the player itself.
        FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
        TraceParams.bTraceAsyncScene = true;
    
        FHitResult Hit(ForceInit);
        GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);
    
        // Cast returns nullptr if nothing was hit or the hit actor is not an AInteractiveActor.
        return Cast<AInteractiveActor>(Hit.GetActor());
    }

     

    Pros and Cons

    One pro of this method is that control of input stays in the player controller, while the implementation of the interaction response is still owned by the Actor that receives it. One con is that the interaction fires as many times as the player presses the button, and the controller does not continuously detect what is interactive (for example, to show a prompt) without a refactor that overrides a Tick function.
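
    To address that last point, one option (sketched here as an assumption, not something from the original post) is to re-run the cast from the controller's Tick and cache the currently focused actor, so the game can show or hide an interaction prompt as the player looks around. FocusedInteractive and the Tick override are hypothetical additions to AMyPlayerController, and this assumes ticking is enabled for the controller.

    // Hypothetical additions to AMyPlayerController.h:
    //     virtual void Tick(float DeltaSeconds) override;
    //     UPROPERTY()
    //     AInteractiveActor* FocusedInteractive = nullptr;
    
    void AMyPlayerController::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);
    
        // Re-run the view-point cast every frame and remember what is under the crosshair.
        AInteractiveActor* Current = GetInteractiveByCast();
        if (Current != FocusedInteractive)
        {
            // The focused actor changed; a HUD prompt could be shown or hidden here.
            FocusedInteractive = Current;
        }
    }
    
    void AMyPlayerController::OnCastInput()
    {
        // With Tick caching the result, the input handler can reuse it instead of casting again.
        if (FocusedInteractive != nullptr)
        {
            FocusedInteractive->OnReceiveInteraction(this);
        }
    }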

    Conclusion

    There are many methods for player-world interaction within a game world. For creating Actors within Unreal Engine 4 that allow player interaction, two of these potential methods are collision volume overlaps and ray tracing from the player controller. Several other methods exist and could also be used. Hopefully, the two implementations presented here help you decide how to approach player-world interaction in your game. Cheers!

     

     

    Originally posted on Troll Purse development blog.





    User Feedback


    There is a typo in the class declaration: it should be class AInteractiveActor instead of class InteractiveActor. Also, indentation is broken (mix of spaces and tabs).

    Edited by RootKiller

    On 1/23/2018 at 12:20 PM, RootKiller said:

    There is a typo in the class declaration: it should be class AInteractiveActor instead of class InteractiveActor. Also, indentation is broken (mix of spaces and tabs).

    Thanks, I will fix that up in time. Any thoughts on the content other than that? Was this useful?


    Thanks for the post.

    As a non-programmer I never find the time to develop code like this, so having someone post their work and explain it is great for me to read.



