01/23/18 04:04 PM

    Designing Player World Interaction in Unreal Engine 4

    Engines and Middleware

    Martin H Hollstein

    Originally posted on Troll Purse development blog.

Unreal Engine 4 is an awesome game engine, and the Editor is just as good. There are a lot of built-in tools for a game (especially shooters), and some excellent tutorials out there for it. So, here is one more. Today's topic is different methods of programming player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, the ideas easily translate to any game with a similar architecture.

    Interaction via Overlaps

By far, the most common approach in tutorials for player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction, and it leverages classes already provided by the engine. Here is a simple example where the overlap code is used to interact with the player:

    Header

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
	GENERATED_BODY()

public:
	// Sets default values for this actor's properties
	AInteractiveActor();

	virtual void BeginPlay() override;

protected:
	UFUNCTION()
	virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

	UFUNCTION()
	virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

	UFUNCTION()
	virtual void OnPlayerInputActionReceived();

	UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
	class UBoxComponent* InteractionTrigger;
};

This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character has to overlap the InteractionTrigger component. That overlap puts the InteractiveActor into the input stack for that specific player. The player then triggers the input action (via a keyboard key press, for example), and the code in OnPlayerInputActionReceived executes. Here is a layout of the executing code.

    Source

// Fill out your copyright notice in the Description page of Project Settings.

#include "InteractiveActor.h"
#include "Components/BoxComponent.h"
#include "Components/InputComponent.h"
#include "GameFramework/Pawn.h"
#include "GameFramework/PlayerController.h"

// Sets default values
AInteractiveActor::AInteractiveActor()
{
	PrimaryActorTick.bCanEverTick = true;

	RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
	RootComponent->SetMobility(EComponentMobility::Static);

	InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
	InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
	InteractionTrigger->SetMobility(EComponentMobility::Static);
	InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
	InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);

	InteractionTrigger->SetupAttachment(RootComponent);
}

void AInteractiveActor::BeginPlay()
{
	Super::BeginPlay();

	if (InputComponent == nullptr)
	{
		// Create an input component on demand so this actor can bind actions.
		InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
		InputComponent->RegisterComponent();
		InputComponent->bBlockInput = bBlockInput;
	}

	// "Interact" must be defined as an Action Mapping in the project's input settings.
	InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
}
    
void AInteractiveActor::OnPlayerInputActionReceived()
{
	// This is where the logic for the actor receiving input will execute.
	// You could add something as simple as a log message to test it out:
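	// For example (hypothetical test output, not part of the original project):
	// UE_LOG(LogTemp, Log, TEXT("%s received the Interact action."), *GetName());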
    }
    
void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
{
	// Only pawns have controllers, so cast before asking for one.
	APawn* OtherPawn = Cast<APawn>(OtherActor);
	if (OtherPawn != nullptr)
	{
		APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
		if (PC != nullptr)
		{
			EnableInput(PC);
		}
	}
}

void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
{
	APawn* OtherPawn = Cast<APawn>(OtherActor);
	if (OtherPawn != nullptr)
	{
		APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
		if (PC != nullptr)
		{
			DisableInput(PC);
		}
	}
}

     

    Pros and Cons

The positives of the collision volume approach are the ease with which the code is implemented and the strong decoupling from the rest of the game logic. The negatives are that interaction becomes spatially broad (anything overlapping the volume is eligible to interact) and that every interactive object in the scene requires its own trigger volume.
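To show how entities build on the base class, here is a minimal sketch of a door subclass. ADoor and its log message are hypothetical names invented for illustration, not part of the original project; note that the override does not repeat the UFUNCTION() macro, since Unreal's header tool only allows it on the original declaration.

// Door.h
#pragma once

#include "CoreMinimal.h"
#include "InteractiveActor.h"
#include "Door.generated.h"

UCLASS()
class GAME_API ADoor : public AInteractiveActor
{
	GENERATED_BODY()

protected:
	// Door-specific response to the "Interact" action.
	virtual void OnPlayerInputActionReceived() override;
};

// Door.cpp
#include "Door.h"

void ADoor::OnPlayerInputActionReceived()
{
	// Open or close the door here; a log call stands in for real logic.
	UE_LOG(LogTemp, Log, TEXT("%s was used by the player."), *GetName());
}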

    Interaction via Raytrace

Another popular method is to ray trace from the player's viewpoint to find interactive world items the player can interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. It eliminates the need for an extra collision volume per usable item and allows for more precise interaction targeting.

    Source

    AInteractiveActor.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
	GENERATED_BODY()

public:
	virtual void OnReceiveInteraction(class APlayerController* PC);
};

     

    AMyPlayerController.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "AMyPlayerController.generated.h"

UCLASS()
class GAME_API AMyPlayerController : public APlayerController
{
	GENERATED_BODY()

public:
	AMyPlayerController();

	virtual void SetupInputComponent() override;

	float MaxRayTraceDistance;

private:
	class AInteractiveActor* GetInteractiveByCast();

	void OnCastInput();
};

     

These header files define the functions minimally needed to set up raycast interaction. Also note that there are two files here, as two classes need modification to support input. This is more work than the first method shown, which uses trigger volumes. However, all input binding is now constrained to a single class: ACharacter or, if you design it differently, APlayerController. Here, the latter was used.
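For comparison, here is a minimal sketch of the ACharacter variant mentioned above. AMyCharacter is a hypothetical class assumed to own the same OnCastInput() and GetInteractiveByCast() helpers shown below for the controller.

void AMyCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
	Super::SetupPlayerInputComponent(PlayerInputComponent);

	// Bind the same "Interact" action, but on the character instead of the controller.
	PlayerInputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyCharacter::OnCastInput);
}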

The logic flow is straightforward. The player points the center of the screen at an object (ideally a HUD crosshair aids the aiming) and presses the input button bound to Interact. From there, OnCastInput() executes. It invokes GetInteractiveByCast(), which returns either the first AInteractiveActor hit by the camera ray or nullptr if there is no such hit. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked. That final function is where inherited classes implement interaction-specific code.

    The simple execution of the code is as follows in the class definitions.

    AInteractiveActor.cpp

#include "InteractiveActor.h"

void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
{
	// Nothing in the base class, unless there is logic ALL interactive actors
	// should execute, such as cosmetics (sounds, particle effects, etc.).
}

     

    AMyPlayerController.cpp

#include "AMyPlayerController.h"
#include "InteractiveActor.h"
#include "Engine/World.h"

AMyPlayerController::AMyPlayerController()
{
	MaxRayTraceDistance = 1000.0f;
}

void AMyPlayerController::SetupInputComponent()
{
	Super::SetupInputComponent();
	InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
}

void AMyPlayerController::OnCastInput()
{
	AInteractiveActor* Interactive = GetInteractiveByCast();
	if (Interactive != nullptr)
	{
		Interactive->OnReceiveInteraction(this);
	}
}

AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
{
	FVector CameraLocation;
	FRotator CameraRotation;

	GetPlayerViewPoint(CameraLocation, CameraRotation);
	FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

	// Ignore the controlled pawn so the trace cannot hit the player itself.
	FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
	TraceParams.bTraceAsyncScene = true;

	FHitResult Hit(ForceInit);
	GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

	// Cast returns nullptr both when nothing was hit and when the hit actor
	// is not an AInteractiveActor.
	return Cast<AInteractiveActor>(Hit.GetActor());
}

     

    Pros and Cons

One pro of this method is that control of input stays in the player controller, while the implementation of input actions is still owned by the Actor that receives the input. Some cons are that the interaction fires every time the player presses the button, however rapidly, and that the controller does not continuously detect which interactive object is in focus without a refactor using a Tick function override.
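A minimal sketch of that Tick-based refactor might look like the following, assuming the same AMyPlayerController class, a Tick override declared in its header, and a hypothetical FocusedInteractive member (AInteractiveActor* FocusedInteractive;) added for this example.

void AMyPlayerController::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);

	// Re-run the camera trace every frame so the game can track which
	// interactive (if any) the player is currently looking at, e.g. to
	// show or hide a HUD prompt.
	FocusedInteractive = GetInteractiveByCast();
}

OnCastInput() could then act on FocusedInteractive directly instead of tracing again on each press.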

    Conclusion

There are many methods for player-world interaction within a game world. In regards to creating Actors in Unreal Engine 4 that allow for player interaction, two of the potential methods are collision volume overlaps and ray tracing from the player controller. Several other methods discussed elsewhere could also be used. Hopefully, the two implementations presented here help you decide how to go about player-world interaction in your game. Cheers!

     

     

    Originally posted on Troll Purse development blog.



    User Feedback


There is a typo in the class declaration: it should be class AInteractiveActor instead of class InteractiveActor. Also, indentation is broken (mix of spaces and tabs).

    Edited by RootKiller

    On 1/23/2018 at 12:20 PM, RootKiller said:

There is a typo in the class declaration: it should be class AInteractiveActor instead of class InteractiveActor. Also, indentation is broken (mix of spaces and tabs).

    Thanks, I will fix that up in time. Any thoughts on the content other than that? Was this useful?


    Thanks for the post.

As a non-programmer I never find the time to develop new code like this, so having someone post their work and explain it is great for me to read.



