Designing Player World Interaction in Unreal Engine 4

Martin H Hollstein


Originally posted on Troll Purse development blog.

Unreal Engine 4 is an awesome game engine, and the Editor is just as good. It ships with a lot of built-in tools for a game (especially shooters), and there are some excellent tutorials out there for it. So, here is one more. Today's topic is different methods to program player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, the approaches translate easily to any game with a similar architecture.


Interaction via Overlaps

By far, the most common approach in tutorials for player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction, and it leverages classes already provided by the engine. Here is a simple example where the overlap code is used to interact with the player:

Header

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
	GENERATED_BODY()

public:
	// Sets default values for this actor's properties
	InteractiveActor();

    virtual void BeginPlay() override;

protected:
	UFUNCTION()
	virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

	UFUNCTION()
	virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

    UFUNCTION()
    virtual void OnPlayerInputActionReceived();

	UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
	class UBoxComponent* InteractionTrigger;
};

This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character has to overlap with the InteractionTrigger component. This puts the InteractiveActor into the input stack for that specific player. The player then triggers the input action (via a keyboard key press, for example), and the code in OnPlayerInputActionReceived executes. Here is the implementation:

Source

// Fill out your copyright notice in the Description page of Project Settings.

#include "InteractiveActor.h"
#include "Components/BoxComponent.h"

// Sets default values
AInteractiveActor::AInteractiveActor()
{
	PrimaryActorTick.bCanEverTick = true;

	RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
	RootComponent->SetMobility(EComponentMobility::Static);

	InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
	InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
	InteractionTrigger->SetMobility(EComponentMobility::Static);
	InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
	InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);

	InteractionTrigger->SetupAttachment(RootComponent);
}

void AInteractiveActor::BeginPlay()
{
    Super::BeginPlay();

    if (InputComponent == nullptr)
    {
        // ConstructObject was deprecated; NewObject is the current factory function.
        InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
        InputComponent->RegisterComponent();
        InputComponent->bBlockInput = bBlockInput;
    }

    InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
}

void AInteractiveActor::OnPlayerInputActionReceived()
{
    // This is where the actor's logic will execute when it receives input.
    // Something as simple as a log message works to test it out.
}

void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
{
	// Only pawns have a controller, so cast before asking for one.
	APawn* Pawn = Cast<APawn>(OtherActor);
	if (Pawn)
	{
		APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
		if (PC)
		{
			EnableInput(PC);
		}
	}
}

void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
{
	APawn* Pawn = Cast<APawn>(OtherActor);
	if (Pawn)
	{
		APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
		if (PC)
		{
			DisableInput(PC);
		}
	}
}
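Stripped of engine details, the pattern above is just overlap-gated input: the action only fires while a player is inside the trigger. Here is a minimal stand-alone C++ sketch of that flow; PlayerController and InteractiveObject are hypothetical stand-ins, not UE4 classes.

```cpp
#include <cassert>

// Hypothetical stand-in for APlayerController.
struct PlayerController { int Id = 0; };

class InteractiveObject
{
public:
    // Mirrors the begin-overlap handler calling EnableInput(PC).
    void OnBeginOverlap(PlayerController* PC) { ActivePC = PC; }

    // Mirrors the end-overlap handler calling DisableInput(PC).
    void OnEndOverlap(PlayerController* PC)
    {
        if (ActivePC == PC)
        {
            ActivePC = nullptr;
        }
    }

    // Mirrors OnPlayerInputActionReceived: the action only runs while
    // input is enabled, i.e. while a player overlaps the trigger.
    bool TryInteract()
    {
        if (ActivePC == nullptr)
        {
            return false;
        }
        ++InteractCount;
        return true;
    }

    int InteractCount = 0;

private:
    PlayerController* ActivePC = nullptr;
};
```

Pressing Interact before overlapping (or after leaving) does nothing, which is exactly the behavior the trigger volume buys you for free.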

Pros and Cons

The positives of the collision volume approach are the ease with which the code is implemented and its strong decoupling from the rest of the game logic. The negatives are that interaction is broad relative to the game space, and that each interactive object in the scene needs its own trigger volume.

Interaction via Raytrace

Another popular method is to ray trace from the player's viewpoint, looking for any interactive world items the player can interact with. This method usually relies on inheritance to handle player interaction within the interactive object class. It eliminates the need for an extra collision volume per item and allows for more precise interaction targeting.

Source

AInteractiveActor.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
	GENERATED_BODY()

public:
    virtual void OnReceiveInteraction(class APlayerController* PC);
};

AMyPlayerController.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "AMyPlayerController.generated.h"

UCLASS()
class GAME_API AMyPlayerController : public APlayerController
{
	GENERATED_BODY()

public:
    AMyPlayerController();

    virtual void SetupInputComponent() override;

    float MaxRayTraceDistance;

private:
    class AInteractiveActor* GetInteractiveByCast();

    void OnCastInput();
};

These header files define the minimum functions needed to set up raycast interaction. Also note that there are two files here, as two classes need modification to support input. This is more work than the first method using trigger volumes. However, all input binding is now constrained to a single class: ACharacter or, if you design it differently, APlayerController. Here, the latter was used.

The logic flow is straightforward. The player points the center of the screen at an object (ideally a HUD crosshair aids the aim) and presses the input button bound to Interact. From there, OnCastInput() executes. It invokes GetInteractiveByCast(), which returns either the first interactive actor hit by the camera ray or nullptr if there is no hit. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked. That final function is where inherited classes implement interaction-specific code.

The corresponding class definitions follow.

AInteractiveActor.cpp

void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
{
    // Nothing in the base class, unless there is logic ALL interactive actors
    // share, such as cosmetics (sounds, particle effects, etc.).
}

AMyPlayerController.cpp

AMyPlayerController::AMyPlayerController()
{
    MaxRayTraceDistance = 1000.0f;
}

void AMyPlayerController::SetupInputComponent()
{
    Super::SetupInputComponent();
    InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
}

void AMyPlayerController::OnCastInput()
{
    AInteractiveActor* Interactive = GetInteractiveByCast();
    if (Interactive != nullptr)
    {
        Interactive->OnReceiveInteraction(this);
    }
}

AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
{
    FVector CameraLocation;
    FRotator CameraRotation;

    GetPlayerViewPoint(CameraLocation, CameraRotation);
    FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

    // Ignore the player's own pawn so the trace does not hit it first.
    FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());

    FHitResult Hit(ForceInit);
    GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

    // Cast<> returns nullptr if the hit actor is null or not an AInteractiveActor.
    return Cast<AInteractiveActor>(Hit.GetActor());
}
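The trace endpoint computation above is plain vector arithmetic: start at the camera and extend along the unit forward direction by the maximum distance. A stand-alone sketch, with a hypothetical Vec3 standing in for FVector:

```cpp
#include <cassert>

// Minimal stand-in for FVector. In the engine code, CameraRotation.Vector()
// yields a unit-length forward direction; here it is passed in as `Forward`.
struct Vec3
{
    float X = 0, Y = 0, Z = 0;
    Vec3 operator+(const Vec3& O) const { return { X + O.X, Y + O.Y, Z + O.Z }; }
    Vec3 operator*(float S) const { return { X * S, Y * S, Z * S }; }
};

// Mirrors: TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance)
Vec3 ComputeTraceEnd(const Vec3& CameraLocation, const Vec3& Forward, float MaxDistance)
{
    return CameraLocation + (Forward * MaxDistance);
}
```

Anything beyond MaxRayTraceDistance along that segment simply cannot be interacted with, which is the tuning knob MaxRayTraceDistance exposes.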

Pros and Cons

One pro of this method is that control of input stays in the player controller, while implementation of the input actions is still owned by the Actor that receives them. Some cons are that the interaction fires as many times as the player presses the button, and that the interactive state is not detected continuously unless the code is refactored to trace from a Tick function override.
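The Tick-based refactor mentioned above amounts to tracing every frame and tracking which interactive currently has "focus". A hypothetical stand-alone sketch (Interactive and FocusTracker are stand-ins, not UE4 types; the trace callback stands in for GetInteractiveByCast()):

```cpp
#include <cassert>
#include <functional>
#include <utility>

struct Interactive { bool bFocused = false; };

class FocusTracker
{
public:
    explicit FocusTracker(std::function<Interactive*()> InTrace)
        : Trace(std::move(InTrace)) {}

    // Call once per frame (e.g. from a Tick override).
    void Tick()
    {
        Interactive* Hit = Trace();
        if (Hit == Current) return;              // no change in focus
        if (Current) Current->bFocused = false;  // old target loses focus
        if (Hit) Hit->bFocused = true;           // new target gains focus
        Current = Hit;
    }

private:
    Interactive* Current = nullptr;
    std::function<Interactive*()> Trace;
};
```

With this in place, the game can continuously highlight the focused object or show an interaction prompt, instead of only reacting to button presses.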

Conclusion

There are many methods for player-world interaction within a game world. For Actors within Unreal Engine 4 that allow player interaction, two potential methods are collision volume overlaps and ray tracing from the player controller. Several other methods are discussed out there that could also be used. Hopefully, the two implementations presented help you decide how to go about player-world interaction within your game. Cheers!

 

 
