
C++ Win32 Clang c++98 vs c++11


I am having trouble getting clang into C++98 mode. Can someone help me with that?

 

1.) No uint32_t in C++98

// Are these the correct includes for uint32_t and size_t?
#include <stdint.h> // uint64_t, uint32_t, uint8_t, int16_t, etc.
#include <stdlib.h> // size_t
#include <limits.h> // INT_MAX etc. (UINT32_MAX actually lives in <stdint.h>)

/*
..\final_platform_layer.hpp:1646:11: error: unknown type name 'uint32_t'; did you mean 'int32_t'?
                fpl_api uint32_t ReadFileBlock32(const FileHandle &fileHandle, const uint32_t sizeToRead, void *targetBuffer, const uint32_t maxTargetBufferSize);
*/


2.) The macro is always expanded - even though __cplusplus should not be greater than 199711L...

 

#if (__cplusplus >= 201103L) || (_MSC_VER >= 1900)
	//! Null pointer (nullptr)
#	define fpl_null nullptr
	//! Constant (constexpr)
#	define fpl_constant constexpr
#else
	//! Null pointer (0)
#	define fpl_null 0
	//! Constant (static const)
#	define fpl_constant static const
#endif
  
/*
In file included from FPL_Console\main.cpp:13:
..\final_platform_layer.hpp:1623:4: error: unknown type name 'constexpr'
                        fpl_constant uint32_t MAX_FILEENTRY_PATH_LENGTH = 1024;
*/

 

I am compiling like this (Win32): 

set BUILD_DIR=bin\FPL_Console\x64-Debug
set IGNORED_WARNINGS=-Wno-missing-field-initializers -Wno-sign-conversion -Wno-cast-qual -Wno-unused-parameter -Wno-format-nonliteral -Wno-old-style-cast -Wno-header-hygiene
rmdir /s /q %BUILD_DIR%
mkdir %BUILD_DIR%
clang -g -Weverything %IGNORED_WARNINGS% -DFPL_DEBUG -std=c++98 -O0 -I..\ -lkernel32.lib -o%BUILD_DIR%\FPL_Console.exe FPL_Console\main.cpp > error.txt 2>&1

 

2 hours ago, Finalspace said:

1.) No uint32_t in C++98

The standard int types in stdint.h are a C++11 feature. If you want to use them in C++98, you have to define them yourself.

2 hours ago, Finalspace said:

2.) The macro is always expanded - even though __cplusplus should not be greater than 199711L...

What is the value of __cplusplus? You should be able to view/output it to see what's going wrong. Even so, it seems that clang uses a check for each individual feature instead (https://stackoverflow.com/questions/7139323/what-macro-does-clang-define-to-announce-c11-mode-if-any).


<stdint.h> is a C99 header. I don't know what vendor's C standard library you're using, but if it's the native Microsoft Windows one it doesn't support C99 yet.

Clang uses a non-conformant method of indicating support for various features instead of the standards-sanctioned ways, which means you're going to need to use libc++ for the standard library because other vendors' standard C++ libraries are conformant. There was some discussion in the standards committee about whether to do something like clang does, but it was rejected outright as non-scalable.

Is there any reason you can't just switch from an ancient primordial version of the language to an old out-of-date one? The differences between C++98 and C++11 are subtle and obscure for the most part, if you avoid the newer features (and do you really care that std::list::size() is guaranteed O(1) instead of having its complexity order implementation-defined?) You should be able to just compile C++98 code in C++11 mode without problems.

16 hours ago, Bregma said:

<stdint.h> is a C99 header. I don't know what vendor's C standard library you're using, but if it's the native Microsoft Windows one it doesn't support C99 yet.

Clang uses a non-conformant method of indicating support for various features instead of the standards-sanctioned ways, which means you're going to need to use libc++ for the standard library because other vendors' standard C++ libraries are conformant. There was some discussion in the standards committee about whether to do something like clang does, but it was rejected outright as non-scalable.

Is there any reason you can't just switch from an ancient primordial version of the language to an old out-of-date one? The differences between C++98 and C++11 are subtle and obscure for the most part, if you avoid the newer features (and do you really care that std::list::size() is guaranteed O(1) instead of having its complexity order implementation-defined?) You should be able to just compile C++98 code in C++11 mode without problems.

Well, I have written a platform abstraction library and I want it to be as portable as possible, so it is based on C++98 - but I also want to optionally support C++11 features like constexpr, nullptr and the standard types.

So the only thing I want right now is to reliably detect whether I am compiling with C++11 or not - on common platforms (Win32, Linux, Unix) and compilers (MSVC, G++, Clang, Intel, MinGW).

But thanks for the info!


Seems like this is the way to go:

//
// C++ feature detection
//
#if (__cplusplus >= 201103L) || (defined(FPL_COMPILER_MSVC) && _MSC_VER >= 1900)
#	define FPL_CPP_2011
#endif

#if !defined(cxx_constexpr)
#	define cxx_constexpr 2235
#endif
#if !defined(cxx_nullptr)
#	define cxx_nullptr 2431
#endif

#if !defined(__has_feature)
#	if defined(FPL_CPP_2011)
#		define __has_feature(x) (((x == cxx_constexpr) || (x == cxx_nullptr)) ? 1 : 0)
#	else
#		define __has_feature(x) 0
#	endif
#endif

#define FPL_CPP_CONSTEXPR __has_feature(cxx_constexpr)
#define FPL_CPP_NULLPTR __has_feature(cxx_nullptr)

 

4 hours ago, Finalspace said:

So the only thing I want right now is to reliably detect whether I am compiling with C++11 or not - on common platforms (Win32, Linux, Unix) and compilers (MSVC, G++, Clang, Intel, MinGW).

 

You need to go even deeper.  You need every single practical combination of OS, compiler, standard library, and possibly thread model and exception model.  If you're using clang on a Linux OS, for example, are you using libstdc++, libc++, or one of the more obscure third-party libraries?  If you're using mingw on a Linux OS to cross-compile for Win32 to run on Win64, do you choose the posix thread model or the win32 thread model?  It's a crazy crazy world out there and I can tell you from experience there are bizarre toolchain combinations that will never work 100% despite your manager or a technical expert insisting they're the way things have to be done.

If you really want to make your stuff portable across platforms, look at how Boost does it.  It's OK to leverage the work of others. Best of luck to you - it can be a fun challenge.

4 hours ago, Bregma said:

You need to go even deeper.  You need every single practical combination of OS, compiler, standard library, and possibly thread model and exception model.  If you're using clang on a Linux OS, for example, are you using libstdc++, libc++, or one of the more obscure third-party libraries?  If you're using mingw on a Linux OS to cross-compile for Win32 to run on Win64, do you choose the posix thread model or the win32 thread model?  It's a crazy crazy world out there and I can tell you from experience there are bizarre toolchain combinations that will never work 100% despite your manager or a technical expert insisting they're the way things have to be done.

If you really want to make your stuff portable across platforms, look at how Boost does it.  It's OK to leverage the work of others. Best of luck to you - it can be a fun challenge.

I don't intend to use any libraries whatsoever in the library itself - except for very raw operating system libraries, like kernel32.dll on Win32 and libdl.so on Linux. Also, if possible, I would eliminate the need for the C++ runtime as well. Therefore third-party libraries are out of the question!

But I know that you cannot get it to work on every platform/architecture - I just want it to work for common combinations like MSVC/Win32, Clang/Win32, Clang/POSIX or G++/POSIX on x86 and x86_64.


I still cannot get it to compile with clang, even after I cleaned up all the C++98-incompatible stuff. Now I get the following error:

In file included from FPL_Console\main.cpp:13:
In file included from ..\final_platform_layer.hpp:2966:
In file included from C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\um\Windows.h:168:
In file included from C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\shared\windef.h:24:
In file included from C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\shared\minwindef.h:182:
C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\um\winnt.h:11483:1: error: unknown type name 'constexpr'
DEFINE_ENUM_FLAG_OPERATORS(JOB_OBJECT_NET_RATE_CONTROL_FLAGS)

C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\um\winnt.h:11483:1: error: expected ';' after top level declarator
C:\Program Files (x86)\Windows Kits\10\include\10.0.14393.0\um\winnt.h:2288:38: note: expanded from macro 'DEFINE_ENUM_FLAG_OPERATORS'
inline _ENUM_FLAG_CONSTEXPR ENUMTYPE operator | (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) | ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \

Line 2966 in my library is just the include of windows.h -.-

I have no idea why clang in combination with the Windows headers wants to use constexpr...

 

Here is the clang command line:

clang -g -Weverything %IGNORED_WARNINGS% -DFPL_DEBUG -std=c++98 -O0 -I..\ -lkernel32.lib -o%BUILD_DIR%\FPL_Console.exe FPL_Console\main.cpp > error.txt 2>&1

 

If you want to try it yourself:

- Clone https://github.com/f1nalspace/final_game_tech

- Go into demos folder

- Call "clang_x64_debug.bat"

- Look into the errors.txt

 

Does LLVM/clang not support C++98?

1 hour ago, Finalspace said:

Line 2966 in my library is just the include of windows.h -.-

I have no idea why clang in combination with the Windows headers wants to use constexpr...

Inspect windows.h and try to see why it tries to use constexpr.

You will probably need to define some macros that you'll discover while inspecting windows.h.

 

Also:

typedef unsigned char fpl_u8;
typedef unsigned short fpl_u16;
typedef unsigned int fpl_u32;
typedef unsigned long long fpl_u64;
typedef signed char fpl_s8;
typedef signed short fpl_s16;
typedef signed int fpl_s32;
typedef signed long long fpl_s64;

 

These are not guaranteed to be what you think they are. Most of the time, yes, but sometimes no. You'll have to rely on other things (compiler pre-defined macros, the standard headers' max values for each type, ...).

9 hours ago, _Silence_ said:

Inspect windows.h and try to see why it tries to use constexpr.

You will probably need to define some macros that you'll discover while inspecting windows.h

Good idea, I will look into that. No idea why I didn't think of this solution myself...

9 hours ago, _Silence_ said:

Also:

 


typedef unsigned char fpl_u8;
typedef unsigned short fpl_u16;
typedef unsigned int fpl_u32;
typedef unsigned long long fpl_u64;
typedef signed char fpl_s8;
typedef signed short fpl_s16;
typedef signed int fpl_s32;
typedef signed long long fpl_s64;

These are not guaranteed to be what you think they are. Most of the time, yes, but sometimes no. You'll have to rely on other things (compiler pre-defined macros, the standard headers' max values for each type, ...).

I know, but there are no "default" sized types in C++98, so I have no choice but to define them myself.

But really the only problems I see are the 64-bit integer aka long long, and the dilemma of long vs. int. Maybe short can be defined in a weird way, but I wouldn't expect "int" to be less than 32-bit or "char" more than 8-bit. Unfortunately I don't have any platforms which are defined differently, and right now I support x86 and x86_64. But I will ask a friend who works all day on dozens of different platforms.

1 hour ago, Finalspace said:

I know, but there are no "default" sized types in C++98, so I have no choice but to define them myself.

 

You don't need default sized types for that. It was possible to do that before C++11, fortunately.

You have these headers, which are fully ISO C++98 (and thus C++03/11/14/17) compliant:

http://www.cplusplus.com/reference/climits/

http://www.cplusplus.com/reference/limits/numeric_limits/

3 hours ago, _Silence_ said:

You don't need default sized types for that. It was possible to do that before C++11, fortunately.

You have these headers, which are fully ISO C++98 (and thus C++03/11/14/17) compliant:

http://www.cplusplus.com/reference/climits/

http://www.cplusplus.com/reference/limits/numeric_limits/

Yeah, until you need LLONG -> long long, a 64-bit integer - then you are in C99 or C++11 territory. So what should I do? Include the limits headers and match all the macro values against the sizes of my types?

 

Regarding the constexpr include: even Microsoft itself cannot properly detect C++11 - you cannot turn C++11 off in Visual Studio 2017+. In clang the standard is set correctly, except for _MSC_VER, which is set to 1900 because clang magically uses the Visual Studio includes.

#if _MSC_VER >= 1900
#define _ENUM_FLAG_CONSTEXPR constexpr
#else
#define _ENUM_FLAG_CONSTEXPR
#endif

#define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) \
extern "C++" { \
inline _ENUM_FLAG_CONSTEXPR ENUMTYPE operator | (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) | ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator |= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) |= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline _ENUM_FLAG_CONSTEXPR ENUMTYPE operator & (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) & ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator &= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) &= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline _ENUM_FLAG_CONSTEXPR ENUMTYPE operator ~ (ENUMTYPE a) throw() { return ENUMTYPE(~((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a)); } \
inline _ENUM_FLAG_CONSTEXPR ENUMTYPE operator ^ (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) ^ ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator ^= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) ^= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
}
#else
#define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) // NOP, C allows these operators.
#endif

Speechless - but well, it's what I was expecting... more or less.

9 hours ago, Finalspace said:

Yeah, until you need LLONG -> long long, a 64-bit integer - then you are in C99 or C++11 territory. So what should I do? Include the limits headers and match all the macro values against the sizes of my types?

You can still try to test something like this:

	#if __LONG_MAX__ == 9223372036854775807 // long is 64 bits
	#if __LONG_LONG_MAX__ == 9223372036854775807 // long long is 64 bits
	

 

What I was saying in a previous post is that you cannot assume that int will necessarily be 32 bits or long long 64 bits like this. If the OS and compiler were narrowed down to very few combinations, then probably. But your header is meant to support other Unices and unknown compilers. This implies that the only rule applicable to these types is:

sizeof(int) <= sizeof(long) <= sizeof(long long)

 

Hope that could help

8 hours ago, _Silence_ said:

You can still try to test something like this:

 


	#if __LONG_MAX__ == 9223372036854775807 // long is 64 bits
	#if __LONG_LONG_MAX__ == 9223372036854775807 // long long is 64 bits
	

 

 

What I was saying in a previous post is that you cannot assume that int will necessarily be 32 bits or long long 64 bits like this. If the OS and compiler were narrowed down to very few combinations, then probably. But your header is meant to support other Unices and unknown compilers. This implies that the only rule applicable to these types is:

sizeof(int) <= sizeof(long) <= sizeof(long long)

 

Hope that could help

Okay, that helps. Thanks.


Is this better?

//
// Limits & Types
//
#include <limits.h>
#define FPL_MAX_U8 UCHAR_MAX
#define FPL_MIN_S8 SCHAR_MIN
#define FPL_MAX_S8 SCHAR_MAX
#define FPL_MAX_U16 USHRT_MAX
#define FPL_MIN_S16 SHRT_MIN
#define FPL_MAX_S16 SHRT_MAX
typedef unsigned char fpl_u8;
typedef unsigned short fpl_u16;
typedef signed char fpl_s8;
typedef signed short fpl_s16;
#if LONG_MAX == 9223372036854775807
	// 64-Bit Platform
#	define FPL_IS_LONG64
#	define FPL_POINTER_SIZE 8
#	define FPL_MIN_S32 INT_MIN
#	define FPL_MAX_S32 INT_MAX
#	define FPL_MAX_U32 UINT_MAX
#	define FPL_MIN_S64 LONG_MIN
#	define FPL_MAX_S64 LONG_MAX
#	define FPL_MAX_U64 ULONG_MAX
typedef signed int fpl_s32;
typedef unsigned int fpl_u32;
typedef signed long fpl_s64;
typedef unsigned long fpl_u64;
#else
#	if defined(FPL_ARCH_X64)
#		define FPL_POINTER_SIZE 8
#	else		
#		define FPL_POINTER_SIZE 4
#	endif
#	if INT_MAX == 2147483647
		// X86 or X64 Platform
#		define FPL_MIN_S32 INT_MIN
#		define FPL_MAX_S32 INT_MAX
#		define FPL_MAX_U32 UINT_MAX
typedef signed int fpl_s32;
typedef unsigned int fpl_u32;
#		define FPL_MIN_S64 LLONG_MIN
#		define FPL_MAX_S64 LLONG_MAX
#		define FPL_MAX_U64 ULLONG_MAX
typedef signed long long fpl_s64;
typedef unsigned long long fpl_u64;
#	else
		// Unknown Platform
#		define FPL_MIN_S32 LONG_MIN
#		define FPL_MAX_S32 LONG_MAX
#		define FPL_MAX_U32 ULONG_MAX
typedef signed long fpl_s32;
typedef unsigned long fpl_u32;
		// @TODO(final): Not sure if LLONG is right here
#		define FPL_MIN_S64 LLONG_MIN
#		define FPL_MAX_S64 LLONG_MAX
#		define FPL_MAX_U64 ULLONG_MAX
typedef signed long long fpl_s64;
typedef unsigned long long fpl_u64;
#	endif
#endif
#if FPL_POINTER_SIZE == 8
typedef fpl_s64 fpl_sptr;
typedef fpl_u64 fpl_uptr;
#else
typedef fpl_s32 fpl_sptr;
typedef fpl_u32 fpl_uptr;
#endif
#if defined(FPL_ARCH_X64) || defined(FPL_IS_LONG64)
typedef fpl_u64 fpl_size;
#else
typedef fpl_u32 fpl_size;
#endif

 


Is there any reason you're trying to adapt a modern compiler to work the way the old things worked, instead of just downloading archived versions of the old compilers and SDKs?

Many things in the language have changed, and there have been many breaking changes in each set of the standards. C++03 tightened up a lot of rules. C++11 had all kinds of breaking changes from things like the simple-seeming enum values to the rather complex template system, new promotion rules supporting 64-bit values, slight changes to standard library items, and much more. C++14 tightened a few more details of the language. C++17 broke a few longstanding systems like certain bool value processing, inherited constructors, template argument rules, and exception handling systems.

Even if you make the headers match, you're still looking at a long list of breaking changes.  The obvious ones prevent compilation. The less obvious ones you'll only discover by thorough testing.

I think it would be far easier to get compilers and libraries from the era. The Windows binaries should still run just fine with proper compatibility options.

13 hours ago, frob said:

Is there any reason you're trying to adapt a modern compiler to work the way the old things worked, instead of just downloading archived versions of the old compilers and SDKs?

Many things in the language have changed, and there have been many breaking changes in each set of the standards. C++03 tightened up a lot of rules. C++11 had all kinds of breaking changes from things like the simple-seeming enum values to the rather complex template system, new promotion rules supporting 64-bit values, slight changes to standard library items, and much more. C++14 tightened a few more details of the language. C++17 broke a few longstanding systems like certain bool value processing, inherited constructors, template argument rules, and exception handling systems.

Even if you make the headers match, you're still looking at a long list of breaking changes.  The obvious ones prevent compilation. The less obvious ones you'll only discover by thorough testing.

I think it would be far easier to get compilers and libraries from the era. The Windows binaries should still run just fine with proper compatibility options.

Never mind, I am back to C++11 and will never go back to older standards. I don't want to add dozens of lines just for detecting the proper sizes of things. I also dropped the custom integral types and just include stdint.h and stddef.h.


Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now

Sign in to follow this  

  • Advertisement
  • Advertisement
  • Popular Tags

  • Advertisement
  • Popular Now

  • Similar Content

    • By Martin H Hollstein
      Originally posted on Troll Purse development blog.
      Unreal Engine 4 is an awesome game engine and the Editor is just as good. There are a lot of built in tools for a game (especially shooters) and some excellent tutorials out there for it. So, here is one more. Today the topic to discuss is different methods to program player world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, it can also easily translate to any game with a similar architecture.
      Interaction via Overlaps
      By and far, the most common tutorials for player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense, it is a decoupled way to set up interaction and leverages most of the work using classes already provided by the engine. Here is a simple example where the overlap code is used to interact with the player:
      Header
      // Fill out your copyright notice in the Description page of Project Settings. #pragma once #include "CoreMinimal.h" #include "GameFramework/Actor.h" #include "InteractiveActor.generated.h" UCLASS() class GAME_API InteractiveActor : public AActor { GENERATED_BODY() public: // Sets default values for this actor's properties InteractiveActor(); virtual void BeginPlay() override; protected: UFUNCTION() virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult); UFUNCTION() virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex); UFUNCTION() virtual void OnPlayerInputActionReceived(); UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction) class UBoxComponent* InteractionTrigger; } This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character will have to overlap with the InteractionTrigger component. This will then put the InteractiveActor into the input stack for that specific player. The player will then trigger the input action (via a keyboard key press for example), and then the code in OnPlayerInputActionReceived will execute. Here is a layout of the executing code.
      Source
      // Fill out your copyright notice in the Description page of Project Settings. #include "InteractiveActor.h" #include "Components/BoxComponent.h" // Sets default values AInteractiveActor::AInteractiveActor() { PrimaryActorTick.bCanEverTick = true; RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root")); RootComponent->SetMobility(EComponentMobility::Static); InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger")); InteractionTrigger->InitBoxExtent(FVector(128, 128, 128)); InteractionTrigger->SetMobility(EComponentMobility::Static); InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &ABTPEquipment::OnInteractionProxyBeginOverlap); InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &ABTPEquipment::OnInteractionProxyEndOverlap); InteractionTrigger->SetupAttachment(RootComponent); } void AInteractiveActor::BeginPlay() { if(InputComponent == nullptr) { InputComponent = ConstructObject<UInputComponent>(UInputComponent::StaticClass(), this, "Input Component"); InputComponent->bBlockInput = bBlockInput; } InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived); } void AInteractiveActor::OnPlayerInputActionReceived() { //this is where logic for the actor when it receives input will be execute. You could add something as simple as a log message to test it out. 
} void AInteractiveActor::OnInteractionProxyBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult) { if (OtherActor) { AController* Controller = OtherActor->GetController(); if(Controller) { APlayerController* PC = Cast<APlayerController>(Controller); if(PC) { EnableInput(PC); } } } } void AInteractiveActor::OnInteractionProxyEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex) { if (OtherActor) { AController* Controller = OtherActor->GetController(); if(Controller) { APlayerController* PC = Cast<APlayerController>(Controller); if(PC) { DisableInput(PC); } } } }  
      Pros and Cons
      The positives of the collision volume approach is the ease at which the code is implemented and the strong decoupling from the rest of the game logic. The negatives to this approach is that interaction becomes broad when considering the game space as well as the introduction to a new interactive volume for each interactive within the scene.
      Interaction via Raytrace
      Another popular method is to use the look at viewpoint of the player to ray trace for any interactive world items for the player to interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. This method eliminates the need for another collision volume for item usage and allows for more precise interaction targeting.
      Source
      AInteractiveActor.h
      // Fill out your copyright notice in the Description page of Project Settings. #pragma once #include "CoreMinimal.h" #include "GameFramework/Actor.h" #include "InteractiveActor.generated.h" UCLASS() class GAME_API AInteractiveActor : public AActor { GENERATED_BODY() public: virtual OnReceiveInteraction(class APlayerController* PC); }  
      AMyPlayerController.h
      // Fill out your copyright notice in the Description page of Project Settings. #pragma once #include "CoreMinimal.h" #include "GameFramework/PlayerController.h" #include "AMyPlayerController.generated.h" UCLASS() class GAME_API AMyPlayerController : public APlayerController { GENERATED_BODY() AMyPlayerController(); public: virtual void SetupInputComponent() override; float MaxRayTraceDistance; private: AInteractiveActor* GetInteractiveByCast(); void OnCastInput(); }  
      These header files define the functions minimally needed to setup raycast interaction. Also note that there are two files here as two classes would need modification to support input. This is more work that the first method shown that uses trigger volumes. However, all input binding is now constrained to the single ACharacter class or - if you designed it differently - the APlayerController class. Here, the latter was used.
      The logic flow is straight forward. A player can point the center of the screen towards an object (Ideally a HUD crosshair aids in the coordination) and press the desired input button bound to Interact. From here, the function OnCastInput() is executed. It will invoke GetInteractiveByCast() returning either the first camera ray cast collision or nullptr if there are no collisions. Finally, the AInteractiveActor::OnReceiveInteraction(APlayerController*)  function is invoked. That final function is where inherited classes will implement interaction specific code.
      The simple execution of the code is as follows in the class definitions.
      AInteractiveActor.cpp
      void AInteractiveActor::OnReceiveInteraction(APlayerController* PC) { //nothing in the base class (unless there is logic ALL interactive actors will execute, such as cosmetics (i.e. sounds, particle effects, etc.)) }  
      AMyPlayerController.cpp
      AMyPlayerController::AMyPlayerController() { MaxRayTraceDistance = 1000.0f; } AMyPlayerController::SetupInputComponent() { Super::SetupInputComponent(); InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnCastInput); } void AMyPlayerController::OnCastInput() { AInteractiveActor* Interactive = GetInteractiveByCast(); if(Interactive != nullptr) { Interactive->OnReceiveInteraction(this); } else { return; } } AInteractiveActor* AMyPlayerController::GetInteractiveByCast() { FVector CameraLocation; FRotator CameraRotation; GetPlayerViewPoint(CameraLocation, CameraRotation); FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance); FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn()); TraceParams.bTraceAsyncScene = true; FHitResult Hit(ForceInit); GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams); AActor* HitActor = Hit.GetActor(); if(HitActor != nullptr) { return Cast<AInteractiveActor>(HitActor); } else { return nullptr; } }  
      Pros and Cons
      One pro for this method is the control of input stays in the player controller and implementation of input actions is still owned by the Actor that receives the input. Some cons are that the interaction can be fired as many times as a player clicks and does not repeatedly detect interactive state without a refactor using a Tick function override.
      Conclusion
      There are many methods to player-world interaction within a game world. In regards to creating Actors within Unreal Engine 4 that allow for player interaction, two of these potential methods are collision volume overlaps and ray tracing from the player controller. There are several other methods discussed out there that could also be used. Hopefully, the two implementations presented help you decide on how to go about player-world interaction within your game. Cheers!
       
       
      Originally posted on Troll Purse development blog.
    • By mister345
      Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow.
      http://www.rastertek.com/dx11tut42.html
      He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it.
      The way he does it is :
      1. Project the objects in the scene to a render target using the depth shader.
      2. Draw black and white shadows on another render target using those depth textures.
      3. Blur the black/white shadow texture produced in step 2 by 
      a) rendering it to a smaller texture
b) blurring that texture vertically and horizontally
      c) rendering it back to a bigger texture again.
      4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.
       
      So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required.
       
Is there any easy way I can optimize the super expensive blur shader that wouldn't require a whole new complicated system?
      Like combining any of these render textures into one for example?
       
      If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time
      understanding the way this works, so a super complicated change would be beyond my capacity. Thanks.
       
      *For reference, here is my repo, in which I have simplified his tutorial and added an additional light.
       
      https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
       
    • By Sung Woo Yeo
       
Guys, I've spent a lot of time trying to load skeletal animation,
but it is proving very difficult...
I followed http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html,
but I didn't get the results I wanted.
       
      Please Help Me
       
Here is my code:
       
void LoadAnimation::BoneTransform(float time, vector<XMFLOAT4X4>& transforms)
{
    XMMATRIX Identity = XMMatrixIdentity();
    float TicksPerSecond = (float)(m_pScene->mAnimations[0]->mTicksPerSecond != 0 ? m_pScene->mAnimations[0]->mTicksPerSecond : 25.0f);
    float TimeInTicks = time * TicksPerSecond;
    float AnimationTime = fmod(TimeInTicks, (float)m_pScene->mAnimations[0]->mDuration);

    ReadNodeHeirarchy(AnimationTime, m_pScene->mRootNode, Identity);

    transforms.resize(m_NumBones);
    for (int i = 0; i < m_NumBones; ++i)
    {
        XMStoreFloat4x4(&transforms[i], m_Bones[i].second.FinalTransformation);
    }
}

void LoadAnimation::ReadNodeHeirarchy(float AnimationTime, const aiNode* pNode, const XMMATRIX& ParentTransform)
{
    string NodeName(pNode->mName.data);
    const aiAnimation* pAnim = m_pScene->mAnimations[0];
    XMMATRIX NodeTransformation = XMMATRIX(&pNode->mTransformation.a1);

    const aiNodeAnim* pNodeAnim = FindNodeAnim(pAnim, NodeName);
    if (pNodeAnim)
    {
        aiVector3D scaling;
        CalcInterpolatedScaling(scaling, AnimationTime, pNodeAnim);
        XMMATRIX ScalingM = XMMatrixScaling(scaling.x, scaling.y, scaling.z);
        ScalingM = XMMatrixTranspose(ScalingM);

        aiQuaternion q;
        CalcInterpolatedRotation(q, AnimationTime, pNodeAnim);
        XMMATRIX RotationM = XMMatrixRotationQuaternion(XMVectorSet(q.x, q.y, q.z, q.w));
        RotationM = XMMatrixTranspose(RotationM);

        aiVector3D t;
        CalcInterpolatedPosition(t, AnimationTime, pNodeAnim);
        XMMATRIX TranslationM = XMMatrixTranslation(t.x, t.y, t.z);
        TranslationM = XMMatrixTranspose(TranslationM);

        NodeTransformation = TranslationM * RotationM * ScalingM;
    }

    XMMATRIX GlobalTransformation = ParentTransform * NodeTransformation;

    for (auto& p : m_Bones)
    {
        if (p.first == NodeName)
        {
            p.second.FinalTransformation = XMMatrixTranspose(m_GlobalInverse * GlobalTransformation * p.second.BoneOffset);
            break;
        }
    }

    for (UINT i = 0; i < pNode->mNumChildren; ++i)
    {
        ReadNodeHeirarchy(AnimationTime, pNode->mChildren[i], GlobalTransformation);
    }
}
The CalcInterpolated~ and Find~ functions are the same as in the tutorial
(http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html).
       
I think I'm doing the matrix multiplication wrong,
but I don't know where it goes wrong.
If you want, I will post the other code.
       
Here is my result
(the hands are stretched and the legs look strange):

       
and this is the ideal result:

    • By Ward Correll
I've included the source code from what I am playing with. It's an exercise from Frank Luna's DirectX 12 book about rendering a skull from a text file. I get a stack overflow error and the program quits. I don't know where I went wrong; it's messy programming on the parts I added, but maybe one of you masterminds can tell me.
      Chapter_7_Drawing_in_Direct3D_Part_II.zip
    • By mister345
Hi guys, so I have about 200 files isolated in their own folder [physics code] in my Visual Studio project that I never touch. They might as well be a separate library; I just keep 'em as source files in case I need to look at them or step through them, but I will never actually edit them, so there's no need to ever build them.
      However, when I need to rebuild the entire solution because I changed the other files, all of these 200 files get rebuilt too and it takes a really long time.
      If I click on their properties -> exclude from build, then rebuild, it's no good because then all the previous built objects get deleted automatically, so the build will fail.
      So how do I make the built versions of the 200+ files in the physics directory stay where they are so I never have to rebuild them, but
      do a normal rebuild for everything else? Any easy answers to this? The simpler the better, as I am a noob at Visual Studio settings. Thanks.