Search the Community

Showing results for tags 'C++'.

Found 390 results

  1. Originally posted on the Troll Purse development blog. Unreal Engine 4 is an awesome game engine and the Editor is just as good. There are a lot of built-in tools for a game (especially shooters) and some excellent tutorials out there for it. So, here is one more. Today the topic is different methods to program player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, it translates easily to any game with a similar architecture.

     Interaction via Overlaps

     By far the most common tutorial approach to player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction, and it leverages classes already provided by the engine. Here is a simple example where the overlap code is used to interact with the player.

     Header

     // Fill out your copyright notice in the Description page of Project Settings.

     #pragma once

     #include "CoreMinimal.h"
     #include "GameFramework/Actor.h"
     #include "InteractiveActor.generated.h"

     UCLASS()
     class GAME_API AInteractiveActor : public AActor
     {
         GENERATED_BODY()

     public:
         // Sets default values for this actor's properties
         AInteractiveActor();

         virtual void BeginPlay() override;

     protected:
         UFUNCTION()
         virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

         UFUNCTION()
         virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

         UFUNCTION()
         virtual void OnPlayerInputActionReceived();

         UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
         class UBoxComponent* InteractionTrigger;
     };

     This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can build up the various entities within a game that respond to player input. For this to work, the player pawn or character has to overlap with the InteractionTrigger component, which puts the InteractiveActor into the input stack for that specific player. The player then triggers the input action (via a keyboard key press, for example), and the code in OnPlayerInputActionReceived executes. Here is the implementation.

     Source

     // Fill out your copyright notice in the Description page of Project Settings.

     #include "InteractiveActor.h"
     #include "Components/BoxComponent.h"

     // Sets default values
     AInteractiveActor::AInteractiveActor()
     {
         PrimaryActorTick.bCanEverTick = true;

         RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
         RootComponent->SetMobility(EComponentMobility::Static);

         InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
         InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
         InteractionTrigger->SetMobility(EComponentMobility::Static);
         InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
         InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);

         InteractionTrigger->SetupAttachment(RootComponent);
     }

     void AInteractiveActor::BeginPlay()
     {
         Super::BeginPlay();

         if (InputComponent == nullptr)
         {
             // ConstructObject was removed in later 4.x releases; NewObject is the replacement.
             InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
             InputComponent->bBlockInput = bBlockInput;
         }

         InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
     }

     void AInteractiveActor::OnPlayerInputActionReceived()
     {
         // This is where the actor's logic runs when it receives input.
         // You could add something as simple as a log message to test it out.
     }

     void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
     {
         // Only pawns have controllers, so go through APawn to reach the player controller.
         APawn* OtherPawn = Cast<APawn>(OtherActor);
         if (OtherPawn)
         {
             APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
             if (PC)
             {
                 EnableInput(PC);
             }
         }
     }

     void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
     {
         APawn* OtherPawn = Cast<APawn>(OtherActor);
         if (OtherPawn)
         {
             APlayerController* PC = Cast<APlayerController>(OtherPawn->GetController());
             if (PC)
             {
                 DisableInput(PC);
             }
         }
     }

     Pros and Cons

     The positives of the collision volume approach are the ease with which the code is implemented and the strong decoupling from the rest of the game logic. The negatives are that interaction is spatially coarse, and every interactive object in the scene needs its own trigger volume.

     Interaction via Raytrace

     Another popular method is to ray trace from the player's viewpoint for any interactive world items the player can interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. It eliminates the extra collision volume per item and allows more precise interaction targeting.

     Source

     AInteractiveActor.h

     // Fill out your copyright notice in the Description page of Project Settings.

     #pragma once

     #include "CoreMinimal.h"
     #include "GameFramework/Actor.h"
     #include "InteractiveActor.generated.h"

     UCLASS()
     class GAME_API AInteractiveActor : public AActor
     {
         GENERATED_BODY()

     public:
         virtual void OnReceiveInteraction(class APlayerController* PC);
     };

     AMyPlayerController.h

     // Fill out your copyright notice in the Description page of Project Settings.

     #pragma once

     #include "CoreMinimal.h"
     #include "GameFramework/PlayerController.h"
     #include "AMyPlayerController.generated.h"

     UCLASS()
     class GAME_API AMyPlayerController : public APlayerController
     {
         GENERATED_BODY()

         AMyPlayerController();

     public:
         virtual void SetupInputComponent() override;

         float MaxRayTraceDistance;

     private:
         AInteractiveActor* GetInteractiveByCast();

         void OnCastInput();
     };

     These header files define the functions minimally needed to set up raycast interaction. Note that there are two files here, as two classes need modification to support input. This is more work than the first method, which uses trigger volumes. However, all input binding is now constrained to a single class - the ACharacter or, as designed here, the APlayerController.

     The logic flow is straightforward. The player points the center of the screen at an object (ideally a HUD crosshair aids aiming) and presses the input button bound to Interact. From there, OnCastInput() executes. It invokes GetInteractiveByCast(), which returns either the first interactive actor hit by the camera ray or nullptr if there is no hit. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked; that function is where derived classes implement interaction-specific code. The implementation follows.

     AInteractiveActor.cpp

     void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
     {
         // Nothing in the base class (unless there is logic ALL interactive actors
         // will execute, such as cosmetics - sounds, particle effects, etc.).
     }

     AMyPlayerController.cpp

     AMyPlayerController::AMyPlayerController()
     {
         MaxRayTraceDistance = 1000.0f;
     }

     void AMyPlayerController::SetupInputComponent()
     {
         Super::SetupInputComponent();
         InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
     }

     void AMyPlayerController::OnCastInput()
     {
         AInteractiveActor* Interactive = GetInteractiveByCast();
         if (Interactive != nullptr)
         {
             Interactive->OnReceiveInteraction(this);
         }
     }

     AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
     {
         FVector CameraLocation;
         FRotator CameraRotation;
         GetPlayerViewPoint(CameraLocation, CameraRotation);

         FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

         FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
         TraceParams.bTraceAsyncScene = true;

         FHitResult Hit(ForceInit);
         GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

         // Cast handles a null hit actor, returning nullptr when nothing interactive was hit.
         return Cast<AInteractiveActor>(Hit.GetActor());
     }

     Pros and Cons

     One pro of this method is that input control stays in the player controller while the implementation of each input action is still owned by the Actor that receives it. One con is that the interaction fires once per click and does not continuously detect interactive state without a refactor using a Tick function override.

     Conclusion

     There are many approaches to player-world interaction within a game world. For Actors in Unreal Engine 4 that allow player interaction, two such methods are collision volume overlaps and ray tracing from the player controller; several other methods discussed elsewhere could also work. Hopefully the two implementations presented help you decide how to approach player-world interaction in your game. Cheers! Originally posted on the Troll Purse development blog.
  2. Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow. http://www.rastertek.com/dx11tut42.html He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it. The way he does it is:

     1. Project the objects in the scene to a render target using the depth shader.
     2. Draw black-and-white shadows on another render target using those depth textures.
     3. Blur the black/white shadow texture produced in step 2 by a) rendering it to a smaller texture, b) vertically and horizontally blurring that texture, and c) rendering it back to a bigger texture again.
     4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.

     So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required. Is there any easy way I can optimize the super-expensive blur shader that wouldn't require a whole new complicated system - like combining any of these render textures into one, for example? If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super complicated change would be beyond my capacity. Thanks.

     *For reference, here is my repo, in which I have simplified his tutorial and added an additional light. https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
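     A note on the cost being asked about: the downsample plus horizontal/vertical split in step 3 is already the standard separable-blur optimization. A CPU-side C++ sketch (illustrative, not the tutorial's code) of why two 1-D passes beat one 2-D pass - a kernel of width 2k+1 touches each pixel roughly 2(2k+1) times instead of (2k+1) squared:

     #include <vector>

     using Image = std::vector<float>; // grayscale shadow mask, width*height, row-major

     static Image BoxBlur1D(const Image& src, int w, int h, int radius, bool horizontal)
     {
         Image dst(src.size());
         for (int y = 0; y < h; ++y)
             for (int x = 0; x < w; ++x)
             {
                 float sum = 0.0f;
                 int count = 0;
                 for (int o = -radius; o <= radius; ++o)
                 {
                     int sx = horizontal ? x + o : x;
                     int sy = horizontal ? y : y + o;
                     if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue; // clamp at edges
                     sum += src[sy * w + sx];
                     ++count;
                 }
                 dst[y * w + x] = sum / count;
             }
         return dst;
     }

     // Horizontal pass, then vertical pass - the separable form of the 2-D box blur.
     static Image BoxBlur(const Image& src, int w, int h, int radius)
     {
         return BoxBlur1D(BoxBlur1D(src, w, h, radius, true), w, h, radius, false);
     }

     For the multi-light case, one trim that does not restructure the tutorial: the intermediate downsample/blur scratch targets can be shared by all lights, since each light's shadow mask is blurred one at a time; only the final blurred mask needs to be unique per light.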
  3. Guys, I've spent a lot of time trying to load skeletal animation, but it is very difficult... I'm following http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html but I didn't get the results I wanted. Please help me. This is my code:

     void LoadAnimation::BoneTransform(float time, vector<XMFLOAT4X4>& transforms)
     {
         XMMATRIX Identity = XMMatrixIdentity();

         float TicksPerSecond = (float)(m_pScene->mAnimations[0]->mTicksPerSecond != 0 ?
             m_pScene->mAnimations[0]->mTicksPerSecond : 25.0f);
         float TimeInTicks = time*TicksPerSecond;
         float AnimationTime = fmod(TimeInTicks, (float)m_pScene->mAnimations[0]->mDuration);

         ReadNodeHeirarchy(AnimationTime, m_pScene->mRootNode, Identity);

         transforms.resize(m_NumBones);
         for (int i = 0; i < m_NumBones; ++i)
         {
             XMStoreFloat4x4(&transforms[i], m_Bones[i].second.FinalTransformation);
         }
     }

     void LoadAnimation::ReadNodeHeirarchy(float AnimationTime, const aiNode* pNode, const XMMATRIX& ParentTransform)
     {
         string NodeName(pNode->mName.data);
         const aiAnimation* pAnim = m_pScene->mAnimations[0];

         XMMATRIX NodeTransformation = XMMATRIX(&pNode->mTransformation.a1);

         const aiNodeAnim* pNodeAnim = FindNodeAnim(pAnim, NodeName);
         if (pNodeAnim)
         {
             aiVector3D scaling;
             CalcInterpolatedScaling(scaling, AnimationTime, pNodeAnim);
             XMMATRIX ScalingM = XMMatrixScaling(scaling.x, scaling.y, scaling.z);
             ScalingM = XMMatrixTranspose(ScalingM);

             aiQuaternion q;
             CalcInterpolatedRotation(q, AnimationTime, pNodeAnim);
             XMMATRIX RotationM = XMMatrixRotationQuaternion(XMVectorSet(q.x, q.y, q.z, q.w));
             RotationM = XMMatrixTranspose(RotationM);

             aiVector3D t;
             CalcInterpolatedPosition(t, AnimationTime, pNodeAnim);
             XMMATRIX TranslationM = XMMatrixTranslation(t.x, t.y, t.z);
             TranslationM = XMMatrixTranspose(TranslationM);

             NodeTransformation = TranslationM * RotationM * ScalingM;
         }

         XMMATRIX GlobalTransformation = ParentTransform * NodeTransformation;

         int tmp = 0;
         for (auto& p : m_Bones)
         {
             if (p.first == NodeName)
             {
                 p.second.FinalTransformation = XMMatrixTranspose(
                     m_GlobalInverse * GlobalTransformation * p.second.BoneOffset);
                 break;
             }
             tmp += 1;
         }

         for (UINT i = 0; i < pNode->mNumChildren; ++i)
         {
             ReadNodeHeirarchy(AnimationTime, pNode->mChildren[i], GlobalTransformation);
         }
     }

     The CalcInterpolated~ and Find~ functions are the same as in the tutorial (http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html). I think I'm doing the multiplication wrong, but I don't know where it went wrong. If you want, I will post the other code. Here is my result (the hands are stretched and the legs are strange), and here is the ideal result.
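     Before digging into the interpolation, one convention issue is worth ruling out (a hedged note, not a guaranteed fix): Assimp writes matrices for column vectors (v' = M * v) while DirectXMath uses row vectors (v' = v * M). That means every aiMatrix4x4 needs a transpose on load, and the parent/child multiplication order flips relative to the OpenGL tutorial. Transposing the S/R/T matrices individually while loading pNode->mTransformation untransposed, as in the code above, mixes the two conventions, which typically produces exactly this kind of stretched-limb result. One consistent scheme:

     #include <DirectXMath.h>
     #include <assimp/scene.h>
     using namespace DirectX;

     // Load an Assimp matrix into DirectXMath's row-vector convention.
     inline XMMATRIX ToXM(const aiMatrix4x4& m)
     {
         return XMMatrixTranspose(XMMATRIX(&m.a1));
     }

     // With everything in row-vector convention, build S/R/T untransposed and flip the
     // composition order relative to ogldev:
     //   NodeTransformation   = ScalingM * RotationM * TranslationM;    // scale, rotate, translate
     //   GlobalTransformation = NodeTransformation * ParentTransform;   // child first
     // then transpose once, only when storing to the HLSL constant buffer (if the
     // shader expects column-major matrices).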
  4. I've included the source code of what I am playing with. It's an exercise from Frank Luna's DirectX 12 book about rendering a skull from a text file. I get a stack overflow error and the program quits. I don't know where I went wrong; it's messy programming on the parts I added, but maybe one of you masterminds can tell me where I went wrong. Chapter_7_Drawing_in_Direct3D_Part_II.zip
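     A hedged guess, since the project is only attached as a zip: a stack overflow in this exercise is most often a large fixed-size local array. The skull model has tens of thousands of vertices, so a local array easily blows the default 1 MB stack; heap allocation fixes it. Sketch (Vertex stands in for whatever vertex struct the chapter uses, and the file parsing is simplified):

     #include <fstream>
     #include <vector>

     struct Vertex { float px, py, pz, nx, ny, nz; };

     void LoadSkull(const char* path)
     {
         std::ifstream fin(path);
         unsigned vcount = 0;
         fin >> vcount; // illustrative: the real file has header text to skip first

         // Vertex vertices[31076];            // tens of thousands of vertices on the stack -> overflow
         std::vector<Vertex> vertices(vcount); // heap allocation -> fine
         for (Vertex& v : vertices)
             fin >> v.px >> v.py >> v.pz >> v.nx >> v.ny >> v.nz;
     }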
  5. Designing Player World Interaction in Unreal Engine 4

    Originally posted on the Troll Purse development blog - the same article as result 1 above.
  6. Hi guys, so I have about 200 files isolated in their own folder [physics code] in my Visual Studio project that I never touch. They might as well be a separate library; I just keep them as source files in case I need to look at them or step through them, but I will never actually edit them, so there's no need to ever build them. However, when I need to rebuild the entire solution because I changed the other files, all of these 200 files get rebuilt too, and it takes a really long time. If I set their properties to "exclude from build" and then rebuild, it's no good, because all the previously built object files get deleted automatically, so the build fails. So how do I make the built versions of the 200+ files in the physics directory stay where they are so I never have to rebuild them, while doing a normal rebuild for everything else? Any easy answers to this? The simpler the better, as I am a noob at Visual Studio settings. Thanks.
  7. I've been working on this project for a year... mostly I developed a tool and databases for making the different maps, and now I'm writing the client for playing the game. Tell me if you like it... this is a capture of how it looks at the moment: https://youtu.be/9251v4wDTQ0
  8. Releasing is scary

    I have released my first free prototype! https://yesindiedee.itch.io/is-this-a-game How terrifying! It is strange that I have been working toward the moment of releasing something to the public for all of my adult life, and now that I have, I find it pretty scary. I have been a developer for over 20 years, and in that time I have released a grand total of 0 products.

    The Engine

    The engine is designed to be flexible with its components, but so far it uses OpenGL, OpenAL, Python (scripting), and Cg; everything else is built in.

    The Games

    When I started developing a game I had a pretty grand vision: a 3D exploration game called Cavian (image attached). And yep, it was far too complex for my first release. Maybe I will go back to it one day. I took a year off after that; I had to sell most of my stuff anyway, as not releasing games isn't great for your financial situation.

    THE RELEASE

    When I came back I was determined to actually release something! I lowered my sights to a car game. It is basically finished, but unfortunately my laptop is too old to handle the deferred lighting (Thinkpad X220, Intel graphics), so I can't really test it; I'm going to wait until I can afford a better computer before releasing it. Still determined to release something, I decided to focus more on the gameplay than the graphics.

    Is This A Game?

    Now I have created an experimental prototype. It's released and everything: https://yesindiedee.itch.io/is-this-a-game So far I don't know if it even runs on another computer. Any feedback would be greatly appreciated! If you have any questions about any process in the creation of this game - design, coding, scripting, graphics, deployment - just ask, and I will try to make a post on it. Have a nice day. I have been lurking on here for ages but never really said anything... I like my cave.
  9. For those that don't know me: I am the individual whose two videos are listed under Setup at https://wiki.libsdl.org/Tutorials I also run grhmedia.com, where I host the projects and code for the tutorials I have online.

     Recently, I received a notice from YouTube that they will be implementing their new policy for protecting video content, under which I won't be monetized until I meet their required number of subscribers and views each month. Frankly, I'm pretty sick of YouTube. I put up a video, someone else learns from it and puts up another video, and because of the way YouTube does their placement they end up with more views. Even guys that clearly post false information, such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. In short, he didn't know how to modify the script he used, because he didn't understand makefiles and how the changed requirements of the compiler and library needed some different flags.

     At the end of the month, when they implement this, I will take down the content and host it purely on my own server, and it will be a paid system and/or Patreon. I get that my videos may be a bit dry; I generally figure people are there to learn how to do something, and I would rather not waste their time. I also used to help people for free, even those coming from the other videos. That won't be the case any more. I used to just take anyone's emails and work with them; my email is posted on the site. I don't expect to get the required number of subscribers in that time, or increased views, and even if I did, it wouldn't take care of each recurring month.

     I figure this is simpler, and I don't plan on putting up some sort of exorbitant fee for a monthly subscription or the like. I was thinking along the lines of a few dollars - 1, 2, and 3 - where the larger subscription gets you assistance with the content in the tutorials, if needed, that month, and maybe another fee if it is related but not directly in the content. The fees would serve to cut down on the number of people who ask for help, and maybe encourage some of them to actually pay attention to what is said rather than do their own thing - which turns out to be 90% of the issues. I spent 6 hours helping one individual last week. I must have asked him 20 times, "Did you do exactly what I said in the video?", and even pointed directly to the section. When he finally sent me a copy of what he had entered, I knew then and there he had not. I circled it and pointed out that it wasn't what I said to do in the video. I didn't tell him what was wrong or how I knew, so that he would go back and actually follow what it said to do. He then reported it worked. Yeah, no kidding - following directions works. But hey, he isn't alone, and it's part of the learning process.

     So the point of this isn't to be a gripe session. I'm just looking for a bit of feedback. Do you think the fees are unreasonable? Should I keep the YouTube channel and just do the fees with Patreon, or do you think locking the content to my site and requiring a subscription is an idea? I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get stuff right, or that YouTube viewers will actually bother to start looking for more accurate videos.
  10. Hi, can someone please explain why this is giving an assertion EyePosition!=0 exception?

      _lightBufferVS->viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&_lightBufferVS->position), XMLoadFloat3(&_lookAt), XMLoadFloat3(&up));

      It looks like DirectX doesn't want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. Here is the version of the code that worked - note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime (I debugged it), yet it throws no exceptions:

      m_viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&m_position), XMLoadFloat3(&m_lookAt), XMLoadFloat3(&up));

      Here is the repo for the broken code (see LightClass): https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/LightClass.cpp and here is the repo with the alternative version of the code that works with a value of (0,0,0) for the second parameter: https://github.com/mister51213/DX11Port_SoftShadows/blob/master/Engine/lightclass.cpp
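      A note on that assert: DirectXMath checks the eye-to-focus direction, not the focus point itself, which would explain why a zero look-at worked in the other project (its position was non-zero). Roughly what XMMatrixLookAtLH does internally (paraphrased, not copied), plus a defensive check that could go inside the function building the light's view matrix:

      #include <DirectXMath.h>
      using namespace DirectX;

      // Paraphrase of XMMatrixLookAtLH's debug check:
      //   XMVECTOR EyeDirection = XMVectorSubtract(FocusPosition, EyePosition);
      //   assert(!XMVector3Equal(EyeDirection, XMVectorZero()));
      // So lookAt == (0,0,0) is legal whenever position != (0,0,0); the assert fires
      // when position and lookAt coincide.
      XMVECTOR eye = XMLoadFloat3(&_lightBufferVS->position);
      XMVECTOR at  = XMLoadFloat3(&_lookAt);
      if (XMVector3Equal(XMVectorSubtract(at, eye), XMVectorZero()))
      {
          at = XMVectorAdd(at, XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f)); // nudge to a valid direction
      }
      _lightBufferVS->viewMatrix = XMMatrixLookAtLH(eye, at, XMLoadFloat3(&up));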
  11. Hi, I am sending data to peers, and that data needs to be retrieved from a scene graph with a mutex to lock the data. The process of gathering the data takes a bit less than a millisecond. I'm starting a thread every time I want to gather the data. If I'm running at 60 fps, I'm starting the thread 60 times per second, so is that a performance or design problem? Would it be much better to have the thread always running, with some kind of mechanism to ask it to perform the task whenever it's needed, at around 60 or 120 fps? Also, does starting a thread cause some memory allocation/deallocation, and could it produce some kind of fragmentation in the long run? Thank you all.
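      The persistent-worker pattern asked about is standard; a minimal sketch with the C++ standard library (class and method names are illustrative, not from the poster's code). One thread stays alive and sleeps on a condition variable, and the frame loop just flips a flag and notifies, avoiding 60 thread creations per second:

      #include <condition_variable>
      #include <mutex>
      #include <thread>

      class GatherWorker {
      public:
          GatherWorker() : thread_([this] { Run(); }) {}
          ~GatherWorker() {
              { std::lock_guard<std::mutex> lock(m_); quit_ = true; }
              cv_.notify_one();
              thread_.join();
          }
          void RequestGather() {                        // called once per frame
              { std::lock_guard<std::mutex> lock(m_); pending_ = true; }
              cv_.notify_one();
          }
      private:
          void Run() {
              std::unique_lock<std::mutex> lock(m_);
              for (;;) {
                  cv_.wait(lock, [this] { return pending_ || quit_; });
                  if (quit_) return;
                  pending_ = false;
                  lock.unlock();
                  GatherSceneData();                    // the ~1 ms job; lock the scene-graph mutex inside
                  lock.lock();
              }
          }
          void GatherSceneData() { /* ... */ }
          std::mutex m_;
          std::condition_variable cv_;
          bool pending_ = false, quit_ = false;
          std::thread thread_;                          // declared last so the other members exist first
      };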
  12. Hello! I wrote a simple bones system that renders a 3D model with bones using software vertex processing. The model is loaded perfectly, but I can't see any colors on it. For illustration, you can see the 3D line list; the bones (32 bones) are in the correct position (bind pose). Now, here's the problem. When I try to render the mesh with transformations applied, the 3D lines disappear. I'm guessing the model is rendered, but the colors are not visible for whatever reason. I tried moving my camera around the line list, but all I can see is some lines disappearing due to the black color of the vertices? I'm not loading any textures - am I supposed to load them? However, if I render the vertices without applying ANY bone transformations, then I can see it, but it's a mess, obviously. If you're wondering why it's red: I have set the color of half of these vertices to red and the rest to white. First of all, my apologies for the messy code, but here it is. I'm not sure if vertices are supposed to have weights in them for software vertex processing; I'm storing the weights in a container, so you don't see them here.

      #define CUSTOMFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE )

      struct CUSTOMVERTEX
      {
          D3DXVECTOR3 Position;
          D3DXVECTOR3 Normal;
          DWORD Color;
      };

      This is how I store the vertices in the container and give them red and white colors: [...] This is how I create the device: [...] For every frame: [...] This is the UpdateSkinnedMesh method: [...] I have debugged the bone weights and bone indices; they are okay, and the bone weights add up to 1.0f. So I'm really wondering why I can't see the model with colors on it?
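      A hedged guess at the black mesh: the D3D9 fixed-function pipeline ignores the D3DFVF_DIFFUSE vertex colour while lighting is enabled (the default), and with no lights set up, everything shades to black ambient. No textures are required for vertex colours. Assuming device is the IDirect3DDevice9*:

      // Use the per-vertex diffuse colour directly instead of the lighting pipeline.
      device->SetRenderState(D3DRS_LIGHTING, FALSE);

      If lighting is wanted later, the alternative is to keep D3DRS_LIGHTING on and define at least one light plus a material.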
  13. Hi, can somebody please tell me, in clear simple steps, how to debug and step through an HLSL shader file? I already did Debug > Start Graphics Debugging, captured some frames from Visual Studio, and double-clicked on a frame to open it, but I have no idea where to go from there. I've been searching for hours and there's no information on this, not even on the Microsoft website! They say "open the Graphics Pixel History window", but there is no such window! Then they say, in the "Pipeline Stages", choose "Start Debugging", but the Start Debugging option is nowhere to be found in the whole interface. Also, how do I even open the HLSL file that I want to set a breakpoint in from inside the Graphics Debugger? All I want to do is set a breakpoint in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated, and Microsoft's instructions are horrible! Somebody please, please help.
  14. I work a lot with identifier types to help me while debugging or managing collections of instances in my code. Previously, I had using directives like the following:

      using AnimalId = unsigned int;
      using TreeId = unsigned int;

      After a while, I accidentally mixed AnimalId and TreeId in my logic, so I decided to use structs in order to make the identifiers strongly typed. To avoid code duplication, I created a template:

      template <typename TypeTag, typename ValueType>
      struct IdType
      {
          ValueType value;
          // Methods for comparison and hashes
      };

      However, I believe there are two different ways of using this template:

      // Using directive
      struct AnimalIdTag {};
      using AnimalId = IdType<AnimalIdTag, unsigned int>;

      // Inheritance
      struct TreeId : IdType<TreeId, unsigned int> {};

      And here is where I'm in doubt. It seems both ways are valid, but there should be some differences. For example, with inheritance I can forward-declare TreeId in headers, which doesn't seem really feasible (in a painless way) with AnimalId. However, TreeId uses inheritance, and my knowledge of how inheritance and templates work "in the background" is too weak to say, but it feels like there might be some hidden drawback. Are there differences that would make deciding which one to use easier? Or are there currently no drawbacks (besides being able to forward-declare or not)?
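      A compact sketch of the forward-declaration difference the post mentions (assuming the IdType template from the post). The alias names a template specialization, so it can never be forward-declared without seeing IdType; the derived form is an ordinary class type:

      // In some header - no template machinery needed:
      struct TreeId;
      void PlantTree(TreeId id); // fine: by-value parameters may be incomplete in a declaration

      // Definitions:
      template <typename TypeTag, typename ValueType>
      struct IdType
      {
          ValueType value;
      };

      struct AnimalIdTag {};
      using AnimalId = IdType<AnimalIdTag, unsigned int>; // alias: cannot be forward-declared

      struct TreeId : IdType<TreeId, unsigned int> {};    // derived: completes the earlier declaration

      One concrete drawback of the derived form: before C++17 a class with a base is not an aggregate, so TreeId{42} stops compiling unless you add constructors; in C++17 aggregate initialization works again via the base subobject (TreeId{{42}}). Otherwise, with an empty derived struct there is no overhead - no virtual functions means no hidden cost from inheritance here.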
  15. Hello everyone. Some classmates and I are making a video game from scratch in C++. We are currently using the Irrlicht engine in our code, but will begin to develop our own engine in a couple of weeks. One of our concerns right now is the user interface. After doing some research, I found several libraries for creating a GUI in C++. The problem is that in the past we have spent more time learning and figuring out how to make things work with libraries than we would have if we had written the entire code from scratch ourselves. So I wanted to ask more experienced game programmers out there: what would be your preferred choice when making a UI, using a library or writing the code from scratch? And in the case of libraries, which ones would you use? Here's what we will most likely need:

      - Health bars
      - Displaying numbers in the HUD
      - Displaying icons (images) in the HUD
      - Drawing game menu elements (text and rectangles)

      Also, we are currently working on Linux - more specifically Manjaro KDE - and without IDEs (using our own makefile and the console). PS: Sorry in advance if this is in the wrong topic; it's my first time asking a question on GameDev.net.
  16. Mie Scattering

    I implemented the paper "Numerical Methods for Mie Theory of Scattering by a Sphere". Link: https://prints.iiap.res.in/bitstream/2248/72/1/Numerical Methods for Mie Theory of Scattering by a Sphere.pdf Unfortunately, my implementation is not accurate, and I couldn't find any mistakes in my code. Can anyone help me?

    struct XMDOUBLE2
    {
        double x;
        double y;

        XMDOUBLE2() {}
        XMDOUBLE2(double _x, double _y) : x(_x), y(_y) {}
        explicit XMDOUBLE2(_In_reads_(2) const double* pArray) : x(pArray[0]), y(pArray[1]) {}

        XMDOUBLE2& operator= (const XMFLOAT2& Float2) { x = Float2.x; y = Float2.y; return *this; }
    };

    XMDOUBLE2 Complex_Add(XMDOUBLE2 z1, XMDOUBLE2 z2)      // z1 + z2
    {
        return XMDOUBLE2((z1.x + z2.x), (z1.y + z2.y));
    }

    XMDOUBLE2 Complex_Subtract(XMDOUBLE2 z1, XMDOUBLE2 z2) // z1 - z2
    {
        return XMDOUBLE2((z1.x - z2.x), (z1.y - z2.y));
    }

    XMDOUBLE2 Complex_Multiply(XMDOUBLE2 z1, XMDOUBLE2 z2) // z1*z2
    {
        return XMDOUBLE2(((z1.x*z2.x) - (z1.y*z2.y)), ((z1.x*z2.y) + (z1.y*z2.x)));
    }

    double Complex_Norm(XMDOUBLE2 z)                       // |z|
    {
        return sqrt((z.x*z.x) + (z.y*z.y));
    }

    XMDOUBLE2 Complex_Division(XMDOUBLE2 z1, XMDOUBLE2 z2) // z1/z2
    {
        XMDOUBLE2 tmp;
        if (Complex_Norm(z2) != 0)
        {
            tmp.x = ((z1.x*z2.x) + (z1.y*z2.y)) / ((z2.x*z2.x) + (z2.y*z2.y));
            tmp.y = ((z1.y*z2.x) - (z1.x*z2.y)) / ((z2.x*z2.x) + (z2.y*z2.y));
        }
        return tmp;
    }

    // m = 1.5+i0.0
    // m = (m')+i(-m"): complex index of refraction, r: radius of the spherical particle in um, Lamda: wavelength in nm
    void MieCoefficient(XMDOUBLE2 m, float r, float Lamda)
    {
    #define TOTAL_MAX_TERMS 2100

        double Alpha[TOTAL_MAX_TERMS], Beta[TOTAL_MAX_TERMS], P[TOTAL_MAX_TERMS], Q[TOTAL_MAX_TERMS],
               R[TOTAL_MAX_TERMS], S[TOTAL_MAX_TERMS], B[TOTAL_MAX_TERMS], C[TOTAL_MAX_TERMS], C_[TOTAL_MAX_TERMS];
        XMDOUBLE2 A[TOTAL_MAX_TERMS], G[TOTAL_MAX_TERMS], H[TOTAL_MAX_TERMS], a[TOTAL_MAX_TERMS], b[TOTAL_MAX_TERMS];

        for (int n = 0; n < TOTAL_MAX_TERMS; n++)
        {
            Alpha[n] = 0; Beta[n] = 0; P[n] = 0; Q[n] = 0; R[n] = 0; S[n] = 0; B[n] = 0; C[n] = 0; C_[n] = 0;
            A[n] = XMDOUBLE2(0, 0); G[n] = XMDOUBLE2(0, 0); H[n] = XMDOUBLE2(0, 0);
            a[n] = XMDOUBLE2(0, 0); b[n] = XMDOUBLE2(0, 0);
        }

        // float Lamdaum = (float)Lamda*1e-3;
        float x = 23.1f; // ((2*XM_PI*r) / Lamdaum);
        // int MAX_TERMS = (int)(x + 7.5f*pow(x, .34f) + 2);
        int MAX_TERMS = (int)(x + 4*pow(x, 0.33f) + 2); // Wiscombe

        if (MAX_TERMS < TOTAL_MAX_TERMS)
        {
            double y1 = x*m.x;
            double y2 = x*m.y;
            XMDOUBLE2 z = XMDOUBLE2(y1, y2);
            double zNorm = Complex_Norm(z);
            double y = zNorm*zNorm;

            for (int n = MAX_TERMS; n > 0; n--)
            {
                Alpha[n] = ((((2*(double)(n)) - 1)*y1) / y) - P[n];
                Beta[n] = ((((2*(double)(n)) - 1)*y2) / y) + Q[n];
                P[n-1] = (Alpha[n] / ((Alpha[n]*Alpha[n]) + (Beta[n]*Beta[n])));
                Q[n-1] = (Beta[n] / ((Alpha[n]*Alpha[n]) + (Beta[n]*Beta[n])));
                R[n-1] = (x / (((2*(double)(n)) - 1) - (x*R[n])));
            }

            S[0] = sin(x);
            for (int n = 1; n < MAX_TERMS; n++)
            {
                S[n] = R[n]*S[n-1];
            }

            for (int n = 0; n < MAX_TERMS; n++)
            {
                XMDOUBLE2 t1 = Complex_Division(XMDOUBLE2(1.0f, 0), XMDOUBLE2(P[n], Q[n]));
                XMDOUBLE2 t2 = Complex_Division(XMDOUBLE2((double)(n), 0), z);
                A[n] = Complex_Subtract(t1, t2);
                B[n] = (((1/R[n]) - ((double)(n)/x)));
            }

            C[0] = cos(x);
            C[1] = ((cos(x)/x) + sin(x));
            for (int n = 2; n < MAX_TERMS; n++)
            {
                C[n] = (((((2*(double)(n)) - 1) / x) * C[n-1]) - C[n-2]);
            }

            C_[0] = 0;
            for (int n = 1; n < MAX_TERMS; n++)
            {
                C_[n] = (((((double)(-n)) / x) * C[n]) - C[n-1]);
            }

            for (int n = 0; n < MAX_TERMS; n++)
            {
                G[n] = XMDOUBLE2(1, (C[n]/S[n]));
                H[n] = XMDOUBLE2(B[n], (C_[n]/S[n]));
            }

            for (int n = 0; n < MAX_TERMS; n++)
            {
                XMDOUBLE2 t1 = Complex_Multiply(m, XMDOUBLE2(B[n], 0));
                XMDOUBLE2 t2 = Complex_Multiply(A[n], G[n]);
                XMDOUBLE2 t3 = Complex_Multiply(m, H[n]);
                a[n] = Complex_Division(Complex_Subtract(A[n], t1), Complex_Subtract(t2, t3));

                XMDOUBLE2 t4 = Complex_Multiply(m, A[n]);
                XMDOUBLE2 t5 = Complex_Multiply(t4, G[n]);
                b[n] = Complex_Division(Complex_Subtract(t4, XMDOUBLE2(B[n], 0)), Complex_Subtract(t5, H[n]));
            }

            double Qext = 0, tmp = 0;
            for (int n = 0; n < MAX_TERMS; n++)
            {
                tmp += (((2*(double)(n)) + 1) * Complex_Add(a[n], b[n]).x);
            }
            Qext = ((tmp*2) / (x*x));

            char s[256];
            sprintf_s(s, "%.25f", Qext);
            MessageBoxA(0, s, "", MB_OK);
        }
    }
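    One thing that stands out in the code above, independent of the recurrence math: the size parameter is pinned to the constant 23.1f, so the r and Lamda arguments never influence the result - every particle radius and wavelength yields identical coefficients. A sketch of what the commented-out line appears to have intended:

    float LamdaUm = Lamda * 1e-3f;          // wavelength, nm -> um
    float x = (2.0f * XM_PI * r) / LamdaUm; // Mie size parameter computed from r and Lamda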
  17. I finally ported Rastertek's tutorial #42 on soft shadows and blur shading. This tutorial has a ton of really useful effects, and there's no working version anywhere online. Unfortunately, it just draws a black screen, and I'm not sure what's causing it. I'm guessing the camera or ortho matrix transforms are wrong, or the light directions, or maybe texture resources not being properly initialized. I didn't change any of the variables, though; I only upgraded all the types and functions from D3DXVECTOR3 to XMFLOAT3, and used DirectXTK for texture loading. If anyone is willing to take a look at what might be causing the black screen - maybe something pops out to you - let me know, thanks. https://github.com/mister51213/DX11Port_SoftShadows Also, for reference, here's tutorial #40, which has normal shadows but no blur, which I also ported, and it works perfectly: https://github.com/mister51213/DX11Port_ShadowMapping
  18. I'm using MicroPather, a C++ implementation of A* pathfinding, in my games. I got it to work fine with a strict grid-based system, but now I need it to also work with "nodes". My setup is like this:

      1. I have a collection of nodes. Each node has an X/Y position, an ID, and an array of at most 10 node IDs it can move to ("adjacent nodes").
      2. Which nodes are reachable from each node is checked in advance (based on a maximum distance and on not having any obstacles between them).
      3. The cost to move from one node to another is simply the distance between the two nodes' positions.

      I guess some things will be similar to how a grid works, but I'm not sure how to restructure the code.
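      A hedged sketch of a node-graph adapter, assuming MicroPather's Graph interface as declared in micropather.h (older versions pass std::vector instead of MP_VECTOR - check the header you ship). The Node struct here is illustrative, standing in for the poster's structure, with the adjacency array resolved to pointers:

      #include "micropather.h"
      #include <cmath>
      #include <vector>

      struct Node {
          float x, y;
          int id;
          std::vector<Node*> adjacent; // the reachable nodes from step 2
      };

      class NodeGraph : public micropather::Graph {
      public:
          // Heuristic: straight-line distance, which matches the cost metric in step 3.
          float LeastCostEstimate(void* stateStart, void* stateEnd) override {
              const Node* a = static_cast<Node*>(stateStart);
              const Node* b = static_cast<Node*>(stateEnd);
              return std::sqrt((a->x - b->x) * (a->x - b->x) + (a->y - b->y) * (a->y - b->y));
          }
          // Instead of emitting the 4/8 grid neighbours, emit the precomputed adjacency list.
          void AdjacentCost(void* state, MP_VECTOR<micropather::StateCost>* adjacent) override {
              Node* n = static_cast<Node*>(state);
              for (Node* next : n->adjacent) {
                  micropather::StateCost sc;
                  sc.state = next;
                  sc.cost = LeastCostEstimate(n, next); // cost = distance between the two nodes
                  adjacent->push_back(sc);
              }
          }
          void PrintStateInfo(void* state) override {}
      };

      // Usage is unchanged from the grid version: construct MicroPather with the graph
      // and call Solve(startNode, endNode, &path, &totalCost).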
  19. We are pleased to announce the release of Matali Physics 4.0, the fourth major version of the Matali Physics engine.

      What is Matali Physics? Matali Physics is an advanced, multi-platform, high-performance 3D physics engine intended for games, virtual reality, and physics-based simulations. Matali Physics and its add-ons form a physics environment which provides complex physical simulation and physics-based modeling of objects both real and imagined. The engine is available across multiple platforms: Android, *BSD, iOS, Linux, OS X, SteamOS, Windows 10 UAP/UWP, Windows 7/8/8.1/10, and Windows XP/Vista.

      What's new in version 4.0?

      - One extended edition of the Matali Physics engine
      - Support for Android 8.0 Oreo, iOS 11.x, and macOS High Sierra (version 10.13.x), as well as support for the latest IDEs
      - Matali Render 3.0 add-on with physically-based rendering (PBR), screen space ambient occlusion (SSAO), and support for the Vulkan API
      - Matali Games add-on

      Main benefits of using Matali Physics:

      - A stable, high-performance solution supplied together with a rich set of add-ons for all major mobile and desktop platforms (both 32- and 64-bit)
      - Advanced samples ready to use in your own games
      - New features on request
      - Dedicated technical support
      - Regular updates and fixes

      The engine history in a nutshell: Matali Physics was built in 2009 as a dedicated solution for XNA. The first complete version of the engine was released in November 2010, and it was developed further until July 2014 as a multi-platform, fully managed solution for .NET and Mono. In the meantime, from October 2013 to July 2014, simultaneous support for C++ was introduced. A significant change occurred in July 2014 with the release of version 3.0: the managed version of the engine was abandoned, and the engine was released solely with a new native core written entirely in modern C++. Currently the engine is being intensively developed as an advanced, cross-platform, high-performance 3D physics solution.

      If you have questions related to the latest update, or about the use of the Matali Physics engine as a stable physics solution in your projects, please don't hesitate to contact us.
  20. The same Matali Physics 4.0 announcement as result 19 above.
  21. Hey guys, I've been using OpenAL for a long time, and I realized that if you apply an effect to one audio source and then apply another effect to another audio source, the effect on both changes to the latest one. Is this normal? I think it's a pretty stupid question, to be honest, but after some searching through the internet I couldn't get a clear answer. As far as I know, audio effects are handled by the current listener zone (I'm not sure about this). Can anyone confirm whether that statement is right? Thanks in advance.
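      A note on the model: in OpenAL's EFX extension, effects are not owned by sources or by the listener; sources route through auxiliary effect slots. Re-loading one shared slot with a new effect changes every source attached to it, which matches the behaviour described. A hedged sketch giving each effect its own slot (assumes the EFX entry points are available, e.g. AL_ALEXT_PROTOTYPES with OpenAL Soft, and that sourceA/sourceB are existing sources):

      #define AL_ALEXT_PROTOTYPES
      #include <AL/al.h>
      #include <AL/efx.h>

      ALuint sourceA = 0, sourceB = 0; // assumed to be generated elsewhere

      ALuint slotA, slotB, reverb, echo;
      alGenAuxiliaryEffectSlots(1, &slotA);
      alGenAuxiliaryEffectSlots(1, &slotB);

      alGenEffects(1, &reverb);
      alEffecti(reverb, AL_EFFECT_TYPE, AL_EFFECT_REVERB);
      alGenEffects(1, &echo);
      alEffecti(echo, AL_EFFECT_TYPE, AL_EFFECT_ECHO);

      // Load each effect into its own slot, so the two no longer overwrite each other.
      alAuxiliaryEffectSloti(slotA, AL_EFFECTSLOT_EFFECT, reverb);
      alAuxiliaryEffectSloti(slotB, AL_EFFECTSLOT_EFFECT, echo);

      // Route each source to its own slot on auxiliary send 0.
      alSource3i(sourceA, AL_AUXILIARY_SEND_FILTER, slotA, 0, AL_FILTER_NULL);
      alSource3i(sourceB, AL_AUXILIARY_SEND_FILTER, slotB, 0, AL_FILTER_NULL);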
  22. Greetings! Who do I need? I am creating a game and I need a 2D pixel artist (or artists) for it. Introduction: I created a game company two days ago called Dark Star Ship Studio (link: http://darkstarshipstudio.com/). The game I am creating is a 2D action RPG with city-building and world-exploration gameplay elements. The world is a procedurally generated map (it's in the design process) and I am creating the gameplay; unfortunately, my art skills are very poor. How much work have I accomplished? I've been working on my homebrew game engine in C++, and I did try to make some pixel art (I attached an image example). Again, I know my pixel art is horrible. I do not have the talent or patience to create visual art. In addition, I did try to learn how to create a decent pixel sprite sheet; however, I lost my patience and decided to come here to seek talented artists who are looking for an opportunity. What's in it for you? I wish I could compensate you for your services and help. I am in the process of adding more software design plans for the game engine, and hopefully the game can become available on mobile devices. As a matter of fact, I did register for the Nintendo Developer Portal, and I do wish to release the game on the Nintendo 3DS or Nintendo Switch (I know what you're thinking: "This guy is nuts!"). I cannot promise the product will succeed; however, the reward of creating a game and gaining experience can be very helpful in the future. Thank you for your time! cyberspace009
  23. Hi guys, just wondering if it is possible to get the window handle of another open application given its window title? The reasoning behind this is that I am making an external editor for an IDE and want to inject an 'F5' to force the external IDE to make a build. The IDE does not have any external way to trigger a build (GameMaker Studio). Any advice would be awesome. Thanks in advance.
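      A plain Win32 sketch of the idea: find the IDE's top-level window by its exact title and post an F5 press to it. The title string is illustrative - check the real one with a tool like Spy++:

      #include <windows.h>

      void TriggerExternalBuild()
      {
          HWND ide = FindWindowA(nullptr, "GameMaker: Studio"); // exact title required
          if (ide != nullptr)
          {
              PostMessageA(ide, WM_KEYDOWN, VK_F5, 0);
              PostMessageA(ide, WM_KEYUP, VK_F5, (LPARAM)0xC0000001); // repeat=1, transition bits set
          }
      }

      One caveat: some applications ignore posted keystrokes because they read keyboard state directly; in that case the usual fallback is SetForegroundWindow on the IDE followed by SendInput.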
  24. Hi, heading is a float like 12.3524423. std::string sheading = std::to_string(heading); Now I only want to display 12.35. Many thanks in advance.
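      std::to_string always formats floating-point values with six decimal places; to control the precision, format through a stream (or snprintf). A minimal sketch:

      #include <iomanip>
      #include <sstream>
      #include <string>

      std::string FormatHeading(float heading)
      {
          std::ostringstream oss;
          oss << std::fixed << std::setprecision(2) << heading;
          return oss.str(); // 12.3524423f -> "12.35"
      }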
  25. I made a spotlight that:

      1. Projects 3D models onto a render target from each light's POV to simulate shadows.
      2. Cuts a circle out of the square of light projected onto the render target by the light frustum, then only lights up the pixels inside that circle (except the shadowed parts, of course), so you don't see the square edges of the projected frustum.

      After an if check to see whether the dot product of the light direction and the light-to-vertex vector is greater than .95 (my initial cutoff), I multiply the light intensity value inside the resulting circle by that same dot product, which should range between .95 and 1.0. This should give the light inside the circle a falloff from 100% lit to 0% lit toward the edge of the circle. However, there is no falloff - it's all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help. Thank you so much.

      float CalculateSpotLightIntensity(float3 LightPos_VertexSpace, float3 LightDirection_WS, float3 SurfaceNormal_WS)
      {
          //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
          float3 lightToVertex_WS = -LightPos_VertexSpace;

          float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));

          // METALLIC EFFECT (deactivate for now)
          float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));

          if (dotProduct > .95 /*&& metalEffect > .55*/)
          {
              return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
              //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
              //return dotProduct;
          }
          else
          {
              return 0;
          }
      }

      float4 LightPixelShader(PixelInputType input) : SV_TARGET
      {
          float2 projectTexCoord;
          float depthValue;
          float lightDepthValue;
          float4 textureColor;

          // Set the bias value for fixing the floating point precision issues.
          float bias = 0.001f;

          // Set the default output color to the ambient light value for all pixels.
          float4 lightColor = cb_ambientColor;

          /////////////////// NORMAL MAPPING //////////////////
          float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

          // Expand the range of the normal value from (0, +1) to (-1, +1).
          bumpMap = (bumpMap * 2.0f) - 1.0f;

          // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
          float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

          //////////////// LIGHT LOOP ////////////////
          for (int i = 0; i < NUM_LIGHTS; ++i)
          {
              // Calculate the projected texture coordinates.
              projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
              projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

              if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
              {
                  // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
                  depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

                  // Calculate the depth of the light.
                  lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

                  // Subtract the bias from the lightDepthValue.
                  lightDepthValue = lightDepthValue - bias;

                  float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue);

                  // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
                  // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
                  if (lightDepthValue < depthValue)
                  {
                      // Calculate the amount of light on this pixel.
                      float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));

                      if (lightIntensity > 0.0f)
                      {
                          // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                          float spotLightIntensity = CalculateSpotLightIntensity(
                              input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
                              cb_lights[i].lightDirection,
                              bumpNormal /*input.normal*/);

                          lightColor += cb_lights[i].diffuseColor * spotLightIntensity * .18f; // spotlight
                          //lightColor += cb_lights[i].diffuseColor * lightIntensity * .2f; // square light
                      }
                  }
              }
          }

          // Saturate the final light color.
          lightColor = saturate(lightColor);
          // lightColor = saturate(CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));

          // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
          input.tex.x += textureTranslation;

          // BLENDING
          float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
          float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
          float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
          textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));

          // Combine the light and texture color.
          float4 finalColor = lightColor * textureColor;

          /////// TRANSPARENCY /////////
          //finalColor.a = 0.2f;

          return finalColor;
      }

      Attachments: Light_vs.hlsl, Light_ps.hlsl
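      A note on the missing falloff, grounded in the code above: the active return statement is saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))), which never uses dotProduct at all - the multiplied variant is commented out. And even that variant multiplies by a value confined to [0.95, 1.0], at most a 5% swing, which is visually flat. Remapping the cone term to [0, 1] before applying it gives the expected edge falloff; in HLSL the same expression is saturate((dotProduct - 0.95) / (1.0 - 0.95)). A C++ mirror of the math:

      #include <algorithm>

      // Remap the spotlight cone term from [cutoff, 1] to [0, 1] so intensity really
      // runs from 0 at the circle's edge to 1 at its centre (cutoff = 0.95f in the post).
      float SpotFalloff(float dotProduct, float cutoff)
      {
          float t = (dotProduct - cutoff) / (1.0f - cutoff);
          return std::clamp(t, 0.0f, 1.0f); // the equivalent of HLSL saturate
      }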