
C++ Get HWND of another application

Recommended Posts

Hi Guys,

Just wondering if it is possible to get the window handle of another open application given its window title?

The reasoning behind this is that I am making an external editor for an IDE and want to inject an 'F5' keypress to force the external IDE to make a build. The IDE does not have any external way to trigger a build (GameMaker Studio).

Any advice would be awesome.

Thanks in advance.

20 minutes ago, lonewolff said:

Hi Guys,

Just wondering if it is possible to get the window handle of another open application given its window title?

The reasoning behind this is that I am making an external editor for an IDE and want to inject an 'F5' keypress to force the external IDE to make a build. The IDE does not have any external way to trigger a build (GameMaker Studio).

Any advice would be awesome.

Thanks in advance.

First result.


Yes, I did do a google search, thanks ;)

Isn't this talking about getting the window of the 'host' application though?

In your example, the OP is talking about an application that is loading his own DLL.

If I were to get the HWND of something like notepad.exe how would I go about that?

1 minute ago, lonewolff said:

Yes, I did do a google search, thanks

Isn't this talking about getting the window of the 'host' application though?

In your example, the OP is talking about an application that is loading his own DLL.

If I were to get the HWND of something like notepad.exe how would I go about that?

Read the reply in that thread. It outlines the method and mentions the fact that you may not be dealing with a single main window. Once you can list windows that belong to a process, figure out how to go about doing that for a different process. The first logical step here would be to substitute whatever process you need for the current process. Again, first reply.

Note that there may be more than one instance of a single executable running, so you probably need to list all processes called notepad.exe, open each one and list all of its windows, doing your best to figure out which one is the main window. This may be trivial for notepad.exe, but not so much for something messier, like Gimp.
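
For reference, here is a minimal sketch (not from the linked thread) of that enumeration approach in Win32 C++. It lists visible, unowned top-level windows whose owning process matches a given executable name; the "main window" test is only a heuristic, and error handling is kept to a minimum.

// Minimal sketch: list top-level windows owned by processes whose executable
// matches a given name (e.g. "notepad.exe").
#include <windows.h>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

struct EnumContext
{
    std::string exeName;            // executable to look for, e.g. "notepad.exe"
    std::vector<HWND> windows;      // candidate main windows found
};

static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam)
{
    EnumContext* ctx = reinterpret_cast<EnumContext*>(lParam);

    // Heuristic for a "main" window: visible and not owned by another window.
    if (!IsWindowVisible(hwnd) || GetWindow(hwnd, GW_OWNER) != NULL)
        return TRUE;

    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);

    HANDLE process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (process)
    {
        char path[MAX_PATH] = {};
        DWORD size = MAX_PATH;
        if (QueryFullProcessImageNameA(process, 0, path, &size))
        {
            // Compare only the file name portion of the full image path.
            std::string fullPath(path);
            size_t slash = fullPath.find_last_of('\\');
            std::string exe = (slash == std::string::npos) ? fullPath : fullPath.substr(slash + 1);
            if (_stricmp(exe.c_str(), ctx->exeName.c_str()) == 0)
                ctx->windows.push_back(hwnd);
        }
        CloseHandle(process);
    }
    return TRUE; // keep enumerating
}

int main()
{
    EnumContext ctx;
    ctx.exeName = "notepad.exe";
    EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&ctx));

    for (HWND hwnd : ctx.windows)
    {
        char title[256] = {};
        GetWindowTextA(hwnd, title, sizeof(title));
        std::cout << hwnd << " : " << title << "\n";
    }
    return 0;
}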

Just now, lonewolff said:

Anyway, worked it out. Thanks :)

 


HWND hWnd = FindWindow(NULL, "Untitled - Notepad");

 

This will only give you a valid result if the user is editing an unnamed and likely unsaved document and only if there is one instance of Notepad running.

9 minutes ago, irreversible said:

This will only give you a valid result if the user is editing an unnamed and likely unsaved document and only if there is one instance of Notepad running.

Correct.

In my case I know what the window title will be on every occasion (even if it changes). So this method works out well for me.
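
For completeness, here is a minimal sketch of the full round trip discussed in this thread: look up the window by its exact title, then post an F5 keystroke to it. This is only an illustration of the approach above, not code from the thread; the window title is a placeholder, and some applications ignore posted key messages, in which case bringing the window to the foreground and using SendInput may be needed instead.

// Minimal sketch: find a window by its exact title and post an F5 key press to it.
#include <windows.h>
#include <iostream>

int main()
{
    // Exact title match; substitute the real title of the target IDE window.
    HWND hWnd = FindWindowA(NULL, "Untitled - Notepad");
    if (hWnd == NULL)
    {
        std::cout << "Window not found\n";
        return 1;
    }

    // Post a simplified F5 key-down/key-up pair. Posted key messages bypass the
    // normal input queue, so some applications will not react to them.
    PostMessageA(hWnd, WM_KEYDOWN, VK_F5, 0);
    PostMessageA(hWnd, WM_KEYUP, VK_F5, 0);

    return 0;
}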



  • Similar Content

    • By Martin H Hollstein
      Originally posted on Troll Purse development blog.
Unreal Engine 4 is an awesome game engine and the Editor is just as good. There are a lot of built-in tools for a game (especially shooters) and some excellent tutorials out there for it. So, here is one more. Today's topic is different methods of programming player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, it can also easily translate to any game with a similar architecture.
      Interaction via Overlaps
By far the most common approach in player-world interaction tutorials is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction and it leverages classes already provided by the engine for most of the work. Here is a simple example where the overlap code is used to interact with the player:
      Header
// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AInteractiveActor();

    virtual void BeginPlay() override;

protected:
    UFUNCTION()
    virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

    UFUNCTION()
    virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

    UFUNCTION()
    virtual void OnPlayerInputActionReceived();

    UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
    class UBoxComponent* InteractionTrigger;
};

This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character has to overlap with the InteractionTrigger component. This puts the InteractiveActor onto the input stack for that specific player. The player then triggers the input action (via a keyboard key press, for example), and the code in OnPlayerInputActionReceived executes. Here is a layout of the executing code.
      Source
// Fill out your copyright notice in the Description page of Project Settings.

#include "InteractiveActor.h"
#include "Components/BoxComponent.h"

// Sets default values
AInteractiveActor::AInteractiveActor()
{
    PrimaryActorTick.bCanEverTick = true;

    RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
    RootComponent->SetMobility(EComponentMobility::Static);

    InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
    InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
    InteractionTrigger->SetMobility(EComponentMobility::Static);
    InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
    InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);

    InteractionTrigger->SetupAttachment(RootComponent);
}

void AInteractiveActor::BeginPlay()
{
    Super::BeginPlay();

    if (InputComponent == nullptr)
    {
        InputComponent = ConstructObject<UInputComponent>(UInputComponent::StaticClass(), this, "Input Component");
        InputComponent->bBlockInput = bBlockInput;
    }

    InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
}

void AInteractiveActor::OnPlayerInputActionReceived()
{
    // This is where the actor's logic will execute when it receives input.
    // You could add something as simple as a log message to test it out.
}

void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
{
    APawn* OtherPawn = Cast<APawn>(OtherActor);
    if (OtherPawn)
    {
        AController* Controller = OtherPawn->GetController();
        if (Controller)
        {
            APlayerController* PC = Cast<APlayerController>(Controller);
            if (PC)
            {
                EnableInput(PC);
            }
        }
    }
}

void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
{
    APawn* OtherPawn = Cast<APawn>(OtherActor);
    if (OtherPawn)
    {
        AController* Controller = OtherPawn->GetController();
        if (Controller)
        {
            APlayerController* PC = Cast<APlayerController>(Controller);
            if (PC)
            {
                DisableInput(PC);
            }
        }
    }
}
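
As a usage illustration (not part of the original article), a concrete interactive actor only needs to derive from the base class and override OnPlayerInputActionReceived. The ADoorActor class, its header names and the log message below are hypothetical:

// DoorActor.h - hypothetical example built on the AInteractiveActor base class above.
#pragma once

#include "CoreMinimal.h"
#include "InteractiveActor.h"
#include "DoorActor.generated.h"

UCLASS()
class GAME_API ADoorActor : public AInteractiveActor
{
    GENERATED_BODY()

protected:
    // Called when an overlapping player presses the "Interact" action.
    virtual void OnPlayerInputActionReceived() override
    {
        // Replace this log with door-opening logic (animation, timeline, sounds, etc.).
        UE_LOG(LogTemp, Log, TEXT("Door received interaction input"));
    }
};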
      Pros and Cons
The positives of the collision volume approach are the ease with which the code is implemented and the strong decoupling from the rest of the game logic. The negatives are that interaction becomes broad when considering the game space, and that a new trigger volume has to be introduced for every interactive object in the scene.
      Interaction via Raytrace
Another popular method is to ray trace from the player's viewpoint for any interactive world items the player can interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. It eliminates the need for an extra collision volume for item usage and allows for more precise interaction targeting.
      Source
      AInteractiveActor.h
// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
    GENERATED_BODY()

public:
    virtual void OnReceiveInteraction(class APlayerController* PC);
};
      AMyPlayerController.h
// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "AMyPlayerController.generated.h"

class AInteractiveActor;

UCLASS()
class GAME_API AMyPlayerController : public APlayerController
{
    GENERATED_BODY()

public:
    AMyPlayerController();

    virtual void SetupInputComponent() override;

    float MaxRayTraceDistance;

private:
    AInteractiveActor* GetInteractiveByCast();

    void OnCastInput();
};
These header files define the minimal functions needed to set up raycast interaction. Also note that there are two files here, as two classes need modification to support input. This is more work than the first method shown, which uses trigger volumes. However, all input binding is now constrained to the single ACharacter class or - if you designed it differently - the APlayerController class. Here, the latter was used.
The logic flow is straightforward. The player points the center of the screen towards an object (ideally a HUD crosshair aids the aim) and presses the input button bound to Interact. From there, the function OnCastInput() is executed. It invokes GetInteractiveByCast(), which returns either the first actor hit by the camera ray cast or nullptr if there are no collisions. Finally, the AInteractiveActor::OnReceiveInteraction(APlayerController*) function is invoked. That final function is where inherited classes implement interaction-specific code.
      The simple execution of the code is as follows in the class definitions.
      AInteractiveActor.cpp
void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
{
    // Nothing in the base class (unless there is logic ALL interactive actors will
    // execute, such as cosmetics - sounds, particle effects, etc.).
}
      AMyPlayerController.cpp
AMyPlayerController::AMyPlayerController()
{
    MaxRayTraceDistance = 1000.0f;
}

void AMyPlayerController::SetupInputComponent()
{
    Super::SetupInputComponent();

    InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
}

void AMyPlayerController::OnCastInput()
{
    AInteractiveActor* Interactive = GetInteractiveByCast();
    if (Interactive != nullptr)
    {
        Interactive->OnReceiveInteraction(this);
    }
}

AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
{
    FVector CameraLocation;
    FRotator CameraRotation;
    GetPlayerViewPoint(CameraLocation, CameraRotation);

    FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

    FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
    TraceParams.bTraceAsyncScene = true;

    FHitResult Hit(ForceInit);
    GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

    AActor* HitActor = Hit.GetActor();
    if (HitActor != nullptr)
    {
        return Cast<AInteractiveActor>(HitActor);
    }

    return nullptr;
}
      Pros and Cons
One pro of this method is that control of input stays in the player controller, while the implementation of the input action is still owned by the Actor that receives it. Some cons are that the interaction fires as many times as the player presses the key, and that the interactive state under the crosshair is not continuously detected without a refactor using a Tick function override (see the sketch below).
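For illustration, here is a minimal sketch of that Tick-based refactor. It is not part of the original article; it assumes a Tick override and an AInteractiveActor* FocusedInteractive member (ideally a UPROPERTY so it is cleared when the actor is destroyed) have been added to the AMyPlayerController header, and it replaces the OnCastInput definition shown earlier.

// Hypothetical additions to the AMyPlayerController header:
//   virtual void Tick(float DeltaSeconds) override;
//   UPROPERTY() AInteractiveActor* FocusedInteractive = nullptr;

void AMyPlayerController::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Re-run the ray cast every frame so the HUD can show or hide an interaction
    // prompt for whatever the player is currently looking at.
    FocusedInteractive = GetInteractiveByCast();
}

void AMyPlayerController::OnCastInput()
{
    // Reuse the cached result instead of casting again on the key press.
    if (FocusedInteractive != nullptr)
    {
        FocusedInteractive->OnReceiveInteraction(this);
    }
}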
      Conclusion
There are many approaches to player-world interaction within a game world. When creating Actors in Unreal Engine 4 that allow for player interaction, two of these potential methods are collision volume overlaps and ray tracing from the player controller. Several other methods discussed elsewhere could also be used. Hopefully, the two implementations presented here help you decide how to go about player-world interaction in your game. Cheers!
       
       
      Originally posted on Troll Purse development blog.
    • By mister345
      Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow.
      http://www.rastertek.com/dx11tut42.html
      He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it.
      The way he does it is :
      1. Project the objects in the scene to a render target using the depth shader.
      2. Draw black and white shadows on another render target using those depth textures.
      3. Blur the black/white shadow texture produced in step 2 by 
      a) rendering it to a smaller texture
      b) vertical / horizontal blurring that texture
      c) rendering it back to a bigger texture again.
      4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.
       
      So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required.
       
Is there any easy way I can optimize the super expensive blur shader that wouldn't require a whole new complicated system?
      Like combining any of these render textures into one for example?
       
If you know of any easy way that doesn't require too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super complicated change would be beyond my capacity. Thanks.
       
      *For reference, here is my repo, in which I have simplified his tutorial and added an additional light.
       
      https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
       
    • By Sung Woo Yeo
       
Guys, I've spent a lot of time trying to load skeletal animation,
but it is very difficult...
      I refer to http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html
      but I didn't get the results I wanted
       
      Please Help Me
       
This is my code:
       
void LoadAnimation::BoneTransform(float time, vector<XMFLOAT4X4>& transforms)
{
    XMMATRIX Identity = XMMatrixIdentity();

    float TicksPerSecond = (float)(m_pScene->mAnimations[0]->mTicksPerSecond != 0 ?
        m_pScene->mAnimations[0]->mTicksPerSecond : 25.0f);
    float TimeInTicks = time * TicksPerSecond;
    float AnimationTime = fmod(TimeInTicks, (float)m_pScene->mAnimations[0]->mDuration);

    ReadNodeHeirarchy(AnimationTime, m_pScene->mRootNode, Identity);

    transforms.resize(m_NumBones);

    for (int i = 0; i < m_NumBones; ++i)
    {
        XMStoreFloat4x4(&transforms[i], m_Bones[i].second.FinalTransformation);
    }
}

void LoadAnimation::ReadNodeHeirarchy(float AnimationTime, const aiNode* pNode, const XMMATRIX& ParentTransform)
{
    string NodeName(pNode->mName.data);

    const aiAnimation* pAnim = m_pScene->mAnimations[0];

    XMMATRIX NodeTransformation = XMMATRIX(&pNode->mTransformation.a1);

    const aiNodeAnim* pNodeAnim = FindNodeAnim(pAnim, NodeName);
    if (pNodeAnim)
    {
        aiVector3D scaling;
        CalcInterpolatedScaling(scaling, AnimationTime, pNodeAnim);
        XMMATRIX ScalingM = XMMatrixScaling(scaling.x, scaling.y, scaling.z);
        ScalingM = XMMatrixTranspose(ScalingM);

        aiQuaternion q;
        CalcInterpolatedRotation(q, AnimationTime, pNodeAnim);
        XMMATRIX RotationM = XMMatrixRotationQuaternion(XMVectorSet(q.x, q.y, q.z, q.w));
        RotationM = XMMatrixTranspose(RotationM);

        aiVector3D t;
        CalcInterpolatedPosition(t, AnimationTime, pNodeAnim);
        XMMATRIX TranslationM = XMMatrixTranslation(t.x, t.y, t.z);
        TranslationM = XMMatrixTranspose(TranslationM);

        NodeTransformation = TranslationM * RotationM * ScalingM;
    }

    XMMATRIX GlobalTransformation = ParentTransform * NodeTransformation;

    int tmp = 0;
    for (auto& p : m_Bones)
    {
        if (p.first == NodeName)
        {
            p.second.FinalTransformation = XMMatrixTranspose(
                m_GlobalInverse * GlobalTransformation * p.second.BoneOffset);
            break;
        }
        tmp += 1;
    }

    for (UINT i = 0; i < pNode->mNumChildren; ++i)
    {
        ReadNodeHeirarchy(AnimationTime, pNode->mChildren[i], GlobalTransformation);
    }
}
The CalcInterpolated~ and Find~ functions are the same as in the tutorial
      (http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html)
       
I think I'm doing the matrix multiplication wrong,
but I don't know where it goes wrong.
If you want, I will post the other code.
       
Here is my result
(the hands are stretched and the legs look strange)

       
and this is the ideal result

    • By Ward Correll
I've included the source code of what I am playing with. It's an exercise from Frank Luna's DirectX 12 book about rendering a skull from a text file. I get a stack overflow error and the program quits. The parts I added are messy programming, but maybe one of you masterminds can tell me where I went wrong.
      Chapter_7_Drawing_in_Direct3D_Part_II.zip
    • By mister345
Hi guys, so I have about 200 files isolated in their own folder [physics code] in my Visual Studio project that I never touch. They might as well be a separate library; I just keep them as source files in case I need to look at them or step through them, but I will never actually edit them, so there's no need to ever rebuild them.
      However, when I need to rebuild the entire solution because I changed the other files, all of these 200 files get rebuilt too and it takes a really long time.
If I click on their properties -> exclude from build, then rebuild, it's no good, because then all the previously built objects get deleted automatically, so the build will fail.
So how do I make the built versions of the 200+ files in the physics directory stay where they are, so I never have to rebuild them, but still do a normal rebuild for everything else? Any easy answers to this? The simpler the better, as I am a noob at Visual Studio settings. Thanks.