Why does Unreal Engine wait for the GPU to finish rendering after every Present?


5 replies to this topic

#1 db123   Members   -  Reputation: 216


Posted 30 April 2014 - 04:15 AM

I found this Unreal Engine code here:

 

http://www.cabalchronicle.com/download/InfinityBlade_UE3_WIN&IOS_SRC_By_MeGaMaX/InfinityBlade/Development/Src/D3D9Drv/Src/D3D9Viewport.cpp

void FD3D9DynamicRHI::EndDrawingViewport(FViewportRHIParamRef ViewportRHI,UBOOL bPresent,UBOOL bLockToVsync)
{
	DYNAMIC_CAST_D3D9RESOURCE(Viewport,Viewport);

	SCOPE_CYCLE_COUNTER(STAT_D3D9PresentTime);

	GPUFrameTiming.EndTiming();

	extern DWORD GGPUFrameTime;
	if ( GPUFrameTiming.IsSupported() )
	{
		QWORD GPUTiming = GPUFrameTiming.GetTiming();
		QWORD GPUFreq = GPUFrameTiming.GetTimingFrequency();
		GGPUFrameTime = appTrunc( DOUBLE(GPUTiming) / DOUBLE(GPUFreq) / GSecondsPerCycle );
	}
	else
	{
		GGPUFrameTime = 0;
	}

	check(DrawingViewport.GetReference() == Viewport);
	DrawingViewport = NULL;

	// Clear references the device might have to resources.
	Direct3DDevice->SetRenderTarget(0,*BackBuffer);
	Direct3DDevice->SetDepthStencilSurface(NULL);

	UnsetPSTextures();
	UnsetVSTextures();

	Direct3DDevice->SetVertexShader(NULL);

	ResetVertexStreams();

	Direct3DDevice->SetIndices(NULL);
	Direct3DDevice->SetPixelShader(NULL);

	#if WITH_PANORAMA
		extern void appPanoramaRenderHookRender(void);
		// Allow G4WLive to render the Live Guide as needed (or toasts)
		appPanoramaRenderHookRender();
	#endif

	// Tell D3D we're done rendering.
	Direct3DDevice->EndScene();

	if(bPresent)
	{
		// Present the back buffer to the viewport window.
		HRESULT Result = S_OK;
		if(Viewport->IsFullscreen())
		{
			Result = Direct3DDevice->Present(NULL,NULL,NULL,NULL);
		}
		else
		{
			RECT DestRect;
			if(GetClientRect((HWND)Viewport->GetWindowHandle(),&DestRect))
			{		
				RECT SourceRect;
				SourceRect.left		= SourceRect.top = 0;
				SourceRect.right	= Viewport->GetSizeX();
				SourceRect.bottom	= Viewport->GetSizeY();

				// Only present to the viewport if its client area isn't zero-sized.
				if(DestRect.right > 0 && DestRect.bottom > 0)
				{
					Result = Direct3DDevice->Present(&SourceRect,NULL,(HWND)Viewport->GetWindowHandle(),NULL);
				}
			}
		}

		// Detect a lost device.
		if(Result == D3DERR_DEVICELOST || Result == E_FAIL)
		{
			// This variable is checked periodically by the main thread.
			bDeviceLost = TRUE;
		}
		else
		{
			VERIFYD3D9RESULT(Result);
		}
	}

	// Wait for the GPU to finish rendering the previous frame before finishing this frame.
	FrameSyncEvent.WaitForCompletion();
	FrameSyncEvent.IssueEvent();

	// If the input latency timer has been triggered, block until the GPU is completely
	// finished displaying this frame and calculate the delta time.
	if ( GInputLatencyTimer.RenderThreadTrigger )
	{
		FrameSyncEvent.WaitForCompletion();
		DWORD EndTime = appCycles();
		GInputLatencyTimer.DeltaTime = EndTime - GInputLatencyTimer.StartTime;
		GInputLatencyTimer.RenderThreadTrigger = FALSE;
	}
}

What is the effect of this code?

FrameSyncEvent.WaitForCompletion();
FrameSyncEvent.IssueEvent();

If I comment these two lines out, the engine still seems to work fine.

So I want to know: why do they do this?




#2 Hodgman   Moderators   -  Reputation: 30378


Posted 30 April 2014 - 06:24 AM

It seems to be waiting for an event that occurred at the beginning of the previous frame.

Whenever you send commands to the GPU, they are actually just being written into a command buffer. The GPU may be executing D3D/GL calls 1, 2, 3+ frames after the CPU makes those calls.

I would assume that this is there as a hack to ensure that the GPU is only ever 1 frame behind the CPU, and no more than that. It basically seems like a hack to disable triple buffering or any other kind of excessive buffering that a driver might automatically be performing. I guess this might help reduce input-to-screen latency on some systems?
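To make that concrete, here is a minimal single-threaded sketch of the pattern (not UE3 code; FakeGpu and SimulateFrames are made-up names for illustration). Each frame the CPU submits work, blocks on the event query issued at the end of the *previous* frame, then issues a new query, so the CPU can never get more than one frame ahead of the GPU:

```cpp
#include <deque>

// Hypothetical stand-in for the GPU: frames are submitted by the CPU
// and retired some time later, at most one per Tick().
struct FakeGpu
{
    std::deque<int> Pending;   // frames submitted but not yet retired
    int LastRetired = -1;

    void Submit(int Frame) { Pending.push_back(Frame); }

    // The GPU finishes one queued frame.
    void Tick()
    {
        if (!Pending.empty()) { LastRetired = Pending.front(); Pending.pop_front(); }
    }

    // Busy-wait until Frame is retired, like GetQueryData(..., TRUE).
    void WaitFor(int Frame) { while (LastRetired < Frame) Tick(); }
};

// Run NumFrames frames with the WaitForCompletion/IssueEvent pattern
// and return the largest CPU-ahead-of-GPU lead observed.
int SimulateFrames(int NumFrames)
{
    FakeGpu Gpu;
    int QueryFrame = -1;       // frame whose event query was issued last
    int MaxLead = 0;
    for (int Frame = 0; Frame < NumFrames; ++Frame)
    {
        Gpu.Submit(Frame);                            // record commands + Present
        if (QueryFrame >= 0) Gpu.WaitFor(QueryFrame); // WaitForCompletion()
        QueryFrame = Frame;                           // IssueEvent()

        int Lead = Frame - Gpu.LastRetired;
        if (Lead > MaxLead) MaxLead = Lead;
    }
    return MaxLead;
}
```

With the two sync calls in place, the simulated CPU never gets more than one frame ahead. Remove the WaitFor call and the lead grows without bound in this toy model; in reality the driver's own queue would cap it at around 2-3 frames, which is exactly the extra latency the engine is trying to avoid.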

#3 AgentC   Members   -  Reputation: 1331


Posted 30 April 2014 - 06:29 AM

Yeah, the multi-frame queuing and the delay it causes can be seen in some FPS games when you enable vsync and move the mouse: the camera movement clearly lags behind. But if the CPU has a lot of work to do anyway, this effect may not appear, as the driver never has time to queue multiple frames.
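As a rough back-of-the-envelope: at 60 Hz with vsync, each extra queued frame adds about 16.7 ms between sampling the mouse and the result reaching the screen. A tiny helper (LatencyMs is just an illustrative name, not anything from the engine) makes the numbers concrete:

```cpp
// Approximate input-to-screen latency when the driver has QueuedFrames
// whole frames buffered ahead of the one currently being displayed.
// The "+ 1" accounts for the frame being rendered/scanned out itself.
double LatencyMs(int QueuedFrames, double RefreshHz = 60.0)
{
    return (QueuedFrames + 1) * 1000.0 / RefreshHz;
}
```

For example, LatencyMs(0) is roughly 16.7 ms, while three queued frames give roughly 66.7 ms, which is easily visible as mouse lag.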


Every time you add a boolean member variable, God kills a kitten. Every time you create a Manager class, God kills a kitten. Every time you create a Singleton...

Urho3D (engine)  Hessian (C64 game project)


#4 Alessio1989   Members   -  Reputation: 1999


Posted 30 April 2014 - 06:49 AM

Hodgman, on 30 April 2014 - 06:24 AM, said:

It seems to be waiting for an event that occurred at the beginning of the previous frame.

Whenever you send commands to the GPU, they are actually just being written into a command buffer. The GPU may be executing D3D/GL calls 1, 2, 3+ frames after the CPU makes those calls.

I would assume that this is there as a hack to ensure that the GPU is only ever 1 frame behind the CPU, and no more than that. It basically seems like a hack to disable triple buffering or any other kind of excessive buffering that a driver might automatically be performing. I guess this might help reduce input-to-screen latency on some systems?

 

I remember that UT3 (one of the first UE3 games) had an in-game option for that.


"Software does not run in a magical fairy aether powered by the fevered dreams of CS PhDs"


#5 db123   Members   -  Reputation: 216


Posted 30 April 2014 - 07:35 AM

Hodgman, on 30 April 2014 - 06:24 AM, said:

It seems to be waiting for an event that occurred at the beginning of the previous frame.

Whenever you send commands to the GPU, they are actually just being written into a command buffer. The GPU may be executing D3D/GL calls 1, 2, 3+ frames after the CPU makes those calls.

I would assume that this is there as a hack to ensure that the GPU is only ever 1 frame behind the CPU, and no more than that. It basically seems like a hack to disable triple buffering or any other kind of excessive buffering that a driver might automatically be performing. I guess this might help reduce input-to-screen latency on some systems?

void FD3D9EventQuery::IssueEvent()
{
	if( Query )
	{
		Query->Issue(D3DISSUE_END);
	}
}

void FD3D9EventQuery::WaitForCompletion()
{
	if( Query )
	{
		UBOOL bRenderingIsFinished = FALSE;
		while ( D3DRHI->GetQueryData(Query,&bRenderingIsFinished,sizeof(bRenderingIsFinished),TRUE) && !bRenderingIsFinished )
		{
		}
	}
}


#6 MJP   Moderators   -  Reputation: 11330


Posted 30 April 2014 - 07:57 PM

Yeah, that's a common trick used by D3D games to keep the CPU from getting too far ahead of the GPU. Basically you just sync on a query from the previous frame, which forces the driver to wait for the GPU to catch up.





