
db123

Member Since 10 Apr 2010
Offline Last Active Oct 21 2014 09:09 PM

Topics I've Started

What's the principle behind a shader debugger?

21 October 2014 - 02:24 AM

I have found two shader debuggers.

The first one:

[Image: SD_Flagship_550_Shadowed.jpg]

https://developer.nvidia.com/nv-shader-debugger

The second one: PIX for Windows

http://www.chromium.org/developers/how-tos/debugging/pix-for-windows

Both of them can step through pixel shader code just like CPU code. I want to know: what is the principle behind this? If I wanted to implement a shader debugger myself, where would I start?

 

 


How to implement a ghost shadow (motion trail) effect like in these pictures?

10 September 2014 - 06:35 PM

Attached File  1.jpg   71.55KB

The second attachment is a GIF; click it to view the animation.

Attached File  2.gif   57.98KB

Attached File  3.jpg   34.71KB

Attached File  4.jpg   24.37KB


Why is alpha blend a better choice than alpha test for transparency on mobile devices?

20 May 2014 - 05:10 AM

I have noticed that Unreal Engine has an opacity mask slot on its material root expression, and that it implements this mask with the discard instruction.

Is this discard instruction (or the clip function) equivalent to alpha test?

However, the Unity engine documentation recommends using alpha blend rather than alpha test for transparency, because alpha test is slower than alpha blend on mobile devices.

What is the reason for this?


Why does Unreal Engine wait for the GPU to finish rendering after every Present?

30 April 2014 - 04:15 AM

I found this Unreal Engine code here:

 

http://www.cabalchronicle.com/download/InfinityBlade_UE3_WIN&IOS_SRC_By_MeGaMaX/InfinityBlade/Development/Src/D3D9Drv/Src/D3D9Viewport.cpp

void FD3D9DynamicRHI::EndDrawingViewport(FViewportRHIParamRef ViewportRHI,UBOOL bPresent,UBOOL bLockToVsync)
{
	DYNAMIC_CAST_D3D9RESOURCE(Viewport,Viewport);

	SCOPE_CYCLE_COUNTER(STAT_D3D9PresentTime);

	GPUFrameTiming.EndTiming();

	extern DWORD GGPUFrameTime;
	if ( GPUFrameTiming.IsSupported() )
	{
		QWORD GPUTiming = GPUFrameTiming.GetTiming();
		QWORD GPUFreq = GPUFrameTiming.GetTimingFrequency();
		GGPUFrameTime = appTrunc( DOUBLE(GPUTiming) / DOUBLE(GPUFreq) / GSecondsPerCycle );
	}
	else
	{
		GGPUFrameTime = 0;
	}

	check(DrawingViewport.GetReference() == Viewport);
	DrawingViewport = NULL;

	// Clear references the device might have to resources.
	Direct3DDevice->SetRenderTarget(0,*BackBuffer);
	Direct3DDevice->SetDepthStencilSurface(NULL);

	UnsetPSTextures();
	UnsetVSTextures();

	Direct3DDevice->SetVertexShader(NULL);

	ResetVertexStreams();

	Direct3DDevice->SetIndices(NULL);
	Direct3DDevice->SetPixelShader(NULL);

	#if WITH_PANORAMA
		extern void appPanoramaRenderHookRender(void);
		// Allow G4WLive to render the Live Guide as needed (or toasts)
		appPanoramaRenderHookRender();
	#endif

	// Tell D3D we're done rendering.
	Direct3DDevice->EndScene();

	if(bPresent)
	{
		// Present the back buffer to the viewport window.
		HRESULT Result = S_OK;
		if(Viewport->IsFullscreen())
		{
			Result = Direct3DDevice->Present(NULL,NULL,NULL,NULL);
		}
		else
		{
			RECT DestRect;
			if(GetClientRect((HWND)Viewport->GetWindowHandle(),&DestRect))
			{		
				RECT SourceRect;
				SourceRect.left		= SourceRect.top = 0;
				SourceRect.right	= Viewport->GetSizeX();
				SourceRect.bottom	= Viewport->GetSizeY();

				// Only present to the viewport if its client area isn't zero-sized.
				if(DestRect.right > 0 && DestRect.bottom > 0)
				{
					Result = Direct3DDevice->Present(&SourceRect,NULL,(HWND)Viewport->GetWindowHandle(),NULL);
				}
			}
		}

		// Detect a lost device.
		if(Result == D3DERR_DEVICELOST || Result == E_FAIL)
		{
			// This variable is checked periodically by the main thread.
			bDeviceLost = TRUE;
		}
		else
		{
			VERIFYD3D9RESULT(Result);
		}
	}

	// Wait for the GPU to finish rendering the previous frame before finishing this frame.
	FrameSyncEvent.WaitForCompletion();
	FrameSyncEvent.IssueEvent();

	// If the input latency timer has been triggered, block until the GPU is completely
	// finished displaying this frame and calculate the delta time.
	if ( GInputLatencyTimer.RenderThreadTrigger )
	{
		FrameSyncEvent.WaitForCompletion();
		DWORD EndTime = appCycles();
		GInputLatencyTimer.DeltaTime = EndTime - GInputLatencyTimer.StartTime;
		GInputLatencyTimer.RenderThreadTrigger = FALSE;
	}
}

What is the effect of this code?

FrameSyncEvent.WaitForCompletion();
FrameSyncEvent.IssueEvent();

If I comment these two lines out, the engine still appears to work fine.

So I want to know: why do they do this?


What's the precondition for HDR post-processing?

24 April 2014 - 03:36 AM

I have created a simple scene: a simple mesh with a single point light, with lighting computed by the Phong model.

Then I created a simple HDR post-process effect, based on the HDRRendering demo in the NVIDIA SDK 10:

http://developer.download.nvidia.com/SDK/10/direct3d/samples.html

But this technique just makes my whole scene brighter, including the dark areas.

Then I found an article, HDRRenderingInOpenGL:

https://www.google.com.hk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=61&cad=rja&uact=8&ved=0CCYQFjAAODw&url=http%3a%2f%2ftransporter-game%2egooglecode%2ecom%2ffiles%2fHDRRenderingInOpenGL%2epdf&ei=LNlYU7ePMqWbigepqYDoDQ&usg=AFQjCNG4iEoIWlA-fC5e8skaUMJd-ErSDQ&bvm=bv.65397613,d.aGc

It says that if we want to use HDR, we must have an HDR input texture, and all the input textures in that demo are in .hdr format.

If I create the render target with RGBA16F and render my scene onto that surface, is that surface an HDR input texture?

