Rendering to texture... strange 'lag'...

Started by acid2
7 comments, last by turnpast 19 years, 2 months ago
Hey, I'm making a game in Managed DirectX and I'm trying to add render-to-texture to it. When I call "RTS.EndScene(Filter.None)" my program freezes for about 2-3 seconds... any ideas? RTS is an instance of "RenderToSurface". Thanks, - aCiD2
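For context, here's roughly how I'm using it (a simplified sketch - the exact sizes, formats and variable names here are illustrative, not my real code):

// Roughly my setup (simplified; details are illustrative):
Texture tex = new Texture(device, 512, 512, 1,
                          Usage.RenderTarget, Format.X8R8G8B8, Pool.Default);
RenderToSurface RTS = new RenderToSurface(device, 512, 512,
                          Format.X8R8G8B8, true, DepthFormat.D16);

using (Surface surf = tex.GetSurfaceLevel(0))
{
    Viewport vp = new Viewport();
    vp.Width = 512; vp.Height = 512; vp.MaxZ = 1.0f;

    RTS.BeginScene(surf, vp);
    // ... draw the scene into the texture ...
    RTS.EndScene(Filter.None);   // <-- freezes for 2-3 seconds here
}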
Ollie"It is better to ask some of the questions than to know all the answers." ~ James Thurber[ mdxinfo | An iridescent tentacle | Game design patterns ]
** BUMP **
Ollie"It is better to ask some of the questions than to know all the answers." ~ James Thurber[ mdxinfo | An iridescent tentacle | Game design patterns ]
What is the format and pool of your render target? If it is not a compatible render target, RTS will create a compatible target and copy it to your surface when you call EndScene, which could account for the lag. NOTE: a compatible render target must be created in the Default pool and be of a size and format supported by your graphics hardware.
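You can ask D3D directly whether your format is usable as a render target - something like this (a sketch; adapter 0 and X8R8G8B8 for both formats are assumptions on my part):

bool ok = Manager.CheckDeviceFormat(
    0,                      // adapter ordinal (assumed)
    DeviceType.Hardware,
    Format.X8R8G8B8,        // adapter/display format (assumed)
    Usage.RenderTarget,
    ResourceType.Textures,
    Format.X8R8G8B8);       // the texture format you want to render into
// if ok is false, RTS renders to an intermediate target and copies it over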
It's a 512x512 texture created in the Default pool. I'm using the format X8R8G8B8 - quite standard. My card is an ATi Radeon 9800, and I'm pretty sure it supports that format...
Ollie"It is better to ask some of the questions than to know all the answers." ~ James Thurber[ mdxinfo | An iridescent tentacle | Game design patterns ]
I have had trouble getting RenderToSurface to work correctly myself, so I simply use Device.SetRenderTarget() instead.

Here is what I do.
tex = new Texture(device, flatSize.X, flatSize.Y, 1,
                  Usage.RenderTarget, texFormat, Pool.Default);
//...
device.BeginScene();
device.SetRenderTarget(0, tex.GetSurfaceLevel(0));            // draw into the texture
device.Clear(ClearFlags.Target, clearColor.ToArgb(), 1.0f, 0);
device.DrawUserPrimitives(PrimitiveType.TriangleStrip, 2, geom);
device.EndScene();
device.Present();
device.SetRenderTarget(0, device.GetSwapChain(0).GetBackBuffer(0, BackBufferType.Mono)); // restore the back buffer

It may not be totally correct, but it works without lag on my 9600.
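One thing worth adding: GetSurfaceLevel and GetBackBuffer both return Surface objects that should be disposed, so a slightly tidier version of the middle part might look like this (untested sketch, same variable names as above):

using (Surface target = tex.GetSurfaceLevel(0))
{
    device.SetRenderTarget(0, target);
    device.Clear(ClearFlags.Target, clearColor.ToArgb(), 1.0f, 0);
    device.BeginScene();
    device.DrawUserPrimitives(PrimitiveType.TriangleStrip, 2, geom);
    device.EndScene();
}
using (Surface back = device.GetBackBuffer(0, 0, BackBufferType.Mono))
{
    device.SetRenderTarget(0, back);   // restore the back buffer for the main scene
}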
Hey, cool solution! It works perfectly, but I'm not totally sure about it - is this just as reliable as using an actual RenderToSurface?
Ollie"It is better to ask some of the questions than to know all the answers." ~ James Thurber[ mdxinfo | An iridescent tentacle | Game design patterns ]
Just a stab in the dark here,


If I remember correctly, a 'surface' is considered to exist in system memory, not video memory.

This would explain the lag: the rendered data has to be passed from the video card back to main memory.

Setting a render target, on the other hand, specifies a video-memory rendering surface, so there is no need to copy anything back to system RAM.

Again, just a thought =)
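For what it's worth, the kind of video-to-system copy I mean is what you'd get if you did it explicitly, like this (sketch only; the size and format are assumed):

using (Surface vidTarget = tex.GetSurfaceLevel(0))
using (Surface sysCopy = device.CreateOffscreenPlainSurface(
           512, 512, Format.X8R8G8B8, Pool.SystemMemory))
{
    // reading a render target back over the bus stalls the pipeline
    device.GetRenderTargetData(vidTarget, sysCopy);
}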

Raymond Jacobs, Owner - Ethereal Darkness Interactive
www.EDIGames.com - EDIGamesCompany - @EDIGames

RenderToSurface has worked just as well before - it is meant to be the preferred method. I just can't understand why I am now getting this lag...
Ollie"It is better to ask some of the questions than to know all the answers." ~ James Thurber[ mdxinfo | An iridescent tentacle | Game design patterns ]
Quote: Original post by acid2
Hey, cool solution! It works perfectly, but I'm not totally sure about it - is this just as reliable as using an actual RenderToSurface?


I imagine that RenderToSurface is doing a very similar thing inside.

This topic is closed to new replies.
