Tape_Worm

Member Since 21 Apr 2000
Offline Last Active Today, 07:33 AM
-----

Posts I've Made

In Topic: Weird corruption when drawing text with DrawString

08 March 2014 - 06:59 PM

OK, so I finally found another person who has run into the same problem (the thread is from 2012... I'm astounded it's still an issue):

http://social.msdn.microsoft.com/forums/windowsdesktop/en-US/7f02b531-529d-4940-a220-cde46e61e88f/windows-8-garbled-text-with-gdi-graphicsdrawstring

 

In the end I used the GraphicsPath method described by Gianpaolo64 (in the aforementioned link) and that solved my issue.
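For anyone who lands here with the same garbled-text problem, the workaround boils down to rendering the string as a filled GraphicsPath instead of calling Graphics.DrawString. The sketch below is only from memory; the font, size, position, and the existing "graphics" object are placeholders, not what I actually used:

    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Render the string as a filled path instead of calling Graphics.DrawString().
    using (var path = new GraphicsPath())
    using (var family = new FontFamily("Arial"))
    using (var brush = new SolidBrush(Color.Black))
    {
        graphics.SmoothingMode = SmoothingMode.AntiAlias;

        // AddString takes the em size in the units of the Graphics object,
        // so convert from points (14pt here) using the DPI.
        float emSize = graphics.DpiY * 14.0f / 72.0f;

        path.AddString("Hello, world", family, (int)FontStyle.Regular, emSize,
            new PointF(10, 10), StringFormat.GenericTypographic);

        graphics.FillPath(brush, path);
    }

Because the text becomes an ordinary filled path, you also get control over its anti-aliasing through SmoothingMode, which is a nice side effect.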

 


In Topic: Per Pixel Lighting

16 October 2013 - 11:22 AM

This is your problem:

    Output.Color = baseColor *(lightFactor+xAmbient);
    Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);

It should be:


    Output.Color = baseColor *(lightFactor+xAmbient) * tex2D(TextureSampler, PSIn.TextureCoords);

Or something similar to that.


In Topic: Display Images fast with SlimDX .

19 August 2013 - 11:18 PM

Looking at your code, I'd move the TCP read and filling of the memory stream to another thread.  That way it can handle that part in the background and free up some CPU time.

 

Again, as for decoding from the memory stream using FromStream: I'd create a dynamic texture (outside of the loop) and just continuously update it.  This is where putting your work on another thread could help.  You could decode the image data on the other thread and copy it into a format that's friendly to the texture; then, once the memory stream is done filling (and you're back on your main thread), you lock the dynamic texture, dump the contents directly into it, and unlock.

 

You can still preserve the behaviour you have now.  That is: no image yet, you show nothing (an empty texture); the first image arrives, it gets displayed; a subsequent image arrives, it gets displayed; nothing else arrives, so you just keep using the last image.  You need only update the texture after the memory stream has been filled with data.

 

Again, that's how I'd consider handling it, and it may not be the best way.  But I do know that creating and destroying the resource every frame is just bad for performance, and I don't think there's anything you can do to improve it with the code as it is.
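To give a rough idea of the split I'm describing, here's a sketch.  The helpers ReadFrameFromTcp, DecodeToBgra and CopyIntoDynamicTexture are made-up placeholders for your own network read, decode, and texture-upload code, so don't treat this as a drop-in implementation:

    using System;

    class FrameReceiver
    {
        private readonly object _frameLock = new object();
        private byte[] _latestFrame;        // decoded pixels, ready to copy into the texture
        private bool _frameReady;
        private volatile bool _running = true;

        // Background thread: reads from the socket and decodes.  Never touches Direct3D.
        public void ReceiveLoop()
        {
            while (_running)
            {
                byte[] encoded = ReadFrameFromTcp();     // your blocking network read
                byte[] decoded = DecodeToBgra(encoded);  // your decode into raw 32-bit pixels

                lock (_frameLock)
                {
                    _latestFrame = decoded;
                    _frameReady = true;
                }
            }
        }

        // Main (render) thread: call once per frame; only uploads when a new frame arrived.
        public void UpdateTexture()
        {
            byte[] frame = null;

            lock (_frameLock)
            {
                if (_frameReady)
                {
                    frame = _latestFrame;
                    _frameReady = false;
                }
            }

            if (frame != null)
                CopyIntoDynamicTexture(frame);           // lock the dynamic texture, write, unlock
        }

        public void Stop()
        {
            _running = false;
        }

        // Placeholders -- substitute your own implementations.
        private byte[] ReadFrameFromTcp() { throw new NotImplementedException(); }
        private byte[] DecodeToBgra(byte[] encoded) { throw new NotImplementedException(); }
        private void CopyIntoDynamicTexture(byte[] pixels) { throw new NotImplementedException(); }
    }

You'd spin the receive loop up on a background thread (e.g. new Thread(receiver.ReceiveLoop) { IsBackground = true }.Start()) and call UpdateTexture() from your render loop.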


In Topic: Using both D3D9 and D3D11

19 August 2013 - 10:49 PM

This is probably a very stupid question and I apologise if it is. But can't you just use the D3D9 feature levels in DX11 for the D3D9 renderer ?

It's not a stupid question at all; it's definitely something developers should look into if they wish to support Direct3D 9 video devices.  However, there are caveats.  For example, if the dev wanted to run their application on Windows XP, they couldn't use the Direct3D 11 runtime because it won't work on XP.
 
Also, from personal experience, the D3D9 feature levels are a pain in the ass to work with.  I've run into several little "gotchas" when dealing with the D3D9 feature levels (some of which have no documentation, or very sparse documentation), to the point that I've nearly given up caring whether people can use them.
 
An example of one of the issues I ran into was copying a texture into a staging texture.  Apparently, under feature level 9 you can't copy a resource into a staging resource if the source resource is in GPU memory and has a shader view attached to it.  So, if you created a texture with a shader view, you're out of luck.  So, create one without a shader view, right?  Unfortunately, no, that leads to another set of issues (e.g. you can't bind it to a pixel shader, which is kinda necessary).  There was that and a few other small issues (multi-monitor was especially painful).

 

Another issue with feature level 9 is that the highest vertex/pixel shader model supported is SM2_x.  If you wanted to support SM3, then you're out of luck.
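For reference, requesting the 9_x levels when creating the Direct3D 11 device looks roughly like this in SlimDX (written from memory, so double-check it against the SlimDX docs):

    using SlimDX.Direct3D11;

    // The runtime walks the list in order and creates the device with the first
    // feature level the hardware can actually support.
    var device = new Device(DriverType.Hardware, DeviceCreationFlags.None,
        FeatureLevel.Level_11_0,
        FeatureLevel.Level_10_1,
        FeatureLevel.Level_10_0,
        FeatureLevel.Level_9_3,
        FeatureLevel.Level_9_1);

From there on it's the same D3D11 API; you just have to live with the restrictions above (the SM2_x cap, the staging copy issue, and so on) when the device comes back at a 9_x level.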
 

So in my own personal renderer, I want it to be able to support D3D9 and D3D11. To do this, I thought about having a master renderer interface that two classes will derive from. One will implement the D3D9 renderer and the other will implement the D3D11 renderer. However, this requires that I statically link to both D3D9 and D3D11, which will (I think) require that both dlls be present at run time. Is there any way I can avoid this?

You could design a system to load your renderer dynamically as a DLL.  That way the DLL would be the only thing linking to the libraries in question; your host application/API would work against an abstracted interface, so it wouldn't care about the dependencies.  So, for example, you could detect whether the user is running XP and force-load the D3D9 renderer DLL, or, if they're not, load the D3D11 renderer DLL.
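As a sketch of what I mean (the IRenderer interface, the type names, and the file names here are all made up; a real design would obviously be richer):

    using System;
    using System.Reflection;

    // The renderer interface your host application codes against.  Both the
    // D3D9 and D3D11 renderer assemblies would implement this.
    public interface IRenderer
    {
        void Initialize();
        void Render();
    }

    public static class RendererLoader
    {
        public static IRenderer Load(bool useD3D9)
        {
            // Pick the plug-in at runtime (e.g. based on the OS or the user's hardware).
            string path = useD3D9 ? "Renderer.D3D9.dll" : "Renderer.D3D11.dll";

            Assembly assembly = Assembly.LoadFrom(path);

            // Grab the first concrete type in the plug-in that implements IRenderer.
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IRenderer).IsAssignableFrom(type) && !type.IsAbstract)
                    return (IRenderer)Activator.CreateInstance(type);
            }

            throw new InvalidOperationException("No IRenderer implementation found in " + path);
        }
    }

At startup, something like IRenderer renderer = RendererLoader.Load(Environment.OSVersion.Version.Major < 6); would pick the D3D9 plug-in on XP and the D3D11 plug-in everywhere else.  The host never links against the Direct3D assemblies itself; only the plug-in DLLs do.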
 
If you put both renderers in the same DLL then yes, you'll need to link against both.  Whether the end user would then need both D3D11 and D3D9 installed to run it, I can't answer with any degree of certainty because it's been an incredibly long time since I've dealt with that stuff, but I'm going to guess that yes, they'd need both.


In Topic: Display Images fast with SlimDX .

19 August 2013 - 10:31 PM

    var tx = SlimDX.Direct3D9.Texture.FromStream(device, ms, 0, 0, 0,
        SlimDX.Direct3D9.Usage.None, SlimDX.Direct3D9.Format.X8R8G8B8,
        SlimDX.Direct3D9.Pool.Managed, SlimDX.Direct3D9.Filter.None,
        SlimDX.Direct3D9.Filter.None, 0);

That line is the problem.  You're calling that -every- frame and it's not a fast process to decode an image from a stream.  Aside from the decoding process, you're also creating and destroying a GPU resource every frame and that's just a big no-no for performance.  It's better to create a single resource for the lifetime of the application and update its contents.

 

If you need to stream in textures, there may be better ways to accomplish your goals. For example, you could use a dynamic texture and update its contents by locking it, decoding/writing the texture data, and unlocking it.  It might net you some performance improvement.  You could also detect when a new image appears in the stream and only update when you get that new image.  You could also read the image stream into a memory buffer on another thread and upload that buffer to the texture after it's finished filling.  There are many ways to achieve this, but creating a new texture, decoding an image into it, and then destroying it after display is not a good way to do it.
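To make the "create once, update forever" idea concrete, something along these lines is what I have in mind.  It assumes you've already decoded the incoming image into a raw 32-bit byte array (decodedPixels here) that matches the texture dimensions, and that device, width, and height are whatever you already have on hand; it's written from memory, so treat it as a starting point rather than tested code:

    // Created once, outside of the render loop.
    var texture = new SlimDX.Direct3D9.Texture(device, width, height, 1,
        SlimDX.Direct3D9.Usage.Dynamic,
        SlimDX.Direct3D9.Format.X8R8G8B8,
        SlimDX.Direct3D9.Pool.Default);

    // Called whenever a newly decoded frame is ready.
    SlimDX.DataRectangle rect = texture.LockRectangle(0, SlimDX.Direct3D9.LockFlags.Discard);
    try
    {
        int rowBytes = width * 4;   // 32 bits per pixel

        for (int y = 0; y < height; ++y)
        {
            // The locked surface pitch can be wider than width * 4, so copy row by row.
            rect.Data.Position = (long)y * rect.Pitch;
            rect.Data.Write(decodedPixels, y * rowBytes, rowBytes);
        }
    }
    finally
    {
        texture.UnlockRectangle(0);
    }

The texture lives for as long as you're displaying images; only its contents change, which avoids the create/destroy churn and the FromStream decode every frame.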

