
Member Since 05 May 2012
Offline Last Active Sep 25 2013 06:39 AM

Topics I've Started

SlimDX RawInput and WPF

23 February 2013 - 07:54 AM

I'm trying to set up keyboard and mouse controls for a space game using SlimDX and RawInput. My current code is as follows:


// Register for raw keyboard input and hook up the event handler
Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None);
Device.KeyboardInput += new EventHandler<KeyboardInputEventArgs>(keyboardInput);

// Same again for the mouse
Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None);
Device.MouseInput += new EventHandler<MouseInputEventArgs>(mouseInput);

However, I read here (http://code.google.com/p/slimdx/issues/detail?id=785) that for WPF I need to use a different overload of Device.RegisterDevice(), as well as forwarding window messages myself via Device.HandleMessage(IntPtr message).


I've found the correct overload of RegisterDevice(), which is:

RegisterDevice(UsagePage usagePage, UsageId usageId, DeviceFlags flags, IntPtr target, bool addThreadFilter)

What I can't work out, though, is:


1) Now that I have to use a target, what am I meant to set as a target?

2) Where do I get this IntPtr message from?
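
For what it's worth, my best guess so far (completely untested, and I'm not at all sure HandleMessage wants lParam) is to grab the window's HWND through HwndSource, pass that as the target, and forward WM_INPUT messages by hand:

```csharp
// Untested guess: use the WPF window's HWND as the target, then hook the
// window procedure and hand WM_INPUT messages to SlimDX manually.
const int WM_INPUT = 0x00FF;

var source = (HwndSource)PresentationSource.FromVisual(this); // 'this' = the WPF Window
Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None,
                      source.Handle, false);
Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None,
                      source.Handle, false);

source.AddHook((IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled) =>
{
    if (msg == WM_INPUT)
        Device.HandleMessage(lParam); // guessing lParam is the HRAWINPUT handle here
    return IntPtr.Zero;
});
```

Is that roughly the right shape, or am I way off?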

Any point in moving the world in reverse these days?

12 July 2012 - 06:55 AM

When I was learning 3D a couple of years back, fixed function was still raging and we were taught to move everything backwards when the player moved forwards, as opposed to moving the camera. But since you can always supply a position in the view matrix, what's the actual point in moving the whole world around you? Are there any advantages over either method?
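
For what it's worth, the way I currently understand it (sketched with SlimDX's Matrix type, for a camera with no rotation) is that the two approaches produce exactly the same matrix, which is what makes me wonder:

```csharp
// As I understand it, "move the world in reverse" and "move the camera"
// give the same view transform for a translation-only camera:
Vector3 cameraPos = new Vector3(10, 0, 5);

// "Move the world in reverse": translate everything by -cameraPos.
Matrix worldShift = Matrix.Translation(-cameraPos);

// "Move the camera": build the camera's world matrix and invert it.
Matrix cameraWorld = Matrix.Translation(cameraPos);
Matrix view = Matrix.Invert(cameraWorld);

// worldShift and view should be identical, so the difference seems
// purely conceptual - but maybe I'm missing something.
```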

Sorry if this is a duplicate; I struggled to think of the correct search terms for this.


Technical Differences between Texture1DArray and Texture2D?

14 June 2012 - 09:52 AM

To the best of my knowledge, all the slices in a Texture1DArray must be the same length. So what makes a Texture1DArray containing ten Texture1Ds, each 256 texels wide, different from a Texture2D of size 256x10?

As far as I can tell they each take up one texture register and the same amount of memory, and are even accessed in much the same way. So is there any real difference in implementation, or is the only difference the name?

I assume the same argument applies to Texture2D arrays and Texture3Ds too.
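
To make the comparison concrete, these are the two resources I have in mind (assuming I've got the SlimDX description field names right):

```csharp
// Ten 256-texel slices in a 1D array...
var arrayDesc = new Texture1DDescription
{
    Width = 256,
    ArraySize = 10,
    MipLevels = 1,
    Format = Format.R8G8B8A8_UNorm,
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource
};

// ...versus one 256x10 2D texture.
var texDesc = new Texture2DDescription
{
    Width = 256,
    Height = 10,
    ArraySize = 1,
    MipLevels = 1,
    Format = Format.R8G8B8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource
};
```

The one behavioural difference I've been able to spot is filtering and mipmapping: the sampler never blends between array slices (the slice index just gets rounded), whereas a Texture2D filters across its rows, and mips of a Texture1DArray only shrink along the width. Is that it, or is there more to it?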

Sample a Texture2D with the CPU

13 June 2012 - 07:03 AM

I'm using SlimDX and Direct3D11, and I have a Texture2D created by loading from a file. I want to use this and a couple of other textures to build a set of procedural textures at runtime; however, I don't want to use the GPU, as it's already very busy doing other things.

Unfortunately, I can't find any way of sampling a Texture2D outside of HLSL, other than using Texture2D.SaveToStream() and skipping through to the pixels I want manually.

Is there a simpler way of doing it, something similar to HLSL's Texture2D.Sample(sampler, coords), or am I going to have to wade through the data stream?
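
The closest thing I've come up with myself is copying to a staging texture and mapping that, something like the untested sketch below (texture, device, context, x and y are my own variables, the format is assumed to be 32-bit RGBA, and I may well have the MapSubresource overload wrong):

```csharp
// Untested sketch: copy the texture to a staging resource the CPU can read,
// map it, then seek to the texel I want.
var desc = texture.Description;
desc.Usage = ResourceUsage.Staging;
desc.BindFlags = BindFlags.None;
desc.CpuAccessFlags = CpuAccessFlags.Read;

using (var staging = new Texture2D(device, desc))
{
    context.CopyResource(texture, staging);

    DataBox box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
    try
    {
        // "Point sample" texel (x, y): rows are RowPitch bytes apart.
        box.Data.Position = y * box.RowPitch + x * 4;
        int rgba = box.Data.Read<int>();
    }
    finally
    {
        context.UnmapSubresource(staging, 0);
    }
}
```

That feels like a lot of machinery for one texel, though, which is why I'm asking.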


Deferred Shading and Many Large Procedural Textures

27 May 2012 - 05:26 AM

Hi, I'm using SlimDX and Direct3D11, and I'm running into a bit of a problem when it comes to rendering textures.

My game is a procedural galaxy, where the content is created before the game begins (as opposed to dynamically). This includes planetary surface textures, although they'll really be stored as a set of parameters and only created when the player is close to a planet, to save on hard-disk space and memory. The problem with the surface textures is that they have to be very large indeed: I'm aiming for stylistic cube planets, which means I need a 3x2 net of 2048x2048 textures, with matching normal and specular maps, for when the player is up close. I've read a few articles on how to generate these textures, and I think I now have Perlin noise in my grasp, but I'm struggling to work out how I'm actually going to render these textures once I've created them.

I have a deferred rendering pipeline set up where I pass in my geometry, textures, normals, specular maps and lights, and calculate the various light maps, colour maps and position maps, which I combine later. I used to combine textures into a couple of texture maps, send them to the GPU and work from there; however, with the sheer size of some of the textures involved, that's not an option any more.

Has anybody had this problem before? If so, how did you get around it?