NickUdell

Member
  • Content count: 46
  • Joined
  • Last visited
  • Community Reputation: 292 Neutral

About NickUdell
  • Rank: Member
  1. NickUdell

    Title Screenshot Rendered

    It looks nice, but I'd definitely recommend using a GUISkin to change those buttons
  2. I'm trying to set up keyboard and mouse controls for a space game using SlimDX and RawInput. My current code is as follows:

     Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None);
     Device.KeyboardInput += new EventHandler<KeyboardInputEventArgs>(keyboardInput);
     Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None);
     Device.MouseInput += new EventHandler<MouseInputEventArgs>(mouseInput);

     However, I read here: http://code.google.com/p/slimdx/issues/detail?id=785 that for WPF I need to use a different overload of Device.RegisterDevice(), as well as feed messages to the device via Device.HandleMessage(IntPtr message). I've found the correct overload of RegisterDevice(), which is:

     RegisterDevice(UsagePage usagePage, UsageId usageId, DeviceFlags flags, IntPtr target, bool addThreadFilter)

     What I can't work out, though, is:

     1) Now that I have to use a target, what am I meant to set as the target?
     2) Where do I get this IntPtr message from?
  3. Now that makes a lot of sense. I always assumed they were different techniques. Thanks very much.
  4. When I was learning 3D a couple of years back, fixed function was still raging and we were taught to move everything backwards when the player moved forwards, as opposed to moving the camera. But since you can always supply a position in the view matrix, what's the actual point of moving the whole world around you? Does either method have any advantages over the other? Sorry if this is a duplicate; I struggled to think what the correct search terms for this would be. Thanks
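For anyone puzzling over the same question: the two approaches are algebraically the same thing, because the view matrix is just the inverse of the camera's world transform. Translating the camera by p and translating the whole world by -p produce identical matrices. A minimal NumPy sketch (Python rather than the C#/SlimDX used elsewhere on this page, purely for illustration; rotation is omitted for brevity):

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

# Camera sitting at world position p.
p = np.array([3.0, 1.0, -5.0])

# "Move the camera": the view matrix is the inverse of the camera's world transform.
view_move_camera = np.linalg.inv(translation(p))

# "Move the world": translate everything by -p instead.
view_move_world = translation(-p)

# Both give the same view matrix, and both map the camera position to the origin.
assert np.allclose(view_move_camera, view_move_world)
assert np.allclose(view_move_world @ np.append(p, 1.0), [0.0, 0.0, 0.0, 1.0])
```

So "moving the world" was never a different technique, just the fixed-function way of stating the same transform; supplying a camera position to a view matrix builds the identical inverse under the hood.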
  5. NickUdell

    Don't start yet another voxel project

    For pet projects, voxels can make a lot of sense. They allow you to build a world by simply editing a few noise functions and moving on: no hastily-made tools that need to be rebuilt a year later, no mucking around in Blender. For programmers trying to learn how to make a game, with the aim of getting a team together once they have the basic skills, this is ideal. They can leverage procedural content generation to provide nice-looking (usually) visuals without having to show off their terrible 3D modelling / design skills. It's also inherently and intuitively deformable, and it requires that the developer pick up some very useful knowledge of compression, multi-threading, GPU computing and optimization. All of these skills transfer to a more mature project with a real team, as well as typically encompassing the standard skill-set required to code a game.

    There's also the allure of games like Minecraft and the new tech id is toying with. People are seeing voxel tech more and more, and it's intuitively linked to their understanding of matter. They figure it's easier to work with, overall, than building the shell of a thing.

    My first pet project was a Minecraft-like infinite terrain generator (actually more infinite than Minecraft, as Minecraft uses a height limit) using 3D Perlin noise and perfect cubes at 1/4 the size of Minecraft's cubes. It ran fairly well on my admittedly high-end machine, even with all the fancy shaders I could muster. I suspect that with marching cubes I had more than enough resolution to produce realistic terrain with plenty of detail. You see, these days terrain is still fairly low-poly: thanks to normal mapping and tessellation you can get away with a fairly low-res voxel base (perhaps 1 voxel per metre). I got bored before messing around with marching cubes, though, so I can't be certain about the performance.
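The "editing a few noise functions" workflow mentioned above usually means fractional Brownian motion: summing several octaves of a base noise at increasing frequency and decreasing amplitude. A rough sketch of the idea, using a cheap hash-based value noise as a stand-in for proper Perlin noise (all names and constants here are illustrative, not from any library):

```python
import math

def value_noise(x, y):
    """Cheap deterministic lattice noise in [0, 1]: hash the integer corners
    of the containing cell and bilinearly interpolate between them."""
    def hash01(ix, iy):
        h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # Smoothstep fade so the lattice doesn't show as hard creases.
    fx = fx * fx * (3 - 2 * fx)
    fy = fy * fy * (3 - 2 * fy)
    a = hash01(x0, y0) * (1 - fx) + hash01(x0 + 1, y0) * fx
    b = hash01(x0, y0 + 1) * (1 - fx) + hash01(x0 + 1, y0 + 1) * fx
    return a * (1 - fy) + b * fy

def fbm(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractional Brownian motion: octaves of noise at doubling frequency
    and halving amplitude. More octaves = more fine detail (and more cost)."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total

height = fbm(1.5, 2.25)  # terrain height at world position (1.5, 2.25)
```

Swapping the 2D sample point for a 3D one (and Perlin or simplex for the base noise) gives the density field that a Minecraft-style or marching-cubes terrain is carved from.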
  6. I've been working on a game made in SlimDX for the last year or so. I've gotta admit, I'm madly in love with the platform. It emulates the native C++ DirectX functionality so well that nine times out of ten you can flat out take a C++ DX tutorial and immediately port it to equivalent SlimDX in your head. Very useful stuff, that. I'd recommend SlimDX over XNA if you want immediate access to D3D11; if not, then you're probably going to profit from the sheer quantity of resources available for XNA.
  7. A couple of DX11 sites I used while learning (and I'm still learning) SlimDX Direct3D11: rastertek for D3D11 and HLSL, and RB Whitaker which, despite sounding like a purveyor of fine coffees, has some amazing HLSL tutorials. This blog is not as good as rastertek, which I couldn't recommend more (some site formatting issues notwithstanding), but it also helps cover a few things. Honestly the best idea is to read through the basic tutorials on rastertek, think about what you want to make, think about how you would make it, and then start googling for techniques and ideas based on the beginner knowledge you picked up from that first read-through.
  8. To the best of my knowledge, all slices of a Texture1DArray must be the same length. So what makes a Texture1DArray containing 10 Texture1Ds, each with a width of 256 texels, different from a Texture2D of size 256x10? As far as I can tell they both take up one texture register and the same amount of memory, and are even accessed in the same way, so is there any real difference in implementation, or is the only difference the name? I assume the same argument applies to Texture2DArrays and Texture3Ds too.
  9. NickUdell

    Sample a Texture2D with the CPU

    Hmm I see your point, I'll do a GPU implementation for now then and when it comes to optimization later I'll write up a CPU implementation and test them side by side.
  10. NickUdell

    Sample a Texture2D with the CPU

    I was planning to build the CPU implementation on a separate thread so I could run it with low priority and build the textures slowly. It's a space game, and I'm actually limiting the number of procedurally-textured objects on screen at any one time to 8 or 9. Due to the distances involved, this set will update very slowly, so I have on the order of several minutes to actually generate the new textures. That in turn allows me to build very detailed (many octaves of fBm noise) texture representations without causing stuttering from the GPU generating a large texture while my render thread waits.
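The producer/consumer structure described above, where a render loop queues texture requests and a background worker fills them at its leisure, can be sketched roughly like this. (Python for illustration only; Python threads don't expose the OS priority setting mentioned above, and `build_texture` is a hypothetical stand-in for the expensive fBm texture build.)

```python
import queue
import threading

def build_texture(seed, size=64):
    """Hypothetical stand-in for the slow procedural texture build."""
    return [[(seed * (x + 1) * (y + 1)) % 256 for x in range(size)]
            for y in range(size)]

requests = queue.Queue()   # render thread pushes work here
finished = {}              # worker publishes completed textures here

def worker():
    # Runs in the background; the render thread never blocks on it.
    while True:
        seed = requests.get()
        if seed is None:           # sentinel: shut the worker down
            break
        finished[seed] = build_texture(seed)
        requests.task_done()

thread = threading.Thread(target=worker, daemon=True)
thread.start()

# Render thread: queue new objects' textures, keep drawing with the old
# ones in the meantime, and pick up results whenever they appear.
for seed in (1, 2, 3):
    requests.put(seed)
requests.join()  # a real game would poll `finished` each frame, not join
```

The key property is the one the post relies on: the render thread only ever enqueues and polls, so a texture taking minutes to generate never stalls a frame.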
  11. NickUdell

    DirectX Tutorial issues

    Protip: If you highlight your code in the post editor and select the "code" button (symbol: <>) it'll be a lot easier for us to understand. Like so:

    public static void main() { Console.WriteLine("This is ugly"); }

    or:

    public static void main()
    {
        Console.WriteLine("This is beautiful");
    }
  12. I'm using SlimDX and Direct3D11, and I have a Texture2D created by loading from a file. I want to use this, and a couple of other textures, to build a set of procedural textures at runtime. However, I don't want to use the GPU, as it's already very busy doing other things. Unfortunately I can't find any way of sampling a Texture2D outside of HLSL, other than using Texture2D.SaveToStream() and skipping through to the pixels I want manually. Is there a simpler way of doing it, something similar to HLSL's Texture2D.Sample(sampler, coords), or am I going to have to wade through the data stream? Thanks
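Failing a built-in CPU sampler, "skipping through to the pixels I want" comes down to simple offset arithmetic on the raw stream. A small sketch of that arithmetic (Python for illustration; `sample_rgba` is a hypothetical helper, and the separate `row_pitch` matters because D3D texture rows may be padded beyond `width * 4` bytes):

```python
def sample_rgba(data, width, row_pitch, x, y):
    """Read pixel (x, y) from a raw 32-bit-per-texel RGBA buffer.

    `row_pitch` is the stride in bytes between the starts of consecutive
    rows; with padding it can exceed width * 4, so never compute the row
    offset from `width` alone."""
    assert 0 <= x < width
    offset = y * row_pitch + x * 4
    r, g, b, a = data[offset:offset + 4]
    return r, g, b, a

# Tiny 2x2 test image, tightly packed (row_pitch = width * 4 = 8 bytes):
# row 0: red, green   row 1: blue, white
pixels = bytes([255, 0, 0, 255,   0, 255, 0, 255,
                0, 0, 255, 255,   255, 255, 255, 255])
assert sample_rgba(pixels, 2, 8, 1, 0) == (0, 255, 0, 255)  # green texel
```

This is point sampling only; anything like HLSL's bilinear `Sample` would need to fetch the four surrounding texels and interpolate, along the lines of the fade-and-lerp step in a noise function.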
  13. As usual that's looking really cool. What's your target platform for this?
  14. NickUdell

    Dynamic Deformable Terrain

    Well, you'd be building the triangles from a 3D texture; essentially, that texture would be the set of voxels that makes up your terrain. You could run physics against that, but as kauna said, it's probably best to simplify it a little into something more suited to the calculations you want to do.
  15. NickUdell

    Dynamic Deformable Terrain

    Ah, the old Minecraft gold rush. I made a voxel-based Minecraft clone a while back, but I made my own renderer for it, so I can't say if there are any engines out there that'll do it for you. If not, you might want to consider sending a 3D texture to a geometry shader. The 3D texture would hold the surrounding voxels (only needing maybe a 100x100x100 texture, 256x256x256 at the most, of R8_UINT) and the shader builds the vertices from there, either using marching cubes or, if you want a cubic world like Minecraft, by simply finding empty space and creating faces. I can't vouch for whether this would be faster than doing it on the CPU, but it seems quite separable.
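The "finding empty space and creating faces" step is easy to prototype on the CPU before committing to a geometry-shader version. A minimal sketch (Python/NumPy for illustration; a real mesher would emit actual vertex positions per face rather than these (cell, normal) descriptors):

```python
import numpy as np

def exposed_faces(voxels):
    """Emit one face for every solid cell whose neighbour is empty.

    `voxels` is a 3D boolean occupancy grid. A face is generated wherever
    a solid cell borders an empty cell or the edge of the grid, which is
    exactly the cubic-world case: interior faces between two solid cells
    are never visible, so they are skipped."""
    faces = []
    sx, sy, sz = voxels.shape
    normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for x in range(sx):
        for y in range(sy):
            for z in range(sz):
                if not voxels[x, y, z]:
                    continue
                for dx, dy, dz in normals:
                    nx, ny, nz = x + dx, y + dy, z + dz
                    outside = not (0 <= nx < sx and 0 <= ny < sy and 0 <= nz < sz)
                    if outside or not voxels[nx, ny, nz]:
                        faces.append(((x, y, z), (dx, dy, dz)))
    return faces

# A lone cube exposes all six of its faces.
grid = np.zeros((3, 3, 3), dtype=bool)
grid[1, 1, 1] = True
assert len(exposed_faces(grid)) == 6
```

Two adjacent cubes share a hidden face on each side, so they produce 10 faces instead of 12; that culling is where most of the triangle savings in a Minecraft-style renderer comes from.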