
NickUdell

Members
  • Content count

    46
  • Joined

  • Last visited

Community Reputation

292 Neutral

About NickUdell

  • Rank
    Member

Personal Information

  • Location
    Southampton, United Kingdom
  1. It looks nice, but I'd definitely recommend using a GUISkin to change those buttons.
  2. I'm trying to set up keyboard and mouse controls for a space game using SlimDX and RawInput. My current code is as follows:

[CODE]
Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None);
Device.KeyboardInput += new EventHandler<KeyboardInputEventArgs>(keyboardInput);
Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None);
Device.MouseInput += new EventHandler<MouseInputEventArgs>(mouseInput);
[/CODE]

However, I read here: http://code.google.com/p/slimdx/issues/detail?id=785 that for WPF I need to use a different overload of Device.RegisterDevice(), as well as passing messages along with Device.HandleMessage(IntPtr message). I've found the correct overload of RegisterDevice(), which is:

[CODE]
RegisterDevice(UsagePage usagePage, UsageId usageId, DeviceFlags flags, IntPtr target, bool addThreadFilter)
[/CODE]

What I can't work out, though, is: 1) Now that I have to use a target, what am I meant to set as the target? 2) Where do I get this IntPtr message from?
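My best guess at how these fit together is below (untested sketch: HwndSource and the hook signature are standard WPF, but I'm only assuming that the lParam of WM_INPUT is what Device.HandleMessage wants):

[CODE]
// Untested guess at wiring RawInput up in a WPF window.
// WindowInteropHelper / HwndSource are standard WPF interop types;
// the HandleMessage call is what the linked issue describes.
IntPtr hwnd = new WindowInteropHelper(this).Handle;
Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None, hwnd, false);
Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None, hwnd, false);

HwndSource source = HwndSource.FromHwnd(hwnd);
source.AddHook((IntPtr wnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled) =>
{
    const int WM_INPUT = 0x00FF;
    if (msg == WM_INPUT)
        Device.HandleMessage(lParam); // assumption: lParam carries the raw input message
    return IntPtr.Zero;
});
[/CODE]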
  3. [quote name='Brother Bob' timestamp='1342098194' post='4958382'] The position in the matrix you mention is what move the vertices when you multiply them by the matrix. In the end, you have to move everything into a unit size cube (z-range differ between OpenGL and Direct3D though) centered at the origin. This cube cannot change so you must move the vertices into the cube, not move the cube to around the vertices. This is why documentation says you move the world and not the viewpoint. The two are conceptually identical though so it is a matter of how you look at it. [/quote] Now that makes a lot of sense. I always assumed they were different techniques. Thanks very much.
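Just to check my understanding, the two phrasings should come out to the same matrix. A sketch using SlimDX's Matrix type (I'm assuming the Translation and Invert calls here; double-check the names against the docs):

[CODE]
// "Move the camera" vs. "move the world" produce the same view matrix.
Vector3 cameraPos = new Vector3(0, 0, -10);

// Move the camera: build its world transform, then invert it.
Matrix cameraWorld = Matrix.Translation(cameraPos);
Matrix view = Matrix.Invert(cameraWorld);

// Move the world: translate everything by the negated camera position.
Matrix viewDirect = Matrix.Translation(-cameraPos);
// view and viewDirect should be identical.
[/CODE]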
  4. When I was learning 3D a couple of years back, fixed function was still raging and we were taught to move everything backwards when the player moved forwards, as opposed to moving the camera. But since you can always supply a position in the view matrix, what's the actual point of moving the whole world around you? Are there any advantages to either method? Sorry if this is a duplicate; I struggled to think what the correct search terms for this would be. Thanks.
  5. For pet projects, voxels can make a lot of sense. They allow you to build a world by simply editing a few noise functions and move on: no hastily-made tools that need to be rebuilt a year later, no mucking around in Blender. For programmers trying to learn how to make a game, with the aim of getting a team together once they have the basic skills, this is ideal. They can leverage procedural content generation to provide them with nice-looking (usually) visuals, without having to show off their terrible 3D modelling / design skills. It's also inherently and intuitively deformable, and requires that the developer pick up some very useful knowledge of compression, multi-threading, GPU computing and optimization. All of these skills transfer to a more mature project with a real team, as well as typically encompassing the standard skill-set required to code a game. There's also the allure of games like Minecraft and the new tech id are toying with. People are seeing voxel tech more and more, and it's intuitively linked to their understanding of matter. They figure it's easier to work with, overall, than building the shell of a thing. My first pet project was a Minecraft-like infinite terrain generator (actually more infinite than Minecraft, as Minecraft uses a height limit) using 3D Perlin noise and perfect cubes at 1/4 the size of Minecraft's cubes. It ran fairly well on my admittedly high-end machine, even with all the fancy shaders I could muster. I theorize that for marching cubes I had more than enough resolution to produce a realistic terrain with enough detail. You see, these days terrain is still fairly low-poly. Thanks to normal mapping and tessellation you can get away with a fairly low-res voxel base (perhaps 1 voxel per metre). I got bored before messing around with marching cubes, though, so I can't be certain about the performance.
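The noise-driven approach is tiny in code terms. A sketch of what I mean, where Noise3 is a stand-in for any 3D Perlin/simplex implementation (not a real library call):

[CODE]
// Hypothetical sketch: deciding solid vs. empty voxels from 3D noise.
// Noise3 stands in for your favourite 3D Perlin/simplex function.
bool IsSolid(int x, int y, int z)
{
    // Density falls off with height so the terrain has a ground level,
    // while the noise carves overhangs and caves.
    float density = -y * 0.05f + Noise3(x * 0.02f, y * 0.02f, z * 0.02f);
    return density > 0.0f;
}
[/CODE]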
  6. I've been working on a game made in SlimDX for the last year or so. I've gotta admit, I'm madly in love with the platform. It emulates the native C++ DirectX functionality so well that nine times out of ten you can take a C++ DX tutorial and port it to the equivalent SlimDX in your head. Very useful stuff, that. I'd recommend SlimDX over XNA if you want immediate access to D3D11; if not, then you're probably going to profit from the sheer quantity of resources available for XNA.
  7. A couple of DX11 sites I used while learning (and I'm still learning) SlimDX Direct3D11: [url="http://www.rastertek.com/"]rastertek[/url] for D3D11 and HLSL [url="http://rbwhitaker.wikidot.com/hlsl-tutorials"]RB Whitaker[/url] which, despite sounding like a purveyor of fine coffees, has some amazing HLSL tutorials [url="http://d3dtutorials.blogspot.co.uk/"]This blog[/url] is not as good as rastertek, which I couldn't recommend more (some site formatting issues notwithstanding), but it also helps cover a few things. Honestly the best idea is to read through the basic tutorials on rastertek, think about what you want to make, think about how you would make it, and then start googling for techniques and ideas available based on the beginner knowledge you picked up from that first read-through.
  8. To the best of my knowledge, the textures in a Texture1DArray must all be the same length. So what makes a Texture1DArray containing 10 Texture1Ds, each with a width of 256 texels, different from a Texture2D of size 256x10? As far as I can tell they both take up one texture register and the same amount of memory, and are even accessed in the same way, so is there any real difference in implementation, or is the only difference the name? I assume the same argument applies to Texture2D arrays and Texture3Ds too.
  9. Hmm, I see your point. I'll do a GPU implementation for now, then, and when it comes to optimization later I'll write up a CPU implementation and test them side by side.
  10. I was planning to build the CPU implementation on a separate thread so I could run it with low priority and build the textures slowly. It's a space game, and I'm actually limiting the number of procedurally-textured objects on screen at any one time to 8 or 9. Due to the distances involved, these will update very slowly, so I have on the order of several minutes to actually generate each new texture. That in turn lets me build very detailed (many octaves of fBm noise) textures without the stuttering you'd get from the render thread waiting on the GPU while it generates a large texture.
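Roughly what I have in mind (sketch only; GenerateTexturePixels and completedTextures are hypothetical names for my generator and a thread-safe hand-off queue):

[CODE]
// Generate texture data on a low-priority background thread so the
// render thread never stalls; hand the result back through a queue.
var worker = new Thread(() =>
{
    byte[] pixels = GenerateTexturePixels(); // hypothetical fBm generator
    completedTextures.Enqueue(pixels);       // e.g. a ConcurrentQueue<byte[]>
});
worker.IsBackground = true;
worker.Priority = ThreadPriority.BelowNormal;
worker.Start();
[/CODE]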
  11. Protip: if you highlight your code in the post editor and select the "code" button (symbol: <>), it'll be a lot easier for us to understand. Like so:

public static void Main() { Console.WriteLine("This is ugly"); }

or:

[CODE]
public static void Main()
{
    Console.WriteLine("This is beautiful");
}
[/CODE]
  12. I'm using SlimDX and Direct3D11 and I have a Texture2D created by loading from a file. I want to use this, and a couple of other textures to build a set of procedural textures at runtime - however I don't want to use the GPU as it's already very busy doing other things. Unfortunately I can't find any way of sampling a Texture2D outside of HLSL or using Texture2D.SaveToStream() and skipping through to the pixels I want manually. Is there a simpler way of doing it? Something similar to HLSL's Texture2D.Sample(sampler,coords) or am I going to have to wade through the data stream? Thanks
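My current best guess is to copy into a staging texture and map that; something like the following (untested, and the SlimDX signatures are from memory, so treat them as assumptions):

[CODE]
// Untested guess: copy the texture into a staging resource the CPU can read.
// Double-check the overload signatures against the SlimDX docs.
var desc = texture.Description;
desc.Usage = ResourceUsage.Staging;
desc.CpuAccessFlags = CpuAccessFlags.Read;
desc.BindFlags = BindFlags.None;

var staging = new Texture2D(device, desc);
device.ImmediateContext.CopyResource(texture, staging);

DataBox box = device.ImmediateContext.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
// Read the texel at (x, y), assuming a 4-byte format such as R8G8B8A8_UNorm:
box.Data.Position = y * box.RowPitch + x * 4;
int texel = box.Data.Read<int>();
device.ImmediateContext.UnmapSubresource(staging, 0);
[/CODE]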
  13. As usual that's looking really cool. What's your target platform for this?
  14. [quote name='oggs91' timestamp='1338538682' post='4945242'] but if you build your triangles in a shader, how would you do physics with the terrain ? [/quote] Well, you'd be building the triangles from a 3D texture; essentially, that would be the set of voxels that make up your terrain. You could run physics against that, but as kauna said, it's probably best to simplify it a little into something more suited to the calculations you want to do.
  15. Ah, the old Minecraft gold rush. I made a voxel-based Minecraft clone a while back, though I wrote my own renderer for it, so I can't say if there are any engines out there that'll do it for you. If not, you might want to consider sending a 3D texture to a geometry shader. The 3D texture would hold the surrounding voxels (only needing maybe a 100x100x100 texture, 256x256x256 at the most, of R8_UINT) and the shader builds the vertices from there, either using marching cubes or, if you want a cubic world like Minecraft, by simply finding empty space and creating faces. I can't vouch for whether this would be faster than doing it on the CPU, but it seems quite separable.
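The cubic-world case is simple enough to sketch on the CPU side too; the geometry shader version is the same test per voxel. IsSolid, EmitFace and Direction are hypothetical names for your volume lookup and output:

[CODE]
// Sketch of the cubic-world approach: emit a face only where a solid
// voxel borders an empty one. IsSolid is a hypothetical lookup into
// the voxel volume (the 3D texture described above).
void EmitVisibleFaces(int x, int y, int z)
{
    if (!IsSolid(x, y, z)) return;
    if (!IsSolid(x + 1, y, z)) EmitFace(x, y, z, Direction.PositiveX);
    if (!IsSolid(x - 1, y, z)) EmitFace(x, y, z, Direction.NegativeX);
    if (!IsSolid(x, y + 1, z)) EmitFace(x, y, z, Direction.PositiveY);
    if (!IsSolid(x, y - 1, z)) EmitFace(x, y, z, Direction.NegativeY);
    if (!IsSolid(x, y, z + 1)) EmitFace(x, y, z, Direction.PositiveZ);
    if (!IsSolid(x, y, z - 1)) EmitFace(x, y, z, Direction.NegativeZ);
}
[/CODE]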