Super Llama

Members
  • Content count: 226
  • Joined
  • Last visited
Community Reputation

152 Neutral

About Super Llama

  • Rank: Member
  1. Oh okay, I was wondering where that came from; that makes sense too. Thanks for the explanations, this is exactly what I needed to know.
  2. OHH right, that makes perfect sense, thank you! I knew I was missing something obvious lol. I guess that means it technically works as long as you keep re-normalizing the quaternion and the game never goes more than a few fractions of a second between delta updates, but trig functions are inevitable if I want it to be right in all situations. I think I'll just construct a new unit quaternion from my angular velocity (since that only takes 5 math function calls) and multiply that onto the orientation, unless there's a more efficient way I don't know about. (See the integration sketch after this list.)
  3. Okay, I'm sure this is a stupid question and I'm either completely misusing this formula or missing something obvious, but I can't for the life of me figure out how I'm supposed to apply an angular velocity vector to an orientation quaternion, despite finding the Quaternion Time Derivative formula on multiple websites.

     From what I've read, you can take an angular velocity vector like (0, 0, 3.14) for a 180 degree turn around the Z axis each second, treat it as a non-normalized quaternion with zero w (0 0 3.14 0) -- I'm using xyzw representation for my quaternions -- then multiply it by the delta time and the orientation quaternion, divide by two, and add the result to the orientation quaternion, and that should rotate the orientation according to the velocity. The formula looks like this: Q' = Q + 0.5t(W x Q).

     Anyway, when I tried it with a delta time of one second, it didn't seem to work. I take a unit quaternion with no rotation, (0 0 0 1), and multiply it by the angular velocity quaternion (with the orientation on the right, as per the formula): (0 0 3.14 0) x (0 0 0 1) = (0 0 3.14 0). I divide that by two, (0 0 1.57 0), and add it to the original quaternion, giving (0 0 1.57 1). But when I apply this rotation, it's 0, 0, -115 in euler angles, not 0, 0, 180.

     So yeah... does anyone know what the problem is? I'd rather avoid building a quaternion from axis-angle each frame if I can, since that requires sin and cos, which are more costly than plain arithmetic, and everyone seems to say this formula works just as well, but I'm not seeing it.
  4. I'm trying to create a shared texture from a Direct3D 9 device using the pSharedHandle parameter that supposedly works starting with Windows Vista, but no matter what I put in the parameters it always returns an invalid call. I intend to share it with another device in another thread, but I can't even get the initial creation of the shared resource to work.

     ddev->CreateTexture(width, height, 1, 0, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &tex, &handle);

     I've tried changing pretty much all the parameters, but if I pass it a &handle it always fails. The handle points to a NULL value. I've been unable to find much information about the texture sharing API, but everything I find says this should work. I'm compiling with the headers/libraries from both the Windows 7 SDK and the June 2010 DX SDK, and I can see and instantiate D3D9Ex classes, so it's definitely a new enough version. The device is perfectly able to create the texture if I don't specify a handle, so I'm guessing I must not be fulfilling one of the restrictions for shared textures that Microsoft barely told anyone about. MSDN is not helping much; all it says is that I have to use D3DPOOL_DEFAULT, which I am.

     Any ideas? I'll keep messing with the parameters while I wait for an answer.

     EDIT: Well, I got it to create the handle by changing all the DirectX9 objects to DirectX9Ex objects, but this isn't ideal, and if there's still a way to do it with ordinary D3D9 interfaces, that would be preferable. (See the sketch after this list.)
  5. I've recently written a collision detection system for detecting collisions between ellipsoids (i.e. players, NPCs) and triangle lists. The detector works flawlessly, using a ton of vector operations all baked into a few lines of scalar math, but I'm having trouble with the collision response. For a single triangle this is easy: work out the closest point ON the triangle to the center of the sphere (I smash the coordinates into ellipsoid-scaled space, so at this point it's just a sphere) and push the sphere away from that point until the distance is greater than or equal to the radius.

     Working with multiple triangles, I've tried several approaches. First, I let every triangle the player collides with push the player away in turn, all in the same frame. This worked for 90 degree angles but caused trouble with sharp angles and high-poly collisions, since the player could end up on the edge of one of the small polys and be pushed directly into another one. Next, I tried finding the average of all the "closest points" and using that as the hit position to move away from. While this worked far better than the previous method, the average point would sometimes end up inside a sharp external corner, or not far enough out from a sharp internal corner. I have no idea how to calculate how far to push the ellipsoid from this hit position, and the only fix I can think of to stop the player from getting stuck is to allow movement away from the hit normal but not towards it, so they can escape simply by moving away from the point they're stuck in. The trouble is that you'd still phase slightly through a sharp spike or steep crevice; though that might not be too big a problem, it would make it easier to phase past a mesh and into the void if the penetration is deep enough.

     One last problem arises with this method: if the player is standing on the edge of a cliff or stair step, gravity slowly pulls him off due to the "push away from the edge" behavior. I suppose I could treat the bottom of the ellipsoid as flat, but I'd have to think of a new collision response for that too. Maybe I could use traces to check for footholds... I guess how I handle that is up to me, but any tips would be appreciated. (See the response sketch after this list.)

     I wish I could just use binary space partitioning, but I don't want to write an editor/compiler for that; I'd prefer to use 3dsmax to export single-texture UV-mapped chunks of level geometry, which can then be arranged to make up a whole level. I guess I could also try forcing convex collision meshes, but I'd still have the crevice problem between two convex bodies.
  6. I did look into BSPs and such, but my main weakness is writing an editor for something, rather than writing the system itself. My engine is very strangely coded, and even the "easy to implement" things like Bullet or Lua would require far more work than usual to make them play nice with my system. I won't go into detail, but I have a very polymorphic exe that loads different scenes (including editors) depending on the "map file" you load. And the map file is not just geometry and entities: it's basically an entire program, and may or may not have GUIs, physics objects, shaders, textures, etc., all potentially embedded inside it. I also have my own compiled scripting language specifically for creating these map files.

     I guess my wacky interface wouldn't really be too much of a roadblock for implementing a third-party tool if I really needed to, but I prefer making things myself because then I fully understand how they work. If anything in my engine is slow, I want it to be my fault and my fault only. I don't take the beaten path with coding, and I never have... I really don't know why. However, this doesn't apply to editors, and I'd much rather figure out a way to texture an exported Hammer map than make my own new map editor. I suppose an additive mesh system suits the engine better the way it's built anyway; the main reason I want multitexturing is that I want to use Hammer's excellent brush toolset, and the easiest way I can find to render its output is with multiple textures. I guess it's time to give up on Hammer and use static mesh pieces to build maps; that still seems like the best alternative to Hammer and multitexturing.
  7. My collision engine does allow a mesh to have a separate triangle list for collision, but it's still one physobject per mesh. I pretty much code everything from scratch except DirectX and Windows, so I don't use third-party libraries. It's just an ellipsoid/polygon intersection test; I don't have poly-poly collision yet, and when I do it'll probably be convex polygons only. For now, though, the map file has one mesh that's already fairly low-poly, and I've been using the same mesh for rendering and collision, since it's a Hammer map exported into my format. If I took your suggestion of using a single mesh for collision and several for rendering, I would just have to make the collision object nonrenderable, set its physics mesh to the one I had been using, and fill the scene with a lot of non-physics objects set up for culling. Still, I'll have to figure out what criteria to cull on -- I don't have any sort of area portal system and I'd rather not manually separate the pieces in 3dsmax... though technically that is an option.

     Another idea: I did manage to get a W-based texture selector working using a massive switch case, but it only supports up to 16 textures because of the SM3 sampler limit. However, if I chunk the map into a bunch of culled pieces, it'd be easy to keep each chunk under 16 textures. So to summarize: I'm thinking I could turn off rendering on my big one-piece non-textured map file, split it into a bunch of non-physical pieces, multitexture each piece separately, then draw the pieces with frustum culling. If I make a few changes to the collision response, I don't even need the one-piece physics object and can make the chunks physical as well. Do you see any problems with the switch-based per-chunk multitexture idea? Is swapping out a bunch of samplers per pixel shader painfully slow or something? It sounds like it would work to me, but that massive switch case does make me feel a little nauseous
  8. That's true, I hadn't thought of frustum culling. The main reason I wanted it to be one mesh is that the collision response is easier to calculate that way, and it's also easier to export from 3dsmax in one piece rather than several. Cutting it into one mesh per triangle would be ridiculous, and I'm not sure what a good intermediate step would be. Technically I could make all maps using an additive scheme like the interiors in the Elder Scrolls games, using nothing but single-texture single-mesh objects assembled into a world... but I'd planned on using Valve Hammer Editor, which I'm very used to: export to DXF, open in 3dsmax and export to my format, then texture in an in-engine tool with special W coordinate stuff. It sounded great until I hit this obstacle...

     EDIT: to restate -- I'm not sure where to divide a mesh if I were to do so. Dividing it based on texture wouldn't help with culling, but dividing it based on culling wouldn't help me texture. Technically I could divide it based on culling and then use a texture atlas, but that's quite a bit of work...
  9. I guess I could use a texture atlas with manual tiling, but I'd really prefer separate textures if possible. I figure that if I'm willing to take the time to make a texture atlas generator, I might as well make a 3D texture generator instead, since then I could use the W coordinate like I originally wanted. I did just test a massive switch case, and it technically works, but only with up to 16 samplers, which probably won't be enough for an entire map.
  10. I'm trying to make a level rendering system that uses a single mesh for the map geometry. I'd planned on using the vertices' W coordinate as a texture ID and passing multiple textures to the shader using device->SetTexture. I got this far before realizing that you can't index a sampler array with a variable, which I'm very upset about, as it invalidates my entire plan unless I want to make a massive mess of elseifs... is there a better way to do this? A 3D texture would work on the shader side of things, but it'd be harder to set up in my C++ backend. I just want a way to put multiple textures on one mesh, choosing which texture to draw from data in the vertices. Because the textures are tiled, I can't just bake them all into one texture either. I wish there was a way to convert an array of sampler2Ds into one sampler3D, but I'm sure there isn't... any other ideas? I'm using C++ with DX9. No FX, just shaders and SetShaderConstantF and such. (See the volume-texture sketch after this list.)
  11. Yeah, the main thing about non-normalized eulers is that they're much easier to tween, since you don't have to wrap around. Quaternions are really starting to look like the only way at this point... pretty much any attempt to fix these eulers just distorts them worse. I suppose I'll have to modify my format after all :/

     EDIT: Wow. Switching to quaternions only took like 15 minutes and they work perfectly... lol, thanks for verifying that my eulers were hopeless
  12. Well, a normalized euler angle is an angle with XYZ components where each component is limited to between -180 and 180. This is how most angles are stored, but for animation you sometimes need to represent multiple revolutions, so it's best not to normalize them and to allow values like 270. The problem is that combined XYZ angles seem to be impossible to operate on without converting them to quaternions, and the conversion automatically normalizes them (wraps the represented rotation to within one revolution), even though it's possible to have a non-normalized quaternion. What I want to do is take the starting non-normalized euler angle, convert it into a quaternion, and then modify the quaternion so it matches the number of revolutions the starting angle had, and do a similar operation in reverse. getEulerQuatAngleRatio looks like it might do something like this, but I can't find any straightforward information about it. (See the unwrapping sketch after this list.)
  13. Hi everyone, I'm writing an exporter for my animation format in MAXScript, and I ran into a problem with angles. The angles start out as denormalized euler angles, but I have to do some transformation operations on them to get them into the proper format. Since you can't do operations directly on euler angles in MAXScript, I have to convert them to quaternions first. The problem is that this automatically normalizes them, which causes its own host of problems. My question is (yes, I've googled it): is there any way at all to convert back and forth between euler angles and quaternions without normalizing them? Or is there some way to un-normalize them given the starting angle? All I need to do is subtract the new position from the starting one, but even basic subtraction wreaks havoc if you try it on a euler angle. If all else fails, is there some way of subtracting denormalized euler angles? The only remaining course of action, if I can't find a way to do this, is to completely change my format to use quaternions, and that would take a lot of editing.
  14. Yep, apparently the initial problem was that I wasn't using a pixel shader, which is why transposing didn't work at first. Once I started using a pixel shader, the matrices were still untransposed; now that I've transposed them all AND used a pixel shader, it works. I'm at a loss as to why it worked without a pixel shader several months ago, though... (See the transpose sketch after this list.)
  15. Debug output said to use a pixel shader, so I wrote a basic one that always returns blue. When I tested it on an XYZRHW version of the same mesh, it only drew the very top-left pixel, so I increased the 1's to 100's and got a nice blue square in the top left. However, when I applied the vertex shader again -- which looks like it should just pass the coordinates directly through, as if it were an XYZRHW mesh instead of an XYZ mesh -- the square disappeared again. I wasn't aware of PIX, I'm afraid to say; is there a quick way to use it?

     EDIT: Well, I grabbed a frame with PIX and I'm still deciphering the information.

     EDIT2: The vertex shader seems to be drawing just fine. When I look at the Mesh tab on the DrawPrimitive call, the preVS and postVS boxes are identical, though the viewport box shows the square mostly extending off the top-right edge of the screen. Still, even though it looks like the mesh is on screen, it's invisible.

     EDIT3: Finally! It was upside-down, and culling was removing it. Now that I know about this amazing tool called PIX, I'll try to get the matrices working too. Thanks! (See the culling snippet after this list.)
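
Code sketches

For posts 2 and 3: a minimal sketch of the exact integration step post 2 settles on, assuming the xyzw layout from the posts. The Quat type and the quat_mul/integrate names are hypothetical, not from the original thread.

    #include <cmath>

    struct Quat { float x, y, z, w; };  // xyzw layout, as in the posts

    // Hamilton product a*b (rotation b applied first, then a).
    Quat quat_mul(const Quat& a, const Quat& b)
    {
        return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
                 a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
    }

    // Exact step: turn angular velocity (rad/s) over dt into a unit
    // quaternion via axis-angle, then rotate the orientation by it.
    // These are the "5 math function calls": sqrt, sin, cos, plus arithmetic.
    Quat integrate(Quat q, float wx, float wy, float wz, float dt)
    {
        float mag = std::sqrt(wx*wx + wy*wy + wz*wz);   // |w|
        if (mag < 1e-8f) return q;                      // no rotation this step
        float half = 0.5f * mag * dt;                   // half-angle of this step
        float s = std::sin(half) / mag;                 // scales axis by sin(half)
        Quat dq = { wx*s, wy*s, wz*s, std::cos(half) };
        return quat_mul(dq, q);                         // world-space angular velocity
    }

The derivative form in post 3, Q' = Q + 0.5t(W x Q), only agrees with this in the limit of small t, which is why a full one-second step lands near 115 degrees instead of 180: normalizing (0 0 1.57 1) gives a rotation of 2*atan2(1.57, 1), roughly 115 degrees.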
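
For post 4: Microsoft documents resource sharing as a Direct3D 9Ex feature, which matches what the edit found, so this sketch assumes the 9Ex path. devA and devB stand for two IDirect3DDevice9Ex devices created via Direct3DCreate9Ex/CreateDeviceEx; error handling is elided.

    #include <d3d9.h>

    // Sketch: create a texture on one 9Ex device and open it on another.
    // Returns the share handle so other devices/processes can open it too.
    HANDLE shareTexture(IDirect3DDevice9Ex* devA, IDirect3DDevice9Ex* devB,
                        UINT width, UINT height,
                        IDirect3DTexture9** texA, IDirect3DTexture9** texB)
    {
        HANDLE shared = NULL;  // must start NULL on the creating call
        devA->CreateTexture(width, height, 1, 0, D3DFMT_X8R8G8B8,
                            D3DPOOL_DEFAULT,   // required for shared resources
                            texA, &shared);

        // Opening side: pass the SAME handle value back in *pSharedHandle,
        // with identical size/format/levels, to alias the same video memory.
        devB->CreateTexture(width, height, 1, 0, D3DFMT_X8R8G8B8,
                            D3DPOOL_DEFAULT, texB, &shared);
        return shared;
    }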
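
For post 5: one common alternative to both per-triangle push-out and contact averaging is to resolve only the deepest contact, then re-test, for a few passes per frame; corners then settle progressively instead of fighting each other. A sketch in ellipsoid-scaled (sphere) space, with hypothetical minimal vector types; the closest-point routine is the standard one from Ericson's Real-Time Collision Detection, not from the original post.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    static Vec3  add(Vec3 a, Vec3 b)    { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    static Vec3  scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
    static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Closest point on a triangle to p (Ericson, Real-Time Collision Detection).
    static Vec3 closestPoint(const Triangle& t, Vec3 p)
    {
        Vec3 ab = sub(t.b, t.a), ac = sub(t.c, t.a), ap = sub(p, t.a);
        float d1 = dot(ab, ap), d2 = dot(ac, ap);
        if (d1 <= 0 && d2 <= 0) return t.a;                       // vertex A
        Vec3 bp = sub(p, t.b);
        float d3 = dot(ab, bp), d4 = dot(ac, bp);
        if (d3 >= 0 && d4 <= d3) return t.b;                      // vertex B
        float vc = d1*d4 - d3*d2;
        if (vc <= 0 && d1 >= 0 && d3 <= 0)                        // edge AB
            return add(t.a, scale(ab, d1 / (d1 - d3)));
        Vec3 cp = sub(p, t.c);
        float d5 = dot(ab, cp), d6 = dot(ac, cp);
        if (d6 >= 0 && d5 <= d6) return t.c;                      // vertex C
        float vb = d5*d2 - d1*d6;
        if (vb <= 0 && d2 >= 0 && d6 <= 0)                        // edge AC
            return add(t.a, scale(ac, d2 / (d2 - d6)));
        float va = d3*d6 - d5*d4;
        if (va <= 0 && d4 - d3 >= 0 && d5 - d6 >= 0)              // edge BC
            return add(t.b, scale(sub(t.c, t.b),
                                  (d4 - d3) / ((d4 - d3) + (d5 - d6))));
        float denom = 1.0f / (va + vb + vc);                      // interior
        return add(add(t.a, scale(ab, vb * denom)), scale(ac, vc * denom));
    }

    // Resolve the deepest penetration, then re-test; a few passes settle
    // corners without the averaging artifacts described in the post.
    void resolve(Vec3& center, float radius, const Triangle* tris, int count)
    {
        for (int pass = 0; pass < 4; ++pass)
        {
            float deepest = 0.0f;
            Vec3 push = { 0, 0, 0 };
            for (int i = 0; i < count; ++i)
            {
                Vec3 d = sub(center, closestPoint(tris[i], center));
                float dist = std::sqrt(dot(d, d));
                float pen = radius - dist;
                if (pen > deepest && dist > 1e-6f)
                {
                    deepest = pen;
                    push = scale(d, 1.0f / dist);   // unit push direction
                }
            }
            if (deepest <= 0.0f) break;             // fully separated
            center = add(center, scale(push, deepest));
        }
    }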
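
For the multitexturing thread (posts 7 through 10): the "array of sampler2Ds as one sampler3D" wish is actually achievable on the C++ side by baking equally-sized textures into the slices of a volume texture, which the shader can then index with the W coordinate. A sketch, assuming the device supports volume textures and the sources are lockable (e.g. D3DPOOL_MANAGED) X8R8G8B8 textures of identical size; bakeLayers is a hypothetical name.

    #include <d3d9.h>
    #include <cstring>

    // Bakes n same-sized X8R8G8B8 textures into the slices of one volume
    // texture so a shader can pick a layer via the third texture coordinate.
    IDirect3DVolumeTexture9* bakeLayers(IDirect3DDevice9* dev,
                                        IDirect3DTexture9** src, UINT n,
                                        UINT w, UINT h)
    {
        IDirect3DVolumeTexture9* vol = NULL;
        if (FAILED(dev->CreateVolumeTexture(w, h, n, 1, 0, D3DFMT_X8R8G8B8,
                                            D3DPOOL_MANAGED, &vol, NULL)))
            return NULL;

        D3DLOCKED_BOX box;
        vol->LockBox(0, &box, NULL, 0);
        for (UINT i = 0; i < n; ++i)
        {
            D3DLOCKED_RECT rect;
            src[i]->LockRect(0, &rect, NULL, D3DLOCK_READONLY);
            BYTE* dst = (BYTE*)box.pBits + i * box.SlicePitch;  // slice i
            for (UINT y = 0; y < h; ++y)   // copy row by row; pitches differ
                memcpy(dst + y * box.RowPitch,
                       (BYTE*)rect.pBits + y * rect.Pitch, w * 4);
            src[i]->UnlockRect(0);
        }
        vol->UnlockBox(0);
        return vol;
    }

XY tiling still works, since D3D9 address modes apply per axis (D3DSAMP_ADDRESSU/V/W), which is what rules out a flat atlas in post 10; the caveat is to keep filtering along W at point (or pad the layers), or trilinear sampling will bleed between adjacent layers.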
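
For posts 12 and 13: the revolutions can't survive the quaternion itself, but if the exporter walks samples in order, the lost turns can be recovered per component by unwrapping each converted angle against the previous sample, assuming consecutive samples are less than half a revolution apart. A hypothetical helper:

    #include <cmath>

    // Shifts 'current' by whole turns so it lies within half a turn of
    // 'previous', restoring revolutions lost to quaternion normalization.
    // Assumes consecutive samples differ by less than 180 degrees.
    float unwrapDegrees(float previous, float current)
    {
        float delta = current - previous;
        delta -= 360.0f * std::floor((delta + 180.0f) / 360.0f); // to [-180, 180)
        return previous + delta;
    }

This fixes the wrapping, but not the alternate-representation flips that quaternion-to-euler conversion can also produce, which is part of why post 11 ends up switching the format to quaternions anyway.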
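
For post 14: the transpose is expected when bypassing the effect framework, because SetVertexShaderConstantF uploads raw registers and HLSL reads matrices column-major by default, while D3DXMATRIX is stored row-major. A sketch using D3DX; uploadWVP is a hypothetical name.

    #include <d3dx9.h>

    // Combine, transpose, and upload a world-view-projection matrix to c0-c3.
    void uploadWVP(IDirect3DDevice9* device, const D3DXMATRIX& world,
                   const D3DXMATRIX& view, const D3DXMATRIX& proj)
    {
        D3DXMATRIX wvp = world * view * proj;
        D3DXMatrixTranspose(&wvp, &wvp);   // row-major -> column-major registers
        device->SetVertexShaderConstantF(0, (const float*)&wvp, 4);
    }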
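
For post 15: a quick test for the winding problem PIX revealed is to disable backface culling for a frame; if the mesh appears, the triangle winding (or a flipped projection) is the culprit.

    #include <d3d9.h>

    // If "invisible" geometry shows up with culling off, its winding is
    // reversed relative to the D3D9 default (counter-clockwise is culled).
    void drawBothFaces(IDirect3DDevice9* device)
    {
        device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
        // ... issue the draw calls here ...
        device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);  // restore default
    }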