
Numsgil

Members
  • Content count: 1433
  • Community Reputation: 501 Good
  • Rank: Contributor
  1. Thanks MJP, that sounds doable and sounds like it'd give me what I want. @Jason - The simulation scales with O(N^3) at the moment, though I'm hoping to get that down to O(N^2), for some definition of N :) I can break up the tasks well enough to keep the Windows watchdog from restarting the driver, but trying to timeslice it with rendering tasks would be a huge pain. If N is small, I can run dozens of complete simulation cycles per render frame (think of something like SimCity in fast forward). And if N is large, it can take dozens of render frames for each simulation frame. Decoupling the two seems fairly obvious (see the sketch below), though you're right, I've never heard of anyone trying to do this.
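
A minimal sketch of that kind of decoupling, using standard C++ threads and a shared "latest snapshot"; SimState and the publish/grab call sites are placeholders, not anything from a real engine:

[code]
#include <memory>
#include <mutex>

struct SimState { /* positions, velocities, etc. */ };

std::mutex g_snapshotMutex;
std::shared_ptr<const SimState> g_latestSnapshot;

// Sim thread: runs flat out (anywhere from 1 to 1000 Hz) and publishes a
// snapshot whenever a complete simulation cycle finishes.
void PublishSnapshot(std::shared_ptr<const SimState> snapshot)
{
    std::lock_guard<std::mutex> lock(g_snapshotMutex);
    g_latestSnapshot = std::move(snapshot);
}

// Render thread: at ~60 Hz, grabs whatever snapshot is newest and draws it,
// never blocking on the sim.
std::shared_ptr<const SimState> GrabSnapshot()
{
    std::lock_guard<std::mutex> lock(g_snapshotMutex);
    return g_latestSnapshot;
}
[/code]
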
  2. To be clear, there's an obvious way to get this working: build two separate processes that communicate with each other over TCP/IP. Each could have its own DirectX context, and the sim process could run at whatever framerate it wanted, while the rendering process does its best to run at 60 FPS and grabs updates from the sim process periodically. But that's a super heavy-handed way to approach the problem, and I'm wondering if there's a better way.
  3. I wasn't necessarily talking about deferred contexts. Deferred contexts seem like just a way to gather up commands from multiple threads. I was talking more about the possibility of creating two different immediate contexts. I've noticed that when you have two different games running at the same time, for instance, they both get time on the GPU without needing to communicate with each other. And long DirectCompute tasks (several seconds long) won't freeze the system the way I've seen long OpenCL tasks do, though they do seem to freeze the executing program.
  4. Is it possible/kosher to create multiple device contexts in DirectX 11 with the purpose of using one for rendering and one for GPGPU? (See the sketch below for what I mean.) Basically I'm working on something sort of like a sim game. I want a render thread running as close to 60 FPS as possible, but I also want to offload a lot of the sim calculations to DirectCompute and run them as fast as possible. That means the sim thread can run anywhere from 1 to 1000 FPS, depending on what's going on. The sim also needs to push data to the render thread eventually, but I don't necessarily mind that going out through the north bridge to the CPU and back to the GPU (I don't mind a bit of latency from the sim to the renderer, as long as things stay responsive to the user).
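
One reading of the question is two independent devices rather than two contexts on one device: each device comes with its own immediate context, so each thread can own a device outright. A minimal sketch under that assumption (note that resources created on one device aren't directly visible to the other; moving data between them means shared resources or the CPU round trip mentioned above):

[code]
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create an independent device + immediate context; call once from the
// render thread and once from the sim/compute thread.
HRESULT CreateIsolatedDevice(ID3D11Device** device, ID3D11DeviceContext** context)
{
    D3D_FEATURE_LEVEL obtained;
    return D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,                   // no software rasterizer module
        0,                         // creation flags
        nullptr, 0,                // default feature-level list
        D3D11_SDK_VERSION,
        device, &obtained, context);
}
[/code]
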
  5. @clb: I don't think you can do h' = |M*H|, because M*H is not guaranteed to be a shortest path between the two parallel lines after they're transformed by M. That is, just because H is perpendicular to both lines before the transformation doesn't mean it's perpendicular to both lines after the transformation. Consider the case of a shearing of a square into a parallelogram. It's one of those funny properties of affine transformations: closest-point pairs on parallel lines aren't preserved.
  6. Affine transformation is just another name for a matrix with non-uniform scale, rotation, and translation baked in (more or less). I didn't mean to be too obtuse when forming the question :) Anyway, I came up with: h' = h * length(S*(b - a)) / length(b - a) after a bit of algebra. Can someone confirm/refute that? (One way to sanity-check it is sketched below.) I took the method I described and worked it out algebraically. But intuitively I'm surprised I'm using b - a and not a vector perpendicular to b - a or something along those lines.
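
For reference, one way to sanity-check a formula like this in 2D (a sketch, assuming S is the linear part of the affine map; the translation part can't change the distance): the parallelogram spanned by b - a and the perpendicular offset between the lines has area h * length(b - a), and a linear map scales every area by |det(S)|. The transformed parallelogram has base length(S*(b - a)) and height h', so

    h' * length(S*(b - a)) = |det(S)| * h * length(b - a)
    h' = h * |det(S)| * length(b - a) / length(S*(b - a))

A quick test case: with S = diag(2, 1) and both lines running along the x axis, the distance between them shouldn't change, which distinguishes this form from one that scales directly with length(S*(b - a)).
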
  7. I have a line (with points A, B on the line) and a distance 'h' to a parallel line. Under an affine transformation the lines should stay parallel, but the distance between them might change. How can I get the new distance between the lines? My thinking right now is to construct a point on the parallel line, transform that point with the affine transformation, and then find the distance from that new point to the transformed line AB (a sketch of this is below). But is there a more direct way to calculate the new distance?
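
A minimal sketch of that construct-and-measure method in 2D; Vec2 and the Transform function are illustrative stand-ins, not from any particular math library:

[code]
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  Sub(Vec2 a, Vec2 b)   { return { a.x - b.x, a.y - b.y }; }
static float Cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }
static float Length(Vec2 v)        { return std::sqrt(v.x * v.x + v.y * v.y); }

// Distance from point p to the infinite line through a and b.
static float DistanceToLine(Vec2 p, Vec2 a, Vec2 b)
{
    Vec2 d = Sub(b, a);
    return std::fabs(Cross(d, Sub(p, a))) / Length(d);
}

// Usage, given some affine Transform(Vec2) -> Vec2 and a point p known to
// lie on the parallel line (e.g. p = a + h * unitNormal):
//   float hNew = DistanceToLine(Transform(p), Transform(a), Transform(b));
[/code]
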
  8. Sure, you can do that. But doesn't that mean you've basically built your own RAM-only database technology, with spatial queries, etc.? Which has all the benefits and pitfalls that rolling your own solution does. If we assume that "the database" lives across multiple boxes, and the servers running the game logic/script updates are yet different machines, you've basically built a local database cache in RAM on each of the game logic servers that communicates changes to "the database" lazily (something like the sketch below). Which sounds a lot like some sort of eventual consistency/[url="http://en.wikipedia.org/wiki/Multiversion_concurrency_control"]MVCC[/url] database system (maybe not exactly? I'm not sure what vocab word fits here). So if you're going that route anyway, wouldn't it make sense to actually use some sort of MVCC database system directly? A good database system should be able to handle caching things in RAM just fine, so the primary drawback is that the local database cache has a different application memory space. But I think there are some technologies that you can integrate directly into your application (I think BerkeleyDB can do this?)
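
The "RAM cache with lazy write-back" shape being described, as a minimal sketch; Entity, EntityId, and the write callback are placeholders:

[code]
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

using EntityId = std::uint64_t;
struct Entity { /* game state */ };

struct WriteBehindCache {
    std::unordered_map<EntityId, Entity> entities;  // authoritative copy in RAM
    std::unordered_set<EntityId> dirty;             // changed since last flush

    // All mutation goes through here so the entity gets marked dirty.
    Entity& Edit(EntityId id) {
        dirty.insert(id);
        return entities[id];
    }

    // Called periodically; pushes only the changed entities out to the real
    // database, which is what makes the consistency "eventual".
    template <typename WriteFn>
    void Flush(WriteFn writeToDatabase) {
        for (EntityId id : dirty)
            writeToDatabase(id, entities[id]);
        dirty.clear();
    }
};
[/code]
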
  9. Thanks for the comments, everyone. I'm looking at various NoSQL systems right now. They seem to fit closer to the mental model you have for how an MMO works. [url="http://www.mongodb.org/"]MongoDB[/url] especially seems promising. It has spatial queries and indexing, which you'd need if you want to have, say, large numbers of items floating freely in the world (a la Ultima Online, if anyone remembers that game). In the case of storing bags of items, it also has arrays natively supported. So you could have bags represented as documents in a collection (i.e. as rows in a table), with items stuffed into an array inside each bag document (row). It feels very similar to how tables work in Lua, actually. It seems to map pretty well to OOP. The primary downside is that it's not ACID compliant, so transactions (like moving an item from one bag to another) get tricky and you probably have to manage a two-phase commit manually.

     To give a different use case example, I'm trying to scope out how technically possible a "living world" MMO would be. For instance, I'm thinking of trees that spread seeds that grow more trees, with players allowed to chop the trees down or manually move seeds around. To handle it in a general way, I'm imagining that all entities that exist in the world can register themselves to run certain scripts at some future date (see the sketch below for the basic shape). So on the server side you'd need to periodically query for entities that are due to run scripts and run them. The scripts can update the state of the entity, change behavior based on neighboring entities, spawn new entities, etc., and write the results back to the database. Clients could request spatial queries for the area they're playing in to find all the visible entities, so that the client can display them. The number of dynamic entities in a given area of the world could be quite large. You'd probably want to cache highly active entities (like animals) in RAM and only push changes to the database periodically, or even treat things like animals totally differently from more static entities. But let's ignore that and just call the entire game-state recording mechanism the "database". As a single-player game this wouldn't be so unusual, but making it work as an MMO would be quite difficult. I imagine a traditional RDBMS would quickly choke on something like this.
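
A minimal sketch of that register-a-script-for-later pattern, independent of any particular database (in a real MMO the in-RAM queue below would instead be a due-time-indexed query against the data store); all names here are illustrative:

[code]
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

struct ScheduledScript {
    std::uint64_t dueTime;                   // game-time tick to run at
    std::uint64_t entityId;                  // entity that registered it
    std::function<void(std::uint64_t)> run;  // script body, given the entity id
};

struct LaterFirst {
    bool operator()(const ScheduledScript& a, const ScheduledScript& b) const {
        return a.dueTime > b.dueTime;        // min-heap on dueTime
    }
};

using ScriptQueue = std::priority_queue<ScheduledScript,
                                        std::vector<ScheduledScript>, LaterFirst>;

// Each server tick, run everything that has come due.  Scripts can push new
// entries (e.g. a tree scheduling its next seed drop) before returning.
void RunDueScripts(ScriptQueue& queue, std::uint64_t now)
{
    while (!queue.empty() && queue.top().dueTime <= now) {
        ScheduledScript script = queue.top();
        queue.pop();
        script.run(script.entityId);
    }
}
[/code]
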
  10. Are there any books or article series discussing database use in MMOs? For instance, there are a few [url="http://gamedev.stackexchange.com/questions/2282/what-kind-of-databases-are-usually-used-in-an-mmorpg"]one-off posts[/url] on various sites, but I'd be interested in a more comprehensive examination. I've done a bit of SQL work, and my impression is that it would be a bit of a square-peg-in-a-round-hole situation trying to get the relational model to make sense for an MMO. If we take the simplified case of just inventory management, would you maintain a table of every item in the game universe, with a field holding a UID for the container it lives in? Wouldn't that make a simple query like "what's in my inventory?" take [i]forever[/i]? Many MMOs get around this by having a limited number of slots in your inventory, with few objects existing outside of players' inventories (or banks). But if you wanted to really open that up and do an inventory system like the late Ultima games, or even something like Skyrim, where the player can have hundreds of unique items and there are tens of thousands of unique items randomly placed throughout the world, I imagine things quickly get hairy. You'd probably also want a database model that lends itself to spatial queries (what items exist within a 20 meter bubble around some position in the world), which isn't something relational databases do very well. I feel like there's enough domain-specific knowledge here that there must be a book or article series somewhere, either on getting relational databases to work well or on some non-SQL technology.
  11. Try posting on the Havok forum. They're usually pretty good about answering questions about use cases and such. http://software.intel.com/en-us/forums/havok/
  12. [quote name='taby' timestamp='1336596762' post='4938776'] If you succeed, you should probably be working at NASA. ;) [/quote] They were launching craft 50 years ago and doing the math. The computing power I have in my toaster is probably 1000 times what NASA had then. It'd still be a bit of a project to read up on the academic papers and figure out how to approach the problem, but even a completely naive implementation would probably run more than fast enough. Especially if it's an autopilot feature for a proper simulation-type game, there's nothing wrong with a "plotting autopilot course" wait dialog of a second or two.
  13. If you haven't yet, you should read through [url="http://software.intel.com/en-us/articles/fluid-simulation-for-video-games-part-1/"]this set of Intel articles on fluid sim[/url]. It specifically uses a hybrid approach where vortons and a static grid interact, so you can (hopefully) get the best of both worlds. Importantly, the article comes with source you can try out (though I haven't actually tried it yet). To your specific problem, I think at least part of the trick is to [i]not[/i] put the vortons on the actual boundary. Instead you offset the vortons a bit from the boundary, but set them up so that they still cancel out the velocity at the boundary surface (see the note below on what "cancel out" means here). This helps eliminate potential singularities, as I understand it. For the case of velocity flow exactly perpendicular to a flat surface, your vortons [i]would[/i] happen to be on the surface, but to the left or right of the point on the boundary you're trying to cancel out. For non-perpendicular flows, the vorton will be a bit ahead of the surface, at a glancing angle to the point you're trying to cancel velocity at. Importantly, the actual positions of these new vortons change based on the relative angle of the incoming flow and the surface. I'm curious whether it's possible to construct something like a vorton, but with surfaces instead of just points. Like for a sphere: you construct something like a vorton, but instead of causing vorticity, it pushes velocity out in all directions uniformly. That, centered right on top of the sphere, tweaked so that the velocity just ahead of the sphere is 0, and added to the background uniform flow, might produce a realistic flow pattern cheaply. I'm not sure exactly; I probably need to think it through a bit more. At first glance it seems like it would violate some conservation laws, and it would probably only really make sense for a uniform background velocity field. The more chaotic the flow around the sphere, the more likely you'd need more than one primitive, no matter what your primitive is (vorton or something else).
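
For context, the usual way "cancel out the velocity at the boundary" is made precise: point vortons induce velocity through a Biot-Savart-style kernel, something like

    u(x) = (1/4pi) * sum_i [ w_i × (x - x_i) / |x - x_i|^3 ]

(the exact constant and the meaning of the strength w_i depend on how the vorton is defined, so treat this as a sketch). The boundary vortons are then positioned and scaled so that u(x_b) · n = 0 at each boundary sample point x_b, with n the surface normal. The 1/|x - x_i|^3 factor blows up as x approaches x_i, which is exactly why placing a vorton on top of the sample point it's supposed to cancel invites the singularities mentioned above.
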
  14. I'm working on a 2D graphics engine using Shader Model 3 to draw the primitives (lines, circles, polygons, etc.). I want to be able to draw polygons with an "interior" region of one color and an "exterior" region of another color (basically drawing a border around a polygon). So you could imagine drawing the polygon once, then shrinking it and drawing it again in a different color; basically what [url="http://stackoverflow.com/questions/1109536/an-algorithm-for-inflating-deflating-offsetting-buffering-polygons"]this Stack Overflow[/url] thread is talking about. In my case, however, I'd like the border to be a constant thickness in [i]pixels[/i], so as you zoom in/out the world-space thickness of the border changes. And ideally I don't want to perform any extra CPU-side computation to do this, which means doing it in the pixel shader by testing whether a given pixel is inside the border or the interior of the polygon. I already have a triangulation algorithm that works using ear clipping. I'm trying to figure out a way (or whether there is a way) to get a pixel-in-the-interior-but-not-the-border test in the pixel shader using essentially only local information from the polygon's vertices (plus maybe neighbor information). Right now I'm basically doing a series of half-plane checks using the polygon edges on either side of the vertices (the sketch below shows the per-edge test). So I have a triangle with 3 vertices from the polygon, and I pass information on those 3 vertices and their neighbors into the shader and construct the half-planes from that. This seems to work for convex polygons, but for non-convex polygons I'm getting artifacts that make me suspect this idea is fundamentally flawed. (Intersections of half-planes form convex regions anyway, so I'm pretty sure it just won't work.) There are a lot of ways to approach the problem in the literature on the web, but it's not clear which might work best for this situation. Usually the goal is to get a new polygon with different vertices after the shrinking; I think (hope) it's possible to instead turn it into an implicit test (so you test a point and get a yes/no for whether it's inside the shrunk region). My current idea is to calculate the straight skeleton and triangulate that. Then the shader becomes a much simpler test, since each triangle would just need to check a single half-plane. But that represents a lot of work... so I'm curious if others have ideas or have done something similar.
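
The per-edge half-plane test being described, as a minimal sketch in plain C++ (the same arithmetic ports directly to an SM3 pixel shader); the names and winding convention are illustrative:

[code]
#include <cmath>

struct Float2 { float x, y; };

static float Dot(Float2 a, Float2 b) { return a.x * b.x + a.y * b.y; }

// Signed distance of pixel p from the edge through a and b, in the same
// (screen-space) units as p.  Positive on the interior side, assuming the
// polygon is wound so its interior lies to the left of a -> b.
static float SignedEdgeDistance(Float2 p, Float2 a, Float2 b)
{
    Float2 n   = { -(b.y - a.y), b.x - a.x };   // left-hand edge normal
    float  len = std::sqrt(Dot(n, n));
    Float2 toP = { p.x - a.x, p.y - a.y };
    return Dot(n, toP) / len;
}

// With everything in screen space, the border band of one edge is simply
//   0 <= SignedEdgeDistance(p, a, b) < borderThicknessInPixels
// which is zoom-independent.  As noted above, ANDing such tests across
// edges only describes convex regions correctly.
[/code]
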
  15. [quote name='Digitalfragment' timestamp='1330052373' post='4916081'] Bit logic isn't available on SM3.0, but the math can be done using float ops. Bit shifts are pretty much just multiplies/divides by powers of 2, etc. Be warned though, it's not a simple amount of math, and it's hideously expensive if you are doing this per vertex or per pixel. It would likely be faster to sample a point-filtered texture for your extra data. [/quote] Texture lookup is an idea, but it'd take a lot of lookups to get the same amount of data as a single register (even assuming floating-point textures, you'd need 4 lookups). If you implement the decompression with divides, it doesn't seem like a bad amount of math in terms of operation count. All the operations are vectorized, after all. [quote] On a side note, SM3.0 gives you 256 separate registers for the vertex shader and pixel shader; how on earth have you managed to blow that and not kill performance? ;) [/quote] There are >200 constant registers, and I still have plenty of those. But there are only 10 "interpolated" registers. I don't actually need them interpolated, but I do need them specific to each triangle, which means passing them to the pixel shader as texture coordinates from the vertex shader. Even then, I haven't quite run out of interpolated registers, but I'm close (I'm at about 9.5 used registers). My pixel shader is getting pretty beefy (~2K instructions), so I might need to cut features to make it fit on lower-end SM3 cards, but for now I'm just stuffing everything I want it to do into the shader. [quote name='Ashaman73' timestamp='1330064019' post='4916113'] A simple way to pack 2 colors into one register is to pack one color in normalized space, that is 0..1, clamp 1 to 0.99999, and use 'byte' space (0,1,2..255) for the other color. Packing/unpacking looks like this: [code]
// pack
vec4 packed_color = min(vec4(0.9999), first_color);
packed_color += floor(second_color * 255.0);

// unpack
vec4 first_color = fract(packed_color);
vec4 second_color = floor(packed_color) / 255.0;
[/code] [/quote] That seems reasonable enough for 2 colors (a quick round-trip check is below). But you're only using part of the mantissa, so you still have a lot of wasted bits, and you can't store 3 colors that way (the mantissa is only 23 bits wide). [quote name='Crowley99' timestamp='1330071490' post='4916134'] As a matter of interest, how do you know you are hurting for registers? [/quote] I get fun error messages like this: "Problem building "Main.fx", "(1): error X5629: Invalid register number: 12. Max allowed for o# register is 11. ID3DXEffectCompiler: Compilation failed "
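
A quick CPU-side round trip of the quoted packing scheme, just to make the precision trade-off concrete; plain C++ standing in for the shader math:

[code]
#include <cmath>
#include <cstdio>

int main()
{
    float first  = 0.75f;   // packed into the fractional part, clamped below 1
    float second = 0.50f;   // packed into the integer part, quantized to 8 bits

    // pack
    float packed = std::fmin(0.9999f, first) + std::floor(second * 255.0f);

    // unpack
    float outFirst  = packed - std::floor(packed);   // fract()
    float outSecond = std::floor(packed) / 255.0f;

    // second comes back quantized (127/255 ~= 0.498), and first loses low
    // mantissa bits as the integer part grows.
    std::printf("first %f -> %f, second %f -> %f\n",
                first, outFirst, second, outSecond);
}
[/code]
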