
Numsgil

Member

  • Content Count: 1433
  • Community Reputation: 501 Good
  • Rank: Contributor
  1. Numsgil

    Separate DirectCompute context?

    Thanks MJP, that sounds doable and sounds like it'd give me what I want. @Jason - The simulation scales with O(n^3) at the moment, though I'm hoping to get that down to O(n^2), for some definition of n :) I can break up the tasks well enough to avoid Windows' watchdog restarting the driver, but trying to timeslice it with rendering tasks would be a huge pain. If n is small, I can run dozens of complete simulation cycles per render frame (think of something like SimCity in fast forward). And if n is large, it can take dozens of render frames for each simulation frame. Decoupling the two seems fairly obvious, though you're right, I've never heard of anyone trying to do this.
  2. Numsgil

    Separate DirectCompute context?

    To be clear, there's an obvious way to get this working: build two separate processes that communicate with each other over TCP/IP. Each could have its own DirectX context, the sim process could run at whatever framerate it wanted, and the rendering process could do its best to run at 60 FPS and grab updates from the sim process periodically. But that's a super heavy-handed way to approach the problem, and I'm wondering if there's a better way.
  3. Numsgil

    Separate DirectCompute context?

    I wasn't necessarily talking about deferred contexts. Deferred contexts seem like just a way to gather up commands from multiple threads. I was talking more about the possibility of creating two different immediate contexts. I've noticed that when two different games run at the same time, for instance, they both get time on the GPU without needing to communicate with each other. And long DirectCompute tasks (several seconds long) won't freeze the system the way I've seen long OpenCL tasks do, though they do seem to freeze the executing program.
  4. Is it possible/kosher to create multiple device contexts in DirectX 11, with the purpose of using one for rendering and one for GPGPU? Basically I'm working on something sort of like a sim game. I want a render thread running as close to 60 FPS as possible, but I also want to offload a lot of the sim calculations to DirectCompute and run them as fast as possible. That means the sim thread can run anywhere from 1 to 1000 FPS, depending on what's going on. The sim also needs to push data to the render thread eventually, but I don't necessarily mind that going out through the north bridge to the CPU and back to the GPU (a bit of latency from the sim to the renderer is fine, as long as things stay responsive to the user).
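A minimal sketch of the two-device approach this thread converges on, assuming D3D11: each call to D3D11CreateDevice yields its own device and immediate context, so the render thread and the sim thread can each own one independently. The struct and function names here are illustrative, not from the thread.

[code]
// Sketch: two independent D3D11 devices, one owned by the render thread
// and one by the sim thread. Each device comes with its own immediate
// context; the OS/driver time-slices the GPU between them, much like two
// games running side by side.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct GpuContext
{
    ComPtr<ID3D11Device>        device;
    ComPtr<ID3D11DeviceContext> immediate;
};

HRESULT CreateGpuContext(GpuContext& out)
{
    D3D_FEATURE_LEVEL level;
    return D3D11CreateDevice(
        nullptr,                         // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,                         // no software rasterizer
        0,                               // creation flags
        nullptr, 0,                      // default feature levels
        D3D11_SDK_VERSION,
        out.device.GetAddressOf(),
        &level,
        out.immediate.GetAddressOf());
}

// GpuContext render, sim;
// CreateGpuContext(render);  // used only by the 60 FPS render thread
// CreateGpuContext(sim);     // used only by the DirectCompute sim thread
//
// Resources created on one device can't be bound on the other, so sim
// results round-trip through the CPU: CopyResource to a staging buffer
// and Map on the sim device, then UpdateSubresource on the render device.
[/code]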
  5. @clb: I don't think you can do h' = |M*H|, because M*H is not guaranteed to be a shortest path between the two parallel lines after they're transformed by M. That is, just because H is perpendicular to both lines before the transformation doesn't mean it's perpendicular to both lines after it. Consider the shearing of a square into a parallelogram. It's one of those funny properties of affine transformations: closest point pairs on parallel lines aren't preserved.
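A quick worked instance of that shear case. Take the parallel lines x = 0 and x = h, so the shortest-path vector is H = (h, 0), and apply the shear

    M = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}, \qquad MH = (h, 0), \qquad |MH| = h.

The image lines are x - ky = 0 and x - ky = h, with direction (k, 1); their true separation is h / \sqrt{1 + k^2}. So h' \ne |MH| whenever k \ne 0, confirming the point above.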
  6. Affine transformation is just another name for a matrix with non-uniform scale, rotation, and translation baked in (more or less). I didn't mean to be too obtuse when forming the question :) Anyway, I came up with h' = h * length(S*(b - a)) / length(b - a) after a bit of algebra. Can someone confirm/refute that? I took the method I described and worked it out algebraically. But intuitively I'm surprised I'm using b - a and not a vector perpendicular to b - a or something along those lines.
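One standard way to check this in 2D, assuming S is the linear part of the affine map (translation drops out of distances): the parallelogram spanned by b - a and the perpendicular offset of length h has area h |b - a|, and a linear map scales every area by |\det S|. The image parallelogram has area h' |S(b - a)|, which gives

    h' = h \cdot \frac{|\det S| \, |b - a|}{|S(b - a)|}.

For the shear example in the previous post this yields h / \sqrt{1 + k^2}, matching the true separation, whereas h \cdot |S(b - a)| / |b - a| gives h \sqrt{1 + k^2}, so the formula above is worth re-deriving. Note the corrected version does still involve b - a rather than a perpendicular vector, matching the intuition question.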
  7. I have a line (with points A and B on it) and a distance h to a parallel line. Under an affine transformation the lines should stay parallel, but the distance between them might change. How can I get the new distance between the lines? My thinking right now is to construct a point on the second line, transform that point with the affine transformation, and then find the distance from that new point to the transformed line AB. But is there a more direct way to calculate the new distance?
  8. Numsgil

    MMO and databases

    Sure, you can do that. But doesn't that mean you've basically built your own RAM-only database technology with spatial queries, etc.? Which has all the benefits and pitfalls that rolling your own solution does. If we assume that "the database" lives across multiple boxes, and the servers running the game logic/script updates are yet different machines, you've basically built a local database cache in RAM on each of the game logic servers that communicates changes to "the database" lazily. Which sounds a lot like some sort of eventual-consistency/MVCC database system (maybe not exactly? I'm not sure what vocab word fits here). So if you're going that route anyway, wouldn't it make sense to actually use some sort of MVCC database system directly? A good database system should be able to handle caching things in RAM just fine, so the primary drawback is that the local database cache lives in a different application memory space. But I think there are some technologies you can integrate directly into your application (I think BerkeleyDB can do this?).
  9. Numsgil

    MMO and databases

    Thanks for the comments, everyone. I'm looking at various NoSQL systems right now. They seem to fit closer to the mental model you have for how an MMO works.

    MongoDB especially seems promising. It has spatial queries and indexing, which you'd need if you want, say, large numbers of items floating freely in the world (a la Ultima Online, if anyone remembers that game). For storing bags of items, it also supports arrays natively. So you could have bags represented as documents in a collection (i.e. as rows in a table), with items stuffed into an array inside each bag document (row). It feels very similar to how tables work in Lua, actually, and it seems to map pretty well to OOP. The primary downside is that it's not ACID compliant, so transactions (like moving an item from one bag to another) get tricky and you probably have to manage a two-phase commit manually.

    ...

    To give a different use case example, I'm trying to scope out how technically possible a "living world" MMO would be. For instance, I'm thinking of trees that spread seeds that grow more trees, with players allowed to chop the trees down or manually move seeds around. To handle it in a general way, I'm imagining that all entities in the world can register themselves to run certain scripts at some future date. On the server side you'd periodically query for entities that are due to run scripts and run them. The scripts can update the state of the entity, change behavior based on neighboring entities, spawn new entities, etc., and write the results back to the database. Clients could request spatial queries for all the entities in the area they're playing in, to find the visible entities the client should display.

    The number of dynamic entities in a given area of the world could be quite large. You'd probably want to cache highly active entities (like animals) in RAM and only push changes to the database periodically, or even treat things like animals totally differently from more static entities. But let's ignore that and just call the entire game-state recording mechanism the "database". As a single-player game this wouldn't be so unusual, but making it work as an MMO would be quite difficult. I imagine a traditional RDBMS would quickly choke on something like this.
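A rough sketch of the bag-as-document idea, using the modern mongocxx driver (which postdates this thread); the database, collection, and field names are made up for illustration.

[code]
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>
#include <bsoncxx/builder/basic/array.hpp>
#include <bsoncxx/builder/basic/document.hpp>

using bsoncxx::builder::basic::kvp;
using bsoncxx::builder::basic::make_array;
using bsoncxx::builder::basic::make_document;

int main()
{
    mongocxx::instance inst{};  // one per process
    mongocxx::client conn{mongocxx::uri{"mongodb://localhost:27017"}};
    auto bags = conn["game"]["bags"];

    // A bag is one document; its items live in an embedded array,
    // much like a Lua table of tables.
    bags.insert_one(make_document(
        kvp("owner", "player42"),
        kvp("pos", make_document(
            kvp("type", "Point"),
            kvp("coordinates", make_array(12.5, 7.25)))),
        kvp("items", make_array(
            make_document(kvp("name", "iron sword"), kvp("qty", 1)),
            make_document(kvp("name", "health potion"), kvp("qty", 3))))));

    // Spatial query: every bag near a point. Requires a 2dsphere index
    // on "pos"; $maxDistance is then in meters, so a flat game world
    // might prefer a legacy 2d index and plain coordinate pairs.
    auto cursor = bags.find(make_document(
        kvp("pos", make_document(
            kvp("$near", make_document(
                kvp("$geometry", make_document(
                    kvp("type", "Point"),
                    kvp("coordinates", make_array(10.0, 7.0)))),
                kvp("$maxDistance", 20)))))));

    for (auto&& bag : cursor) { /* render / update each nearby bag */ }
}
[/code]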
  10. Are there any books or article series discussing database use in MMOs? There are a few one-off posts on various sites, but I'd be interested in a more comprehensive examination.

     I've done a bit of SQL work, and my impression is that it would be a square-peg-in-a-round-hole situation trying to get the relational model to make sense for an MMO. If we take the simplified case of just inventory management, would you maintain a table of every item in the game universe, with a field holding the UID of the container it lives in? Wouldn't that make a simple query like "what's in my inventory?" take forever? Many MMOs get around this by having a limited number of slots in your inventory, with few objects existing outside of players' inventories (or banks). But if you wanted to really open that up and do an inventory system like the late Ultima games, or even something like Skyrim, where the player can have hundreds of unique items and there are tens of thousands of unique items randomly placed throughout the world, I imagine things quickly get hairy.

     You'd probably also want a database model that lends itself to spatial queries (what items exist within a 20 meter bubble of some position in the world), which isn't something relational databases do very well. I feel like there's enough domain-specific knowledge here that there must be a book or article series somewhere, either on getting relational databases to work well or on some NoSQL technology.
  11. Numsgil

    How Physics Engine Works?

    Try posting on the Havok forum. They're usually pretty good about answering questions about use cases and such. http://software.intel.com/en-us/forums/havok/
  12. Numsgil

    Computing space travel

    They were launching craft 50 years ago and doing the math. The computing power I have in my toaster is probably 1000 times what NASA had then. It'd still be a bit of a project to read up on the academic papers and figure out how to approach the problem, but even a completely naive implementation would probably run more than fast enough. Especially if it's an autopilot feature for a proper simulation-type game, there's nothing wrong with a "plotting autopilot course" wait dialog of a second or two.
  13. Numsgil

    Realistic Fluid Simulation

    If you haven't yet, you should read through this set of Intel articles on fluid sim. It specifically uses a hybrid approach where vortons and a static grid interact, so you can (hopefully) get the best of both worlds. Importantly, the articles come with source you can try out (though I haven't actually tried it yet).

    To your specific problem, I think at least part of the trick is to not put the vortons on the actual boundary. Instead you offset the vortons a bit from the boundary, but set them up so that they still cancel out the velocity at the boundary surface. This helps eliminate potential singularities, as I understand it. For velocity flow exactly perpendicular to a flat surface, your vortons would happen to be on the surface, but to the left or right of the point on the boundary you're trying to cancel out. For non-perpendicular flows, the vorton will be a bit ahead of the surface, at a glancing angle to the point you're trying to cancel velocity at. Importantly, the actual positions of these new vortons change based on the relative angle of the incoming flow and the surface.

    I'm curious whether it's possible to construct something like a vorton, but with surfaces instead of just points. For a sphere, say, you'd construct something vorton-like that, instead of causing vorticity, pushes velocity out in all directions uniformly. That, centered right on top of the sphere and tweaked so that velocity just ahead of the sphere is 0, and added to the background uniform flow, might produce a realistic flow pattern cheaply. I'm not sure exactly; I probably need to think it through a bit more. At first glance it seems like it would violate some conservation laws, and it would probably only really make sense for a uniform background velocity field. The more chaotic the flow around the sphere, the more likely you'd need more than one primitive, no matter what your primitive is (vorton or something else).
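A sketch of the offset-vorton cancellation idea from the second paragraph above. The falloff law here is one common regularization of a point vortex, not necessarily the exact law from the Intel articles; Vec3 and the helper names are made up.

[code]
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float Length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

const float kFourPi = 4.0f * 3.14159265f;

// Velocity induced at x by a vorton at p with vorticity w and core radius R.
// Clamping the distance at R keeps the field finite inside the core, which
// is the same reason the articles keep vortons off the boundary itself.
Vec3 VortonVelocity(Vec3 x, Vec3 p, Vec3 w, float R)
{
    Vec3 r = x - p;
    float d = std::max(Length(r), R);
    return Cross(w, r) * (1.0f / (kFourPi * d * d * d));
}

// Pick a vorticity so the vorton at p exactly cancels the ambient velocity
// vAmb at boundary point b. Assumes the offset r = b - p is perpendicular
// to vAmb and at least R long: then Cross(w, r) = vAmb * -1 * 4*pi*|r|^3,
// and VortonVelocity(b, p, w, R) comes out to exactly -vAmb.
Vec3 CancellingVorticity(Vec3 b, Vec3 p, Vec3 vAmb)
{
    Vec3 r = b - p;
    return Cross(r, vAmb * -1.0f) * (kFourPi * Length(r));
}
[/code]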
  14. I'm working on a 2D graphics engine using Shader Model 3 to draw the primitives (lines, circles, polygons, etc.). I want to be able to draw polygons with an "interior" region of one color and an "exterior" region of another color (basically drawing a border around a polygon). So you could imagine drawing the polygon once, then shrinking it, and drawing it again in a different color; basically what this Stack Overflow thread is talking about. In my case, however, I'd like the border to be a constant thickness in pixels, so as you zoom in/out the world-space thickness of the border changes. And ideally I don't want to perform any extra CPU-side computation to do this, which means doing it in the pixel shader by testing whether a given pixel is inside the border or the interior of the polygon.

     I already have a triangulation algorithm that works using ear clipping. I'm trying to figure out a way (or whether there is a way) to get a pixel-in-the-interior-but-not-the-border test in the pixel shader using essentially only local information from the polygon's vertices (plus maybe neighbor information). Right now I'm basically doing a series of half-plane checks using the polygon edges on either side of the vertices: I have a triangle with 3 vertices from the polygon, and I pass information on those 3 vertices and their neighbors into the shader and construct the half-planes from that (see the sketch below). This seems to work for convex polygons, but for non-convex polygons I'm getting artifacts that make me suspect the idea is fundamentally flawed. (The intersection of half-planes forms a convex region anyway, so I'm pretty sure this just won't work.)

     There are a lot of ways to approach the problem in the literature on the web, but it's not clear which might work best for this situation. Usually the goal is to get a new polygon with different vertices after the shrinking. I think (hope) it's possible to instead turn it into an implicit test (so you test a point and get a yes/no for whether it's inside the shrunk region). My current idea is to calculate the straight skeleton and triangulate that. Then the shader becomes a much simpler test, since each triangle would just need to check a single half-plane. But that represents a lot of work... so I'm curious if others have ideas or have done something similar.
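A sketch of the per-edge half-plane test described above, written as plain C++ (the SM3 HLSL version is essentially line-for-line the same); the names and the worldPerPixel parameter are illustrative. As the post notes, intersecting half-planes can only carve out convex regions, so this classifies the band along each edge correctly but misbehaves near reflex vertices.

[code]
#include <cmath>

struct Float2 { float x, y; };

// Signed distance from q to the infinite line through a -> b,
// positive on the left of a->b (the interior, for CCW winding).
float EdgeDistance(Float2 q, Float2 a, Float2 b)
{
    float ex = b.x - a.x, ey = b.y - a.y;
    float len = std::sqrt(ex * ex + ey * ey);
    return (ex * (q.y - a.y) - ey * (q.x - a.x)) / len;
}

// A pixel counts as border if it sits within the band just inside an
// incident edge. thicknessPx * worldPerPixel converts a constant
// on-screen width into world units, so the band stays a fixed pixel
// thickness as the camera zooms.
bool InBorderBand(Float2 q, Float2 a, Float2 b,
                  float thicknessPx, float worldPerPixel)
{
    float d = EdgeDistance(q, a, b);
    return d >= 0.0f && d < thicknessPx * worldPerPixel;
}
[/code]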
  15. Quote: "Texture lookup is an idea, but it'd take a lot of lookups to get the same amount of data as a single register (even assuming floating-point textures, you'd need 4 lookups). If you implement the decompression with divides it doesn't seem like a bad amount of math in terms of operation count. All the operations are vectorized, after all. On a side note, in SM3.0 you have 256 separate registers for the vertex shader and pixel shader; how on earth have you managed to blow that and not kill performance? ;)"

     There are >200 constant registers, and I still have plenty of those. But there are only 10 "interpolated" registers. I don't actually need them interpolated, but I do need them specific to each triangle, which means passing them to the pixel shader as texture coordinates from the vertex shader. Even then, I haven't quite run out of interpolated registers, but I'm close (I'm at about 9.5 used registers). My pixel shader is getting pretty beefy (~2K instructions), so I might need to cut out features to make it fit on lower-end SM3 cards, etc., but for now I'm just stuffing everything I want it to do into the shader.

     That seems reasonable enough for 2 colors. But you're only using part of the mantissa, so you still have a lot of wasted bits, and you can't store 3 colors that way (the mantissa is only 23 bits wide). I get fun error messages like this:

     "Problem building "Main.fx", "(1): error X5629: Invalid register number: 12. Max allowed for o# register is 11. ID3DXEffectCompiler: Compilation failed"
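A sketch of the divide-based pack/unpack being discussed, assuming two 11-bit values per float. Two 11-bit fields fit in the 23-bit mantissa exactly, which is also why a third color can't. The function names are made up.

[code]
#include <cmath>

// Pack two 11-bit values (0..2047) into one float. The largest result is
// 2047 * 2048 + 2047 = 2^22 - 1, which a 23-bit mantissa stores exactly.
float PackPair(int a, int b)
{
    return float(a) * 2048.0f + float(b);
}

// Shader-style inverse: one divide, one floor, one multiply-subtract.
void UnpackPair(float f, int& a, int& b)
{
    a = int(std::floor(f / 2048.0f));
    b = int(f - float(a) * 2048.0f);
}
[/code]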