Dirge

Member · Content Count: 474 · Community Reputation: 300 Neutral

  1. Dirge

    exp(-x) vs 1 - x

    I'm likely using the wrong terms, but I am surprised I'm completely unintelligible to you smart gents. :-) I'll break it down and maybe you guys could correct me. Please don't misinterpret this reply as me getting defensive -- I really just want to make sure to get this stuff right.

    First, alvaro, your language pretty much sums up what started this chain of thought in the first place; in optimizing a bit of code I found that 1 - x is close to exp(-x) but obviously faster, the loss in accuracy from the first-order approximation being acceptable. When I said 1 - x is a linear function, I meant more in regards to its plot on a graph, but here is a definition from Wikipedia to back that up: "These functions are known as 'linear' because they are precisely the functions whose graph in the Cartesian coordinate plane is a straight line." If you don't believe me, find a graphing calculator online, plot the series Kambiz described to the 3rd order or so, and compare it against 1 - x. Furthermore: "In analytic geometry, the term linear function is sometimes used to mean a first-degree polynomial function of one variable." Hence what I meant by "analytic polynomial" (though I could definitely have worded that better).

    mystical49: Well, let's start with this: "a continuous function is a function for which, intuitively, small changes in the input result in small changes in the output. Otherwise, a function is said to be 'discontinuous'" and "All polynomial functions are continuous". Given that, both 1 - x and exp(-x) can be considered continuous, correct? By "non-differential" I meant non-differentiable. A "differentiable function" is defined as "a function whose derivative exists at each point in its domain". As you add terms in the Taylor series, the derivative at each point is used to move closer to the solution. exp(-x) has this trait while 1 - x does not. OK, school me. :-)
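
    For anyone who wants to see the convergence numerically rather than on a graphing calculator, here's a quick illustrative sketch in plain C++ (just the partial sums of the series compared against the first-order truncation):

    ```cpp
    #include <cmath>
    #include <cstdio>

    // Partial sum of the Maclaurin series for exp(-x): sum of (-x)^n / n!
    double expSeries(double x, int order)
    {
        double term = 1.0, sum = 1.0;
        for (int n = 1; n <= order; ++n)
        {
            term *= -x / n;   // builds (-x)^n / n! incrementally
            sum += term;
        }
        return sum;
    }

    int main()
    {
        const double x = 0.5;
        printf("1 - x     = %f\n", 1.0 - x);          // first-order truncation
        printf("3rd order = %f\n", expSeries(x, 3));  // already very close
        printf("exp(-x)   = %f\n", exp(-x));
        return 0;
    }
    ```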
  2. Dirge

    exp(-x) vs 1 - x

    Sneftel: I'm afraid you may have misunderstood. I was inquiring about the mathematical similarities between two seemingly related functions. Their similarities in output could have been incidental, but intuitively it feels like they have meaning. Thank you for your input, though. :-)

    Kambiz: That makes a lot of sense and is close to what I was thinking! Not quite a Eureka moment, but I think I understand the relationship between these functions better now. Essentially the linear function (1 - x) is just the first terms of the expansion that generates an analytic polynomial -- essentially the difference between a continuous but non-differential function and one that is differentiable. I just played around with those numbers on a graphing calculator and was surprised to see how fast they converge! Thanks guys!
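
    For reference, the expansion in question is the Maclaurin series for exp(-x); cutting it off after the first-order term gives exactly 1 - x:

    ```latex
    e^{-x} \;=\; \sum_{n=0}^{\infty} \frac{(-x)^n}{n!}
          \;=\; 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \cdots
          \;\approx\; 1 - x \quad \text{for small } x
    ```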
  3. Dirge

    exp(-x) vs 1 - x

    Can someone please explain to me the main mathematical difference between exp(-x) and 1 - x when 0 < x < 1? Visualizing the numbers on a graph they appear quite similar -- essentially an inversion, where exp follows more of a slow curve -- but I'm wondering if I'm missing something. Also, I may be incorrect in this assumption, but exp(-x) = 1.0 / pow( 2.71828, x ), no? Just trying to understand the math behind exponents to better intuit their results. Thanks for any help!
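
    That last assumption does hold: exp(-x) = e^(-x) = 1/e^x. A quick numerical check (M_E is the standard-library constant for e; on MSVC it needs _USE_MATH_DEFINES defined before including <cmath>):

    ```cpp
    #define _USE_MATH_DEFINES  // for M_E on MSVC
    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Compare exp(-x), 1/e^x, and 1 - x over the range in question.
        for (double x = 0.1; x < 1.0; x += 0.2)
            printf("x=%.1f  exp(-x)=%f  1/e^x=%f  1-x=%f\n",
                   x, exp(-x), 1.0 / pow(M_E, x), 1.0 - x);
        return 0;
    }
    ```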
  4. Does anyone know of a way to do bitwise logic operations on the framebuffer write in Direct3D 9, similar to what glLogicOp does in OpenGL?
  5. Dirge

    3D Mesh to Volume Texture

    PolyVox: The technique in that GPU Gems article was exactly what I was thinking of with the stencil buffer approach. The downside, again, is that polygons parallel to the view direction are missed, since there are no pixels visible for the stencil test. I'm not sure how bad this would be -- apparently the results are good enough for the collision detection in that fluid simulation, but is it accurate enough for a volume texture? Perhaps nudging such coplanar polygons by a small epsilon would work?

    swiftcoder: I had considered the approach of intersecting the model's polygon bounding boxes against a 3D regular grid to gather the silhouette voxels, but hadn't really thought about how I would fill the cavity inside the model. Flood fill makes a lot of sense (rough sketch below). The only major limitation of this technique is that only matter is filled, not color, which is a requirement for me (and yes, I know the stencil technique suffers from the same problem). If the model were completely convex, a cubemap capture of the colors would work, but I can't rely on that. As far as rendering directly to the volume texture goes, unfortunately my target hardware is D3D9 and that is only supported in D3D10 and up.

    spacerat: I'm aware of this technique, and while it's very good it also suffers from the inability to store voxel colors (since the full 32 bits are used to store the 32 slices). Storing 32 slices in a single pass most DEFINITELY meets my criteria for excellent hardware-accelerated performance, ha. I'll have to see if there's some way to store the color (perhaps to a second MRT buffer). I'm not too worried about being limited to 32 slices, as I can just cap the model and do as many slice passes as needed.

    Jason Z: Interesting! I like how that technique transfers the problem to image space. The idea of using the surface normals in addition to the distance map to determine interior and exterior voxels is a nice touch. The parity check, however, can be done more efficiently using the stencil method outlined in the previously mentioned GPU Gems article, equivalent to the "ray-stabbing" method. Using multiple projections might solve the limitations I mentioned above, however. I'll have to marinate on this further, but thanks for that!

    Thanks for the suggestions so far, guys. I have a lot to think about... [Edited by - Dirge on February 14, 2010 3:53:47 PM]
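
    As a rough sketch of the flood-fill step swiftcoder suggested (grid layout and names here are purely illustrative): seed from a corner of the grid, mark every reachable empty voxel as outside, and whatever empty space remains must be enclosed by the surface.

    ```cpp
    #include <cstdint>
    #include <queue>
    #include <vector>

    enum Voxel : uint8_t { Empty, Surface, Outside, Interior };

    // Flood-fills exterior space from voxel (0,0,0) -- assumed to be outside
    // the mesh -- then tags any unreachable Empty voxel as Interior.
    void classifyInterior(std::vector<Voxel>& grid, int dim)
    {
        auto idx = [dim](int x, int y, int z) { return (z * dim + y) * dim + x; };

        std::queue<int> open;
        if (grid[idx(0, 0, 0)] == Empty) {
            grid[idx(0, 0, 0)] = Outside;
            open.push(idx(0, 0, 0));
        }
        const int step[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };
        while (!open.empty()) {
            int cur = open.front(); open.pop();
            int x = cur % dim, y = (cur / dim) % dim, z = cur / (dim * dim);
            for (auto& s : step) {
                int nx = x + s[0], ny = y + s[1], nz = z + s[2];
                if (nx < 0 || ny < 0 || nz < 0 || nx >= dim || ny >= dim || nz >= dim)
                    continue;
                int n = idx(nx, ny, nz);
                if (grid[n] == Empty) { grid[n] = Outside; open.push(n); }
            }
        }
        for (auto& v : grid)
            if (v == Empty) v = Interior;  // empty but unreachable = inside the mesh
    }
    ```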
  6. Can anyone recommend any good algorithms for converting a 3D polygonal mesh to a volume (3D) texture (or voxelized data)? Specifically, I'm looking for a way to do this with some level of hardware acceleration (D3D9-level hardware).

    One idea I've had is to use the stencil buffer to mask out the pixels for individual volume slices. However, this fails miserably in the pathological case where polygons are coplanar with the camera frustum planes, e.g. a double-sided square where each face is labeled with its direction (-x, +x, -y, +y, ...) will have -x/+x missing if rendered using an orthographic projection.

    Another crazy idea was to build a 3D voxel field and, for each voxel, render a cube map containing only the polygons _inside_ that voxel (so the render origin is at the middle of a voxel wall looking towards the voxel origin). The colors are then summed and averaged, which results in a "color" value (RGB) for that voxel. The z-buffer can be used in a similar way to determine the "solidness" of that voxel. A 32x32 cubemap is probably more than sufficient, and the summing can be done by taking numerous cubemap samples within a special shader (no CPU touching). While slow, this would technically still be hardware accelerated.

    Other options are rasterizing and voxelizing the pixels, or taking the vertices, inserting them into a voxel grid, and marking a voxel solid if a vertex lands in it (a sketch of this fallback follows below). This means missing texture data, though I could probably just manually sample the texture map for a given vertex to get an approximated vertex color... ugh.

    Note that the efficiency of whatever algorithm I choose is merely necessary to reduce development time and is not consequential to the end result, as long as it is of sufficient quality. The goal is eventually to voxelize large data sets (1 million+ polygons) into a collection of volume textures (at a specified granularity), but right now I'm just hoping to be able to render a 10,000-poly model to a 64x64x64 3D texture in under a minute. I'm likely overthinking this, so some thinking outside the box would be very helpful. Thanks ahead of time for any suggestions.
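
    Here's roughly what that vertex-binning fallback looks like; the types and names are illustrative, not from any real codebase, and it assumes non-degenerate bounds and a precomputed per-vertex color sampled from the texture map:

    ```cpp
    #include <algorithm>
    #include <vector>

    struct Vec3      { float x, y, z; };
    struct VoxelRGBA { unsigned char r, g, b, solid; };

    // Bins each vertex into a dim^3 grid spanning [boundsMin, boundsMax] and
    // marks the containing voxel solid with that vertex's approximated color.
    void voxelizeVertices(const std::vector<Vec3>& verts,
                          const std::vector<VoxelRGBA>& vertColors,
                          Vec3 boundsMin, Vec3 boundsMax, int dim,
                          std::vector<VoxelRGBA>& grid)
    {
        grid.assign(dim * dim * dim, VoxelRGBA{0, 0, 0, 0});
        Vec3 ext = { boundsMax.x - boundsMin.x,
                     boundsMax.y - boundsMin.y,
                     boundsMax.z - boundsMin.z };
        for (size_t i = 0; i < verts.size(); ++i)
        {
            // Normalize into [0, dim) and clamp to the grid edge.
            int vx = std::max(0, std::min(dim - 1, (int)((verts[i].x - boundsMin.x) / ext.x * dim)));
            int vy = std::max(0, std::min(dim - 1, (int)((verts[i].y - boundsMin.y) / ext.y * dim)));
            int vz = std::max(0, std::min(dim - 1, (int)((verts[i].z - boundsMin.z) / ext.z * dim)));
            VoxelRGBA& v = grid[(vz * dim + vy) * dim + vx];
            v = vertColors[i];
            v.solid = 1;
        }
    }
    ```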
  7. Perhaps a stupid question, but in a client/server architecture, does it make sense to run the server simulation code at the same rate as the packet/snapshot transmission rate? I.e., the simulation runs at 15 fps (66 ms) and includes things like updating entities, physics, etc., so a game state update is sent to clients 15 times every second (on every simulate). The assumption is that if you're using a good interpolation/extrapolation scheme, it shouldn't matter if you run at a low frame rate, as long as the game state update frequency is greater than or equal to the simulation update frequency. In contrast (and for argument's sake), when would it make sense to run your simulation at a higher frame rate than your game state update rate? I.e., the simulation runs at 30 fps but updates are only sent at 15 fps (a sketch of that loop is below). Thanks ahead of time for any insightful comments. p.s. I normally use the latter method.
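
    For the record, the latter method usually takes the shape of a fixed-timestep loop that sends a snapshot every N ticks. A minimal sketch (SimulateWorld and BroadcastSnapshot are placeholder hooks, not real APIs):

    ```cpp
    #include <chrono>

    // Placeholder hooks -- stand-ins for the real server systems.
    void SimulateWorld(double dt)  { /* entities, physics, ... */ }
    void BroadcastSnapshot(int tick) { /* serialize + send game state to clients */ }

    int main()
    {
        using clock = std::chrono::steady_clock;
        const double SIM_DT     = 1.0 / 30.0;  // 30 Hz simulation
        const int    SEND_EVERY = 2;           // 30 / 2 = 15 snapshots per second

        double accumulator = 0.0;
        int    tick = 0;
        auto   prev = clock::now();

        for (;;)  // server main loop (runs until shutdown)
        {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - prev).count();
            prev = now;

            while (accumulator >= SIM_DT)
            {
                SimulateWorld(SIM_DT);
                accumulator -= SIM_DT;
                if (++tick % SEND_EVERY == 0)
                    BroadcastSnapshot(tick);  // state update at half the sim rate
            }
        }
    }
    ```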
  8. Dirge

    Input Messages

    The previous way had less coupling, in that any message type was strictly limited to and enforced by the observer/observable that used it. In addition, message types were anonymous, and notifications could come directly from the source, since that observable tracked who was observing it. The new way requires 1) pools for each message type (which the observable served as before), and 2) global declaration of message types. I think there's an easy way to solve (2) by anonymizing message type IDs, but, like I said, it just doesn't feel elegant to me (which is to say things don't just fall into place -- they have to be jiggled a little to get there).
  9. Dirge

    Input Messages

    Alright, so I implemented the method I described, and it works well for the most part. There are some things that are not so elegant, but in general I think it works better. What I did was make a message system that allows classes to register themselves to receive an event whenever a message they want to watch for is sent. When something wants to send a message, it notifies the message system, which in turn queues that message to be processed later. At a certain point, all queued messages are dispatched and notifications are delivered to the watchers. What's nice is that classes don't need to inherit from anything to be an observer or observable -- it's implicit. You just send a notify with a message and anyone watching for that type gets it eventually. The downside is that to find out who is registered for a message I needed to create a separate pool for each message type (otherwise I'd be searching ALL registered classes over and over -- very slow). Before, the observable would just keep a list of everyone watching it (which also made sending the notification pretty simple); when an object registered with it, it sent a filter with all the messages it wanted (if not all of them). Now, instead of a bit mask, every message type is given an ID (and pool). That sucks a little, but I think it's still better than the inheritance dependency. A rough sketch of the shape of it is below.
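
    The types and names here are illustrative, not the actual code -- per-message-type pools of handlers, a queue, and a dispatch step:

    ```cpp
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <unordered_map>
    #include <vector>

    using MessageId = uint32_t;

    struct Message { MessageId type; void* payload; };

    class MessageSystem
    {
    public:
        // Pool per message type: only watchers of this id are stored/searched.
        void Watch(MessageId type, std::function<void(const Message&)> handler)
        {
            pools[type].push_back(std::move(handler));
        }

        // Senders stay anonymous; the message is queued, not delivered inline.
        void Notify(const Message& msg) { pending.push(msg); }

        // Called at a fixed point in the frame; drains the queue to watchers.
        void Dispatch()
        {
            while (!pending.empty())
            {
                Message msg = pending.front(); pending.pop();
                auto it = pools.find(msg.type);
                if (it == pools.end()) continue;
                for (auto& handler : it->second)
                    handler(msg);
            }
        }

    private:
        std::unordered_map<MessageId,
            std::vector<std::function<void(const Message&)>>> pools;
        std::queue<Message> pending;
    };
    ```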
  10. So a little while back I implemented an input messaging system based on the observer design pattern that notifies anybody observing an input device whenever an input event occurs (i.e. mouse movement, button pressed...). I've found this to be superior to the way I had been processing input previously, which was just to ask the input system if a device had a specific key down, a cursor's current location, etc. However, I've run into some issues. For instance, if an input event occurs that requires that an object observe the same input device that triggered the event, well, badness ensues. An obvious way to fix this would be to put the input events in a queue and process them at some later point, which is not a bad way and I'll probably try it, but I'm curious about opinions on this method in general. The technique requires an input listener (which serves as the observer), which is kind of an annoying step I'd prefer to do without. Would it perhaps be better to just queue input events and process them in place? In other words, the input system doesn't notify an observer of an input event but rather allows objects to go through all the input events and react accordingly (a sketch of that alternative is below). What do you guys do in your projects? Thanks ahead of time for your feedback.
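
    The in-place alternative might look like this -- no listener objects, consumers just walk the frame's event queue during their own update. The InputEvent fields and names are illustrative:

    ```cpp
    #include <vector>

    // Illustrative event record -- real fields depend on the input system.
    struct InputEvent
    {
        enum Type { MouseMove, ButtonDown, ButtonUp } type;
        int x, y, button;
    };

    class InputSystem
    {
    public:
        // Device code appends events here during the OS message pump.
        void Push(const InputEvent& e) { events.push_back(e); }

        // Anyone may iterate the frame's events in place -- no registration.
        const std::vector<InputEvent>& Events() const { return events; }

        void EndFrame() { events.clear(); }  // drop this frame's events

    private:
        std::vector<InputEvent> events;
    };

    // Usage: a game object reacts during its own update, no observer needed.
    void PlayerUpdate(const InputSystem& input)
    {
        for (const InputEvent& e : input.Events())
            if (e.type == InputEvent::ButtonDown && e.button == 0)
                { /* fire weapon, etc. */ }
    }
    ```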
  11. On modern (i.e. next-gen) consoles there is a significant performance penalty for using multiple streams, but it really just depends on how you use them -- it's very context-specific, and in some cases algorithm-specific. For example: if you're using multiple streams to do vertex blending for facial animation, that's faster than the alternative of hundreds of bones transforming verts (a rough setup sketch is below). Also, it's just like Evil Steve said -- it depends on the GPU and driver set.
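
    As an illustration (not from any particular engine), a two-stream morph setup in D3D9 might look like the following: stream 0 carries the base mesh, stream 1 the morph-target deltas, and the vertex shader blends POSITION0 with POSITION1 by a constant weight. Buffer names and layout are assumptions:

    ```cpp
    #include <d3d9.h>

    // Two streams: stream 0 = base mesh (pos + normal), stream 1 = deltas.
    const D3DVERTEXELEMENT9 decl[] =
    {
        { 0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
        { 1, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 1 },
        D3DDECL_END()
    };

    void DrawBlended(IDirect3DDevice9* dev,
                     IDirect3DVertexDeclaration9* vdecl,
                     IDirect3DVertexBuffer9* baseVB,
                     IDirect3DVertexBuffer9* deltaVB,
                     UINT vertexCount)
    {
        dev->SetVertexDeclaration(vdecl);
        dev->SetStreamSource(0, baseVB,  0, 24);  // pos + normal per vertex
        dev->SetStreamSource(1, deltaVB, 0, 12);  // delta positions only
        dev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, vertexCount / 3);
    }
    ```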
  12. Sorry, I shouldn't have narrowed it down to C/C++ codebases; this subject relates to large codebases in any language. Let me pose an additional question: how useful would it be to have a number of sub-directories as an additional organizational structure? In other words, if you were working on your graphics renderer and had a number of files related to models and textures (e.g. not just model.cpp but model_loading.cpp, model_rendering.cpp, model_exporter.cpp, texture_loading.cpp, etc.), does it make sense to have a denser hierarchy like: Renderer/D3D/Include/Models/Model_Rendering.cpp, Renderer/Common/Include/Textures/Texture_Loading.cpp, ...?
  13. Inquiring minds must know! How do you organize your source and header files? Include and Source directories (i.e. Codebase/Include/Foo.h, Codebase/Source/Foo.cpp), or everything together (Codebase/Foo.h, Codebase/Foo.cpp), or some other way? What are the merits of one over the other? How should nested hierarchies work when using an Include/Source directory structure (i.e. Codebase/Include/FooLib/Foo.h or Codebase/FooLib/Include/Foo.h)? I know this is a silly question, but please humor me. :-)
  14. MrBastard: Indeed, I'm with you 100% here. I do use solution-level project dependencies, but I prefer to be explicit about what I link in anyway, so I can do without them. I'm inclined to believe this is an idiosyncrasy of Visual Studio that cannot be easily resolved (and, knowing MS, likely never will be). An all-inclusive solution sounds like a sure bet, at least for now. Thanks again.
  15. _moagstar_: Unfortunately that's not an option. The reason there are two solutions is that one contains the engine code and the other the game-logic-specific code, and that can't change. Since the engine is still in heavy development, I can't just export an SDK and work off that; hence, I'll on occasion change engine code from within the game solution (and switch over to the engine solution to compile). What I may try is creating a new all-encompassing solution that contains the project files for both the game and the engine and see if that alleviates the issue. Thank you for the suggestions.