About voguemaster
  1. How did you derive O(n+m) exactly? It wouldn't be sufficient to test overlap using the vertices alone, so maybe you should explain your idea a little better. One other possible solution is to turn the two polygons into actual polyhedra. Give them some thickness so it will be easier for you to use both of them in an algorithm like GJK. It may be possible to use the flat 3D polygons themselves with the algorithm, but I really don't know if that's going to work, because I don't know whether a set of coplanar vertices (a subset of a polygon's vertices) can make up a simplex, a necessary step in the algorithm. By extruding the polygons into polyhedra you can be sure it's going to work, and based on the penetration depth you can decide whether the collision is of sufficient interest to you.
  2. Rob, you failed to notice that I didn't say to start optimizing like crazy right from the start. I wrote, more than once, to keep those points in mind when designing the code and profiling it. I also wrote that if the number of objects isn't large, or the code isn't performance *critical*, he should use std::map without worrying too much about it. By "ruin performance" I mean that if the code is performance critical, you absolutely need your data to be contiguous in memory, and if you store pointers in a map you have to take that into account. Now, if the OP is storing model data (or pointers to model data) in a scene-graph type of data structure, then no, not all data is going to be present in GPU memory, and generally traversing a scene graph with many objects (let's say several thousand) needs to be as efficient as possible. And Rob, if you call something like "UpdateWorldTransform" through a base class pointer as a virtual function in every node update of your graph, that can hurt performance too - not through the data cache, but the instruction cache. I'm not going to claim that you always need to optimize this part, or that you shouldn't use virtual functions, only that you should design with this in mind; when you do get to profiling, sometimes it's better to structure the code without those virtuals. No matter what you do you'll have to profile, and I never said otherwise. You just need to understand what your hardware is doing, and knowing the possible pitfalls is important as well. That's why Tony Albrecht's "Pitfalls" presentation is so interesting :). I really advise anyone who deals with performance-critical code to take a look at it.
  3. [C++] Templates

    Red Ant, if you look closely at my simple example you'll notice that I didn't use the inline keyword, because it isn't needed. In the first example, where the method's body is part of the declaration, the compiler will in most cases inline that code. In the second case you give it more freedom to decide based on the function and what it does, and it is therefore more likely not to inline every function. It's just like writing method bodies in the header file inside the class declaration, versus only declaring the methods and including an .inl or .cpp file at the end of the header, which is a better approach to inlining. I read a good source about this a while back but forget where, sorry.
  4. The thing is, a node in a map is part of a red-black tree (if I recall correctly), and if you're storing a pointer within a node you might be doing several things that will ruin performance: 1. Storing pointers within complex std containers can be wasteful in terms of memory. For example, std::list at its most basic allocates space for two pointers (prev/next), and possibly another pointer. So spending 12 bytes to store a pointer that is 4 bytes (on 32-bit platforms, of course) is wasteful. I imagine std::set/map may be just as wasteful without a custom allocator, so either use something else in this case or use a custom allocator that does what you want. 2. Pointers by nature point somewhere else. That is, even if your pointers are stored in a contiguous array, the items they point to aren't necessarily in contiguous memory, which is bad for the cache. Keeping the items in a proper layout in memory will boost performance considerably - a point to keep in mind. This can also be done with your custom allocation scheme. 3. One of the rationales for keeping pointers to objects in containers is to store a pointer to a base class and allow runtime polymorphism based on object type. While I love the concept, you need to understand that calling a virtual function is by definition dangerous for the cache, to say the least, depending on how much code it needs to load: the code also has to be loaded into the instruction cache before it executes. I don't know how many of those points you already know; if you do, just ignore what I said, but if not, remember them so that when you profile for performance you'll know the possible pitfalls.
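Point 2 above can be sketched in a few lines. This is a minimal illustration (the `Particle` type and both `sumX` helpers are hypothetical, not from the original post): both functions compute the same thing, but the first walks linear memory while the second chases a pointer per element.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical payload type for illustration.
struct Particle {
    float x, y, z;
};

// Cache-friendly: the objects themselves are contiguous, so iteration
// walks linear memory and the prefetcher can do its job.
float sumX(const std::vector<Particle>& particles) {
    float sum = 0.0f;
    for (const Particle& p : particles) sum += p.x;
    return sum;
}

// Cache-hostile: the vector of pointers is contiguous, but every
// dereference may jump to an unrelated heap address.
float sumX(const std::vector<std::unique_ptr<Particle>>& particles) {
    float sum = 0.0f;
    for (const auto& p : particles) sum += p->x;
    return sum;
}
```

Both overloads return identical results; the difference only shows up in cache behavior when the containers get large.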
  5. 2D quadtree collision - Variety in size

    Graham, I think he meant that object sizes within the leaf nodes vary, which is the same basic problem as with a uniform grid. If I recall correctly, Real-Time Collision Detection (the book by Christer Ericson) mentions quadtrees and octrees with a non-uniform division strategy, but I can't recall its pros and cons; maybe you can check it out. One way to solve this problem with grids is a hierarchical grid: several "grids" with varying cell sizes, so that the grid (and therefore the cell) chosen for an entity is based on its size. Maybe you can do a variation on this here: keep another quadtree for larger objects. A node in the large-object quadtree is the same size as 4 nodes of the regular quadtree, just like the parent of those 4 nodes. To test a large object against smaller ones, you take one node from the big quadtree and the 4 nodes of the smaller quadtree occupying the same space, collect all the objects in those nodes, and perform collision culling between them. I hope you understood what I just said :) For objects that vary that much in size I'd choose a different broadphase anyway. A persistent sweep and prune works very well in almost all situations, even with objects of varying sizes (which is no problem at all).
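The hierarchical-grid idea boils down to picking a level from the object's size. A minimal sketch (the function name and the doubling-per-level scheme are my assumptions, not from the original post): level 0 has the smallest cells, each level doubles the cell size, and an object goes into the first level whose cells can contain it, so it never straddles more than a few cells.

```cpp
// Pick the grid level for an object: level 0 is the finest grid, and each
// level doubles the cell size. We climb levels until the cell is at least
// as large as the object (or we hit the coarsest level available).
int gridLevelForSize(float objectDiameter, float baseCellSize, int maxLevel) {
    int level = 0;
    float cell = baseCellSize;
    while (objectDiameter > cell && level < maxLevel) {
        cell *= 2.0f;  // next level up has cells twice as big
        ++level;
    }
    return level;
}
```

For example, with a base cell of 2 units, an object of diameter 3 lands on level 1 (cell size 4), and anything larger than the coarsest cell is clamped to `maxLevel`.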
  6. Alvaro, you are of course correct. I remember encountering that problem long ago when I wrote code to approximate a sphere with a tessellation of triangles. That reminds me that I read somewhere that researchers found that even in nature it isn't possible to approximate a sphere perfectly with equilateral triangles. They looked at how crystals form around a spherical drop of water and noticed irregularities in the formation. On a plane, the system reaches equilibrium and a lattice of perfect triangles is created; in the spherical case, a vertex may be shared by 5 or even 7 triangles. So even the forces method described here, which should emulate nature's work, will not suffice. At least if I understand correctly :)
  7. Hmm, just FYI - have any of you seen the work done by Ron Fedkiw and his students? He's claiming the code will be released to the community, and from the looks of it, it's incredible compared to anything else I've seen. I don't know when that's going to happen, but damn, it looks awesome :-)
  8. I fail to understand the method you're using to tessellate. Why not subdivide each triangle into 4 equal triangles and renormalize their vertices onto the unit sphere? If you need code for this, I wrote it once and it works quite well. It can also start from other primitives if you need to.
  9. Vanderry, if you're looking at a lot of tree nodes and you'll be doing a lot of queries on them, I'd structure the code a bit differently. First you need to realize that std::map may waste a lot of memory, depending on T, and if good cache behavior is needed for performance reasons, you won't want the default allocation scheme it uses. If the tree doesn't change *much*, "serializing" it into contiguous memory (maybe managed by std::vector, but any other way is fine) may be better; there are ways to express trees in an array-like fashion. Keeping each level in its own array is also a good way to achieve the same thing while simplifying the data structure. However, if the tree changes a lot, this will be a pain in the ass to manage :) The thing is, the way you write the code greatly depends on the data it manages and the way you're using it. It's always a tradeoff between the generality of the code and how well it runs. Now, if there aren't thousands of nodes in the tree, or the performance hit is negligible, forget all that and just use std::map if you need to.
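One common array-like tree encoding hinted at above is the implicit (heap-style) layout. A minimal sketch, assuming a complete binary tree (the helper names are mine, not from the post): no per-node pointers are stored at all - parent/child relationships are pure index arithmetic, and the whole tree lives in one contiguous array.

```cpp
#include <cstddef>

// Implicit binary tree stored in one contiguous array (heap layout):
// the root is at index 0, node i's children are at 2i+1 and 2i+2,
// and its parent is at (i-1)/2. A breadth-first traversal is just a
// linear scan over the array, which is ideal for the cache.
inline std::size_t leftChild(std::size_t i)  { return 2 * i + 1; }
inline std::size_t rightChild(std::size_t i) { return 2 * i + 2; }
inline std::size_t parentOf(std::size_t i)   { return (i - 1) / 2; }
```

The same trick generalizes (a quadtree node i would have children at 4i+1 through 4i+4), but as the post notes, it only pays off when the tree shape is stable.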
  10. How to resize a java BufferedImage?

    Hi, by resizing do you mean creating a BufferedImage with new dimensions, or taking an image and rendering it on a graphics context with whatever scaling you want? I'm assuming the first, and no, you can't resize one in place. What you *can* do is create a new, larger BufferedImage and draw the original onto it:

    Graphics g = newImage.getGraphics();
    g.drawImage(...); // here you draw using the old image

    Basically you can draw into the new image however you like.
  11. [C++] Templates

    Hi, to understand this in a better way you can read the section about "Templates in practice" in C++ Templates: The Complete Guide. In any case, there are two ways to write the member functions as well: inline in the class definition, or outside it, even in the same header file, like so:

    1. Inline in the class definition:

    template<typename T>
    class MyClass
    {
    public:
        T* m_pT;

        void foo(T* t)
        {
            // call some operation on T
            t->bar();
            m_pT = t;
        }
    };

    2. Outside the definition you must name the class with its full name:

    template<typename T>
    class MyClass
    {
    public:
        T* m_pT;

        void foo(T* t);
    };

    // Now comes the function. Notice it's preceded by the template
    // parameters, and MyClass<T> is used as the name of the class it
    // belongs to!
    template<typename T>
    void MyClass<T>::foo(T* t)
    {
        // call some operation on T
        t->bar();
        m_pT = t;
    }

    It's not always known, but the compiler will freely inline functions written as in 1, and is less likely to do so when they're written as in 2. This can affect your code size considerably, since the template's header may be included in many compilation units. I prefer a standard function call unless I *KNOW* that an inline version will serve me better; otherwise it's just a lot of waste. As for .cpp files - either include them at the end of your header file or don't bother with them. As I said before, check out the book for an explanation, it's a good one :)
  12. Any good books?

    Hi, I primarily like books about physics and collision detection, but I've read some others too. I can attest that the following are a great read, IMO:

    1. Real-Time Rendering. It's just a gold mine and one of the best books I've seen.
    2. Real-Time Collision Detection, my personal favorite.
    3. Physics Based Simulation. Excellent at describing the main approaches to physical simulation (although some other books elaborate more, like Eberly's Game Physics, or various papers out there).
    4. Geometric Tools. For anything math related in this field, this is like a bible :)

    If I remember anything else... I'll post.
  13. Heck, I'd like to work at BioWare too :). Can't say enough good things about their games. Kudos.
  14. Physics with a huge scene

    And let's not forget it's possible to represent cell locations, or even object locations, using 64-bit fixed point with a small number of fractional bits (which gives great range).
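A minimal sketch of the fixed-point idea (the 16-bit fraction split and all names are my own illustrative choices; note `__int128` is a GCC/Clang extension used here only to keep the multiply overflow-safe): with 16 fractional bits a signed 64-bit value covers roughly ±2^47 integer units at a resolution of 1/65536, far more positional range than a 32-bit float offers.

```cpp
#include <cstdint>

// 64-bit fixed point, Q47.16: 16 fractional bits, ~2^47 integer units of
// range. The fraction width is an arbitrary illustrative choice.
constexpr int kFracBits = 16;

int64_t toFixed(double v)    { return static_cast<int64_t>(v * (1LL << kFracBits)); }
double  fromFixed(int64_t f) { return static_cast<double>(f) / (1LL << kFracBits); }

// Addition and subtraction are plain integer ops.
int64_t fixedAdd(int64_t a, int64_t b) { return a + b; }

int64_t fixedMul(int64_t a, int64_t b) {
    // Widen the intermediate product (GCC/Clang __int128 extension) so the
    // multiply can't overflow, then shift the extra fraction bits back out.
    return static_cast<int64_t>((static_cast<__int128>(a) * b) >> kFracBits);
}
```

Cell coordinates stored this way compare and hash exactly, with none of the precision drift you get from floats far from the origin.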
  15. Adam, you're right, but the picture is far from complete. An array local to a function can give a better speed boost when performance is a concern, because it helps the compiler optimize the code better; otherwise the compiler assumes aliasing and emits less than optimal code. So basically it depends on profiling, like everything else. Just a couple of things for the original poster to note: 1. At least on Windows, the default stack space per thread is 1 MB, and every platform has its own threshold, so allocating too much on the stack (like 32 KB) is not a good idea. 2. Heap allocation is expensive both when allocating and especially when deallocating: each memory operation is a search through the allocator's tables, and some implementations also coalesce blocks when freeing memory to defragment the heap. It's one of those things with no single answer - you have to test what your hardware and OS are doing and pick a solution that is good both for performance and for efficient memory management. If neither performance nor fragmentation bothers you, just do whatever you need :) That's always the most fun, heh.
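The aliasing point above can be made concrete. A minimal sketch (both functions are hypothetical illustrations, not from the post): with caller-supplied pointers the compiler must assume the buffers might overlap, whereas a local array is known to alias nothing else, which frees the optimizer to vectorize and keep values in registers.

```cpp
// Caller-supplied pointers: the compiler must assume 'dst' and 'src'
// may overlap, which blocks some reordering and vectorization.
void scale(float* dst, const float* src, int n, float k) {
    for (int i = 0; i < n; ++i) dst[i] = src[i] * k;
}

// Local array: 'tmp' provably aliases nothing outside this function,
// so the compiler can optimize both loops much more aggressively.
// (Small and fixed-size on purpose - stack space is limited.)
float sumScaled(const float* src, int n, float k) {
    float tmp[64];
    if (n > 64) n = 64;
    for (int i = 0; i < n; ++i) tmp[i] = src[i] * k;
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += tmp[i];
    return s;
}
```

The 64-element buffer deliberately stays tiny, in line with point 1 of the post: the optimization win comes from the known-no-alias property, not from the buffer's size.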