XXX_Andrew_XXX

Member
  • Content Count

    162
  • Joined

  • Last visited

Community Reputation

340 Neutral

About XXX_Andrew_XXX

  • Rank
    Member
  1. After four years of working on a PhD thesis in motion planning, I can tell you that you're very, very unlikely to find a drop-in implementation of ADA*. There are a few reasons why:
     - ADA* implies some degree of concurrency. That is, the planning 'Main' function of figure 5 runs in one thread, whilst your execution and sensing/obstacle code runs in other threads. A major question is then, 'how do I efficiently update my obstacle map without using excessive memory and while avoiding concurrency bugs?' A rough sketch of that threading arrangement is given below.
     - Like most planners described in academic publications, the focus is on describing a generic concept. A specific implementation will involve a selection of concrete data structures (priority queues, open lists, sets) that are potentially hand-tweaked to suit a given application.
     - Finally, the pseudo-code presented in the paper, coupled with the description of the algorithm, is *the* reference implementation for this type of problem. Suck it up and write some code.
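     To make the concurrency point concrete, here is a minimal Python sketch of the thread layout only: a planner loop and a sensing loop sharing a lock-protected obstacle map. It is not ADA* itself, and ObstacleMap, replan and sense are invented stand-ins for illustration.

     import threading
     import time

     class ObstacleMap:
         """Shared cost map; all access goes through one lock."""
         def __init__(self):
             self._lock = threading.Lock()
             self._pending = []              # cells whose cost changed since the last plan

         def report_changes(self, cells):
             with self._lock:
                 self._pending.extend(cells)

         def take_changes(self):
             with self._lock:
                 changed, self._pending = self._pending, []
                 return changed

     def replan(changed_cells, current_path):
         # Stand-in for the incremental edge-cost update + anytime improvement step.
         return current_path

     def sense():
         # Stand-in for reading the robot's sensors.
         return []

     def planner_thread(world, stop):
         path = []
         while not stop.is_set():
             path = replan(world.take_changes(), path)
             time.sleep(0.05)                # planning period / time budget

     def sensing_thread(world, stop):
         while not stop.is_set():
             world.report_changes(sense())
             time.sleep(0.01)

     world, stop = ObstacleMap(), threading.Event()
     workers = [threading.Thread(target=f, args=(world, stop), daemon=True)
                for f in (planner_thread, sensing_thread)]
     for t in workers:
         t.start()
     time.sleep(0.5)
     stop.set()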
  2. XXX_Andrew_XXX

    Is there a built-in Python vector class?

    In almost all cases which make use of vector math, the array type in the numpy module is appropriate. All of the relevant functions are implemented in optimized C, and the types interoperate cleanly with Python lists. A few convenience operations you might expect on a dedicated 2- or 3-dimensional vector class aren't provided as single methods, but they are very easy to reproduce from the pieces numpy does give you.
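    For illustration, here is how the usual vector operations look with numpy arrays (the values are arbitrary):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    a + b                      # element-wise addition
    2.5 * a                    # scalar multiplication
    np.dot(a, b)               # dot product
    np.cross(a, b)             # cross product
    np.linalg.norm(a)          # length
    a / np.linalg.norm(a)      # unit vector
    list(a)                    # back to a plain Python list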
  3. XXX_Andrew_XXX

    rebuilding a std::priority_queue

    The standard C++ heap algorithms (std::make_heap, std::push_heap, std::pop_heap) work on any random-access container, given a comparator that imposes a strict weak ordering on the elements. The simplest implementation would be:

    struct VertexDistancePredicate
    {
        // With '<' the vertex with the LARGEST distance ends up at the front of the heap;
        // for a Dijkstra/A*-style min-queue, compare with '>' instead.
        bool operator()(const Vertex* v0, const Vertex* v1) const
        {
            return v0->getDist() < v1->getDist();
        }
    };
    ...
    std::vector<Vertex*> q;
    VertexDistancePredicate pred;

    std::make_heap(q.begin(), q.end(), pred);  // establish the heap property over q

    q.push_back(v);                            // the new element goes at the back first...
    std::push_heap(q.begin(), q.end(), pred);  // ...then push_heap sifts it into place

    std::pop_heap(q.begin(), q.end(), pred);   // moves the front element to the back...
    q.pop_back();                              // ...so pop_back actually removes it
  4. In the POSA book (Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2), the pattern is described as Pipes and Filters. However, in this case what you're describing is a C++-specific anti-pattern.

     I assume what you're looking for is the flexibility to design and assemble components in an arbitrary fashion at runtime. This type of architecture is attractive in situations where you need to be able to add numeric/scientific data transformations to a data set (e.g. for image processing). The best example of this architecture is Microsoft's DirectShow.

     If you need flexibility at runtime, static typing may not be the best way to go. Although templates and MPL might allow you to produce a working system, they tend to be quite brittle from a design perspective - you still have to do a lot of work duplicating the function signatures that you're overriding. An alternative is to pass less strictly typed objects around your system and add runtime tests to check whether data is in the correct format. boost::any is a neat way to do this. From experience, you're probably better off looking at a language which has better built-in support for reflection and dynamic typing (Python/C#/Java/MATLAB). If performance in these languages is an issue, consider rewriting only the performance-critical task bodies in C/C++.

     From a concurrency perspective, such architectures tend to scale poorly with respect to load balancing and resources. One thread per task isn't necessarily the best way to divide the problem into pieces, particularly if the arrangement of pipes and filters can change at runtime. For example, a single slow stage in the middle of the pipeline can have a severe impact on the rest of the components (threads later in the pipeline are left with nothing to do), which effectively makes the problem serial. An alternative approach is to use a job queue and a thread pool whose size depends on the available resources (number of cores); a rough sketch of that arrangement follows below.
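     As a rough Python sketch of that job-queue/thread-pool arrangement (Pipeline, scale and clamp are invented names for illustration; the runtime format check plays the role that boost::any tests would play in C++):

     import os
     import queue
     import threading

     class Pipeline:
         """Filters are plain callables assembled at runtime."""
         def __init__(self, *filters):
             self.filters = list(filters)

         def __call__(self, item):
             for f in self.filters:
                 item = f(item)
             return item

     def scale(factor):
         def apply(data):
             if not isinstance(data, list):          # runtime check instead of static typing
                 raise TypeError("scale expects a list of numbers")
             return [x * factor for x in data]
         return apply

     def clamp(lo, hi):
         def apply(data):
             return [min(max(x, lo), hi) for x in data]
         return apply

     # One pool sized to the machine, rather than one thread per filter.
     jobs, results = queue.Queue(), queue.Queue()
     pipeline = Pipeline(scale(2.0), clamp(0.0, 10.0))   # assembled at runtime

     def worker():
         while True:
             item = jobs.get()
             results.put(pipeline(item))
             jobs.task_done()

     for _ in range(os.cpu_count() or 2):
         threading.Thread(target=worker, daemon=True).start()

     for chunk in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]):
         jobs.put(chunk)
     jobs.join()                                         # wait for all chunks to be processed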
  5. XXX_Andrew_XXX

    AI for an Air Hockey player

    The most basic school of thought is to design a set of layered behaviors which can be used as components in a state machine.

    At the lowest level there are two competencies:
    * setting a new trajectory for the puck (a path which includes timing information)
    * estimating future states of the puck

    At the middle layer you can add a number of basic air hockey defensive and attacking techniques:
    * standard block
    * block off the sidewall in the AI half (when you 'set' the puck to be slammed into the other half)
    * standard attack
    * sidewall attack in the opponent's half (useful when the opponent comes forward)

    At the top layer you can use a state evaluation function (state machine) to select the technique which is most appropriate at the current time; a toy example of this selection step is sketched below.

    An alternative to the state machine / layered behaviors is machine learning. Although these techniques might work, it would probably be a bit of a stretch to get something like that going in a reasonable amount of (your) time. The basic concept is to try to maximize the probability of scoring goals whilst minimizing the chance of the opponent scoring. The problem with even something as simple as air hockey is that the problem is large. The state space (velocities and positions for the two players and the puck) is large, continuous and contains significant discontinuities (collisions) in some regions. The action space (the velocity of the AI) is more reasonable, but is still continuous. As for specific ML techniques, anything which results in a function mapping states to actions will give you *a* strategy, but controlling the level of difficulty of the AI is hard. Although something like a neural network might work, it's probably not worth the time you'll spend on training and debugging. AFAIK a 12D state space is probably verging on being too difficult to solve with standard dynamic programming / reinforcement learning techniques. Google keep-away soccer, a simplified RoboCup variant, if you're interested in a hybrid behavior / reinforcement learning approach.
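    To make the top-layer idea concrete, here is a toy Python sketch of technique selection. The thresholds, table dimensions and technique names are invented for illustration; a real evaluator would score each technique against predicted puck states.

    def predict_puck(puck_pos, puck_vel, dt):
        """Constant-velocity prediction; ignores wall bounces for brevity."""
        return (puck_pos[0] + puck_vel[0] * dt, puck_pos[1] + puck_vel[1] * dt)

    def select_technique(puck_pos, puck_vel, table_length=2.0):
        future = predict_puck(puck_pos, puck_vel, dt=0.25)
        moving_toward_ai = puck_vel[1] < 0          # the AI defends the low-y end of the table
        in_ai_half = future[1] < table_length / 2
        if moving_toward_ai and in_ai_half:
            return "standard_block"
        if not moving_toward_ai and not in_ai_half:
            return "sidewall_attack"
        return "standard_attack"

    print(select_technique(puck_pos=(0.3, 1.2), puck_vel=(0.1, -0.8)))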
  6. XXX_Andrew_XXX

    Design a scientific data visualizer

    Check out matplotlib (http://matplotlib.sourceforge.net/); it's a Python library with plotting functionality similar to MATLAB's.
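    For a feel of the MATLAB-style interface, a minimal line plot looks something like this (a basic pyplot example, nothing project-specific):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0.0, 2.0 * np.pi, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.xlabel("x")
    plt.ylabel("amplitude")
    plt.legend()
    plt.show()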
  7. XXX_Andrew_XXX

    pthreads VS boost::thread

    Quote: Original post by fpsgamer
    The thing is that as I read the boost::thread documentation, the library feels slightly ... incomplete

    I've been lurking about, following the boost threading code for at least the last five or six major Boost releases. One of the old maintainers of the package (Bill Kempf) disappeared and not much happened for a while; consequently, where most of the other boost packages have had major documentation upgrades, boost::thread hasn't (other than the conversion to DocBook). The only really significant changes to the boost::thread implementation have been related to thread-specific storage, particularly with respect to making it function correctly regardless of the linking model used on Windows.

    As for reliability, you really aren't going to find better threading code anywhere that will run (pretty much) everywhere with similar results. If simple threads and mutexes are all you need then, sure, by all means roll your own. If however you need reliable cross-platform support for practical synchronization primitives (barriers, condition variables, ...) that work and are *fair*, you'll struggle to implement, test and validate them to the same degree as the boost code.
  8. XXX_Andrew_XXX

    Inverse Kinematics library

    The only open-source IK library I know of is Peter Corke's MATLAB Robotics Toolbox (Corke is a very highly respected Australian scientist/researcher): http://www.petercorke.com/Robotics%20Toolbox.html . Given enough time it might be possible to convert this to native code. IK can get pretty ugly, and it's nice if you don't have to worry about the intricacies of all of the math.

    In some cases there isn't one clearly defined 'optimal' solution to a problem. In such cases it might make more sense to explore alternatives which use forward simulation to produce many approximate solutions very quickly. James Kuffner (http://www.kuffner.org/james/papers/) has some great papers on using RRTs as an alternative to IK for controlling simulated humans.
  9. XXX_Andrew_XXX

    Field D* implementations or explanations

    Quote: Original post by mfawcett
    Does anyone know of any sample code implementing Field D*?

    Like most of the algorithms in this family (A*/D*/D*-lite/LRTA*), the implementation depends heavily on the type of data structure you are using to represent the grid/graph. I've implemented some of these algorithms in Python and C++, and getting started was one of the hardest parts.

    0. Make sure your underlying data structures are ready before you start implementing Field D* (your graph and priority queue structures should be really well tested). This may involve using grossly simplified data structures (sorted vectors instead of heap-based queues, and trivial adjacency-list implementations of the graph); aim for correctness at the start rather than performance.
    1. Start by implementing the simplest possible algorithm that works in a similar case (A*); you'll need it as a baseline to test other parts of your system.
    2. Implement D*-lite; it is by far the easiest and best documented of this class of algorithms. From memory this involved a fair bit of fiddling around with my priority queue implementation.
    3. Then try implementing Field D*.

    My implementations are based on maintaining a fixed-length list of nodes, indexed by state. I tend to store the following in each 'node':
    * the accumulated cost g
    * the cached heuristic h (only required if the heuristic calculation is expensive)
    * the backpointer bp (a performance improver; without a backpointer you need to do a little more work when extracting a solution)
    * an 'open' flag
    * a 'closed' flag

    I also provide a lookup function which takes a state and returns all of the states which can be reached from it (a list of the successors). A rough sketch of this scaffolding is at the end of this post. As for the rest, I'm still going through the paper; I'll try to post more in a little while. Look at Dave Ferguson's website for a list of his papers, they are quite comprehensive.

    Quote: Original post by mrsmes
    i honestly don't know but you can go to google and type the question in there

    Sorry to rant, but you're a new member; if you can't directly answer a question in the AI forum, you really aren't qualified to make a statement like this. Just to make it very clear: these algorithms are non-trivial to implement, hard to debug, and hard to ensure that they perform reasonably in a wide range of conditions. This forum has a large number of very intelligent readers whose comments I would like to hear, as I'm sure the OP would.
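    Here is a rough Python sketch of the node bookkeeping described above. The field names mirror the list in the post; the grid layout and the 8-connected successor function are arbitrary choices for illustration, not Field D* itself.

    import math

    class Node:
        __slots__ = ("g", "h", "bp", "open", "closed")
        def __init__(self):
            self.g = math.inf      # accumulated cost
            self.h = 0.0           # cached heuristic
            self.bp = None         # backpointer (the state we came from)
            self.open = False
            self.closed = False

    class Grid:
        def __init__(self, width, height):
            self.width, self.height = width, height
            # Fixed-length list of nodes, indexed by state.
            self.nodes = [Node() for _ in range(width * height)]

        def node(self, state):
            x, y = state
            return self.nodes[y * self.width + x]

        def successors(self, state):
            """All states reachable from 'state' (8-connected in this sketch)."""
            x, y = state
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < self.width and 0 <= ny < self.height:
                        yield (nx, ny)

    grid = Grid(10, 10)
    print(list(grid.successors((0, 0))))   # [(0, 1), (1, 0), (1, 1)]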