


Community Reputation

156 Neutral

About chuck22

  1. AI equal cpu usage

    Quote: Original post by wodinoneeye
    "It really doesn't do much good running the same simplistic AI logic faster..."

    Well, that depends on what the AI logic is doing. If the AI logic is solving a problem, the problem is solved faster. If the AI logic is learning and gaining knowledge, similar to how you mention a human has to teach the AI 'lessons', then that learning happens faster too.
  2. AI equal cpu usage

    Innovative programming does not happen by admitting defeat at the beginning, but I do appreciate you keeping my self-expectations in check. One thing I'm certain of: if this idea is unsuccessful, I will find that out the hard way. I'd still like to keep this on topic for anyone who has read the responses down to this point, so I'm still open to suggestions on fair, parallel processing of multiple NPCs.
  3. Pathing AI around other units

    You may want to search for flocking behavior; avoidance (often called separation) is part of that behavior. Each unit has an invisible radius around it, and the closer another unit gets, the more strongly it is pushed away from that unit's path.
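For what it's worth, the separation part of flocking can be sketched in a few lines of Java. The `Vec2` class and the `SEPARATION_RADIUS` constant below are illustrative, not from any particular engine:

```java
import java.util.List;

// Minimal 2D vector for the sketch.
class Vec2 {
    double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 sub(Vec2 o)     { return new Vec2(x - o.x, y - o.y); }
    Vec2 add(Vec2 o)     { return new Vec2(x + o.x, y + o.y); }
    Vec2 scale(double s) { return new Vec2(x * s, y * s); }
    double length()      { return Math.sqrt(x * x + y * y); }
}

class Separation {
    static final double SEPARATION_RADIUS = 25.0; // illustrative value

    // Steer away from every neighbour inside the radius; closer
    // neighbours push harder (inverse-square distance weighting).
    static Vec2 separation(Vec2 self, List<Vec2> neighbours) {
        Vec2 steer = new Vec2(0, 0);
        for (Vec2 other : neighbours) {
            Vec2 away = self.sub(other);
            double d = away.length();
            if (d > 0 && d < SEPARATION_RADIUS) {
                steer = steer.add(away.scale(1.0 / (d * d)));
            }
        }
        return steer;
    }
}
```

The returned vector is then blended with the unit's desired heading, so a unit resists, rather than refuses, a crowded path.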
  4. AI equal cpu usage

    One of the reasons I'm trying to do this the seemingly difficult way is that software has always been losing the arms race with hardware. Software is written a certain way because it is limited by hardware speeds; when hardware speeds increase, software is rewritten to take full advantage of them. I don't believe artificial intelligence fits that model yet, not at current hardware speeds. If it does fit the model on current hardware, then it isn't artificial intelligence, it's behavior that simulates intelligence. It may sound like I've just defined AI, but what I've defined is the spin-off definition people have been using so far.

    I am not looking to simulate intelligence. Simulated intelligence is not intelligence; it is trickery to fool the person behind the controller. I am looking to create actual intelligence (within a small scope), by which I mean an NPC should be able to solve problems within its domain. If programming intelligence the way we problem-solve yields an NPC that can still only solve complex problems 1000 times slower than a person, that would be great. That would be actual intelligence, and soon enough hardware will come along to speed it up to real-time performance.

    Going back to my original post, I should reiterate the word "simulation". I'm not looking to create a fun massively multiplayer game. I'm looking to practice programming real intelligence, not cheating intelligence (shared state, enemies that always know where you are, opponent cars behind you getting a speed boost to keep the race competitive, etc.). Real-time parallel execution of real, not simulated, intelligence is the goal here.

    The more I think about this and read new responses, the more it looks like I've set the bar way too high. In that case, I'd still like to shoot for real intelligence that runs at a slower speed in this simulation. And the original problem mentioned in my first post is still the main problem.
  5. AI equal cpu usage

    Sneftel: That anytime algorithm idea is pretty interesting. It solves another, unmentioned problem: whether any action can be performed at all when the NPC runs out of "thinking" time. If I implement my algorithms in an 'anytime' way, that issue is solved. I don't believe it solves the main problem, though. An anytime algorithm will give me a partial result at more or less any point in time, but it doesn't help me govern all the NPCs so that each one must hand back a partial result after a specified maximum amount of time (i.e. each turn).

    ddn3: My performance goal is for a small number of NPCs to run in parallel in near real time. Because I'm limited to a two-core processor, I don't want two NPCs hogging the CPU for complex planning, for example.

    I'm surprised this problem doesn't have an obvious answer. There are tons of console games on the market that must have faced it before. Think of a first-person shooter in campaign mode: most have a one-hero-versus-lots-of-bad-guys plot going on. Or a racing game: one car driving against a dozen opponents. All of those agents seem to be thinking in parallel. How do these commercial games solve that problem? Thanks again for the responses.
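For what it's worth, one way to structure a computation so it is "anytime" is to keep the best answer found so far and refine it until a deadline, so calling it again later resumes the refinement. A minimal sketch, with a Leibniz approximation of pi standing in for real planning (the class name is made up):

```java
// Anytime computation sketch: refine a best-so-far answer until the
// time budget runs out, then hand back whatever we have. A second
// call to think() picks up where the first left off.
class AnytimePi {
    private double best = 0.0;
    private long k = 0;

    double think(long budgetNanos) {
        long deadline = System.nanoTime() + budgetNanos;
        while (System.nanoTime() < deadline) {
            // Leibniz series: pi = 4 - 4/3 + 4/5 - 4/7 + ...
            best += 4.0 * ((k % 2 == 0) ? 1.0 : -1.0) / (2 * k + 1);
            k++;
        }
        return best;
    }
}
```

The governing problem from the post still sits outside this class: something central has to decide what `budgetNanos` each NPC gets per turn.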
  6. AI equal cpu usage

    To Narf: This question actually doesn't require extensive knowledge of artificial intelligence; it's more a question about concurrent/parallel programming, though in the context of AI it's a little easier to visualize. Each class does have its own state data, I agree. With Java, we're talking about the members of a Java class. But I also need to take into consideration any variables that are in scope when execution is "broken off". A quick and dirty example:

    for (double i = 0.0; i < 1000000; i += 0.01) {
        this.computeFactorial(i);
    }

    Ignoring the fact that this example is contrived, I think we can agree this for-loop will take a while to run, relatively speaking. Let's say the X amount of time originally referred to is reached when the variable i is 1000.00. Since i is just a local variable, I'm not quite sure how to store the state of this class, which includes storing the value of i, without actually putting a saveState() call inside the for-loop too.

    To lightbringer: I will look into the Executors factory class. I am decently familiar with multithreading in C from taking an operating systems class. Unfortunately, I also learned in that class that different operating systems don't necessarily implement the fair, round-robin scheduling scheme that this solution would need. I'll see what I can find after a bit more research.
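One common way around the local-variable problem in the example above is to promote `i` to a field, so the loop can be broken off after a fixed number of steps and resumed later: a hand-rolled coroutine, essentially. A sketch under that assumption (class and method names are made up):

```java
// Resumable version of the for-loop: the loop variable lives in the
// object, so "saving state" is just not finishing the loop.
class FactorialSweep {
    private double i = 0.0;                // former loop variable, now state
    private static final double END = 1000000;

    // Run at most maxSteps iterations, then return.
    // Returns true once the whole sweep is finished.
    boolean step(int maxSteps) {
        for (int n = 0; n < maxSteps && i < END; n++) {
            computeFactorial(i);
            i += 0.01;
        }
        return i >= END;
    }

    double progress() { return i; }

    private void computeFactorial(double x) {
        // placeholder for the real work
    }
}
```

A scheduler can then call `step(...)` on each NPC in turn; nothing needs to be "saved" because nothing ever leaves scope.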
  7. AI equal cpu usage

    Theoretically that sounds like a solution that would work. But to implement it, I'm thinking I would need to save the state of the AI after each line of code in whatever algorithm that AI is running. That would double the amount of code, and likely more than double the CPU time needed to execute a given algorithm. Could you throw out some pseudo-code to clarify, in case I'm missing the mark here? Though I do like the idea, and it's basically what I'm trying to accomplish, I feel it's probably not the job of each individual AI to count how long it has been executing its algorithm. I need something like a thread monitor. Thanks for the response.
  8. AI equal cpu usage

    I'm working on a Java game/simulation in which several NPCs will be active at once. I want to implement a different type of AI for each NPC, but one problem I came across right away is that I want each NPC to get at most X amount of time to "think" when it is that NPC's turn. In other words, I don't want the game to bottleneck while one NPC is thinking just because I gave it a more resource-intensive algorithm. What I would like to achieve is the following:

    - There are 5 NPCs, each with a different AI algorithm.
    - Each NPC gets X amount of time to run through its algorithm, or Y amount of CPU instructions.

    I've considered assigning each NPC to a thread, but I have no guarantee that the operating system will give each thread the same amount of execution time.
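One cooperative answer to the question above, sketched in Java: a central scheduler hands every NPC the same deadline each tick, and each NPC's `think()` is written so it returns once that deadline passes (for example, by checking `System.nanoTime()` between planning steps). The interface and class names here are illustrative, not from any library:

```java
import java.util.List;

// Each NPC implements think() so that it returns promptly once the
// given deadline (a System.nanoTime() value) has passed.
interface Npc {
    void think(long deadlineNanos);
}

// Central, cooperative round-robin scheduler: every tick, every NPC
// receives an identical time budget, regardless of its algorithm.
class TurnScheduler {
    private final List<Npc> npcs;
    private final long budgetNanos;

    TurnScheduler(List<Npc> npcs, long budgetNanos) {
        this.npcs = npcs;
        this.budgetNanos = budgetNanos;
    }

    void tick() {
        for (Npc npc : npcs) {
            npc.think(System.nanoTime() + budgetNanos);
        }
    }
}
```

This sidesteps the OS-scheduling concern entirely: fairness is enforced in game code, not by the thread scheduler, at the cost of trusting each `think()` to honor its deadline.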
  9. AI in a dynamic Tic-Tac-Toe Game

    Let's say it's a 4x4 grid:

    ----   ----   ----   ----   ----   X---
    ----   -X--   -XO-   -XO-   -XO-   -XO-
    ----   ----   ----   --X-   --X-   --X-
    ----   ----   ----   ----   ---O   ---O

    I'm saying that even on a 4x4 grid, whoever moves first still gets their 3 X's or 3 O's in a row. Once you figure out how to solve your current problem, come up with a basic formula that relates grid size to how many pieces in a row are needed to win. You may also find that you won't need any more than 4 or 5 in a row, or else games will last forever or always end in a tie. Again: solve your current problem first and worry about this later. The number of pieces in a row is simply one more variable you have to change.
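If the win length does become a variable, a board-size-agnostic check is straightforward. A sketch, with illustrative names, where cells hold 'X', 'O', or '-':

```java
// Generic k-in-a-row check on an n x n board, so the win length can
// scale with grid size instead of being hard-coded to 3.
class KInARow {
    static boolean wins(char[][] board, char player, int k) {
        int n = board.length;
        // right, down, down-right, down-left
        int[][] dirs = { {0, 1}, {1, 0}, {1, 1}, {1, -1} };
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) {
                for (int[] d : dirs) {
                    int count = 0, rr = r, cc = c;
                    while (rr >= 0 && rr < n && cc >= 0 && cc < n
                            && board[rr][cc] == player) {
                        count++;
                        if (count == k) return true;
                        rr += d[0];
                        cc += d[1];
                    }
                }
            }
        }
        return false;
    }
}
```

With this in place, the "formula relating grid size to win length" is just the `k` you pass in.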
  10. AI in a dynamic Tic-Tac-Toe Game

    I hope that as your grid size gets bigger, the number of pieces in a row needed to win increases too; otherwise the first person to move wins every time.
  11. circular Big Bang theory

    I would not mind people bringing previous knowledge into this topic, but I don't think they need to have read a physics book. I've taken two years of physics and my first semester of astronomy, and I watch almost every episode related to this stuff on the History/Discovery channel. I haven't read a physics book, but I feel I have the basic credentials to know what I am talking about.

    @Oxyd: the universe is expanding. Space as we know it is infinite.

    @boolean: why would the "edge" of the universe reflect light? The edge of the universe is just the farthest known matter in all directions, which means the edge is getting farther away.
  12. circular Big Bang theory

    Are you talking about the Big Freeze theory? Whenever I try to comprehend something so vast, such as how the universe will interact with itself over billions of years, I start thinking on tangents and soon lose interest in what I was first thinking of.
  13. circular Big Bang theory

    I read the overview on the Big Freeze, and it said the entire universe could eventually reach absolute zero. I don't see how this could be possible, since energy can be neither created nor destroyed; in this case, all energy is destroyed. Also, it appears I have accidentally re-suggested the Big Crunch theory without meaning to.
  14. circular Big Bang theory

    I am pretty familiar with the Big Bang theory. For those of you who don't know it, it's the theory that the universe began with all matter at a point of singularity. This point exploded and sent matter in all directions, hence an ever-expanding universe, and it was the interaction among the exploded contents, thought to be just hydrogen and helium, over billions of years that created the complex galaxies and solar systems we see today. Heavier elements then came about through the nuclear forces and processes inside stars.

    So here's the gravity equation:

    F = G*M*m / R^2

    G = gravitational constant
    M, m = masses
    R = distance between the masses
    F = force

    Anyway, the point of this post: I've watched a few Discovery Channel episodes on the topic, and I'm also taking astronomy. Black holes are a topic that interests me, because nothing can escape a black hole once it is within a certain radius, mainly because a large amount of mass sits in a very small area (or volume, to be more precise). Based on that equation, any matter caught by the black hole would sharply increase its gravitational force. Since black holes suck up more and more matter, and even other black holes, it seems to me this process leaves one inevitability: the radius of no return will grow large enough to encompass all the known matter in the universe. Everything would be pulled back into a central point, and with so much mass in such a small space you could get a point of singularity again, and the Big Bang would start all over.

    This idea seems possible to me because more matter decreases the distance between other matter, which increases the gravitational force pulling even more matter in. It seems like a nice circular theory to me, and it would explain how everything came from a point of singularity, but it still wouldn't explain the beginning of the universe. The universe may well just be on this circular cycle.

    I'm curious to hear other thoughts or interpretations.
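As a quick numeric check on that equation, here is the formula in code, using the commonly quoted value G ≈ 6.674×10⁻¹¹ N·m²/kg²; it also confirms the post's point that shrinking the distance between masses boosts the force (halving R quadruples F):

```java
// F = G*M*m / R^2 as code. G is Newton's gravitational constant
// in N*m^2/kg^2; masses in kg, distance in meters, force in newtons.
class Gravity {
    static final double G = 6.674e-11;

    static double force(double M, double m, double R) {
        return G * M * m / (R * R);
    }
}
```

For two 1 kg masses 1 m apart, the force is simply G, about 6.674×10⁻¹¹ N, which is why gravity only dominates at black-hole-scale masses and tiny radii.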
  15. best version of linux

    Quote: Original post by necreia
    "Just so that you know, in case you didn't: "Linux" itself is just a kernel. What is commonly talked about as Linux is actually the name of the distribution (Ubuntu, Gentoo, Slackware, etc.). Distributions are just collections of packages (such as a file-managing system, networking tools, everything). So, on the question you asked, "why one version of Linux is better than the other": the highest version of Linux (the kernel) is generally the best. As for the distribution you choose to use, "better" is only partially subjective, based on what your goals for the OS are. Ultimately, since you know exactly what you are going to use it for, the "best" would be "rolling your own" (here is some basic detail). That means compiling the Linux kernel yourself, which doesn't require programming, and adding the packages you wish to use."

    After having read that article, would I be correct in comparing the Linux kernel to the "engine" of the OS, much like a physics engine for a game? Also, can I only 'roll' my own Linux if I already have something like Ubuntu installed? Thanks for any help; I am willing to learn in this process.