
uutee

Member
  • Content count

    396
  • Joined

  • Last visited

Community Reputation

142 Neutral

About uutee

  • Rank
    Member
  1. Hi, In the past, startup scripts could be run on Windows/MS-DOS machines by calling them from AUTOEXEC.BAT. However, modern versions of Windows no longer seem to support that mechanism. I'm mostly concerned about the VCVARS32.BAT used in Microsoft's compiler environment. I use the command line a lot, so I'm really interested in whether it's possible to have the .BAT launched once at startup, as opposed to each time the command prompt is run. Thanks, - Mikko
  2. Game AI is often simplified by the fact that the AI doesn't really have to deduce everything from observations (e.g. via computer vision techniques); it can "cheat" simply by looking at the game state (e.g. just read off the position of the opponent instead of estimating it with a Kalman filter). But your AI could, for example, infer the hidden "mental state" of the player from the recent game states. This mental state could then be used to pick a strategy, as in the sketch below. - Mikko
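    A minimal C++ sketch of that idea; the MentalState categories and the distance-closed heuristic are invented purely for illustration:

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        static float distance(Vec2 a, Vec2 b) {
            return std::hypot(a.x - b.x, a.y - b.y);
        }

        enum class MentalState { Aggressive, Defensive, Evasive };

        // "Cheat" by reading the recent game states directly instead of
        // inferring them from observations, then classify the player's
        // hidden mental state from how they have been moving.
        static MentalState inferMentalState(const std::vector<Vec2>& recentPlayerPositions,
                                            Vec2 aiPosition)
        {
            // Ground the player gained on (or lost to) the AI over the window.
            float closed = distance(recentPlayerPositions.front(), aiPosition)
                         - distance(recentPlayerPositions.back(), aiPosition);
            if (closed >  5.0f) return MentalState::Aggressive;  // closing in
            if (closed < -5.0f) return MentalState::Evasive;     // backing off
            return MentalState::Defensive;
        }

        int main() {
            std::vector<Vec2> recent = {{0, 0}, {1, 0}, {2, 0}};
            MentalState m = inferMentalState(recent, {10, 0});
            return static_cast<int>(m);
        }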
  3. C++ refactoring tools?

    Short answer: C++ is probably the most difficult programming language to parse (does anyone know of a more complicated one?). That goes a long way toward explaining the lack of automated tools. I'd *love* to refactor my C++ code the way I refactor Java code in Eclipse, but I understand that writing such tools is a major pain, especially with templates. -- Mikko
  4. Using neural network to replace math

    >> My questions shoot for broad, abstract formulations... the specifics of
    >> different types of neural/logic networks probably aren't pertinent
    >> insofar as my questions go.

    Neural networks are, as far as I know, used mostly as a regression technique: you teach them to map input (e.g. a physical world state) to output (e.g. an updated physical world state). Of course, "mapping" is the heart of all mathematics and algorithmics: you give your physics simulator a scene description and it gives you an updated scene; you plug X into a mathematical formula and you get Y.

    The "mappings" produced by neural networks have the remarkable property of having an enormous number of degrees of freedom (i.e. independent variables). For example, even a smallish feedforward network can have a weight space of more than 1000 dimensions (see the sketch below). This is simultaneously exciting and depressing, since our human brains cannot cope with the intricate relationships between thousands of degrees of freedom. The mathematics produced by human mathematicians has always been rather high-level; humans seem able to consciously understand reality only through "languages" such as mathematics, from which we form "sentences" such as "g = 9.81 m/s^2". High-level languages are our tool for understanding the low-level languages of reality.

    Neural networks have their own problems as well. Not all networks can learn everything, neither in theory nor in practice. Curiously, the learning formalisms for neural networks are still devised by human mathematicians, and they tend to be sentences of very high-level languages, such as "minimize the error against the training set". -- Mikko
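    To make the degrees-of-freedom point concrete, here is a minimal C++ feedforward pass; the 10-30-30-10 layer sizes are arbitrary:

        #include <cstddef>
        #include <cstdio>
        #include <cmath>
        #include <vector>

        // One fully connected layer: out[j] = sigmoid(sum_i w[j][i] * in[i]).
        static std::vector<float> forward(const std::vector<std::vector<float>>& w,
                                          const std::vector<float>& in)
        {
            std::vector<float> out;
            for (const auto& row : w) {
                float sum = 0.0f;
                for (std::size_t i = 0; i < in.size(); ++i) sum += row[i] * in[i];
                out.push_back(1.0f / (1.0f + std::exp(-sum)));  // sigmoid
            }
            return out;
        }

        int main() {
            // A "smallish" 10-30-30-10 network already has
            // 30*10 + 30*30 + 10*30 = 1500 weights: a 1500-dimensional weight space.
            std::vector<std::vector<float>> w1(30, std::vector<float>(10, 0.1f));
            std::vector<std::vector<float>> w2(30, std::vector<float>(30, 0.1f));
            std::vector<std::vector<float>> w3(10, std::vector<float>(30, 0.1f));
            std::vector<float> y = forward(w3, forward(w2, forward(w1,
                                           std::vector<float>(10, 1.0f))));
            std::printf("outputs: %zu, weights: %d\n", y.size(), 30*10 + 30*30 + 10*30);
        }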
  5. Hi, Please notice that "layer" is a slightly misleading term, since some people talk about layers of neurons whereas others talk about layers of weights. But assume we had one layer of weights, i.e. two layers of neurons. In this case it is well known that the output neurons can only represent linearly separable functions; i.e. each output neuron ("perceptron") can be seen as a hyperplane cutting through the input space. This geometric intuition of hyperplanes can also express simple Boolean logic, i.e. all linearly separable Boolean functions, such as AND. The next step is to add one more layer of neurons. These neurons can apply simple Boolean logic to the hyperplanes represented by the middle neuron layer, which allows us to represent arbitrary _convex_ shapes. The final step is to add yet another layer of neurons. These apply simple Boolean logic to the convex regions represented by the preceding layer, which allows arbitrarily complex shapes to be represented. A sketch of the geometric picture follows below. -- Mikko
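    A tiny C++ illustration of that picture, with weights chosen by hand: one perceptron is a hyperplane test, and a second layer that ANDs four of them together carves out a convex region (the unit square):

        #include <cstdio>

        // One perceptron: fires iff the point lies on the positive side of
        // the hyperplane w.x + b >= 0.
        static bool perceptron(float wx, float wy, float b, float x, float y) {
            return wx * x + wy * y + b >= 0.0f;
        }

        int main() {
            float x = 0.5f, y = 0.5f;
            // The second layer "AND"s four half-planes together; the result
            // is the convex region 0 <= x <= 1, 0 <= y <= 1.
            bool inside = perceptron( 1,  0, 0, x, y)   // x >= 0
                       && perceptron(-1,  0, 1, x, y)   // x <= 1
                       && perceptron( 0,  1, 0, x, y)   // y >= 0
                       && perceptron( 0, -1, 1, x, y);  // y <= 1
            std::printf("inside convex region: %d\n", inside);
        }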
  6. >>> you'll see a reason to have more than one hidden layer of neurons

     Ah, sorry, the definition of "layer" isn't totally consistent. I meant four neuron layers, corresponding to two hidden neuron layers, corresponding to three weight layers. This is known to be maximally expressive. Still, thanks for the tip. I'm currently reading a book on ANNs; it should probably get to CCN soon. -- Mikko
  7. Hi, So basically, three layers can be shown to be theoretically maximally expressive with feed-forward ANNs. Of course using four or more layers doesn't decrease this expressivity, but it doesn't increase it either. However, I've seen networks with four or more layers; is there a reason for this? Intuitively it would seem that the non-linearity caused by a high number of layers just makes the learning more difficult; wouldn't it be better to stick with three layers and just increase the number of neurons? Thanks, -- Mikko
  8. Threadsafe Spring System Query

    >>> I'm not sure you can distribute the physics & mesh update to another
    >>> processor without some synchronization.

    If two tasks are completely separate, they can be parallelized without locking. Of course a rendezvous, i.e. waiting for all the threads to finish, is still required, but that doesn't slow things down the way locking does.

    >>> There is still the issue of rendering, and rendering likes to be
    >>> done in a single thread due to the need to do GPU state management.

    Personally I would just do the physics update and rendering completely sequentially, but use multiple processors for the physics update itself. With a functional approach it's possible to do physics and rendering simultaneously without locking: the physics engine maps the old state to a new state, while the graphics engine uses the old state to render a pretty picture (sketched below). You're right: rasterization-based rendering is not trivial to parallelize. -- Mikko
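    A minimal C++ sketch of that functional, double-buffered scheme; the State contents and the update/render bodies are placeholders:

        #include <thread>
        #include <utility>
        #include <vector>

        struct Particle { float x, y, vx, vy; };
        using State = std::vector<Particle>;

        // Pure function: reads the old state, builds a brand-new one. No locks.
        static State stepPhysics(const State& oldState, float dt) {
            State next = oldState;
            for (auto& p : next) { p.x += p.vx * dt; p.y += p.vy * dt; }
            return next;
        }

        static void render(const State& s) { (void)s; /* draw 's'; placeholder */ }

        int main() {
            State current(1000);
            for (int frame = 0; frame < 60; ++frame) {
                State next;
                // Physics maps old -> new while rendering reads the old state.
                std::thread physics([&] { next = stepPhysics(current, 1.0f / 60.0f); });
                render(current);  // single-threaded rendering of the *old* state
                physics.join();   // rendezvous: wait once per frame, no locks
                current = std::move(next);
            }
        }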
  9. Threadsafe Spring System Query

    When multithreading an application, you should find the correct "granularity" for doing things. For example, giving half of the springs to processor 1 and the other half to processor 2 isn't very efficient, since particles depend on each other through springs, so processors 1 and 2 require some kind of synchronization. However, if you give half of the *hairs* to processor 1 and the other half to processor 2, you'll require no synchronization (if individual hairs are considered independent); see the sketch below. Unfortunately this disallows treating all hairs as one big collection of springs. One "generic" solution to multithreading problems is to aim for a more "functional" programming style: instead of updating the "states" of the particles in place, you "map" the old particle set to a new particle set. If all processors use the old particle set in their update stage, everything works fine. The tradeoff is that updating the state in place might be faster than creating a completely new state (but mostly it's just the allocation overhead). (Yes, there are many more solutions, especially if you can afford a little precomputation time, but no room to discuss everything here.) -- Mikko
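    A minimal C++ sketch of the hair-level granularity; the Hair type and the spring update are placeholders:

        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Hair { std::vector<float> springLengths; };

        // Each hair is independent, so updating one touches no shared data.
        static void updateHair(Hair& h, float dt) {
            for (auto& s : h.springLengths) s += dt;  // placeholder spring update
        }

        int main() {
            std::vector<Hair> hairs(1000, Hair{std::vector<float>(20, 1.0f)});
            const float dt = 1.0f / 60.0f;

            // Split whole hairs (not individual springs!) between two workers.
            auto worker = [&](std::size_t begin, std::size_t end) {
                for (std::size_t i = begin; i < end; ++i) updateHair(hairs[i], dt);
            };
            std::thread t1(worker, 0, hairs.size() / 2);
            std::thread t2(worker, hairs.size() / 2, hairs.size());
            t1.join();
            t2.join();  // rendezvous only; no per-spring locking
        }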
  10. Hi, In object-oriented programming languages we can often do this:

      void myfunc(Paintable p) { p.paint(); }

      I.e. we specify a *required* *interface* that the argument object must support. This immediately raises a question: are there *statically* *typed* languages which support requiring *multiple* interfaces of the argument object? E.g.

      void myfuncCleanup(p : Paintable, Closeable) { p.paint(); p.close(); }

      Is there a name for this kind of construct? Do any statically typed languages support it? What are the most common ways of faking it in traditional "single-interface" languages? Thanks! -- Mikko
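    For what it's worth, such a requirement can be written directly in C++20 with concepts (Java's bounded type parameters, e.g. <T extends Paintable & Closeable>, achieve something similar); the Paintable, Closeable, and Window names below are made up for illustration:

        #include <cstdio>

        // Hand-rolled "interfaces" expressed as C++20 concepts.
        template<typename T>
        concept Paintable = requires(T t) { t.paint(); };

        template<typename T>
        concept Closeable = requires(T t) { t.close(); };

        // The argument type must satisfy *both* interfaces at once.
        template<typename T>
            requires Paintable<T> && Closeable<T>
        void myfuncCleanup(T& p) {
            p.paint();
            p.close();
        }

        struct Window {
            void paint() { std::puts("painting"); }
            void close() { std::puts("closing"); }
        };

        int main() {
            Window w;
            myfuncCleanup(w);  // OK: Window satisfies both concepts
        }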
  11. So basically lockstep means "parallel simulation of the game state on all client machines"? Some time ago I designed a multiplayer algorithm which worked roughly as follows: each client has its own "belief state" of the world, which it simulates, and the server has the "real state" of the world, which it spreads to the clients (not the whole world state per frame, just a little update), so the clients' belief states are kept in sync. This keeps traffic relatively low and is quite general. Does this algorithm have a name? A sketch of what I mean is below. -- Mikko
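    A minimal C++ sketch of that belief-state scheme; the delta-update format is invented for illustration:

        #include <unordered_map>

        struct EntityState { float x, y; };

        // A small per-entity correction the server broadcasts each frame.
        struct DeltaUpdate { int entityId; EntityState corrected; };

        struct ClientBelief {
            std::unordered_map<int, EntityState> entities;

            // Locally simulate the belief state between server updates.
            void simulate(float dt) { (void)dt; /* dead-reckoning placeholder */ }

            // Snap the belief toward the server's authoritative state.
            void apply(const DeltaUpdate& d) { entities[d.entityId] = d.corrected; }
        };

        int main() {
            ClientBelief belief;
            belief.apply({42, {1.0f, 2.0f}});  // correction for entity 42
            belief.simulate(1.0f / 60.0f);
        }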
  12. Hi, Lately I tried to teach my straightforward sigmoid back-propagation networks the Mandelbrot set, i.e. a mapping from the complex plane to {0,1}. The input vector was an aggregation of the first few points of the Z <- Z^2 + C iteration. It actually worked, but the problem was that the resulting images had extremely coarse resolution: the neural network couldn't handle the self-similarity inherent in the Mandelbrot set. Now the questions are: 1) Is it possible to teach neural networks periodic structures such as sin(x)? I tried changing the sigmoid to sin, but this produced completely noisy output. 2) Is it possible to teach neural networks fractal-like structures (self-similarity), e.g. by using recurrent networks (and how does one train/evaluate recurrent networks in the first place)? (A sketch of the training data I used is below.) Thanks, -- Mikko
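    A minimal C++ sketch of how such training data can be generated; the four-iterate feature window and the escape-test parameters are arbitrary choices:

        #include <complex>
        #include <vector>

        // One training sample: the first few iterates of Z <- Z^2 + C as the
        // input vector, and a {0,1} "in the set?" label as the target.
        struct Sample { std::vector<float> input; float target; };

        static Sample makeSample(std::complex<float> c) {
            Sample s;
            std::complex<float> z = 0.0f;
            for (int i = 0; i < 4; ++i) {    // first few iterates as features
                z = z * z + c;
                s.input.push_back(z.real());
                s.input.push_back(z.imag());
            }
            for (int i = 0; i < 100; ++i) {  // crude membership test
                z = z * z + c;
                if (std::abs(z) > 2.0f) { s.target = 0.0f; return s; }
            }
            s.target = 1.0f;
            return s;
        }

        int main() {
            std::vector<Sample> trainingSet;
            for (float re = -2.0f; re <= 1.0f; re += 0.05f)
                for (float im = -1.5f; im <= 1.5f; im += 0.05f)
                    trainingSet.push_back(makeSample({re, im}));
        }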
  13. Hi, Basically, a game world consists of two things: 1) the static game world (e.g. houses), and 2) dynamic, moving bodies (e.g. cars). Both of these are rendered as polysoups. For collision handling there are two main options: A) use the polysoup directly, or B) use simpler geometric primitives (e.g. ellipsoids and distance fields) distinct from the rendering geometry. This raises a few questions. X) Are simplified geometric primitives usually used for the static game world? That sounds pretty tedious, as it would require writing specialized tools; it would be much easier if the artist could just create a polygonal model in e.g. 3dsmax. Y) Assuming the answer to X is no, the most realistic setup seems to be polysoups for the static world and simplified geometric models for the dynamic bodies. Now the question is: what algorithms are there for this kind of thing? GJK can't be used, since the world can't be assumed to be convex. Are there algorithms for checking convex polysoups (e.g. car bodies) against non-convex polysoups (e.g. buildings, roads, whatever)? Cheers, -- Mikko
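    One simple baseline for Y, sketched here in C++: treat the non-convex world as a bag of triangles and test the convex body against each one. The sketch only tests a sphere against each triangle's plane; a real version also needs a point-in-triangle test, the edge/vertex cases, and a broad phase:

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3  cross(Vec3 a, Vec3 b) {
            return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
        }

        struct Triangle { Vec3 a, b, c; };
        struct Sphere   { Vec3 center; float radius; };

        // Sphere vs. one triangle's plane (face case only).
        static bool sphereTouchesPlane(const Sphere& s, const Triangle& t) {
            Vec3  n    = cross(sub(t.b, t.a), sub(t.c, t.a));
            float len  = std::sqrt(dot(n, n));
            float dist = dot(sub(s.center, t.a), n) / len;  // signed distance
            return std::fabs(dist) <= s.radius;
        }

        // Non-convex world = bag of triangles: test against each one.
        static bool collides(const Sphere& s, const std::vector<Triangle>& world) {
            for (const auto& t : world)
                if (sphereTouchesPlane(s, t)) return true;
            return false;
        }

        int main() {
            std::vector<Triangle> world = {{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}};
            Sphere body{{0.2f, 0.2f, 0.1f}, 0.2f};
            return collides(body, world) ? 0 : 1;
        }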
  14. Thanks, kirkd! The character/digit training data looks especially promising and human-parseable.
  15. Hi, I've seen papers that describe using ANNs for various cool recognition problems: speech, images, stuff like that. But the question is: are there any freely available data sets for this kind of training? Or do people usually craft these data sets by hand (perhaps combined with generating algorithms)? Thanks, -- Mikko