
# Flight Simulator / Vehicle Simulator Research Project


## Recommended Posts

Hi guys, I am looking for a subject for an MSc project. The topic of a flight simulator has been suggested. I would love to write a physics engine for a flight simulator, a vehicle simulator, or one engine that can do both. The thing is that there has to be a research aspect to the project, or some novel way of applying recognised techniques; if I just write a flight simulator, that will not be good enough. Could anyone with experience in this area suggest research topics, or problems with current methods, so I have something to get my teeth into? Thanks, Giles giles.roadnight.name

##### Share on other sites
Try taking one aspect of the flight simulator and researching that! Terrain generation, for example.

##### Share on other sites
Thanks for the suggestion but if I was researching that sort of thing then there would be no point writing the flight simulator. I could just write a terrain generation program with no physics.
It really needs to be some aspect of the flight simulation or physics, or similar for a vehicle simulator.

##### Share on other sites
Well, you could code a vehicle simulation with different types of terrain (grass, mud, shallow water, desert, etc.) where the vehicles (quads, or maybe motorcycles) could slip. You could make cliffs the vehicles could jump off (and be sure to implement the vehicles' suspension). Then allow the player to drive around, and maybe code something like a time trial (with only one player, so you don't have to think about collision detection with other vehicles), where you'll have to get from location A to B as fast as possible (with high-score lists!). Just some ideas; I hope you can use some of this.

##### Share on other sites
Try writing a true flight model for a flight simulator: one that accurately considers thrust/drag and lift/weight, and accurately calculates the airflow over surfaces, including different types of surfaces that suffer from different levels of parasitic drag, and different shapes of aircraft. This is not a small project and involves much research.
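As a starting point, the force balance above can be sketched as a point-mass model using the standard lift and drag equations (L = ½ρv²SC_L, D = ½ρv²SC_D). This is only a minimal illustration; all function names and coefficient values here are illustrative placeholders, not part of any real flight model.

```python
def flight_accelerations(v, thrust, mass, wing_area, c_lift, c_drag,
                         rho=1.225, g=9.81):
    """Return (horizontal, vertical) acceleration for a point-mass aircraft.

    Uses the standard lift/drag equations L = 0.5*rho*v^2*S*C_L and
    D = 0.5*rho*v^2*S*C_D. rho is sea-level air density (kg/m^3).
    """
    q = 0.5 * rho * v * v            # dynamic pressure
    lift = q * wing_area * c_lift
    drag = q * wing_area * c_drag
    a_x = (thrust - drag) / mass     # thrust opposes drag
    a_y = lift / mass - g            # lift opposes weight
    return a_x, a_y
```

A real flight model would make C_L and C_D functions of angle of attack, Mach number and configuration, which is where the research content lies.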

##### Share on other sites
The mathematics and physics of flight are well established, so there is little research content in just writing a flight sim. Even freakchild's suggestion has been done to death.

As an alternative suggestion, there has been some interesting work over the past few years on control models for high-speed road vehicles. You might consider implementing a closed-loop controller for such a vehicle. There's a paper by Simon Julier on "Process models for high speed road vehicles" that's a good place to start if you want to accurately simulate the dynamic processes involved in such a vehicle. As for a controller, you'll need to use a non-linear inference technique. There's plenty out there on these topics. I can highly recommend Jazwinski's "Stochastic Processes and Filtering Theory". The more recent Unscented Kalman Filter, also developed by Julier, would be a good choice of an easy second-order filter to implement.

If you don't like the idea of writing a controller for a road vehicle, you could do the same for an aircraft, or even a space vehicle.

Cheers,

Timkin

##### Share on other sites
Thanks Timkin. I have looked for the papers you mention on the internet but cannot find them. Could you give me a link to them?

Thanks

##### Share on other sites
Go here and click on "what I do" in the left-hand menu. This is Simon Julier's old home page, with several papers available. Funnily enough, the paper I mentioned is not there... yet I have a copy on my desk... strange! There are other applicable papers there, though.

As for Jazwinski, that's a book and not available online. Try your local college library or state library if necessary. If you have particular questions about stochastic processes I would be happy to answer them.

Cheers,

Timkin

[edited by - Timkin on July 22, 2002 10:05:10 PM]

##### Share on other sites
Thanks again Timkin.

Could you tell me a little more about what stochastic processes are and what a closed-loop controller is? I take it you don't mean control of the vehicle as in steering wheel and accelerator. Do you mean AI control of a vehicle, or what?

Thanks for your help; sorry if I am being a bit thick.
It does sound like an interesting project.

##### Share on other sites
quote:
Could you tell me a little more about what stochastic processes are and what a closed-loop controller is? I take it you don't mean control of the vehicle as in steering wheel and accelerator. Do you mean AI control of a vehicle, or what?

A stochastic process is a dynamical system in which there is some uncertainty associated with the evolution of the system, and thus some uncertainty about which state the process is in at any given time. The only true stochastic processes in nature are quantum systems. In all other cases, the uncertainty is actually in our knowledge of the system. Thus, if you want to model a complex, deterministic, dynamic process in the real world, but you cannot model it exactly enough to account for every single factor, you can model it with a stochastic process that captures the 'average' deterministic behaviour as well as the uncertainty. One of the most studied scenarios is additive noise. Mathematically the following is not correct, but it serves to illustrate the point. The velocity of a process can be expressed as an arbitrary, nonlinear, deterministic component plus an additive noise term, written as:

dx/dt = f(x,t) + G(x,t)w(t)

where f(x,t) is the deterministic component of velocity, G(x,t) is an arbitrary function and w(t) is a random process. I won't go into why the above is not correct (since it's largely irrelevant to the current post), but the above equation can be rewritten as an Ito Stochastic Differential Equation (for the stochastic process dx):

dx = f(x,t) dt + G(x,t) dW_t

where now dW_t is a white noise process. Provided certain mathematical assumptions hold (which are almost always valid for the real world), the state of the system at some future time is given by

x(t+dt) = x(t) + ∫_t^(t+dt) f(x,t) dt + ∫_t^(t+dt) G(x,t) dW_t

The first integral is taken in the normal sense and the second integral is defined by Ito and is called the Ito Stochastic Integral.

Depending on the choice of f and G, this equation may or may not have an analytic solution. If f is linear and G is constant, then the optimal closed-form solution of this system is found using the Kalman filter (linear regression). Non-linear cases are more difficult to deal with; however, the Unscented Kalman Filter developed by Simon Julier is a nice approach to an approximate solution.
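To make the SDE above concrete, here is a minimal sketch of the Euler-Maruyama scheme, the simplest way to simulate dx = f(x,t) dt + G(x,t) dW_t numerically. The function name and the example drift/diffusion choices are just for illustration; the key point is that each Wiener increment dW is a Gaussian with variance dt.

```python
import random

def euler_maruyama(f, G, x0, dt, steps, seed=0):
    """Simulate dx = f(x,t) dt + G(x,t) dW_t with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    path = [x]
    for _ in range(steps):
        dW = rng.gauss(0.0, dt ** 0.5)      # Wiener increment: variance dt
        x = x + f(x, t) * dt + G(x, t) * dW
        t += dt
        path.append(x)
    return path

# Example: linear pull toward zero plus a small constant noise term
path = euler_maruyama(lambda x, t: -0.5 * x, lambda x, t: 0.1,
                      x0=1.0, dt=0.01, steps=1000)
```

With linear f and constant G, as here, the Kalman filter gives the optimal state estimate for this process; the simulation is useful for testing a filter against known ground truth.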

As for controlling such a system... imagine now that f is written as f = f(x, u(t), t), where u(t) is a control input. Clearly the trajectory of the process depends on the input. Optimal control of this system requires choosing the inputs that maximise some reward function or minimise some loss function for the process. So, imagine you wanted your vehicle to drive a racing line. Optimal control would involve finding the values of u at different times that kept the vehicle on the racing line. A closed-loop controller is a solution to this control problem. At any time there is a goal state that the agent should be in (on the racing line) and the current state that it is in. The controller finds the actions that take the current state toward the goal state (essentially a gradient ascent/descent in the reward/penalty function).
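The racing-line idea can be sketched with the simplest possible closed-loop controller, a proportional one: observe the error between current and goal state, and command a correction proportional to it. This is a toy illustration (the gain, time step and dynamics are invented for the example), not a vehicle controller.

```python
def simulate_p_controller(y0, kp, dt, steps):
    """Closed-loop proportional control of lateral offset from a racing line.

    State: lateral offset y from the line (goal is y = 0). Each step the
    controller observes the error and commands u = -kp * y; feeding the
    observed state back into the input is what makes the loop 'closed'.
    """
    y = y0
    history = [y]
    for _ in range(steps):
        u = -kp * y          # control input computed from observed error
        y = y + u * dt       # vehicle responds to the command
        history.append(y)
    return history

offsets = simulate_p_controller(y0=2.0, kp=1.0, dt=0.1, steps=50)
```

In the noisy, nonlinear setting discussed above, the controller would act on the *estimated* state from a filter (e.g. the UKF) rather than the true one.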

There are a ridiculous number of texts out there on real-time optimal control. I'm sure you can find something in your college library. I would suggest, though, that you don't restrict yourself to engineering texts (where this work has been done for decades). Look also at AI texts that deal with Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), which are the generalisation of the above control problem.

I note from your website that you are interested in computer games (which is obvious given your involvement here!). You might be interested in MDPs and POMDPs from that perspective as well, since they represent arbitrary decision processes for artificial agents. A closed-loop controller can also be thought of as an agent solving a particular MDP or POMDP. A very simple example of such a controller is a script that controls the transitions in a finite state machine given the observations made by the system. There is so much interesting information out there, I can only encourage you to read as much as possible. If you can handle the mathematics, I would highly recommend Thom Dean's book "Planning & Control". It's an essential read!
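The finite-state-machine controller mentioned above can be sketched in a few lines: a transition table maps (state, observation) pairs to next states, and the "controller" just follows the table. The states and observations here are invented for the example.

```python
# Observation-driven transitions for a toy driving agent (illustrative only)
TRANSITIONS = {
    ("cruise", "obstacle"): "brake",
    ("cruise", "clear"): "cruise",
    ("brake", "obstacle"): "brake",
    ("brake", "clear"): "cruise",
}

def step_fsm(state, observation):
    """Advance the finite state machine on one observation."""
    return TRANSITIONS.get((state, observation), state)

state = "cruise"
for obs in ["clear", "obstacle", "obstacle", "clear"]:
    state = step_fsm(state, obs)
```

Each table entry is effectively a hand-written policy decision; solving an MDP amounts to deriving such a policy automatically from a reward function.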

quote:
Thanks for your help; sorry if I am being a bit thick.

Never apologise for asking what you think is a stupid question; only apologise for giving a stupid answer!

Cheers,

Timkin

##### Share on other sites
By MDPs and POMDPs, do you mean Markov decision process and partially observable Markov decision process? Am I correct in understanding that these are other approaches to the same sort of problem, i.e. route planning?

##### Share on other sites
quote:
By MDPs and POMDPs, do you mean Markov decision process and partially observable Markov decision process?

Yes (I actually did associate the abbreviations with the names in my post!)

quote:
Am I correct in understanding that these are other approaches to the same sort of problem, i.e. route planning?

Route planning is just a specific type of planning task. Planning in general can be thought of as solving a particular MDP or POMDP, depending on the (type of) information available.

There are many different approaches to solving planning problems. They can be broadly categorised by the sorts of plans they seek to find:

• policies (universal plans)

• route/path planning (serial plans/single path through the state space)

• reactive plans

...and then you have replanning systems, which seek to update/alter the present plan or policy given new information.
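The policy (universal plan) category above is the classic MDP solution, and the standard way to compute one is value iteration. Here is a minimal sketch on a tiny, deterministic, hand-invented MDP; real problems would have stochastic transitions and far larger state spaces.

```python
def value_iteration(n_states, transitions, rewards, gamma=0.9, tol=1e-6):
    """Value iteration for a tiny deterministic MDP.

    transitions[s][a] gives the next state, rewards[s][a] the immediate
    reward. Returns the optimal value of each state; acting greedily with
    respect to these values yields the optimal policy (a universal plan).
    """
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            best = max(rewards[s][a] + gamma * V[transitions[s][a]]
                       for a in range(len(transitions[s])))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# 2-state example: state 0 can stay (reward 0) or move to state 1 (reward 1);
# state 1 is absorbing with reward 0
V = value_iteration(2, transitions=[[0, 1], [1]], rewards=[[0.0, 1.0], [0.0]])
```

A POMDP solver has the same flavour but iterates over beliefs (probability distributions over states) rather than states, which is what makes it so much harder.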

If you're interested in more references for solving MDPs and POMDPs, I can post a few bibliography entries for the key papers from the AI perspective.

Cheers,

Timkin