sergamer1

Members
  • Content count

    86
Community Reputation

814 Good

About sergamer1

Personal Information

  • Location
    Munich, Germany
  1. 2D

    Yes, I was thinking more of a 3- or 4-layer parallax starfield, but that's your preference of course. The key to making it work is that the further away a starfield layer is, the slower it moves in parallax, the more stars it has and the dimmer those stars are. This should in principle be quite easy to do. If you have a uniform (on average) density of stars, then simple random sampling works. But let's say you want some structure, like the spiral arms of the galaxy, or perhaps some dense star clusters spread around? Then you can either tabulate that density field of stars or write it as a function. You then perform Monte-Carlo rejection sampling over the density field and, abracadabra, you have a more interesting star distribution :-) (see the little sketch below). Other possibilities include Perlin noise or some other combination of simple density modes. However, like I said, I think you'll find the uniform random approach (with different brightness parallax layers) to be more than adequate! ;-)
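Just to make the rejection-sampling idea concrete, here's a minimal sketch. The density() function is a made-up example (two Gaussian 'clusters' on a flat background); swap in whatever tabulated or analytic field you like (spiral arms, etc.). All the names and numbers are illustrative only.

```cpp
// Monte-Carlo rejection sampling over a 2D star-density field in [0,1]x[0,1].
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Star { float x, y, brightness; };

// Example density in [0, 1]: a dim background plus two Gaussian "clusters".
float density(float x, float y)
{
    auto blob = [](float px, float py, float cx, float cy, float r) {
        float dx = px - cx, dy = py - cy;
        return std::exp(-(dx * dx + dy * dy) / (2.0f * r * r));
    };
    return 0.15f + 0.85f * std::max(blob(x, y, 0.3f, 0.7f, 0.10f),
                                    blob(x, y, 0.7f, 0.3f, 0.15f));
}

std::vector<Star> makeStarfield(int n, unsigned seed = 42)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::vector<Star> stars;
    stars.reserve(n);
    while (static_cast<int>(stars.size()) < n) {
        float x = uni(rng), y = uni(rng);
        if (uni(rng) < density(x, y))                          // accept with probability density(x, y)
            stars.push_back({x, y, 0.2f + 0.8f * uni(rng)});   // random brightness on top
    }
    return stars;
}
```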
  2. 2D

    Sorry, I just realised you did say the word 'random', missed that one! :-) . But actually my point about the brightnesses still stands. A 'random' distribution of points does not actually look 'uniform': the number of stars in any patch follows a Poisson distribution, so clumps and voids form naturally. Adding the brightness factor will, I think, accentuate this, and it should be a pretty decent effect. Of course, if that's not good enough, you could do some other things, like create a density function across the sky. Let's say it's sinusoidal (yes, that's crap, but just for example). Then you can use something called Monte-Carlo rejection sampling: you get some (on average) underlying density field plus an element of randomness on top. But I think you should try the simple random x/y/brightness approach first. I think you'll find it works better than you might expect.
  3. 2D

    Hi there, I thought I should start with the obvious suggestion: have you tried simply a random distribution of points? Remember that stars do not all have the same brightness, so 2D random positions plus a random brightness factor would probably give you a nice effect (and it's pretty simple to do). Also, what do you mean by 'slowly animate'? I don't know the game in question, but do you mean that as you move around the x-y plane you see the stars in the background move with some sort of parallax effect? If that's the case, my random suggestion above still works; just treat brighter stars as closer, so their parallax movement is bigger. That's pretty easy to generate, I think (little sketch below). Or maybe I'm thinking of something completely different to you :-)
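For concreteness, a minimal sketch of the random-position-plus-random-brightness idea, with the parallax factor tied to brightness. The Star struct, the 0.5 scale factor and the screen-units convention are all placeholders to tune, not a real API.

```cpp
// Random starfield with brightness-scaled parallax (brighter = closer = moves more).
#include <random>
#include <vector>

struct Star { float x, y, brightness; };

std::vector<Star> randomStars(int n, unsigned seed = 1)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::vector<Star> stars(n);
    for (auto& s : stars)
        s = {uni(rng), uni(rng), 0.2f + 0.8f * uni(rng)};   // position + brightness (dim..bright)
    return stars;
}

// Where to draw the star for a given camera position: the parallax offset scales
// with brightness, so bright (near) stars slide past faster than dim (far) ones.
void starScreenPos(const Star& s, float camX, float camY, float& outX, float& outY)
{
    float parallax = 0.5f * s.brightness;   // tune to taste
    outX = s.x - camX * parallax;
    outY = s.y - camY * parallax;
}
```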
  4. Okay, that's one solution indeed! :-) I guess it's important to remind yourself what you are trying to achieve. Are you trying to model the actual physics, or make something that LOOKS real for gaming purposes? If the latter, then tricks like this, or the softening mentioned earlier (which is used in many 'real' science/astro simulations btw), are perfectly valid. Regarding the suggestion about adaptive timesteps, this is again something you would definitely do to follow the physics more accurately, but then you have unpredictable rates since your simulation slows down whenever two stars get close and then speeds up again, all very jerky and unstable. So a fixed timestep works fine if you're not bothered about getting things right with close interactions. But if you want to model the physics accurately (out of scientific curiosity, for instance) then definitely consider it. The advantage is you get more accurate integration of the orbits (actually important both for scientifically accurate simulations and for making it look better), since leapfrog/mid-point gives second-order errors compared to Euler integration, which is first-order. A leapfrog integrator is also a special kind of integrator (called a symplectic scheme) which has wonderful conservation properties and, amongst other things, preserves circular and elliptical orbits (e.g. planets or binaries) very well. The thing is, all these schemes are as expensive as each other, just one force computation per timestep, so in that case you would use the better one, right? :-) (There's a tiny leapfrog sketch below.)

     I'm going to have to disappoint you on this one. You are of course correct for two bodies. This problem was solved by Sir Isaac Newton some 300+ years ago :-) . And in the intervening centuries, everybody and their cat has tried to add just one more body to solve the so-called 3-body problem. But there is no general solution I'm afraid that can be used like you hope for. There are various special so-called 'restricted' 3-body problems where the motion of one body is limited in some way, so it becomes a 2-body problem again or has some special solution for some configuration. But in general, and especially when this becomes 4, 5 ... N bodies, you have to do the numerical integration I'm afraid! This is one of the step-ups from linear/soluble equations to non-linear equations. All the 'geniuses' of yesteryear solved the 'easy' linear stuff :-D and left all the difficult non-linear stuff to our generation! :-/ Oh well ...

     Regarding the oct-tree, I'm still a little unsure if you really need it for gravity integration (for the reasons discussed earlier about the ~1000 body limit). However, if you want to model some kind of larger Universe with large-scale structure and then 'zoom' into galaxies and further, then having a tree structure to organise this data hierarchically is obviously a good thing. If you can combine it with helping to compute the gravity, then great, but I'm still unsure if you'll get that much benefit while maintaining 30/60 fps.
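To show what I mean about leapfrog costing the same as Euler (one force evaluation per step), here's a minimal kick-drift-kick sketch with brute-force O(N²) softened gravity. All names are illustrative, not from any real engine, and computeAccelerations() must be called once before the first step so the stored accelerations are valid.

```cpp
#include <cmath>
#include <vector>

struct Body { double x, y, vx, vy, ax, ay, mass; };

// Pairwise softened gravity, O(N^2); fills in ax/ay for every body.
void computeAccelerations(std::vector<Body>& bodies, double G, double soft)
{
    for (auto& b : bodies) { b.ax = 0.0; b.ay = 0.0; }
    for (std::size_t i = 0; i < bodies.size(); ++i)
        for (std::size_t j = i + 1; j < bodies.size(); ++j) {
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double r2 = dx * dx + dy * dy + soft * soft;   // softened distance^2
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            bodies[i].ax += G * bodies[j].mass * dx * inv_r3;
            bodies[i].ay += G * bodies[j].mass * dy * inv_r3;
            bodies[j].ax -= G * bodies[i].mass * dx * inv_r3;
            bodies[j].ay -= G * bodies[i].mass * dy * inv_r3;
        }
}

// One kick-drift-kick (leapfrog) step: still only one force evaluation per step,
// but second-order and symplectic, so orbits don't spiral away like with Euler.
void leapfrogStep(std::vector<Body>& bodies, double dt, double G, double soft)
{
    for (auto& b : bodies) { b.vx += 0.5 * dt * b.ax; b.vy += 0.5 * dt * b.ay; }  // half kick
    for (auto& b : bodies) { b.x  += dt * b.vx;       b.y  += dt * b.vy; }        // drift
    computeAccelerations(bodies, G, soft);                                        // one force eval
    for (auto& b : bodies) { b.vx += 0.5 * dt * b.ax; b.vy += 0.5 * dt * b.ay; }  // half kick
}
```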
  5. Yes, I did read through some of them. I have to say most of those are completely irrelevant to the problem of the formation of structure in the Universe, hehe :-) . But having a broad knowledge is always a good thing before embarking on an epic project. However, it's also not good to get lost amongst all that, since it might be tricky figuring out where on Earth to start (believe me, I've done the same thing).

     Okay, first of all, the large-scale filamentary structures that develop in the Universe are kind of static in a way, although they are the hubs of growth for galaxies and clusters of galaxies. Take this famous simulation of the Universe, the Millennium Simulation (https://wwwmpa.mpa-garching.mpg.de/galform/virgo/millennium/#slices); if you look at the slice images at different redshifts/z-values (which correspond to different ages of the Universe), you can see the filaments start very thin and then grow and accumulate mass, but they don't really move around that much. What you want (if you really want to mimic the evolution of the Universe) is something that starts roughly uniform and then has filaments grow out of the background, gravitationally attracting more and more mass and creating large galaxies and clusters. And the trick will be to do this without actually computing all that expensive gravity.

     About your 50 particle run, that actually looks exactly like what these simulations should look like with basic gravity, so I guess you're on the right track ;-) . You'll occasionally see stars get ejected and also binary stars forming (you can see the two stars orbiting each other almost getting kicked out). Here's a famous (in Astrophysics) example with just 3 stars that has similar features to yours (http://www.ucolick.org/~laugh/oxide/projects/burrau.html).

     But back to the issue of computing for more bodies. The reply above is correct in that you would need some kind of tree to efficiently compute gravity for larger numbers. However, you'll only need this when the numbers are of order 1000 or more, and by that point it takes (on a single CPU) about 0.01s to compute all the forces for all bodies. If you want a game running at 30/60 fps, you're already near the limit, so it's questionable whether using a tree is worth it. Also, as I said in my previous post, this is the route to actually computing the real physics, which is NOT what you want to be doing! You want to mimic it so it looks real, rather than doing all the hard work.

     I was trying to make this point yesterday (probably not well enough) with the globular clusters. Globular clusters have millions of stars and there's no way in hell you could model that by computing the real gravitational force. But these clusters have a simple average density and potential field (given by the equations in the wiki article), and using that instead gives you the motion of a star around the cluster: one equation instead of the 100,000s of pairwise forces with real gravity. There are sophistications that can be made of course, such as finding any nearby interactions with other stars to model scattering and ejections, but those are the icing on the cake really! I'm a bit busy atm, but maybe I'll put together some pseudo-code for you to see what I mean, because it's tricky stuff. But it'll have to be in a few days, sorry!

     In the meantime, don't drown in all the knowledge! :P
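While the proper pseudo-code will have to wait a few days, here's a very rough sketch of the "one equation instead of N forces" idea in the meantime, using the Plummer potential from the wiki article. The names, units and the simple kick-then-drift stepping are purely illustrative.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Acceleration on a test-particle star at position p, for a Plummer sphere of total
// mass M and scale radius a centred at the origin:
//   a(r) = -G * M * r / (r^2 + a^2)^(3/2)
// One formula per star per step, instead of summing over every other star.
Vec2 plummerAcceleration(Vec2 p, double G, double M, double a)
{
    double r2 = p.x * p.x + p.y * p.y;
    double denom = std::pow(r2 + a * a, 1.5);
    return { -G * M * p.x / denom, -G * M * p.y / denom };
}

// One simple kick-then-drift step of a star orbiting the cluster's mean field.
void stepTestParticle(Vec2& pos, Vec2& vel, double dt, double G, double M, double a)
{
    Vec2 acc = plummerAcceleration(pos, G, M, a);
    vel.x += dt * acc.x;  vel.y += dt * acc.y;
    pos.x += dt * vel.x;  pos.y += dt * vel.y;
}
```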
  6. Yes, I didn't feel like you were planning that from your original post, but the way the thread was going it seemed like things were pointing in that direction, which I thought I should point out right now is a very bad idea, unless you want to train to be an Astrophysicist yourself :-)

     So, the Universe is a pretty big topic to discuss, so where to begin :-) . Newton's law of gravity is certainly useful for very basic motions, such as how planets (or moons or rocket ships) orbit around stars (or other planets), but it will be too expensive if you try to model larger systems with more than a dozen (or perhaps a couple of hundred) objects. What you should be looking at is developing (or reading about) simple models that use simple mathematical equations to represent the gravitational potential field (or simple combinations of them), and then your physical objects (e.g. galaxies, stars, etc.) move as test particles (https://en.wikipedia.org/wiki/Test_particle) in these potential fields rather than computing the full, insanely expensive gravitational interaction. Let's take a basic galaxy as an example. It contains:

     • A spherically symmetric halo of dark matter. It doesn't do anything interesting really, but it holds the galaxy together gravitationally and can be approximated by a simple equation (e.g. an NFW profile, https://en.wikipedia.org/wiki/Navarro%E2%80%93Frenk%E2%80%93White_profile).
     • A halo of old stars, many contained in very old, massive clusters called globular clusters. The globular clusters can be very easily modelled with something called a Plummer potential profile (https://en.wikipedia.org/wiki/Plummer_model).
     • The gas in the galaxy (often called the Interstellar Medium or ISM), which lies in the disc of the galaxy but is often concentrated in spiral arms, like the classic picture of galaxies (there's no simple article for this sorry, but maybe this helps: https://en.wikipedia.org/wiki/Gravitationally_aligned_orbits).
     • New stars forming where the gas is compressed in these arms. The density of gas and the rate of formation of new stars obey several empirical relations, like the Kennicutt-Schmidt relation, which allow you to approximately model how this happens (sorry again, no simple article on this). These stars then spread into the disc of the galaxy and move around in (approximately) circular orbits in a spiral potential.

     And that's just to model a typical everyday spiral galaxy that you might find. If you want to model how this forms (in a hand-wavy way) from the Big Bang, then you need to consider the large-scale picture, like the formation of galaxy clusters and how these protogalaxies merge and form the next generation of galaxies. That's not quite my own field, but I know enough to know it's possible to do in some nice approximate way like you are probably suggesting. You could, for example, model a Universe with various smaller galaxy seeds (but not too many, unless you want the performance issues again). There would need to be some underlying filament potential which describes the large-scale structure of the Universe (e.g. https://phys.org/news/2014-04-cosmologists-cosmic-filaments-voids.html) and then your test particles would move towards them to mimic the natural formation of the large-scale structure. (There's a small sketch below of what 'test particles in analytic potentials' looks like in code.)

     As you might guess, there is so much that goes into this (and even I'm not 100% sure of all the steps), but I hope this little post has helped explain a possible path to modelling a Universe on the cheap! ;-)

     Haha, guilty as charged! Never thought I was hiding though! :P Hmmm, unfortunately some of these 'basic' things don't have nice pages for the layman like wiki. There are a ton of astro articles, such as University pages, if you type 'King cluster profile' into Google, but I'm not sure which one is good to recommend. But there's hopefully enough info and links here to help for now! ;-)
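A very small sketch of the halo part of the idea above: a test particle feeling an analytic NFW term instead of N-body gravity. The parameter values are placeholders (not calibrated to anything real), and the other components (Plummer spheres for clusters, a disc, a spiral perturbation, the filament potential) would just be added as extra analytic terms in the same way.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// NFW halo: enclosed mass M(r) = 4*pi*rho0*rs^3 * ( ln(1 + r/rs) - (r/rs)/(1 + r/rs) ),
// so the spherically symmetric acceleration is a = -G*M(r)/r^2 towards the centre.
Vec2 nfwAcceleration(Vec2 p, double G, double rho0, double rs)
{
    const double PI = 3.14159265358979323846;
    double r = std::sqrt(p.x * p.x + p.y * p.y) + 1e-9;   // avoid division by zero at the centre
    double u = r / rs;
    double Menc = 4.0 * PI * rho0 * rs * rs * rs * (std::log(1.0 + u) - u / (1.0 + u));
    double aMag = -G * Menc / (r * r);
    return { aMag * p.x / r, aMag * p.y / r };             // points towards the origin
}
```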
  7. Okay, since I work in Astrophysics (as one of those many PhDs still doing this ;-) ), I thought I should add my weight to this one. Basically, there's no way in hell you can model a full Universe, even in crude detail, using the actual physics. I have colleagues who do these things for a living, and to get even half-crude simulations they need to run for weeks/months on parallel super-computers, usually with 100s or 1000s of processors.

     However, what you can (and probably should) do is use the various models and mathematical prescriptions that describe the structure in the Universe to create a pseudo-model. For example, the large-scale structure of the Universe is a network of filaments connecting large galaxy clusters, which obeys some sort of structure function. Then at smaller scales, galaxy clusters are often modelled using simple radial profiles such as a King profile. Galaxies themselves have either a spiral or an elliptical profile, depending on various parameters such as the amount of gas, the angular momentum of the gas, the age of the stars, whether the galaxy merged with another galaxy earlier, and other factors. And then of course the contents of galaxies can be parameterised statistically, such as globular clusters, the distribution of stars (masses and binarity) and of course the distribution of planets around stars!

     But getting a pseudo-physical algorithm that can 'simulate' the formation of a full Universe like this is a tough ask in itself, even if you only use approximations rather than real physics. I've not quite thought it through myself tbh, but I'm sure there are ways to do it that will give you the kind of results you are hoping for. Sorry this is more of a 'what you can't do' post than a 'what you can do' one, but I thought I should point that out now, before you spend/waste too much time on it, and point you more in the direction of what is possible. This is of course simply creating procedural models of the Universe rather than just planets, but it's the same principle ;-)

     As a small aside, I'm sure you know about these, but I thought I'd link them just in case, since the techniques these guys use will probably overlap with what you might want to do: http://universesandbox.com/ http://spaceengine.org
  8. I know what you mean, but I was talking about how you feed time-travel events back into that. Imagine your 'clunky pixelated' view of the world is a simple graph/map describing every town/village, major road link, river, mountain, etc. Then imagine you go back in time and do something massive, like destroy an entire town with a nuclear bomb, or divert the course of a river by building a dam (I know, this is a weird and complicated-sounding game). Then these small-scale (but important) changes have to first propagate back up to your coarse map and then persist and affect the future. And every event you do in the past has the possibility to change things. Of course, if we make it extremely hard to change anything (like you were suggesting in a previous post), then I guess this is less important. But at some scale, changes in the past need to affect the larger scale in the future. This of course needs to be kept as simple as possible so that a dynamic, real-time PCG algorithm can operate (and not need a supercomputer). Hopefully something like that could be within reach! :-)

     Yeah, you're probably right. PCG has been used in 3D games for a while for some things, like repetitive landscape features (don't the Just Cause games use it for this?) or buildings. But to build up an entire game? Maybe not, or at least if they do, it won't be hyped up like 'No Man's Sky' (which was more the problem than the game itself. Too much hype is never good).

     Oh yes, I forgot about that movie actually :-) That would be cool to do but perhaps tricky to get the gameplay right. In reality, it would probably end up something like 'Sands of Time', where players just effectively rewind when everything goes wrong! Actually, this does make me think of another way of using time travel in games. Imagine you could travel to any point in time and space (in the past or future), but more like an invisible observer where you can only watch and not affect things. Perhaps you can observe mysteries in the past and then 'solve' them in the future, or observe how things go in the future and avoid them happening in the past! This would not bring any paradox issues at least ;-)
  9. Haha, yes, I've heard about some of these weird things, so-called 'emergent gameplay' elements where random stuff happens. It would be great to have such randomness in any game, to make it more interesting. And yes, me too, but that's some bar to achieve!

     Of course, PCG for a time-travel game could not possibly function without either spatial or temporal factors (or both). And while I agree that a huge amount of the PCG can be local, factoring in the other parts does add to the complexity. I've not really started experimenting with PCG myself yet (other than a few trivial examples) but I've looked at, for example, taking some world map of continents generated by some algorithm, then adding a river network with mountain topology, then adding towns and civilisations, etc. It all has to be done in sequence. And then if you add the time-travel factor in, where you can affect the way this sequence develops, then you start to see how things change (the butterfly effect, etc.). I'm probably not quite as confident as you about how much existing maths can be used for PCG (partly because there would be more PCG in games if there were), but I am pretty confident it's possible if enough brains were bashed together thinking about how to solve these problems! I'd love to have more time to do it myself anyway, if I didn't have to do so much work, hehe :-)

     Yes, I agree this is a design choice for the kind of game and story you want. I think a much smaller-scale time-travel game, or one with a more tightly controlled story, will need some kind of paradox detection!! But there are so many ways to do time travel (multiverse branching, a single adapting timeline, predestination/bootstrapping paradoxes, etc.) that there are plenty of types of games you could feasibly do! :-)

     One idea I had was some kind of 'timeline goggles' where you can see in 'real-time' how your changes affect things (the closest thing I can think of is Vibe from the Flash). This would unfortunately require effectively modelling 2 PCG universes simultaneously, the original timeline (or the previous one) and your current alternate one, so perhaps it's too much to ask for, but it would still be cool! Too many possibilities, not enough hours in the day to try any of them!!
  10. Yes, like I said with my ideas, I would have three main types of event (which could perhaps be reduced to two for simplicity anyway). The door would stay open in my idea until perhaps you left that area and came back, then it would be closed. I think that's good enough considering the tonne of other problems that need solving in this kind of game! ;-) Some things can be decorations, like having bullet holes in a wall. It means nothing, but it's nice if they're there (until some time later when it's fixed perhaps?)

     Yes, this is where things become extremely interesting (and complicated). Take killing Lincoln (or any prominent person for that matter). If you killed them as a child, then there's presumably 20 years or so of the timeline that's essentially the same (except for their family and friends). But then when they're supposed to do that significant thing (like becoming president), things can start to change. I agree that one way of compensating for this is having events that have to happen anyway. However, I would also say this would possibly make for a slightly boring game, because it means you can time travel but not really change anything! The beauty of this kind of game (at least to me) would be that you *can* make significant changes, either by killing someone prominent directly, or maybe through some kind of butterfly effect where you make some small change you thought was insignificant, but then find that you've contaminated history completely (think accidentally dropping the Sports Almanac in 1955 and someone picking it up; best burn things when you time travel ;-) ). Maybe, as you say, you should still need to work for it, but I also like the idea of accidentally doing something significant.

     About the 'snapping back to your origin' thing, I have thought about that also. Let's say you try to change the timeline 'Bill and Ted' style (or Doctor Who perhaps, if you watch that), where your timeline crosses over itself but then you create a paradox. Maybe you want to kill a person your past self needs to talk to in the future, or you even try to kill yourself!! One way for the game to prevent this from happening is to instantly snap you back to a non-paradox state (almost like dying and restarting that section again). However, I also like the Looper/Back-To-The-Future style where you (almost) instantly fade from history :-) . There are so many possibilities (as long as you're consistent!)

     If you want to make time-travel games with long histories and the potential to go back and change everything(!!), then yes, this is the problem to solve. The closest thing that I can think of is Dwarf Fortress. I've not really played it (apart from a couple of aborted attempts to get into it) but at the beginning of every game it procedurally generates the entire world AND a full history of all the civilisations, giving you plenty of things to explore, both in the world and in its history. You can't actually time travel of course, but it's the kind of algorithm that would be needed to do this kind of thing. As you say, any significant change would need to generate a new history (from that moment), and I guess whenever you time travel this change needs to be propagated. Doing this efficiently with maths would be awesome, but I have a feeling the maths doesn't exist yet :-) (or at least, nobody has tried to formulate it for this purpose).

     One issue I think is apparent with PCG is that everything is local, because it's just some numbers that have come from an algorithm. Having PCG that accounts for the world and its history is much more of a challenge. It would be great to find out how to do it (and it sounds like something you're interested in too), but I'm not 100% sure even where to start. I have ideas about world generation and how to 'evolve' it in time (e.g. take some land with rivers and mountains; seed civilisations in key areas like near rivers; let them grow and develop, etc.) but doing it efficiently with maths, so that it can be used without massive loading/waiting times in a game, is a big challenge!! Anyway, you said more stuff but that's enough for now :-)
  11. Okay, it's very interesting reading this post and discussion, since it's something I've also considered and wondered how to do: how to make a game that has some kind of genuine time-travel element to it. And the more you think about the various possibilities and complexities, the more you realise you need to give yourself some boundaries (and rules) to make it doable. Just to highlight a couple of issues that have gone through my head thinking about this (some of which you discuss above).

     Recording events: Consider the Bethesda games (e.g. Skyrim, Fallout, etc.). It seems like any object you move, any door you open, every bad guy you kill, gets recorded and is still there next time you go back. That's an incredible amount of information if you think about it, and it's even more important for time-travel games. As you discussed above, I thought about bracketing events into three categories: short-term, intermediate and long/persistent. Short-term is opening a door, creating footsteps, etc., stuff that can be forgotten about when you leave an area. Intermediate is, say, destroying some object or causing some damage, which needs to be remembered for some time but would eventually be 'repaired'. Long/persistent is killing a person, going on a rampage in a city, or changing the course of history with some major event that always has to be remembered. For example, kill a person and maybe when you return your picture is on Wanted posters, and then there are the ramifications of removing that person from the timeline. All big (but interesting) problems to deal with.

     Timelines: What happens when you cause a major event? This depends on your time-travel model. It could be that you have 'parallel universes', so any events you caused in the future are effectively removed. Or do you try to 'merge' timelines together? How do you resolve paradoxes in that case? (This might start to sound like merging two branches in git btw, hehe.) What if you went back in time and killed somebody you had already interacted with in the future? Maybe it would work like the film 'Looper', where changes to the timeline are immediately 'updated'. Of course, time-travel stories in fiction all suffer from inconsistencies, but in a game you have to write consistent rules, and you have to decide those rules first. This is perhaps the most difficult thing to decide really, since it will affect the whole procedural generation of the world at different times and your style of story-telling in the game.

     Epochs: Is the game restricted to a single time period? If you have a GTA-style open world, maybe you're restricted to some short period (or even years) and then your world does not really change except for the changes you make to it. But if you don't restrict yourself, then you can travel back to earlier periods with different cars, architecture, fashion. If you go back far enough, the towns and cities should be smaller and eventually there will only be small hamlets with horses and carriages. That's a lot of work to code a procedural game that can handle all this. One way I've thought about is to go low-poly or more cartoony, so you can create simpler procedural models and nothing too complex.

     Anyway, I've given thought to this kind of procedural generation myself, but not really got stuck into any real coding, since it's such a big project that it's hard to know where to begin. I've just started making cubic buildings really and will go from there! :-) But as you say yourself, maybe it's good to know other people interested in chatting about these kinds of things. Either some solutions will be found or we'll just realise it's impossible and go back to making the next Angry Birds instead! ;-)
  12. True, although I've been creating messages similar to the Gregory book (Game Engine Architecture) design, i.e. messages/events are simple data structs containing the event type, the object/entity id and a variant (union) with the data. This allows me to pack them all into contiguous arrays easily to be passed around. I've seen implementations, though, where different event types are different classes inheriting from a common Event parent. (There's a small sketch of the struct style below.)

     Okay, I think I see what you mean. I guess the ECS design could simply be objects (i.e. entities) containing their component objects (through composition), and then the entities can be passed through all the systems and those containing the correct components will get operated on! I guess the difference, though, is the way you can split the object/entity up between different systems, which is more of a DOD optimisation than a fundamental rethink? Since I come from an Applied Maths/Science background rather than a Computer Science background, I'm not so bothered/offended by this. As long as I understand it, it works and it makes my life easier, then great! :-)

     Yes, I'm well aware of that article (I've already mentioned once in this thread that I know it :P ). I was just surprised by the difference and that the update rate could be so much lower than the rendering rate. If anything, I was thinking of perhaps the opposite case, e.g. a rendering rate of 30Hz and an update rate of 60Hz. But I guess, thinking about it (obvious in hindsight), the high rendering rate is simply interpolating the positions between the (longer) update steps. This means the rendering system perhaps needs to be triple buffered then (i.e. beginning, end and interpolated states). Something more to think about..
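Roughly what I mean by the struct-style events, as a minimal sketch. The event types and payloads are just made-up examples, and I've used std::variant as the modern stand-in for the union; none of this is from the book verbatim.

```cpp
#include <cstdint>
#include <variant>
#include <vector>

using EntityId = std::uint32_t;

enum class EventType : std::uint8_t { Collision, Damage, Spawn };

struct CollisionData { EntityId other; float impulse; };
struct DamageData    { float amount; };
struct SpawnData     { float x, y; };

// Plain-data event: type tag, the entity it refers to, and a variant payload,
// so events sit happily in one contiguous array and can be handed between systems.
struct Event
{
    EventType type;
    EntityId  entity;
    std::variant<CollisionData, DamageData, SpawnData> data;
};

// A simple per-frame queue: systems push events, later systems read them in order.
struct EventQueue
{
    std::vector<Event> events;
    void push(const Event& e) { events.push_back(e); }
    void clear()              { events.clear(); }
};
```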
  13. Thanks. Had a quick peek at those and it seems like there's some good information in there. Will have a proper look again when I come back to the multithreading issue.

     Well, sure, if there wasn't any data dependency between the systems at all, then they wouldn't even need to be in the same program :-) The decoupling is surely at the code level, where your 'system' doesn't have a bunch of references or pointers to other systems causing spaghetti hell, thereby making the development and debugging process easier (allowing individual systems to be unit tested, for example). Sure, bugs can come from screwing up the messaging, sending wrong events, etc., but that's a different problem. That's the element of ECS that sold me on it, and I have to say it definitely seems to work (for me at least) in practice. I guess part of my problem here is that the data dependency between the systems (via the messaging) has moved all the coupling problems into the messaging and into doing things in the correct order. I was a bit confused and stressed about this earlier, but thanks to the helpful comments on this thread, I think I have a better idea of what I'm doing now, thankfully!

     Yes, that's one thing I've been doing now actually. Before coding, I'm just deciding how my various systems should be set up, which ones depend explicitly on another being run first, and then seeing how I should order this, for now just in serial (although with half an eye on multithreading for the future). Some are obvious (as I mentioned earlier in the thread) but others are quite subtle. Plus my engine is quite minimalistic and needs more systems (e.g. physics) before it can do anything decent (i.e. better than Pong, Pacman, etc. ;-) ). Thanks for the youtube link btw. I had a quick look just now but will bookmark it for when I come back to multithreading!

     Okay, this is quite interesting and definitely will change things for me. I've just been running my own engine using vsync to effectively fix it to 60fps, but I'm thinking in the future I'll need to have separate rates. I just wasn't expecting the rates to be that different. I know you mentioned RTS specifically, but are they really as low as 10Hz? I'm guessing more dynamic games (e.g. first-person shooters) will need a faster update rate than this? (Small sketch below of how I understand the decoupled loop now.)

     Oh ok, I've heard about immutable objects but never paid too much attention to them tbh (because I didn't understand the idea from a software engineering point of view). I've tried to use const religiously when I know some data should not be changed, but that's a different thing I guess. I'll have to read a bit more about this then, since if there's one thing I've learned over the last few years, it's that I really want to design my code (any code!) to be easier to maintain and debug from the beginning!!

     Btw, just in case this thread dies anytime soon, I just wanted to say thanks to everyone for all the comments. Even if I've disagreed with you, just making me think about some things has definitely helped :-)
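Here's the little sketch I mentioned of how I now picture the decoupled loop: a fixed update rate, with the renderer interpolating between the previous and current simulation states (along the lines of the usual fixed-timestep-with-interpolation pattern). The 30Hz rate, the State struct and the update/render placeholders are all just illustrative.

```cpp
#include <chrono>

// Monotonic time in seconds.
double now()
{
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

struct State { double x = 0.0; };                 // stand-in for the whole game state

State interpolate(const State& a, const State& b, double alpha)
{
    return { a.x + (b.x - a.x) * alpha };
}

void gameLoop()
{
    const double dt = 1.0 / 30.0;                 // fixed update rate (e.g. 30 Hz)
    double accumulator = 0.0, previousTime = now();
    State previous, current;

    for (;;) {                                    // runs forever in this sketch
        double t = now();
        accumulator += t - previousTime;
        previousTime = t;

        while (accumulator >= dt) {               // possibly several updates per rendered frame
            previous = current;
            current.x += 1.0 * dt;                // update(current, dt) would go here
            accumulator -= dt;
        }
        double alpha = accumulator / dt;          // how far we are into the next update step
        State drawn = interpolate(previous, current, alpha);
        (void)drawn;                              // render(drawn) would go here
    }
}
```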
  14. Fetching one array of components is fine, but if you need two arrays and have to keep hopping back and forth between them (e.g. TransformComponent1, RenderingComponent1, TransformComponent2, RenderingComponent2, etc.) then it will thrash the cache. Sure, the arrays might all fit into, say, the L3 cache, but it's much better to stay in the fast L1 cache as much as possible. Of course, it all depends on the number of components plus how long is spent operating on them. However, as we discussed earlier, it also depends on the extra cost of the messaging.

     Yes, correct in both cases :-) . My experience in parallel programming is in a different paradigm altogether (OpenMP and MPI operating over large data sets) and I'm trying to figure out the best way in this case (although I'm getting the impression there's no single best way, or at least it's horses for courses!)

     Oh, move away from which model? One thread per system? I know they're trying to get away from having dedicated threads locked to one system (e.g. a rendering thread or an audio thread) and, I thought, towards more asynchronous designs like thread pools. Maybe I was just using the wrong terminology there. And yeah, it's a bit worrying that all that effort can lead to zero benefit! Hope not to fall into that trap anyway :-/

     Yeah, as I mentioned earlier, this is more the kind of parallelism I'm used to dealing with (i.e. parallelising over large loops of data with, say, OpenMP). If I could get away with that, I would actually be much happier. I might be tempted to give it a try, since parallelising loops with OpenMP is so trivial really (tiny example below). But yes, I'll solve the original problem first before moving onto other issues! ;-)
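For example, this is the kind of trivially parallel loop I mean: an OpenMP parallel-for over one contiguous component array, where each element is independent so the pragma is all that's needed. The component layout and names are just illustrative.

```cpp
// Compile with OpenMP enabled (e.g. -fopenmp for gcc/clang).
#include <vector>

struct TransformComponent { float x, y, vx, vy; };

void integratePositions(std::vector<TransformComponent>& transforms, float dt)
{
    // Each iteration touches only its own element, so the loop parallelises safely.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(transforms.size()); ++i) {
        transforms[i].x += transforms[i].vx * dt;
        transforms[i].y += transforms[i].vy * dt;
    }
}
```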
  15. Yes, I agree. It's just a matter of finding out how best to implement that in my own little framework in a satisfactory way (for me, that is). I think the first thing I am going to take out of this thread (when I get back to working on it properly next week) is to just concentrate on the serial code without worrying too much about the multithreading (since many of my issues here are directly related to multithreading).

     Well, as I said above, it's purely a multithreading (if done wrong) issue.

     Well, I've not measured this in my framework, because I have other bottlenecks that I would need to fix first. But I have a lot of experience with this in non-gaming codes, and cache-thrashing makes a huge difference as the arrays get larger and larger. On the second point, no, of course there is a cost to the messaging also. But everything is being done in batches in memory and not jumping around in memory, so while the number of CPU instructions is likely to be higher with messaging, there is again less cache-thrashing. It might prove that you're correct in the long run, but I've made this design decision and am happy with it (including its other benefits), so I'll stick with it! :-)

     No, I disagree. This is not wrong; it is rather the simplest way to do it for someone relatively new to engine design. You're correct that tasks can be made for building up the lists to be sent to the GPU (especially with Vulkan) and it's something I'll consider for the future, but for now, one task per system is perfectly valid to get things up and running!! Maybe in a year or so (if I'm still at it and haven't jumped ship to Unreal or something :P) I'll be back to try this though! ;-)

     Yeah, this is something I've not thought completely through. But surely this is a problem with a single buffer also? I mean, if multiple systems are moving the positions and there is only one buffer, then you have to make the same decisions, right? The double buffer just allows you to safely read the (old) positions knowing they are not going to change, maintaining determinism (little sketch of what I mean below). But like I said, I've not thought 100% about it (or even implemented anything like that)!
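A minimal sketch of the double-buffered positions I'm describing (names purely illustrative, and both buffers are assumed to be pre-sized to the entity count): systems read last frame's buffer, which nobody is writing, write into the new one, and the buffers swap once at the end of the frame.

```cpp
#include <utility>
#include <vector>

struct Position { float x, y; };

struct DoubleBufferedPositions
{
    std::vector<Position> buffers[2];
    int readIndex  = 0;                          // last completed frame (read-only this frame)
    int writeIndex = 1;                          // frame currently being built

    const std::vector<Position>& read()  const { return buffers[readIndex]; }
    std::vector<Position>&       write()       { return buffers[writeIndex]; }

    // Call exactly once per frame, after all systems have run.
    void swap() { std::swap(readIndex, writeIndex); }
};
```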