Posted by PeterStock
on 26 November 2013 - 12:35 PM
I had the same problem and my approach was to suck the character down to the ground when it was touching the ground last frame and <= [some limit] distance from the ground this frame. I set [some limit] based on the max x velocity and max slope of the ground.
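The idea above can be sketched in a few lines. This is a hypothetical illustration, not code from any particular engine: the names (Character, applyGroundSnap, maxSpeedX, maxSlope) are made up, and the snap limit is derived as described, from the maximum horizontal speed and the steepest ground slope.

```cpp
// Illustrative sketch of the ground-snapping idea; all names are invented.
struct Character {
    float y;            // vertical position (positive = above the ground)
    bool wasOnGround;   // was the character touching the ground last frame?
};

// The limit covers the largest height change the ground can make in one
// frame: horizontal distance travelled times the steepest slope.
float snapLimit(float maxSpeedX, float maxSlope, float dt) {
    return maxSpeedX * dt * maxSlope;
}

void applyGroundSnap(Character &c, float groundY, float limit) {
    float gap = c.y - groundY;  // distance above the ground this frame
    if (c.wasOnGround && gap >= 0.0f && gap <= limit) {
        c.y = groundY;          // suck the character down onto the ground
    }
}
```

A character that leaves the ground faster than the slope allows (a jump, or being knocked upwards) exceeds the limit and is left alone.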
Posted by PeterStock
on 10 October 2013 - 02:29 AM
You know that the number of items (cards) is going to be very small (52), so I'd think that the O() order of performance doesn't matter so much.
I noticed one thing though - you mentioned extracting statistics for betting strategies. I think real-world shuffling is actually very un-random, so if you do your shuffling as described it might not be such a good model. Maybe make N splits in the deck and re-arrange those stacks of cards, repeating multiple times? I don't know how casinos do this though; maybe they have a machine or something?
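The split-and-rearrange idea could look something like this. It's only a rough model of hand shuffling (an assumption on my part, not how casinos actually do it): cut the deck at random points into N stacks, put the stacks back in a random order, and repeat for a few rounds. This is deliberately much less uniform than a single Fisher-Yates shuffle, which is the point if you're trying to model real play.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Rough model of hand shuffling: cut the deck into `stacks` piles at
// random points, reassemble the piles in random order, repeat `rounds`
// times. Far less uniform than std::shuffle on the whole deck.
std::vector<int> cutShuffle(std::vector<int> deck, int stacks, int rounds,
                            std::mt19937 &rng) {
    for (int r = 0; r < rounds; ++r) {
        // Pick the cut points (plus both ends) and sort them.
        std::vector<std::size_t> cuts = {0, deck.size()};
        std::uniform_int_distribution<std::size_t> pick(1, deck.size() - 1);
        for (int i = 0; i < stacks - 1; ++i) cuts.push_back(pick(rng));
        std::sort(cuts.begin(), cuts.end());

        // Gather the piles, then reassemble them in a shuffled order.
        std::vector<std::vector<int>> piles;
        for (std::size_t i = 0; i + 1 < cuts.size(); ++i)
            piles.emplace_back(deck.begin() + cuts[i],
                               deck.begin() + cuts[i + 1]);
        std::shuffle(piles.begin(), piles.end(), rng);

        deck.clear();
        for (const auto &pile : piles)
            deck.insert(deck.end(), pile.begin(), pile.end());
    }
    return deck;
}
```

With 52 cards the cost of all this is negligible, which is the earlier point about O() not mattering here.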
Does the optimisation result in lower power consumption on computers?
Yes, with the exception of the situation Adam described - if you use the extra time you have to do more stuff (render more frames), then that might not be lower power. But if you use the time to do nothing (your process/thread sleeps) then it's lower power.
When you have >60 fps, you might want to put in a frame limiter to throttle it to 60 and sleep during the 'spare' time. Or substitute whatever X you want instead of 60.
Actually I remember being impressed by some of the videos in the new wave of free university courses available now (I think that was Udacity). And your point about doing is important - I agree you'll not get far with a book or lectures if you just read/listen and don't do exercises.
I should have qualified education with 'good' - agreed that not all are good.
I think the point I intended but may have failed to make is that programming is not all about learning the syntax of a certain language. My algorithms and data structures course from university was very useful, and showed me that for all the low-level optimisation you can do, it doesn't really count for much compared to the gains you can get from taking a better high-level approach to a problem.
I'll try splitting the buffer and see what happens, but I'm just curious about something. Every single example/tutorial I have seen always stores the object itself in a vector, never a pointer to that object. Why is that? Would std::vector<particle *> make any performance difference?
That might be more convenient if you want to have multiple lists of different particles (e.g. render all the 'fire' particles together in one way and all the 'water' ones together in a different way, but still simulate them all together in the same way). However, then they aren't necessarily consecutive in memory and you can't do your call to
which incidentally looks dubious to me: a hard-coded stride of 6*4 might break if a compiler packs the struct members differently, or if you change the code later on; sizeof(particle) would be more reliable.
If it's really performance-critical, I'd split out the positions into a float array, so it can be passed to GL in a simple, reliable way, and store an index (or pointer) into this array within each instance of the particle class. All the other stuff I'd leave nice and tidy, wrapped up in particle.
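A sketch of that split, with invented names (ParticleSystem, spawn, simulate): positions live in one flat float array that GL can consume directly, while each particle keeps an index into it alongside its other state.

```cpp
#include <cstddef>
#include <vector>

struct ParticleSystem {
    std::vector<float> positions;  // x0, y0, x1, y1, ... contiguous for GL

    struct Particle {
        std::size_t posIndex;  // index of this particle's x in positions
        float vx, vy;          // everything else stays wrapped up here
        float life;
    };
    std::vector<Particle> particles;

    std::size_t spawn(float x, float y) {
        std::size_t idx = positions.size();
        positions.push_back(x);
        positions.push_back(y);
        particles.push_back({idx, 0.0f, 0.0f, 1.0f});
        return idx;
    }

    void simulate(float dt) {
        for (auto &p : particles) {
            positions[p.posIndex]     += p.vx * dt;
            positions[p.posIndex + 1] += p.vy * dt;
        }
    }
    // Rendering can then hand positions.data() straight to GL with a
    // stride of 0, with no dependence on how the struct is packed.
};
```
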
All modern computers I'm aware of have 8-bit bytes, and store integers in 2's complement. So you'd get the same thing on all platforms when you write bytes. Cast signed ints to unsigned before doing any bitwise operations on them (cast them back to signed on reading in, once re-constructed). Handle floats by making the bit pattern into an int, then decomposing to bytes as you would an int.
union IntFloat { float f; int i; } iof;  // view the same bits as float or int
float f = 1.0f;
iof.f = f;
int i = iof.i;  // the bit pattern of f, as an int
Oh, and you probably want to use fixed-width types such as int32_t (from <cstdint>) rather than plain int, whose size can differ between platforms and compilers (e.g. 32/64 bit OSes).
Don't do

float f = 1.0f;
int i = *((int *)&f);

because it might not do what you want. Look up 'strict aliasing' to find out why.
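A well-defined way to get at the float's bit pattern is to memcpy it into a fixed-width unsigned integer and decompose that into bytes, as you would any integer. The function names and the little-endian byte order here are my own choices for illustration; compilers optimise the memcpy down to a plain register move.

```cpp
#include <cstdint>
#include <cstring>

// Serialise a float to 4 bytes via its bit pattern, avoiding the
// aliasing problems of the pointer cast. Least significant byte first.
void writeFloat(float f, uint8_t out[4]) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined type punning
    for (int i = 0; i < 4; ++i)
        out[i] = static_cast<uint8_t>(bits >> (8 * i));
}

float readFloat(const uint8_t in[4]) {
    uint32_t bits = 0;
    for (int i = 0; i < 4; ++i)
        bits |= static_cast<uint32_t>(in[i]) << (8 * i);
    float f;
    std::memcpy(&f, &bits, sizeof f);  // reconstruct the float
    return f;
}
```

Because the shifts fix the byte order explicitly, the bytes written are the same regardless of the host machine's endianness.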
In my opinion, clean design has much to do with minimising the connections between modules. Every program has to be decomposed into multiple modules (sub-problems), and the higher the independence between each of these, the cleaner the program design.
Of course, how you choose to divide a program up can be linked to the algorithms you choose to use. Going back and making a different choice can require a (very) different program architecture; otherwise you end up with a horrible mess.