Rockoon1

Members
  • Content count

    534
  • Joined

  • Last visited

Community Reputation

104 Neutral

About Rockoon1

  • Rank
    Advanced Member
  1. I seem to recall one synth methodology that had a bug in it which quickly became a prized feature of the company that stumbled upon it, because it could produce very desirable sounds (I think the method came to be called Phase Distortion). I wouldn't be so concerned with doing things in some mathematically pristine fashion, but would instead simply form the most generalized methodology possible. (Of course, if you are using FM to transmit signals meant to be decoded, what I say means nothing.. you'd have to get things right in terms of the decoders.)
  2. As far as shape distortions and so forth, that is normally considered a DESIRABLE property for most additive-style synths. It's the sort of thing that gives a synth its character, whereas synths which do things 'mathematically pristine' are considered undesirable junk. The entire purpose of FM, in practice, is in fact to distort the shape of the carrier waveform in a significant way.
  3. 1div 2muls faster than 2 divs?

    Quote:Original post by Drilian Wow, way to be a complete dick while simultaneously providing absolutely nothing to the discussion at all. Care to give an example of some place where you beat a modern C++ compiler in a substantial way at low-level optimization? Anything? Bueller?

    I don't mind being a dick to people who spread misinformation like it is gospel. You yourself do not require evidence for your beliefs, but instead only request evidence to the contrary of your belief. Have you ever seen evidence that (pick your favorite compiler) is better than humans, or did you eventually just start believing the repetitious misinformation of others, without actual evidence?

    Since there are decades of evidence to the contrary, the burden of proof is on those who now think that compilers produce optimal code. You may now want to argue that producing optimal code isn't the claim... but actually that IS the claim. Either the compiler output is optimal, or an assembly programmer can beat it. You may now also want to claim that such gains made by assembly programmers are only minor, but that in itself defeats the original claim, and is contrary to empirical evidence.

    ..and finally, on the subject of ignorance: if you aren't using ICC for a compiler then you are all washed up. You cannot simultaneously argue that your compiler produces anything even close to optimal code when there is a perfectly good compiler, and forums full of assembly language programmers, that make your pet compiler look like shit.

    ------

    You wanted me to add to this thread, so I will. Nobody here has asked the most important questions of all: What precision are we talking about? On what processor is performance most important? (Often this will be the stated minimum requirement.) Do you need to use the results immediately, or will there be lots of independent work available that can be done that doesn't need the resources the division is consuming?

    If you don't know why these are the actual important questions, then I suggest that you brush up before trying to argue otherwise.
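
    For reference, here is a minimal sketch of the transformation the thread title asks about: one divide to form the reciprocal plus two multiplies, versus two divides. The function names are mine, and whether it actually wins (and how much precision it costs) depends on exactly the questions above: precision, target processor, and how much independent work can overlap the divide.

        #include <cstdio>

        // Two independent divisions by the same denominator.
        void two_divs(float a, float b, float c, float& x, float& y)
        {
            x = a / c;   // divide #1
            y = b / c;   // divide #2
        }

        // One division to form the reciprocal, then two multiplications.
        // Trades a divide for a multiply, but may lose a ULP or two of precision.
        void one_div_two_muls(float a, float b, float c, float& x, float& y)
        {
            float r = 1.0f / c;  // the single divide
            x = a * r;           // multiply #1
            y = b * r;           // multiply #2
        }

        int main()
        {
            float x, y;
            two_divs(10.0f, 20.0f, 3.0f, x, y);
            std::printf("%f %f\n", x, y);
            one_div_two_muls(10.0f, 20.0f, 3.0f, x, y);
            std::printf("%f %f\n", x, y);
        }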
  4. [C++] Low memory handling

    The OS and other programs should not be allowed to starve your application's working set, but that is exactly what you are working towards. Imagine two such programs running at the same time. Is there an eventual winner? Now, you really want to avoid swapping data that is already on disk. Why have a situation where data is written to disk when it's already on the disk to begin with? I realize that avoiding swapping is problematic, so it's usually an OK choice to just let the OS memory subsystem do its job. If you are wildly changing the amount of memory you have allocated, then the OS memory subsystem CANNOT do its job. It CANNOT make predictions anymore. In effect, those wild changes are working to produce worst-case behavior from it, which is exactly what you are experiencing with extremely bad thrashing. Remember, the OS knows approximately how long it's been since you accessed a memory page, so its swapping behavior is fairly intelligent without your intervention.
  5. 1div 2muls faster than 2 divs?

    Quote:Original post by Zipster There are very few circumstances where I can beat it

    There, fixed that for you.
  6. There are easier ways to prevent degenerate behavior with splay trees, such as simply performing a random search every once in a while. Any sort of hybrid could break some of the desirable properties proven to hold for splay trees.
  7. Troubles in OO design

    Quote:Original post by Fingers_ Also, you're going to do a death check somewhere anyway. And I don't mean "only when hit in the head with a freakin' laser". If you did it like that, you'd be writing duplicated code for each way your object can take damage.

    object.apply_damage(type_of_damage, amount_of_damage), which might itself call object.begin_death_state(type_of_death) or particle_engine.begin_something_fancy(particulars). If you are doing it any other way, perhaps you can elaborate.

    Quote:Original post by Fingers_ What are you going to do when you decide to have your object play a death animation rather than just erase itself from the list?

    'death' here has meant 'should be deleted from the list because now it shouldn't exist', not 'taken 100% hp damage and, per the rules of game mechanics, the condition is terminal'. No matter how you slice the behavior you have described, you need to maintain more state. For instance, in some games an object in a death animation should not interact with other objects, while in others it should interact with a subset of them, and in still others it should interact with all of them. This is merely a question of maintaining game state. The object itself could have the maintenance code, some other object could have the maintenance code, or it could be handled separately within the main iterator. There could even be a generic state-manipulator object which runs script code. Still further, such states could be explicit properties of the object, or implicit information based simply on which list the object is currently on. Each has its pros and cons for sure. Some of them are performance related. Some of them are development related. Some of them are flexibility related. 'More performant' is definitely a good starting point for a justification. It isn't the only possible justification, but nevertheless you cannot argue that it isn't in fact a justification. Performance has value.
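
    As a rough illustration of the apply_damage route described above (a sketch under my own assumptions, not Fingers_'s actual code; the enum values and the particle_engine hook mentioned in the comment are hypothetical):

        #include <cstdio>

        enum class DamageType { Laser, Explosion, Collision };

        struct GameObject
        {
            int hp = 100;

            // Single entry point for every way the object can take damage,
            // so the death check is not duplicated per damage source.
            void apply_damage(DamageType type, int amount)
            {
                hp -= amount;
                if (hp <= 0)
                    begin_death_state(type);
            }

            void begin_death_state(DamageType cause)
            {
                // Could instead call into something like
                // particle_engine.begin_something_fancy(particulars).
                std::printf("object died from damage type %d\n", static_cast<int>(cause));
            }
        };

        int main()
        {
            GameObject ship;
            ship.apply_damage(DamageType::Laser, 120);
        }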
  8. Troubles in OO design

    Quote:Original post by DevFred Eliminating 50,000 boolean tests is still premature optimization. Even if you can, with refactoring, speed the test up by a factor of a thousand or completely get rid of it, it will still be insignificant for the performance of the game.

    Conflicting statements here. Which is it: a premature optimization, or a useless optimization?

    Quote:Original post by DevFred Thinking about how to remove the objects, for example, is orders of magnitude more important to performance than testing whether they should be removed.

    Here's what I think. You have observed through experience that certain things (such as deleting things from lists) are important enough that your design, from the start, should take performance into account. Now I ask you: why should this method, of having the main update iterator check for deaths and act on them, be the one you want to use? Is there a reason why this method is superior? It doesn't seem to me to be easier to maintain, nor does it seem to me like a better separation of responsibilities. Exactly how much object-type-specific code can the main iterator have before you decide that it's too much?

    "It's not a performance problem" isn't a good justification. That's a good reason not to reject the methodology, which is different from having a good reason to use it. Since performance isn't on the list of "good" qualities here, but only on the list of "not a problem" qualities, what are the actual pros of this methodology?
  9. Troubles in OO design

    Quote:Original post by DevFred Concerning the problem discussed here, I don't see how eliminating 50,000 boolean tests is going to make any significant performance difference. How do you even put that many objects on the screen?

    50,000 was pulled out of thin air. It could just as easily be 10,000,000 (1 million objects * 10 states of objects being within lists) or only 100. We could be talking about individual terrain polygons here instead of space ships... and games typically handle more than what's on screen. Sure, with a Space Invaders clone, optimization might not be a concern.. not even for a cell phone, iPod, or other portable device... but using that 50,000 figure: 50,000 of anything (especially branches) is a big number on some of these devices, so don't simply dismiss it because you are sporting a 3GHz Core 2.

    As far as pure OO: a third object (the main iterator) handling the deletions of objects from lists is known as the Mediator pattern in OO circles. The third object mediates the exchange. The objects themselves signaling the deletion from the lists is known as the Observer pattern: when the object's state changes, it informs all the observers. The proposed more optimal methodology is very similar to the Observer pattern, so it shouldn't be thought of as violating OO principles. Most programmers, it seems, cannot stomach a 'pure' observer implementation however (because it's inefficient), and would rather design smart objects which already know who to interact with, even if that violates encapsulation (because such a violation is minor, and the objects are tightly coupled no matter how you boil it).
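
    To make the Observer side of that comparison concrete, here is a minimal sketch under my own naming (not taken from any post in the thread): the dying object notifies whoever registered interest, and each list removes it in response, so the main iterator never polls a flag.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct GameObject;

        // Anything that wants to know about a death registers as an observer.
        struct DeathObserver
        {
            virtual ~DeathObserver() = default;
            virtual void on_death(GameObject* obj) = 0;
        };

        struct GameObject
        {
            std::vector<DeathObserver*> observers;

            void add_observer(DeathObserver* o) { observers.push_back(o); }

            // The object signals its own death; nobody polls an is_dead flag.
            void die()
            {
                for (DeathObserver* o : observers)
                    o->on_death(this);
            }
        };

        // A list that keeps itself consistent by observing the objects it holds.
        struct ObjectList : DeathObserver
        {
            std::vector<GameObject*> objects;

            void add(GameObject* obj)
            {
                objects.push_back(obj);
                obj->add_observer(this);
            }

            void on_death(GameObject* obj) override
            {
                objects.erase(std::remove(objects.begin(), objects.end(), obj), objects.end());
                std::printf("removed object, %zu left\n", objects.size());
            }
        };

        int main()
        {
            GameObject ship;
            ObjectList alive, burning;
            alive.add(&ship);
            burning.add(&ship);
            ship.die();   // both lists drop the ship; no per-frame flag checks
        }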
  10. Troubles in OO design

    Quote:Original post by smcameron Right, and the classic OO principles, frankly, just plain suck.

    I wouldn't go that far. They suck insofar as it pertains to many optimization strategies, but that isn't in the scope of their intent. OO is a solution for a development-efficiency problem, not a runtime-efficiency problem. It is no surprise that there will be sacrifices.

    Quote:Original post by ToohrVyk To reuse smcameron's example, assume that instead of setting an isdead boolean, the object instead removes itself from a list of objects then deletes itself. This approach prevents us from storing the object in other lists (such as 'enemies that are about to appear in the game') or even be referenced by other objects

    smcameron would probably point out that if the space ship has a reference to the alive list, then why can't it also have a reference to the things that are homing in on it? (Or, why can't the alive list have a reference to a list of homers?) The purely OO methodology which can accomplish this stuff is the Observer pattern. By having a specific list of things homing in on it, the space ship is in effect implementing a subset of the observer pattern. In this point of view, the difference is that the space ship knows that it has a list of homing missiles and what a homing missile is (what it needs to know), instead of the more generic concept of a list of arbitrary observers.
  11. Troubles in OO design

    Quote:Original post by smcameron Exactly, and well put. And since you mention that this point is a tangent, I'll bow out now. But nice to see that somebody gets what I'm saying. :)

    It's mainly a tangent because the OP did ask for advice on a "clean" object-oriented methodology. Essentially, he is looking to draw on the experience of others, and in that vein, restricting the discussion to OO principles is exactly what he is after.
  12. Troubles in OO design

    Quote:Original post by Antheus At some point, somewhere in there, you need to determine the /* oh, I'm dead */. Whether you then rewire the pointers or do something else, it's a consequence. This can be coded in many ways.

    Yes, you do need /* oh, I'm dead */ ..but you do not also need /* oh, it's dead */

    I realize that this point is somewhat tangential to this discussion, but you are making inaccurate observations, so you shouldn't get to use them as leverage for your primary argument without them being pointed out. The methodology that forbids an object from knowing who contains it is, in this case, causing at least twice as many maintenance checks as are actually necessary. And that's assuming an object always has to check whether it should die, where the reality is that normally an object only has to make that check when it discovers a pending interaction with another object.
  13. Troubles in OO design

    Quote:Original post by Antheus Or something along these lines. There are no excess comparisons.

    I realize that you probably would see them if you spent a little more time thinking about it, but you have approximately 50,000 excess comparisons there. The object flags itself killed or does not, and then another object checks to see if it is or is not flagged. That is more work than a system where the object simply kills itself and other objects need not check for this state. I know that it may seem minor with your example, but what happens when there are a lot more lists than just the alive list? What about the burning list, or the waiting-for-orders list? I can easily envision a dozen such lists for a simple top-down shooter...
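
    To make the counting concrete, here is a small sketch under my own assumptions (not Antheus's actual code): the flag approach pays one comparison per object per list per frame, while self-removal pays only at the moment of death.

        #include <list>

        struct Ship
        {
            bool killed = false;
            int  hp     = 1;
        };

        // Flag approach: one comparison per object per list per frame,
        // whether or not anything died this frame.
        void sweep_flagged(std::list<Ship*>& alive)
        {
            for (auto it = alive.begin(); it != alive.end(); )
            {
                if ((*it)->killed)      // runs 50,000 times per frame for 50,000 ships
                    it = alive.erase(it);
                else
                    ++it;
            }
        }

        // Self-removal approach: the ship erases itself the moment it dies,
        // so no per-frame scan of the whole list is needed.
        void kill(Ship& ship, std::list<Ship*>& alive, std::list<Ship*>::iterator self)
        {
            ship.hp = 0;
            alive.erase(self);          // done once, at the moment of death
        }

        int main()
        {
            std::list<Ship*> alive;
            Ship s;
            alive.push_back(&s);
            kill(s, alive, alive.begin());  // ship removes itself; no later flag sweep needed
            sweep_flagged(alive);           // with the flag approach this scan runs every frame
        }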
  14. Mixing Algorithm

    Quote:Original post by Grain I've done extensive testing by playing many many sources that are close to saturation and they don't clip.

    Hmm. Can I ask what your hardware is, and what software path you have chosen? Using a DirectSound8 capture buffer I have noted the opposite, by actually recording the data being played out through 16-bit Stereo Mix while playing audio with both WinAmp (ShoutCast radio) and Flash (YouTube) simultaneously. This is on an AC97-chip mobo using the 64-bit Realtek drivers for XP/64.

    I would caution against using a compressor, because that is never the best solution, but instead just a stop-gap for when the real problem hasn't been addressed or is impractical to address.
  15. Mixing Algorithm

    Quote:Original post by Grain I realize the computer clipping problem, however there must be a way around it. I know it's possible because the sound card does it all the time. If you have several programs that all play sound at or near the saturation point they will still overlap and somehow not clip and distort.

    This is incorrect. Feel free to record from the 'Stereo Mix' or 'What U Hear' sound source and plot the waveform while multiple programs are playing. There will be tons of (now visible) clipping. The difference you are most likely experiencing between this case and the case of an incomplete software mixer is that the sound card's mixer drivers will saturate the audio data at -32768 or +32767 when it overflows, rather than allowing it to overflow two's-complement style a la C integer math (which creates wild sign changes). See: http://en.wikipedia.org/wiki/Saturation_arithmetic

    Quote:Original post by Grain The only other operation I can think of other than the ones mentioned above is to simply overlay waveforms using whichever sample is greatest at any given time. But I don't think that is right either.

    It's very wrong.
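
    For what it's worth, here is a minimal sketch of that saturating mix (my own helper name, assuming signed 16-bit samples): sum in a wider type and clamp to the 16-bit range instead of letting the sum wrap.

        #include <cstdint>
        #include <cstdio>

        // Mix two 16-bit samples with saturation: sum in 32 bits, then clamp,
        // so an overflow flattens against the rails instead of wrapping sign.
        static int16_t mix_saturated(int16_t a, int16_t b)
        {
            int32_t sum = static_cast<int32_t>(a) + static_cast<int32_t>(b);
            if (sum > 32767)  sum = 32767;
            if (sum < -32768) sum = -32768;
            return static_cast<int16_t>(sum);
        }

        int main()
        {
            // Two near-saturation samples: plain 16-bit addition would wrap negative,
            // while the saturating mix clips at +32767.
            int16_t a = 30000, b = 20000;
            std::printf("saturated mix: %d\n", mix_saturated(a, b));
        }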