
#5288225 Optimizing Generation

Posted by Ravyne on 22 April 2016 - 07:03 PM

Also, you'll probably want to invert your for loop order and do z, y, x; it's more memory-friendly. If you are wondering why, imagine you had a single array of length 16*16*16, and think of how you're jumping around in it as you travel through your innermost for loops.


This is always a good place to look, but OP's code seems consistent in that X is both the outermost loop and the most-significant ordinal (array indexer?). In other words, the X and Z "labels" are consistently swapped, but they don't seem to be mismatched in the way that usually causes the cache to be thrashed. The speedup seen was likely entirely due to getting rid of sqrt() -- or I'm not reading the code with the comprehension I think I am :)
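To make the cache argument concrete, here's a minimal sketch (hypothetical names, assuming a flat 16x16x16 chunk where X is the most-significant ordinal in the index math): whatever the labels are called, as long as the least-significant index is in the innermost loop, every access is stride-1 and cache-friendly.

```cpp
#include <cstddef>

// Hypothetical 16x16x16 voxel chunk stored as one flat array, indexed
// as (x*16 + y)*16 + z -- x is the most-significant ordinal. With x in
// the outermost loop and z innermost, consecutive iterations touch
// consecutive bytes; swapping the loop *bodies* (not just the labels)
// would instead jump 256 bytes per iteration and thrash the cache.
constexpr int N = 16;

int sumChunk(const unsigned char (&voxels)[N * N * N]) {
    int sum = 0;
    for (int x = 0; x < N; ++x)          // most-significant index outermost
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z)  // least-significant index innermost
                sum += voxels[(x * N + y) * N + z];  // stride-1 access
    return sum;
}
```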

#5288216 Should you load assets/resources at runtime or compiletime?

Posted by Ravyne on 22 April 2016 - 05:43 PM


For small games I'd recommend embedding it into the executable because it allows a smaller package.



A resource doesn't magically shrink because you embed it into an executable.


This is probably not what WoopsASword meant, but it's worth mentioning that packing very small resources, such as small, low-color sprites, into a file together can actually reduce the size of your installation on disk. Prior to 2009, 512-byte disk sectors were the standard, so a 16x16 pixel, 8-bit sprite would consume a whole disk sector even though it only needed 256 bytes, for 50% wastage -- you couldn't make the physical file any smaller, but you could have stored another sprite inside "for free". After 2009, disk manufacturers started migrating to even larger 4KB sectors, and these were the majority of disks starting in about 2011; there the same sprite would result in about 94% wastage (you could store up to 15 additional sprites "for free"). Of course, 16x16, 8-bit sprites are not so common today, but a 32x32, 16-bit color sprite gets us right back where we started with 50% wastage, and 32x32, 8-bit sprites waste 75%.


The flash in SSDs is (exclusively, as far as I know) organized as 4KB sectors physically in silicon, so 4KB or larger logically; and these 4KB physical sectors are the unit of write-cycle endurance as well, so it's extra considerate of SSD users to fully utilize each sector.


If you have lots of individual files that are smaller than 4KB you really should consider packing them together to eliminate wastage, such as by packing sprites into a sprite sheet or a simple flat file. I mention this specifically since it's a relevant consideration for 2D sprite games (lots of small images that aren't compressed) -- of course, if you have larger textures/images, and especially ones that compress well with acceptable quality, standard compression will do you fine with minimal wastage.
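As a sketch of the arithmetic (the function name is mine, and sector sizes are assumed as above: 512 bytes on older disks, 4096 on "Advanced Format" disks and SSDs):

```cpp
#include <cstddef>

// Slack space: bytes wasted because a file's last (or only) sector
// cannot be shared with another file. Packing N small files into one
// leaves at most one partially-filled sector instead of N of them.
std::size_t slackBytes(std::size_t fileSize, std::size_t sectorSize) {
    const std::size_t tail = fileSize % sectorSize;
    return tail == 0 ? 0 : sectorSize - tail;
}
```

A 256-byte sprite wastes 256 bytes of a 512-byte sector (50%) but 3840 bytes of a 4KB sector (~94%); pack sixteen of them into a single 4KB file and the slack drops to zero.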


I still would not pack those kinds of files into the executable itself (better to pack them together in files/units that make sense), but it would achieve having less wastage all the same.

#5288063 Going multi-threaded | Batches and Jobs

Posted by Ravyne on 21 April 2016 - 06:34 PM

What are 'Batches' and 'Jobs' when generally speaking about Thread Pooling?

What are the best way to identify a function call as a 'Batch' or either as a 'Job'?
How would one go about creating a multi-thread system?


Skimming Sean's article, it looks to me to break down like this -- a batch is some portion of work as determined by a subsystem (e.g. a physics engine, rendering, a resource loader), where different subsystems have different needs. He uses the example of a physics system that creates a batch for each "island" of physics objects (further explained to mean "nearby physics objects which potentially interact with one another, but not with physics objects that exist elsewhere"). Because these "islands" are independent, you can create a batch out of each one and run them simultaneously on different processor cores, since they don't interact with one another, giving potentially higher CPU utilization. And because the big-O cost of physics calculations usually squares with the number of bodies in consideration, it's also more efficient to have more, smaller batches than fewer, larger ones (e.g. 3² + 3² + 3² < 5² + 4² < 9²). For other kinds of work, other batching strategies (or no particular strategy) might give the best results: rendering might create batches that use the same materials (textures + shaders + etc.), while a resource loader probably does one asset per "batch". A job seems to be an individual work submission -- a job is the result of batching.


As for creating the system itself, Sean posted a bunch of good links. The basic idea of a thread pool, which seems to be the universally-preferred system today, is that you allocate a certain number of threads statically using OS mechanisms, based on the number of CPU cores (and hyperthreads) you find available. Your game logic puts jobs into a queue, and there's some mechanism (could be a dedicated thread, could be a periodic or event-driven system) that moves work off the queue and onto one of those threads you allocated. There's a lot of detail I'm glossing over about what a job looks like in terms of an interface, but you can think of a job as some kind of class with access to all the operations and information needed to do the work, and some kind of "DoWork" method that kicks things off once it lands in one of those threads. You probably also need a way to return the results and signal when a job is done; this could be through a decoupled messaging system (like signals/slots) or a result queue -- you want to drop the result into some other delivery mechanism to free the thread as soon as possible.
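A minimal sketch of that shape in C++ (names are mine; result delivery, error handling, and any batching policy are deliberately omitted): a fixed set of threads waits on a shared queue, and submitted jobs run on whichever worker picks them up.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread-pool sketch: threads are allocated once, up front,
// and jobs move off a queue onto one of them. A real system would add
// a result queue or signals/slots for delivering results back.
class ThreadPool {
public:
    explicit ThreadPool(unsigned count = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < count; ++i)
            workers.emplace_back([this] { workLoop(); });
    }
    ~ThreadPool() {                       // drain remaining jobs, then join
        {
            std::lock_guard<std::mutex> lock(mtx);
            done = true;
        }
        wake.notify_all();
        for (auto& t : workers) t.join();
    }
    void submit(std::function<void()> job) {  // one "job" = one work submission
        {
            std::lock_guard<std::mutex> lock(mtx);
            jobs.push(std::move(job));
        }
        wake.notify_one();
    }
private:
    void workLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mtx);
                wake.wait(lock, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty()) return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();  // the "DoWork" call happens on the pooled thread
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex mtx;
    std::condition_variable wake;
    bool done = false;
};
```

Usage is just `pool.submit([]{ /* work */ });`; the destructor finishes queued work before joining, which stands in for the "signal when done" mechanism a real engine would need.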

#5287682 Should you load assets/resources at runtime or compiletime?

Posted by Ravyne on 19 April 2016 - 04:04 PM

I don't see how "compile time loading" could be any faster; you might be confusing this with the fact that you're paying the cost of loading the resource when the program is loaded, rather than paying it when you call LoadAsset(...) or somesuch to load the resource from a file at runtime. In other words, it's not faster, you've just failed to measure the compile-time scenario at all.


Josh gave a nice overview of pros/cons and suitable use-cases. You definitely don't want to "compile time load" all your assets, least of all on the false premise that it's somehow faster. In particular, the downside is that whatever is in the data segment is in memory whenever your program is -- that means if you have 4GB of assets, they're all in memory even if you're only using 400MB of them in the current level or scene, and your minimum requirements will reflect that. Now, with virtual memory that's not the whole story, and the OS will jump through hoops making your game work, but--and here's the point--had you loaded those assets at runtime, you would only have in memory exactly what you need in a given level or scene; what's more, you gain the flexibility of loading lower-fidelity versions of assets (e.g. smaller mip-levels) if you need to get your memory footprint down even smaller, or the reverse, loading higher-fidelity versions for users with tons of memory.
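To illustrate the contrast (LoadAsset is a hypothetical name echoing the post; the embedded bytes are a stand-in): the embedded array occupies the data segment for the program's entire lifetime, while the runtime loader only materializes what the current scene asks for.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// "Compile-time loading": baked into the executable's data segment,
// resident whenever the program is, whether or not this scene uses it.
static const std::uint8_t kEmbeddedSprite[] = { 0x42, 0x4D, 0x00, 0x00 };

// Runtime loading: only what the current level needs occupies memory,
// and you're free to pick a lower- or higher-fidelity variant by path.
std::vector<std::uint8_t> LoadAsset(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    std::vector<std::uint8_t> bytes{std::istreambuf_iterator<char>(file),
                                    std::istreambuf_iterator<char>()};
    return bytes;
}
```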


The great majority of your assets should be loaded at runtime. Personally, I would only consider compile-time-loading assets which, if missing, would mean that the engine, not the game, would be unable to continue functioning as designed. Even at that, I would strongly consider loading them at run-time, as soon as possible, rather than embedding them in the executable, just because it still affords greater flexibility.

#5285933 Why learn STL library

Posted by Ravyne on 08 April 2016 - 06:10 PM

You should learn it because it's The Right Way* to write code in C++ -- you should write std::vector almost every time you want dynamic memory, you should write std::unique_ptr every time you have a pointer whose contents are owned by a single entity, you should be using the standard algorithms wherever you can and not writing raw loops, you should be using std::move, and countless other examples.
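A small sketch of those idioms together (the Enemy type and function names are made up for illustration): std::vector owns the dynamic memory, std::unique_ptr expresses single ownership, and a standard algorithm replaces the raw accumulation loop.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <numeric>
#include <vector>

struct Enemy { int hp = 10; };

// std::accumulate instead of a hand-rolled summing loop.
int totalHp(const std::vector<std::unique_ptr<Enemy>>& enemies) {
    return std::accumulate(enemies.begin(), enemies.end(), 0,
        [](int sum, const std::unique_ptr<Enemy>& e) { return sum + e->hp; });
}

// std::vector for dynamic memory; std::unique_ptr so each Enemy has
// exactly one clear owner, freed automatically when the vector dies.
std::vector<std::unique_ptr<Enemy>> spawn(std::size_t count) {
    std::vector<std::unique_ptr<Enemy>> enemies;
    enemies.reserve(count);
    for (std::size_t i = 0; i < count; ++i)
        enemies.push_back(std::make_unique<Enemy>());
    return enemies;  // moved out -- no copies, no manual delete anywhere
}
```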


I would argue that, without the standard library, it can hardly be said you're writing C++ at all. Yes, just "the syntax" is C++, but that's a bit like saying the collection of English words is English. While technically true in a legalistic sense, a language -- whether computer or human -- is more than its atoms. It's about how the parts fit together, and what patterns have emerged to deal with common situations. Most programmers, especially new or young programmers, who don't utilize the standard library just end up writing C++-flavored nonsense. You can use a library that's not the standard library, such as one instituted by your company or engineering organization, but there had better be a good reason for avoiding the standard library, and there had better be a rock-solid implementation of its alternative.



* The C++ Standard Library / STL is not perfect; it's got some warts, imperfections, and accumulated cruft, but by and large it's great and there's a lot of good, useful stuff inside. When it offers a solution to your problem, or a bunch of parts that can be wired together to solve your problem, it really ought to be the first thing you reach for. For the general case, I don't believe there's a more performant and battle-hardened library on the planet. It's true that there are those with special needs who might avoid the standard library because they can't use exceptions in their environment, or who might be able to devise special ways of doing things that are faster than the standard library because they understand their exact use-case better (e.g. which corners they can cut) and so their solution actually has to do less work. But it's a rare thing that anyone comes up with something faster than the standard library functions while also maintaining the same level of general-purpose correctness.


Don't make the mistake of assuming, as most starting game developers do, that the standard library is "too slow" or just plain to be avoided. Don't make the mistake of assuming, as most starting game developers do, that "real (game) programmers" write every line of code they've ever laid eyes on. Don't make the mistake of assuming, as most starting game developers do, that using such a pedestrian library doesn't live up to the game-developers-as-programming-gods myth, and if you do then you'll never be elite. Don't make the mistake of assuming, as most starting game developers do, that you know what needs to be optimized before you've measured it with real tools that you really know how to use.

#5285492 How would you go about developing a game console OS?

Posted by Ravyne on 06 April 2016 - 03:23 PM

You're more or less talking about spinning a whole new FreeBSD distribution, which is complicated but clearly has been done by many parties. The scope of that discussion is far too large to be had here.


Now, without being unduly discouraging, your questions worry me because you seem to be concentrating on superficial elements of this would-be operating system: how to play a video during startup, tamper checking, one specific boot-loader operation. These might seem like trivial or fundamental questions, but by the time you can display video and sound you'll already have had to boot enough of an operating system to give you near-complete functionality anyways -- you'll need at least a basic sort of kernel that is capable of initializing memory and IO, talks to a filesystem, can load a driver module (or contains built-in drivers) for video and audio, handles input (probably another driver), and is capable of fast interrupt handling so it can fill/move those audio buffers with super-low latency if you want to avoid popping and other audio artifacts (because our ears are so sensitive to aberrations, latency is even more important in audio playback than in video).


And that's just to get something to boot -- not something optimized for running games. I'm certain that Sony's FreeBSD-based Orbis OS is highly streamlined for gaming workloads and has a unique interface to graphics and audio that, I would guess, minimizes dependence on the kernel and on "traditional" drivers as such. Without such care and attention, whatever you might produce from a collection of standard *BSD components will be just that, a slightly slimmer BSD that performs in no significantly different way.

#5285364 How To Unit-Test A Tile-Map-Class

Posted by Ravyne on 05 April 2016 - 06:07 PM

The idea of Unit Testing is basically to exercise the entirety of the (public, at minimum) interface, and to verify that all input parameters to it respond in the anticipated way -- that might mean, for example, that a default-constructed tilemap object has an empty tiles vector, or that attempting to load a file that you know to be missing responds in a certain way, or that a malformed map file responds in a certain other way. You might also test things such as memory reclamation, the level of exception-safety guarantees provided, performance characteristics of certain operations, or any other behavioral guarantee you make -- all of those things can be considered part of the interface. You decide what the interface you are promising your users is, and the unit tests attempt to exercise all common and corner cases to verify that you are upholding that promise. What the tests are, exactly, depends on the promises you are making.


In general I would not recommend that a tile-map be responsible for drawing itself, and so I would not recommend you find it necessary to test its ability to draw -- only its ability to represent its data model consistently (that is, the vectors and other variables inside it). In general a tile-map is correct if its data model is consistent with what you expect, and whether it draws the same pixels is not really a proper question to ask -- in other words, if one of the tile bitmaps changes, the output may be different independently of whether the map's correctness changes. However, if you must test such things, you could write a bitmap out and compare it to a known-good bitmap of the same map (or compare hashes thereof) -- as long as the same parameters and tile images go in, they should compare equal. As I said, though, a better design to begin with would be one in which a tile-map does not actually draw itself (that is, is not directly responsible for causing pixels to be put onscreen) -- an alternative design would allow a "drawer" of some description to query the tile-map for the information it needs to draw it.
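A sketch of what that looks like in practice (every name here is hypothetical, not from OP's code): the tile-map only models data, and the unit tests exercise the promises its interface makes, with no pixels involved.

```cpp
#include <cassert>
#include <vector>

// Data-model-only tile map: a separate "drawer" would query tileAt()
// to render, and that interaction belongs in an integration test.
class TileMap {
public:
    TileMap() = default;  // promise: a default-constructed map is empty
    void load(int width, int height, std::vector<int> tiles) {
        w = width; h = height; data = std::move(tiles);
    }
    bool empty() const { return data.empty(); }
    int tileAt(int x, int y) const { return data[y * w + x]; }
    int width() const { return w; }
    int height() const { return h; }
private:
    int w = 0, h = 0;
    std::vector<int> data;
};

// Each unit test exercises one promise of the interface.
void testDefaultConstructedMapIsEmpty() {
    assert(TileMap().empty());
}

void testLoadedMapReportsItsTiles() {
    TileMap map;
    map.load(2, 2, {0, 1, 2, 3});  // row-major 2x2 map
    assert(!map.empty());
    assert(map.width() == 2 && map.height() == 2);
    assert(map.tileAt(1, 0) == 1);
    assert(map.tileAt(0, 1) == 2);
}
```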


Integration testing is basically just unit testing, only it's concerned with how independent systems fit and work together. If you used a design like the one I describe above, where drawing of pixels is separate from tile-map representation, then that interaction would be a good example of something I'd expect to find in an integration test. The main difference between unit testing and integration testing is that you are looking at compound concerns, and sometimes at state that sort of floats somewhere between the subsystems in the interaction (that is, in the glue that binds them together). If you're going to test, then a commitment to testing at all levels is important, though like all things there are diminishing returns at some point and you have to move on. A good target is to test a representative sample of every good or bad input and corner-case you can think of initially, and to add a test for every bug you fix so that the bug does not get re-introduced later (this is part of what's called regression testing). That strategy should put you in good shape.

#5285353 Community College or Game Development?

Posted by Ravyne on 05 April 2016 - 04:33 PM

Setting aside the clear advice to stay in school, I think the honest way to ask the question you've started to ask, "Is school the right path for me?" is to make the distinction between whether school is an inconvenience to your current circumstances or whether it really is holding you back in the grand scheme. Honestly answering that question is the difference between making an impulsive reaction and a rational response.


In many of the high-profile cases you read about, you wouldn't call those people anything less than prodigious. They had already learned, or demonstrated the ability to learn on their own, everything their plans hinged upon -- hence the value of continuing their formal education right then and there was low. Combine that with a reality where moving on those plans immediately (for competitive or opportunity reasons) was necessary for success -- hence the opportunity cost of staying in formal education was very high -- and you can see how they rationally came to their own decisions. And still they could have failed had any single thing simply not aligned. Microsoft would not be what it is today if Gates and Co. had not been able to secure the software that became DOS, or if they had failed to license it to IBM for the Personal Computer -- even the fact that it was licensed to, rather than sold outright to, IBM was crucial to the Microsoft you know today. Facebook's success hinged largely on being in the right place at the right time, as early social-networking-ish things (LiveJournal, MySpace) began to fall out of favor; it created a base so large that no new social network--including some real juggernauts--has posed a real threat since. Had it launched too early or too late, it could have ended up in the dustbin or as a too-late also-ran itself.


Even (or perhaps especially) if your indie game is really great, it's unlikely that whether it launches this fall or two years from now will make a difference to its success -- or certainly not in any way you can quantify and capitalize on. If time-to-market is a significant factor, it would imply that your game lacks a certain uniqueness and is closely derivative of others, hence the competitive landscape it will face now or in the future will not be significantly different. If it is unique (and good), then time-to-market is not a significant concern because its value is derived from its enduringly-unique qualities, hence the competitive landscape will be no different in that case, either. Sure, it's possible someone might independently develop and release something close to your idea -- it's happened and it will happen again -- but that's the business equivalent of a rogue wave and there's no planning for that.


I read recently that there are now something like 1000--yes, one thousand--games released on iOS and Android every day, so every day just a few gems ride a wave of crap and only a couple might truly stand out. Whichever category you find yourself in won't be significantly different in months or in years. All the other factors are essentially left to chance. You can only really control content and quality, and rushing usually doesn't help there. Note, though, that time-to-market is not a synonym for timing -- timing your release and marketing push is certainly crucial and is something you can and should reason about, but for a small player at least, it has little if anything to do with whether you release this year or next, or the year after that.

#5284340 N64, 3DO, Atari Jaguar, and PS1 Game Engines

Posted by Ravyne on 30 March 2016 - 03:26 PM

I'm just watering down Dark Souls for the (here we go) Sega Dreamcast, Sega Saturn, Nintendo 64, 3DO, Atari Jaguar, Nintendo DS, Nintendo Game Boy Advance (not the two older Game Boys because of lack of buttons), and PS1




I will start working on Saturn, Dreamcast, DS, and GBA now


But why? What do you possibly hope to gain from the experience of targeting all of these different consoles? They're nothing like each other, and they're nothing like modern consoles. You know what is like a modern console? PCs! And PCs are also like--I daresay identical to-- PCs. Between PCs, PS4, and XBone you've got like 70 percent of the home gaming market as it exists today, and all of them have AMD CPUs and GPUs little different than the ones on store shelves right now.


And the Saturn is literally the last place you should start. Set aside whatever false conception you've got that all this work you have laid out is important. I *enjoy* this kind of useless intellectual adventure, I *enjoy* reading arcane technical manuals, I've been programming in some form for 22 years, I've done GBA programming, I've done embedded systems programming and bare-metal programming in the DOS days (which are not unlike early console programming in many ways, save the exotic hardware) -- I JUST BOUGHT A 20-YEAR-OLD 486 COMPUTER TO RUN DOS BECAUSE I HAD A WILD ITCH TO RELIVE MY HIGH SCHOOL PROGRAMMING DAYS, AND I WANT TO SEE HOW MUCH I CAN PUSH IT WITH ALL I'VE LEARNED THESE PAST 15 YEARS, JUST FOR FUNSIES!


And still I have not one actual inkling to ever attempt programming a Saturn in any serious measure. Its high-level architecture can be summarized in three words -- Complicated, Obscure, and Bizarre.


And you talk of single-handedly "watering down" a recent AAA blockbuster.


You can learn good lessons by programming limited systems. You can learn those lessons effectively from just one or two of those systems. You can learn those lessons effectively from the one or two least complicated among them. You'll learn by being forced to do without modern conveniences and hand-holding, not by throwing yourself against the obscure ghosts, gremlins, and hobgoblins of abandoned futures past.


Pick PC, or if you really must the GBA or Dreamcast. Choose one and don't even think about the others. Find out how difficult any one of them is, and understand that those are the least of the troubles you're asking to find. 


And if you choose to ignore this advice, at least try to promise yourself that you'll have the humility to start back at the beginning when it all comes crashing down and you're frustrated, because I really do get your enthusiasm and the desire to do all the things all at once. I had piles of notebooks full of game designs well beyond my means, controller and console designs, stories, characters, and gameplay mechanics. There's nothing wrong with daydreaming -- in fact it's a great thing in moderation -- but you can set yourself up for harm if you tie your expectations too closely to it before you're ready. I honestly want to see you succeed in smaller, attainable victories, and all I've said thus far is the best advice I can give you from lived experience.

#5284264 N64, 3DO, Atari Jaguar, and PS1 Game Engines

Posted by Ravyne on 30 March 2016 - 08:43 AM

And to be clear, I am totally *NOT* discouraging you from learning to write games, I'm just saying you should slow down. I suspect you're quite young -- so was I when I first started programming, like many people here. I was probably 10 or 11 when I copied my first BASIC program out of a book my school library had. I spent the summer between 6th and 7th grades, and my 7th-grade study halls, writing text adventures in a notebook because I didn't have my own computer. But I kept reading, got myself an old computer, and spent the rest of high school writing increasingly complex games and tools -- I also tried (and mostly failed) to teach myself C (the language was less of a problem; setting up my environment is what killed me) and dabbled even in some Game Boy programming.


There's a path for you here, but you'll only be discouraged if you try to skip right to the end.


I saw in your other thread that you were considering getting a Raspberry Pi -- do that (get a Pi 3). It's a whole computer you can do whatever you want with, and it's got tons of programming environments available for it. Or get a used PC or laptop that'll make a serviceable Linux machine (say, a dual-core with 4GB of RAM, which you can probably get for not much more than a Pi 3, and which someone might very well give you for free); being a more standard PC than a Pi means better support for software out of the box. Just program -- start small, don't worry about what it's going to run on. Program, program, program.

#5284261 N64, 3DO, Atari Jaguar, and PS1 Game Engines

Posted by Ravyne on 30 March 2016 - 08:22 AM

To be blunt, if you're asking the kinds of questions you are, it's likely the case that you're not at all ready to actually tackle programming these machines in the way they need to be programmed.


Even if you're pretty familiar with programming in general, these machines you name are sparsely documented and incredibly complicated.

  • The 3DO has a very early ARM processor with an entirely custom floating-point unit, and two custom GPUs that split various duties and have to work in tandem. It's got only 2 megabytes of RAM and 1MB of VRAM.
  • The Jaguar is two custom DSPs strapped to a 68000 that's barely fast enough to coordinate the two -- one runs audio in software, the other runs graphics (mostly) in software. Only 2MB of RAM total.
  • The PS1 has a fairly standard MIPS 3K CPU, but a custom graphics DSP (graphics mostly in software) designed in-house under Ken Kutaragi. Again, 2MB RAM, 1MB VRAM -- and this might be the most approachable system on your list.
  • The N64 has a fairly standard MIPS 4K CPU and a custom graphics chipset designed by SGI -- again, two custom DSPs, one running 3D graphics and audio, the other 2D graphics. 4/8MB RAM. In many ways this should be the most friendly on your list, but it's probably less understood than the PS1.
  • The Saturn had 8 processors of various kinds, drew quads instead of triangles, and hardly anyone but Sega's own arcade developers could make it sing.


What documentation exists is unofficial, mostly reverse-engineered or gleaned from 1000s of pages of technical manuals for the processors that aren't fully custom. If you've never cracked open the datasheet for an obscure RISC CPU and actually understood the words in front of you -- much less its errata -- this isn't the fight you're looking for.


If you're really hellbent on a console, I'd suggest that the Game Boy Advance is going to be your best bet -- it's simple, well documented, and modern enough to do interesting things while still providing a good intellectual challenge. The original Game Boy or Game Boy Color is also a good choice, though more limited and more challenging because of it. The Dreamcast isn't half-bad either -- probably the most approachable home console ever, though the Gamecube/Wii was starting to gather a good scene last I followed this stuff.


Of course, the kicker is that you're pining to go through all this work and torture and practically no one will have the means, desire, or will to run the fruits of your labor. If you want, you know, customers, you really should just stick to PC or mobile, or maybe try to get in on indie development for one of the current consoles -- though again, the kinds of questions you're asking suggest to me you've got a lot to learn yet before you're ready for anything I've put on the table here.

#5284144 N64, 3DO, Atari Jaguar, and PS1 Game Engines

Posted by Ravyne on 29 March 2016 - 06:09 PM

I don't disagree -- there's a sort of in-between AAA class of games with biggish budgets, maybe a biggish name from a biggish studio, that don't really push the boundaries of performance or design. You do see a good amount of licensing there (usually Unreal, some Unity as you get further from the bleeding edge); that's where commercial engines make a really strong showing.


But few of the highest-end or most-unique games get made with commercial engines -- I'm talking your Metal Gear Solids, your Halos, etc. -- for whatever reason. I do discount the games that come first-party from the engine vendors, though -- it's not really fair to say that because all of Valve's games are made with the Source engine, or Epic's with the Unreal engine, and those are all very high-end AAA games, engines must dominate; that's not the same kind of endorsement as when outside companies license those engines and make things.


It's certainly become more prevalent, and the trend will continue until it is the majority of games at all levels, I'm sure. I don't think we've reached that just yet, though. The pressure to adopt commercial engines is greatest on cross-platform games, where it's most difficult to make something in-house that exceeds commercial offerings across platforms, but I think we tend to expect less of those titles in terms of boundary pushing; you can't avoid compromise when you go cross-platform.

#5284129 N64, 3DO, Atari Jaguar, and PS1 Game Engines

Posted by Ravyne on 29 March 2016 - 04:41 PM

Yep, pretty much all code for those consoles was written to suit the particular game -- studios would possibly re-use modified versions of what they had written before if their next game was a sequel or similar enough. I've heard of a few times where engines were licensed from other companies who had done similar games (in particular, I'd read that the South Park N64/PS1/PC game was essentially a mod of Turok, which never appeared on PS1 but did appear on PC), but I don't know how much truth there is to it.


Consoles of that era were so specific and different from one another in ways that go well beyond using different CPUs or GPUs that any common abstraction--like an engine--would be virtually guaranteed to throw away big performance to get there, and none of those consoles had any to spare. cross-platform engines just wouldn't have made any sense.


Even on a single one of those systems, different games pushed the machine to its limits in such different ways that an engine suitable for multiple kinds of games would be pretty hard to pull off. That's why what little code reuse there was, was so often limited to the same studio making the same kind of game (or direct sequels).


Truth be told, I'd say half or more of AAA development is still that way -- all in-house, or heavily-modified in-house -- especially for those high-end (often exclusive) games. It's only been two console generations or so that the idea of an off-the-shelf engine has been reasonably viable for AAA titles. Currently, lots of flashy titles get made with those engines too (though, again, often modified in-house), but I bet it's not a majority. But for sort of "standard", not-really-pushing-the-boundaries console games and more casual console games, certainly engines like Unreal, CryEngine, Source, and Unity have made serious inroads.

#5283894 Fast way to cast int to float!

Posted by Ravyne on 28 March 2016 - 12:44 PM

No, casting between primitive types shouldn't incur function-call overhead no matter how it's spelled -- remember, compilers are amazing, and just because something looks like a function call doesn't mean it is.



Both of those are bad, two strikes, and disabling the warning makes three.


If you mean to cast, use the language of a cast -- otherwise it obscures your intent and makes the job of understanding what you mean to happen harder for both humans and the compiler.


Case in point -- had you used a proper cast, explicitly, I think you'd not have gotten that warning in the first place -- or at least you could look at it trivially and know it was intentional, rather than working out every time whether it was an unintentional bug.
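For illustration (the function and values are made up, not from the code being discussed), this is the explicit spelling being argued for: the static_cast documents that the truncation is intentional, where an implicit conversion would draw a warning and leave the reader guessing.

```cpp
// Explicit cast: the narrowing float -> int conversion is clearly
// deliberate, so no conversion warning and no ambiguity of intent.
int healthPercent(float ratio) {
    return static_cast<int>(ratio * 100.0f);
}
```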


Always be wary of ignoring warnings, whether you disable them or just glaze your eyes over them -- it's impossible not to miss real bugs when you routinely flood yourself with warnings you'll simply ignore. When you really do need to disable a warning, you should do it as tightly as is practical. Using #pragmas you can disable a warning for a single line of code, a few lines, one function, one source file, whatever -- tighter is better.


And don't complain that's a pain in the ass -- it is, and it ought to be, because there's no reason you should be doing it so often that it's a burden. If you reach for it enough for it to be a burden, that's a good sign you're doing something horribly wrong.


Just stahp.

#5283808 Fast way to cast int to float!

Posted by Ravyne on 28 March 2016 - 12:49 AM

I would expect the fastest way is the most direct -- C or C++ style casts, assuming C++ as your language.


It's always hard to tell in Coding Horrors whether people are serious or trying to be funny -- if serious, what kind of profiling leads you to believe this is faster, and what other methods are you testing against?