Quote:One of the other advantages of such a 'peer' system might also be the communications bandwidth costs can be distributed (versus the usual large pipe into a central server) to go along with the lower cost of fewer company run servers.
In a peer-to-peer system, I need to send my data to N other people, receive data from N other people, and also send data about more than just myself, for redundancy, error resiliency, and security auditing.
In a client-server system, I send data to one place, and receive data from one place.
Clearly, the client-server system uses significantly less bandwidth. If the price of bandwidth is a marginal cost (which, in aggregate, it is for society at large), then client/server solutions are actually more economically efficient, and deliver better experiences at a lower aggregate cost than peer-to-peer.
To view it another way: you would need, say, a 4 megabit peer-to-peer connection to get network fidelity similar to what a 1 megabit client-server connection gives you. If the client/server service costs less than the price difference between those two connection types, it's actually a win, even for the user.
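To make the scaling argument concrete, here is a back-of-the-envelope sketch. All the numbers (player count, per-player stream rate, redundancy factor) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope upstream bandwidth per client.
# Numbers are assumed for illustration only.

def p2p_upstream_kbps(peers: int, update_kbps: float, redundancy: float = 1.5) -> float:
    """Each client sends its state to every peer, inflated by a
    redundancy/auditing factor (data about more than just itself)."""
    return peers * update_kbps * redundancy

def client_server_upstream_kbps(update_kbps: float) -> float:
    """Each client sends its state exactly once, to the server."""
    return update_kbps

if __name__ == "__main__":
    peers = 15        # assumed: a 16-player session
    update_kbps = 16  # assumed: per-player state stream

    print(p2p_upstream_kbps(peers, update_kbps))     # scales with N
    print(client_server_upstream_kbps(update_kbps))  # constant in N
```

The point is simply that the peer-to-peer figure grows linearly with the number of players, while the client-server figure does not.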
Quote:Combine that with having the task program's code mutate (new differently compiled resent every day -- or even more frequently) to make it nearly impossible to reverse engineer fast enough
In my opinion, no organization could QA code that mutates several times a day. There are only a finite number of scrambling techniques, after all. The best you can do is change the parameters of a known algorithm, such as keys. There are simple attacks that diff successive builds (because you need to distribute these changes to all players, right?) and recover exactly what changed. If you don't believe the user's machine can be trusted at all, you probably shouldn't spend your time trying to frustrate the untrustworthy users; your time is better spent designing game mechanics that don't benefit from client cheating.