
C++ limit on the number of methods per class?


Hi there,

Is it true that the C++ compiler has a limit on the maximum number of methods per class?

http://www.addsimplicity.com/downloads/eBaySDForum2006-11-29.pdf

The presentation above states that eBay's architecture encountered this problem with C++, which is why I decided to ask.

Thanks,
Jr

I skimmed through the PDF, and indeed, they claim that they reached a limit on the number of methods per class.
I'm just glad that I don't work on that programming team, for so many reasons...

The language itself does not dictate these sorts of implementation quantities -- although it does provide an informative appendix (one that does not mandate compliance) listing a number of recommended minimums.

But any finite limitations that exist in practice are purely determined by the implementation.

Also, reaching any of those limitations is a strong indication that you're doing something wrong from a fundamental design perspective. The original ISAPI DLL they're referring to, the one that broke those limits, was, I imagine, a stunning example of poor software design (this seems to be confirmed by the various benefits they list as coming out of the rewrites).

Good grief! Obviously a compiler has some limits, but I've never heard of anyone having too many methods in a class before. They probably thought that since globals are bad, they'd put everything in one class instead. And hundreds of programmers working on that spaghetti? They probably had a high turnover rate due to frustration.

Uh, I did a quick check and it seems that I'm safe so far (although it kept the compiler busy for a while) :)


struct X {
    void foo0() {}
    void foo1() {}
    void foo2() {}
    // ... foo3() through foo9997() follow the same pattern ...
    void foo9998() {}
    void foo9999() {}
};

int main()
{
    X x;
    x.foo0();
}
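For anyone who wants to repeat that experiment without typing ten thousand declarations by hand, a small generator program is the easy route. This is only a sketch: the output file name, the count N, and the fooN naming scheme are arbitrary choices of mine, not anything taken from the thread or the eBay presentation.

#include <fstream>

int main()
{
    const int N = 10000;                  // raise this until your compiler complains (or doesn't)
    std::ofstream out("many_methods.h");  // hypothetical output file name
    out << "struct X {\n";
    for (int i = 0; i < N; ++i)
        out << "    void foo" << i << "() {}\n";
    out << "};\n";
}

Include the generated header from a driver like the main() above (in place of the inline struct definition) to see where, if anywhere, your toolchain draws the line.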



Quote:
Original post by gp343
Is it true that the C++ compiler has a limit on the maximum number of methods per class?

1. What is "the C++ compiler"?
2. I'd be surprised if any C++ compiler supported more than 4 billion member functions per class without running out of memory.
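(For a rough sense of scale, with per-entry overhead that is purely an assumption on my part: 2^32 member functions at even ~16 bytes of symbol-table bookkeeping apiece works out to roughly 64 GB, far beyond the 4 GB address space available to the 32-bit compilers of the day.)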

If you put things into context - it happened before 2002.

This would imply VC6 (or perhaps, though unlikely, gcc 2.x).

It also produced a 150 MB executable from 3.3 MLOC of source! That approaches the size of things like Windows.

-----

Of course, there's also that harsh reality check - code quality didn't matter. eBay kept running, they kept growing, they did what had to be done.

Writing good code is incredibly expensive, and in the majority of today's businesses it isn't even necessary. Keep this in mind when you have a hard time explaining to your boss or manager why your 20% performance improvement is so crucial.

I'm not advocating poor code. Just the usual disconnect between perfect code and earning money or growing a business. Knowing what really matters in the big picture is the really difficult part these days.

Quote:
Original post by Antheus
Of course, there's also that harsh reality check - code quality didn't matter. eBay kept running, they kept growing, they did what had to be done.

Writing good code is incredibly expensive, and in the majority of today's businesses it isn't even necessary. Keep this in mind when you have a hard time explaining to your boss or manager why your 20% performance improvement is so crucial.

I'm not advocating poor code. Just the usual disconnect between perfect code and earning money or growing a business. Knowing what really matters in the big picture is the really difficult part these days.

But code quality did matter and they had to rewrite the whole system. That, and the additional maintenance cost of a rotten code base, likely cost them a lot more money than what they would've paid had they done things right in the first place. Of course the decision to invest less money today only to be forced to burn through a lot more in the future may sometimes be a viable business decision, particularly if you don't expect to be around when the shit hits the fan.

Quote:
Original post by SnotBob
But code quality did matter and they had to rewrite the whole system. That, and the additional maintenance cost of a rotten code base, likely cost them a lot more money than what they would've paid had they done things right in the first place. Of course the decision to invest less money today only to be forced to burn through a lot more in the future may sometimes be a viable business decision, particularly if you don't expect to be around when the shit hits the fan.


Sometimes, getting the crappy solution up and running and using whichever ugly methods you need to get it working is the only way to guarantee the business will be around long enough to write the decent version.

Quote:
The C++ standard recommends limits for various language constructs.
That's funny. Microsoft's C++ standard seems to be a different one than mine, because mine says:
Quote:
Because computers are finite, C++ implementations are inevitably limited in the size of the programs they can successfully process. Every implementation shall document those limitations where known. This documentation may cite fixed limits where they exist, say how to compute variable limits as a function of available resources, or say that fixed limits do not exist or are unknown.
The limits may constrain quantities that include those described below or others. The bracketed number following each quantity is recommended as the minimum for that quantity.
(emphasis added)
I mean, those are different things, aren't they? Or maybe they're reading it differently... :-)

I wonder if this was some sort of automatically generated class? If they had one big executable for their entire website, perhaps these were entry points for different page requests.

It seems ironic that they switched to Java, given that it specifies hard (and, IIRC, smaller) limits on the number of methods in a class.

Quote:
Original post by Kylotan
Sometimes, getting the crappy solution up and running and using whichever ugly methods you need to get it working is the only way to guarantee the business will be around long enough to write the decent version.


I remember hearing a decent analogy (I'm probably remembering it wrong; I can't find a reference to it) about how code quality is something like taking out a bank loan. If the original code is like your debt then fixing the code is like paying off the debt. Sometimes writing code that is of low quality is necessary to start off a business, but unless that code is never updated some effort should be placed into gradually fixing it over time.

Quote:
Original post by nobodynews
I remember hearing a decent analogy (I'm probably remembering it wrong; I can't find a reference to it) about how code quality is something like taking out a bank loan. If the original code is like your debt then fixing the code is like paying off the debt. Sometimes writing code that is of low quality is necessary to start off a business, but unless that code is never updated some effort should be placed into gradually fixing it over time.


People often gripe at overpaid consultants who write crappy code.

But look at it this way.

A $FINANCIAL_INSTITUTION needs some $UTILITY software to be running on Monday at 8:00. Every second, this software makes thousands of dollars.

Now you have two programmers. One who says, "I can fix it properly; it will take 6 hours", and another who says, "It will be up and running at 8:00".

The proper fix will cost the bank $21 million or more. The just-in-time crappy fix will cost the bank $2000 (the price of the consultant).

Does the proper fix make sense? Let's say you need to hire this consultant every single day for 5 years. That comes to only $3 million.

eBay handles $1500 worth of listings every second. Can you really take 1 year off to "properly" rewrite that system?

Simply put - the proper fix will never *ever* pay for itself, especially since it will still require feature updates, it will still have bugs, and it will still show problems.

Know your business. Know the finances. It really isn't as easy as assuming a dogmatic approach that proper code, superb design and long-term benefits are better. Sometimes the market simply cannot pay for them, much to the dismay of the programmers. Myth says that 1 line of code costs NASA $1000. If that is the way to go, can you build your FOSS forum software this way and finance it through support contracts? Should eBay use this methodology? They have 3.3 MLOC. The cost of such software would exceed the yearly income of most countries.

A relevant read about Waterworld syndrome.
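As an aside, the arithmetic above roughly checks out if the bank example is read as roughly $1,000 of revenue per second and a six-hour outage for the proper fix:

6 hours x 3,600 s/hour x ~$1,000/s ≈ $21.6 million of forgone revenue for the proper fix
$2,000/day x 365 days x 5 years ≈ $3.65 million for five years of daily just-in-time patching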

Quote:
Original post by Antheus
[...]

Know your business. Know the finances. It really isn't as easy as assuming a dogmatic approach that proper code, superb design and long-term benefits are better. Sometimes the market simply cannot pay for them, much to the dismay of the programmers. [...]


[idealist]
As I'm sure you're well aware: a person can be both financially conscious and technically competent. A "good" programmer can make quick progress without utterly crippling the software for the future.

The coder that would get you in this sort of quandary is a liability, and a good financial mind should be able to see that that costs money too.
[/idealist]

Quote:
Original post by Antheus
Know your business. Know the finances. It really isn't as easy as assuming a dogmatic approach that proper code, superb design and long-term benefits are better.

Equally, a dogmatic approach of saving as much money as possible now, any way you can, isn't usually better. The pages of the Daily WTF are filled with examples of what happens when you get a 'consultant' to do a few quick fixes. My impression is that the people who make business decisions aren't often on the ball on what the final costs will be. For example, the 'traditional' way of producing software, design-implement-test, has been shown to be more costly than maintaining good code quality from the start, mainly because testing ends up requiring more time.

Quote:
Original post by SnotBob
My impression is that the people who make business decisions aren't often on the ball on what the final costs will be.


Well, with eBay, you have hard numbers. They facilitate over $1500 worth of listings per second at 99.94% up-time.

You can now do the math on what it would take to update a codebase of 6 MLOC, with 100 kLOC produced each week.

I'm not talking about an idealistic or dogmatic approach. I'm talking about very hard numbers from the presentation.

Can you produce such software that will scale with the exponential growth presented and hold up for the next 5/10/15 years? Can you do that while *reducing* the cost of this development? While still allowing features to be added on a weekly schedule? And including the cost of migration and training? Supporting both versions at the same time? And still solving all the features that are unknown at present but will arise during this time? And still allowing 300 features to be added each week?

If so, then eBay has a job for you. A job that will write history and leave a mark on software development, while making you rich, famous and influential at the same time.

eBay is not a software company. They don't develop software; they don't make money from it. The software and hardware they run are facilitators for their real business. Auction houses have existed for centuries. All of this is used merely to improve the process.

Quote:
Original post by Antheus
Can you produce such software that will scale with the exponential growth presented and hold up for the next 5/10/15 years? Can you do that while *reducing* the cost of this development? While still allowing features to be added on a weekly schedule? And including the cost of migration and training? Supporting both versions at the same time? And still solving all the features that are unknown at present but will arise during this time? And still allowing 300 features to be added each week?

Didn't they do just about that when they rewrote their system? Perhaps they decided to adopt better practices so that there wouldn't be a need for that in the future?

Quote:
Original post by SnotBob

Didn't they do just about that when they rewrote their system? Perhaps they decided to adopt better practices so that there wouldn't be a need for that in the future?


Read through the presentation; the choices are clearly presented. They had to rewrite it when they reached the limits of the current architecture. It doesn't matter why they had so many methods; the system as a whole didn't scale anymore.

It started as a weekend project.

Later, they assembled third party solutions for everything.

These in turn enabled them to grow the business, up to the point where they exceeded the limits of that software.

At that point, they ran into a lack of third-party solutions, so, as the only way to scale beyond that, they implemented their own custom solution.

But they did not do that from day one; from day one, they leveraged existing solutions and proven software.

This is where the common mistake lies. Software needs to perform some crucial function to be commercially successful. It's not the software that generates revenue. As such, any excessive investment in software where it's not needed is often the reason for failure.

Note that none of this says anything about code quality. So they have more methods than the compiler can handle. I'm pretty certain that they've heard of refactoring.

But the important thing is that they understood where the value comes from. eBay could not have been started by a geek. They'd have spent 5 years building an ultra-scalable, custom-optimized, assembly-based web site.

And this is where the argument for feature-first-optimize-second comes from.

eBay will reach limits no matter how superb their solutions are. And given exponential growth, the difference between the best and the cheapest solution will likely be a week or two, while the effective price difference between the two will be a factor of 10 or even 100.

How many people refuse to use the std classes because "they're slow"? Just like with eBay - some projects really did reach the limits, so they went for a custom version. But most simply hear about those problems and assume they will apply to their own project as well.

Let's say you're setting up an auction site yourself. You have no customers, no money, and no real employees. Is the proper way to go about it to look at eBay, and build a data center around 6 MLOC custom code, 2000 servers and Oracle license?

Or will a simple set of Python scripts do, until you reach their limits? How many people would say that Java is slow and could never be used for anything serious?

Knowing your project and your business is the key here. And such real-life experiences are an incredibly valuable lesson for anyone hoping to make money out of software (or better yet, to make money while using software to expedite the process).

Quote:
Original post by samoth
[...]
I mean, those are different things, aren't they? Or maybe they're reading it differently... :-)


It's a recommended minimum value for a limit. :)

Quote:
Original post by Antheus
Read through the presentation; the choices are clearly presented. They had to rewrite it when they reached the limits of the current architecture. It doesn't matter why they had so many methods; the system as a whole didn't scale anymore.

[...]

Note that none of this says anything about code quality. So they have more methods than the compiler can handle. I'm pretty certain that they've heard of refactoring.

Nowhere did I suggest that eBay should've aimed for the kind of volumes they have today right from the start with an unproven business concept.

That they actually ran into the compiler's limit on the number of methods says precisely that they had code quality issues. Or can you think of an even remotely reasonable scenario where that could happen (I'm assuming the limit was in the hundreds or thousands of methods per class)? It's impossible to say much based on the presentation, but the fact that they mentioned it suggests it was representative of the sort of scalability problems they had with that part of their system.

You're exactly right that you need to know your business, which is why it's so sad that the common strategy for dealing with these sorts of issues appears to be 'don't fix it until it's really, really broken'. (Although it may be that eBay started the rewrite before they actually ran into serious problems, and the new system was ready when they did.)
Quote:

Let's say you're setting up an auction site yourself. You have no customers, no money, and no real employees. Is the proper way to go about it to look at eBay, and build a data center around 6 MLOC custom code, 2000 servers and Oracle license?

A bit beside the point, but with no money to do exactly that, I'd say you're out of luck, because to compete with the likes of eBay you'd really need to be able to match their capacity from the start. And that'd be the least of your worries.

Quote:
Original post by Antheus
Well, with eBay, you have hard numbers. They facilitate over $1500 worth of listings per second at 99.94% up-time.

Is that $1500 profit for eBay, or just $1500 of which they take a few % charge?
Quote:
You can now do the math on what it would take to update a codebase of 6 MLOC, with 100 kLOC produced each week.

No more than it would if they had no business at all, because obviously you don't shut down the old system while developing the new one! Really, I've never heard of a business that, when a software upgrade was needed, shut down their existing system before the new one was ready for use. The costs you present are therefore entirely fictitious.
Quote:

Read through the presentation, the choices are clearly presented. They had to rewrite it when they reached the limits of current architecture. It doesn't matter why they had so many methods, the system as a whole didn't scale anymore.

Isn't that kind of the point others are trying to make? Had they made a scalable system from the start, they wouldn't have had these problems now. If they failed to realize they'd ever become this successful (a reasonable mistake), why not upgrade after things got rolling but before they ran into scalability issues that they had to know would eventually occur?

If your software-dependent business is doing well but you know your codebase sucks, then it's time to fix it ASAP even if it "works" now, because the sooner you upgrade the cheaper it will be. If your code is not maintainable or scalable, you will face serious problems down the road if your company continues to be successful.

