I tend to see many people referring to using X, Y, or Z libraries. Is there a point in the development phase where having too many libraries can actually worsen development? Or can it simply bloat the program with DLLs and/or JARs (and native DLLs included for Java compatibility)?
Is this one of the main reasons why people tend to reinvent the wheel in programming, because they don't want to bloat the program with X, Y, and Z all integrated into the project, and would otherwise have to remember what each of X, Y, and Z does?
Or is this more akin to how a Java class implements multiple interfaces, up to the point where having to override functions becomes a nuisance?
[source lang="java"]public class A implements B,C,D,E,F,... {}[/source]
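To make the nuisance concrete, here is a minimal sketch of what implementing several interfaces forces on a class. All interface and class names below are made up for illustration:

```java
interface Drawable { void draw(); }
interface Updatable { void update(); }
interface Persistable { String serialize(); }

// Implementing every interface means providing a body for every
// method, even the ones this class barely cares about.
class Widget implements Drawable, Updatable, Persistable {
    @Override public void draw() { /* render... */ }
    @Override public void update() { /* tick... */ }
    @Override public String serialize() { return "widget"; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Widget w = new Widget();
        w.draw();
        w.update();
        System.out.println(w.serialize());
    }
}
```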
I'm just pondering. Thanks in advance.
Would using some libraries make debugging better or worse? If worse, is this one of the many reasons people reinvent the wheel?
Define bloat? If X, Y, and Z are all useful in the project, then they can't be classified as bloat, since bloat denotes something unnecessary. If you are using library X for only a very tiny portion of its functionality, then sure, it might be better and more orderly to excise the rest, either by pulling that code out of X or by re-implementing that portion if it's not too complicated. And you have to remember what X, Y, and Z do regardless of whether you use an external library or reinvent the wheel. At some point, if you need to accomplish Task A, then you will need code that actually accomplishes Task A. You can either write that code yourself (with all of the time sinks of development, debugging, etc.) or you can use library X, where the development time sinks are someone else's problem and your time sinks are merely integration.
But, sure, in some cases using a library could make debugging worse. Say, if the library itself has bugs. And sometimes tracking bugs across DLL boundaries can get tricky. But if the bug actually is in your code rather than the library, then your tests should catch it before it gets to the library code. And you have to learn how to make the correct trade-offs in something like this. Is it a good idea to plan for low debugging time cost at the expense of high development time cost (re-implementing library X)? Maybe, but probably not.
That's why I chose to reinvent the wheel a lot in the past. And sure, the code tends to take up less space,
and it is nice to be in control. But I've switched to reinventing wheels less often.
I make use of small extensions that I can actually reach into and debug if I need to,
but I discourage that unless you plan on tweaking a library or you really enjoy tinkering with stuff.
I'm convinced that I have learned more by reinventing the wheel than I otherwise would have.
On a lower level, I mean. Because if I'd used libraries more often, for more tasks,
I would just have learned something else instead, in the larger picture.
It can also be very time consuming to reinvent the wheel,
and as with many other things, it's probably a matter of tradeoffs and personal preference.
I started out wanting to program games, but being curious by nature,
I wanted to see and understand how "everything" works.
Then I'd naturally create as much as possible on my own,
and today I'd call myself a hobbyist engine developer. Not a game developer, yet.
But what do you feel like spending time on?
(Issues with the space taken up by code are often an organization thing, IMO. Don't worry too much about bloat; if a library offers good utility, then use it.)
I tend to see that "reinventing wheels" and "implementing libraries" are like optimizations.
Like SuperVGA mentioned, reinventing the wheel gives the wheel's creator a better understanding of what goes on behind the scenes, and finer control over the logical flow of the program. To me, it has always been a slog reinventing the wheel so many times, even though I get to see what's behind it all and how it behaves.
I'm thinking that if you were to continuously add more "libraries" to a program, its size will "bloat" up.
In defining "bloat", I'm assuming that if a program embeds a number of libraries, then to a casual techie user it may seem full of "bloat": for example, the VS redistributables included in installation wizards, or the redundant drivers bundled with some HP printer series. They may not be libraries, but if you look at them and do a little comparison, they sort of fit the "library" description.
This is how my pondering started.
[quote]Is this one of the main reasons why people tend to reinvent the wheel in programming, because they don't want to bloat the program with X, Y, and Z all integrated into the project, and would otherwise have to remember what each of X, Y, and Z does?[/quote]
Not in my experience.
The biggest, IMO, is failure to understand the costs involved. Initially people think, "Why should I spend $X on a full-blown library when I can do something myself for free?" What they fail to realize is that doing it yourself costs hours in development, hours in bug fixing, hours in troubleshooting, and so on. Maybe it really is cost effective to do it yourself. Maybe not. Either way, that is the biggest reason I've seen for not using libraries.
Another reason is "not invented here" syndrome. Many people are not comfortable learning something new, and would rather take the fun and exciting route of writing their own than the boring route of learning someone else's work.
Finally, another reason is laziness. It is 'easier', or at least shorter in the near term, to just write your own thing than it is to invest the effort of doing it the right way. For example, it is often easier to implement a search function in your own code rather than spending the effort of making a predicate function and calling the built-in search. In the long term of maintenance and bug fixing it is not cost effective, but there are blinders when it comes to long-term development costs.
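The search example above can be sketched in Java. This is a minimal illustration of the trade-off, not anyone's actual code; it contrasts a hand-rolled loop with the built-in stream search that takes a predicate:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class SearchDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("alpha", "beta", "gamma");

        // Hand-rolled search: extra code you now have to test and maintain.
        String found = null;
        for (String n : names) {
            if (n.startsWith("b")) {
                found = n;
                break;
            }
        }

        // Built-in search with a predicate: the iteration logic is
        // already written and debugged for you.
        Optional<String> result = names.stream()
                .filter(n -> n.startsWith("b"))
                .findFirst();

        System.out.println(found);        // beta
        System.out.println(result.get()); // beta
    }
}
```

The loop looks harmless here, but every hand-rolled copy of it is one more place a bug can hide.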
Over the decades I've grown to the point where I have no problem using other people's libraries. I'd much rather pick up their already written, already debugged, already supported library rather than hack through the same problems that others have faced.
As for what defines bloat, my definition of that has also changed over the years. If it does something useful, it isn't bloat. To some people a 25 MB networking library seems incredibly bloated. In fact, after digging I can see that such a library has incredible logging and debugging functionality, has code to handle many varieties of NAT punchthrough and uPNP, has simultaneous unreliable and reliable UDP, and many other very useful features.
Often it turns out 'bloat' isn't bloat at all. Instead it is mature software that handles edge cases.
The only thing you should ask yourself when deciding whether you should choose a library is: "Does this help me save time so I can focus on my goals?" That is the point of libraries. They should help you focus.
When developing as a hobby, you should ask yourself whether you want to focus (on the game) or just wander and see where the path leads you (programming for knowledge, not for results). When working as a professional, the answer is a no-brainer: whatever helps with development time is good.
The important thing to check when you don't know whether you want to use a particular library is its documentation. Incomplete documentation screams "under development / abandoned". Also LOOK AT THE LICENSE! Many useful libraries have been lost to the GNU GPL curse.
[quote]Would there be a point in the development phase where having too many libraries can actually worsen development?[/quote]
The only issue is if integrating the library is more trouble than it's worth. Most libraries are a piece of cake: include some DLLs, use the code, done. Some (I'm looking at you, log4net) are invasive, and their configuration/fragility take a lot more time and effort than just slapping something together.
But in general, no; there's no sort of increasing complexity or diminishing returns as you add libraries.
Admittedly, I often reinvent the wheel for quite a few tasks, such as a maths library, an image loader, socket code, and a model and animation library.
I have a few reasons for this...
1) User convenience: I hate it on Linux / FreeBSD when I have to drag in thousands of dependencies to install a single (often simple) piece of software. For example, the DevIL and SDL image loaders support a large number of image formats even though I only use PNG. So they will drag in libjpeg, libpng, zlib, libxpm, libgif, etc.
2) Maintainability: If a library changes in an incompatible way, on Unix, it is very hard to maintain your own specific version of the library in each package repository.
3) Profiling / bugs / leaks: If I find a bug (for example, a memory leak found using valgrind) and I fix it, then even if I can get the change to the upstream vendor, it will be a matter of months before the fix appears in package repositories.
One of my main reasons against writing code for Java and C# is the user needs to install that whole platform. This is such a pain for the end users (especially on open platforms where Java still isn't 100% free).
However, I also use GLUT 99% of the time for pretty much any 3D software, so I am not against utilizing the code of others. (And yes, valgrind throws a fit when I allow an exception to propagate up through glutMainLoop() ;)
When it comes down to it however... you still need to write the damn game so any time saved on tech is a bonus ;)
[quote]I tend to see that "reinventing wheels" and "implementing libraries" are like optimizations.[/quote]
What are you optimizing for? Development time? Performance? Size?
If optimizing for development time, prefer to use a solid, well-tested library if one is available. That is work that is already done.
If optimizing for performance, prefer to use a solid, well-tested library if one is available. Again, that is work that is already done.
If optimizing for size... maybe there is a case for rolling your own. It's a thin case, though; size is much less of an issue these days, and even then a good, robust library might end up more space-efficient than whatever you cobble together.
[quote]Like SuperVGA mentioned, reinventing the wheel gives the wheel's creator a better understanding of what goes on behind the scenes, and finer control over the logical flow of the program. To me, it has always been a slog reinventing the wheel so many times, even though I get to see what's behind it all and how it behaves.[/quote]
Reinventing the wheel is great for gaining understanding. At this point, it becomes a decision as to why you are working on the project. If you are working on it purely to learn, then great. Re-invent away. If you are working on it for release, and it is part of your livelihood, then why are you wasting time?
[quote]I'm thinking that if you were to continuously add more "libraries" to a program, its size will "bloat" up.
In defining "bloat", I'm assuming that if a program embeds a number of libraries, then to a casual techie user it may seem full of "bloat": for example, the VS redistributables included in installation wizards, or the redundant drivers bundled with some HP printer series. They may not be libraries, but if you look at them and do a little comparison, they sort of fit the "library" description.
This is how my pondering started.[/quote]
So, just because some libraries are used, it's "bloated"? That's like saying a house is bloated because it includes some walls and floors. If those libraries are used (i.e., not just useless cruft littering the project), then by definition they can't be bloat. The inclusion of redistributables in the installer is a convenience. If the user already has them installed, then yeah, it's bloat. But if they don't, then they'll have to get them from somewhere. What's better for retaining your customer: giving them what they need in one neat little package, or telling them to go hunt down what they need on a website somewhere? At that point, you tend to lose a lot of customers who see your product as too inconvenient to use.
Why would a casual, techie user of your program consider it bloated because you use libraries? That just doesn't make sense to me. A casual techie user is probably going to understand that the program includes what it needs to function. Hell, you might consider me a casual techie user and I don't give two squats about what a particular program is using library-wise. I only care about whether or not the program works as advertised, and whether or not the program was released in a timely fashion.
I'm a big fan of eliminating useless cruft, but it seems like you have too broad a definition of bloat, one that encroaches upon necessary components.
You should use exactly as many libraries as you need to get the functionality you require. If that works out to 1 library, fantastic. If that works out to 15, awesome. Because all that means is that you spent less time coding the support framework for your game, and more time coding your actual game.
OK, this is a big topic, so I'm going to outline a few things in fairly general terms.
There are two main ways you can use a library: you can link to it statically or link to it dynamically.
When you link to it statically, only the code in the library that you use is pulled into your program. It doesn't matter if the library is 400 MB in size and contains 5 billion different routines for doing different things; if you only need a few KB in one routine, that's all your program gets.
When you link to it dynamically, no code from the library gets pulled into your program. The code runs directly from the library.
Dynamic linking also enables something else to happen. If multiple programs need the same piece of functionality, they can share it. So, as an example, if you have two programs on your machine that need to do printing, by using a dynamic library for printing, both of them get to share the same printing functionality.
That offers a mixture of advantages and disadvantages. The major advantage is that printing functionality is now consistent between both programs. End-users will thank you for that. Also, if printing functionality needs to be upgraded, you just need to upgrade the shared library and programs using it will automatically get the upgraded functionality. However, it also opens the possibility of a breaking change affecting all programs, and of versioning conflicts.
There's no absolute right or wrong answer to that one; it's a balancing act. Moving on.
Libraries may generally be seen to have already been tested. To have been debugged. You may assume that the code in them is fairly solid (provided you use it correctly). By contrast, if you write a whole chunk of (say) printing code yourself, you're going to be spending a lot of time testing and fixing bugs that have already been fixed in a library. Is that a productive use of your time? Only you can answer that.
"Bloat", as you seem to define it in your OP, is a myth. If code is being used for a purpose it is - by definition - not "bloat". A DLL or a JAR is not "bloat"; if it's never used, if it's never loaded, it won't be loaded into memory and it won't consume any resources, aside from disk storage. And disk storage is cheap and plentiful; programmer time, on the other hand, isn't.
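The point that unused code sitting on disk costs nothing at runtime can be seen in Java itself: the JVM does not load a class until it is first actively used. A minimal sketch (the nested class stands in for a hypothetical library class on the classpath; its static initializer prints a line so you can observe when loading happens):

```java
public class LazyLoadDemo {
    // Stands in for a class from some library JAR on the classpath.
    static class HeavyLibraryClass {
        static { System.out.println("HeavyLibraryClass loaded"); }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        // The class is on the classpath, but the JVM has not loaded it yet,
        // so this prints before any "loaded" message appears.
        System.out.println("before first use");

        // First active use triggers class loading: the static initializer
        // runs, then the call returns.
        System.out.println(HeavyLibraryClass.answer());
    }
}
```

If `HeavyLibraryClass` were never referenced, its bytecode would never be loaded into memory at all, which is exactly why an unused JAR on disk is a storage cost rather than a runtime one.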
There are many reasons why people engage in wheel reinvention, some practical, some psychological. Maybe they believe in myths? Maybe they have a bad infestation of the "not invented here"s? Maybe they can't find a suitable library for what they want to do? Maybe their needs are so specialized that no such library really exists?