Eelco

Members
  • Content count

    8326
  • Joined

  • Last visited

Community Reputation

301 Neutral

About Eelco

  • Rank
    Contributor
  1. Yeah, I very much share this sentiment. Even though specialization is great for the economy, I can't stand not knowing. I may not be the best programmer, the best mathematician, the best physicist, or the best biologist, but my employability, knowing a whole lot about all of these subjects, is just fine, and I get to do actually interesting work. Aspiring to be a generalist has worked out for me. Lying here in my self-constructed bed, I'd like to think I'd do relatively well if society did collapse.
  2. I haven't been geeking out on SSD stats for a while, so I don't want to make a specific recommendation, but as a general one: don't geek out too much about SSD stats. The performance differences between current-gen SSDs are not that interesting in practice. Aim for reliability and a good price per GB.
  3.   Seconded
  4. Great talk. Reminds me of why I don't use C++ for anything but extension modules. Speaking of which, it's been over ten years since I started using D. Haven't touched it in a while, but a) I'm getting old, and b) it's a shame that it hasn't caught on more. That being said, pyd is looking pretty cool; it might just be the entry point I need to start using D more again.
  5. Good old QBASIC, around the age of 9 or 10. I didn't have any books, instructors, or internetz. Just me and the QBASIC help, of which I grasped neither the CS jargon nor the frigging language it was written in. But QBASIC made it quite convenient to copy and execute lines from the help, and by combing through the help file and example programs countless times, I reverse-engineered my first CS and English language skills. Not the most efficient way to study, but it was a blast. Damn you, grown-up life, with all your responsibilities and so little time.
  6. This. Most programming jobs would make me want to kill myself, and I find most game programming pretty tedious as well. But I have plenty of hobby projects that I like to work on whenever I have a good quiet stretch, and at work I'm also programming at least 50% of the time, and loving it. Saying you like programming is a little like saying you like making things. Make what? Shoes? Bread? Quite different professions. Personally, I get my fix where the hardcore math meets the silicon, but your mileage may vary.
  7. From my retina MacBook running Windows 7, I have similar experiences to those posted above.
     One, it is really epic for programming. I never considered myself a sucker for high resolutions, but I am converted: I can no longer stand looking at text and being able to see pixels. The amount of stuff you can fit on such a screen without feeling claustrophobic, straining your eyes, or hurting your neck really is amazing.
     Two, Windows does scaling very well. The exception is Chrome, which stubbornly keeps screwing up its text rendering, but IE11 really is a great browser as far as I am concerned, and not only when it comes to the quality of its rendering/scaling. MSVS, Office, IE, and all MS flagship products scale without the slightest issue. Old Win32-style apps or cringe-tastic Java UIs may not fare so well, though. I usually disable scaling for those apps, increase the font size internal to the application where applicable (text boxes do scale properly), and put up with the tiny menus.
  8. [quote]
     [quote] You seem to be implying there is a link between how parallelizable a piece of code is to execute and how parallelizable it is to compile; but there is none that I know of. Afaik, MSVS makes great use of additional cores during compilation (although you may have to enable it in the options for C++). Compilation involves a lot of steps which are trivially parallelized. [/quote]
     I wouldn't jump all the way to trivially parallelizable. Yes, many build steps and modules can run in parallel, but the way you structure your code has a great deal to do with it, because you, as a programmer, can create serial dependencies between code. Well-written code avoids this (not just for build times, but for general modularity as well), and I concede fully that more cores will build this kind of code more quickly, until you reach other system bottlenecks. But what I contest is this: either your code is *not* well written in this way, in which case you will gain more by restructuring your code, or your code is written this way, and it probably already builds acceptably fast on a machine with 4 cores. Faster by any margin is always "better", of course, but unless you're rebuilding very large code bases completely, with relative frequency, it's unlikely that having or scaling to 8+ cores is going to be of any great benefit to you -- unless, perhaps, you're the type who runs *nix of some flavor and builds all your software from source. Otherwise, most of us don't build in sufficient volume.
     Keep in mind that going from a 4-core Intel CPU to an 8-core model running at the same clock speed will cost you at least 3x as much for the CPU alone. A supporting motherboard will likewise come at a premium. If you go Xeon, you'll also need ECC RAM, another premium, and a still more expensive motherboard.
     For compiling code, hyperthreading does quite well on its own -- a 20-30 percent throughput increase in benchmarks of popular open source projects -- which is why the 50% price premium is well worthwhile when going from a 4-core i5 to a 4-core/8-thread i7 of equal speed. You can see some analysis here: http://blog.stuffedcow.net/2011/08/hyperthreading-performance/
     But the structure of C++ itself is a problem, at least for now. This is part of the reason we have idioms like pImpl/Firewall, and all of the reason why they speed up build times. One need look no further than Go, D, or even C# to see that those languages compile in seconds what would take C++ possibly minutes, with no special effort on the part of the programmer -- a speed that even fast C++ compilers like Clang can't match, even with source that does make a special effort to increase build throughput. If they manage to get Modules into the standard one of these days, the picture might change, but for now it's a trait of the language that can only be mitigated. [/quote]
     I agree; some code takes longer to compile than others. But if it takes really long (as per the OP), more cores are better, I'd say. That said, a single source file full of template magic can already be a massive pain in the ass, and there is probably no parallelism at the level of a single source file (maybe there is; I've never tested it).
     My rules for keeping C++ programming fun: if your code contains a main function, you are abusing C++; if your activities can be described as software design rather than algorithm implementation, you are abusing C++. Anything else takes too long to compile to meet my productivity standards. REPL FTW.
But what about jobs that require the maintenance of large legacy C code bases, you may ask? Your mileage may vary, but my solution is to turn them down categorically.
  9. You seem to be implying there is a link between how parallelizable a piece of code is to execute and how parallelizable it is to compile; but there is none that I know of. Afaik, MSVS makes great use of additional cores during compilation (although you may have to enable it in the options for C++). Compilation involves a lot of steps which are trivially parallelized (a toy sketch of this appears after this list).
  10. It does not sound like that at all. Even at my splurgiest of moments I have never been able to justify going from a high-end single-socket Intel to a dual-socket setup. You may think you see a good Xeon deal out there now and again, but if you look closely you will always find that it is a heavily castrated and non-overclockable model, and if you factor in the price of two of them, plus a motherboard three times as expensive, you really aren't going to feel too great about having paid extra to halve your single-thread performance.
  11. If you are using .NET then you are unlikely to have a need for raw SQL queries.
     I'd say programming is changing quite a bit, especially in my field of scientific computing. Statically compiled libraries are quickly losing ground to specialized JIT compilers for various programming domains (lots of activity there in the Python ecosystem; see the JIT sketch after this list). Language interop is becoming the norm, even for small projects.
     I once believed in 'one language to rule them all', but now I think seamless interop is much more important for the future. In some contexts, functional semantics are great; in others, nothing but a pain in the ass. In some contexts, dynamic language features are a ridiculous slowdown; in others, a very useful feature. Rather than cramming all of that into one language, which is never going to happen elegantly, I'd say it's all about the interop. Python and .NET have the right idea here.
  12. As far as 'it just works' is concerned (not something cloud computing is known for), by far the best thing out there that I know of is PiCloud (a minimal usage sketch appears after this list). But I'm not an expert; I just dabble.
  13. Sounds familiar to me. I've also always wanted to do everything. And then I feel bad when someone younger than me but with more focus beats me at something. I would say this has changed with age, though. I'm in my late 20s now, and I don't have nearly as many epiphanies or sudden obsessions as I did in my early 20s. Which in a way makes life more boring, so be careful what you wish for. But I do find I have more staying power now, and more focus to actually finish things. If you are not spending your free time actually building your company, then maybe the thing you are doing instead is the thing you actually want to be doing? Don't fret too much. You are still young.
  14. It does not appear conceptually different from the stuff functional language guys have been talking about for years. However, if MS throws its weight behind it and actually delivers on these promises, that'd be awesome.
  15. [quote name='Shippou' timestamp='1350841055' post='4992493'] [quote] Python would be more than capable of creating a game similar to Civilization or Minecraft. [/quote] It has been shown that Python has a difficult time rendering voxel worlds (Minecraft). It can be done; however, you will suffer incredible frame rate problems. [/quote] One of Python's great strengths is its interoperability with C and C libraries. You write it in Python, and if it isn't fast enough and you have the skill to take it to a lower level, that option is always wide open to you (see the ctypes sketch below). As opposed to, say, Java, which makes this a royal pain in the ass.
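
A toy sketch for posts 8 and 9 of why compilation parallelizes so easily: translation units have no compile-time dependencies on each other, so a build driver can farm them out to every core and only the final link is serial. The compiler name (g++) and the source file list are placeholders I made up for illustration; MSVC's /MP option and make -j do the real-world equivalent.

[code]
# Toy parallel build driver: each .cpp compiles independently in a process
# pool; only the link step at the end is serial. "g++" and the file names
# are hypothetical placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor

CXX = "g++"                                     # assumed compiler on PATH
SOURCES = ["a.cpp", "b.cpp", "c.cpp", "d.cpp"]  # hypothetical translation units

def compile_unit(src):
    """Compile one translation unit to an object file."""
    obj = src.rsplit(".", 1)[0] + ".o"
    subprocess.run([CXX, "-c", src, "-o", obj], check=True)
    return obj

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # one worker per core by default
        objects = list(pool.map(compile_unit, SOURCES))
    subprocess.run([CXX] + objects + ["-o", "app"], check=True)  # serial link
[/code]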
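For post 11, a minimal sketch of what a specialized JIT in the Python ecosystem looks like in practice. Numba is my choice of example here (the post doesn't name one), and the function is made up purely to show the pattern.

[code]
# Minimal Numba sketch: the decorator compiles the loop to machine code on
# first call, so subsequent calls run at roughly C speed instead of
# interpreter speed.
import numpy as np
from numba import jit

@jit(nopython=True)
def sum_of_squares(x):
    total = 0.0
    for i in range(x.shape[0]):
        total += x[i] * x[i]
    return total

data = np.random.rand(1_000_000)
print(sum_of_squares(data))   # first call pays the compilation cost; the rest are fast
[/code]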
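For post 12, a sketch of how PiCloud was typically used from Python. The service has since shut down, and the API shown here (cloud.call / cloud.result) is written from memory, so treat the exact names as assumptions rather than a reference.

[code]
# PiCloud sketch (from memory, service now defunct): ship a plain Python
# function to remote workers and fetch the result by job id.
import cloud   # PiCloud's client library

def simulate(n):
    return sum(i * i for i in range(n))

jid = cloud.call(simulate, 10_000_000)  # run simulate(10_000_000) on PiCloud's workers
print(cloud.result(jid))                # block until the job finishes
[/code]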
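For post 15, a minimal sketch of the "take it to a lower level" escape hatch using the standard library's ctypes. It calls into the C math library so the example stays self-contained; a real game would load its own compiled .so/.dll instead.

[code]
# Call a C function directly from Python via ctypes (stdlib, no extension
# module needed). Here we load the C runtime's math routines and call sqrt().
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m") or "msvcrt")  # libm on *nix, msvcrt on Windows
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))   # 1.4142135623730951
[/code]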