
lawnjelly

Members
  • Content count: 205
  • Joined
  • Last visited
  • Days Won: 1

lawnjelly last won the day on July 25
lawnjelly had the most liked content!

Community Reputation: 1243 Excellent

About lawnjelly
  • Rank: Member

Personal Information
  • Location: England
  • Interests: |programmer|
  1. Because if I'm going to install a new OS every few years (not forgetting all the other software), I'll usually try to do it on a new PC. Time is money and all that. That way I keep the old PC functional for testing etc., and it *usually* coincides with it being time to update the hardware. But you are correct, sometimes it can be worth just replacing the hard drive.
  2. Yes, certainly Win 7 was good; I still have it on a couple of old PCs, and I am certainly not averse to sticking with something that works (see this post). I moved to Win 8 with my previous PC partly because, at the time, the hardware didn't appear to support Win 7 (UEFI and other driver issues, I seem to recall; these may be available now). Another example: my new PC is Intel Kaby Lake, which I have read may not be fully supported on versions below Win 10. You also have the considerable problem of software supporting your particular version of Windows. Some will not install on older versions of Windows (or on newer versions; Maya, for example). I also ran into a lot of 3D / video software that required the newest GPU support. And when you get a PC with the newest GPU etc., it's hard to guarantee that there will be the right drivers for older versions of the OS. So yup, sure, if your hardware / drivers are compatible and you are sure the software you are using is okay with it. There is still always the risk that an update to your favourite software that you *must have* no longer supports your OS.
  3. For an introduction to my reasons for migrating from Windows to Linux, see my previous blog post. Here I will try to stick to my experience as a Linux beginner, and hopefully inspire other developers to try it out.

Installing Linux

The first stage of migrating to Linux is, of course, to either install it on your PC or try a 'live' version off e.g. a USB stick (or even try it in a virtual machine). I can't say too much here, because I got my new PC with Linux Mint pre-installed, and there should be plenty of guides on Google. I went for Mint because I had briefly tried Ubuntu a few years ago, and I liked the look of Mint and fancied a change. I knew it was based on Debian, like Ubuntu, so there should be lots of software.

My first stage after unplugging my Windows machine was just to take baby steps to familiarize myself with it, without running away in fright. After plugging in my network cable, I was away with the Firefox browser. But after a few minutes I decided to install Chrome, as I am a fan and used that on Windows (going with familiar!! safe space!!). This entailed installing software.

Installing Software

On Windows, the process of installing software usually involves downloading an installer package from the internet, running it, and hoping it likes your Windows version / hardware / dependencies. You can do this on Linux too (particularly for cutting edge versions of software), but there is also a far easier way, via a 'package manager'. The package manager is actually the precursor to all the various 'app stores' that have become popular on Android and iOS, but the idea is simple: you have a searchable database of lots of software you can install, usually with a single click. It also has the magic advantage of a very good system for automatically working out the dependencies required by any software and installing those for you in the background, and for finding conflicts (on the rare occasions I have had conflicts, it has been because I was trying to do something nonsensical!). I don't know whether it is my new machine or Linux, but the process of installation (and removal) is orders of magnitude faster than Windows; it honestly only takes a couple of seconds for most installations. Anyway, suffice to say I was very quickly running Chrome, installing my favourite plugins, and visiting my favourite websites.

Accessing Windows Hard Disks

The next stage was to get some of my data across from my old Windows PC. This is where things get slightly interesting. Predictably enough, Linux uses a different filesystem to Windows, 'ext4' on my machine, whereas my Windows external hard disk was formatted as NTFS. As is Microsoft's way (to discourage competitors, no doubt), NTFS is proprietary and not publicly documented. The clever Linux devs have presumably reverse engineered much of NTFS, because you can mount and read from an NTFS disk. However, I am erring on the side of caution and not writing to NTFS for now, because from previous experience of exFAT on Android, it is possible that an incorrect write can bork the file system, and hence lose a LOT of work. My solution for now was to copy my working source code etc. from the NTFS hard disk to my ext4 Linux SSD. Long term, I intend to convert all my NTFS external hard drives to ext4. It would also presumably be useful if Windows could read from ext4 drives, but I don't know how easy this is as yet. Great! I had some data on my new machine.
I tried some movies and they worked great in the built-in player, and in VLC (which I installed). Image files loaded fine in the built-in viewer and in the GIMP, which is sort of the Linux Photoshop. I've used the GIMP a little on Windows, and am hoping it can take over a lot of the Photoshop duties.

Blender

For 3D models I've been using Blender on Windows, and as luck would have it, this open source software is available and runs very nicely on Linux. It was installed and loading my game models in no time. For development, this just left an IDE and compiler for C++ (my language of choice). Linux has a very handy standard compiler which is easy to install (g++ / gcc). This is where I might mention 'the terminal'.

The Terminal

Although the name Windows has become synonymous with the Windows GUI, it is important to realise that an operating system doesn't have to be irrevocably intertwined with a GUI system. In Linux, the operating system can use several different GUIs, depending which flavour you prefer. Or none at all, if for example you are running a server. The way to talk to the operating system below the level of the GUI is a command line interface called 'the terminal'. There used to be one commonly used in Windows too, the DOS prompt, but it is rarely used now. In contrast, on Linux the terminal is still very useful for a number of operations. Unfortunately it can be a little scary for beginners, but this is a little unjustified.

To get the terminal up I just press Alt-T. You can list what is in your current directory by typing 'ls'. You can navigate up a directory with 'cd ..', and into a directory with 'cd MyFolder'. It will also auto-complete the folder / filename if you press tab. From the terminal you can do a lot of the stuff you would also do from the graphical file manager (the excellent 'nemo' is built into Linux Mint), such as copying, deleting and moving files. You can also manually tell it to install packages, just as the package manager would, with the command 'apt-get'. To install software you need admin privileges (this is handy, as it prevents malware from doing anything naughty without you typing in the admin password). To get admin you type 'sudo' before the command:

sudo apt-get install build-essential

This tells it to run as admin (sudo) and use apt-get to install (or remove) the package called 'build-essential', which contains the compiler and other build tools.

IDE

Unless you fancy yourself as a hardcore compile-from-the-terminal-from-the-get-go type of guy, you will probably also want an IDE for development. As I use C++, there are several to choose from, such as Eclipse, Code::Blocks, KDevelop, CodeLite etc. I went for Qt Creator, as I have used it on Windows (again, familiarity!! baby steps!!). Once Qt Creator was installed, it was fairly easy to tell it to create a hello world app and test it. It worked great! This is where things got slightly more interesting. My current project is an Android game. I had been maintaining both a PC build on Windows and the Android build, with the platform specific stuff isolated into things like creating a window, setting up OpenGL, input, and low level sound.

OpenGL ES

Where things got slightly confusing is that because I am developing for Android, I was using OpenGL ES 2.0 rather than the desktop version of OpenGL. On Windows I had been using the ARM Mali OpenGL ES Emulator, which emulates OpenGL ES by outputting a bunch of normal OpenGL calls for each ES call.
I was anticipating having to use something similar on Linux, so I attempted to install the Mali emulator there, however I had little joy; I was getting conflicts with existing OpenGL libraries used by SDL (which I intended to use for the platform specific stuff). Finally, after investigation, I realised that my assumptions were wrong: Linux directly supports OpenGL ES AS WELL as desktop OpenGL, through the open source Mesa drivers. I eventually got a 'hello world' OpenGL ES program working, and was convinced I now had the necessary libraries to start work.

64 Bit Conversion

The next stumbling block was a biggie. For historical reasons, all my libraries and game code were 32 bit. I had been developing with the idea that a lot of Android devices were 32 bit, and I was hoping the 64 bit devices would run the 32 bit code (hadn't really tested this out lol). So I had previously been compiling a 32 bit Windows version and a 32 bit Android version, and it soon became clear that my Linux setup was compiling to 64 bit by default. No problem, I thought, I should be able to cross compile. With some quick research I managed to get 32 bit versions of the libraries, however I had no joy with a 32 bit version of OpenGL. It refused to install, and being a Linux beginner I was stuck. I did a little research, but found no simple path, and realised that maybe it was time to convert my code to 64 bit. Or rather, to have my code run as both 32 bit and 64 bit. I had been (rather unjustifiably) dreading this, as I have a lot of library code written over quite a few years. As it happened, aside from some changes to my template library, the biggest problem was the use of 32 bit 'fixup' pointers in flat binary file formats. I have been using this technique for a long time now, as it greatly speeds up file loading and also helps prevent memory fragmentation.

Fixup Pointers

Essentially, the idea with a 'fixup' pointer is that you store in the file an 'offset' from a fixed point in the file (often the start) to a resource, because there is no point in saving a real pointer to a file; it points to a (changeable) memory location. You can then load the entire binary file as one big block, and on loading 'fix up' the offset into a real pointer, by adding the offset to the memory address of the start of the file. (There is a sketch of this scheme at the end of this post.) This works great when the offsets are 32 bit and your pointers are 32 bit. But when you move to 64 bit, your offsets are fine (as long as the file is smaller than 4gb), but there is not enough room to store a 64 bit pointer. So you have a choice: you can either do some pointer arithmetic on the fly, or change your file formats to use 64 bit offsets / pointers. After a trial with the first method, I eventually settled on going with 64 bit in the file, even if it uses a little more space.

Of course, the disadvantage is that I have needed to re-export all my assets. So at the same time as converting my libraries and the game code to 64 bit, I also needed to convert my exporters and re-export all the assets (models, sprites, sound etc.). This has been a big, frustrating job, particularly because you are coding 'blind'. Normally when you program, you change a little bit, recompile, run and test. But with such a conversion, I had to convert *everything* before I could test any of it.

Success!

It has been demoralizing doing the conversion, I won't lie. But I have been so impressed with the operating system that I was determined to make it work.
And finally, bit by bit, I got the exporters working, re-exported everything, then got the game going and debugged it. I got some crazy graphical errors, errors in the shaders that the OpenGL ES implementation didn't like (that's a whole other story!), but finally got it displaying the graphics, then did an SDL version of the sound this afternoon, which is working great. One thing I will say is I should have been using SDL before; it is really simple, and it takes all the eccentricity out of the setup code on different platforms (Windows in particular is very messy). So to summarize, I now have (nearly) everything working, compiling and running on Linux. I still have to install Android Studio and try debugging an Android hardware device through USB, but I'm very hopeful that will work. Even if it doesn't, it's not a show stopper, as I can always use a second PC. I am gradually becoming more familiar with Linux every day, and am even feeling I might get tempted to learn Qt so I can do some nice 'native' looking apps.
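As a concrete illustration of the fixup scheme described above, here is a minimal sketch of loading a flat binary file in one block and fixing up a stored 64 bit offset into a real pointer. The file format, struct and function names are invented for the example; the real code will differ.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical flat file format. The 64 bit field holds an offset
// from the start of the file on disk, and a real pointer after fixup.
struct ModelHeader
{
    uint32_t numVerts;
    uint32_t pad;          // keep the layout 8 byte aligned on both architectures
    uint64_t vertsOffset;  // offset on disk, pointer after fixup
};

struct Vertex { float x, y, z; };

// Load the whole file as one big block and fix up the offset in place.
ModelHeader* LoadModel(const char* filename, std::vector<uint8_t>& storage)
{
    FILE* f = fopen(filename, "rb");
    if (!f) return nullptr;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    storage.resize(static_cast<size_t>(size));
    size_t numRead = fread(storage.data(), 1, storage.size(), f);
    fclose(f);
    if (numRead != storage.size()) return nullptr;

    ModelHeader* header = reinterpret_cast<ModelHeader*>(storage.data());

    // The fixup: offset -> absolute address. uintptr_t always fits in
    // the 64 bit field, so this works on both 32 and 64 bit builds.
    header->vertsOffset = static_cast<uint64_t>(
        reinterpret_cast<uintptr_t>(storage.data() + header->vertsOffset));
    return header;
}

// After fixup, the field is used as an ordinary pointer.
const Vertex* GetVerts(const ModelHeader* header)
{
    return reinterpret_cast<const Vertex*>(
        static_cast<uintptr_t>(header->vertsOffset));
}
```

Storing the offset in a uint64_t field keeps the struct layout, and hence the file format, identical on 32 and 64 bit builds; on a 32 bit build the fixed-up pointer simply occupies the low half of the field.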
  4. I've been developing on Microsoft Windows for a long time, since around 1992/93, when I got my first PC. Various other platforms before that, but I've pretty much stayed with it, not because it is a technical marvel (it's not), but based on the idea that it was the most popular OS, so it should be easy to get programs running on other people's machines. Coupled with this (and no doubt because of this), there is also loads of good software for development, which had made it the 'default' choice for me.

Don't get me wrong, I have certainly admired certain aspects of the various Apple OSes over the years (especially when they embraced BSD), but I have been put off by having to relearn the 'backwards' way of doing everything, and, rightly or wrongly, by the suspicion of a 'control freak' walled garden approach, where you are not in control of the computer, Apple are. And don't get me started on my experiences of having to use iTunes to do something as simple as transfer a file over USB from a Mac to an i-something. And the obvious bias towards monetizing every aspect of the experience.

In contrast, I sometimes feel that Windows is *overly* open, exposing too much to developers, allowing them to too easily 'hijack' your PC and take over its resources for their own purposes at startup, as well as offering a series of insecure 'technologies' that seem more appropriate for malware authors than legit developers. It seems to be designed so that the OS runs slower and slower the more apps you install, until you give up and re-install Windows. Along the same lines comes the other unpleasant thing I found with Windows: a lot of the software relies on some other flavour of the month technology being installed as a dependency. Want to use a text editor? No, first you need to spend half a day installing the latest huge bloated .NET runtime, only to find it probably breaks some other app. And for something that is meant to be backward compatible, certain software companies (particularly Microsoft themselves) seem to go above and beyond the call of duty in making their software incompatible with anything but the latest builds of the OS.

And so we come to my personal last straw. I spent some time last year evaluating different IDEs, preparing projects, converting code etc., until I finally settled on using Visual Studio 2017, which was in the final release candidate stages at the time. The first version worked great until it expired. Then I tried the updater, which failed miserably at installing the next version, so I had to manually tweak things until it installed. Finally, I came back from holiday 3 weeks ago to find that the 'final final' release candidate had expired, and I was required to install the release version. Unfortunately, I found the installer refused to work on my system. Somewhere between the RC and the release, they had managed to screw up the installer (of all things??). So I was left unable to do any work until I had it resolved. I spent several days backing up my PC and trying to update it, but even with the Windows updates, no joy with the installer. I resigned myself to a choice of either buying a new hard disk and installing Windows 10, or buying a new PC. Given I didn't want to risk losing my old work, I went for a new PC, even though my old one was perfectly adequate. £650 or so later, I had ordered a fanless Kaby Lake system. During the order I had the choice of OS to put on it.
I had originally planned to put Windows on it, but thought what the hell, I should have another play with Linux. One of the options was Linux Mint, and I could be sure the hardware would all work, so it should be easy. While I waited a few days for the build, I did some research into Windows 10. Unfortunately I became more and more disillusioned the more I read. While I'm sure that technically the OS has got better over the years, I've heard only disturbing things (from The Register etc.) about the roadmap Microsoft is taking with Windows.

One of the things I hate about Windows is the need for updates, and the way you are left to pray during the process that they don't break some other bit of software. So usually I turn automatic updates off, and carefully manually select any that are really required. Not so with Windows 10! As (allegedly) the 'last' version of Windows, it will now automatically update itself, forever, whether you like it or not. Nice to know that if you are a business, you have the very real possibility of waking up one morning to find Microsoft have borked your work and there's absolutely nothing you can do about it. This is clearly a showstopper for many people: for instance, having a meeting to show clients the next day and finding your PC has been remotely broken by some well meaning folks who, I'm sure, have your best interests at heart and not theirs.

But it doesn't end there. No, now the operating system is designed to take your personal info, searches, work etc. and send it (without your permission) to the Microsoft central command mothership. Simple, you turn it off, you would think. Except that, apparently, you can't turn it off. So you think you will block the MS servers in your firewall etc. No dice: the OS apparently ignores these rules, because slurping your private data is too important. And even if you think you've worked a way round this, you only have to leave the PC till the next morning for the next AUTOMATIC update to circumvent your attempt to circumvent the data slurping. Honestly, there must be laws against this kind of thing.

All this made me realise I had to seriously think about moving off Windows as a development platform in the long term, and that time may just be NOW! Several of my old dev colleagues had by now moved to other platforms, notably to Apple. I admit I have an irrational phobia of all Apple products, so the only choice for me was to investigate Linux. I only had some *very* basic grounding in Unix (having done some Pascal on Unix machines at uni), and having played with Linux on my Asus Eee netbook many moons ago. So my experiences, in the next blog post, should be useful for anyone who is an absolute beginner like me. Suffice to say, it has been a very difficult slog learning the basics and converting my code, but I have *finally* got my libraries and game code working, and I am now a convert. The whole Linux experience seems light years ahead of Windows. I may still end up having to install Windows in a VirtualBox machine, but I haven't had the need as yet. The next blog post will cover my migration experience...
  5. Well, to suggest anything we need to know more: how old are you? Who owns the PC? Why are they taking it away from you? Is it because of the cost of the electricity to run it / the internet? The good news is that many computer type bods prefer many of their interactions online rather than in social situations, so you are not alone, and computers / the internet provide a massive opportunity for such people which was simply not available in the past. If you want to, you can earn your living from your PC, meet your girlfriend, play games, read, get your education, entertainment etc.
  6. It should absolutely be possible to learn development on such a machine; consider that many of the games you mention will have been developed on lower spec machines. As the others point out, particularly on Windows, you may have problems getting the most recent versions of development software to install; they often flat out refuse if they consider your OS 'too old' (Visual Studio *cough*) or your graphics card doesn't have the latest functionality, and getting a legit copy of old commercial development software may be tricky. As an alternative that no one else has mentioned, can I suggest testing a lightweight version of Linux on your PC, perhaps with a live USB stick. It may well run very well even on your old PC, give you an up to date operating system, and run much of the latest development software for the platform. Any experience you build up here should be directly transferable when you get a more powerful PC, as well as your source code, although if you are going to write directly to e.g. OpenGL, you would be learning an old version.
  7. Ah, some very nice prebuilt solutions there from Hodgman, many thanks. I might have to steal some of those ideas; using the offset from the "fake pointer" looks like it works well!
  8. Yes I admit, given the need to keep support for 32 bit, I'm inclined towards option 2. As you say, none of the offsets need to be more than 32 bit, as they are relative addresses within the file. Rather than keeping the pointers as offsets in the 32 bit version, maybe I can make the fixup routine a 'no op' in the 64 bit version, and access the pointers through an accessor function that simply returns the pointer in the 32 bit version, and does the offset + start calculation in the 64 bit version (see the sketch below).
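A minimal sketch of what that accessor approach might look like, assuming the accessor is handed the file's base address; the struct and function names are invented for illustration:

```cpp
#include <cstdint>

// The file keeps 32 bit offsets in both builds. The 32 bit build
// fixes them up to real pointers in place; the 64 bit build leaves
// them as offsets and resolves them on every access.
struct SpriteData { /* ... payload ... */ };

struct SpriteRef
{
    uint32_t offset; // offset from file start (fixed-up pointer on 32 bit)
};

#if UINTPTR_MAX == 0xFFFFFFFF // 32 bit build

// One-off fixup at load: the offset becomes a real 32 bit pointer.
inline void Fixup(SpriteRef& ref, const uint8_t* fileStart)
{
    ref.offset = static_cast<uint32_t>(
        reinterpret_cast<uintptr_t>(fileStart + ref.offset));
}

// Accessor just reinterprets the stored value; fileStart is unused.
inline const SpriteData* Get(const SpriteRef& ref, const uint8_t*)
{
    return reinterpret_cast<const SpriteData*>(
        static_cast<uintptr_t>(ref.offset));
}

#else // 64 bit build

// Fixup is a no-op: a 64 bit pointer won't fit in the 32 bit field.
inline void Fixup(SpriteRef&, const uint8_t*) {}

// Accessor does the offset + start arithmetic on each access.
inline const SpriteData* Get(const SpriteRef& ref, const uint8_t* fileStart)
{
    return reinterpret_cast<const SpriteData*>(fileStart + ref.offset);
}

#endif
```

Keeping the accessor's signature identical on both builds means the call sites don't need to care which scheme is in use.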
  9. I'm converting a load of C++ code from 32 bit to 64 bit, and have run into the predictable snag of fixup (relocation) pointers in binary files. Essentially there are a bunch of pointers in a binary file, but when saved on disk they are stored relative to the start of the file. Then on loading, the pointers are 'fixed up' by adding the address in memory of the start of the file to the offset, to give an absolute pointer which can be resaved in the same memory location and used at runtime as a normal pointer. This is great, but has so far been relying on the offset and the pointer being 32 bit. The files are unlikely to be anywhere near 4 gigs, so the offsets don't *need* to be 64 bit.

My question is what would be best (or rather, what do most of you guys do) in this situation? One particular quirk is that the code needs to compile and run fine as both 32 bit and 64 bit, as it needs to run on both classes of device, and the binary files must be the same.

  • The most obvious solution is to store all the offsets / pointers in the binary file as 64 bit. This would mean re-exporting all the binary files, but that is doable (even if somewhat of a pain). It would simplify things for the 64 bit version, and require only slight modification for 32 bit. The downside is that the file sizes / size in memory would be bigger, plus any cache implications.
  • Alternatively, keep the pointers as 32 bit offsets and do the pointer addition on the fly as the parts of the data need to be accessed. The files are kept the same, and the only cost is the extra pointer arithmetic at runtime. I have a vague memory of seeing a presentation by a guy who did such relocation on the fly and found there was very little runtime cost.

There is also the question: even with 64 bit pointers, are they ever going to be more than a 32 bit value if the program is using a lot less than 4 gigs? I'm assuming yes, as the address space may be past the first 4 gigs, what with all the virtual memory address space / paging / randomization that goes on, but I just thought I'd check that assumption, as I'm not well versed in the low level details. (See the quick check below.)
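On that last question, a quick empirical check is simply to print a few addresses. On a typical 64 bit Linux build, address space layout randomization places the stack and heap well above 4 GiB even in a tiny program, so pointers cannot safely be truncated to 32 bits. An illustrative sketch:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int globalVar = 0;

int main()
{
    int stackVar = 0;
    void* heapBlock = malloc(64);

    printf("global: %p\n", static_cast<void*>(&globalVar));
    printf("stack:  %p\n", static_cast<void*>(&stackVar));
    printf("heap:   %p\n", heapBlock);

    // On 64 bit Linux with ASLR, these commonly exceed 0xFFFFFFFF.
    bool heapAbove4Gig =
        reinterpret_cast<uintptr_t>(heapBlock) > 0xFFFFFFFFull;
    printf("heap above 4 GiB: %s\n", heapAbove4Gig ? "yes" : "no");

    free(heapBlock);
    return 0;
}
```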
  10. Doh! After all that, I'm getting the inkling that Linux may just support OpenGL ES 2.0 out of the box, via the open source Mesa driver thingies. Here was me thinking it was something to do with the Black Mesa research facility. Anyway, I've successfully got a triangle on the screen, and am praying it is not software emulated... (see the check below)
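For what it's worth, one way to settle the "is it software emulated" question is to query the renderer string from the live context. When Mesa falls back to its software rasterizer, the name usually contains 'llvmpipe' or 'softpipe'. A minimal sketch, assuming a current OpenGL ES context:

```cpp
#include <cstdio>
#include <cstring>
#include <GLES2/gl2.h>

// Must be called with a current GL ES context. Mesa's software
// rasterizers report names like "llvmpipe" or "softpipe", so their
// presence in the renderer string is a strong hint that no GPU
// acceleration is in use.
void PrintRendererInfo()
{
    const char* renderer =
        reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* version =
        reinterpret_cast<const char*>(glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\nGL_VERSION: %s\n", renderer, version);

    if (renderer && (strstr(renderer, "llvmpipe") || strstr(renderer, "softpipe")))
        printf("Looks like software rendering.\n");
}
```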
  11. The Mali OpenGL ES emulator works, afaik, by translating the OpenGL ES calls into regular desktop OpenGL calls, only allowing a subset of the full OpenGL, and performing lots of validation. This is what it appeared to do on Windows, and everything I've read suggests it may be doing the same on Linux. What is slightly confusing is that there may be some kind of Mesa OpenGL ES support built into my Linux, presumably as it may be running on hardware that *does* natively support OpenGL ES(?). What that does if you call it without hardware support, I have no idea; I wish I could find some decent Linux for dummies tutorials lol. I did manage to successfully install the Mali emulator by first uninstalling SDL2. However, it gets worse. Their test cube app failed, apparently because it is trying to use the fallback compatibility OpenGL 3.0 profile instead of the core 4.3 profile on my Kaby Lake PC. The docs suggest this may be because they've only tested it with NVIDIA hardware, but I haven't a clue; maybe I would have to force it to use the core profile somehow. I'm now trying to get the PowerVR OpenGL ES emulator working, in the hope it plays nicer. I have managed to get some SDL / OpenGL ES code to compile and link, but not show a triangle yet, so I don't know if it is working...
  12. I'm developing an Android game and have been primarily using a PC build (on Windows) with the Mali OpenGL ES 2.0 emulator, with a secondary Android Studio build for the devices. For various reasons I'm trying to change over to Linux; I have Linux Mint on a new PC, and so I'm trying to get a similar PC build working under Linux. However, I am an absolute beginner at Linux. It seems that a sensible option might be to use SDL on Linux for stuff like creating a window, keyboard input etc. I gather that SDL2 is just the most up to date version of SDL, so I have been installing that with apt-get. However, when I try to install the Mali OpenGL ES emulator from ARM, I am getting a conflict:

installed package 'libegl1-mesa-dev' conflicts with the installed package 'libgles2-mesa-dev'

I am guessing this means that both SDL2 and Mali have some EGL functionality, and they are treading on each other's toes? There is some mention of this in the Mali help file. Is this --force-all option what I should do? Or is there no way to get SDL2 and Mali to play together? If not SDL2, then how should I be using Mali under Linux (i.e. what other API should I be talking to for creating a window, keyboard input etc.)? (See the sketch below.)
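For reference, the later posts above found that Mesa serves OpenGL ES directly on desktop Linux, so the emulator can be skipped entirely: SDL2 can request an ES 2.0 context itself. A minimal sketch of that route (window title, size and clear colour are arbitrary):

```cpp
#include <SDL2/SDL.h>
#include <GLES2/gl2.h>

// Minimal SDL2 window with an OpenGL ES 2.0 context. On desktop
// Linux this is typically served by the Mesa drivers rather than
// an emulator.
int main(int, char**)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    // Request an ES 2.0 profile before creating the window.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

    SDL_Window* window = SDL_CreateWindow("GLES2 test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    if (!window) return 1;

    SDL_GLContext context = SDL_GL_CreateContext(window);
    if (!context) return 1;

    SDL_Log("GL_RENDERER: %s",
        reinterpret_cast<const char*>(glGetString(GL_RENDERER)));

    // Clear the screen once and show it briefly.
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapWindow(window);
    SDL_Delay(2000);

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```

Built with something like: g++ main.cpp `sdl2-config --cflags --libs` -lGLESv2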
  13. Curse this mammalian brain, it's actually Mark Ridley; I think it was just titled 'Evolution', but I had it as a textbook on a 3rd year genetics course 20 years ago. There are probably many other great, more recent books. For Dawkins I'd recommend The Selfish Gene, then The Blind Watchmaker. They are his earlier books but did very well; the later ones often rehash the same points. After all, the principles involved haven't changed, although I'm pretty sure there have been a lot of breakthroughs in things like epigenetics since I studied it. Spot on, I was going to mention this in my first post. There are a lot of programmers experimenting with genetic methods / selection to evolve artificial life / methods of locomotion in physics simulations, things like that. It may not be biological, but the principles involved are exactly the same. This kind of thing, for locomotion: https://www.youtube.com/watch?v=pgaEE27nsQw Well, 'free will' I'd just say is a fancy name for the decision making process our brains do all the time (as do most other organisms more complex than, say, a fly). I don't really know anything about subjective experience..
  14. I know next to nothing about Pilobolus, but if you are interested in how life achieves complexity then I would recommend reading some Dawkins as a grounding in how it all works: 'The Selfish Gene' and 'The Blind Watchmaker'. I really don't know the extent of your biology knowledge, but imo there are 2 big aspects to get a grasp of. The first is evolution and genetics (which Dawkins is a good introduction to, and there are more advanced books by e.g. Matt Ridley). The other is development, and complexity arising from simple rules: understanding that something apparently complex (e.g. a tree) can be built by simpler branching etc. rules (have a look at Conway's 'Game of Life' cellular automaton for an example). Even human organs tend to be built in the same way; see for example the similarity of the branching in the lungs to the structure of a tree. It is a means to increase the surface area to volume ratio for gas exchange. I'm not super familiar with the specifics of development of any particular organism, but a lot of work has been done on simple organisms like fruit flies to understand how they are built; you could read about this to see how things like limbs and specialisation can happen. As you read about evolution you will read about how most of the organisms today are built from a few body plans / phyla, and share a lot of their blueprint. I just finished reading 'Wonderful Life' by Stephen Jay Gould, which, aside from being a little rambling and overlong, suggests that during the first explosion of multicellular life there were far more body plans being experimented on by mother nature, and, whether by random accident or better design, just a few of them won out and form the basis for later life on earth. As for creating models, go for it; maybe even start with simpler models than Pilobolus. You can even add genetics to your model and let nature 'select' the best version of your species. Or compete 2 or more species against each other if you want to make things interesting, or have predator prey interactions. This is all assuming you are not a religious fruitcake, of course, in which case forget all this, and just accept that everything was created by the Flying Spaghetti Monster, waving his noodly appendages.
  15. Yes definitely, I've been finding this. It has made me so glad I went with pre-rendering the scrolling background, as rendering all those sprites every frame would have killed performance. Most of the work on a frame is done by just drawing one big screen size quad for the background. The 'big work' is done when rendering a new row or column of the background, which only happens every few frames, and is limited to a small viewport, so it minimizes the fill rate requirements. See here: https://www.youtube.com/watch?v=Xfaj4TtvjKk which shows it working on the ground texture. As well as hardware depth testing (so the particles interact with the animals), the particles and models can also do a depth check against the custom encoded RGBA depth texture for the background, so they go behind trees etc. (one common packing scheme is sketched below). This is an extra texture read plus calculations in the fragment shader, so turning it off did give a speedup. Yup, I definitely found this to be the case.
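The post doesn't give the exact depth encoding, so this is an assumption, but for illustration here is one common way to pack a [0,1) depth value into the four 8 bit channels of an RGBA texture, shown as C++ to match the other sketches (the shader-side decode is the same arithmetic):

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical pack/unpack of a [0,1) depth value into 4 bytes,
// one common scheme for a 'custom encoded RGBA depth texture'.
// The actual encoding used in the post is not specified.
void PackDepth(float depth, uint8_t rgba[4])
{
    // Spread the value across 4 channels, 8 bits of precision each.
    double d = depth;
    for (int i = 0; i < 4; i++)
    {
        d *= 256.0;
        double whole = std::floor(d);
        if (whole > 255.0) whole = 255.0;
        rgba[i] = static_cast<uint8_t>(whole);
        d -= whole;
    }
}

float UnpackDepth(const uint8_t rgba[4])
{
    return rgba[0] / 256.0f
         + rgba[1] / (256.0f * 256.0f)
         + rgba[2] / (256.0f * 256.0f * 256.0f)
         + rgba[3] / (256.0f * 256.0f * 256.0f * 256.0f);
}
```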