A big advantage of one branch per change is it makes bisection (searching for a change that introduced a bug) much, much easier. This is only a concern, of course, if you're developing software that might need maintaining.
One branch per change means you can engage in test-driven development by writing the failing test, then making it pass, and using the "commit early, commit often" workflow in conjunction with the "trunk always passes all tests" paradigm. These are all very good things to do, and doing them together is even better.
You don't need to create branches just to write new code; branches have other uses.
Well, with git the idiomatic use is to create a branch for every change. You can create feature branches for multiple changes, too, and branch that and merge the smaller changes into one large change, then merge that. You can even do squash commits that effectively make smaller branches disappear after the fact. You can rebase branches more easily than rebasing sets of commits. Branching and merging in git is cheap cheap cheap. It isn't yer grandpa's CVS with its mucho expensive branch-and-tag operations.
You can, of course, have long-lived independent branches a la CVS, if you want. That happens in the Linux kernel tree, where from time to time groups of changes get cherry-picked from other trees and pulled into Linus's tree, and become official. If nothing else, git is powerful and flexible.
There is no "master" branch, in fact. There's just a whole lot of branches everywhere and the meaning of each is an external semantic.
A university degree is not job training. In that light, don't worry that the name of a major at one or two institutions will condemn you to a life of slavery on the assembly line. Chances are in 10 years you'll be working on something that hasn't even been invented yet. Study what appeals to you, learn what you love.
You should also dive into Plato, René Descartes, Immanuel Kant, and David Hume. They all directly address these questions to a greater or lesser extent. No metaexploration of reasoning, knowledge, thought, and general epistemology would be complete without an examination of the body of literature developed on the topic over the last 50 centuries.
I rewired a drum hard disk (you know the ones where you opened the top and dropped the huge platters in)
Dude, magnetic drum storage was distinctly different from magnetic disk storage. Drums had the media on the outside surface of a cylinder. Disks use the surface of disks. Hence the name.
My first full-time job after graduating sometimes required me to change the RL03 disk packs to get at legacy FORTRAN IV code for some clients when I had support calls. That was on a PDP-11/M running RSX-11. I'm familiar with disks. Drums I only read about in my mother's computer science textbooks. Core memory and paper tape I remember from high-school science fairs when I was in grade school. Core memory came in 256-byte chunks (2^8, 8-bit address space), measuring about 6 inches by 6 inches by 2 inches. I mean, who could fill up 256 bytes?
When I started university, we still used keypunches and submitted the card deck with JCL to the HSJS on the mainframe, and picked up our printouts sometimes hours later. You could still get coding forms to write your code on with demarcated columns (a 'C' in column 1 indicated the line was a comment).
Other than hardware, a lot of the time a term has fallen into disuse and been replaced by a new term that means the same thing, which then gets announced and marketed like it's a great new discovery or invention. You just gotta learn that when a new silver bullet is announced that's going to save us all, you need to check what the business model of the promoter is and how they're aiming to part you from your money.
I would strongly recommend using a VM for what you're trying to do. VirtualBox, VMware, they're all good. LXC is not a VM: you'll run into video driver grief sooner or later if you use it like one. Also, you can run Windows in a VM. You will want to make sure you have plenty of RAM and extra disk space.
You may find there are video driver issues inside the VM: it's rare, but it does happen. Video driver here means OpenGL support. For full video driver testing, you will need to use bare metal, and not just one machine but a selection of Intel, AMD, and nVidia hardware of different generations. For everything else, the VM is what you want: if it works in the VM, it'll work elsewhere.
I develop on Ubuntu exclusively (it's in my job description): the current dev release is always a rolling release (so it has the current leading edge kernel, video drivers, toolchain, etc) but it's different enough and incompatible enough with the latest stable release that I have two separate systems for testing backports from dev to stable. Upgrading a stable system with something not in the archive (a new toolchain, for example) is enough to possibly destabilize it, and you're going to get real sad after a while.
(1) Give your base class a protected constructor that takes parameters. Use that constructor in the initializer list of derived classes. That way, the derived classes control the initialization values and they only get initialized once.
(2) Read up on virtual functions. The whole point is that you store pointers to the base class and call the base class virtual function, and the most-derived virtual function actually gets called.
(3) Better yet, read up on NVI. You would have a non-virtual Attack() function in the base class and a virtual CustomAttack() function that gets overridden by derived classes, and by default does nothing. The base class does some setup/teardown (logging, for example), then invokes the virtual function to implement derived-tower-specific attack modes.
However, what concerns me is what happens on other platforms? Will SDL2 use whatever version of OpenGL is on the machine, or if none available (like on some phones) will it choose the ES versions or another rendering option (with last fallback of software rendering)?
If the version of OpenGL you specify in the context flags is not available, you get a failure.
The default context in libSDL2, for a device that supports OpenGL, is OpenGL 2.1. That's most desktop devices these days. Most desktops provide binary support for later versions, too, even if it requires software rendering. An extension wrangler will help with forward-support on older contexts.
The default context on a device that does not support OpenGL but supports OpenGL|ES 1 is OpenGL|ES 1.1.
The default context on a device that does not support OpenGL or OpenGL|ES 1 is OpenGL|ES 2.0. That's most mobile devices these days.
The default is to not use EGL for OpenGL and to use EGL for OpenGL|ES. On some platforms (e.g. Wayland and Mir on Linux) you want to use EGL for everything or you'll get a failure, on some it doesn't matter (X11), and others (Microsoft Windows) will fail if you try to use EGL for OpenGL. You may need to probe to see what's supported.
Your best bet is to explicitly set the desired OpenGL version before opening the window, and write to that version. You may want to use an extension wrangler, and you may want compile-time selection between GL and GL|ES code paths where the two are API-divergent.
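A minimal sketch of that advice, assuming desktop SDL2 is installed. The SDL_GL_SetAttribute/SDL_GL_CreateContext calls are the real SDL2 API; the version requested (3.3 core) is just an example, and error handling is abbreviated:

```cpp
#include <SDL.h>
#include <cstdio>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    // Set the desired context version *before* creating the window.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* win = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    if (!win) {
        std::fprintf(stderr, "SDL_CreateWindow failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    // If the requested version isn't available, this returns NULL rather
    // than silently handing you some other version; this is where you'd
    // fall back (e.g. retry with SDL_GL_CONTEXT_PROFILE_ES).
    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    if (!ctx) {
        std::fprintf(stderr, "no GL 3.3 core context: %s\n", SDL_GetError());
    } else {
        SDL_GL_DeleteContext(ctx);
    }

    SDL_DestroyWindow(win);
    SDL_Quit();
    return ctx ? 0 : 1;
}
```

From there, an extension wrangler (GLEW, glad, etc.) loads the function pointers for the version you actually got.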