Additionally, there is GLSL, which seems to be another language used alongside OpenGL, GLEW and GLFW...
GLSL is GL's shader language.
"Shaders" are the parts of your code that actually run on the GPU - GL2/D3D9 had "fixed function" GPU code, where you could tell it how to draw something. But GL3/4/D3D11 require the "how" of every operation to be described with shader code.
D3D/GL are high-level APIs that let you send data and "commands" to the GPU (such as "fill the pixels that are covered by these triangles"). Shaders contain the actual instructions that the GPU will perform within each high-level command (e.g. "colour each pixel by comparing its normal to the lighting direction"). Another way to think of a shader is that it's a callback that gets executed for each pixel/vertex that the GPU processes. You can't write these callbacks in C/C++ -- GL requires you to write them in GLSL and D3D requires you to write them in HLSL.
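To make that concrete, here's a minimal sketch of such a per-pixel "callback": a GLSL fragment shader doing the normal-vs-light comparison described above, embedded as a C++ raw string the way you'd hand it to glShaderSource. The names (vNormal, uLightDir, fragColour) are just illustrative.

```cpp
// Sketch: a GLSL fragment shader embedded in C++ as a raw string.
// The GPU runs main() once for every pixel covered by your triangles.
const char* fragmentShaderSource = R"glsl(
    #version 330 core
    in vec3 vNormal;         // per-pixel normal, interpolated from the vertex shader
    uniform vec3 uLightDir;  // direction towards the light (assumed normalized)
    out vec4 fragColour;
    void main()
    {
        // Classic diffuse lighting: compare the normal to the light direction.
        float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
        fragColour = vec4(vec3(diffuse), 1.0);
    }
)glsl";
// This string would then be passed to glShaderSource/glCompileShader.
```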
some tutorials seem to use libraries that compensate for features that got removed (GLM for some fixed functions?).
GL/D3D don't come with a math library, so you need one. GLM is a decent math library (that works with both D3D and GL), and it's designed to look like GLSL math code... so it's a good fit for a GL programmer.
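As a small sketch (assuming GLM is installed and on your include path), note how the same vector maths reads almost identically on the CPU with GLM as it does in GLSL, and how GLM fills in the removed fixed-function matrix helpers:

```cpp
#include <glm/glm.hpp>                  // vec3, dot, normalize - mirrors GLSL names
#include <glm/gtc/matrix_transform.hpp> // perspective, lookAt - replaces old fixed-function matrix calls

int main()
{
    // Reads just like the GLSL shader code above:
    glm::vec3 normal   = glm::normalize(glm::vec3(0.0f, 1.0f, 0.5f));
    glm::vec3 lightDir = glm::normalize(glm::vec3(0.3f, 1.0f, 0.0f));
    float diffuse = glm::max(glm::dot(normal, lightDir), 0.0f);

    // Replacements for removed fixed-function matrix helpers
    // (e.g. gluPerspective / gluLookAt):
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5),   // camera position
                                 glm::vec3(0, 0, 0),   // look-at target
                                 glm::vec3(0, 1, 0));  // up vector
    (void)diffuse; (void)proj; (void)view;
    return 0;
}
```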
SFML seems to be able to do that but then again, I want to learn OpenGL and not how SFML uses OpenGL.
SFML/SDL/GLFW do a lot of the "boilerplate" work for you, such as initializing OpenGL, creating a window (portably across OSes), handling mouse clicks, etc... SFML/SDL also have their own drawing functions (GLFW doesn't), but you don't have to use them. Once GL is initialized and you have a window, you can ignore SFML/SDL and just write your own GL code.
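For a rough idea of what that boilerplate route looks like, here's a minimal GLFW sketch (assumes GLFW is installed; error handling mostly omitted):

```cpp
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    // Ask for a modern (GL 3.3 core) context instead of a legacy one.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(1280, 720, "My GL App", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window); // GL is now initialized on this thread

    // From here on you can ignore GLFW and just write your own GL code.
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT); // ...your GL drawing goes here...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
```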
Alternatively you can avoid SFML/SDL/GLFW altogether and create your own window and initialize GL yourself manually.
Large parts of OpenGL don't actually exist in the gl.h header file. It's up to you to write the headers yourself by carefully reading the spec, then asking the OS to fetch function pointers from the driver for you and then casting them to the right types... which is insane to do manually. GLEW does that job for you.
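A sketch of both options follows. The manual version is Windows-specific and uses the PFNGL...PROC typedefs from glext.h; multiply it by the several hundred functions GL actually has to see why GLEW exists.

```cpp
// Option A: fetch one function pointer by hand (Windows-specific sketch).
#include <windows.h>   // wglGetProcAddress
#include <GL/gl.h>
#include <GL/glext.h>  // the PFNGL...PROC typedefs

PFNGLGENBUFFERSPROC glGenBuffers = nullptr;

void LoadOneFunctionManually() // requires a current GL context
{
    glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    // ...now repeat for every other GL function you need.
}
```

With GLEW, the same job for the entire API is a couple of lines:

```cpp
// Option B: let GLEW fetch every function pointer for you.
#include <GL/glew.h>  // include before any other GL header

bool InitGLFunctions() // call after a GL context has been created and made current
{
    glewExperimental = GL_TRUE;    // also fetch core-profile functions
    return glewInit() == GLEW_OK;
}
```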
About Vulkan: I read that it is more low-level than OpenGL. Some say it might be the future, but others say that, since OpenGL will continue to be developed, both will co-exist. I think it is healthier to first learn OpenGL, then? Would my knowledge carry over to Vulkan?
Yes. Vulkan/D3D12 are extremely low-level GPU APIs, whereas OpenGL/D3D11 are "normal" GPU APIs. I would definitely recommend learning OpenGL/D3D11 first, and yes, once you learn one GPU programming API, learning a second one will be much easier as all the concepts carry over.
OpenGL and Vulkan come to mind. Direct3D is not really in my interest, as it is platform-dependent.
FWIW:
OpenGL is a single specification for Windows/Mac/Linux, but in reality it's at least 7 different libraries that all try to behave the same: NVidia Windows, NVidia Linux, AMD Windows, AMD Linux, Intel Windows, Intel Linux, Apple MacOS... In practice they won't behave exactly the same, so it's important to test your application on all 7 versions of "OpenGL". Apple is the only sane platform where a single party dictates the implementation, instead of it changing with each GPU vendor...
OpenGL|ES is the same thing for Android and iPhone... except there are hundreds of different Android implementations of OpenGL|ES. There's also no alternative to GL|ES on Android, which is why doing graphics programming on Android is a circle of hell.
D3D is specified by and implemented by Microsoft - so like OpenGL on MacOS, it's actually stable and works the same across all vendors. It's also used on Windows and Xbox - so for a game developer targeting Windows+Xbox+PS4, it's actually more portable than OpenGL is :o
Other platforms (Playstation, Nintendo) use their own APIs that are neither GL nor D3D - so a professional graphics programmer is forced to learn a lot of different graphics APIs.
MacOS / iPhone also have "Metal", which is better than GL/GLES, so most engines will use D3D on Windows, Metal on Mac/iPhone, GL on Linux, GLES on Android and other proprietary APIs on consoles... so at the pro level, GL's portability benefit is negated :(
If you don't have the manpower to support umpteen APIs and only care about Windows/Mac/Linux though, then GL's portability will be useful for you, as long as you're OK testing on multiple GPUs as mentioned above.