Turning to Computer Graphics again, after years of abstinence - Where to go, what to read?

Started by
15 comments, last by Dr_Asik 6 years, 11 months ago

Hi Unshaven. I can relate to your posts quite a lot; I also started out writing a software renderer (Amiga days).

I think you should stick with OpenGL, via GLFW and GLEW. The default context setup is fine, or you can explicitly request a 3.3 core profile (as long as everything uses VAOs). Oh, and you're going to love shaders!

Indeed, forget about Apple; those guys deleted my album art on day 2. I had a terrible experience mentioning it at the time, too. If I could change one part of my journey, it would be to have stuck with PC instead of trying mobile for a few years.

For resources, do Google searches and shortlist each subject to material that seems right to you. Don't put off bug fixes (or at least keep a todo list); keep things behaving as you expect as you go. Or just go around in circles, which is perhaps a good option for a few laps before settling on decent base code (preferably in a library called libunshaven or something!).


wat

Vulkan isn't available on OSX, neither for developing nor for running.

You can run it via third-party software; MoltenVK, for example, does that. I've never tried it, though.

The main reason why D3D12/Vulkan aren't great for non-experts (and D3D11/OpenGL are better) is that Vulkan/D12 don't at all abstract away the fact that the GPU and CPU are running asynchronously. In other words, the user is given the burden of writing "thread safe" code that manually synchronizes ownership and access of memory buffers between those two co-processors.

That is indeed true, but I consider it more of a positive thing.
I would assume you are at least an experienced programmer if you're messing around with Vulkan.
Apart from that, I found it quite easy to write concurrent code in Vulkan, a lot easier than "real" threading. Everything is actually right there for you.
Command buffers etc. can be synchronized appropriately using fences and semaphores, while resources can be safely accessed using buffer and image barriers, and that's basically all you need to set up :)

Besides that, Vulkan/D12 expose many other areas of "undefined behaviour" where slightly incorrect code may seem to work, yet in the future could have disastrous consequences.

Right, that's unfortunately true; the validation layers still don't catch many problems. The API is quite "young" though, and they're constantly expanding the layers :)

Whoa, this thread looks more filled than when I last looked! :-)
Quite some food for thought to ponder; thanks, guys.

Heh, well, I have been doing some bare-metal stuff in the last couple of years (as in microcontrollers with a realtime OS; well yeah, I did that, doesn't mean I'm an expert ;)). It was single core only, so no "real" multithreading, but I guess getting used to putting barriers here and there would not be like an ice bucket challenge for me; I have a rough idea of the problems involved.
Then again, I'm not going to be making AAA games, so from what I've gathered now, investing the time to actually get all those nasty things right may not be worth it for me. Actually getting something done in very limited spare time would be nice indeed ;)
So I think D3D12 and Vulkan are out.
From there, I still need to decide (god, do I hate that).

As for "you don't need school to get into the industry": well, I guess it depends on *what* industry exactly we are speaking of.
Let's face it, I am well over 30. I will not get back into the *game* industry at a wage it would be sane for me to accept, given my age and the other options in industries which pay real money (in Germany). And I even doubt they would hire a guy as old as me for the lower positions, which are meant for really young, really quick-learning people to grow from.

And then there's the country thing. Maybe in the US it's easy to get hired and fired again; here those things are comparably hard, so employers are more wary.
There are some companies who are *open* to trying out software devs without a degree. Alas, the likelihood that such a company is afflicted by at least one of a set of typical problems which are not conducive to the quality of life of anyone involved is rather high.
But anyway, it is much more of a fight to get even looked at without a degree, at least here, for the kind of jobs I have been interested in so far, and I've talked a lot to people with a degree and much less experience than I have.

These days it's all Unity and the Unreal SDK. You should probably check those out. I think Unreal uses C++ as a scripting language. That's the modern thing, anyway.

It depends a lot on what your end goals and interests are. For me, it was too easy to get lazy with Unity and buy everything, which kind of defeats the whole purpose when you haven't created anything but merely assembled other people's stuff. I missed learning about low-level stuff, which is why I went back to C++ and OGL 4.5, and I have been happy with my decision. But for me it's more about learning than trying to produce a commercial game.

Personally, I'm into OpenGL (4.5) right now after doing a little DX11. I think modern OGL is a pretty good learning platform if you find the right teachers. Check out LearnOpenGL.com if you think you want to get back into OpenGL.

You mentioned you were not familiar with shaders and the last time you were really into graphics programming was before the big shader revolution. You may want to check out my HLSL series on YouTube. You may want to skip the first video of the series and start with the "Triangles" video since the first video just shows the XNA code that I used to call the shader and explains the differences between that and doing HLSL with DX11.

Of course OGL uses GLSL and not HLSL, but most of what is in the video is math more than it is language specific. Not to mention I have written pretty much an identical shader to the one at the end of the video in GLSL and tried to use the same layout and variable names for comparison.

I'll see if I can find a link to a recent post where I posted that shader code.

But the whole series is designed to be watched in order. It starts with the simplest 3D shader you can possibly have: the silhouette shader. Each video then adds more and more code until it turns into a Blinn-Phong shader that supports texturing, which is pretty much what you want to reach when you start learning 3D shaders. That gives you a platform to go on and learn more on your own once you see what's going on. The next video would have been normal mapping; the series takes you right up to the point where that would have been the next step. I even did a test program to prepare for that video, but never made it.

Also, there is a lot of vector and matrix algebra in shader programming and modern 3D graphics. If you are not big on Linear Algebra, you may want to watch my Vector and Matrix videos on my channel. It might help to watch those before the HLSL series because the shader uses vectors and matrices quite a bit.
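To give a taste of that vector math outside a shader, here is a minimal CPU-side sketch in plain C++ of the same dot-product diffuse calculation the fragment shader performs (the `Vec3` struct and `diffusePercentage` name are made up for illustration, not part of any library):

```cpp
#include <algorithm>
#include <cmath>

// Minimal 3-component vector, just enough to mirror the GLSL math.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Same formula as the shader: max(dot(N, L), 0), where L is flipped so it
// points from the surface *toward* the light.
float diffusePercentage(Vec3 normal, Vec3 lightDirection) {
    Vec3 L = normalize({ -lightDirection.x, -lightDirection.y, -lightDirection.z });
    return std::max(dot(normalize(normal), L), 0.0f);
}
```

For example, a surface normal pointing straight up under a light shining straight down is fully lit (1.0), while a surface facing away from the light gets clamped to 0.0 rather than going negative.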

On my website, there is a working 3D code example that makes use of the shader and you can download the source code and see how the GLSL is called and such.

EDIT: Couldn't find the GLSL code where I posted it here recently. It must have been on the OGL forum. Well, here it is anyway. This is the GLSL translation of the HLSL shader you end up with at the end of my video series, except this version adds fog, which I didn't do in the videos. It's actually a Blinn or a Phong shader depending on which lines you comment out, but I think the video explains that better than I could here.

BlinnPhong.vrt


#version 450 core
layout (location = 0) in vec3 Pos;
layout (location = 1) in vec2 UV;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec4 Color;

uniform mat4 WorldMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;



smooth out vec2 TextureCoordinates;
smooth out vec3 VertexNormal;
smooth out vec4 RGBAColor;
smooth out vec4 PositionRelativeToCamera;
out vec3 WorldSpacePosition;


void main()
{
    gl_Position = WorldMatrix * vec4(Pos, 1.0f);                //Apply object's world matrix.
    WorldSpacePosition = gl_Position.xyz;                        //Save the position of the vertex in the 3D world just calculated. Convert to vec3 because it will be used with other vec3's.
    gl_Position = ViewMatrix * gl_Position;                        //Apply the view matrix for the camera.
    PositionRelativeToCamera = gl_Position;
    gl_Position = ProjectionMatrix * gl_Position;                //Apply the Projection Matrix to project it on to a 2D plane.
    TextureCoordinates = UV;                                    //Pass through the texture coordinates to the fragment shader.
    VertexNormal = mat3(WorldMatrix) * Normal;                    //Rotate the normal according to how the model is oriented in the 3D world.
    RGBAColor = Color;                                            //Pass through the color to the fragment shader.
}
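To make the world → view → projection chain in that vertex shader concrete, here is a tiny CPU-side C++ sketch of the matrix-times-vector step (note: GLSL stores matrices column-major; this sketch uses a row-major `[row][col]` layout, so the math `M * v` is the same even though the memory layout differs; `mul` and `translation` are made-up helper names):

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>; // row-major [row][col]
using Vec4 = std::array<float, 4>;

// M * v, mirroring GLSL's `WorldMatrix * vec4(Pos, 1.0)`.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// A world matrix that translates by (tx, ty, tz): identity plus a
// translation column, so w = 1 in the vertex position picks it up.
Mat4 translation(float tx, float ty, float tz) {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;
    return m;
}
```

Applying a translation world matrix to the model-space origin `(0, 0, 0, 1)` yields exactly the object's position in the 3D world, which is what the shader stashes into `WorldSpacePosition` before applying the view and projection matrices.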
 

BlinnPhong.frg


#version 450 core

in vec2 TextureCoordinates;
in vec3 VertexNormal;
in vec4 RGBAColor;
in vec4 PositionRelativeToCamera;
in vec3 WorldSpacePosition;

layout (location = 0) out vec4 OutputColor;


uniform vec4 AmbientLightColor;
uniform vec3 DiffuseLightDirection;
uniform vec4 DiffuseLightColor;
uniform vec3 CameraPosition;
uniform float SpecularPower;
uniform vec4 FogColor;
uniform float FogStartDistance;
uniform float FogMaxDistance;
uniform bool UseTexture;
uniform sampler2D Texture0;



vec4 BlinnSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 HalfwayNormal;
    vec4 SpecularLight;
    float SpecularHighlightAmount;


    HalfwayNormal = normalize(LightDirection + CameraDirection);
    SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}


vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 ReflectedLightDirection;    
    vec4 SpecularLight;
    float SpecularHighlightAmount;


    ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
    SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;
    

    return SpecularLight;
}


void main()
{
    vec3 LightDirection;
    float DiffuseLightPercentage;
    vec4 SpecularColor;
    vec3 CameraDirection;    //Float3 because the w component really doesn't belong in a 3D vector normal.
    vec4 AmbientLight;
    vec4 DiffuseLight;
    vec4 InputColor;

    
    if (UseTexture)
    {
        InputColor = texture(Texture0, TextureCoordinates);
    }
    else
    {
        InputColor = RGBAColor; // vec4(0.0, 0.0, 0.0, 1.0);
    }


    LightDirection = -normalize(DiffuseLightDirection);    //Normal must face into the light, rather than WITH the light to be lit up.
    DiffuseLightPercentage = max(dot(VertexNormal, LightDirection), 0.0);    //Percentage is based on angle between the direction of light and the vertex's normal.
    DiffuseLight = clamp((DiffuseLightColor * InputColor) * DiffuseLightPercentage, 0.0, 1.0);    //Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

    CameraDirection = normalize(CameraPosition - WorldSpacePosition);    //Create a normal that points in the direction from the pixel to the camera.

    if (DiffuseLightPercentage == 0.0f)
    {
        SpecularColor  = vec4(0.0f, 0.0f, 0.0f, 1.0f);
    }
    else
    {
        //SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
        SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
    }

    float FogDensity = 0.01f;
    float LOG2 = 1.442695f;
    float FogFactor = exp2(-FogDensity * FogDensity * PositionRelativeToCamera.z * PositionRelativeToCamera.z * LOG2);
    FogFactor = 1 - FogFactor;
    //float FogFactor = clamp((FogMaxDistance - PositionRelativeToCamera.z)/(FogMaxDistance - FogStartDistance), 0.0, 1.0);
    
    OutputColor = RGBAColor * (AmbientLightColor * InputColor) + DiffuseLight + SpecularColor;
    OutputColor = mix (OutputColor, FogColor, FogFactor);
    //OutputColor = vec4(0.0f, 0.5f, 0.0f, 1.0f);
}
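The fog math at the end of that fragment shader is the classic exponential-squared fog: `exp2(-(d*z)^2 * LOG2)` equals `exp(-(d*z)^2)` because `LOG2 = 1.442695 = 1/ln(2)`. A small CPU-side C++ sketch of the same formula (the `fogBlendAmount` name is made up for illustration):

```cpp
#include <cmath>

// Exponential-squared fog, same formula as the fragment shader:
// exp2(-(d*z)^2 * LOG2) == exp(-(d*z)^2), since LOG2 = 1/ln(2).
// Returns the blend amount fed to mix(): 0 = no fog, 1 = fully fogged.
float fogBlendAmount(float viewSpaceZ, float fogDensity = 0.01f) {
    const float LOG2 = 1.442695f;
    float survive = std::exp2(-fogDensity * fogDensity
                              * viewSpaceZ * viewSpaceZ * LOG2);
    return 1.0f - survive;
}
```

At the camera (z = 0) nothing is fogged, and the blend amount rises monotonically toward 1 with distance, which is why the shader can pass it straight to `mix(OutputColor, FogColor, FogFactor)`.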
 

But anyway, it is much more of a fight to get even looked at without a degree, at least here

A key thing for me to point out, since I am likely the most vocal about ditching standardized education, is that except for my latest job all of my jobs have been overseas, in Thailand, France, Japan, and the UK.
Living overseas may not be for everyone, but at least take a moment to consider that you don't have to consider only how things are in your area/country.

You mentioned being over 30 and compared things in your area to how they might be in America, but I didn't get my first job in America until I was 35. Implicitly that means my first job, having no prior experience on my résumé, was overseas.


Of course you can certainly have valid reasons for wanting to stay in your country. Maybe you already have a family or want to start a family and prefer to be in your native land for doing so.
I just want to make sure you are not artificially closing doors. Unless you have a reason for staying in Germany, the entire world is open to you (but I recommend avoiding North Korea), regardless of degrees and prior work experience.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

By the way, if I was to try out DX11
- is it feasible to do that within .NET / C# ?

I saw there's a still developed (from the looks of it) wrapper called "SharpDX".
Is it actually usable, or as cumbersome and obstructive as some of the other .NET wrappers (to e.g. OGL) that I've seen?

Programming the scaffolding that does something with DX graphics experiments, and debug helpers and all that, would be so much nicer in C# than C++ ;-)

By the way, if I was to try out DX11
- is it feasible to do that within .NET / C# ?

SharpDX has been around for a long time and is very mature. It's also a bit nicer to use than the C++ API because you get code completion for enums, exceptions when things don't work, stronger typing, etc. The catch is that all the documentation for D3D is written for the C++ API, so you have to translate back and forth (which is pretty trivial, but still).

The bigger benefit is programming in C# of course.

This topic is closed to new replies.
