
OpenGL BRDFs in shaders


Recommended Posts

I'm reading Real-Time Rendering (3rd edition), and the author expresses the BRDF for non-area light sources as:

f(l, v) = L_o(v) / (E_L cos θ_i)

 

1. What's the difference between irradiance and radiance? English isn't my first language and dictionaries fail me.

 

2. "... is the irradiance of a light source measured in a plane perpendicular to the light direction vector l" - can someone explain this? How can a light quantity be measured in a plane?

 

3. I followed basic OpenGL books and tutorials and I never worked with this kind of stuff. Radiance and irradiance were never introduced in the shaders. Is this concept really needed?

Edited by Pilpel


Thanks for the explanation. I'm so mad. This chapter is absolutely cancer.

Do you know any source that explains BRDFs better than this book? All I care about is shader code, really...

Edited by Pilpel


If I had to recommend one book it'd probably be PBRT (Physically Based Rendering). If you just care about shader code you might as well google for that. Plenty of code out there. Most notably the Unreal Engine 4 source code on GitHub.

 

My two cents about this: I remember spotting a bug in the PDF (probability density function) for their BRDF in their codebase, so trying to decipher the math by looking at code is probably not the best way to tackle this. People make mistakes. Programmers make a lot of mistakes. The chance of running into something you don't understand simply because there's a mistake in there is much higher than for stuff that was written down on paper.


All I care about is shader code, really..

I take that back, I was really mad.

I did, however, give up on the explanations from Real-Time Rendering. They are really hard for me to understand.

Also, it seems like the author was explaining PBR without saying it was PBR..?

 

I'm looking for better (easier) explanations about this topic, rather than the book yoshi_t mentioned. Any idea?

Edited by Pilpel


 

I'm looking for better (easier) explanations about this topic, rather than the book yoshi_t mentioned. Any idea?

 

 

As a piece of advice, it might be better to work through the hard stuff if you want to understand BRDFs and PBR. You're going to hit a point where there's no way around the math and radiometry theory, and you're going to hit it fast.

 

You're entering a realm of graphics where there are no more training wheels; better get used to it sooner rather than later.


 

I'm looking for better (easier) explanations about this topic, rather than the book yoshi_t mentioned. Any idea?

 

okay let's see....

 

Irradiance is the quantity of light energy arriving at a single surface location from all incoming directions (usually written as E in the literature).

Now you may be confused and say: "But hey, if irradiance (E) is measured at a single location, what's E_L then, aka the irradiance measured on a surface perpendicular to L?"

The irradiance perpendicular to L (E_L) is the amount of energy passing through a unit-sized plane (don't get confused by this; it's just something to make the measuring easier). You can think of it as the amount of energy the light source itself emits. Think of a light bulb emitting light with some intensity into a direction. That is your E_L.

Radiance, on the other hand, is basically the same kind of quantity as irradiance (also remember radiance can be incoming or outgoing energy!), but measured not over all directions, only over a limited, focused set of directions (think of a camera lens focusing light into a small set of directions; that set is the solid angle).

In the equation above it shows outgoing radiance (L_o) which is light reflecting from your surface location into a certain amount of directions.

 

I hope that is somewhat easier to understand... if it's still a little too hard to grasp, here's the short version:

 

1. Irradiance = light energy on a single location from all incoming directions

    Radiance = light energy on a single location from a small set of directions (a solid angle)

    Solid angle = a small set of directions in 3D space (think of a piece of cake)

 

2. Irradiance measured on a plane perpendicular to the light direction = light flowing through a unit-sized plane (for measurement's sake) to basically tell you how much energy the light is emitting/transmitting

 

3. Pretty sure if you've done anything that involves light or texture color you've made use of those equations (even if you didn't know).

Radiometry is just a way to mathematically or physically explain / define those things

 

 

The problem with radiometry is often that the "basics" are confusing, since they are already based on simplifications or approximations of more advanced equations.

Maybe try to keep going and see if it starts to make more sense going further...

For example, it made more sense to me later on, when they explain how irradiance is obtained by summing up incoming radiance over all directions.
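That "summing up over all directions" can be checked numerically. Here is a small Python sketch (my own illustration, not from the book) that sums constant incoming radiance L over the hemisphere above a point and recovers the well-known result E = π · L:

```python
import math

def irradiance_from_uniform_radiance(L, n_theta=200):
    """Midpoint-rule sum of E = integral of L * cos(theta) d_omega over the
    hemisphere, where the solid-angle element is d_omega = sin(theta) dtheta dphi."""
    d_theta = (math.pi / 2) / n_theta
    E = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # The radiance is constant in phi, so the phi integral is just 2*pi.
        E += L * math.cos(theta) * math.sin(theta) * d_theta * (2 * math.pi)
    return E

# For constant radiance L = 1, irradiance comes out to approximately pi.
print(irradiance_from_uniform_radiance(1.0))
```

The 1/π that shows up later in a normalized Lambertian BRDF is exactly this factor of π being divided back out.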

Edited by lipsryme


Thanks a lot!

 

I went back to the basics and I'm not sure if I understand the difference between E and E_L.

E is irradiance from all directions onto a (unit?) plane. E_L is irradiance onto a plane, when only measuring light from direction L?

Edited by Pilpel


E is the irradiance measured on a surface location (in your shading that would be the pixel you shade on your geometry).

E_L is irradiance measured not on a surface location but on a unit plane. The L subscript tells you already that it is irradiance corresponding to the light source.

 

edit: What you wrote is correct, although it's kind of confusing to think about shading a surface location as measuring light on a plane. This plane is imaginary, but it is perpendicular to the normal vector at that location; that's where the N dot L term comes from, which scales the amount of light depending on the angle at which the light hits this plane.

 

Here are a few quotes from Real-Time Rendering:

 

- "The emission of a directional light source can be quantified by measuring power through a unit area surface perpendicular to L. This quantity, called irradiance, is equivalent to the sum of energies of the photons passing through the surface in one second".

Note: He's talking about E_L here, irradiance perpendicular to the light source L.

 

- "Although measuring irradiance at a plane perpendicular to L tells us how bright the light is in general, to compute its illumination on a surface, we need to measure irradiance at a plane parallel to that surface..."

Note: He goes on to talk about how the N dot L factor is derived...

 

On page 103 you can see that the irradiance E is equal to the irradiance E_L times the cosine of the angle between the surface normal N and the light direction L.

E = E_L * cos_theta_i
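A quick numeric check of that relation, with illustrative values of my own:

```python
import math

# E = E_L * cos(theta_i): surface irradiance falls off as the light
# direction tilts away from the surface normal.
E_L = 1.0
for deg in (0, 60, 90):
    E = E_L * math.cos(math.radians(deg))
    print(deg, round(E, 3))  # 0 -> 1.0, 60 -> 0.5, 90 -> 0.0
```

At 60 degrees the same beam is spread over twice the area, so the surface only receives half the irradiance; at grazing incidence it receives none.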

 

Looking at the equation in your original post, it now makes sense, because it translates the BRDF into:

f(l,v) = outgoing_radiance / irradiance

aka the ratio between outgoing light into a small set of directions (in this case the direction of our sensor/eye, which is the vector V) and incoming light at this surface location (or rather at a plane perpendicular to the surface normal N)

 

So finally, to translate this into actual HLSL code, your very simple light equation could look like this:

float3 E_L = light_intensity * light_color;
float cos_theta_i = saturate(dot(N, -L)); // negate L because we go from surface to light
float3 E = E_L * cos_theta_i;

// We actually output outgoing radiance here but since this is a very simplified/approximated BRDF 
// we can set this equal since we assume that diffuse light is reflected the same in all directions
return E;

which is Lambertian shading / the Lambertian BRDF :)
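The same idea in Python, for anyone who wants to poke at it outside a shader. Note the albedo/pi factor is my addition; it's the normalization a physically based Lambertian BRDF would use, which the simplified HLSL above folds away. The vector helpers and names are my own, not from the post:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    # HLSL saturate(): clamp to [0, 1]
    return max(0.0, min(1.0, x))

def lambert_radiance(albedo, light_intensity, N, L):
    # E_L: irradiance of the light measured perpendicular to L
    E_L = light_intensity
    # Negate L because L points from the light to the surface,
    # and we want the surface-to-light direction for N dot L.
    cos_theta_i = saturate(dot(normalize(N), [-c for c in normalize(L)]))
    E = E_L * cos_theta_i  # E = E_L * cos(theta_i)
    # Lambertian BRDF f = albedo / pi, so outgoing radiance L_o = f * E
    return (albedo / math.pi) * E
```

For example, with the normal straight up and the light shining straight down, cos_theta_i is 1 and the result is simply albedo * E_L / pi; at grazing incidence it drops to zero.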

Edited by lipsryme
