Funkymunky

Member Since 28 Jul 1999
Offline Last Active Today, 05:31 PM

Topics I've Started

BRDF Shading Diffuse + Specular

16 February 2015 - 10:55 AM

I've started to look into physically based rendering.  I'm particularly interested in the Unreal 4 stuff.  I've read the Disney paper and the Epic paper, several gamedev.net forum posts and various other Google results.  I think I have a decent handle on preconvolving the cubemap for roughness and generating the integrated BRDF lookup table.

 

I'm struggling, however, to actually apply the Diffuse and Specular components, particularly with regard to the "metallic" parameter.  The Disney paper states:

 

metallic - the metallic-ness (0 = dielectric, 1 = metallic). This is a linear blend between two different models. The metallic model has no diffuse component and also has a tinted incident specular, equal to the base color.
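
As far as I can tell, that blend is usually interpreted something like this (a rough C++-style sketch of the "metallic workflow"; the 0.04 dielectric reflectance is what the Epic course notes seem to use, though I'm not sure that's what I should plug in below, and Vec3/ShadePBR are just placeholder names):

struct Vec3 { float x, y, z; };

Vec3 Lerp(Vec3 a, Vec3 b, float t)
{
   return { a.x + (b.x - a.x) * t,
            a.y + (b.y - a.y) * t,
            a.z + (b.z - a.z) * t };
}

Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// baseColor drives both terms; metallic linearly blends between the
// dielectric and metallic models described in the quote.
void ShadePBR(Vec3 baseColor, float metallic, Vec3 &diffuseAlbedo, Vec3 &F0)
{
   const Vec3 dielectricF0 = { 0.04f, 0.04f, 0.04f };      // typical non-metal reflectance
   diffuseAlbedo = Scale(baseColor, 1.0f - metallic);       // metals have no diffuse component
   F0            = Lerp(dielectricF0, baseColor, metallic); // metals tint the incident specular
}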

 

 

When I look at various examples of a shader like this producing a dielectric result, I see stuff like this, which looks awesome.  But when I try a naive approach of "Color = Diffuse + Specular", my results do not look nearly as awesome:

 

These are both with a "metallic" and "roughness" of 0, for a dielectric, which is what I think that Unreal screenshot is using.

 

With a BaseColor of (0, 0, 0)

 

With a BaseColor of (1, 0, 0)

 

I'm doing my Diffuse and Specular like this:

 

float3 Diffuse = BaseColor * (1.0 - metallic);        // metals get no diffuse term
float3 SpecularColor = float3(1.0, 1.0, 1.0);         // specular color for the dielectric case
float3 SpecularBase = (metallic * BaseColor) + ((1.0 - metallic) * SpecularColor);  // tint toward BaseColor for metals
float3 Specular = ApproximateSpecularIBL(SpecularBase, roughness, N, V);

FinalColor = Diffuse + Specular;

...How should I be generating a dielectric result?


Deleting from the Destructor When Going Out of Scope

02 February 2015 - 03:44 PM

I have a class that is going to allocate a block of data.  I want the class to also delete that block when it's done being used.  However, I want to be able to pass the block to another function.  So I'm thinking of using a move constructor that keeps track of which instance should delete the block in its destructor.  Like this:

 



class ResourceBuffer
{
public:
   ResourceBuffer(const unsigned int &size)
   {
      Buffer = new char[size];
   }

   // Move constructor: steal the allocation and null out the source,
   // so only one object ever calls delete[] on it.
   ResourceBuffer(ResourceBuffer &&r)
   {
      Buffer = r.Buffer;
      r.Buffer = nullptr;
   }

   ~ResourceBuffer()
   {
      if(Buffer)
         delete[] Buffer;
   }

   char *Buffer;
};

Then I could have another function that does this:



ResourceBuffer GetResourceBuffer(const unsigned int &size)
{
   ResourceBuffer RBuffer(size);
   return RBuffer;
}

And use it like this:

void DoSomething()
{
   ResourceBuffer Data = GetResourceBuffer(256);

   // do something with Data.Buffer
}

And the delete[] would be called when "DoSomething" exits, but not when the local copy from "GetResourceBuffer" goes out of scope.  I'm new to using move semantics, so I'm wondering whether this is incredibly dangerous or foolish.  Is there any reason not to do it this way?
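
I realize I could probably just use std::unique_ptr<char[]> for this instead, something like the sketch below (ResourceBuffer2 and GetResourceBuffer2 are placeholder names, and std::make_unique<char[]> also value-initializes the block, unlike raw new char[size]), but I'd still like to understand whether the hand-written version above is sound:

#include <memory>

class ResourceBuffer2
{
public:
   explicit ResourceBuffer2(unsigned int size)
      : Buffer(std::make_unique<char[]>(size))
   {
   }

   // Moves are generated automatically; copies are implicitly disabled
   // because unique_ptr itself is move-only.
   std::unique_ptr<char[]> Buffer;
};

ResourceBuffer2 GetResourceBuffer2(unsigned int size)
{
   return ResourceBuffer2(size);   // moved (or elided) out of the function
}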


Loading Resource Data

02 February 2015 - 10:55 AM

   I am thinking about how I want to handle file loading for resources like models, textures, etc. from disk.  What I'm thinking so far is to have a separate thread with a queue.  When I want to load a model, for instance, I submit a request to the queue.  The thread constantly checks the queue and loads the next requested item.  It reads the data all at once into a buffer in RAM, and then signals back to the rest of the code that the data is available to be parsed.

 

   What I'm wondering is, what is the best way to allocate that block of memory for loading the data into?  Should I allocate a new buffer every time I load an object?  Or should I pre-allocate a big block of memory, load into it, and provide an offset pointer into the block?  I'm thinking of using this for streaming data as the player moves through a large world.
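
To make the shape of what I have in mind concrete, here is a rough sketch (LoadRequest, LoaderThread, and the callback are placeholder names; error handling and the pre-allocated-block question are left out):

#include <condition_variable>
#include <fstream>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A request is a path plus a callback invoked once the raw bytes are in RAM.
struct LoadRequest
{
   std::string path;
   std::function<void(std::vector<char>)> onLoaded;
};

class LoaderThread
{
public:
   LoaderThread() : worker([this] { Run(); }) {}

   ~LoaderThread()
   {
      { std::lock_guard<std::mutex> lock(mtx); quit = true; }
      cv.notify_one();
      worker.join();
   }

   void Submit(LoadRequest request)
   {
      { std::lock_guard<std::mutex> lock(mtx); pending.push(std::move(request)); }
      cv.notify_one();
   }

private:
   void Run()
   {
      for(;;)
      {
         LoadRequest request;
         {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [this] { return quit || !pending.empty(); });
            if(quit && pending.empty())
               return;
            request = std::move(pending.front());
            pending.pop();
         }

         // Read the whole file into one buffer, then hand it back for parsing.
         std::ifstream file(request.path, std::ios::binary | std::ios::ate);
         std::vector<char> data(static_cast<size_t>(file.tellg()));
         file.seekg(0);
         file.read(data.data(), data.size());
         request.onLoaded(std::move(data));
      }
   }

   std::mutex mtx;
   std::condition_variable cv;
   std::queue<LoadRequest> pending;
   bool quit = false;
   std::thread worker;   // declared last so the other members exist before Run() starts
};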


Manual MipMap Generation

16 January 2015 - 11:26 PM

I am trying to manually render my own mipmap chain using a framebuffer object.  It works on the first pass (for the first mip level), but doesn't output anything on subsequent passes.  I set it up like this:

	glBindFramebuffer(GL_FRAMEBUFFER, fboID);
	glUseProgram(programID);
	glBindTexture(GL_TEXTURE_2D, texID);

	for(unsigned int i = 1; i < 10; i++)
	{
		unsigned int Dim = (512 >> i);
		glViewport(0, 0, Dim, Dim);
		glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texID, i);

		// <-- Render Quad -->
	}

So when I look at my texture, level 0 is the source texture I expect, and level 1 has the correct first mip level rendered... but every level from 2 on up is blank.  It's as if the render never happened and the program never ran.  If I modify the loop to start at 2, or 3, etc., then I see that mipmap level rendered correctly.  What gives?
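
One thing I'm considering trying is clamping the texture's readable mip range to the previously written level before each pass, so the sampler never reads the level that's currently attached to the framebuffer, something like this (using GL_TEXTURE_BASE_LEVEL / GL_TEXTURE_MAX_LEVEL; I'm not sure yet whether this is actually the culprit):

	glBindFramebuffer(GL_FRAMEBUFFER, fboID);
	glUseProgram(programID);
	glBindTexture(GL_TEXTURE_2D, texID);

	for(unsigned int i = 1; i < 10; i++)
	{
		unsigned int Dim = (512 >> i);
		glViewport(0, 0, Dim, Dim);

		// Only expose level i - 1 to the shader for this pass.
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, i - 1);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, i - 1);

		glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texID, i);

		// <-- Render Quad -->
	}

	// Restore the full chain once every level has been written.
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 9);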

 


CubeMap Coordinates

15 January 2015 - 09:52 PM

I'm trying to manually render the mipmap levels of a cubemap.  First I rendered level 0 of the cubemap by setting up a framebuffer and passing 6 projection/modelview matrices to a geometry shader, duplicating the triangles for each face.  Next I attached level 1 of the cubemap and bound the same texture as the input to the pixel shader.

 

And then I realized that I need a float3/vec3 to sample the cubemap.  Is there a good way to build this vector so that I can sample the 6 faces cleanly?  Or do I need to re-evaluate my setup and attach each face independently...?
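
For reference, the per-face mapping I think I need looks something like this, following the standard OpenGL cube map face layout (a rough C++-style sketch; Dir and CubeFaceDirection are placeholder names, u and v are in [-1, 1], and the direction doesn't need to be normalized for a cube map lookup):

struct Dir { float x, y, z; };

Dir CubeFaceDirection(int face, float u, float v)
{
   switch(face)
   {
   case 0:  return {  1.0f,    -v,    -u };  // +X
   case 1:  return { -1.0f,    -v,     u };  // -X
   case 2:  return {     u,  1.0f,     v };  // +Y
   case 3:  return {     u, -1.0f,    -v };  // -Y
   case 4:  return {     u,    -v,  1.0f };  // +Z
   default: return {    -u,    -v, -1.0f };  // -Z
   }
}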

