OpenGL Textures not working (CgFX + OpenGL): draws black silhoette (works ok in FXComposer)


Hi all,

My first CgFX application is giving me trouble. I've managed to get some very simple shaders working, but am having a lot of trouble passing textures to one. I'm trying to make use of existing libs (namely NVIDIA's nv::Image for loading the image data from file). The shader works fine in FX Composer (2.5b). The code is clean enough and should be logging all errors (using 'cgGetLastErrorString()' and 'glGetError()' etc.).

Shading appears OK when I don't use the texture. But as soon as the texture is used, the CgFX shader compiles without warning and runs without error... yet the test sphere I'm trying to draw is a black circle (which my arcball manipulator spins without any trouble).

Can anyone suggest why, when I use the following code, all I see is a black circle? I've been chipping away at this for days (I hate to admit it's becoming weeks...). Can someone please put me out of my misery? Even if it looks OK to you, any opinions/suggestions much appreciated.
// simpleTextured.cgfx

// This is C2E1v_green from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 38.

float4x4 modelViewProj : WorldViewProjection < string UIWidget="None"; >; 

float3 globalAmbient : Ambient    = { 0.1, 0.1, 0.1 };
float3 lightColor : Specular <
    string UIName =  "Lamp 0";
    string Object = "Pointlight0";
    string UIWidget = "Color";
> = {1.0f,1.0f,1.0f};
float3 lightPosition : Position <
    string Object = "PointLight0";
    string UIName =  "Lamp 0 Position";
    string Space = "World";
> = {0.81,-3.65,5};
float3 eyePosition   : Position   = { 0, 0, 13 };
float3 Ke : Emissive < string UIWidget = "Color"; > = {0.0, 0.0, 0.0};
float3 Ka : Ambient  = {0.0, 0.0, 0.0};
float3 Kd : Diffuse  = {0.5, 0.0, 0.0};
float3 Ks < string UIWidget = "Color"; > = {0.7, 0.6, 0.6};
float  shininess <
    string UIWidget = "slider";
    float UIMin = 0.0;
    float UIMax = 100.0;
    float UIStep = 1.0;
    string UIName =  "Specular";
> = 32.0;

//////// COLOR & TEXTURE /////////////////////

texture ColorTexture  <
    string ResourceName = "default_color.dds";
    string UIName =  "Diffuse Texture";
    string ResourceType = "2D";
>;

sampler2D ColorSampler = sampler_state {
    Texture = <ColorTexture>;
    MinFilter = LinearMipMapLinear;
    MagFilter = Linear;
    WrapS = Repeat;
    WrapT = Repeat;
};
// This is C5E2v_fragmentLighting from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 124.

void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          float2 uv       : TEXCOORD0,
          out float4 oPosition : POSITION,
          out float2 oUV       : TEXCOORD0,
          out float3 objectPos : TEXCOORD1,
          out float3 oNormal   : TEXCOORD2)
{
    oPosition = mul(modelViewProj, position);
    objectPos = position.xyz;
    oNormal   = normal;
    oUV       = uv;
}

// This is C5E3f_basicLight from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 125.

void psLight(float4 position : TEXCOORD0,
             float2 uv       : TEXCOORD1,
             float3 normal   : TEXCOORD2,
             out float4 color : COLOR)
{
  float3 P = position.xyz;
  float3 N = normalize(normal);

  // Compute emissive term
  float3 emissive = Ke;

  // Compute ambient term
  float3 ambient = Ka * globalAmbient;

  // Compute the diffuse term
  float3 L = normalize(lightPosition - P);
  //float diffuseLight = max(dot(L, N), 0);
  float3 diffuseLight = tex2D(ColorSampler, uv) + max(dot(L, N), 0);
  float3 diffuse = Kd * lightColor * diffuseLight;

  // Compute the specular term
  float3 V = normalize(eyePosition - P);
  float3 H = normalize(L + V);
  float specularLight = pow(max(dot(H, N), 0), shininess);
  if (diffuseLight.x <= 0) specularLight = 0;  // diffuseLight is a float3 here
  float3 specular = Ks * lightColor * specularLight;

  color.xyz = emissive + ambient + diffuse + specular;
  color.w = 1;
}

technique NewTechnique <
    string Script = "Pass=p0;";
> {
    pass p0 <
        string Script = "Draw=geometry;";
    > {
        VertexProgram = compile vp40 main();
        DepthTestEnable = true;
        DepthMask = true;
        CullFaceEnable = false;
        BlendEnable = false;
        DepthFunc = LEqual;
        FragmentProgram = compile fp40 psLight();
    }
}

//  Initialize() CPP

    cgContext = cgCreateContext();
    LOG_CG("cgCreateContext", cgContext);
    cgVertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
    cgFragmentProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    LOG_CG("cgGLSetOptimalOptions", cgContext);
    LOG("Cg Vertex Profile:   " << cgGetProfileString(cgVertexProfile));
    LOG("Cg Fragment Profile: " << cgGetProfileString(cgFragmentProfile));

    PathResolver pathResolver("");
    //const std::string cgFXFile = "phong.cgfx";
    //const std::string cgFXFile = "green.cg";
    //const std::string cgFXFile = "vertlight.cgfx";
    const std::string cgFXFile = "blinn.cgfx";
    std::string resolvedCgFXFile = "";
    if (pathResolver.getFilePath(cgFXFile, resolvedCgFXFile)) {
        cgProgram = cgCreateProgramFromFile(cgContext, CG_SOURCE, resolvedCgFXFile.c_str(), cgVertexProfile, "main", 0);
        LOG_CG("cgCreateProgramFromFile(" << resolvedCgFXFile << ")", cgContext);
    } else {
        LOG_ERROR("File not found: " << cgFXFile);
    }

    const std::string modelFile = "cow.obj";
    std::string resolvedModelFile = "";
    if (pathResolver.getFilePath(modelFile, resolvedModelFile)) {
        model = Utils::LoadModel(resolvedModelFile.c_str(), &modelBBMin, &modelBBMax);
    } else {
        LOG_ERROR("File not found: " << modelFile);
    }

    LOG_CG("cgGLLoadProgram", cgContext);

    if(cgProgram == 0)
        LOG_ERROR("Invalid Cg program. This program will now exit...");

    LOG_CG("cgGLRegisterStates", cgContext);
    cgGLSetManageTextureParameters(cgContext, CG_TRUE); 
    LOG_CG("cgGLSetManageTextureParameters", cgContext);
    cgEffect = cgCreateEffectFromFile(cgContext, resolvedCgFXFile.c_str(), NULL); 
    LOG_CG("cgCreateEffectFromFile(" << resolvedCgFXFile << ")", cgContext);

    cgTechnique = cgGetFirstTechnique(cgEffect);
    while (cgTechnique && cgValidateTechnique(cgTechnique) == CG_FALSE) {
        LOG_ERROR("Technique '" << cgGetTechniqueName(cgTechnique) << "' did not validate");
        cgTechnique = cgGetNextTechnique(cgTechnique);
    }
    LOG_CG("CG Techniques initialise", cgContext);

    if (cgTechnique) {
        LOG("Using Cg technique '" << cgGetTechniqueName(cgTechnique) << "' from '" << cgFXFile << "'.");
    } else {
        LOG_ERROR("Valid Cg Technique not found");
    }

    // Load the texture (if needed)
    CGparameter param = cgGetNamedParameter(cgProgram, "ColorSampler");
    LOG_CG("cgGetNamedParameter", cgContext);
    if (param != 0) {

        image = new nv::Image();
        const std::string texFile = "Default_color.dds";
        std::string resolvedTexFile = "";
        if (!(pathResolver.getFilePath(texFile, resolvedTexFile) &&
              image->loadImageFromFile(resolvedTexFile.c_str()))) {
            LOG_ERROR("File not found: " << texFile);
        }

        LOG("Texture loaded (" << resolvedTexFile << ")");

        GLuint texName;
        glGenTextures(1, &texName);
        glBindTexture(GL_TEXTURE_2D, texName);
        glTexImage2D(GL_TEXTURE_2D, 0, image->getInternalFormat(), image->getWidth(), image->getHeight(), 0, image->getFormat(), image->getType(), image->getLevel(0));

        cgGLSetTextureParameter(param, texName);
        LOG_CG("cgGLSetTextureParameter(param, texName)", cgContext);
    } else {
        LOG("WARNING: No texture parameter found. Expecting 'sampler2D ColorSampler'.");
    }
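Not something the thread establishes, but worth noting against the upload above: both sampler_state blocks ask for MinFilter = LinearMipMapLinear, while glTexImage2D uploads only mip level 0. In OpenGL, a texture minified with a mipmapping filter but missing the rest of its mip chain is "incomplete", and incomplete textures sample as black. A small sketch of how many levels a complete chain needs (the helper name is mine, not from the post):

```cpp
#include <algorithm>

// Number of mipmap levels a complete chain needs for a WxH texture:
// level 0 is WxH, each following level halves each dimension
// (rounding down, clamping at 1) until the 1x1 level.
int mipLevelCount(int width, int height) {
    int levels = 1;
    int largest = std::max(width, height);
    while (largest > 1) {
        largest /= 2;
        ++levels;
    }
    return levels;
}
```

For a 256x256 image this asks for 9 levels, so uploading level 0 alone leaves the texture incomplete. If that is the culprit, either upload the full chain (e.g. gluBuild2DMipmaps, the usual route in that era) or set MinFilter = Linear in the CgFX sampler_state.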


//  Display() in CPP

    CGparameter param = 0;
    param = cgGetEffectParameterBySemantic(cgEffect, "WorldViewProjection");
    if (param != 0)
        cgGLSetStateMatrixParameter(param, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);
    LOG_CG("cgGLSetStateMatrixParameter(WorldViewProjection)", cgContext);

    CGpass pass = cgGetFirstPass(cgTechnique);
    while (pass) {
        cgSetPassState(pass);
        //DrawModel(model, modelBBMin, modelBBMax);
        glutSolidSphere(0.5, 8, 8);
        cgResetPassState(pass);
        pass = cgGetNextPass(pass);
    }
[Edited by - axon on June 16, 2008 12:40:29 AM]

Well, like you may have gathered from my other thread you replied to, my understanding of Cg is not great, but there's something there I have never seen before: you don't pass the sampler2D into the pixel shader, you access it globally. I didn't know this was possible! Maybe you could try:

void psLight(float4 position : TEXCOORD0,
             float2 uv       : TEXCOORD1,
             float3 normal   : TEXCOORD2,
             uniform sampler2D colorSampler,
             out float4 color : COLOR)
{
    // ...
    float3 diffuseLight = tex2D(colorSampler, uv) + max(dot(L, N), 0);
    // ...
}

technique NewTechnique <
    string Script = "Pass=p0;";
> {
    pass p0 <
        string Script = "Draw=geometry;";
    > {
        VertexProgram = compile vp40 main();
        DepthTestEnable = true;
        DepthMask = true;
        CullFaceEnable = false;
        BlendEnable = false;
        DepthFunc = LEqual;
        FragmentProgram = compile fp40 psLight(ColorSampler);
    }
}

Original post by bluntman
Well like you may have gathered from my other thread you replied to, my understanding of Cg is not great, but theres something there I have never seen before: you don't pass the sampler2D into the pixel shader, you access it globally. I didn't know this was possible! Maybe you could try:
*** Source Snippet Removed ***

Thanks for the suggestion. Unfortunately what you've noticed is one of the differences between Cg and CgFX. In Cg you pass samplers and other variables to the shader methods as uniforms. In CgFX you can (should?/must?) declare them as globals. (I just had a look at a few FXComposer shaders and they're all handling the sampler in the same way as the above code.)

Any other suggestions?
Maybe my OpenGL code is wrong? (I worked from the latest OpenGL Bible (6th ed.), but I guess I could still have missed something?)

Well, I am using CgFX in my current project and I pass everything as uniforms from the globals. It makes more sense that way: what if your vertex and fragment shaders are in different source files? I don't think CgFX resolves like that. I have never used FX Composer, but all the CgFX examples I have seen, including those in the Cg 2.0 users manual, show all variables passed as uniforms, e.g. page 123 of the CgUsersManual.pdf that comes with the Cg 2.0 SDK.

Just gave it a try using the following shader but I get the same result.


/*
Copyright NVIDIA Corporation 2007

To learn more about shading, shaders, and to bounce ideas off other shader
authors and users, visit the NVIDIA Shader Library Forums at:
*/

// #define FLIP_TEXTURE_Y

float Script : STANDARDSGLOBAL <
string UIWidget = "none";
string ScriptClass = "object";
string ScriptOrder = "standard";
string ScriptOutput = "color";
string Script = "Technique=Main;";
> = 0.8;


float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
float4x4 WvpXf : WorldViewProjection < string UIWidget="None"; >;
float4x4 WorldXf : World < string UIWidget="None"; >;
float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;

//// TWEAKABLE PARAMETERS ////////////////////

/// Point Lamp 0 ////////////
float3 Lamp0Pos : Position <
string Object = "PointLight0";
string UIName = "Lamp 0 Position";
string Space = "World";
> = {-0.5f,2.0f,1.25f};
float3 Lamp0Color : Specular <
string UIName = "Lamp 0";
string Object = "Pointlight0";
string UIWidget = "Color";
> = {1.0f,1.0f,1.0f};

// Ambient Light
float3 AmbiColor : Ambient <
string UIName = "Ambient Light";
string UIWidget = "Color";
> = {0.07f,0.07f,0.07f};

float Ks <
string UIWidget = "slider";
float UIMin = 0.0;
float UIMax = 1.0;
float UIStep = 0.05;
string UIName = "Specular";
> = 0.4;

float Eccentricity <
string UIWidget = "slider";
float UIMin = 0.0;
float UIMax = 1.0;
float UIStep = 0.0001;
string UIName = "Highlight Eccentricity";
> = 0.3;

//////// COLOR & TEXTURE /////////////////////

texture ColorTexture <
    string ResourceName = "default_color.dds";
    string UIName = "Diffuse Texture";
    string ResourceType = "2D";
>;

sampler2D ColorSampler = sampler_state {
    Texture = <ColorTexture>;
    MinFilter = LinearMipMapLinear;
    MagFilter = Linear;
    WrapS = Repeat;
    WrapT = Repeat;
};

// #define this macro to permit the import and use of shared shadow
// maps created by COLLADA-FX. Make sure that the macro is defined
// and the code recompiled *before* executing "Convert to Collada-FX"!
// #define USE_SHARED_SHADOW

#ifdef USE_SHARED_SHADOW
#include "include/shadowMap.cgh"

float ShadDens <
    string UIWidget = "slider";
    float UIMin = 0.0;
    float UIMax = 1.0;
    float UIStep = 0.01;
    string UIName = "Shadow Density";
> = 0.7;
#endif /* USE_SHARED_SHADOW */

//////// CONNECTOR DATA STRUCTURES ///////////

/* data from application vertex buffer */
struct appdata {
    float3 Position : POSITION;
    float4 UV : TEXCOORD0;
    float4 Normal : NORMAL;
    float4 Tangent : TANGENT0;
    float4 Binormal : BINORMAL0;
};

/* data passed from vertex shader to pixel shader */
struct vertexOutput {
    float4 HPosition : POSITION;
    float2 UV : TEXCOORD0;
    // The following values are passed in "World" coordinates since
    // it tends to be the most flexible and easy for handling
    // reflections, sky lighting, and other "global" effects.
    float3 LightVec : TEXCOORD1;
    float3 WorldNormal : TEXCOORD2;
    float3 WorldTangent : TEXCOORD3;
    float3 WorldBinormal : TEXCOORD4;
    float3 WorldView : TEXCOORD5;
#ifdef USE_SHARED_SHADOW
    // This optional value expresses the current location in "light"
    // coordinates for use with shadow mapping.
    float4 LProj : LPROJ_COORD;
#endif /* USE_SHARED_SHADOW */
};

///////// VERTEX SHADING /////////////////////

/*********** Generic Vertex Shader ******/

vertexOutput main(appdata IN) {
    vertexOutput OUT = (vertexOutput)0;
    OUT.WorldNormal = mul(WorldITXf,IN.Normal).xyz;
    OUT.WorldTangent = mul(WorldITXf,IN.Tangent).xyz;
    OUT.WorldBinormal = mul(WorldITXf,IN.Binormal).xyz;
    float4 Po = float4(IN.Position.xyz,1);
    float3 Pw = mul(WorldXf,Po).xyz;
    OUT.LightVec = (Lamp0Pos - Pw);
#ifdef FLIP_TEXTURE_Y
    OUT.UV = float2(IN.UV.x,(1.0-IN.UV.y));
#else /* !FLIP_TEXTURE_Y */
    OUT.UV = IN.UV.xy;
#endif /* !FLIP_TEXTURE_Y */
#ifdef USE_SHARED_SHADOW
    float4 Pl = mul(ShadowViewProjXf,Pw); // "P" in light coords
    float4x4 BiasXf = make_bias_mat(ShadBias);
    OUT.LProj = mul(BiasXf,Pl); // bias to make texcoord
#endif /* USE_SHARED_SHADOW */
    OUT.WorldView = normalize(float3(ViewIXf[0].w,ViewIXf[1].w,ViewIXf[2].w) - Pw);
    OUT.HPosition = mul(WvpXf,Po);
    return OUT;
}

///////// PIXEL SHADING //////////////////////

// Utility function for blinn shading

void blinn_shading(vertexOutput IN,
                   float3 LightColor,
                   float3 Nn,
                   float3 Ln,
                   float3 Vn,
                   uniform sampler2D colorSampler,
                   out float3 DiffuseContrib,
                   out float3 SpecularContrib)
{
    float3 Hn = normalize(Vn + Ln);
    float hdn = dot(Hn,Nn);
    float3 R = reflect(-Ln,Nn);
    float rdv = dot(R,Vn);
    rdv = max(rdv,0.001);
    float ldn = dot(Ln,Nn);
    ldn = max(ldn,0.0);
    float ndv = dot(Nn,Vn);
    float hdv = dot(Hn,Vn);
    float eSq = Eccentricity*Eccentricity;
    float distrib = eSq / (rdv * rdv * (eSq - 1.0) + 1.0);
    distrib = distrib * distrib;
    float Gb = 2.0 * hdn * ndv / hdv;
    float Gc = 2.0 * hdn * ldn / hdv;
    float Ga = min(1.0,min(Gb,Gc));
    float fresnelHack = 1.0 - pow(ndv,5.0);
    hdn = distrib * Ga * fresnelHack / ndv;
    DiffuseContrib = ldn * LightColor;
    SpecularContrib = hdn * Ks * LightColor;
}

float4 std_PS(vertexOutput IN,
              uniform sampler2D colorSampler) : COLOR {
    float3 diffContrib;
    float3 specContrib;
    float3 Ln = normalize(IN.LightVec);
    float3 Vn = normalize(IN.WorldView);
    float3 Nn = normalize(IN.WorldNormal);
    blinn_shading(IN,Lamp0Color,Nn,Ln,Vn,colorSampler,diffContrib,specContrib);
    float3 diffuseColor = tex2D(colorSampler,IN.UV).rgb;
#ifdef USE_SHARED_SHADOW
    float shadowed = tex2Dproj(DepthShadSampler,IN.LProj).x;
    float faded = 1.0-(ShadDens*(1.0-shadowed));
    diffContrib *= faded;
    specContrib *= shadowed;
#endif /* USE_SHARED_SHADOW */
    float3 result = specContrib+(diffuseColor*(diffContrib+AmbiColor));
    // return as float4
    return float4(result,1);
}

///// TECHNIQUES /////////////////////////////

technique Main <
    string Script = "Pass=p0;";
> {
    pass p0 <
        string Script = "Draw=geometry;";
    > {
        VertexProgram = compile vp40 main();
        DepthTestEnable = true;
        DepthMask = true;
        CullFaceEnable = false;
        BlendEnable = false;
        DepthFunc = LEqual;
        FragmentProgram = compile fp40 std_PS(ColorSampler);
    }
}

/////////////////////////////////////// eof //

I also tried setting the texture parameter explicitly per-frame (in Display()) but alas... no joy :(

You say when you try to render with the texture the sphere appears as a black circle? Do you mean completely black, from all angles, i.e. no specular or ambient? If it were just the texture that was not being set correctly then I would still expect to see specular.
Are you sure the glutSolidSphere function generates UVs? Maybe you need to enable automatic texture coord generation?
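For reference, GLUT's solid sphere submits positions and normals but no texture coordinates, so without texgen the UVs arriving at tex2D are undefined. If UVs are needed, a spherical mapping can be derived from the unit-sphere position; a sketch (the helper names are mine, not from the thread):

```cpp
#include <cmath>

struct UV { float u, v; };

// Map a point on the unit sphere to equirectangular UVs in [0,1]:
// u wraps around the Y axis via atan2, v runs pole to pole via asin.
UV sphereUV(float x, float y, float z) {
    const float pi = 3.14159265358979f;
    UV uv;
    uv.u = 0.5f + std::atan2(z, x) / (2.0f * pi);
    uv.v = 0.5f + std::asin(y) / pi;  // y is in [-1,1] on a unit sphere
    return uv;
}
```

The north pole (0,1,0) lands at v = 1 and any equator point at v = 0.5. In fixed-function GL of that era you could instead enable glTexGen, or draw a sphere that supplies UVs itself (e.g. gluSphere after gluQuadricTexture).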

You're right, the specular term should be there but it isn't. The sphere is absolutely black when viewed from every direction. Well spotted :)

So maybe it isn't the ColorSampler (or possibly nothing to do with the texture at all)?

Looks like it could be the transforms after all.

What happens when you keep everything the same but remove the texture diffuse component from the final calculation?
float3 diffuse = Kd * lightColor;
instead of:
float3 diffuse = Kd * lightColor * diffuseLight;
The only way I can think of that a problem with the texture could cause the final colour to always be completely black is if there is a NaN value getting in there somewhere, but AFAIK if the texture is not set then it will return zeros, not NaNs.
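That NaN-vs-zero distinction is easy to sanity-check outside the shader: a zeroed sampler only kills the diffuse term, while a NaN poisons the whole sum. A small sketch with plain floats standing in for one colour channel of the combine (the constants are illustrative, not the thread's values):

```cpp
#include <cmath>
#include <limits>

// Stand-in for the shader's per-channel combine:
//   color = emissive + ambient + Kd * light * texSample + specular
float shade(float texSample) {
    const float emissive = 0.0f, ambient = 0.007f;
    const float Kd = 0.5f, light = 1.0f, specular = 0.3f;
    return emissive + ambient + Kd * light * texSample + specular;
}

// shade(0.0f) still carries the ambient and specular terms, so the
// pixel is dark but not black; shade(NaN) is NaN for every term the
// NaN touches, which on most hardware clamps to black on output.
```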

Hi Axon, I saw your post on the NVIDIA forums.

I'm having similar issues, with probably the world's simplest CgFX shader. At first I thought it was a problem using Cg with Qt, but turns out it isn't.

Try using this as a test:

float4 Diffuse : COLOR <
    string UIWidget = "Color";
    string UIName = "Diffuse";
> = {0.8f, 0.8f, 0.8f, 1.0f};

sampler2D DiffuseSampler = sampler_state
{
    MinFilter = LinearMipMapLinear;
};

float4x4 WorldViewProj : WORLDVIEWPROJECTION;

struct VInput
{
    float4 Position : POSITION;
    float4 Colour : COLOR0;
    float2 UVCoord : TEXCOORD0;
};

struct VOutput
{
    float4 Position : POSITION;
    float4 Colour : COLOR0;
    float2 UVCoord : TEXCOORD0;
};

VOutput VShader(VInput IN)
{
    //Create output object
    VOutput OUT;

    //Calculate output position
    OUT.Position = mul(WorldViewProj, IN.Position);

    OUT.UVCoord = IN.UVCoord;
    OUT.Colour = Diffuse;

    return OUT;
}

float4 PShader(VOutput IN) : COLOR
{
    return tex2D(DiffuseSampler, IN.UVCoord);
}

technique Main
{
    pass p0
    {
        VertexProgram = compile vp40 VShader();

        DepthTestEnable = true;
        DepthMask = true;
        CullFaceEnable = true;
        BlendEnable = false;
        DepthFunc = LEqual;

        FragmentProgram = compile fp40 PShader();
    }
}

Let's see if we can sort this out, there must be a common problem with our code, see mine here for reference: http://www.gamedev.net/community/forums/topic.asp?topic_id=497585

Hi deadstar.

Just read your thread. Maaaaan. I feel your pain!
We're in the same boat... let's make it float.

I've had this problem for, I hate to admit it, possibly over 2 months.

I'm supposed to be good at this gear. I am good at this gear. I've read a lot of documentation and combed the PDFs, including the thin PDF examples (which do all kinds of non-conventional stuff w.r.t. geometry and textures). I've been through the SDK samples (both the Cg/Cg-toolkit SDK and the NV OpenGL SDK) and re-created some of their demos.

So where are we up to?:
1. Transforms and verts are ok. (Since your shader in the prev post uses only WorldViewProj and renders black geometry ok).
2. Shader is ok. Works in FXComposer (after you add the Texture param so that FXComposer can hook in (that could be a clue... but none of the demos seem to need to touch the Texture param)).
3. UVs are ok. (Since tex2D(sampler, float2(0.5,0.5)) returns green (center pix color of texture) in FXComposer but returns black in my app.)

I'm tempted to grab their CgFX bumpdemo and start to piecewise convert it.
Step 1: Convert it to use own geom and test
Step 2: Convert it to use own tex and test
Step 3: Convert it to use own CgFX and test

Theoretically that can't go wrong... it'd be great to think the solution is closer than that (since I've already done similar approaches before).

Or is it something curlier like we're linking against beta dlls at run-time???

I'll def keep this post updated... glad not to be alone on this one. I have tried for so long to figure it out methodically (as you obviously have) and yet here we are.

@bluntman: I reckon deadstar's shader distills the problem down. tex2D(sampler, uv) returns zeros (black) when it should look up the texture. I've also tested tex2D(sampler, float2(0.5,0.5)): in FXComposer it returns green (from the center of the texture), but in my app it returns black.

[Edited by - axon on June 17, 2008 10:22:35 PM]

Sign in to follow this  

  • Advertisement
  • Advertisement
  • Popular Tags

  • Similar Content

    • By owenjr
      Hi, I'm a Multimedia Engineering student. I am about to finish my dergree and I'm already thinking about what topic to cover in my final college project.
      I'm interested in the procedural animation with c++ and OpenGL of creatures, something like a spider for example. Can someone tell me what are the issues I should investigate to carry it out? I understand that it has some dependence on artificial intelligence but I do not know to what extent. Can someone help me to find information about it? Thank you very much.
      - Procedural multi-legged walking animation
      - Procedural Locomotion of Multi-Legged Characters in Dynamic Environments
    • By Lewa
      So, i'm still on my quest to unterstanding the intricacies of HDR and implementing this into my engine. Currently i'm at the step to implementing tonemapping. I stumbled upon this blogposts:
      and tried to implement some of those mentioned tonemapping methods into my postprocessing shader.
      The issue is that none of them creates the same results as shown in the blogpost which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity sake i store the values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7)
      This is the tonemapping code:
      vec3 Uncharted2Tonemap(vec3 x) { float A = 0.15; float B = 0.50; float C = 0.10; float D = 0.20; float E = 0.02; float F = 0.30; return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F; } This is without the uncharted tonemapping:
      This is with the uncharted tonemapping:
      Which makes the image a lot darker.
      The shader code looks like this:
      void main() { vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; color = Uncharted2Tonemap(color); //gamma correction (use only if not done in tonemapping code) color = gammaCorrection(color); outputF = vec4(color,1.0f); } Now, from my understanding is that tonemapping should bring the range down from HDR to 0-1.
      But the output of the tonemapping function heavily depends on the initial range of the values in the HDR buffer. (You can't expect to set the sun intensity the first time to 10 and the second time to 1000 and excpect the same result if you feed that into the tonemapper.) So i suppose that this also depends on the exposure which i have to implement?
      To check this i plotted the tonemapping curve:
      You can see that the curve goes only up to around to a value of 0.21 (while being fed a value of 1) and then basically flattens out. (which would explain why the image got darker.)
      My guestion is: In what range should the values in the HDR buffer be which then get tonemapped? Do i have to bring them down to a range of 0-1 by multiplying with the exposure?
      For example, if i increase the values of the light by 10 (directional light would be 7 and ambient light 3) then i would need to divide HDR values by 10 in order to get a value range of 0-1 which then could be fed into the tonemapping curve. Is that correct?
    • By nOoNEE
      i am reading this book : link
      in the OpenGL Rendering Pipeline section there is a picture like this: link
      but the question is this i dont really understand why it is necessary to turn pixel data in to fragment and then fragment into pixel could please give me a source or a clear Explanation that why it is necessary ? thank you so mu
    • By Inbar_xz
      I'm using the OPENGL with eclipse+JOGL.
      My goal is to create movement of the camera and the player.
      I create main class, which create some box in 3D and hold 
      an object of PlayerAxis.
      I create PlayerAxis class which hold the axis of the player.
      If we want to move the camera, then in the main class I call to 
      the func "cameraMove"(from PlayerAxis) and it update the player axis.
      That's work good.
      The problem start if I move the camera on 2 axis, 
      for example if I move with the camera right(that's on the y axis)
      and then down(on the x axis) -
      in some point the move front is not to the front anymore..
      In order to move to the front, I do
      player.playerMoving(0, 0, 1);
      And I learn that in order to keep the front move, 
      I need to convert (0, 0, 1) to the player axis, and then add this.
      I think I dont do the convert right.. 
      I will be glad for help!

      Here is part of my PlayerAxis class:
      //player coordinate float x[] = new float[3]; float y[] = new float[3]; float z[] = new float[3]; public PlayerAxis(float move_step, float angle_move) { x[0] = 1; y[1] = 1; z[2] = -1; step = move_step; angle = angle_move; setTransMatrix(); } public void cameraMoving(float angle_step, String axis) { float[] new_x = x; float[] new_y = y; float[] new_z = z; float alfa = angle_step * angle; switch(axis) { case "x": new_z = addVectors(multScalar(z, COS(alfa)), multScalar(y, SIN(alfa))); new_y = subVectors(multScalar(y, COS(alfa)), multScalar(z, SIN(alfa))); break; case "y": new_x = addVectors(multScalar(x, COS(alfa)), multScalar(z, SIN(alfa))); new_z = subVectors(multScalar(z, COS(alfa)), multScalar(x, SIN(alfa))); break; case "z": new_x = addVectors(multScalar(x, COS(alfa)), multScalar(y, SIN(alfa))); new_y = subVectors(multScalar(y, COS(alfa)), multScalar(x, SIN(alfa))); } x = new_x; y = new_y; z = new_z; normalization(); } public void playerMoving(float x_move, float y_move, float z_move) { float[] move = new float[3]; move[0] = x_move; move[1] = y_move; move[2] = z_move; setTransMatrix(); float[] trans_move = transVector(move); position[0] = position[0] + step*trans_move[0]; position[1] = position[1] + step*trans_move[1]; position[2] = position[2] + step*trans_move[2]; } public void setTransMatrix() { for (int i = 0; i < 3; i++) { coordiTrans[0][i] = x[i]; coordiTrans[1][i] = y[i]; coordiTrans[2][i] = z[i]; } } public float[] transVector(float[] v) { return multiplyMatrixInVector(coordiTrans, v); }  
      and in the main class i have this:
      public void keyPressed(KeyEvent e) { if (e.getKeyCode()== KeyEvent.VK_ESCAPE) { System.exit(0); //player move } else if (e.getKeyCode()== KeyEvent.VK_W) { //front //moveAmount[2] += -0.1f; player.playerMoving(0, 0, 1); } else if (e.getKeyCode()== KeyEvent.VK_S) { //back //moveAmount[2] += 0.1f; player.playerMoving(0, 0, -1); } else if (e.getKeyCode()== KeyEvent.VK_A) { //left //moveAmount[0] += -0.1f; player.playerMoving(-1, 0, 0); } else if (e.getKeyCode()== KeyEvent.VK_D) { //right //moveAmount[0] += 0.1f; player.playerMoving(1, 0, 0); } else if (e.getKeyCode()== KeyEvent.VK_E) { //moveAmount[0] += 0.1f; player.playerMoving(0, 1, 0); } else if (e.getKeyCode()== KeyEvent.VK_Q) { //moveAmount[0] += 0.1f; player.playerMoving(0, -1, 0); //camera move } else if (e.getKeyCode()== KeyEvent.VK_I) { //up player.cameraMoving(1, "x"); } else if (e.getKeyCode()== KeyEvent.VK_K) { //down player.cameraMoving(-1, "x"); } else if (e.getKeyCode()== KeyEvent.VK_L) { //right player.cameraMoving(-1, "y"); } else if (e.getKeyCode()== KeyEvent.VK_J) { //left player.cameraMoving(1, "y"); } else if (e.getKeyCode()== KeyEvent.VK_O) { //right round player.cameraMoving(-1, "z"); } else if (e.getKeyCode()== KeyEvent.VK_U) { //left round player.cameraMoving(1, "z"); } }  
      finallt found it.... i confused with the transformation matrix row and col. thanks anyway!
    • By Lewa
      So, i'm currently trying to implement an SSAO shader from THIS tutorial and i'm running into a few issues here.
      Now, this SSAO method requires view space positions and normals. I'm storing the normals in my deferred renderer in world-space so i had to do a conversion and reconstruct the position from the depth buffer.
      And something there goes horribly wrong (which has probably to do with worldspace to viewspace transformations).
      (here is the full shader source code if someone wants to take a look at it)
      Now, i suspect that the normals are the culprit.
      vec3 normal = ((uNormalViewMatrix*vec4(normalize(texture2D(sNormals, vTexcoord).rgb),1.0)).xyz); "sNormals" is a 2D texture which stores the normals in world space in a RGB FP16 buffer.
      Now i can't use the camera viewspace matrix to transform the normals into viewspace as the cameras position isn't set at (0,0,0), thus skewing the result.
      So what i did is to create a new viewmatrix specifically for this normal without the position at vec3(0,0,0);
      //"camera" is the camera which was used for rendering the normal buffer renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"), glmExt::createViewMatrix(glm::vec3(0,0,0),camera.getForward(),camera.getUp())//parameters are (position,forwardVector,upVector) ); Though i have the feeling this is the wrong approach. Is this right or is there a better/correct way of transforming a world space normal into viewspace?
    • By HawkDeath
      I'm trying mix two textures using own shader system, but I have a problem (I think) with uniforms.
      Code: https://github.com/HawkDeath/shader/tree/test
      To debug I use RenderDocs, but I did not receive good results. In the first attachment is my result, in the second attachment is what should be.
      PS. I base on this tutorial https://learnopengl.com/Getting-started/Textures.

    • By norman784
      I'm having issues loading textures, as I'm clueless on how to handle / load images maybe I missing something, but the past few days I just google a lot to try to find a solution. Well theres two issues I think, one I'm using Kotlin Native (EAP) and OpenGL wrapper / STB image, so I'm not quite sure wheres the issue, if someone with more experience could give me some hints on how to solve this issue?
      The code is here, if I'm not mistaken the workflow is pretty straight forward, stbi_load returns the pixels of the image (as char array or byte array) and you need to pass those pixels directly to glTexImage2D, so a I'm missing something here it seems.
    • By Hashbrown
      I've noticed in most post-processing tutorials several shaders are used one after another: one for bloom, another for contrast, and so on. For example: 
      postprocessing.quad.bind()

      // Effect 1
      effect1.shader.bind();
      postprocessing.texture.bind();
      postprocessing.quad.draw();
      postprocessing.texture.unbind();
      effect1.shader.unbind();

      // Effect 2
      effect2.shader.bind();
      // ...and so on

      postprocessing.quad.unbind()
      Is this good practice? How many shaders can I bind and unbind before I hit performance issues? I'm afraid I don't know what the good practices are in OpenGL/WebGL regarding binding and unbinding resources.
      I'm guessing binding many shaders at post-processing time is okay, since the scene has already been updated and I'm just working on a quad and a texture at that moment. Or is it more optimal to put the shader code in chunks and bind less frequently? I'd love to use several shaders at post, though.
      Another example of what I'm doing at the moment:
      1) Loop through GameObjects: bind each one's Phong shader (send the color, shadow, spec, and normal samplers), then unbind all.
      2) At post: bind the post-processor quad, then loop through the different shader effects, binding each, and so on...
      Thanks all! 
    • By phil67rpg
      void collision(int v)
      {
          collision_bug_one(0.0f, 10.0f);
          glutPostRedisplay();
          glutTimerFunc(1000, collision, 0);
      }

      void coll_sprite()
      {
          if (board[0][0] == 1)
          {
              collision(0);
              flag[0][0] = 1;
          }
      }

      void erase_sprite()
      {
          if (flag[0][0] == 1)
          {
              glColor3f(0.0f, 0.0f, 0.0f);
              glBegin(GL_POLYGON);
              glVertex3f(0.0f, 10.0f, 0.0f);
              glVertex3f(0.0f, 9.0f, 0.0f);
              glVertex3f(1.0f, 9.0f, 0.0f);
              glVertex3f(1.0f, 10.0f, 0.0f);
              glEnd();
          }
      }
      I am using glutTimerFunc to wait a small amount of time to display a collision sprite before I black out the sprite. Unfortunately, my code only blacks out said sprite without drawing the collision sprite. I have done a great deal of research on glutTimerFunc and animation.
    • By Lewa
      So, I stumbled upon the topic of gamma correction.
      So from what I've been able to gather (please correct me if I'm wrong):
      • Old CRT monitors couldn't display color linearly, which is why gamma correction was necessary.
      • Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
      • All games have to apply gamma correction? (unsure about that)
      • All textures stored in file formats (.png for example) are essentially stored in sRGB color space (what we see on the monitor is skewed by gamma correction, so the pixel information is the same; the perceived colors are just wrong).
      • This makes textures loaded into the GL_RGB format non-linear, so all lighting calculations are wrong.
      • You always have to use the GL_SRGB format to gamma-correct/linearise textures which are in sRGB format.
      Now, I'm kinda confused about how to proceed with applying gamma correction in OpenGL.
      First off, how can I check if my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable. I'm also using the full RGB range (0-255, not the 16 to ~240 range).
      What I tried to do is apply the gamma correction shader shown in the tutorial above, which looks essentially like this (it's a postprocess shader applied at the end of the render pipeline):
      vec3 gammaCorrection(vec3 color) {
          // gamma correction
          color = pow(color, vec3(1.0 / 2.2));
          return color;
      }

      void main() {
          vec3 color;
          vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
          color = gammaCorrection(tex);
          outputF = vec4(color, 1.0f);
      }
      The results look like this:
      No gamma correction:
      With gamma correction:
      The colors in the gamma-corrected image look really washed out, to the point that it's damn ugly, as if someone overlaid a white half-transparent texture. I want the colors to pop.
      Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma-correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?