terryeverlast

Posts posted by terryeverlast


  1. But what are these, then ("mTextureCoords")?

    vec.x = mesh->mTextureCoords[0][i].x; 
    vec.y = mesh->mTextureCoords[0][i].y;
    

    I'm loading an .OBJ, so are these mTextureCoords the "vt" values?

     

    I tested the mTextureCoords values in my debugger and they were different, so I'm just wondering.
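    My working assumption is that Assimp puts the "vt" values into channel 0 of mTextureCoords, and that flags like aiProcess_Triangulate and aiProcess_JoinIdenticalVertices reorder and deduplicate vertices, which would explain why the values don't match the file line-for-line. A minimal sketch of how I'm dumping them to compare (assuming the mesh actually has a UV channel):

    #include <assimp/scene.h>
    #include <cstdio>

    // Print the UVs Assimp stored, to compare against the vt lines in the file.
    void DumpUVs(const aiMesh* mesh)
    {
        if (!mesh->HasTextureCoords(0)) // channel 0 holds the OBJ "vt" data
        {
            printf("mesh has no UV channel\n");
            return;
        }
        for (unsigned int i = 0; i < mesh->mNumVertices; ++i)
        {
            const aiVector3D& uv = mesh->mTextureCoords[0][i]; // z unused for 2D UVs
            printf("vertex %u: u=%f v=%f\n", i, uv.x, uv.y);
        }
    }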


  2.  
      
    std::vector<Vertex::Basic32> vertices(8);

    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile("untitled.obj",
        aiProcess_CalcTangentSpace | aiProcess_Triangulate |
        aiProcess_JoinIdenticalVertices | aiProcess_SortByPType);

    aiMesh* mesh = scene->mMeshes[0];

    // Copy the 8 cube positions out of the imported mesh.
    for (int i = 0; i < 8; i++)
    {
        aiVector3D pos = mesh->mVertices[i];
        vertices[i].Pos.x = pos.x;
        vertices[i].Pos.y = pos.y;
        vertices[i].Pos.z = pos.z;
    }

    // Gather triangle indices face by face.
    std::vector<UINT> indices;
    for (UINT i = 0; i < mesh->mNumFaces; i++)
    {
        const aiFace& face = mesh->mFaces[i];
        if (face.mNumIndices == 3)
        {
            indices.push_back(face.mIndices[0]);
            indices.push_back(face.mIndices[1]);
            indices.push_back(face.mIndices[2]);
        }
    }
    

    My code now loads the cube; now I need to load the normals and texture coordinates.
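    A minimal sketch of how I plan to extend the loop for normals and UVs (assuming Basic32 has Normal and Tex members, and that the mesh has normals and UV channel 0):

    for (UINT i = 0; i < mesh->mNumVertices; i++)
    {
        aiVector3D pos = mesh->mVertices[i];
        vertices[i].Pos.x = pos.x;
        vertices[i].Pos.y = pos.y;
        vertices[i].Pos.z = pos.z;

        if (mesh->HasNormals())
        {
            aiVector3D n = mesh->mNormals[i]; // one normal per vertex
            vertices[i].Normal.x = n.x;
            vertices[i].Normal.y = n.y;
            vertices[i].Normal.z = n.z;
        }

        if (mesh->HasTextureCoords(0))
        {
            aiVector3D uv = mesh->mTextureCoords[0][i]; // channel 0 = OBJ "vt"
            vertices[i].Tex.x = uv.x;
            vertices[i].Tex.y = 1.0f - uv.y; // flip V for Direct3D's top-left origin
        }
    }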

     


  3. So far, I have:

    Assimp::Importer importer;

    const aiScene* scene = importer.ReadFile("untitled.obj",
        aiProcess_CalcTangentSpace | aiProcess_Triangulate |
        aiProcess_JoinIdenticalVertices | aiProcess_SortByPType);

    Next steps?
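    A minimal next step I'd try is to null-check the import before touching the scene (ReadFile returns null on failure, and GetErrorString says why):

    if (!scene || scene->mNumMeshes == 0)
    {
        // Import failed (bad path, unsupported format, etc.).
        printf("Assimp error: %s\n", importer.GetErrorString());
        return;
    }
    aiMesh* mesh = scene->mMeshes[0]; // first mesh in the file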


  4. [image: image.png]

     

    In the image above, this would be the first rotation, which is cos(t*theta). What are the values of c1 and c2, which are the weighted values?

    Then, what are the values of c1 and c2 in the second rotation, cos((1-t)*theta)?

    Conditions... (image not to scale):

     

    a = 0° or [1,0,0]

    b = 90° or [0,1,0]

    p = 45°

     

     

     

    [image: MSMv_F.png]

     

    So the final question is: what are the values of c1 and c2 with the conditions above? Thank you.
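    Working it out under the assumption that these are the standard slerp weights (which use sines, not cosines): c1 = sin((1-t)*theta)/sin(theta) and c2 = sin(t*theta)/sin(theta). With theta = 90° between a and b, p at 45° means t = 0.5:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const float theta = 3.14159265f / 2.0f; // 90 degrees between a=[1,0,0] and b=[0,1,0]
        const float t     = 0.5f;               // p at 45 degrees is halfway from a to b

        // slerp(a,b;t) = c1*a + c2*b
        float c1 = sinf((1.0f - t) * theta) / sinf(theta); // weight on a
        float c2 = sinf(t * theta) / sinf(theta);          // weight on b

        printf("c1 = %f, c2 = %f\n", c1, c2); // both come out to ~0.70711 = sqrt(2)/2
        return 0;
    }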


  5. I'm back. Anyway, I have a problem with loading the texture U,Vs correctly.

     

    I drew a simple quad in Blender and exported it to .obj.

    I can see the quad, but the texture is messed up.

    My .obj is:

    
    
    v -6.167033 0.000000 6.167033
    v 6.167033 0.000000 6.167033
    v -6.167033 0.000000 -6.167033
    v 6.167033 0.000000 -6.167033
    vt 1.0 0.0
    vt 0.0 1.0
    vt 0.0 0.0
    vt 1.0 1.0
    vn 0.0000 1.0000 0.0000
    f 2/1/1 3/2/1 1/3/1
    f 2/1/1 4/4/1 3/2/1

    I'm supposed to get an image that looks similar to:

    [image: image.jpg]

     

    My image looks like:

    [image: two.gif]

     

     

     

    I believe I'm loading the texture U,Vs the right way, subtracting one from each index because of the OBJ format.

    I've tried all kinds of different settings in Blender, like vertex order and such, with no luck.

    Is there a way to get the right order of UVs?

     

    My C++ code is:

    for (int i = 0; i < 6; i++)
    {
        int rx   = textureV[i]; // vt index from the face line, e.g. "f 2/1/1 4/4/1 3/2/1"
        float t  = uu[rx - 1];  // subtract one because OBJ indices are 1-based
        float t2 = vv[rx - 1];
        vertices[i].Tex.x = t;
        vertices[i].Tex.y = t2;
    }

    I need to get this right, because when loading bigger objects like spheres I can't fix the texture mapping by hand.
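    One assumption worth testing: Blender's OBJ UVs use a bottom-left origin while Direct3D samples from the top-left, so the V coordinate usually has to be flipped on load:

    vertices[i].Tex.x = t;
    vertices[i].Tex.y = 1.0f - t2; // flip V: OBJ/Blender is bottom-left, D3D is top-left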


  6. Alright, thanks. Got it working.

    Instead of doing the below, I just loaded the positions normally...

    vertices[i].Pos.x = vertices[indices[i]].sPos.x;
    vertices[i].Pos.y = vertices[indices[i]].sPos.y;
    vertices[i].Pos.z = vertices[indices[i]].sPos.z;
    
    

  7. for(;;)
     {
        fin >> strCommand;
        if (!fin)
            break;

        // "v" line: position
        strcpy(str1v1, "v");
        strcpy(str2v1, strCommand);
        ret = strcmp(str1v1, str2v1);
        if (ret == 0)
        {
            fin >> vertices[test].sPos.x >> vertices[test].sPos.y >> vertices[test].sPos.z;
            test++;
        }

        // "vn" line: normal
        strcpy(str1v2, "vn");
        strcpy(str2v2, strCommand);
        ret4 = strcmp(str1v2, str2v2); // was strcmp(str2v2, str2v2), which always compares equal
        if (ret4 == 0)
        {
            fin >> vertices[test2].sNor.x >> vertices[test2].sNor.y >> vertices[test2].sNor.z;
            test2++;
        }

        // "f" line: three "v//vn" corners
        strcpy(str1v3, "f");
        strcpy(str2v3, strCommand);
        ret5 = strcmp(str1v3, str2v3);
        if (ret5 == 0)
        {
            for (int i = 0; i < 3; i++)
            {
                fin >> ind[in];          // position index
                if ('/' == fin.peek())
                {
                    fin.ignore();        // first '/'
                    if ('/' != fin.peek())
                        fin.ignore();    // skip a (single-digit) texture index, if any
                    if ('/' == fin.peek())
                        fin.ignore();    // second '/'
                    if ('/' != fin.peek())
                        fin >> nor2[in]; // normal index
                }
                in++;
            }
        }
     }

     // OBJ indices are 1-based; convert to 0-based.
     for (int i = 0; i < 36; i++)
     {
         indices[i] = ind[i] - 1;
     }

     // De-index: copy each referenced position into a flat 36-vertex list.
     for (int i = 0; i < 36; i++)
     {
         vertices[i].Pos.x = vertices[indices[i]].sPos.x;
         vertices[i].Pos.y = vertices[indices[i]].sPos.y;
         vertices[i].Pos.z = vertices[indices[i]].sPos.z;
     }
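     For comparison, a sketch of a less fiddly way to read one triangulated "v//vn" face line with a string stream (hypothetical names; assumes the v//vn layout Blender exports with no texture index):

     #include <cstdio>
     #include <sstream>
     #include <string>

     // Parse a face line like "f 2//1 4//1 1//1" into 0-based index arrays.
     void ParseFaceLine(const std::string& line, int posIdx[3], int norIdx[3])
     {
         std::istringstream ss(line);
         std::string cmd;
         ss >> cmd; // consume the leading "f"
         for (int i = 0; i < 3; ++i)
         {
             std::string corner; // e.g. "2//1"
             ss >> corner;
             int v = 0, vn = 0;
             sscanf(corner.c_str(), "%d//%d", &v, &vn);
             posIdx[i] = v - 1;  // OBJ indices are 1-based
             norIdx[i] = vn - 1;
         }
     }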
    

  8. Yeah, I know that .obj files start their position/texture/normal indices at 1 in the face section.

    I subtracted 1 from every value and loaded those into the positions and normals.


  9. I'm programming in C++ / DirectX 11 / Win32 and am trying to make a mesh loader for .OBJ files.

    My .obj looks like:

    v 1.000000 -1.000000 -1.000000
    v 1.000000 -1.000000 1.000000
    v -1.000000 -1.000000 1.000000
    v -1.000000 -1.000000 -1.000000
    v 1.000000 1.000000 -0.999999
    v 0.999999 1.000000 1.000001
    v -1.000000 1.000000 1.000000
    v -1.000000 1.000000 -1.000000
    vn 0.0000 -1.0000 0.0000
    vn 0.0000 1.0000 0.0000
    vn 1.0000 -0.0000 0.0000
    vn 0.0000 -0.0000 1.0000
    vn -1.0000 -0.0000 -0.0000
    vn 0.0000 0.0000 -1.0000
    f 2//1 4//1 1//1
    f 8//2 6//2 5//2
    f 5//3 2//3 1//3
    f 6//4 3//4 2//4
    f 3//5 8//5 4//5
    f 1//6 8//6 5//6
    f 2//1 3//1 4//1
    f 8//2 7//2 6//2
    f 5//3 6//3 2//3
    f 6//4 7//4 3//4
    f 3//5 7//5 8//5
    f 1//6 4//6 8//6

    *This was done in Blender.

     

    For each index, I'm making a vertex. So I have 36 total vertices, with 36 normals, that I'm generating from the face ("f") section.

    My mesh is not coming out right, and I wanted to know: is this the proper way, creating 36 vertices? A sketch of what I mean is below.

    I need some advice...
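    To make the expansion concrete, a minimal sketch (hypothetical names; pos[] and nor[] hold the v and vn records, posIdx[]/norIdx[] the 0-based indices read from the 12 face lines):

    // 12 triangles * 3 corners = 36 vertices, each with its own position + normal.
    for (int f = 0; f < 12; ++f)
    {
        for (int c = 0; c < 3; ++c)
        {
            int i = f * 3 + c;
            vertices36[i].Pos    = pos[posIdx[i]]; // position referenced by this face corner
            vertices36[i].Normal = nor[norIdx[i]]; // normal referenced by this face corner
        }
    }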

     

     


  10. [image: turtle.jpg]

    There's a box and a flat grid, and the texture being projected onto them is doubling.

    My settings are:

     

    SamplerState SampleTypeWrap
    {
        Filter   = COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
        AddressU = BORDER;
        AddressV = BORDER;
        AddressW = BORDER;
        BorderColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
        ComparisonFunc = LESS;
    };

    I don't know if the code above is wrong.


  11. I found this code, which I'll dissect and implement in my own code.

     

    void Camera::update(float dt, Terrain* terrain, float offsetHeight)
    {
        // Find the net direction the camera is traveling in (since the
        // camera could be running and strafing).
        D3DXVECTOR3 dir(0.0f, 0.0f, 0.0f);
        if( gDInput->keyDown(DIK_W) )
            dir += mLookW;
        if( gDInput->keyDown(DIK_S) )
            dir -= mLookW;
        if( gDInput->keyDown(DIK_D) )
            dir += mRightW;
        if( gDInput->keyDown(DIK_A) )
            dir -= mRightW;

        // Move at mSpeed along net direction.
        D3DXVec3Normalize(&dir, &dir);
        D3DXVECTOR3 newPos = mPosW + dir*mSpeed*dt;

        if( terrain != 0 )
        {
            // New position might not be on terrain, so project the
            // point onto the terrain.
            newPos.y = terrain->getHeight(newPos.x, newPos.z) + offsetHeight;

            // Now the difference of the new position and old (current)
            // position approximates a tangent vector on the terrain.
            D3DXVECTOR3 tangent = newPos - mPosW;
            D3DXVec3Normalize(&tangent, &tangent);

            // Now move camera along tangent vector.
            mPosW += tangent*mSpeed*dt;

            // After update, there may be errors in the camera height since our
            // tangent is only an approximation.  So force camera to correct height,
            // and offset by the specified amount so that camera does not sit
            // exactly on terrain, but instead, slightly above it.
            mPosW.y = terrain->getHeight(mPosW.x, mPosW.z) + offsetHeight;
        }
        else
        {
            mPosW = newPos;
        }

        // We rotate at a fixed speed.
        float pitch  = gDInput->mouseDY() / 150.0f;
        float yAngle = gDInput->mouseDX() / 150.0f;

        // Rotate camera's look and up vectors around the camera's right vector.
        D3DXMATRIX R;
        D3DXMatrixRotationAxis(&R, &mRightW, pitch);
        D3DXVec3TransformCoord(&mLookW, &mLookW, &R);
        D3DXVec3TransformCoord(&mUpW, &mUpW, &R);

        // Rotate camera axes about the world's y-axis.
        D3DXMatrixRotationY(&R, yAngle);
        D3DXVec3TransformCoord(&mRightW, &mRightW, &R);
        D3DXVec3TransformCoord(&mUpW, &mUpW, &R);
        D3DXVec3TransformCoord(&mLookW, &mLookW, &R);

        // Rebuild the view matrix to reflect changes.
        buildView();

        mViewProj = mView * mProj;
    }


  12. Tried:

    if( mWalkCamMode )
    {
        XMFLOAT3 camPos = mCam.GetPosition();
        float y = mTerrain.GetHeight(camPos.x, camPos.z);
        mCam.SetPosition(camPos.x, y, camPos.z);      // snap camera to terrain height

        XMFLOAT3 tupac = mCam.GetPosition();
        float one   = tupac.x - camPos.x;             // delta between snapped and old position
        float two   = tupac.y - camPos.y;
        float three = tupac.z - camPos.z;
        float test  = sqrt((one*one) + (two*two) + (three*three));
        float one2   = one / test;                    // normalize the delta...
        float two2   = two / test;
        float three2 = three / test;
        mCam.SetPosition(one2, two2, three2);         // ...and use it as the new position
    }

     

    It still goes under the terrain.
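    For reference, the book code I posted boils walk mode down to clamping the height every frame; a minimal sketch of that applied here (assuming some eye-height offset, the normalizing isn't needed at all):

    if (mWalkCamMode)
    {
        XMFLOAT3 camPos = mCam.GetPosition();
        const float offsetHeight = 2.0f; // assumed eye height above the ground

        // Just force the camera to terrain height + offset; the normalized
        // delta above was replacing the whole position, not the height.
        float y = mTerrain.GetHeight(camPos.x, camPos.z);
        mCam.SetPosition(camPos.x, y + offsetHeight, camPos.z);
    }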


  13. I'm playing around with height maps. I'm trying to move the camera along the tangent, as in walking on hilly terrain. So far my code is:

    XMFLOAT3 camPos = mCam.GetPosition(); // current position of camera on flat grid
    float y = mTerrain.GetHeight(camPos.x, camPos.z); // terrain height at that point
    XMVECTORF32 oldpos = {camPos.x, camPos.y, camPos.z};
    XMVECTORF32 newpos = {camPos.x, y, camPos.z};
    XMVECTOR ntangent = XMVector3Normalize(newpos - oldpos);
    XMFLOAT3 tangent;
    XMStoreFloat3(&tangent, ntangent);
    mCam.SetPosition(camPos.x, tangent.y + 2.0f, camPos.z);

    But my results do not follow the terrain; the camera goes under it. I can't find the problem and need help. Does anyone know why my code does not work?

     


  14. It's an exercise in the book Introduction to 3D Game Programming with Direct3D 11.0 by Frank Luna.

    Anyway... what I've done so far is multiply my tangent vector by a view-space matrix, instead of leaving it in world space, in the vertex shader:

    vout.TangentW = mul(vin.TangentL, (float3x3)gWorldView);

    and compute my eye vector like:

    
    	float3 N = pin.NormalV;
    	float3 T = normalize(pin.TangentV - dot(pin.TangentV, N)*N);
    	float3 B = cross(N, T);
    
    	float3x3 TBN = float3x3(T, B, N);
    	float3x3 TBN3 = transpose(TBN);
    	
    
    	// The toEye vector is used in lighting.
    	float3 toEye = mul(gEyePosW - pin.PosW,TBN3);
    

    My image looks messed up, like below:

    [image: pole.jpg]

    Am I on the right path so far?


  15. So what do I multiply against the eye and light vectors to do the normal computation in tangent space?

    Would I just take the inverse or transpose of the tangent-to-world matrix, times (*) the eye and light vectors?

    Would I take the normal and tangent from the vertex shader and keep them in local space instead of world space?
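    For reference, the relation I'm leaning on (assuming the TBN rows are orthonormal, and HLSL's row-vector mul(v, M) convention):

    $$ TBN^{-1} = TBN^{T}, \qquad v_{tangent} = v_{world}\,TBN^{T} $$

    so transforming the eye and light vectors by the transpose should be the same as using the inverse.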


  16. Ok, update so far.

    I left NormalSampleToWorldSpace alone (editing it was my mistake).

    I then did:

    float3 normalMapSample = gNormalMap.Sample(samLinear, pin.Tex).rgb;
    float3 bumpedNormalW = NormalSampleToWorldSpace(normalMapSample, pin.NormalW, pin.TangentW);
    float3 wittbn = mul(bumpedNormalW, (float3x3)gWorldInvTranspose);
    

    and multiplied the eye and light vectors by wittbn.

    Still the same lighting problem. The function NormalSampleToWorldSpace can be seen here:

    //---------------------------------------------------------------------------------------
    // Transforms a normal map sample to world space.
    //---------------------------------------------------------------------------------------
    float3 NormalSampleToWorldSpace(float3 normalMapSample, float3 unitNormalW, float3 tangentW)
    {
    	// Uncompress each component from [0,1] to [-1,1].
    	float3 normalT = 2.0f*normalMapSample - 1.0f;
    
    	// Build orthonormal basis.
    	float3 N = unitNormalW;
    	float3 T = normalize(tangentW - dot(tangentW, N)*N);
    	float3 B = cross(N, T);
    
    	float3x3 TBN = float3x3(T, B, N);
    
    	// Transform from tangent space to world space.
    	float3 bumpedNormalW = mul(normalT, TBN);
    
    	return bumpedNormalW;
    }
    
    

    My directional light code is:

    void ComputeDirectionalLight(Material mat, DirectionalLight L, 
                                 float3 normal, float3 toEye,
                                 out float4 ambient,
                                 out float4 diffuse,
                                 out float4 spec,
                                 in float3 testing)
    {
    	// Initialize outputs.
    	ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    	diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    	spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);
    
    	// The light vector aims opposite the direction the light rays travel.
    	float3 lightVec = -L.Direction * testing; // the light vector (note: this is a component-wise multiply)
    
    	// Add ambient term.
    	ambient = mat.Ambient * L.Ambient;	
    
    	// Add diffuse and specular term, provided the surface is in 
    	// the line of site of the light.
    	
    	float diffuseFactor = dot(lightVec, normal);
    
    	// Flatten to avoid dynamic branching.
    	[flatten]
    	if( diffuseFactor > 0.0f )
    	{
    		float3 v         = reflect(-lightVec, normal);
    		float specFactor = pow(max(dot(v, toEye), 0.0f), mat.Specular.w);
    					
    		diffuse = diffuseFactor * mat.Diffuse * L.Diffuse;
    		spec    = specFactor * mat.Specular * L.Specular;
    	}
    }
    
    

    The wittbn gets pumped into ComputeDirectionalLight as testing.

    The normals are not being shaded right, and the picture is messed up and dark.


  17. Instead of doing lighting in world space, we can transform the eye and light vector from world space into tangent space and do all the lighting calculations in that space. Modify the normal mapping shader to do the lighting calculations in tangent space.

     

    So I edited NormalSampleToWorldSpace, taking the transpose of the TBN matrix:

    float3 NormalSampleToWorldSpace(float3 normalMapSample, float3 unitNormalW, float3 tangentW)
    {
    	// Uncompress each component from [0,1] to [-1,1].
    	float3 normalT = 2.0f*normalMapSample - 1.0f;
    
    	// Build orthonormal basis.
    	float3 N = unitNormalW;
    	float3 T = normalize(tangentW - dot(tangentW, N)*N);
    	float3 B = cross(N, T);
    
    	float3x3 TBN = float3x3(T, B, N);
    	float3x3 test = transpose(TBN);
    	// Note: transpose(TBN) maps world space to tangent space
    	// (the original TBN mapped tangent space to world space).
    	float3 bumpedNormalW = mul(normalT, test);
    
    	return bumpedNormalW;
    }
    

    I also changed the toEye vector like:

    float3 toEye = bumpedNormalW2*(gEyePosW - pin.PosW);

    so the transposed TBN times toEye.

    I also multiplied lightVec by bumpedNormalW2. My images are showing up dark (the first is without the normal map, the second is with it):

    [images: one.jpg, two.png]

    Is there anything else that needs to be done to get the lighting in tangent space? What seems to be the problem?

     

    Here is the original shader code, followed by the version edited for tangent-space normal mapping:

    float4 PS(VertexOut pin, 
              uniform int gLightCount, 
    		  uniform bool gUseTexure, 
    		  uniform bool gAlphaClip, 
    		  uniform bool gFogEnabled, 
    		  uniform bool gReflectionEnabled) : SV_Target
    {
    	// Interpolating normal can unnormalize it, so normalize it.
    	pin.NormalW = normalize(pin.NormalW);
    
    	// The toEye vector is used in lighting.
    	float3 toEye = gEyePosW - pin.PosW;
    
    	// Cache the distance to the eye from this surface point.
    	float distToEye = length(toEye);
    
    	// Normalize.
    	toEye /= distToEye;
    	
        // Default to multiplicative identity.
        float4 texColor = float4(1, 1, 1, 1);
        if(gUseTexure)
    	{
    		// Sample texture.
    		texColor = gDiffuseMap.Sample( samLinear, pin.Tex );
    
    		if(gAlphaClip)
    		{
    			// Discard pixel if texture alpha < 0.1.  Note that we do this
    			// test as soon as possible so that we can potentially exit the shader 
    			// early, thereby skipping the rest of the shader code.
    			clip(texColor.a - 0.1f);
    		}
    	}
    
    	//
    	// Normal mapping
    	//
    
    	float3 normalMapSample = gNormalMap.Sample(samLinear, pin.Tex).rgb;
    	float3 bumpedNormalW = NormalSampleToWorldSpace(normalMapSample, pin.NormalW, pin.TangentW);
    	 
    	//
    	// Lighting.
    	//
    
    	float4 litColor = texColor;
    	if( gLightCount > 0  )
    	{  
    		// Start with a sum of zero. 
    		float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    		float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    		float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);
    
    		// Sum the light contribution from each light source.  
    		[unroll]
    		for(int i = 0; i < gLightCount; ++i)
    		{
    			float4 A, D, S;
    		
    			ComputeDirectionalLight(gMaterial, gDirLights[i], bumpedNormalW, toEye, 
    				A, D, S);
    
    			ambient += A;
    			diffuse += D;
    			spec    += S;
    		}
    
    		litColor = texColor*(ambient + diffuse) + spec;
    
    		if( gReflectionEnabled )
    		{
    			float3 incident = -toEye;
    			float3 reflectionVector = reflect(incident, bumpedNormalW);
    			float4 reflectionColor  = gCubeMap.Sample(samLinear, reflectionVector);
    
    			litColor += gMaterial.Reflect*reflectionColor;
    		}
    	}
     
    	//
    	// Fogging
    	//
    
    	if( gFogEnabled )
    	{
    		float fogLerp = saturate( (distToEye - gFogStart) / gFogRange ); 
    
    		// Blend the fog color and the lit color.
    		litColor = lerp(litColor, gFogColor, fogLerp);
    	}
    
    	// Common to take alpha from diffuse material and texture.
    	litColor.a = gMaterial.Diffuse.a * texColor.a;
    
        return litColor;
    }
    
    
    float4 PS(VertexOut pin, 
              uniform int gLightCount, 
    		  uniform bool gUseTexure, 
    		  uniform bool gAlphaClip, 
    		  uniform bool gFogEnabled, 
    		  uniform bool gReflectionEnabled) : SV_Target
    {
    	// Interpolating normal can unnormalize it, so normalize it.
    	pin.NormalW = normalize(pin.NormalW);
    	float3 normalMapSample2 = gNormalMap.Sample(samLinear, pin.Tex).rgb;
    	float3 bumpedNormalW2 = NormalSampleToWorldSpace(normalMapSample2, pin.NormalW, pin.TangentW);
    	// The toEye vector is used in lighting.
    	float3 toEye = bumpedNormalW2*(gEyePosW - pin.PosW);
    
    	// Cache the distance to the eye from this surface point.
    	float distToEye = length(toEye);
    
    	// Normalize.
    	toEye /= distToEye;
    	
        // Default to multiplicative identity.
        float4 texColor = float4(1, 1, 1, 1);
        if(gUseTexure)
    	{
    		// Sample texture.
    		texColor = gDiffuseMap.Sample( samLinear, pin.Tex );
    
    		if(gAlphaClip)
    		{
    			// Discard pixel if texture alpha < 0.1.  Note that we do this
    			// test as soon as possible so that we can potentially exit the shader 
    			// early, thereby skipping the rest of the shader code.
    			clip(texColor.a - 0.1f);
    		}
    	}
    
    	//
    	// Normal mapping
    	//
    
    	float3 normalMapSample = gNormalMap.Sample(samLinear, pin.Tex).rgb;
    	float3 bumpedNormalW = NormalSampleToWorldSpace(normalMapSample, pin.NormalW, pin.TangentW);
    	 
    	//
    	// Lighting.
    	//
    
    	float4 litColor = texColor;
    	if( gLightCount > 0  )
    	{  
    		// Start with a sum of zero. 
    		float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    		float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    		float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);
    		
    		// Sum the light contribution from each light source.  
    		[unroll]
    		for(int i = 0; i < gLightCount; ++i)
    		{
    			float4 A, D, S;
    			ComputeDirectionalLight(gMaterial, gDirLights[i], bumpedNormalW, toEye, 
    				A, D, S,bumpedNormalW);
    
    			ambient += A;
    			diffuse += D;
    			spec    += S;
    		}
    
    		litColor = texColor*(ambient + diffuse) + spec;
    
    		if( gReflectionEnabled )
    		{
    			float3 incident = -toEye;
    			float3 reflectionVector = reflect(incident, bumpedNormalW);
    			float4 reflectionColor  = gCubeMap.Sample(samLinear, reflectionVector);
    
    			litColor += gMaterial.Reflect*reflectionColor;
    		}
    	}
     
    	//
    	// Fogging
    	//
    
    	if( gFogEnabled )
    	{
    		float fogLerp = saturate( (distToEye - gFogStart) / gFogRange ); 
    
    		// Blend the fog color and the lit color.
    		litColor = lerp(litColor, gFogColor, fogLerp);
    	}
    
    	// Common to take alpha from diffuse material and texture.
    	litColor.a = gMaterial.Diffuse.a * texColor.a;
    
        return litColor;
    }
    

  18. Transparency is working, but not on my CubeMap texture (the background in the distance). Here is my material code:

    m.Ambient  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.3f);
    m.Diffuse  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.2f);
    m.Specular = XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f);
    m.Reflect  = XMFLOAT4(0.5f, 0.5f, 0.5f, 0.3f);
    
    

    Instead of being transparent where the land meets the cubemap background texture, it is white. Any ideas why?

    Does the cubemap background have to have an alpha channel?


  19. Putting OMSetBlendState after Apply failed. I tried that, and also tried just blending in the shader.

    BlendState transparentBlend
    {
        BlendEnable[0] = TRUE;
        SrcBlend = SRC_ALPHA;
        DestBlend = INV_SRC_ALPHA;
        BlendOp = ADD;
        SrcBlendAlpha = ZERO;
        DestBlendAlpha = ZERO;
        BlendOpAlpha = ADD;
        RenderTargetWriteMask[0] = 0x0F;
    };
    

    So putting OMSetBlendState after Apply gave me the original image (it did not work), and trying to set it up in the shader gave me a grayish image:

    [image: Untitled.jpg]
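    For reference, this is roughly the call order I tried (assuming mTransparentBS is the created blend state object, and md3dImmediateContext / pass as in the Luna framework):

    float blendFactor[] = {0.0f, 0.0f, 0.0f, 0.0f};

    pass->Apply(0, md3dImmediateContext);                  // the pass applies its own states first
    md3dImmediateContext->OMSetBlendState(mTransparentBS,  // then I set the blend state
                                          blendFactor, 0xffffffff);
    md3dImmediateContext->DrawIndexed(indexCount, 0, 0);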
