

Burnt_Fyr

Member Since 25 Aug 2009
Offline Last Active Yesterday, 09:58 AM

Topics I've Started

height at a point on a plane defined by 2 points.

15 March 2015 - 02:12 PM

First off... not my homework, it's the wife's! She's in landscape architecture and is having trouble understanding a question that was given with little explanation. In fact, the instructor of the course couldn't remember how to do it, so there was no explanation at all.

 

The question is:

 

Given a point q on a plane P, as well as a slope s along a vector v which lies on the projection of P onto the xy plane, find 3 other points.

p0------p1(i have height at p1)
|       |                 p0__x__p1
|       |                  |    /
|       |                y |  /v
p2------p3                 | /

So we started by extending v from p1 until it intersected the line from p0 to p2. Since the points fall on a regular grid, we knew |x| = |p0-p1| = 50m, and used the magnitude of v when x = 50 to get y = 22.19m. The projection of v onto the xy plane was measured at 55m. Since we have the slope along v, a bit of Pythagoras gave us a dz of 4.565m. So now I have a vector q = <50, 22.19, 4.565> that lies along the plane P, and the height at p1. How can I calculate the x and y components of the slope (the gradient?), so that I can find p0.z = p1.z + x*m, p3.z = p1.z + y*n, and p2.z = p1.z + x*m + y*n?
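
My best guess so far, written out as a quick C++ sketch. The key assumption, which may well be wrong, is that v is the direction of steepest slope on the plane, so the gradient is parallel to v's xy projection:

#include <cmath>
#include <cstdio>

int main()
{
    // In-plane vector q measured from p1: run in x, run in y, rise in z
    // (numbers from above).
    const double vx = 50.0, vy = 22.19, dz = 4.565;

    // Length of v projected onto the xy plane.
    const double runXY = std::sqrt(vx * vx + vy * vy);

    // If v points along the steepest slope, the gradient is parallel to
    // (vx, vy): m and n are the rise per metre along x and y.
    const double m = dz * vx / (runXY * runXY);
    const double n = dz * vy / (runXY * runXY);

    // Heights relative to p1 on the 50 m grid (signs depend on which way the
    // axes run; p1.z = 0 here as a placeholder for the known height).
    const double p1z = 0.0;
    const double p0z = p1z + 50.0 * m;
    const double p3z = p1z + 50.0 * n;
    const double p2z = p1z + 50.0 * m + 50.0 * n;

    std::printf("m = %.4f, n = %.4f, p0.z = %.3f, p3.z = %.3f, p2.z = %.3f\n",
                m, n, p0z, p3z, p2z);
    return 0;
}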


Deferred optimization

10 February 2015 - 01:18 PM

I've gotten deferred rendering working with MRTs in DirectX 9, though the performance so far is quite abysmal.

 

My G-buffer is a naive implementation, with no compression and a full 4 RTs (diffuse.rgb, normal.xyz, spec.rgb + power, position.xyz). The G-buffer pass is taking about 10% of the rendering time, while the lighting takes up the rest. It seems that I can double the object count with minimal effect, so I'm primarily looking at ways of improving lighting speed.
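
For reference, the MRT setup is essentially the following (a simplified C++ sketch of the layout described above; the exact formats here are assumptions for illustration rather than necessarily the ones I'm using):

#include <d3d9.h>

IDirect3DTexture9* gGBufTex[4]  = { 0 };
IDirect3DSurface9* gGBufSurf[4] = { 0 };

bool CreateGBuffer(IDirect3DDevice9* dev, UINT width, UINT height)
{
    // One render target per G-buffer channel group.
    const D3DFORMAT fmt[4] =
    {
        D3DFMT_A8R8G8B8,       // RT0: diffuse.rgb
        D3DFMT_A16B16G16R16F,  // RT1: normal.xyz
        D3DFMT_A8R8G8B8,       // RT2: spec.rgb + power
        D3DFMT_A16B16G16R16F,  // RT3: position.xyz
    };

    for (int i = 0; i < 4; ++i)
    {
        if (FAILED(dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                      fmt[i], D3DPOOL_DEFAULT, &gGBufTex[i], NULL)))
            return false;
        gGBufTex[i]->GetSurfaceLevel(0, &gGBufSurf[i]);
    }
    return true;
}

void BindGBuffer(IDirect3DDevice9* dev)
{
    // Bind all four targets for the geometry pass; the lighting pass then
    // samples them as textures.
    for (DWORD i = 0; i < 4; ++i)
        dev->SetRenderTarget(i, gGBufSurf[i]);
}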

 

I've tried stenciling out point and spot lights, but this considerably increased frame times (from about 19 fps down to 13 fps), and trying to use a double-sided stencil was even worse (8 fps). I think compressing this to 3 RTs is doable (reconstructing position from z + screen space, compressing normals to 2 channels, etc.), but I'm worried that this will be a wasted effort if it increases pixel shader complexity.
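
To make those compression ideas concrete, the position reconstruction and a 2-channel normal encoding boil down to the math below, written as plain C++ for clarity (in practice it would live in the lighting pixel shader; the spheremap-style encoding is just one option I'm considering, not something I've implemented):

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Reconstruct view-space position from the pixel's NDC xy and its stored
// view-space depth, using the scale terms of a D3D-style projection matrix
// (proj._11 and proj._22), since clip.w equals the view-space z.
Vec3 PositionFromDepth(Vec2 ndc, float viewZ, float proj11, float proj22)
{
    // Projection gives ndc.x = viewX * proj11 / viewZ, so invert it.
    Vec3 p = { ndc.x * viewZ / proj11, ndc.y * viewZ / proj22, viewZ };
    return p;
}

// Spheremap-style encode of a unit normal into two channels in [0, 1].
// (Breaks down for n.z == -1, a normal pointing straight away from the camera.)
Vec2 EncodeNormal(Vec3 n)
{
    float p = std::sqrt(n.z * 8.0f + 8.0f);
    Vec2 enc = { n.x / p + 0.5f, n.y / p + 0.5f };
    return enc;
}

// Matching decode back to a unit normal.
Vec3 DecodeNormal(Vec2 enc)
{
    float fx = enc.x * 4.0f - 2.0f;
    float fy = enc.y * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = std::sqrt(1.0f - f / 4.0f);
    Vec3 n = { fx * g, fy * g, 1.0f - f / 2.0f };
    return n;
}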

 

Am I just hitting the wall on my laptop? What are some common tricks I can use to reduce the time required for the lighting passes?

 

I'm about 25% of the way through porting over to DX11 and expect to see a large improvement with certain features available there (depth buffer reads, etc.).


C++ heap corruption w/ std::vector

20 January 2015 - 10:07 AM

Thanks for taking the time to read this. I'm using a mesh object that contains std::vectors for vertex and index data.

class Mesh2
{
    // snipped for brevity
    std::vector<unsigned short> indices;
};

void Mesh2::AddFace(unsigned short _i1,unsigned short _i2, unsigned short _i3) {
    
    indices.push_back(_i1);
    indices.push_back(_i2);
    indices.push_back(_i3);

}

The above functions reside in a static lib that contains all the rendering code, which is linked to the main executable. Below are the functions in the .exe.

// In main()

Mesh2 cube;
GenerateCube(&cube,true,true);

and the function GenerateCube

void GenerateCube(Mesh2* mesh, bool bGenNorms, bool bGenTangents)
{
    // at some point
    mesh->AddFace(0, 1, 2);

    // .. and so on
}

The issue is that as soon as mesh.indices has to resize past 10 (its default size) I get a heap corruption. I'm not sure what must be done to rectify this. If anyone has a good link or can spare 5 minutes for a thorough explanation, it would be much appreciated. Everything I've pulled up on Google so far has to do with crossing DLL boundaries, which I'm not doing, but might be happening behind the scenes in the std::vector. I'm heading back to Google for now.
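
In case it helps, here is the pattern boiled down to a single file (a minimal sketch with the static-lib boundary removed, so it's not an exact reproduction of my setup):

#include <cstdio>
#include <vector>

class Mesh2
{
public:
    void AddFace(unsigned short i1, unsigned short i2, unsigned short i3)
    {
        indices.push_back(i1);
        indices.push_back(i2);
        indices.push_back(i3);
    }

    std::vector<unsigned short> indices;
};

void GenerateCube(Mesh2* mesh)
{
    // 12 triangles -> 36 indices, so the vector has to grow several times.
    for (unsigned short f = 0; f < 12; ++f)
        mesh->AddFace(f, static_cast<unsigned short>(f + 1),
                         static_cast<unsigned short>(f + 2));
}

int main()
{
    Mesh2 cube;
    GenerateCube(&cube);
    std::printf("indices: %u\n", static_cast<unsigned>(cube.indices.size()));
    return 0;
}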


Shadowmapping woes

07 January 2015 - 09:03 PM

Greetings all, happy new year. I've been working on shadow mapping and have run into an issue that I can't seem to figure out. My scene is simple: a single light and a few cubes in a Cornell-box sort of setup. The shadows are working, but the cubes are coming out black. If this were a bias issue I would assume it would affect the entire scene and not just the cubes, so I'm not sure where to begin without access to the vertex debugging features in PIX.

 

EDIT: attached a screenshot of the issue (test.png).

// Shadow map generation

struct VS_INPUT
{
    float3 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 Texcoord : TEXCOORD0;
 };

struct VS_OUTPUT
{
    float4 PositionCS : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float2 Depth: TEXCOORD1;

};

VS_OUTPUT VSMain(VS_INPUT input)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    float4x4 WVmatrix = mul(Wmatrix,Vmatrix);
    float4x4 WVPmatrix = mul(WVmatrix,Pmatrix);
    
    // Transform Position to Clip Space
    output.PositionCS= mul(float4(input.Position,1), WVPmatrix);
    
    
    // Output TexCoords
    output.TexCoord = input.Texcoord;
    output.Depth = output.PositionCS.zw;
    return output;
}



struct PS_INPUT
{
    float4 PositionCS : POSITION0;
    float2 TexCoord : TEXCOORD0;  // necessary for transparency (alpha test)
    float2 Depth : TEXCOORD1;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

float4 PSMain(PS_INPUT input) : COLOR0
{
    float4 diffuse = tex2D(tex0, input.TexCoord.xy);
    clip(diffuse.a - 0.15f);

    return input.Depth.x / input.Depth.y; // z / w; depth in [0, 1] range (NDC space)
}


// shadow Calculation

float CalcShadowFactor(float4 projTexC)
{
// Complete projection by doing division by w.
	projTexC.xy /= projTexC.w;
	
	// Points outside the light volume are in shadow.
	if( projTexC.x < -1.0f || projTexC.x > 1.0f || 
	    projTexC.y < -1.0f || projTexC.y > 1.0f ||
	    projTexC.z < 0.0f )
	    return 0.0f;
	    
	// Transform from NDC space to texture space.
	projTexC.x = +0.5f*projTexC.x + 0.5f;
	projTexC.y = -0.5f*projTexC.y + 0.5f;
	
		// Depth in NDC space.
	float depth = projTexC.z / projTexC.w;


	// 2x2 percentage closest filter.
	// Sample shadow map to get nearest depth to light.
	float s0 = tex2D(tex1, projTexC.xy).r;
	float s1 = tex2D(tex1, projTexC.xy + float2(ShadowMap_dx, 0)).r;
	float s2 = tex2D(tex1, projTexC.xy + float2(0, ShadowMap_dx)).r;
	float s3 = tex2D(tex1, projTexC.xy + float2(ShadowMap_dx,ShadowMap_dx)).r;
	
	// Is the pixel depth <= shadow map value?
	float result0 = depth <= s0 + ShadowEpsilon;
	float result1 = depth <= s1 + ShadowEpsilon;
	float result2 = depth <= s2 + ShadowEpsilon;
	float result3 = depth <= s3 + ShadowEpsilon;
	
	// Transform to texel space.
	float2 texelPos = ShadowMapSize*projTexC.xy;
	
	// Determine the interpolation amounts.
	float2 t = frac( texelPos );

	// Interpolate results.
	return lerp( lerp(result0, result1, t.x), 
	             lerp(result2, result3, t.x), t.y);
}

Blitting DX9 textures to GDI, editing, and blitting back

06 July 2014 - 07:31 PM

I'm looking for info on directly editing height and splat map data. My initial thought was that GDI, with its premade brushes, would be an asset, so I set about learning what I needed from GDI to get the DX texture into an HDC for use in a window. I've used BitBlt, which copies the entire pixel RGB data from the DX surface into the HDC of my window.

 

I would prefer to copy a single source channel (R, G, B, or A, for splat layers 1-4), leaving the rest of the destination bitmap black. After a brush has affected the bitmap, I intend to copy the single channel back to the original DX texture, which is used as a splat map in my terrain viewer (hopefully editor).
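
To make the idea concrete, this is roughly the kind of channel copy I'm picturing (a rough sketch, not code I have running; it assumes the splat texture is lockable, e.g. D3DPOOL_MANAGED, and uses D3DFMT_A8R8G8B8, whose per-pixel memory order is B, G, R, A):

#include <d3d9.h>
#include <vector>

// Copy one channel of a locked ARGB8 texture into an 8-bit buffer that a GDI
// DIB section (or plain memory bitmap) could then display and edit as greyscale.
std::vector<unsigned char> ExtractChannel(IDirect3DTexture9* tex,
                                          UINT width, UINT height,
                                          int channel /* 0=B, 1=G, 2=R, 3=A */)
{
    std::vector<unsigned char> out(width * height, 0);

    D3DLOCKED_RECT lr = {};
    if (FAILED(tex->LockRect(0, &lr, NULL, D3DLOCK_READONLY)))
        return out;

    const unsigned char* src = static_cast<const unsigned char*>(lr.pBits);
    for (UINT y = 0; y < height; ++y)
    {
        const unsigned char* row = src + y * lr.Pitch;
        for (UINT x = 0; x < width; ++x)
            out[y * width + x] = row[x * 4 + channel];
    }

    tex->UnlockRect(0);
    return out;
}

// Writing the edited channel back would be the same loop in reverse, with the
// lock taken without D3DLOCK_READONLY.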

 

Another thought was skipping GDI entirely and implementing brushes in DX (unless they already exist in an area I'm not familiar with) using pixel shaders, modifying the textures directly that way.

 

I'd appreciate a nudge in the right direction, if anyone has a spare moment.

