

IkarusDowned

Member Since 08 Mar 2009
Offline Last Active Oct 23 2014 04:05 AM

Topics I've Started

Suggestions for a node-based shader / material tool and integration

27 August 2014 - 04:28 AM

Hi All,

 

I'm looking for an existing node-based shader / material creation tool (much like the ones in Unity and Unreal Engine) whose output I can tie into a game engine. Specifically, I need to be able to input a texture, run a shader on it, and output a texture.

 

Our current restriction is that we can't use Unity / Unreal Engine (duh), but we'd like to give artists similar power rather than having programmers write every shader by hand.

 

Any suggestions would be welcome!


Sampling Texture2D vs Texture1D?

04 June 2014 - 04:07 AM

Hello guys, 

I've run into another kinda interesting bug which I'm having trouble figuring out.

I'm trying to do some (albeit strange) lighting calculations where the point light diffuse, radius and position are baked (in the case of radius and position, encoded) into 1D textures. As such, I've created 1D textures like so:

bool CreateTexture1DClass::Initialize(ID3D11Device *device, std::vector<UINT8> &perChannelData, int width)
{
	// perChannelData holds 4 bytes (RGBA) per texel, so its size must equal width * 4.
	m_width = width;
	D3D11_TEXTURE1D_DESC desc;
	ZeroMemory(&desc, sizeof(desc));

	desc.Width = m_width;
	desc.MipLevels = desc.ArraySize = 1;
	desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	desc.MiscFlags = 0;
	desc.Usage = D3D11_USAGE_DYNAMIC;
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

	// For a 1D texture the pitch is just the total byte size of the single row.
	D3D11_SUBRESOURCE_DATA subresource;
	subresource.SysMemSlicePitch = 0;
	subresource.SysMemPitch = static_cast<UINT>(perChannelData.size() * sizeof(UINT8));
	subresource.pSysMem = &perChannelData[0];

	HRESULT result = device->CreateTexture1D(&desc, &subresource, &m_pTexture);
	if (FAILED(result))
	{
		return false;
	}

	// Zero the view desc too, so no field is left uninitialized.
	D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
	ZeroMemory(&shaderResourceViewDesc, sizeof(shaderResourceViewDesc));
	shaderResourceViewDesc.Format = desc.Format;
	shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE1D;
	shaderResourceViewDesc.Texture1D.MostDetailedMip = 0;
	shaderResourceViewDesc.Texture1D.MipLevels = 1;
	result = device->CreateShaderResourceView(m_pTexture, &shaderResourceViewDesc, &m_shaderResourceView);
	if (FAILED(result))
	{
		return false;
	}
	return true;
}
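For context, here's a hypothetical call site (my own illustration; it assumes perChannelData packs four UINT8 channels per texel, so a width-2 texture needs 8 bytes):

// Hypothetical usage: a 2-texel RGBA8 texture, red then blue.
std::vector<UINT8> perChannelData = {
	255,   0,   0, 255,  // texel 0: red
	  0,   0, 255, 255   // texel 1: blue
};
CreateTexture1DClass pointLightDiffuse;
if (!pointLightDiffuse.Initialize(device, perChannelData, 2))
{
	// handle the failure
}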

Now, for testing purposes I pre-loaded the color and radius into textures, then wrote them out as 1D DDS files to see if my logic was right. The good news is, it worked!

 

Now, on to using them in my shaders. As I said, I'm trying to do something (possibly) weird: render the point-light diffuse values to a texture as part of the deferred step, but sample the values in the domain shader (since I have new vertices being generated there). Here's how I've declared my textures and samplers:

Texture2D tex; 
Texture2D normalMap; 
Texture2D specularMap; 

Texture2D displacementMap; 
Texture1D pointLightDiffuseMap;
Texture1D pointLightPositionMap;
Texture1D pointLightRadiusMap;

SamplerState PixelSampler;
SamplerState DomainSampler;

In my domain shader, for now I'm just doing the following:

output.pointDiffuseAdd = pointLightDiffuseMap.SampleLevel(DomainSampler, 0, 0);

This gets piped straight through to the pixel output, so I end up with a deferred render target filled with a flat color.

Now, the problem is this: the pointLightDiffuseMap at the moment is just a 1D texture holding two texels: red and blue. If my understanding is correct, with SampleLevel's second parameter (the texture coordinate) being 0, I should get back a full-red image. However, what I get back is a deep shade of purple! It almost looks like it's blending the red and blue. Any ideas as to what could cause this? Is my sampler state set up wrong?
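For reference, here's a minimal sketch of sampling at explicit texel centers (the 0.25 / 0.75 coordinates are my assumption for a 2-texel texture, where texel i's center sits at (i + 0.5) / width):

// Assuming a 2-texel 1D texture: texel centers are at (i + 0.5) / width.
// Coordinate 0 sits on the boundary between texels, so linear filtering
// with WRAP addressing can blend texel 0 with texel 1.
float4 red  = pointLightDiffuseMap.SampleLevel(DomainSampler, 0.25f, 0); // texel 0
float4 blue = pointLightDiffuseMap.SampleLevel(DomainSampler, 0.75f, 0); // texel 1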

 

Here's a sampler state setup:

D3D11_SAMPLER_DESC samplerDesc;
// Point minification, linear magnification and mip filtering.
samplerDesc.Filter = D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

device->CreateSamplerState(&samplerDesc, &m_displacementSampler);

P.S. I do check the HRESULT to make sure there's no error; I've just omitted the checks here to reduce clutter.

 

Also, here's how I'm binding my textures to their respective slots (the sampler binding is sketched after the listing):

deviceContext->PSSetShaderResources(0, 1, &diffuseTexture);
deviceContext->PSSetShaderResources(1, 1, &normalMap);
deviceContext->PSSetShaderResources(2, 1, &specularMap);

deviceContext->DSSetShaderResources(0, 1, &displacementMap);
deviceContext->DSSetShaderResources(1, 1, &pointLightDiffuseMap);
deviceContext->DSSetShaderResources(2, 1, &pointLightPositionMap);
deviceContext->DSSetShaderResources(3, 1, &pointLightRadiusMap);
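The sampler binding itself isn't shown above; a minimal sketch of what it would look like (the slot numbers and the m_pixelSampler name are my assumptions, and the slots must match the register assignments in the shaders):

// Hypothetical slots: these must match register(s0) etc. in the HLSL.
deviceContext->PSSetSamplers(0, 1, &m_pixelSampler);
deviceContext->DSSetSamplers(0, 1, &m_displacementSampler);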

Oddly enough, none of the other code seems affected by the weirdness from the 1D texture sampling. Any ideas as to what I could be doing wrong?


DX11: Pixel Shader not outputting anything to Render Target when domain shader is used?

29 May 2014 - 03:57 AM

Hey all, I've been studying up on DX11 and various graphical things using the rastertek tutorials:

http://www.rastertek.com/tutdx11.html

They've been great in helping me understand the various parts of DX11, and I wanted to try to combine the Tessellation tutorial with the Deferred Rendering tutorial by having the pixel shader output to SV_Target0, SV_Target1, etc.

 

However, whenever I modify the code to output to the render targets, I get absolutely NOTHING sent to the targets... not even the clear color. But if I change the pixel shader output from the custom PixelOutputType { float4 color : SV_Target0; } to just a float4 and render the scene straight to the default buffer (the screen), everything shows up just fine. (A sketch of the MRT output I mean is below.)
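For reference, this is the shape of the multiple-render-target output I'm describing (a sketch; the second target and the PixelInputType / DeferredPixelShader names are illustrative):

// Sketch of a multiple-render-target pixel shader output.
struct PixelOutputType
{
	float4 color  : SV_Target0;
	float4 normal : SV_Target1;  // illustrative second target
};

PixelOutputType DeferredPixelShader(PixelInputType input)
{
	PixelOutputType output;
	output.color  = float4(1.0f, 0.0f, 0.0f, 1.0f);
	output.normal = float4(0.0f, 0.0f, 1.0f, 0.0f);
	return output;
}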

 

Also, if I just comment out the HSSetShader() and DSSetShader() calls, I at the very least get the clear color in the output. Is there something I'm missing that tells the hull and domain shaders to pass their results on to the pixel shader?

 


Beginner having trouble with transformations in a model hierarchy

01 February 2011 - 09:06 AM

Hi All,
I'm an "old" beginner...someone who played around with OGL and graphics a LONG time ago, and i'm trying to re-learn this stuff again..but running into some conceptual problems.

So I tried searching the forums, but I couldn't figure out a viable search string to get the results I needed, so I do apologize if this is a re-post.

Anyway, at the moment I'm trying to create a scene with only a few objects. I'll simplify it for the question: I have a "scene" with a big Box A, and inside it a small Box B that is rendered relative to Box A's coordinates.

However, Box A and Box B use the same model: a single box that is 10 units on a side.
Let's ignore transparency and assume it's a wire-frame box.

So, Box A -> Box B

The way I'm doing this in OpenGL at the moment is to use the relevant gl<Transformation>() functions.

So, for example, in pseudocode:

void drawNestedBox() {
    glPushMatrix();
    boxA.draw();                  // Box A in scene coordinates
    glTranslatef(1.0, 0.0, 0.0);  // move into Box A's local frame...
    glRotatef(5.0, 1.0, 0.0, 0.0);
    glScalef(0.5, 0.5, 0.5);
    boxB.draw();                  // Box B drawn relative to Box A
    glPopMatrix();
}

The conceptual issue I am having here is: what happens if I want to get, say, a vertex or normal vector from Box B into the scene's model space? I understand I need to build a change-of-basis matrix, but I can't seem to find any good REAL examples of this sort of thing; they all seem to hand-wave the explanation without giving concrete examples. Does anyone have good resources, or would anyone be willing to explain it using the example above? (A sketch of what I mean follows below.)
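Here's a minimal sketch of what I mean, in plain C++ (the helper is my own illustration, mirroring the translate/rotate/scale calls above rather than any library API):

#include <cmath>

struct Vec3 { float x, y, z; };

// Maps a point from Box B's local space into Box A's space by applying,
// innermost first, the same transforms used before drawing Box B:
// scale(0.5), then rotate 5 degrees about X, then translate(1, 0, 0).
Vec3 boxBToBoxA(const Vec3 &p)
{
	const float rad = 5.0f * 3.14159265f / 180.0f;
	const float c = std::cos(rad), s = std::sin(rad);

	Vec3 q = { p.x * 0.5f, p.y * 0.5f, p.z * 0.5f };        // glScalef
	Vec3 r = { q.x, c * q.y - s * q.z, s * q.y + c * q.z }; // glRotatef about X
	return { r.x + 1.0f, r.y, r.z };                        // glTranslatef
}

Is this the right way to think about it? (For normals, I gather you'd drop the translation and, for non-uniform scale, use the inverse-transpose instead.)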

For more info: I want to build a scene graph, and I was planning on using something like a quad tree. But I'm having trouble wrapping my head around the following: if I have a model with sub-meshes, and the mesh itself is glScalef()'d down, how do I get the sub-mesh vertex information into model space so I can figure out whether it's relevant?

Thanks in advance.





