

Plerion

Member Since 02 Nov 2009
Offline Last Active Feb 07 2015 03:37 PM

Topics I've Started

Using 2x color modulation in FFP

26 January 2015 - 03:03 PM

Hello guys

 

I'm asking this question for a friend of mine. He has an old program that still uses the fixed-function pipeline (FFP) for rendering in OpenGL. It all still works, but there is a new feature he'd like to implement. Every vertex has a color value, but it is modulated in a different way. In GLSL, in the fragment shader, it would be something like:
 

gl_FragColor.rgb = input_color.rgb * 2.0 * texture_color.rgb;

So essentially a vertex color of 0x7F7F7F7F would just return the texture color (0x7F / 255 ≈ 0.5, and 0.5 × 2 ≈ 1.0), whereas 0xFFFFFFFF would double all channels of the texture.

 

The color values are bound to a buffer and sent to the FFP using glColorPointer. In DirectX I remember there was a texture stage state that allowed specifying MODULATE_2X as the color operation. Is there something similar in OpenGL?
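For reference, a minimal sketch of the two pieces involved, with placeholder names (colorVbo, device) that are not from the actual program: the OpenGL color array setup, and the Direct3D 9 fixed-function stage state I am thinking of:

// OpenGL fixed-function: feed per-vertex RGBA colors from a bound VBO (colorVbo is a placeholder).
glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, nullptr);

// Direct3D 9 fixed-function counterpart of the 2x modulation (device is an IDirect3DDevice9*):
// D3DTOP_MODULATE2X multiplies the diffuse vertex color with the texture color and doubles the result.
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE2X);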

 

Greetings

Plerion


Getting invalid instance matrices in vertex shader

24 January 2015 - 06:04 PM

Hello all

 

I'm using instancing to draw the opaque parts of heavily repeated objects. However, I am running into some problems reading the instance data.

 

My input structure for the vertex shader looks like this:

struct VertexInput
{
	float3 position : POSITION0;
	float4 boneWeights : BLENDWEIGHT0;
	int4 bones : BLENDINDEX0;
	float3 normal : NORMAL0;
	float2 texCoord : TEXCOORD0;
	float2 texCoord2 : TEXCOORD1;

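	// per-instance data: one row of the instance's world matrix per attribute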
	float4 mat0 : TEXCOORD2;
	float4 mat1 : TEXCOORD3;
	float4 mat2 : TEXCOORD4;
	float4 mat3 : TEXCOORD5;
};

In order to get the position (before view and projection) I do the following:

VertexOutput main(VertexInput input) {
	float4x4 matInstance = float4x4(input.mat0, input.mat1, input.mat2, input.mat3);

	// bone & animation stuff

	position = mul(position, matInstance);
	// ...
}

The animation code and the per-vertex input data are correct: if I modify the last line to position = position.xyz + eyePosition + float3(100, 0, 0); the elements appear correctly in front of my camera.

 

I have checked with the graphics debugger, and in my opinion the input data looks correct (I'm not showing the per-vertex data, since that is working):

Instance buffer (I checked, it is bound):

[screenshot: instance buffer contents]

 

 

Input layout:

[screenshot: input layout configuration]

 

I'm using the DrawIndexedInstanced function.
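For comparison, here is a minimal sketch of how the per-instance matrix rows are usually declared in the input layout and stepped once per instance; the slot numbers, offsets, and variable names below are illustrative placeholders, not my actual setup:

D3D11_INPUT_ELEMENT_DESC layout[] = {
	// per-vertex data from slot 0
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	// ... remaining per-vertex elements ...

	// per-instance matrix rows from slot 1, advanced once per instance (step rate 1)
	{ "TEXCOORD", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
	{ "TEXCOORD", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
	{ "TEXCOORD", 4, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
	{ "TEXCOORD", 5, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// one index-buffer draw repeated instanceCount times (context, indexCount, instanceCount are placeholders)
context->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);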

 

The result is completely wrong, however:

[screenshot: incorrect rendering result]

 

Where should I begin looking? What could be the reason for this strange behavior?

 

Thanks in advance,

Plerion


Passing cube normals to shader

09 January 2015 - 06:50 PM

Hello all

 

I'm rendering cubes. Per cube I'm using 8 vertices and 36 indices, as one might expect. The problem I'm currently facing is passing the matching normals to the shader. Putting them in the vertex buffer seems impractical, since each vertex has 3 independent normals. My first guess was to send a vec3 array as a uniform to the shader and index it with gl_VertexID, but since that is not an option in WebGL I'm kind of out of ideas.

 

Is the best way to use 36 vertices, or is there a simpler way to accomplish this? Essentially I could use the averaged normal on each vertex, and then the average of the 4 vertices of each face would be correct again. But obviously I can only access the single normal of the vertex; in the fragment shader it is the interpolated value, not the average of the 4 vertices of the quad.
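To make the vertex-duplication option concrete, here is a minimal sketch (with made-up names) of the indexed variant: 4 vertices per face, 24 per cube, each carrying its face normal, still drawn with 36 indices:

#include <cstdint>

struct CubeVertex {
	float position[3];
	float normal[3];   // one normal per face, duplicated onto that face's 4 vertices
};

// Example: the +X face of a unit cube. The other five faces follow the same pattern,
// giving 6 * 4 = 24 vertices and 6 * 6 = 36 indices in total.
static const CubeVertex kPosXFace[4] = {
	{ { 1, 0, 0 }, { 1, 0, 0 } },
	{ { 1, 1, 0 }, { 1, 0, 0 } },
	{ { 1, 1, 1 }, { 1, 0, 0 } },
	{ { 1, 0, 1 }, { 1, 0, 0 } },
};

static const uint16_t kPosXIndices[6] = { 0, 1, 2, 0, 2, 3 };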

 

Thanks for any tips

Cromon


Invalid bone transformations in skinned mesh

11 October 2014 - 09:55 AM

Hello all

 

I am using skinned meshes with hierarchical bones in my application. Strangely, I get rather mixed results for different models. The problem right now is that I am not sure whether I am reading the values wrong or doing the math wrong. Let me first show you two videos of different models:

 

N°1:

https://www.dropbox.com/s/n6r7wzyfxdw20rl/2014-10-11_17-49-35.mp4?dl=0

 

As you can see it doesn't look that bad, yet there are strange bumps in the animation and the character seems to be moving up and down as well.

 

N°2:

https://www.dropbox.com/s/qgn785i5x7y1jhn/2014-10-11_17-50-59.mp4?dl=0

 

For this nice fella, however, the animations seem to be completely wrong...

 

The main code I am using to calculate my matrices looks like this:

void M2AnimationBone::updateMatrix(uint32 time, uint32 animation, Math::Matrix& matrix, M2Animator* animator) {
	auto position = mTranslation.getValueForTime(animation, time, animator->getAnimationLength());
	auto scaling = mScaling.getValueForTime(animation, time, animator->getAnimationLength());
	auto rotQuat = mRotation.getValueForTime(animation, time, animator->getAnimationLength());

	matrix = mPivot * Math::Matrix::translation(position) * Math::Matrix::rotationQuaternion(rotQuat) * Math::Matrix::scale(scaling) * mInvPivot;

	if (mBone.parentBone >= 0) {
		matrix = matrix * animator->getMatrix(time, mBone.parentBone);
	}
}

With getMatrix like this:

const Math::Matrix& M2Animator::getMatrix(uint32 time, int16 matrix) {
	assert(matrix >= 0 && (uint32) matrix < mBones.size());

	if (mCalculated[matrix]) {
		return mMatrices[matrix];
	}

	auto& mat = mMatrices[matrix];
	mBones[matrix]->updateMatrix(time, mAnimationId, mat, this);
	mCalculated[matrix] = true;

	return mat;
}

I've been looking through several tutorials and explanations online and found what seem to be, in my opinion, several different versions of this. In particular, the pivot handling seems to be a bit different everywhere. Am I doing it the right way?
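For example, one common column-vector formulation I have seen looks roughly like this (reusing the helper names from the code above; pivot and parentGlobal are placeholders, and with row vectors the whole multiplication order is reversed):

// Column-vector convention: the rightmost matrix is applied to the vertex first.
Math::Matrix local =
	Math::Matrix::translation(pivot) *            // 5. move back to the pivot
	Math::Matrix::translation(position) *         // 4. animated translation
	Math::Matrix::rotationQuaternion(rotQuat) *   // 3. animated rotation
	Math::Matrix::scale(scaling) *                // 2. animated scale
	Math::Matrix::translation(-pivot);            // 1. bring the pivot to the origin

Math::Matrix global = parentGlobal * local;       // parent applied after the local transform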

 

Thanks for any help

Plerion


Creating readable mipmaps in D3D11

05 August 2014 - 04:05 PM

Hello all

 

For my project I have developed my own texture format, and I'm currently writing a program that converts PNG images into that format, including their precalculated mipmap layers. I thought I'd use D3D11 to calculate the mipmaps, since so far I've been using the mipmaps created by the engine itself for the textures and just reading the actual data from the texture. To do so I first create a texture with the appropriate flags and bindings to generate mipmaps, then copy it to a texture that can be read from the CPU. I then use squish to compress these layers into (right now statically) DXT1.

 

In code this means:

	std::vector<uint8> img = createImage(file, w, h);
	/* snippet removed: getting layer count -> it works */

	D3D11_TEXTURE2D_DESC texDesc = { 0 };
	texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
	texDesc.CPUAccessFlags = 0;
	texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	texDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
	/* removed obvious like array size, usage, and so on, it all works */

	ID3D11Texture2D* mipTexture = nullptr;
	
	massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &mipTexture)));
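	// upload the top-level image; GenerateMips below fills in all lower levels on the GPU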
	gImageCtx->UpdateSubresource(mipTexture, 0, nullptr, img.data(), w * 4, 0);
	ID3D11ShaderResourceView* srv = nullptr;
	/* snippet removed, obvious SRV creation, same mip levels, same format */
	massert(SUCCEEDED(gImageDevice->CreateShaderResourceView(mipTexture, &srvd, &srv)));
	gImageCtx->GenerateMips(srv);

	texDesc.BindFlags = 0;
	texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
	texDesc.MiscFlags = 0;
	texDesc.Usage = D3D11_USAGE_STAGING;

	ID3D11Texture2D* cpuTexture = nullptr;
	massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &cpuTexture)));

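	// copy every mip level of the GPU texture into the CPU-readable staging texture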
	//gImageCtx->CopyResource(cpuTexture, mipTexture);
	for (uint32 i = 0; i < numLayers; ++i) {
		gImageCtx->CopySubresourceRegion(cpuTexture, i, 0, 0, 0, mipTexture, i, nullptr);
	}
	/* snippet removed, opening the file (binary) and writing the header */

	for (uint32 i = 0; i < numLayers; ++i) {
		D3D11_MAPPED_SUBRESOURCE resource;
		massert(SUCCEEDED(gImageCtx->Map(cpuTexture, i, D3D11_MAP_READ, 0, &resource)));
		uint32 cw = std::max<uint32>(w >> i, 1);
		uint32 ch = std::max<uint32>(h >> i, 1);

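		// note: this copy assumes tightly packed rows (resource.RowPitch == cw * 4)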
		std::vector<uint8> layerData(cw * ch * 4);
		memcpy(layerData.data(), resource.pData, layerData.size());
		gImageCtx->Unmap(cpuTexture, i);

		auto compSize = squish::GetStorageRequirements(cw, ch, squish::kDxt1);
		std::vector<uint8> outData(compSize);
		squish::CompressImage(img.data(), cw, ch, outData.data(), squish::kDxt1);
		os.write((const char*) outData.data(), outData.size());
	}

While this works fine for the first layer, I have some problems with the subsequent mip levels. For the first layer see:

[screenshot: first mip level]

(RGBA vs BGRA, aka D3D11 vs Chromium)

 

Now, for example, the second layer already looks bad. See here:

Layer 1:

[screenshot: mip layer 1]

Layer 2:

[screenshot: mip layer 2]

Layer 3:

[screenshot: mip layer 3]

and so on

 

As you can see, I'm not happy with how things look after layer 1. This is also visible when I'm using said texture; it looks very bad:

[screenshot: texture applied in the scene]

 

Am I doing something wrong, or is that just... uhm... the way D3D creates mip levels? Are there good alternatives to D3D for creating the mipmaps?

 

Any help or hints are much appreciated. I wish you a nice evening (or whatever time of the day applies to you ;))

Plerion

