
Waaayoff

Member Since 11 Sep 2009
Offline Last Active Aug 23 2014 05:29 AM

#5075694 Precomputed Atmospheric Scattering - Irradiance table?

Posted by Waaayoff on 06 July 2013 - 05:43 AM

I'm trying to implement Bruneton's algorithm, but I'm not sure the irradiance results are correct. The output texture is black, but the values aren't zero. If I multiply them by 100 I get the following:

 

 

[attached image: yqda.png]

 

This closely resembles the image found in this implementation by Stefan Sperlhofer. So my question is: is the table supposed to have such low values? Because the transmittance and inscatter tables don't.

 

Also, in the algorithm the single-scattering irradiance is multiplied by zero and discarded... why calculate it in the first place?




#5074106 a = b = new c?

Posted by Waaayoff on 30 June 2013 - 04:09 AM

 

Also, I didn't realize it was that unclear to read...

Then why did you create this thread? ;-)

You weren't sure how it works; that's IMHO good proof that it isn't clear to read (at least for you, and it's your code...).

 

 

Yes, but that's like saying I shouldn't use smart pointers because they're not readable to me, since I don't know how they work. I get what you guys are saying about readability, but I thought this was a simple assignment operation that was common knowledge for non-beginners (unlike me).
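For what it's worth, the chained assignment the thread is about can be sketched like this (a minimal C++ example; `Widget` and `chainAssign` are illustrative names, not code from the original thread):

```cpp
#include <cassert>

struct Widget { int value; };

// `a = b = p` parses as `a = (b = p)`: assignment is right-associative,
// and the inner assignment yields b's new value, so both pointers end
// up aliasing the same single object.
void chainAssign(Widget*& a, Widget*& b, Widget* p)
{
    a = b = p;
}
```

Since both pointers end up referring to one object, only one delete is needed afterwards, which is part of why chained assignment of raw pointers can trip up readers.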




#5073525 Terrain lighting seams?

Posted by Waaayoff on 28 June 2013 - 02:19 AM

 


I was thinking maybe it's because my normals are being interpolated when sampling the chunk normal map in the pixel shader?
Sounds reasonable. If you're using REPEAT texture addressing, you're set up for surprises. The real issue is that you don't assemble chunks to make a big terrain; you slice a big terrain into chunks. All boundary samples must fetch across chunks. Seamless chunks hide the problem at an extreme authoring cost and limitation; don't do that.

 

 

I already use neighbouring chunk vertices for border cases, if that's what you mean.

 

 

Is this seam there if you just go by normals, no normal maps?

 

What do your texture coordinates look like?

 

(Your normal map isn't in a texture atlas, is it?)

 

Yes, I just tested it and the normals look fine when passed with the vertex, so I guess it is a problem with the texture sampling. Any idea how to fix it? Or rather, why not just pass the normals instead of using normal maps? Does bump mapping even work when the texture resolution is the same as the vertex resolution?

 

And no, I'm not using an atlas; each chunk has a separate texture.
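One common fix for border seams with a separate normal map per chunk is to sample at texel centers with a half-texel inset, and to duplicate the shared border row of normals into both neighbouring maps so each side samples identical values at the edge. A sketch of the UV mapping, under the assumption that the map has one texel per vertex (`chunkUV` is a hypothetical helper, not code from this thread):

```cpp
#include <cassert>
#include <cmath>

// Map vertex index i (0..verts-1) of a chunk edge to a texture coordinate
// that lands exactly on texel centers, inset by half a texel so the border
// row of one chunk samples the same texel as its neighbour's duplicate row.
// Assumes the normal map has one texel per vertex (texSize == verts).
float chunkUV(int i, int verts, int texSize)
{
    float texel = 1.0f / texSize;
    float span  = 1.0f - texel; // range between first and last texel centers
    return 0.5f * texel + span * (float(i) / (verts - 1));
}
```

With CLAMP addressing and duplicated border texels, bilinear filtering then produces the same interpolated normal on both sides of the seam.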

 

 

If you are using Bézier patches, make sure you are joining your patches with at least C1 continuity (matching tangents across the shared edge, not just matching positions).

 

I don't even know what that means. :P




#4991567 Effect/Shader system design :: variable/cbuffer system

Posted by Waaayoff on 18 October 2012 - 03:52 PM

Hello,

After much thought I approached the problem like this (note that this is largely based on the Hieroglyph 3 engine):

I have a ShaderParameter abstract class that every variable type (vector, matrix, texture, etc.) inherits from. The derived classes (one per type) contain the data. You can call them to get/set the data and query the type. This class also has an update ID (more on this in a second).

Next I have a class that the user interacts with to set shader parameters. The variables are created when a new shader is compiled and stored in an std::map (or a hash map if you want).

Now I divide my parameters into three 'types', just like the D3D API: constant buffers, textures and sampler states. A shader class has three vectors that contain wrappers around the D3D parameters. The constant buffer wrapper has a pointer to the actual buffer and a list of descriptions of the variables that belong in that buffer. The description struct looks like this:

struct VariableDesc
{
    ShaderParameter* pParameter; // pointer to the variable
    unsigned int     UpdateID;
    unsigned int     Offset;     // offset into the buffer
};

Now the UpdateID variable is used to check whether the buffer needs to be updated. Whenever the user sets a new value for a variable, the ShaderParameter's update ID is incremented. When you bind a constant buffer, you loop through its variables, and if a VariableDesc's UpdateID doesn't match its ShaderParameter's update ID, you remap the buffer's contents. This is a simple optimization.
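A minimal, self-contained sketch of that update-ID scheme (the names below, including the `uploads` counter standing in for the real Map/Unmap path, are illustrative and not the engine's actual API):

```cpp
#include <cassert>
#include <vector>

// A parameter bumps its id on every write; the buffer remembers the id it
// last uploaded for each variable and only re-uploads when they disagree.
struct ShaderParameter
{
    float        data[4] = {};
    unsigned int updateID = 0;

    void Set(const float* v)
    {
        for (int i = 0; i < 4; i++) data[i] = v[i];
        ++updateID; // mark the parameter dirty
    }
};

struct VariableDesc
{
    ShaderParameter* pParameter;
    unsigned int     UpdateID; // id at last upload
    unsigned int     Offset;
};

struct ConstantBuffer
{
    std::vector<VariableDesc> vars;
    int uploads = 0; // stands in for the real Map/Unmap of the D3D buffer

    void Bind()
    {
        bool dirty = false;
        for (auto& v : vars)
            if (v.UpdateID != v.pParameter->updateID) { dirty = true; break; }

        if (!dirty) return; // nothing changed since last bind

        for (auto& v : vars) // "remap" and resync the ids
            v.UpdateID = v.pParameter->updateID;
        ++uploads;
    }
};
```

Binds between writes then cost only an integer comparison per variable, and the buffer is re-uploaded at most once per change.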

Next up you will have a RenderEffect. This is a simple POD struct that contains shaders and render states. It is passed to the Pipeline class which is responsible for binding everything. The RenderEffect doesn't really need to know about the details and so it has no functions.

Finally you have a ShaderStage. This one is an intermediary between the Shader class and the Pipeline class. It simply exists to take care of redundant API calls. Since the Shader class has a list of wrappers in contiguous memory and not the actual D3D pointers, the ShaderStage collects the D3D data and binds all shader resources in 4 calls. One for the shader, one for the cbuffers, one for the textures and one for the samplers.

I can't post code now, but if you want I will when I get home. (Be warned though, there's a LOT of it, since we're talking about the core of a rendering engine.)


#4983348 Draw a Textured Quad

Posted by Waaayoff on 24 September 2012 - 02:35 PM

I see that you are not doing any error checking. I realize you're a beginner, but it can help a lot: most DirectX functions return an error value if they fail, and you should check for it :)

I believe you forgot the texture file extension in this line:
D3DXCreateTextureFromFile(d3ddev, L"ParticleSmokeCloud64x64", &d3dtex);


#4980094 D3D11: CORRUPTION: ID3D11DeviceContext::ClearRenderTargetView: First paramete...

Posted by Waaayoff on 14 September 2012 - 09:52 AM

Insert a breakpoint at the ClearRenderTargetView function line and run it. Does g_pRenderTargetView point to NULL?

If so, the render target wasn't created properly. Also make sure you're setting the render target beforehand.


#4974190 Terrain normals messed up, don't know why?

Posted by Waaayoff on 28 August 2012 - 11:53 AM

Oh. My. God. Kill. Me. Now.

You were right. I forgot about some debugging code I left in my renderer where I bind the shader parameters only once. Changed that and everything worked. I feel so stupid right now.

Anyway, thank you!!!!!!!!


#4953944 Chunked LOD vertex data?

Posted by Waaayoff on 29 June 2012 - 06:56 AM

I don't know if it's relevant, but I'm implementing the Rendering Very Large, Very Detailed Terrains version.


#4952380 assimp nodes?

Posted by Waaayoff on 24 June 2012 - 11:42 AM

... I just have three more questions. in your GLSL vertex shader I assume your "joints = bones"? ...


Yes. These terms are used interchangeably in animation.


#4950887 assimp nodes?

Posted by Waaayoff on 20 June 2012 - 02:58 AM

Bone indices indicate which bones affect a certain vertex the most. For example, assume you're processing a vertex on your player's elbow. That vertex will probably only have 2 bone indices (and hence 2 weights, since #indices = #weights). One bone index will be for the upper arm bone, and one for the lower arm bone. Now these indices are used to retrieve bone matrices from the entire skeleton, which is defined as FinalTransforms in my shader, and as bonesMatrix in larspensjo's.

Simply put, bone indices are just numbers that let you know where a certain bone's transformation is in the matrix array. That way you can retrieve the required matrix and multiply it by the weight.

As for my shader, you can ignore the loop. It just ensures that the total weight is unity if the vertex is affected by fewer than 4 bones. I have no idea why I didn't do that in the preprocessing. Just make sure the total bone weight for a vertex is equal to one when you're importing the data from assimp, and use larspensjo's shader :)
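That import-time fix-up can be sketched like this (a minimal example; `normalizeWeights` is an illustrative helper, not code from the thread):

```cpp
#include <cassert>
#include <cmath>

// Normalize one vertex's bone weights so they sum to one during import,
// letting the shader skip the per-vertex fix-up loop entirely.
void normalizeWeights(float* w, int count)
{
    float total = 0.0f;
    for (int i = 0; i < count; i++) total += w[i];
    if (total > 0.0f)
        for (int i = 0; i < count; i++) w[i] /= total;
}
```

Run this once per vertex over the weights you pulled out of the aiMesh before filling the vertex buffer.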


#4950425 Making a succes game from a to z

Posted by Waaayoff on 18 June 2012 - 05:50 PM

First things first: it doesn't sound like you have any programming experience whatsoever, so that's where you will start. Choose a programming language and start learning.

As for teaming up with somebody, I don't encourage it. Not until you know what you're doing.

I really don't want to put you down, but you have to be realistic about these things. I started programming about 4 years ago (with a lot of breaks). Only now do I think that I am ready to build a *functional* and *complete* game. Even now, 5 months after starting my project, I am nowhere near done. Probably not even 10% done. Also, after 4 years, if you ask me what I think about game development, I will say this: it is NOT fun. Sure, getting results can be awesome. But that only comes after months of pain and suffering.

So ask yourself the following question: Am I ready to spend years learning about game development?


#4950424 assimp nodes?

Posted by Waaayoff on 18 June 2012 - 05:29 PM

Find my answer in this thread. I explain how to do the animation, rendering and how to retrieve data from assimp. It also shows how you can overcome the problems mentioned in the above post :)

If you still have questions after reading it, I can answer them here.


#4948249 How to generate the elevation maps for GPU based geometry clipmaps?

Posted by Waaayoff on 11 June 2012 - 01:03 PM

Thanks!

I re-read the article after your answers and now it all makes sense.

Edit: If anyone's interested, I also found this article to be helpful:


#4946151 Creating destructible terrain?

Posted by Waaayoff on 04 June 2012 - 10:34 AM

I just started working on an RPG, and I would really like the terrain to be destructible, like this:

Is it possible to have high-resolution, large destructible terrains in real time and still have enough cycles for the rest of the game?

If so, can you give me some keywords/links to get started with?


#4900668 Vertex Skinning help

Posted by Waaayoff on 08 January 2012 - 10:53 AM

First off, you will need to understand how skinning animation works. This is an excellent article that explains it. Try to understand the theory and ignore the DirectX specifics.

Now, assuming you've read the article, we will write the code. First I will show you the animation and rendering code, then I will show you how to get the data from Assimp.

Step 1: The animating

Here's my animation class. I will explain it shortly.

Animation.h
#ifndef Animation_h
#define Animation_h

#include <D3dx10math.h>
#include <vector>

// Holds the data for a single keyframe
struct KeyFrame
{
    float          time;
    D3DXVECTOR3    T; // Translation
    D3DXVECTOR3    S; // Scale
    D3DXQUATERNION R; // Rotation
};

//=====================================================================================================
// Represents a single bone or joint, whatever you want to call it. It contains a list of children
// bones, its offset matrix and an animated matrix which is updated every frame.
struct Joint
{
    D3DXMATRIX          mOffsetTransf;
    D3DXMATRIX          mAnimatedTransf;
    std::vector<Joint*> mChildren;
};

//=====================================================================================================
// Holds the entire animation data (list of keyframes) for a specific joint.
struct Channel
{
    Joint*                mJoint;
    std::vector<KeyFrame> mKeyFrames;
};

//=====================================================================================================
class Animation
{
public:
    Animation();
    void Update(float delta);

    // NB: the returned pointer is invalidated by the next CreateChannel call
    // (vector reallocation), so reserve() up front or fill the channel immediately.
    Channel* CreateChannel()
    {
        mChannels.push_back(Channel());
        return &mChannels.back();
    }

    void SetDuration(double Duration)
    {
        mDuration = Duration;
    }

private:
    void InterpolateJoint(const KeyFrame& k0, const KeyFrame& k1, D3DXMATRIX& out);

    double mDuration; // in ms
    float  mTime;

    // List of bone channels. Note that some bones may not be animated and thus have no channel.
    std::vector<Channel> mChannels;
};

#endif // Animation_h


Animation.cpp
#include "Animation.h"

Animation::Animation() :
    mDuration(0),
    mTime(0)
{
}

//=====================================================================================================
// Update animation
//=====================================================================================================
void Animation::Update(float delta)
{
    mTime += delta;
    if (mTime >= mDuration)
        mTime = 0;

    for (unsigned i = 0; i < mChannels.size(); i++)
    {
        std::vector<KeyFrame>& keyFrames = mChannels[i].mKeyFrames;

        // Figure out where we are in our animation
        unsigned k0 = 0;
        while (true)
        {
            // If we have reached the end of our animation, do nothing
            if (k0 + 1 >= keyFrames.size())
                break;

            // The current time is less than that of the next keyframe:
            // interpolate the two keyframes accordingly.
            if (mTime < keyFrames[k0 + 1].time)
            {
                InterpolateJoint(keyFrames[k0], keyFrames[k0 + 1], mChannels[i].mJoint->mAnimatedTransf);
                break;
            }

            k0++;
        }
    }
}

//=====================================================================================================
// Interpolate joint transformation between 2 keyframes
//=====================================================================================================
void Animation::InterpolateJoint(const KeyFrame& k0, const KeyFrame& k1, D3DXMATRIX& out)
{
    float t0 = k0.time;
    float t1 = k1.time;
    float lerpTime = (mTime - t0) / (t1 - t0);

    D3DXVECTOR3    lerpedT;
    D3DXVECTOR3    lerpedS;
    D3DXQUATERNION lerpedR;
    D3DXVec3Lerp(&lerpedT, &k0.T, &k1.T, lerpTime);
    D3DXVec3Lerp(&lerpedS, &k0.S, &k1.S, lerpTime);
    D3DXQuaternionSlerp(&lerpedR, &k0.R, &k1.R, lerpTime);

    D3DXMATRIX T, S, R;
    D3DXMatrixTranslation(&T, lerpedT.x, lerpedT.y, lerpedT.z);
    D3DXMatrixScaling(&S, lerpedS.x, lerpedS.y, lerpedS.z);
    D3DXMatrixRotationQuaternion(&R, &lerpedR);

    out = R * S * T;
}

Woah.. What's going on there?

The animation class represents an animation track. Every frame, the Update function will be called with the elapsed time, the function will figure out between which 2 keyframes we are in our animation, and call the InterpolateJoint() function to interpolate the two keyframes. The result will be stored in the bone's animated matrix. This process is done for all bone channels.

So now we have animated our individual bones. Next we need to calculate the combined transform for all bones in order to get the final transformations that will be sent to the shader. The following function does just that. It is called with the skeleton's root bone and an identity matrix.


//=====================================================================================================
// Recursively calculate the joints' combined transformations
//=====================================================================================================
void AnimationController::CombineTransforms(Joint* pJoint, const D3DXMATRIX& P)
{
    D3DXMATRIX final = pJoint->mAnimatedTransf * P;

    mFinalTransforms.push_back(pJoint->mOffsetTransf * final);

    for (unsigned i = 0; i < pJoint->mChildren.size(); i++)
    {
        CombineTransforms(pJoint->mChildren[i], final);
    }
}
I have that function inside an AnimationController class that manages all the animation tracks for a single mesh. It holds a list of Animation objects and the root node of the skeleton. Every frame, I ask the AnimationController to update all of its animation tracks and combine the final transformations. I then retrieve mFinalTransforms from the AnimationController and pass them to the skinning shader.


Step 2: The rendering

Here's the skinning shader in HLSL. For simplicity I have omitted lighting calculations and texturing. I also assume that each vertex is affected by a maximum of 4 bones and that there are at most 32 joints in the skeleton.

cbuffer c_buffer
{
    float4x4 World;
    float4x4 WorldViewProj;
    float4x4 FinalTransforms[32];
};

struct VS_OUT
{
    float4 position : SV_POSITION;
};

VS_OUT VShader(float4 position : POSITION, float4 weights : BLENDWEIGHT, int4 boneIndices : BLENDINDICES)
{
    VS_OUT output;

    float4 p = float4(0.0f, 0.0f, 0.0f, 1.0f);
    float lastWeight = 0.0f;

    // You can optimize this by unrolling the loop and making sure the weights
    // add up to 1 during loading instead of in the shader.
    for (int i = 3; i > 0; i--)
    {
        lastWeight += weights[i];
        p += weights[i] * mul(FinalTransforms[boneIndices[i]], position);
    }

    // The first weight is derived so that all four always sum to one.
    lastWeight = 1.0f - lastWeight;
    p += lastWeight * mul(FinalTransforms[boneIndices[0]], position);
    p.w = 1.0f;

    output.position = mul(WorldViewProj, p);
    return output;
}


Step 3: Importing the data using Assimp

Now for the above code to work, we need to retrieve the following information:
- Vertex data (mainly bone indices and vertex weights)
- Skeleton hierarchy
- Bone data
- Keyframes

This step is very project specific, so I won't provide much code.

I'm going to assume that you know how to load a scene using Assimp; if not, see how it's done in the sample. Make sure you pass the following flags to the aiImportFile function:

(aiProcessPreset_TargetRealtime_Quality | aiProcess_ConvertToLeftHanded) & ~aiProcess_FindInvalidData

The first flag is for optimization.
The second one is for DirectX, since it uses a left handed coordinate system.
The third part is used out of convenience. aiProcess_FindInvalidData removes redundant animation keyframes, which means you can end up with different numbers of scaling, rotation and translation keys, forcing you to even them out yourself. I remove aiProcess_FindInvalidData (which is included by default in aiProcessPreset_TargetRealtime_Quality) so that I get the same number of keyframes of each kind.

Once you have imported your scene, you will find an array of aiMesh pointers in the aiScene object. Obtaining the vertex positions, normals and texcoords is straightforward. (Note that mTextureCoords could contain more than one set of texture coordinates).

Inside the aiMesh object you will also find three things we need for skinning: vertex weights, bone indices and bone data. I will leave it up to you to retrieve the first two, but getting the bone data is a bit tricky, so here's some code to do it. I didn't test it, but it should work.

// Build the joint hierarchy by walking the node tree, starting from the node
// that corresponds to the skeleton's root bone. Don't forget to store the root
// because, if you recall, we need it in the AnimationController.
Joint* pRoot = FormHierarchy(g_Scene->mRootNode->FindNode(pMesh->mBones[0]->mName));

// And here's the function. Note that aiBone only stores the offset matrix;
// the hierarchy and local transforms live in the aiNode tree, so we recurse
// over nodes and look each bone up by name.
Joint* FormHierarchy(aiNode* pNode)
{
    Joint* pJoint = new Joint;

    // Find the aiBone matching this node, if any, and copy its offset matrix.
    // FindBoneByName is a small helper of yours that searches pMesh->mBones.
    // (You may need to convert/transpose aiMatrix4x4 to D3DXMATRIX here.)
    aiBone* pBone = FindBoneByName(pNode->mName);
    if (pBone)
        pJoint->mOffsetTransf = pBone->mOffsetMatrix;

    // Initialize mAnimatedTransf with the node's local transformation
    pJoint->mAnimatedTransf = pNode->mTransformation;

    // Now for the children
    for (unsigned int i = 0; i < pNode->mNumChildren; i++)
        pJoint->mChildren.push_back(FormHierarchy(pNode->mChildren[i]));

    return pJoint;
}

Still alive? We're almost done.

Now we need the keyframe data, which is very easy to obtain. Here's some pseudo-code:

for every aiAnimation in the g_Scene
	create an Animation instance

	for every aiNodeAnim in the aiAnimation
		 create a Channel instance

		// removing aiProcess_FindInvalidData should take care of this, but just in case
		assert((node->mNumPositionKeys == node->mNumRotationKeys) && (node->mNumRotationKeys == node->mNumScalingKeys));

		for i = 0, i < node->mNumPositionKeys, i++
			  create a Keyframe instance

			  store everything in the Keyframe. The translation and scaling keys in a vector and the rotation in a quaternion
			  also store the time but don't forget to convert it to milliseconds by multiplying by 1000.

Now store everything in its proper place and you're done!

EDIT: One very important thing I forgot to mention is that you might have to transpose the bone matrices, depending on how you use them in your shader.

I'm not particularly good at explaining stuff, so I'm not sure if that was easy to follow. If you have any questions, don't hesitate to ask.



