

AdeptStrain

Member Since 19 Mar 2012
Offline Last Active Nov 25 2014 02:08 PM

#5178396 3D UI transformation matrix

Posted by AdeptStrain on 05 September 2014 - 02:39 PM

Sounds like you currently have your widgets in screen space. If you want them 3D, you probably want them in world space (again, just out in front of your camera), rotate them there, and then multiply them by your ViewProjection matrix. Your view matrix doesn't make much sense to me: you're treating it like you're still in screen space when you're not.

 

If this is your first foray into 3D graphics, you may want to read up on some linear algebra/matrix transformation tutorials (I don't know any great ones off the top of my head; maybe someone has a good suggestion and can chime in). Or am I misunderstanding your issue?
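To make the world-space route concrete, here's a minimal sketch (Python, not from the original thread) of pushing a world-space widget point through a perspective projection. The row-vector convention and the D3D-style left-handed projection are my assumptions, and the camera is assumed to sit at the origin looking down +Z, so the view matrix is identity:

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    # D3D-style left-handed projection, row-major, for row vectors (v * M).
    # One common convention; verify against whatever math library you use.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, far / (far - near), 1.0],
        [0.0, 0.0, -near * far / (far - near), 0.0],
    ]

def transform(point, m):
    # Treat point as a row vector [x, y, z, 1], multiply by m, then
    # perspective-divide to reach normalized device coordinates.
    x, y, z = point
    v = [x, y, z, 1.0]
    out = [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]
    return [c / out[3] for c in out[:3]]

# A "widget" placed 5 units straight in front of the camera, on the view axis:
# it should land dead-center in NDC, with depth between 0 and 1.
ndc = transform((0.0, 0.0, 5.0), perspective(60.0, 16.0 / 9.0, 0.1, 100.0))
```

In a real renderer you'd fold the view matrix in as well (world * view * projection), but the point is the same: once the widget lives in world space, the ordinary 3D pipeline handles the rest.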




#5175549 How to get rid of camera z-axis rotation

Posted by AdeptStrain on 22 August 2014 - 04:13 PM

Here is my code:

 

Take the mouse deltas as degrees for X and Y:

if (mouseInput.MouseState.RightButton.Down)
{
    basicCamera.RotateX(-mouseInput.MouseState.DeltaX * mouseRotationSpeed);
    basicCamera.RotateY(-mouseInput.MouseState.DeltaY * mouseRotationSpeed);
}

Then do quaternion multiply:

public void RotateX(float _degree_angle)
        {
            if (cameraType == CamType.ThirdPerson)
            {
                _degree_angle = -_degree_angle;
            }

            qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitX, MathUtil.DegreesToRadians(_degree_angle)), qRotation);
            
            UpdateViewMatrix();
        }

        public void RotateY(float _degree_angle)
        {
            if (cameraType == CamType.ThirdPerson)
            {
                _degree_angle = -_degree_angle;
            }
            qRotation = Quaternion.Multiply(qRotation, Quaternion.RotationAxis(Vector3.UnitY, MathUtil.DegreesToRadians(_degree_angle)));
            
            UpdateViewMatrix();
        }

Finally, update the view matrix:

private void UpdateViewMatrix()
        {
            qRotation = Quaternion.Normalize(qRotation);

            if (CameraType == CamType.ThirdPerson)
            {
                ViewMatrix = Matrix.Translation(Vector3.Negate(position)) * Matrix.RotationQuaternion(qRotation);

                position -= new Vector3(ViewMatrix.M13, ViewMatrix.M23, ViewMatrix.M33) * radius;

                ViewMatrix = Matrix.LookAtLH(position, lookAt, new Vector3(ViewMatrix.M12, ViewMatrix.M22, ViewMatrix.M32));
            }
            else
            {
                ViewMatrix = Matrix.Translation(Vector3.Negate(position)) * Matrix.RotationQuaternion(qRotation);
            }
        }

P.S. The quaternion multiply order in the Y rotation is reversed because it prevents the camera from rolling around its z-axis, but it causes the camera to turn upside down when moving diagonally. If I don't change the order, diagonal movement is fine, but the camera rolls around its z-axis every time I rotate it around the x and y axes.

 

You aren't keeping simple angle accumulators and building the matrix from scratch each frame. You're modifying "qRotation" every time you call RotateX/RotateY, so if you rotate 45 degrees on the X and Y, the next frame the same input rotates from within that previous "diagonal" space. Building it from scratch each frame prevents you from ever rotating outside of the fixed X/Y/Z axes.




#5175326 How to get rid of camera z-axis rotation

Posted by AdeptStrain on 21 August 2014 - 12:47 PM

If you're trying to do a 3rd person camera, I'll echo Álvaro and suggest you just keep track of camera yaw, pitch, and position, then construct the transform each frame.

 

Pseudo code:

class Camera
{
 float m_yaw; // Should be a value between 0 and 2 PI
 float m_pitch; // Value between -PI/2 and PI/2
 Vector3 m_position;
 Matrix  m_cameraMatrix;
 
 void Update()
 {
 	Vector2 deltaMousePos; // Assume this is a 2D vector that has the delta X/Y of our mouse cursor since the last frame.
 	
 	m_yaw += deltaMousePos.x;   // Wrap into [0, 2 PI)
 	m_pitch += deltaMousePos.y; // Clamp into [-PI/2, PI/2]
 	
 	// Construct our rotation based on our current yaw/pitch
 	Quaternion yRotation(Vector3::UnitY, m_yaw);
 	Quaternion xRotation(Vector3::UnitX, m_pitch);
 	
 	m_cameraMatrix = Matrix::Identity;
 	m_cameraMatrix *= (xRotation * yRotation);
 	m_cameraMatrix.translation = m_position; // Restore our position.
 }
 
}
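The pseudocode above translates almost line-for-line into a runnable sketch. Here's one in Python with hand-rolled quaternion math; the clamp range and helper names are my own, not part of the post:

```python
import math

def quat_from_axis_angle(axis, angle):
    # axis is a unit (x, y, z) tuple; returns (w, x, y, z).
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    # Hamilton product, (w, x, y, z) convention.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

class Camera:
    def __init__(self):
        self.yaw = 0.0    # accumulated, wrapped into [0, 2*pi)
        self.pitch = 0.0  # accumulated, clamped so the camera can't flip over

    def update(self, mouse_dx, mouse_dy):
        self.yaw = (self.yaw + mouse_dx) % (2.0 * math.pi)
        self.pitch = max(-math.pi / 2, min(math.pi / 2, self.pitch + mouse_dy))
        # Rebuild the orientation from scratch each frame: only the two
        # accumulators persist, so no roll can ever creep in.
        y_rot = quat_from_axis_angle((0.0, 1.0, 0.0), self.yaw)
        x_rot = quat_from_axis_angle((1.0, 0.0, 0.0), self.pitch)
        # Yaw about the world up axis composed with pitch; the exact multiply
        # order depends on your library's convention.
        return quat_mul(y_rot, x_rot)

cam = Camera()
q = cam.update(math.pi / 2, 0.0)  # a quarter turn of yaw, no pitch
```

Because only yaw and pitch are stored, the resulting orientation is re-derived every frame and can never drift into the rolled "diagonal" space the accumulate-a-quaternion approach produces.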



#5172811 Visual Studio 2013 graphics debugger vs Nvidia Nsight?

Posted by AdeptStrain on 11 August 2014 - 09:43 AM

My big problem with Nsight is that many of the really great features require you to remote in from another machine that is running your application.

 

Have you tried RenderDoc? It's free and an amazing graphics debugger.




#5172690 Grass Rendering Questions

Posted by AdeptStrain on 10 August 2014 - 09:18 PM

Alright, Unbird is trying to keep me honest and requested an update so here we go.

 

I've mainly been working on partitioning up the grass into various cells so I can enable a greater density without the framerate hit. Even in this very early stage it seems to be working (the red lines are the cell bounds):

 

moarGrass.png

 

That's at 5 x 5 blades per 1 cell of the heightmap (although I have some doubts about what is actually being rendered, which I'll get to in a moment). Previously that would slam my system down to 20 - 30 FPS, but now it sits at 60 FPS without any trouble. Success! Right now I have a 512m x 512m terrain divided into 8 x 8 chunks. Obviously there's a balance between doing visibility tests against each cell's bounds (as well as the overhead of more draw calls) and the culling they buy you, but right now my system does those tests happily on another thread, so I may play with how finely I divide up my grass cells.
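For reference, the partitioning itself is just a uniform grid of axis-aligned bounding boxes. A hedged sketch (Python; the 512m terrain and 8 x 8 chunk split come from the numbers above, while the height bound is an assumption):

```python
def build_grass_cells(terrain_size=512.0, chunks=8, max_height=50.0):
    """Split a square terrain into chunks x chunks axis-aligned bounding boxes.

    Each cell is ((min_x, min_y, min_z), (max_x, max_y, max_z)). max_height is
    an assumed upper bound on terrain-plus-grass height for the AABB; a real
    implementation would derive it from the heightmap per chunk.
    """
    cell_size = terrain_size / chunks
    cells = []
    for cz in range(chunks):
        for cx in range(chunks):
            mins = (cx * cell_size, 0.0, cz * cell_size)
            maxs = ((cx + 1) * cell_size, max_height, (cz + 1) * cell_size)
            cells.append((mins, maxs))
    return cells

cells = build_grass_cells()  # 64 cells of 64m x 64m for the quoted setup
```

Each frame, only the cells whose AABB intersects the view frustum get a draw call, which is where the density-without-framerate-hit win comes from.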

 

I also turned on some very basic LOD to help smooth things out. Ideally I'd like to just toss a vertex away entirely if it's far enough away (and thus avoid the GS work), but I'm not sure how I could do that in the vertex shader. If anyone has any suggestions, I'd appreciate it.

 

Here you can see the LOD zones (these are just really simple first passes; I probably need to get more aggressive):

 

lodDebug.png

 

Red > Orange > Blue > Green
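A distance-to-band mapping for zones like these could be as simple as the following sketch (the distances and per-band instance counts are illustrative guesses, not values from the post):

```python
def lod_instances(distance, max_instances=32):
    # Fewer geometry-shader instances (grass blades) the farther the cell is.
    # Bands are ordered nearest-first; thresholds are hypothetical.
    bands = [
        (50.0, max_instances),        # red: full density
        (120.0, max_instances // 2),  # orange
        (250.0, max_instances // 4),  # blue
        (400.0, max_instances // 8),  # green
    ]
    for max_dist, count in bands:
        if distance <= max_dist:
            return count
    return 0  # beyond the last band, skip the cell entirely
```

On the GPU side the equivalent test would run per cell (or per root vertex) and feed the GS instance count, with the zero case standing in for the "toss the vertex away" idea above.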

 

Finally, I noticed an annoying issue where my HLSL random function doesn't seem to actually be generating random values, hence the large amount of uniformity in all these screenshots. You can see here where some small randomness is being introduced, but directly next to those blades sits a bunch of uniform blades:

 

grassRandom.png

 

I'm currently using the very basic GLSL random (in HLSL):

float GetRandomValue(float2 uv)
{
	// Internet magic numbers follow...
	float2 noise = (frac(sin(dot(uv, float2(12.9898, 78.233)*2.0)) * 43758.5453));
	return abs(noise.x + noise.y) * 0.5;
}

In my shader I have a table of prime numbers which I multiply the root vertex's UV coords by, using the GS instance ID as the key into the prime number table:

// IN = Root position vertex passed from the Vertex Shader
// primeTable = Array of prime numbers.
// instanceId = SV_GSInstanceID

float random = GetRandomValue(IN.Uv * (float)primeTable[instanceId]);

My thoughts are:

  • Not sure how well this works if you have UV (0.0, 0.0).
  • It probably just makes more sense to populate a texture coordinate with some random numbers rather than key off the UVs of the root vertex.
  • I'm curious if this is causing some blades to stack on top of each other so I'm not getting the full effect currently.
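The first bullet is a real failure mode. A straight Python port of the hash above (same magic numbers; `frac(x)` is `x - floor(x)`) shows that at UV (0, 0) the prime multiplier cancels out, so every GS instance gets the identical value:

```python
import math

def get_random_value(u, v):
    # Port of the HLSL one-liner: frac(sin(dot(uv, k * 2.0)) * 43758.5453).
    # In the HLSL the sin/frac result is a scalar broadcast to a float2, so
    # abs(noise.x + noise.y) * 0.5 collapses back to the same scalar.
    d = (u * 12.9898 + v * 78.233) * 2.0
    n = math.sin(d) * 43758.5453
    return n - math.floor(n)

primes = [2, 3, 5, 7, 11]
# At the origin, UV * prime is still (0, 0) for every instance: one value.
at_origin = {get_random_value(0.0 * p, 0.0 * p) for p in primes}
# Away from the origin the per-instance primes do spread the results out.
elsewhere = {get_random_value(0.37 * p, 0.81 * p) for p in primes}
```

That argues for the second bullet: feed the hash something that varies even where the UVs don't, e.g. the instance ID itself or a pre-baked random texture, rather than keying purely off the root vertex's UV.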

Anyway, that's the latest. Comments / Thoughts / Suggestions always welcomed.

 




#5166894 Grass Rendering Questions

Posted by AdeptStrain on 14 July 2014 - 08:45 PM

Please post WIP images. I love grass fields!

 

Ha. Sure. Here's the latest.

 

You can see the lighting is still busted (haven't had time to really dig into it yet), but there are many more blades of grass. I had based most of my work so far on the white paper that Lee did, but I've slowly started to drift away from some of the elements he uses. For example, he populates his world with a vertex per grass blade (which means he needs N verts for N blades of grass). Right now I have 1 vert per heightmap cell and I use geometry shader instancing to populate that cell with 1 - 32 blades of grass. The attached picture is a 3x3 grid of grass blades, so 9 GS instances per cell (with some small position-offset randomization within the cell).

 

My hope is that the massively parallel pipelines on today's cards can happily chew through the added work in the GS instances, and so far it seems to be working. The framerate in the image is from a debug build, with no LOD on the grass (I have it set up, just not turned on yet), no visibility culling (beyond what the HW provides), and a completely unoptimized shader. Turning it up to max instances (i.e. 32 blades of grass per cell) drops the FPS to the 30s - 40s (which isn't bad, and I suspect is more a fill-rate issue than the actual number of verts, but that's just a wild guess).

 

I'm sure I'll come across some massive flaw in my current approach, but for now it's working pretty well.

Attached Thumbnails

  • grass2.png



#5163063 VS2013 Graphics Diagnostics problem

Posted by AdeptStrain on 26 June 2014 - 01:06 PM

Just wanted to say I started seeing this behavior as well after Update 2. Before the update I had no issues with debugging and viewing the pipeline stages, now it always reads as an invalid event.




#5007427 Quick texturing question

Posted by AdeptStrain on 05 December 2012 - 10:14 AM

Yea, going with what C0lumbo said, it looks like you were parsing the faces improperly (looking at the OBJ file, each face is defined by 3 indices, which makes sense, but you were grabbing 9 at a time to define 1 face?), and the loops you had at the end were a bit of a mess. I tried to clean it up real quick (I haven't compiled this, but it should be mostly good). Hope this helps.

struct ObjModelVert
{
   float m_position[3];
   float m_normal[3];
   float m_uv[2];
};

struct ObjModel
{
   vector<ObjModelVert> m_vertices;
   vector<int>		  m_indices;
};

bool ObjLoader::LoadFromFile(wchar_t* filename)
{
   ifstream fin;
   char* buffer;
   int filesize;
   Tokenizer tokenStream, lineStream, faceStream;
   vector<float> verts;
   vector<float> norms;
   vector<float> texC;

   // Assume we have the following member in this class:
   // ObjModel  m_loadedModel;

   string tempLine, token;
   fin.open(filename, ios_base::in);

   if (fin.fail())
	 return false;

   fin.seekg(0, ios_base::end);
   filesize = static_cast<int>(fin.tellg());
   fin.seekg(0, ios_base::beg);
   buffer = new char[filesize];
   memset(buffer, '\0', filesize);
   fin.read(buffer, filesize);
   fin.close();

   // Send the buffer to the token stream.
   tokenStream.SetStream(buffer);
   if (buffer)
   {
      delete[] buffer;
      buffer = 0;
   }

   char lineDelimiters[2] = { '\n', ' ' };
   while (tokenStream.MoveToNextLine(&tempLine))
   {
      lineStream.SetStream((char*)tempLine.c_str());
      tokenStream.GetNextToken(0, 0, 0);

      if (!lineStream.GetNextToken(&token, lineDelimiters, 2))
         continue;

      if (strcmp(token.c_str(), "v") == 0)
      {
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "vn") == 0)
      {
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "vt") == 0)
      {
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         texC.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         texC.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "f") == 0)
      {
         char faceTokens[3] = { '\n', ' ', '/' };
         string faceIndex;
         faceStream.SetStream((char*)tempLine.c_str());
         faceStream.GetNextToken(0, 0, 0);

         // OBJ face indices are 1-based rather than 0-based. This data doesn't need any
         // special re-ordering, so we can copy it into our loaded model object directly.
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
      }
      token[0] = '\0';
   }

   int vIndex = 0, nIndex = 0, tIndex = 0;
   for (; vIndex < (int)verts.size(); vIndex += 3, nIndex += 3, tIndex += 2)
   {
      ObjModelVert newVert;

      // Position data. (You could probably memcpy all three components at once,
      // but I'll be explicit in this case.)
      newVert.m_position[0] = verts[vIndex];
      newVert.m_position[1] = verts[vIndex + 1];
      newVert.m_position[2] = verts[vIndex + 2];

      // Normal data.
      newVert.m_normal[0] = norms[nIndex];
      newVert.m_normal[1] = norms[nIndex + 1];
      newVert.m_normal[2] = norms[nIndex + 2];

      // Texture coordinates.
      newVert.m_uv[0] = texC[tIndex];
      newVert.m_uv[1] = texC[tIndex + 1];

      m_loadedModel.m_vertices.push_back(newVert);
   }

   // No need to clean up the temporary vectors; they're on the stack and will be
   // destroyed automatically when they go out of scope.
   return true;
}
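One caveat worth flagging about the code above: the final loop pairs verts[i] with norms[i] and texC[i] positionally, and the face parser (with '/' in its delimiter list) only behaves for `f v v v` style faces. General OBJ files index positions, UVs, and normals independently (`f v/vt/vn ...`), so each unique index triplet has to be expanded into its own vertex. A minimal sketch of that de-indexing, in Python for brevity (the function and data layout are mine, not from the thread):

```python
def load_obj(text):
    """Parse a tiny subset of OBJ: v, vt, vn, and triangular 'f v/vt/vn' faces.

    Returns (vertices, indices), where each vertex is a flat tuple
    (px, py, pz, nx, ny, nz, u, v), de-indexed so a single index buffer works.
    """
    positions, uvs, normals = [], [], []
    vertices, indices, cache = [], [], {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            positions.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vt":
            uvs.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == "vn":
            normals.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            for corner in parts[1:4]:  # assumes triangulated faces
                if corner not in cache:
                    # OBJ indices are 1-based; each distinct v/vt/vn triplet
                    # becomes one output vertex, reused via the cache.
                    vi, ti, ni = (int(i) - 1 for i in corner.split("/"))
                    cache[corner] = len(vertices)
                    vertices.append(positions[vi] + normals[ni] + uvs[ti])
                indices.append(cache[corner])
    return vertices, indices

obj = """
v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
vn 0 0 1
f 1/1/1 2/2/1 3/3/1
"""
verts, idx = load_obj(obj)
```

The same cache-and-expand pattern is what the C++ version would need once the exporter stops writing positions, UVs, and normals in matching order.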



#5007113 Terrain Multi-texture

Posted by AdeptStrain on 04 December 2012 - 10:15 AM

Most of the time you use a blend map to describe which texture you want to apply where on your terrain. It's commonly referred to as texture splatting. There's a pretty good overview here:

http://www.gamerende...ture-splatting/

Basically, you have an RGBA texture with each channel dedicated to one texture (so grass could be the red channel and sand the green channel). Then in your shader code you sample all your textures and weight them by the values in the blend map:

  float4 sand  = textureSand.Sample(textureSampler, input.Tex);
  float4 grass = textureGrass.Sample(textureSampler, input.Tex);
  // Add up to 2 more textures (of course you can exceed that limit in various ways, but I'll keep it simple).
  float4 blend = textureBlendMap.Sample(textureSampler, input.Tex);

  float4 finalColor = grass * blend.r + sand * blend.g /* add other channels here */;

  return finalColor;
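The same weighted sum works anywhere, not just in HLSL. A small CPU-side sanity check (Python; the normalization step is my addition: it guards against blend-map channels that don't sum to 1, which would otherwise darken or blow out the terrain):

```python
def splat_blend(samples, weights):
    """Blend per-texture RGBA samples by blend-map channel weights.

    samples: list of (r, g, b, a) texture colors at this pixel.
    weights: matching list of blend-map channel values, normalized here so
    they always sum to 1.
    """
    total = sum(weights)
    if total == 0.0:
        return (0.0, 0.0, 0.0, 1.0)  # no texture assigned; arbitrary fallback
    weights = [w / total for w in weights]
    return tuple(
        sum(s[c] * w for s, w in zip(samples, weights)) for c in range(4)
    )

# Hypothetical flat colors standing in for texture samples at one pixel.
grass = (0.2, 0.6, 0.1, 1.0)
sand = (0.8, 0.7, 0.4, 1.0)
color = splat_blend([grass, sand], [0.75, 0.25])  # blend.r = 0.75, blend.g = 0.25
```

In the shader you'd typically bake that normalization into the blend map itself when it's authored, so the per-pixel cost stays at one multiply-add per layer.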



#5006662 hundreds of errors and warnings in xnamath.h

Posted by AdeptStrain on 03 December 2012 - 10:18 AM

Solved!
I had to #include <d3dx11.h> first.
1 rep for Erik!


You don't actually need <d3dx11.h>, just <windows.h> - as Eric mentioned above. XNAMath isn't DX11 specific.


#4986784 .fx file error - Box comes out blue

Posted by AdeptStrain on 04 October 2012 - 09:16 AM


Yea, looks like uninitialized memory to me (that 1.#Q is a good hint; it's probably a NaN). I'd look at the Direct3D 11 tutorials that come with the SDK (specifically tutorial #4) and create a constant buffer, fill it with the data you want, and then set it for your pixel shader so it can access the data. It's a very simple process and will give you more control over your data and better results anyway.

I've used SlimDX before, and it's going to be VERY similar to the D3D tutorial (just more object-oriented, since SlimDX is a thin wrapper on top of the normal SDK).

As noted, it's my DX10 code; I need to get a new video card. And other expenses...

Edit: Just to clarify, I meant to ask whether DX10 versus DX11 changes the tutorial I should look at, but the question probably wasn't clear. Don't want to implement DX11 code that has a silent error if ported to DX10 or something, after all.


Ah, sorry, missed that. Regardless, Tutorial 4 and up in the DirectX10 Tutorials directory of the SDK shows the same stuff. Looks like there are some small differences between 10 and 11, but not much.


#4986443 .fx file error - Box comes out blue

Posted by AdeptStrain on 03 October 2012 - 10:32 AM


And you're not setting the constant buffer values anywhere in code? I've never seen a constant buffer initialized in a shader... only declared. Slightly shocked that works.

If I am, I haven't found it. And I've looked using multiple techniques. Plus, the fact that changing (and using both) the blue and alpha colours causes the box to change colours makes it unlikely. (What I mean is, swizzling blue, alpha, and green on Material.Diffuse gets yellow, which is one way I know it's reading the blue and alpha colours correctly.)


You can verify exactly what values it's reading, and see what the cbuffer is actually filled with, if you run it through PIX.


#4973959 Annoying issue with MRTs.

Posted by AdeptStrain on 27 August 2012 - 07:33 PM

Sorry for what I'm sure is "yet another post about MRTs" but I've been stuck on this issue for a few days.

Trying my hand at implementing deferred shading, but I'm running into an issue where my values are being set properly on the render targets (I can see them in PIX), but when I go to render the fullscreen quad I get nothing. In fact, PIX even says the pixel shader is returning the color I expect, but the final frame buffer value is always black.

The high level view of my code goes like this:
  • Create 3 Render Targets(Diffuse, Normal, Depth)
  • Set those 3 Render targets before drawing.
  • Render...
  • Set the Render Target to the original backbuffer and depth buffer (Actually right now I'm using the same depth buffer for the MRTs and the final quad...but Pix isn't saying the pixel is being thrown away due to depth issues, at least that I can tell).
I've disabled mip-map filtering on the final quad, so that shouldn't be the issue (and, again, PIX shows a color being read from the MRTs). Any thoughts on something I might be doing completely wrong here? Should I be using the same depth buffer during the initial render pass as well as during the final composition? Is it correct to set the RT back to the backbuffer before the final write? I'd post my code, but everything is abstracted enough that I'm not sure how readable it would be. However, if you need specific sections I can easily provide those. PIX output below.

[PIX output screenshot]

EDIT: I just noticed the damn alpha value. I hate Occam, his stupid razor, and I hope he jumped off a cliff (like I will soon be doing).

