

AdeptStrain

Member Since 19 Mar 2012
Last Active Jul 20 2014 12:56 PM
-----

#5166894 Grass Rendering Questions

Posted by AdeptStrain on 14 July 2014 - 08:45 PM

Please post WIP images. I love grass fields!

 

Ha. Sure. Here's the latest.

 

You can see the lighting is still busted (I haven't had time to really dig into it yet), but there are many more blades of grass. I based most of my work so far on the white paper that Lee did, but I've slowly started to drift away from some of the elements he uses. For example, he populates his world with a vertex per grass blade (which means he needs N verts for N blades of grass). Right now I have 1 vert per heightmap cell and use geometry shader instancing to populate that cell with 1-32 blades of grass. The attached picture uses a 3x3 grid of grass blades, so 9 GS instances per cell (with some small randomized position offset within the cell).
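The per-cell placement boils down to something like this (a rough CPU-side sketch, not my actual code; the hash function and constants are made up for illustration):

```cpp
#include <cstdint>
#include <vector>

struct Float2 { float x, z; };

// Deterministic per-blade hash so each blade derives the same jitter every
// frame without storing per-blade data. Constants are illustrative only.
static float Hash01(uint32_t seed)
{
    seed ^= seed >> 16;
    seed *= 0x7FEB352Du;
    seed ^= seed >> 15;
    seed *= 0x846CA68Bu;
    seed ^= seed >> 16;
    return (seed & 0xFFFFFFu) / float(0x1000000); // [0, 1)
}

// Place gridDim x gridDim blades inside one heightmap cell of size cellSize,
// jittering each blade within its sub-cell so the rows don't read as a grid.
std::vector<Float2> PlaceBlades(float cellX, float cellZ, float cellSize,
                                int gridDim, uint32_t cellSeed)
{
    std::vector<Float2> blades;
    const float step = cellSize / gridDim;
    for (int i = 0; i < gridDim; ++i)
    {
        for (int j = 0; j < gridDim; ++j)
        {
            uint32_t seed = cellSeed * 1024u + uint32_t(i * gridDim + j);
            float jx = Hash01(seed) * step;
            float jz = Hash01(seed ^ 0x9E3779B9u) * step;
            blades.push_back({ cellX + i * step + jx, cellZ + j * step + jz });
        }
    }
    return blades;
}
```

In the real thing the same idea runs in the geometry shader, with the GS instance ID playing the role of the (i, j) loop.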

 

My hope is that the massively parallel pipelines on today's cards can happily chew through the added work in the GS instances, and so far that seems to be the case. The framerate in the image is from a debug build, with no LOD on the grass (I have it set up, just not turned on yet), no visibility culling (beyond what the HW provides), and a completely unoptimized shader. Turning the grass up to max instances (i.e. 32 blades of grass per cell) drops the FPS into the 30s-40s, which isn't bad; I suspect that's more a fill-rate issue than the actual number of verts, but that's just a wild guess.
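The LOD, once it's turned on, is just going to drive the instance count from camera distance. A rough sketch of the idea (the start/end distances here are made-up tuning values, not my real ones):

```cpp
#include <algorithm>
#include <cmath>

// Full density up close, linear falloff to a single blade at lodEnd.
// lodStart and lodEnd are illustrative tuning values.
int BladesForCell(float distToCamera, float lodStart = 20.0f,
                  float lodEnd = 120.0f, int maxBlades = 32)
{
    if (distToCamera <= lodStart)
        return maxBlades;
    if (distToCamera >= lodEnd)
        return 1;
    float t = (distToCamera - lodStart) / (lodEnd - lodStart); // 0..1
    return std::max(1, int(std::lround(maxBlades * (1.0f - t))));
}
```

The result would feed the GS instance count (or an early-out inside the GS) per cell.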

 

I'm sure I'll come across some massive flaw in my current approach, but for now it's working pretty well.

Attached Thumbnails

  • grass2.png



#5163063 VS2013 Graphics Diagnostics problem

Posted by AdeptStrain on 26 June 2014 - 01:06 PM

Just wanted to say I started seeing this behavior as well after Update 2. Before the update I had no issues with debugging and viewing the pipeline stages, now it always reads as an invalid event.




#5007427 Quick texturing question

Posted by AdeptStrain on 05 December 2012 - 10:14 AM

Yeah, going with what C0lumbo said, it looks like you were parsing the faces improperly. Looking at the OBJ file, each face is defined by 3 indices (which makes sense), but you were grabbing 9 at a time to define 1 face. The loops you had at the end also seemed to be a bit of a mess. I tried to clean it up real quick (I haven't compiled this, but it should be mostly good). Hope this helps.

struct ObjModelVert
{
   float m_position[3];
   float m_normal[3];
   float m_uv[2];
};

struct ObjModel
{
   vector<ObjModelVert> m_vertices;
   vector<int>          m_indices;
};

bool ObjLoader::LoadFromFile(wchar_t* filename)
{
   ifstream fin;
   char* buffer;
   int filesize;
   Tokenizer tokenStream, lineStream, faceStream;
   vector<float> verts;
   vector<float> norms;
   vector<float> texC;

   // Assume we have the following member in this class:
   // ObjModel  m_loadedModel;

   string tempLine, token;
   fin.open(filename, ios_base::in);

   if (fin.fail())
      return false;

   fin.seekg(0, ios_base::end);
   filesize = static_cast<int>(fin.tellg());
   fin.seekg(0, ios_base::beg);
   buffer = new char[filesize];
   memset(buffer, '\0', filesize);
   fin.read(buffer, filesize);
   fin.close();

   // Hand the buffer to the token stream, then free it.
   tokenStream.SetStream(buffer);
   if (buffer)
   {
      delete[] buffer;
      buffer = 0;
   }

   char lineDelimiters[2] = { '\n', ' ' };
   while (tokenStream.MoveToNextLine(&tempLine))
   {
      lineStream.SetStream((char*)tempLine.c_str());
      tokenStream.GetNextToken(0, 0, 0);

      if (!lineStream.GetNextToken(&token, lineDelimiters, 2))
         continue;

      if (strcmp(token.c_str(), "v") == 0)
      {
         // Position: 3 floats.
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         verts.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "vn") == 0)
      {
         // Normal: 3 floats.
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         norms.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "vt") == 0)
      {
         // Texture coordinate: 2 floats.
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         texC.push_back((float)atof(token.c_str()));
         lineStream.GetNextToken(&token, lineDelimiters, 2);
         texC.push_back((float)atof(token.c_str()));
      }
      else if (strcmp(token.c_str(), "f") == 0)
      {
         char faceTokens[3] = { '\n', ' ', '/' };
         string faceIndex;
         faceStream.SetStream((char*)tempLine.c_str());
         faceStream.GetNextToken(0, 0, 0); // skip past the "f"

         // OBJ face indices are 1-based rather than 0-based. This data doesn't
         // need any special re-ordering, so we can copy it into our loaded
         // model object directly.
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
         faceStream.GetNextToken(&faceIndex, faceTokens, 3);
         m_loadedModel.m_indices.push_back(atoi(faceIndex.c_str()) - 1);
      }
      token.clear();
   }

   size_t vIndex = 0, nIndex = 0, tIndex = 0;
   for (; vIndex < verts.size(); vIndex += 3, nIndex += 3, tIndex += 2)
   {
      ObjModelVert newVert;

      // Position data. (You could probably just memcpy all the components
      // at once, but I'll be explicit in this case.)
      newVert.m_position[0] = verts[vIndex];
      newVert.m_position[1] = verts[vIndex + 1];
      newVert.m_position[2] = verts[vIndex + 2];

      // Normal data
      newVert.m_normal[0] = norms[nIndex];
      newVert.m_normal[1] = norms[nIndex + 1];
      newVert.m_normal[2] = norms[nIndex + 2];

      // Tex coordinates
      newVert.m_uv[0] = texC[tIndex];
      newVert.m_uv[1] = texC[tIndex + 1];

      m_loadedModel.m_vertices.push_back(newVert);
   }

   // No need to clean up the temporary vectors: they're on the stack and will
   // be destroyed automatically when they go out of scope.
   return true;
}
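To make the face format concrete: a triangle line like `f 1/4/7 2/5/8 3/6/9` carries nine numbers, but each vertex is a position/texcoord/normal triplet, so only three of them are position indices. That's where the "grabbing 9 at a time" confusion came from. A standalone way to parse one face (using plain sscanf instead of the Tokenizer above, just for illustration):

```cpp
#include <cstdio>

// Parse one "f v/vt/vn v/vt/vn v/vt/vn" triangle line into three index
// triplets, converting OBJ's 1-based indices to 0-based on the way out.
bool ParseFaceLine(const char* line, int v[3], int vt[3], int vn[3])
{
    int n = sscanf(line, "f %d/%d/%d %d/%d/%d %d/%d/%d",
                   &v[0], &vt[0], &vn[0],
                   &v[1], &vt[1], &vn[1],
                   &v[2], &vt[2], &vn[2]);
    if (n != 9)
        return false;
    for (int i = 0; i < 3; ++i)
    {
        --v[i]; --vt[i]; --vn[i];
    }
    return true;
}
```

(Real OBJ files can also omit the texcoord or normal fields, e.g. `f 1//7 ...`, so a robust loader needs more cases than this.)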



#5007113 Terrain Multi-texture

Posted by AdeptStrain on 04 December 2012 - 10:15 AM

Most of the time you use a blend map to describe which texture you want to apply where on your terrain. This is commonly referred to as texture splatting. There's a pretty good overview here:

http://www.gamerende...ture-splatting/

Basically, you have an RGBA texture that has each channel dedicated to 1 texture (so, grass could be the red channel and sand the green channel). Then in your shader code you just sample all your textures and multiply them by the blend specified in the blend map:

  float4 sand  = textureSand.Sample(textureSampler, input.Tex);
  float4 grass = textureGrass.Sample(textureSampler, input.Tex);
  // Add up to 2 more textures (of course you can exceed that limit in various ways, but I'll keep it simple).
  float4 blend = textureBlendMap.Sample(textureSampler, input.Tex);

  float4 finalColor = grass * blend.r + sand * blend.g /*add other channels here*/;

  return finalColor;
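One thing to watch: if the painted channels don't sum to 1, the terrain will brighten or darken wherever they overlap. Here's the same math on the CPU with the weights normalized (plain structs standing in for the sampled texels, just for illustration):

```cpp
struct Color { float r, g, b, a; };

// CPU-side version of the splat blend above, normalizing the two weights so
// the layers always sum to 1 regardless of what's painted in the blend map.
Color SplatBlend(Color grass, Color sand, float blendR, float blendG)
{
    float sum = blendR + blendG;
    if (sum > 0.0f)
    {
        blendR /= sum;
        blendG /= sum;
    }
    Color out;
    out.r = grass.r * blendR + sand.r * blendG;
    out.g = grass.g * blendR + sand.g * blendG;
    out.b = grass.b * blendR + sand.b * blendG;
    out.a = 1.0f;
    return out;
}
```

In the shader the normalization is one extra line (`blend /= dot(blend, 1.0);` or similar), or you can just author the blend map so the channels already sum to 1.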



#5006662 hundreds of errors and warnings in xnamath.h

Posted by AdeptStrain on 03 December 2012 - 10:18 AM

Solved!
I had to #include <d3dx11.h> first.
1 rep for Erik!


You don't actually need <d3dx11.h>, just <windows.h>, as Erik mentioned above. XNAMath isn't DX11-specific.


#4986784 .fx file error - Box comes out blue

Posted by AdeptStrain on 04 October 2012 - 09:16 AM


Yeah, that looks like uninitialized memory to me (the 1.#Q is a good hint; it's probably a NaN). I'd look at the Direct3D 11 tutorials that come with the SDK (specifically tutorial #4) and create a constant buffer, fill it with the data you want, and then bind it to your pixel shader so it can access the data. It's a very simple process and will give you more control over your data and better results anyway.

I've used SlimDX before, and it's going to be VERY similar to the D3D tutorial (just more object-based, since SlimDX is a simple wrapper on top of the normal SDK).

As noted, it's my DX10 code; I need to get a new video card. And other expenses...

Edit: Just to clarify, I meant to ask whether DX10 versus DX11 changes which tutorial I should look at, but the question probably wasn't clear. I don't want to implement DX11 code that has a silent error when ported to DX10, after all.


Ah, sorry, missed that. Regardless, Tutorial 4 and up in the DirectX10 Tutorials directory of the SDK shows the same material. It looks like there are some small differences between 10 and 11, but not much.


#4986443 .fx file error - Box comes out blue

Posted by AdeptStrain on 03 October 2012 - 10:32 AM


And you're not setting the constant buffer values anywhere in code? I've never seen a constant buffer initialized in a shader... only declared. Slightly shocked that works.

If I am, I haven't found it, and I've looked using multiple techniques. Plus, the fact that changing (and using both) the blue and alpha colours causes the box to change colours makes it unlikely. (What I mean is, swizzling blue, alpha, and green on Material.Diffuse gets yellow, which is one way I know it's reading the blue and alpha colours correctly.)


You can verify exactly what values it's reading, and see what the cbuffer is actually filled with, if you run it through PIX.


#4973959 Annoying issue with MRTs.

Posted by AdeptStrain on 27 August 2012 - 07:33 PM

Sorry for what I'm sure is "yet another post about MRTs" but I've been stuck on this issue for a few days.

I'm trying my hand at implementing deferred shading, but I'm running into an issue: my values are being written properly to the render targets (I can see them in PIX), but when I go to render the fullscreen quad, I get nothing. In fact, PIX even says the pixel shader is returning the color I expect, but the final frame buffer value is always black.

The high level view of my code goes like this:
  • Create 3 Render Targets(Diffuse, Normal, Depth)
  • Set those 3 Render targets before drawing.
  • Render...
  • Set the render target back to the original backbuffer and depth buffer. (Actually, right now I'm using the same depth buffer for the MRTs and the final quad... but PIX isn't saying the pixel is being thrown away due to depth issues, at least as far as I can tell.)
I've disabled mip-map filtering on the final quad, so that shouldn't be the issue (and, again, PIX shows a color being read from the MRTs). Any thoughts on something I might be doing completely wrong here? Should I be using the same depth buffer during the initial render pass as well as during the final composition? Is it correct to set the RT back to the backbuffer before the final write? I'd post my code, but everything is abstracted, so I'm not sure how readable it would be. However, if you need specific sections I can easily provide those. PIX output below.

[attached: PIX capture]

EDIT: I just noticed the damn alpha value. I hate Occam, his stupid razor, and I hope he jumped off a cliff (like I will soon be doing).
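For anyone who finds this thread later: with alpha blending enabled, a pixel shader that outputs alpha = 0 contributes nothing to the backbuffer, even though PIX shows the "right" RGB coming out of the shader. Assuming a standard SrcAlpha/InvSrcAlpha blend state (my setup may differ), the math is:

```cpp
struct Color { float r, g, b, a; };

// Standard "source over" blend (D3D SrcAlpha / InvSrcAlpha): when src.a is 0,
// the source contributes nothing, which is exactly the "correct color in PIX,
// black in the backbuffer" symptom above.
Color BlendSrcOver(Color src, Color dst)
{
    Color out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
```

The fix is either to output alpha = 1 from the composition shader or to disable blending for the fullscreen pass.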

