
matches81

Member Since 23 Feb 2005
Offline Last Active Sep 12 2012 07:21 PM

Topics I've Started

Rendertargets + viewports and device caps in Direct3D11?

10 September 2012 - 04:18 PM

Hello there!

A few years ago a friend of mine and I put together a basic 3D engine with a D3D9 renderer. It worked pretty well. Now, after a few years of doing other, unrelated work, we want to give D3D11 a try. Since our engine's design expects to be able to enumerate what the graphics device is capable of (fairly close to what the device caps in D3D9 describe), we'd like to be able to provide that info.
Is there something similar in Direct3D11? The only thing I've found so far is the feature level, but that seems fairly unspecific.

Another question: I've read a bit about render targets and viewports in Direct3D11. Am I correct that, in D3D11, a render target basically consists of the resource and a corresponding view, telling the pipeline where and how to read / write data? As for viewports, it seems that, although the involved methods and structs of course look a bit different, the basic idea is the same as in D3D9 (i.e. basically it ends up being a part of the projection matrix and that's it). Is that correct, too?
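
For what it's worth, in both D3D9 and D3D11 the viewport is not literally folded into the projection matrix: it is a separate fixed-function mapping the rasterizer applies after the perspective divide, taking normalized device coordinates into the viewport rectangle. A minimal sketch of that mapping (names like `Viewport` and `NdcToScreen` are illustrative, not API types; the fields mirror `D3D11_VIEWPORT`'s `TopLeftX/TopLeftY/Width/Height`):

```cpp
#include <cassert>

// Illustrative stand-in for the viewport rectangle; fields mirror the
// corresponding members of D3D11_VIEWPORT (depth range omitted for brevity).
struct Viewport {
    float topLeftX, topLeftY;
    float width, height;
};

struct Point2 { float x, y; };

// Map normalized device coordinates (x, y in [-1, 1]) to window coordinates.
// Note the Y flip: NDC +Y points up, window +Y points down.
Point2 NdcToScreen(const Viewport& vp, float ndcX, float ndcY)
{
    Point2 p;
    p.x = vp.topLeftX + (ndcX + 1.f) * 0.5f * vp.width;
    p.y = vp.topLeftY + (1.f - ndcY) * 0.5f * vp.height;
    return p;
}
```

So the projection matrix produces clip space as usual, and the viewport just decides which rectangle of the render target the resulting NDC box lands in.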

Any help would be appreciated and thanks for reading.

inclusion of other headers in headers: Do or Don't?

23 July 2009 - 11:12 PM

In the last few days I have been running into a question more and more, and I'm unable to find a satisfying answer for myself, so here I am, presenting it to you guys: Should I avoid including other headers in a header file whenever possible, or does it make sense to include the headers for the structs / classes that header uses anyway?

I started wondering about this when I went through all my header files and got rid of namespace inclusions in them, in order to avoid namespace confusion for files including them. Doing that, I asked myself: How about header files? Do I really want to "accidentally" include another header file by including this one? On the other hand: Do I really want to have to include that next header explicitly when it is required to use this class properly anyway?

Simple example: I have a class Box defined in Box.h, using a struct Vector3 defined in Vector3.h and a struct Vector4 defined in Vector4.h. Is it a good idea to include both Vector3.h and Vector4.h in Box.h, or is it better to provide forward declarations for both structs? In order to use Box, I'd pretty much have to know Vector3 and Vector4, so I'd have to include Vector3.h and Vector4.h anyway. Also, not including them means I'll have to use forward declarations. For this simple example that's okay, but with a growing project I'll end up with forward declarations of a lot of stuff eventually.

So, I guess this comes down to: Should I include header files for used classes / structs etc. whenever possible (without causing a circular inclusion or similar issues), or should I rely on forward declarations as much as possible? Or is there some sweet spot in the middle?
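
One common rule of thumb for this trade-off: a header must include what it uses by value or inheritance (the compiler needs the full layout), while pointer and reference uses can get by with a forward declaration. A sketch using the Box/Vector3 example from the post, with the would-be file boundaries marked as comments (everything here is collapsed into one snippet so it is self-contained):

```cpp
#include <cassert>

// --- Vector3.h ---
struct Vector3 { float x, y, z; };

// --- Box.h ---
// Box stores Vector3 members *by value*, so Box.h genuinely needs
// #include "Vector3.h": without the full definition the compiler cannot
// compute sizeof(Box). If Box only held a Vector3* or Vector3&, a forward
// declaration ("struct Vector3;") would suffice and the #include could
// move into Box.cpp.
struct Box {
    Vector3 min;
    Vector3 max;
    float Volume() const;
};

// --- Box.cpp ---
float Box::Volume() const {
    return (max.x - min.x) * (max.y - min.y) * (max.z - min.z);
}
```

The usual sweet spot: keep every header self-contained (it compiles on its own), include what the header itself needs, and forward-declare the rest to cut rebuild times and break cycles.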

[solved] output gets "corrupted" when window size changes

19 June 2009 - 01:01 AM

Edit: Please see my second post, the problem described here is already solved.

So... I have this app using multiple target windows to render into using D3D9. Of course, I ran into the usual "how to handle resizing windows" problem and found a post saying that I could just create a back buffer with the desktop's size and use viewports afterwards. Since this meant I got around doing a device reset every time a window gets resized, I went that route.

But I am confused: My calls to Clear and Present were changed accordingly and work fine. I tested this by clearing only half of the target windows but presenting the full rectangle, resulting in half the windows being the color I cleared to and the other half getting that nice D3D debug flicker. So I expect that area to be covered, and I'm rather certain that that stuff is set up correctly, i.e. I'm not just presenting areas of the backbuffer that don't contain anything. Additionally, my test scene is definitely okay: Before changing to using viewports it rendered fine, which means that the vertex and index buffers were okay, the camera returned correct view and projection matrices, etc. I assume this is the same now, as I haven't changed a thing in that regard.

The problem now is: I don't see a thing anymore. The D3D debug runtimes don't give any errors, and the only warnings I get are redundant render states, so nothing to worry about for now. Do I have to take the D3DVIEWPORT9 into account for the view and projection transforms I use in my shaders in any way? I looked in the D3D docs and found no further information about the usage of viewports...

[Edited by - matches81 on June 19, 2009 8:34:52 AM]
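
On the D3DVIEWPORT9 question: no, the viewport does not need to be baked into the view or projection matrices; D3D applies it after the projective divide. What does matter in this desktop-sized-backbuffer scheme is that Clear, the viewport, and Present all agree on the same sub-rectangle per window. A hedged sketch of that rectangle computation (pure arithmetic, `Rect` and `TargetRect` are illustrative names, not D3D types):

```cpp
#include <algorithm>
#include <cassert>

// Illustrative stand-in for the RECT each window renders into.
struct Rect { long left, top, right, bottom; };

// Each target window uses the top-left sub-rectangle of the shared,
// desktop-sized backbuffer matching its client size. Clamping against the
// backbuffer keeps an oversized window from producing an out-of-range rect.
Rect TargetRect(long clientW, long clientH, long backbufferW, long backbufferH)
{
    Rect r;
    r.left   = 0;
    r.top    = 0;
    r.right  = std::min(clientW, backbufferW);
    r.bottom = std::min(clientH, backbufferH);
    return r;
}
```

The idea is to feed the same extents to Clear (as the clear rect), to the D3DVIEWPORT9 (X/Y/Width/Height), and to Present (as the source rect), so the drawn region, the NDC mapping, and the blitted region coincide.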

rasterising a line

31 December 2008 - 12:12 AM

Hi there! I'm currently implementing a ray-heightmap intersection. The basic idea is:

1. Clamp the ray to the heightmap's AABB.
2. Rasterise the ray's valid segment (inside the AABB) to the grid defined by the heightmap to get a list of possibly intersecting quads.
3. Test the quads.

I've implemented it, and it works pretty well so far, with a few exceptions here and there (intersections not found that should be there). One of the issues I've found so far is that the line rasterising algorithm sometimes goes beyond the specified end point by one "pixel". I'm pretty sure there are quite a few people around here that have implemented a line rasteriser, so it would be appreciated if you could look over my code to see if you spot something off. I don't. :( Here you go:
void Line::RasterizeToGrid(float gridSize, const Ogre::Vector2 &origin, std::vector<SamplePoint> &result)
{
	// Bresenham modified to plot all points in contact with line, not only one per X coordinate
	// http://lifc.univ-fcomte.fr/~dedu/projects/bresenham/index.html

	using namespace Ogre;

	float invGridSize = 1.f / gridSize;

	Vector2 start, end, dir;
	if(m_P.x > m_Q.x) // start = transformed m_Q, end = transformed m_P, to keep dir.x > 0
	{
		start = (m_Q - origin) * invGridSize;
		end = (m_P - origin) * invGridSize;
	}
	else
	{
		start = (m_P - origin) * invGridSize;
		end = (m_Q - origin) * invGridSize;
	}
	dir = end - start;
	int yStep;
	if(dir.y < 0)
	{
		yStep = -1;
		dir.y = -dir.y;
	}
	else yStep = 1;

	// now, dir.x and dir.y are > 0, so only two cases remain: slope <= 1 or slope > 1
	result.push_back(SamplePoint((int)start.x, (int)start.y, 0)); // truncate to the start cell

	if(dir.y <= dir.x) // slope <= 1
	{
		float errorY = start.y - (int)start.y;
		float errorX = start.x - (int)start.x;

		float restStepY = errorX*dir.y;
		dir /= dir.x;
		int currentY = (int)start.y; // truncate to the starting cell's Y
		for(int i = (int)start.x; i < end.x - 1; i++)
		{
			errorY += dir.y;
			if(errorY > 1)
			{
				if(errorY - restStepY < 1) // we were below 1 when passing the grid border
				{
					result.push_back(SamplePoint(i+1, currentY, 0));
				}
				else
				{
					result.push_back(SamplePoint(i, currentY+yStep,0));
				}
				currentY += yStep;
				errorY -= 1.f;
			}
			result.push_back(SamplePoint(i+1, currentY, 0));
		}
	}
	else // if slope > 1, swap X and Y
	{
		float errorY = start.y - (int)start.y;
		float errorX = start.x - (int)start.x;

		float restStepX = errorY*dir.x;
		dir /= dir.y;
		int currentX = (int)start.x; // truncate to the starting cell's X
		for(int i = (int)start.y; i < end.y - 1; i++)
		{
			errorX += dir.x;
			if(errorX > 1)
			{
				if(errorX - restStepX < 1)
				{
					result.push_back(SamplePoint(currentX, i+1, 0));
				}
				else
				{
					result.push_back(SamplePoint(currentX+1, i,0));
				}
				currentX++;
				errorX -= 1.f;
			}
			result.push_back(SamplePoint(currentX, i+1, 0));
		}
	}
	
	SamplePoint endSample = SamplePoint((int)end.x, (int)end.y, 0); // truncate to the end cell
	if(endSample != result.back()) result.push_back(endSample);
}

The parameter 'origin' specifies the origin of the grid, 'gridSize' should be self-explanatory, and 'result' will contain a list of integer coordinates for the quads that intersect the line. The line has the fields m_P and m_Q, containing the start and end point. Those are calculated after intersecting the ray with the heightmap's AABB.

As I said, this works rather well mostly. But every once in a while I get a SamplePoint outside of the line; the algorithm seems to go too far every now and then. So, if anybody sees something off here, please tell me. I get the feeling I've spent too much time on this to see clearly.
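
As a point of comparison (not a fix for the code above): grid traversal is often written as a DDA in the style of Amanatides and Woo's voxel traversal, which steps from cell to cell using the ray parameter and terminates exactly at the end cell, so it cannot overshoot by construction. A self-contained sketch under the assumption of a unit grid (the `Cell` type and names are illustrative, not from the original code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Cell {
    int x, y;
    bool operator==(const Cell& o) const { return x == o.x && y == o.y; }
};

// Visit every unit-grid cell the segment (x0,y0)-(x1,y1) passes through,
// starting at the start cell and stopping exactly at the end cell.
std::vector<Cell> TraverseGrid(float x0, float y0, float x1, float y1)
{
    std::vector<Cell> cells;
    int cx = (int)std::floor(x0), cy = (int)std::floor(y0);
    const int ex = (int)std::floor(x1), ey = (int)std::floor(y1);
    const float dx = x1 - x0, dy = y1 - y0;
    const int stepX = dx > 0 ? 1 : -1;
    const int stepY = dy > 0 ? 1 : -1;
    // tMaxX/Y: ray parameter t at which the next vertical/horizontal grid
    // line is crossed; tDeltaX/Y: t needed to cross one whole cell per axis.
    float tMaxX = dx != 0 ? ((cx + (stepX > 0 ? 1 : 0)) - x0) / dx : INFINITY;
    float tMaxY = dy != 0 ? ((cy + (stepY > 0 ? 1 : 0)) - y0) / dy : INFINITY;
    const float tDeltaX = dx != 0 ? std::fabs(1.f / dx) : INFINITY;
    const float tDeltaY = dy != 0 ? std::fabs(1.f / dy) : INFINITY;

    cells.push_back({cx, cy});
    while (cx != ex || cy != ey) {
        if (tMaxX < tMaxY) { cx += stepX; tMaxX += tDeltaX; }
        else               { cy += stepY; tMaxY += tDeltaY; }
        cells.push_back({cx, cy});
    }
    return cells;
}
```

Because each iteration advances to whichever grid line is crossed first and the loop condition is "not yet in the end cell", the end cell is always the last entry and never skipped or exceeded.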

Mapping a 2^n texture to a 2^n+1 terrain

02 December 2008 - 11:09 PM

Hello there! I have a randomly generated terrain with a resolution of 2^n+1 square (a requirement both for the Diamond-Square algorithm used for creating the basic heightmap and for some gameplay elements). The terrain takes the randomly generated heightmap and turns it into a grid of vertices, including position, normal, tangent, bitangent and UV-coordinates for texture mapping.

Now, I generated some textures for that terrain, too: a normal map calculated from a higher-res heightmap, and a blend map, used for splatting textures on the terrain, also based on a higher-res heightmap. My current implementation generates textures that also have a size of 2^n+1 square. I think current graphics cards shouldn't have a problem with that, but it just feels a bit odd to have 513x513 textures around, so I thought about creating 2^n textures for my terrain instead. The problem I've run into is two-fold:

a) Let's say I have a heightmap with a resolution of 513x513. My current implementation would create a 2049x2049 heightmap based on the low-res one and generate matching textures, making sampling the values for the textures straightforward (each value of the high-res map maps straight to one pixel of each texture). Now, since the high-res heightmap has to have a size of 2^n+1, how would I sample the values for a 2^n texture based on that heightmap? I thought about doing steps over the high-res map of the size 2^n / (2^n + 1), instead of 1, and using bilinear interpolation to get a value that represents the value of the heightmap at the current position, but this could get rather expensive. So any other, simpler ideas would be appreciated.

b) How would I have to adjust the UV-coordinates of the terrain's vertices so that the textures are still mapped correctly? Would I have to adjust them at all?
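
For point (a), the bilinear-interpolation route may be less expensive than it sounds: it is four lookups and three lerps per output texel, done once at generation time. A minimal sketch, under the assumption of a square, row-major float heightmap (all names here are illustrative):

```cpp
#include <cassert>
#include <vector>

// Bilinearly sample a square, row-major heightmap of side mapSize at
// normalized coordinates u, v in [0, 1], where 0 hits the first sample and
// 1 the last (i.e. mapSize - 1 intervals across the map).
float SampleBilinear(const std::vector<float>& map, int mapSize, float u, float v)
{
    float fx = u * (mapSize - 1), fy = v * (mapSize - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = x0 + 1 < mapSize ? x0 + 1 : x0;  // clamp at the far edge
    int y1 = y0 + 1 < mapSize ? y0 + 1 : y0;
    float tx = fx - x0, ty = fy - y0;
    float a = map[y0 * mapSize + x0] * (1 - tx) + map[y0 * mapSize + x1] * tx;
    float b = map[y1 * mapSize + x0] * (1 - tx) + map[y1 * mapSize + x1] * tx;
    return a * (1 - ty) + b * ty;
}

// Resample a (2^n + 1)-sized map down to a 2^n texture, sampling at texel
// centers so the texture covers the same spatial extent as the map.
std::vector<float> ResampleToPow2(const std::vector<float>& map, int mapSize, int texSize)
{
    std::vector<float> tex(texSize * texSize);
    for (int y = 0; y < texSize; ++y)
        for (int x = 0; x < texSize; ++x)
            tex[y * texSize + x] =
                SampleBilinear(map, mapSize, (x + 0.5f) / texSize, (y + 0.5f) / texSize);
    return tex;
}
```

For point (b), sampling at texel centers as above means the texture still spans the full terrain extent, so UVs running 0..1 across the terrain should keep lining up; but treat that as a sketch of one convention, not the only valid one.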
