About RyxiaN

1. Hmm, yeah, I thought a little about that. But they are stored as ONE object in the application, am I right? Like so:
[code]
Texture2DDescription texture2DDescription = new Texture2DDescription();
texture2DDescription.ArraySize = 8;
[/code]
That means it has to be of dynamic usage and written to before each draw call. Isn't that kind of performance-heavy if I have, say, an array size of 8 and the height maps are something like 256 * 256? But maybe it's worth it for the 7 fewer draw calls if you fill a batch with 8 patches...? Thoughts? Thanks in advance!
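If it helps: the array doesn't have to be Dynamic. It can be created with Default usage and individual slices uploaded with UpdateSubresource, so only the patches whose height maps actually changed pay an upload cost between draws. A rough SlimDX-style sketch — the method names and signatures are quoted from memory, so treat them as assumptions, and heightMapData is a hypothetical DataStream holding one 256 * 256 R32_Float slice:
[code]
// Default usage: the GPU owns the memory, no per-frame Map/Unmap
Texture2DDescription desc = new Texture2DDescription();
desc.ArraySize = 8;                        // one slice per patch in the batch
desc.Width = 256;
desc.Height = 256;
desc.MipLevels = 1;
desc.Format = Format.R32_Float;
desc.BindFlags = BindFlags.ShaderResource;
desc.CpuAccessFlags = CpuAccessFlags.None; // updated via UpdateSubresource instead
desc.Usage = ResourceUsage.Default;
desc.SampleDescription = new SampleDescription(1, 0);
Texture2D heightMapArray = new Texture2D(device, desc);

// Replace only slice 3; the other 7 slices are left untouched
// (row pitch = 256 texels * 4 bytes per R32_Float texel)
int subresource = Resource.CalculateSubresourceIndex(0, 3, desc.MipLevels);
context.UpdateSubresource(new DataBox(256 * 4, 0, heightMapData), heightMapArray, subresource);
[/code]
Whether that beats 8 separate draw calls is hard to say up front; if most height maps are static, only the patches streaming in pay for the upload.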
2. Hello again! I'm currently rendering my terrain as a bunch of patches. There are different LOD levels of the patch, and each LOD level has its own index buffer. No vertex buffer is used; the planar position of the vertex is calculated with SV_VertexID, and the height is then sampled from a texture. Below is the shader for this:
[code]
//======================================================================================//
// Constant buffers                                                                     //
//======================================================================================//
cbuffer Initial : register(b0)
{
    int NumCells;
    float SliceSize;
}

cbuffer EveryFrame : register(b1)
{
    matrix World;
    matrix View;
    matrix Projection;
}

//======================================================================================//
// Input/Output structures                                                              //
//======================================================================================//
struct AppToVertex
{
    uint VertexID : SV_VERTEXID;
};

struct VertexToPixel
{
    float4 Position : SV_POSITION;
};

//======================================================================================//
// Shader resources                                                                     //
//======================================================================================//
Texture2D HeightMap : register(t0);

//======================================================================================//
// Samplers                                                                             //
//======================================================================================//
SamplerState HeightMapSampler : register(s0)
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

//======================================================================================//
// Helper function prototypes                                                           //
//======================================================================================//
float2 GetPosition(uint vertexID);
float GetHeight(float2 position);

//======================================================================================//
// Vertex Shader                                                                        //
//======================================================================================//
VertexToPixel VS(in AppToVertex input)
{
    VertexToPixel output = (VertexToPixel)0;

    // Calculate the location of this vertex
    float2 p = GetPosition(input.VertexID);

    // Calculate the height of this vertex
    float h = GetHeight(p);

    // Apply transform
    output.Position = float4(p.x, h, p.y, 1);
    output.Position = mul(output.Position, World);
    output.Position = mul(output.Position, View);
    output.Position = mul(output.Position, Projection);

    return output;
}

//======================================================================================//
// Helper method for calculating the position of a specific vertex                      //
//======================================================================================//
float2 GetPosition(uint vertexID)
{
    int numVerts = NumCells + 1;
    float x = (float)(vertexID % numVerts) * (SliceSize / NumCells);
    float z = (float)(vertexID / numVerts) * (SliceSize / NumCells);
    return float2(x, z);
}

//======================================================================================//
// Helper method for sampling the height map                                            //
//======================================================================================//
float GetHeight(float2 position)
{
    // Calculate the texture coordinate; position runs from 0 to SliceSize,
    // so divide by SliceSize to get a [0, 1] UV
    float u = position.x / SliceSize;
    float v = position.y / SliceSize;

    // Go through the sampler: operator[] takes integer texel coordinates,
    // and a vertex shader must use SampleLevel rather than Sample
    return HeightMap.SampleLevel(HeightMapSampler, float2(u, v), 0).r;
}
[/code]
Now, I would like to instance patches that currently have the same LOD level and the same materials (if that's necessary; better if it's not needed). I can have a structured buffer that contains data about each patch (patch X location, patch Z location, materials, etc.) and use SV_InstanceID to fetch it. But my concern is the height map. I tried using a simple array, like so:
[code]
Texture2D HeightMap[8];
[/code]
But you can't dynamically index an array of textures using SV_InstanceID like you can with the structured buffer. So does anyone know what I can do instead? Putting the height map in the structured buffer would be cool, but you can't really do that, can you...? Is there any other way, like a texture buffer (is this what tbuffer is for?) or something like that? Or maybe I can use a separate structured buffer like this:
[code]
StructuredBuffer<Texture2D> HeightMaps;
[/code]
? Any ideas are welcome! Thanks in advance!
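For what it's worth, one common way around this — a sketch of the general technique, not something confirmed in this thread — is to pack the patch height maps into a single Texture2DArray. Unlike an array of separate Texture2D resources, the slice of a Texture2DArray is just the third texture coordinate and can be a runtime value, so it can come straight from the per-instance structured buffer (the PatchInstance layout and registers here are illustrative; HeightMapSampler is the sampler from the shader above):
[code]
struct PatchInstance
{
    float2 Offset;       // patch X/Z location
    uint HeightMapSlice; // which array slice holds this patch's height map
};

StructuredBuffer<PatchInstance> Patches : register(t1);
Texture2DArray HeightMaps : register(t0);

float GetHeight(float2 uv, uint instanceID)
{
    PatchInstance patch = Patches[instanceID];

    // The z component selects the array slice and may differ per instance
    return HeightMaps.SampleLevel(HeightMapSampler, float3(uv, patch.HeightMapSlice), 0).r;
}
[/code]
The trade-off is that every slice in the array shares one size and format, which fits uniform 256 * 256 patches.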
3. [quote name='Tsus' timestamp='1336131666' post='4937346']
[quote name='RyxiaN' timestamp='1336126144' post='4937333']
Is binding a resource very time consuming? I assume they are already on the GPU, so it shouldn't be much, right?
[/quote]
It depends very, very much on the hardware, but in general you should minimize state changes. E.g. when binding a resource, the caches are invalidated and several things are validated (are all mipmaps available (for textures), have the formats changed, etc.). Probably even worse is changing a shader, since it internally changes many more states, potentially invalidating various caches (including the texture cache), as well as the inlining of functions in case you use dynamic shader linkage, etc. Writing a system that keeps track of the states on the CPU side and binds them when you submit a draw or dispatch call isn't worth it, btw, since the GPU already does that for you. :) We have many experts here in the forum, so if you open up a new topic, you can get much more (and better :)) information on this.
[/quote]
Alright. I think I'll just leave it to the experts for now. x) Doubt it will make a difference anyway. My biggest question was more whether it was worth binding the StructuredBuffer to both the vertex shader and the pixel shader, or just passing the variables along from the vertex shader to the pixel shader through parameters with semantics, which I think I'll end up doing instead.
4. [quote name='Tsus' timestamp='1336123632' post='4937326']
So, the quad is visible, right? Does it receive the right color if you hard-code it in the pixel shader? Hm, you access the structured buffer in the pixel shader, too. Have you bound it to the pixel shader stage as well?
[CODE]
context.PixelShader.SetShaderResource(sprites, 0);
[/CODE]
Besides, I think SV_InstanceID is input to the vertex shader, not the pixel shader. You could just as well read the color in the vertex shader and pass it on to the pixel shader. Cheers!
[/quote]
Wow, so stupid. No, I hadn't bound it to the pixel shader. :D And yeah, apparently SV_InstanceID wasn't sent to the pixel shader. Now it works like a charm! Time to mess around a little. :) Thank you both!

[b]EDIT[/b]: Is binding a resource very time consuming? I assume they are already on the GPU, so it shouldn't be much, right?
5. [quote name='Tsus' timestamp='1336119456' post='4937317']
Hi!
[quote name='RyxiaN' timestamp='1336114904' post='4937305']
The only clear difference I see between your code and mine in the creation part is that I set srvDesc.Dimension to ExtendedBuffer instead of Buffer. I saw some other tutorial which did set it to Extended, and if I don't, the SRV won't be created, throwing a Direct3D11Exception.
[/quote]
AFAIK, StructuredBuffers and ByteAddressBuffers go under ExtendedBuffer, so yes, your change is correct. I think, on the HLSL side, the shader resource view on the structured buffer should be assigned to a t-register (s-registers are for samplers).
[CODE]
StructuredBuffer<Sprite> Sprites : register(t0);
[/CODE]
Hope it helps. :) Cheers!
[/quote]
Made no difference. :/ It still renders black. But I still think you're right, so I'll keep it on register(t0). Thanks for the input!
6. Wow, yet another awesome reply. Translation won't be a problem, I think. :) Yeah, instancing it is; that seems to be exactly what I'm looking for. I got that working. There's just one problem, with the StructuredBuffer. I think I'm creating it right, and I think I'm setting it right as well, but it doesn't seem to get set for some reason. Here's some code (keep in mind it's all just for testing purposes, so it's not final code):

Shader2D.hlsl:
[code]
struct Sprite
{
    float2 TopLeft;
    float2 TopRight;
    float2 BotLeft;
    float2 BotRight;
    float4 Color;
};

StructuredBuffer<Sprite> Sprites : register(s0);

float4 VShader(in uint VID : SV_VertexID) : SV_POSITION
{
    Sprite sprite = Sprites[0];

    float2 pos;
    if (VID == 0)
        pos = float2(-1.0f, 1.0f);  // sprite.TopLeft;
    else if (VID == 1)
        pos = float2(0.0f, 1.0f);   // sprite.TopRight;
    else if (VID == 2)
        pos = float2(-1.0f, 0.0f);  // sprite.BotLeft;
    else
        pos = float2(0.0f, 0.0f);   // sprite.BotRight;

    return float4(pos, 1.0f, 1.0f);
}

float4 PShader(in float4 position : SV_POSITION, in uint IID : SV_InstanceID) : SV_TARGET
{
    return Sprites[IID].Color;
}
[/code]
Buffer, SRV and index buffer creation:
[code]
Sprite sprite1 = new Sprite
{
    TopLeft = new Vector2(-1.0f, 1.0f),
    TopRight = new Vector2(0.0f, 1.0f),
    BotLeft = new Vector2(-1.0f, 0.0f),
    BotRight = new Vector2(0.0f, 0.0f),
    Color = new Vector4(1.0f, 1.0f, 0.0f, 1.0f)
};

BufferDescription bufferDescription = new BufferDescription();
bufferDescription.BindFlags = BindFlags.ShaderResource;
bufferDescription.CpuAccessFlags = CpuAccessFlags.Write;
bufferDescription.OptionFlags = ResourceOptionFlags.StructuredBuffer;
bufferDescription.SizeInBytes = MaxSprites * Sprite.SizeInBytes;
bufferDescription.StructureByteStride = Sprite.SizeInBytes;
bufferDescription.Usage = ResourceUsage.Dynamic;

using (DataStream data = new DataStream(MaxSprites * Sprite.SizeInBytes, true, true))
{
    data.Write(sprite1);
    data.Position = 0;
    structuredBuffer = new Buffer(graphics.Device, data, bufferDescription);
}

ShaderResourceViewDescription spritesDescription = new ShaderResourceViewDescription();
spritesDescription.Dimension = ShaderResourceViewDimension.ExtendedBuffer;
spritesDescription.FirstElement = 0;
spritesDescription.Format = Format.Unknown;
spritesDescription.ElementCount = MaxSprites;

sprites = new ShaderResourceView(graphics.Device, structuredBuffer, spritesDescription);

using (DataStream dataStream = new DataStream(sizeof(uint) * 6, true, true))
{
    dataStream.Write(0);
    dataStream.Write(1);
    dataStream.Write(2);
    dataStream.Write(2);
    dataStream.Write(1);
    dataStream.Write(3);
    dataStream.Position = 0;
    indexBuffer = new Buffer(graphics.Device, dataStream, sizeof(uint) * 6, ResourceUsage.Default,
        BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
}
[/code]
Setting the SRV and rendering:
[code]
DeviceContext context = graphics.Context;
InputAssemblerWrapper inputAssembler = context.InputAssembler;
inputAssembler.InputLayout = inputLayout;
inputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
inputAssembler.SetVertexBuffers(0, new VertexBufferBinding(null, 0, 0));
inputAssembler.SetIndexBuffer(indexBuffer, Format.R32_UInt, 0);
context.PixelShader.Set(pixelShader);
context.VertexShader.Set(vertexShader);
context.VertexShader.SetShaderResource(sprites, 0);
context.DrawIndexedInstanced(6, 1, 0, 0, 0);
[/code]
What I get is a black square. It should be colored (1.0f, 1.0f, 0.0f, 1.0f). Also, if I use the points from the structured buffer, it doesn't render anything (or probably renders everything at (0.0f, 0.0f)). So it looks like the StructuredBuffer doesn't get set... The only clear difference I see between your code and mine in the creation part is that I set srvDesc.Dimension to ExtendedBuffer instead of Buffer. I saw some other tutorial which did set it to Extended, and if I don't, the SRV won't be created, throwing a Direct3D11Exception. Am I setting it right? Perhaps it's something with the slot (0) in the SetShaderResource call?

[b]EDIT:[/b] Removed the quote, it was so messy.
7. [quote name='MJP' timestamp='1336069308' post='4937166']
You don't actually need any vertices or vertex buffers to render anything. You can bind a NULL vertex buffer and then say "draw 4 vertices", and the GPU will invoke your vertex shader 4 times. Since you don't have any vertex data to pull in, you typically use SV_VertexID and/or SV_InstanceID to decide what to do. Here's a really simple example of using SV_VertexID to generate a triangle that covers the entire screen:
[code]
float4 VSMain(in uint VtxID : SV_VertexID) : SV_Position
{
    float4 output = float4(-1.0f, 1.0f, 1.0f, 1.0f);
    if(VtxID == 1)
        output = float4(3.0f, 1.0f, 1.0f, 1.0f);
    else if(VtxID == 2)
        output = float4(-1.0f, -3.0f, 1.0f, 1.0f);
    return output;
}
[/code]
So to invoke this you could just bind a NULL VB + IB, and call Draw with a vertex count of 3. If you want, you can also use an index buffer with a NULL vertex buffer. So for instance, if you wanted to draw an indexed quad, you could have an index buffer with 6 indices that forms 2 triangles from 4 vertices. In this case SV_VertexID will give you the value from the index buffer, so it would be 0 through 3. Then you could do something similar to the code above to generate one of the 4 corners of your quad for a sprite. SV_InstanceID works the same way, except it gives you the index of the instance being drawn. So in your case you would probably just use it to directly index into your structured buffer of sprite data. Unfortunately I don't know of any good tutorials or other resources regarding structured buffers. The D3D11 book that I collaborated on (link is in my signature) has a good overview of D3D11 resource types, but that's obviously not freely available. However, if you have any questions I can certainly try and answer them for you.
[/quote]
Wow, this cleared up a LOT! Thanks a bunch! To set a structured buffer, you have to create a ShaderResourceView to wrap it first? And about SV_InstanceID, you say it gives the index of the instance being drawn. What does an instance mean here? Does it have to do with the draw-instanced call? Haven't really looked into that yet, but if those are connected, I'll get right to it. :D
[quote name='Adam_42' timestamp='1336071514' post='4937182']
You might find [url="http://software.intel.com/en-us/articles/microsoft-directcompute-on-intel-ivy-bridge-processor-graphics/"]http://software.inte...essor-graphics/[/url] useful.
[/quote]
Looks very interesting! Will look into it more tomorrow when I get the time! Thanks a lot. :)
8. Hi! I'm trying to create my own SpriteBatch class. I actually made one a while back, but it had a lousy implementation. I've seen some code around here that used a structured buffer, something like this:
[code]
StructuredBuffer<Sprite> SpriteData;

struct Sprite
{
    float2 Position;
    float2 Size;
    byte TextureIndex;
};
[/code]
Now, I'm not looking for help implementing this; I'm just curious whether anybody knows any good guides, examples or tutorials on structured buffers. I get the structured buffer thing, having one Sprite struct instance per sprite. But what vertices would you send to the graphics card? I mean, if all the data you need is in the structured buffer, what would the vertices contain? They're obviously needed, or else there would be nothing to process in the vertex shader. And how do you make use of the system-generated values, such as SV_InstanceID and SV_VertexID? In the samples I saw, SV_InstanceID (I think) was used to fetch the right Sprite instance from the structured buffer, but I can't really get how this works. Again: I'm not asking anyone here to help implement a SpriteBatch class/shader, only whether there are any good guides or similar. Haven't found much searching around... Thanks in advance!
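Piecing together the replies in this thread, the shape of the answer can be sketched like this — an illustrative layout, not code from any particular sample: the quad is drawn with a NULL vertex buffer and 6 indices forming two triangles over 4 vertices, so SV_VertexID runs 0..3 and SV_InstanceID picks the sprite:
[code]
struct Sprite
{
    float2 Position; // one corner of the sprite in clip space (illustrative layout)
    float2 Size;
};

StructuredBuffer<Sprite> SpriteData : register(t0);

float4 SpriteVS(in uint VID : SV_VertexID, in uint IID : SV_InstanceID) : SV_Position
{
    // One Sprite per instance, fetched with the instance index
    Sprite sprite = SpriteData[IID];

    // One corner per vertex; with an index buffer, VID is the index value 0..3
    float2 corner = float2(VID & 1, VID >> 1);  // (0,0) (1,0) (0,1) (1,1)
    float2 pos = sprite.Position + corner * sprite.Size;

    return float4(pos, 0.0f, 1.0f);
}
[/code]
Invoked with something like context.DrawIndexedInstanced(6, spriteCount, 0, 0, 0) after binding the SRV to every shader stage that reads it.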
9. Hmm, alright. I think I'll just go with the default adapter for now. Thanks for your input.
10. So I took this one step further. I want to be able to change the adapter. But since this is sent along as a parameter to the Device, I assume you need to re-create it, unless there is some other way? And what would happen to resources created with the device? I'm guessing those would have to be re-created too? That leaves me with this thought: as seen in many games, some options require a game restart when changed. Is this the best solution for changing the adapter? Just let the user know that the change won't take effect until he/she restarts the game? Thanks in advance!
11. Alright, thanks, that clears up a lot! I think what I was looking for was right in front of my eyes... Resource.FromSwapChain<Texture2D>(swapChain, 0) is what fetches the backbuffer resource, am I right? Anyway, got it working now! I can't really test the multisampling though, or even change the settings after initialization, but I'll try that when I get the time and post then! I was stuck a long time on still having a black screen, but then I noticed that my render target texture and the swap chain used different formats... Here's the final code.

[b]GraphicsDevice.cs:[/b]
[code]
// Create a render target
Texture2DDescription renderTargetTextureDescription = new Texture2DDescription();
renderTargetTextureDescription.ArraySize = 1;
renderTargetTextureDescription.BindFlags = BindFlags.RenderTarget;
renderTargetTextureDescription.CpuAccessFlags = CpuAccessFlags.None;
renderTargetTextureDescription.Format = Format.R8G8B8A8_UNorm;
renderTargetTextureDescription.Height = Options.ResolutionHeight;
renderTargetTextureDescription.MipLevels = 1;
renderTargetTextureDescription.OptionFlags = ResourceOptionFlags.None;
renderTargetTextureDescription.SampleDescription = new SampleDescription(4, 16);
renderTargetTextureDescription.Usage = ResourceUsage.Default;
renderTargetTextureDescription.Width = Options.ResolutionWidth;

RenderTargetViewDescription renderTargetViewDescription = new RenderTargetViewDescription();
renderTargetViewDescription.Dimension = RenderTargetViewDimension.Texture2DMultisampled;
renderTargetViewDescription.MipSlice = 0;
renderTargetViewDescription.Format = renderTargetTextureDescription.Format;

renderTargetResource = new Texture2D(device, renderTargetTextureDescription);
renderTarget = new RenderTargetView(device, renderTargetResource, renderTargetViewDescription);

// Set the render target and viewport
var viewport = new Viewport(0.0f, 0.0f, currentOptions.ResolutionWidth, currentOptions.ResolutionHeight);
context.OutputMerger.SetTargets(renderTarget);
context.Rasterizer.SetViewports(viewport);

// Fetch the backbuffer
backbuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
[/code]
[b]void Draw(Clock clock):[/b]
[code]
graphicsDevice.Context.ClearRenderTargetView(graphicsDevice.RenderTarget, ClearColor);

foreach (var scene in VisibleScenes)
{
    scene.Draw(graphicsDevice, clock);
}

var source = graphicsDevice.RenderTarget.Resource;
var destination = graphicsDevice.Backbuffer;
graphicsDevice.Context.ResolveSubresource(source, 0, destination, 0, Format.R8G8B8A8_UNorm);
graphicsDevice.SwapChain.Present(0, PresentFlags.None);
[/code]
graphicsDevice.RenderTarget returns the created RenderTargetView; graphicsDevice.Backbuffer returns the resource fetched from the swap chain. Again, thanks a bunch MJP! :)
12. [quote name='MJP' timestamp='1335389724' post='4934887']
Your creation of the render target looks good. So what you want to do is bind that render target view, render to it, and then use DeviceContext.ResolveSubresource with the MSAA render target texture as the source and the backbuffer texture as the destination. This will resolve the MSAA render target (filter the individual MSAA subsamples) and copy the results to the backbuffer.
[/quote]
Is this the way to bind the render target view?
[code]
// Set the render target and viewport
var viewport = new Viewport(0.0f, 0.0f, currentOptions.ResolutionWidth, currentOptions.ResolutionHeight);
context.OutputMerger.SetTargets(renderTarget); // This binds?
context.Rasterizer.SetViewports(viewport);
[/code]
I'm guessing that the render target texture is the one you get from renderTarget.Resource? But which one is the backbuffer texture? How do I get that one? I thought that the render target you bound had the backbuffer, but that's probably wrong then. :)
13. [quote name='MJP' timestamp='1335135336' post='4933900']
You never need to recreate the device; you only do that once. For some changes you can call ResizeBuffers or ResizeTargets on the swap chain, however to change the MSAA mode you need to create a new swap chain. You can do that using a DXGI factory. In most cases you don't actually want to create your swap chain with MSAA enabled... typically you want to create it without MSAA and use an MSAA render target instead. Then you can just resolve the MSAA render target to the backbuffer as the final step.
[/quote]
Ah, thanks! I noticed that ResizeTargets can also change the refresh rate, compared to ResizeBuffers, so that's good. But how exactly do I create a render target with MSAA enabled? I currently create my render target view like this (like the SlimDX samples do it):
[code]
// Create a render target
using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
    renderTarget = new RenderTargetView(device, resource);
[/code]
But this way I can't really change any MSAA settings. I'm assuming you're talking about the SampleDescription property of the Texture2DDescription structure? So I guess I need to create my own Texture2D to use as the render target instead of using the FromSwapChain<Texture2D> method?

[b]EDIT[/b] I tried creating a render target with MSAA, and this is what I came up with:
[code]
// Create a render target
Texture2DDescription renderTargetTextureDescription = new Texture2DDescription();
renderTargetTextureDescription.ArraySize = 1;
renderTargetTextureDescription.BindFlags = BindFlags.RenderTarget;
renderTargetTextureDescription.CpuAccessFlags = CpuAccessFlags.None;
renderTargetTextureDescription.Format = Format.R32G32B32A32_Float;
renderTargetTextureDescription.Height = Options.ResolutionHeight;
renderTargetTextureDescription.MipLevels = 1;
renderTargetTextureDescription.OptionFlags = ResourceOptionFlags.None;
renderTargetTextureDescription.SampleDescription = new SampleDescription(Options.MultiSampleCount, Options.MultiSampleQuality);
renderTargetTextureDescription.Usage = ResourceUsage.Default;
renderTargetTextureDescription.Width = Options.ResolutionWidth;

RenderTargetViewDescription renderTargetViewDescription = new RenderTargetViewDescription();
renderTargetViewDescription.Dimension = RenderTargetViewDimension.Texture2D;
renderTargetViewDescription.MipSlice = 0;
renderTargetViewDescription.Format = renderTargetTextureDescription.Format;

//using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
using (var resource = new Texture2D(device, renderTargetTextureDescription))
    renderTarget = new RenderTargetView(device, resource, renderTargetViewDescription);
[/code]
But now my screen only turns black. Can anyone see what's wrong with my code? Thanks in advance!
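Two details turn out to matter here, judging from the fix that eventually worked later in the thread: when SampleDescription.Count > 1, the view has to use the multisampled dimension, and since nothing is drawn to the swap chain itself anymore, each frame has to end by resolving the MSAA texture into the backbuffer with matching formats. Roughly — assuming the MSAA texture is kept around as renderTargetResource rather than disposed in a using block:
[code]
// A view of a multisampled texture must use the multisampled dimension
renderTargetViewDescription.Dimension = RenderTargetViewDimension.Texture2DMultisampled;

// After drawing each frame: resolve (filter + copy) the MSAA target into the
// backbuffer; the texture, swap chain and resolve call must agree on the format
backbuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
context.ResolveSubresource(renderTargetResource, 0, backbuffer, 0, renderTargetTextureDescription.Format);
[/code]
Without the resolve, the rendered image never reaches the swap chain, which would look exactly like a black screen.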
14. Hello there! I have a small question regarding changing device settings (such as multisampling, refresh rate, etc.) after the device has been initialized. This is what I know: changing fullscreen mode requires a single call:
[code]
swapChain.IsFullscreen = !swapChain.IsFullscreen;
[/code]
Changing the screen resolution is fairly simple too; it's described on the [url="http://slimdx.org/tutorials/SimpleTriangle.php"]SlimDX tutorial page[/url]. This is what I did:
[code]
// Set the form's size
window.Form.ClientSize = new Size(width, height);

// Dispose the current render target and create a new one
renderTarget.Dispose();
swapChain.ResizeBuffers(1, 0, 0, Format.R8G8B8A8_UNorm, SwapChainFlags.AllowModeSwitch);
using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
    renderTarget = new RenderTargetView(device, resource);

// Set the new target
context.OutputMerger.SetTargets(renderTarget);
[/code]
Now to the problem. If I want to change any other settings (settings in the SwapChainDescription structure), such as multisampling and refresh rate, what do I need to do? Do I need to recreate the device and swap chain objects? If that is the case, what happens to resources created or loaded with the device, such as effects, textures, vertex and index buffers, etc.? Do these need to be recreated as well? Thanks in advance!
15. Yeah, I figured it could be floating point errors. I also realized I wasn't (5000, 0, 5000) far off in the distance, I was at (500,000, 0, 500,000). Don't ask me why. (: But alright, so I guess I should offset everything then? I currently have 5*5 patches visible to the player, with the camera in the center, obviously. If I pass over from one patch to the next, I could offset everything back by the length of one patch and still render around the origin? I'll give it a try. Thanks!
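The patch-sized rebasing described above can be sketched like this — a minimal sketch with illustrative names, assuming square patches of side patchSize and the SlimDX Vector3 type:
[code]
// Accumulated world-space origin offset; all rendering happens relative to it,
// so the positions fed to the GPU stay small and keep their float precision.
Vector3 worldOrigin = Vector3.Zero;

void UpdateOrigin(ref Vector3 cameraPosition, float patchSize)
{
    // When the camera leaves the center patch, shift the origin by whole
    // patch lengths and move everything (camera included) back with it
    float shiftX = (float)Math.Floor(cameraPosition.X / patchSize) * patchSize;
    float shiftZ = (float)Math.Floor(cameraPosition.Z / patchSize) * patchSize;

    if (shiftX != 0.0f || shiftZ != 0.0f)
    {
        Vector3 shift = new Vector3(shiftX, 0.0f, shiftZ);
        worldOrigin += shift;    // remember where we "really" are in the world
        cameraPosition -= shift; // camera stays near the local origin
        // ...and each patch's world matrix is rebuilt from
        // (patchCoord * patchSize - worldOrigin)
    }
}
[/code]
Because the shift is always a whole number of patch lengths, the terrain grid lines up exactly before and after a rebase, so nothing visibly pops.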