# DX11 Exception when creating a 2D render target

## Recommended Posts

A new player of my game reported an issue: when he starts the game, it crashes immediately, before he even sees the main menu. He sent me my game's log file, and it turns out the game crashes when it creates a 2D render target. Here is the full "Interface not supported" error message:

HRESULT: [0x80004002], Module: [General], ApiCode: [E_NOINTERFACE/No such interface supported], Message: Interface not supported
at SharpDX.Result.CheckError()
at SharpDX.Direct2D1.Factory.CreateDxgiSurfaceRenderTarget(Surface dxgiSurface, RenderTargetProperties& renderTargetProperties, RenderTarget renderTarget)
at SharpDX.Direct2D1.RenderTarget..ctor(Factory factory, Surface dxgiSurface, RenderTargetProperties properties)
at Game.AGame.Initialize()

Because of the log file's content, I know exactly where the game crashes:

Factory2D = new SharpDX.Direct2D1.Factory();

_surface = backBuffer.QueryInterface<SharpDX.DXGI.Surface>();

// It crashes when calling this line!
RenderTarget2D = new SharpDX.Direct2D1.RenderTarget(Factory2D, _surface, new SharpDX.Direct2D1.RenderTargetProperties(new SharpDX.Direct2D1.PixelFormat(_dxgiFormat, SharpDX.Direct2D1.AlphaMode.Premultiplied)));

RenderTarget2D.AntialiasMode = SharpDX.Direct2D1.AntialiasMode.Aliased;

I did some research on this error message, and all the similar problems I found were six to seven years old, from when people tried to mix DirectX 11 3D graphics with DirectX 10.1 2D graphics. However, I am using DirectX 11 for all the visuals. The game runs fine on the computers of all other 2500 players, so I am trying to figure out why this code crashes on this one player's machine. He uses Windows 7 with all Windows Updates, 17179 MB of memory and an NVIDIA GeForce GTX 870M graphics card, which is more than enough to run my game.

Below is the code I use for creating the 3D device and the swap chain. I made sure to request BGRA support when creating the device, because it is required when using Direct2D inside a Direct3D 11 game. The same DXGI format is used for the 2D and the 3D content, and the refresh rate is read from the selected adapter. The GetAdapter, GetRefreshRate and GetSwapChainDescription helpers are not shown here (a rough sketch of the last one follows after the code).

// Set swap chain flags, DXGI format and default refresh rate.
_swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
_dxgiFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;
SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

// Get proper video adapter and create device and swap chain.
using (var factory = new SharpDX.DXGI.Factory1())
{
    SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
    if (adapter != null)
    {
        // Get refresh rate.
        refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);

        // Create device and swap chain.
        _device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[]
        {
            SharpDX.Direct3D.FeatureLevel.Level_10_1
        });

        _swapChain = new SharpDX.DXGI.SwapChain(factory, _device, GetSwapChainDescription(clientSize, outputHandle, refreshRate));

        _deviceContext = _device.ImmediateContext;
    }
}
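(For reference, GetSwapChainDescription is not part of the snippet above; the sketch below is only a guess at what such a helper could contain, based on the fields used in the post: the same B8G8R8A8 format, the queried refresh rate and a plain windowed swap chain. The clientSize type is assumed to be System.Drawing.Size.)

// Hypothetical sketch, not the actual helper from the game.
private SharpDX.DXGI.SwapChainDescription GetSwapChainDescription(System.Drawing.Size clientSize, IntPtr outputHandle, SharpDX.DXGI.Rational refreshRate)
{
    return new SharpDX.DXGI.SwapChainDescription
    {
        BufferCount = 1,
        ModeDescription = new SharpDX.DXGI.ModeDescription(clientSize.Width, clientSize.Height, refreshRate, SharpDX.DXGI.Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = outputHandle,
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        SwapEffect = SharpDX.DXGI.SwapEffect.Discard,
        Usage = SharpDX.DXGI.Usage.RenderTargetOutput,
        Flags = _swapChainFlags
    };
}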


Are you sure that his Windows 7 is fully updated? The D3D11 + D2D interop came to Win7 as a platform update, KB2670838.
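If you want the game to fail more gracefully on machines that are missing the update, one option (a rough, untested sketch with SharpDX; the Log call stands in for whatever logging the game already uses) is to catch the exception around the render target creation and check for E_NOINTERFACE:

try
{
    RenderTarget2D = new SharpDX.Direct2D1.RenderTarget(Factory2D, _surface, new SharpDX.Direct2D1.RenderTargetProperties(new SharpDX.Direct2D1.PixelFormat(_dxgiFormat, SharpDX.Direct2D1.AlphaMode.Premultiplied)));
}
catch (SharpDX.SharpDXException ex) when (ex.ResultCode == SharpDX.Result.NoInterface)
{
    // E_NOINTERFACE from CreateDxgiSurfaceRenderTarget on Windows 7 usually means
    // the Platform Update (KB2670838) that enables D3D11/D2D interop is missing.
    Log.Error("Direct2D render target creation failed (E_NOINTERFACE). On Windows 7, please install the Platform Update KB2670838."); // Log is a placeholder
    throw;
}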


No, I am not. He told me he has all Windows Updates, but I can't spy on my players' computers and I don't want to :-) I will ask him.


Dear SoldierOfLight, thank you very much! My player installed this update manually and he could play the game afterwards! Thank you!!


Cool, glad I could help. That number is burned into my brain from all the troubles it caused, but I'm glad it's helping at least one person.

## Similar Content

• I've got a bug with my brick-breaker-style game. The bricks move down one line at a time, every 1.5 seconds. What appears to be happening is that occasionally the ball is just about to hit a brick when the brick moves down a line, and now the ball is behind it. I'm not sure how to fix this. I have two ideas, but I'm not sure about the implementation. Solution 1 would be to check where the ball and the brick were and where they are going to be before rendering the frame; if they crossed paths, register the brick as hit (a rough sketch of this idea follows below). Solution 2 would be to change how the bricks move: I could slide them down line by line instead of jumping them down, but I'm not sure whether that would fix the issue. Any ideas?
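A minimal sketch of the first idea (engine-agnostic C#; the rectangle values are placeholders for whatever bounds the game already stores): take each object's bounds before and after the update, build the box that covers both positions, and treat an overlap of the two swept boxes as a hit, so a brick that jumps past the ball in a single step is still detected.

using System.Drawing;

static bool CrossedDuringFrame(RectangleF ballBefore, RectangleF ballAfter, RectangleF brickBefore, RectangleF brickAfter)
{
    // Box covering each object's start and end position for this frame ("swept" box).
    RectangleF ballSwept = RectangleF.Union(ballBefore, ballAfter);
    RectangleF brickSwept = RectangleF.Union(brickBefore, brickAfter);

    // Conservative: may report a hit slightly early, but cannot miss a tunnelled pass.
    return ballSwept.IntersectsWith(brickSwept);
}

Check this for bricks near the ball before committing the new positions; if it returns true, resolve the hit as if the contact happened mid-step.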

• Once again Unity is frustrating me to the point of insanity.
What I am looking for is a way to find where a ray intersects the edges of a mesh, using Unity's already existing collision system. I want to point out that I know how to do a line intersection; what I want to know is whether Unity supports this already.

The image above shows how I sweep a ray intersecting the mesh. The top (green) image shows what I want and the red one shows what Unity is giving me.
I want to know if there is some way to find the edges in Unity without creating a custom line intersection tool.
Most engines I know don't use rays for this but instead use a plane, like this:

I checked Unity's "Plane intersection", but it is just a ray cast. It would still require me to find the vertices on the collision mesh to cast the ray from; if I am doing that, then making my own line intersection tool is better.

I looked online and can't find anything on this. Also, I don't want to cut the mesh, so I don't need a way to know which side is which.
Does Unity even have collisions that support edge-only detection? (A rough sketch of a manual edge test follows below.)
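As far as I know, Unity's built-in physics queries only report surface hits, so an edge test has to be written by hand. A rough, untested sketch of such a tool (brute force over every triangle edge; all names here are made up for the example):

using System.Collections.Generic;
using UnityEngine;

public static class MeshEdgeQuery
{
    // Returns points on mesh edges that pass within 'tolerance' of the ray.
    public static List<Vector3> EdgePointsNearRay(Mesh mesh, Transform meshTransform, Ray ray, float tolerance)
    {
        var hits = new List<Vector3>();
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        for (int i = 0; i < tris.Length; i += 3)
        {
            for (int e = 0; e < 3; e++)
            {
                Vector3 a = meshTransform.TransformPoint(verts[tris[i + e]]);
                Vector3 b = meshTransform.TransformPoint(verts[tris[i + (e + 1) % 3]]);

                Vector3 p;
                if (ClosestPointOnSegmentToRay(a, b, ray, out p) && DistanceRayToPoint(ray, p) <= tolerance)
                    hits.Add(p);
            }
        }
        return hits;
    }

    // Closest point on segment ab to the ray's line; false if they are (nearly) parallel.
    static bool ClosestPointOnSegmentToRay(Vector3 a, Vector3 b, Ray ray, out Vector3 point)
    {
        Vector3 u = ray.direction;
        Vector3 v = b - a;
        Vector3 w = ray.origin - a;
        float A = Vector3.Dot(u, u), B = Vector3.Dot(u, v), C = Vector3.Dot(v, v);
        float D = Vector3.Dot(u, w), E = Vector3.Dot(v, w);
        float denom = A * C - B * B;

        point = a;
        if (denom < 1e-6f)
            return false;

        float t = Mathf.Clamp01((A * E - B * D) / denom); // parameter along the segment
        point = a + t * v;
        return true;
    }

    static float DistanceRayToPoint(Ray ray, Vector3 p)
    {
        float s = Mathf.Max(0f, Vector3.Dot(p - ray.origin, ray.direction));
        return Vector3.Distance(ray.origin + s * ray.direction, p);
    }
}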
• By chiffre
Introduction:
In general my questions pertain to the differences between floating- and fixed-point data. Additionally I would like to understand when it can be advantageous to prefer fixed-point representation over floating-point representation in the context of vertex data and how the hardware deals with the different data-types. I believe I should be able to reduce the amount of data (bytes) necessary per vertex by choosing the most opportune representations for my vertex attributes. Thanks ahead of time if you, the reader, are considering the effort of reading this and helping me.
I found an old topic that shows this is possible in principle, but I am not sure I understand what the pitfalls are when using fixed-point representation and whether there are any hardware-based performance advantages or disadvantages.
(TLDR at bottom)
The Actual Post:
To my understanding HLSL/D3D11 offers not just the traditional floating point model in half-,single-, and double-precision, but also the fixed-point model in form of signed/unsigned normalized integers in 8-,10-,16-,24-, and 32-bit variants. Both models offer a finite sequence of "grid-points". The obvious difference between the two models is that the fixed-point model offers a constant spacing between values in the normalized range of [0,1] or [-1,1], while the floating point model allows for smaller "deltas" as you get closer to 0, and larger "deltas" the further you are away from 0.
To add some context, let me define a struct as an example:
struct VertexData {
    float[3] position; // 3 x 32 bits
    float[2] texCoord; // 2 x 32 bits
    float[3] normals;  // 3 x 32 bits
} // Total of 32 bytes

Every vertex gets a position, a coordinate on my texture, and a normal to do some light calculations. In this case we have 8x32 = 256 bits per vertex. Since the texture coordinates lie in the interval [0,1] and the normal vector components are in the interval [-1,1], it would seem useful to use normalized representation as suggested in the topic linked at the top of the post. The texture coordinates might as well be represented in a fixed-point model, because it seems most useful to be able to sample the texture in a uniform manner, as the pixels don't get any "denser" as we get closer to 0. In other words, the "delta" does not need to become any smaller as the texture coordinates approach (0,0). A similar argument can be made for the normal vector, as a normal vector should be normalized anyway, and we want as many points as possible on the sphere around (0,0,0) with a radius of 1, and we don't care about precision around the origin. Even if we have large textures such as 4k by 4k (or the maximum allowed by D3D11, 16k by 16k), we only need as many grid-points on one axis as there are pixels on one axis. An unsigned normalized 14-bit integer would be ideal, but because it is both unsupported and impractical, we will stick to an unsigned normalized 16-bit integer. The same type should take care of the normal vector coordinates, and might even be a bit overkill.
struct VertexData {
    float[3] position;    // 3 x 32 bits
    uint16_t[2] texCoord; // 2 x 16 bits
    uint16_t[3] normals;  // 3 x 16 bits
} // Total of 22 bytes

Seems like a good start, and we might even be able to take it further, but before we pursue that path, here is my first question: can the GPU even work with the data in this format, or is all I have accomplished minimizing CPU-side RAM usage? Does the GPU have to convert the texture coordinates back to a floating-point model when I hand them over to the sampler in my pixel shader? I have looked up the data types for HLSL and I am not sure I even comprehend how to declare the vertex input type in HLSL. Would the following work?
struct VertexInputType {
    float3 pos;         // this one is obvious
    unorm half2 tex;    // half corresponds to a 16-bit float, so I assume this is wrong, but it is the only 16-bit type I found on the linked MSDN site
    snorm half3 normal; // same as above
}

I assume this is possible somehow, as I have found input element formats such as DXGI_FORMAT_R16G16B16A16_SNORM and DXGI_FORMAT_R16G16B16A16_UNORM (also available with a different number of components, as well as different component lengths). I might have to avoid 3-component vectors because there is no 3-component 16-bit input element format, but that is the least of my worries. The next question would be: what happens with my normals if I try to do lighting calculations with them in such a normalized fixed-point format? Is there no issue as long as I take care not to mix floating- and fixed-point data? Or would that work as well? In general this gives rise to the question: how does the GPU handle fixed-point arithmetic? Is it the same as integer arithmetic, and/or is it faster/slower than floating-point arithmetic?
Assuming that we still have a valid and useful VertexData format, how far could I take this while remaining on the sensible side of what could be called optimization? Theoretically I could use an input element format such as DXGI_FORMAT_R10G10B10A2_UNORM to pack my normal coordinates into a 10-bit fixed-point format, and my vertices (in object space) might even be representable in a 16-bit unsigned normalized fixed-point format. That way I could end up with something like the following struct:
struct VertexData {
    uint16_t[3] pos;        // 3 x 16 bits
    uint16_t[2] texCoord;   // 2 x 16 bits
    uint32_t packedNormals; // 10+10+10+2 bits
} // Total of 14 bytes

Could I use a vertex structure like this without too much performance loss on the GPU side? If the GPU has to execute some sort of unpacking algorithm in the background, I might as well let it be. In the end I have a functioning deferred renderer, but I would like to reduce the memory footprint of the huge amount of vertices involved in rendering my landscape.
TLDR: I have a lot of vertices that I need to render and I want to reduce the RAM usage without introducing crazy compression/decompression algorithms on the CPU or GPU. I am hoping to find a solution by involving fixed-point data types, but I am not exactly sure how that would work (a rough input-layout sketch follows below).
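For illustration, this is roughly how such a compressed layout could be declared on the CPU side with SharpDX-style C# (the semantic names, offsets, and the variable names device / vertexShaderBytecode are just assumptions for this example). The input assembler expands UNORM/SNORM formats back to floats in [0,1] or [-1,1] before the vertex shader runs, so the HLSL input struct can declare plain float4/float2 members and no manual unpacking is needed; only object-space positions stored as UNORM still need a scale/offset (e.g. from a constant buffer) to map [0,1] back to real coordinates.

// Sketch only: there is no 3-component 16-bit format, so the position is padded
// to R16G16B16A16_UNorm here, which makes this vertex 16 bytes instead of 14.
var layout = new SharpDX.Direct3D11.InputLayout(device, vertexShaderBytecode, new[]
{
    new SharpDX.Direct3D11.InputElement("POSITION", 0, SharpDX.DXGI.Format.R16G16B16A16_UNorm, 0, 0),
    new SharpDX.Direct3D11.InputElement("TEXCOORD", 0, SharpDX.DXGI.Format.R16G16_UNorm, 8, 0),
    new SharpDX.Direct3D11.InputElement("NORMAL", 0, SharpDX.DXGI.Format.R10G10B10A2_UNorm, 12, 0),
});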
• By cozzie
Hi all,
I was wondering if it matters in which order you draw 2D and 3D items, looking at the BeginDraw/EndDraw calls on a D2D render target.
The order in which you do the actual draw calls is clear, 3D first then 2D, means the 2D (DrawText in this case) is in front of the 3D scene.
The question is mainly about when to call the BeginDraw and EndDraw.
Note that I'm drawing D2D stuff through a DXGI surface linked to the 3D RT.
Option 1:
A - Begin frame, clear D3D RT
B - Draw 3D
C - BeginDraw D2D RT
D - Draw 2D
E - EndDraw D2D RT
F - Present
Option 2:
A - Begin frame, clear D3D RT + BeginDraw D2D RT
B - Draw 3D
C - Draw 2D
D - EndDraw D2D RT
E- Present
Would there be a difference (performance or other issues) in using option 2 versus option 1? (Option 1 is written out as a small sketch below for reference.)
Any input is appreciated.
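For reference, option 1 written out as a frame loop (a SharpDX-style sketch; the field and method names are placeholders, not from an actual project):

// Option 1 as a frame loop (sketch; names are placeholders).
void RenderFrame()
{
    // A - begin frame, clear the D3D render target
    _deviceContext.ClearRenderTargetView(_renderTargetView, SharpDX.Color.Black);

    // B - draw 3D
    Draw3DScene();

    // C/D/E - all D2D calls bracketed by BeginDraw/EndDraw on the D2D render target
    _d2dRenderTarget.BeginDraw();
    DrawText2D();
    _d2dRenderTarget.EndDraw();

    // F - present
    _swapChain.Present(1, SharpDX.DXGI.PresentFlags.None);
}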
• By JuliaAxt
Please help me with this code; these errors are currently stopping my project (a possible correction is sketched below the code).
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Rigidbody2D))]
public class TapController : MonoBehaviour {

    public float tapForence = 10;
    public float tiltSmooth = 5;
    public Vector3 startPos;

    Rigidbody2D Rigidbody;
    Quaternion downRotation;
    Quaternion forwardRotation;

    private void Start() {
        Rigidbody = GetComponent<Rigidbody2D>();
        downRotation = Quaternion.Euler(0, 0, -90);
        forwardRotation = Quaternion.Euler(0, 0, 35);
    }

    private void Update() {
        if (Input.GetMouseButtonDown(0))
        {
            transform.rotation = forwardRotation;
            Rigidbody.AddForce(Vector2.up * tapForce, ForceMode2D.Force);   // error: "The name 'tapForce' does not exist in the current context"
        }
        transform.rotation = Quaternion.Lerp(transform.rotation, downRotation, tiltSmooth * Time.deltaTime);
    }
}

void OnTriggerEnter2D(Collider2D col) {
    if (col.gameObject.tag == "scoreZone")
    {
        // register a score event
        // play a sound
    }
    if (col.gameObject.tag == "deadZone")
    {
        Rigidbody.simulated = false;   // error: "'Rigidbody' does not contain a definition for 'simulated'"
        // register a dead event
        // play a sound
    }
}

}
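A possible corrected version (an assumption about the causes, not a confirmed fix): the first error appears because the field is declared as tapForence while the code uses tapForce, and the second because OnTriggerEnter2D sits outside the class body as pasted, so Rigidbody resolves to the UnityEngine.Rigidbody type (which has no simulated member) instead of the Rigidbody2D field. Renaming the field also avoids that clash:

using UnityEngine;

[RequireComponent(typeof(Rigidbody2D))]
public class TapController : MonoBehaviour {

    public float tapForce = 10;   // was "tapForence"
    public float tiltSmooth = 5;
    public Vector3 startPos;

    Rigidbody2D rb;               // renamed so it cannot be confused with the Rigidbody type
    Quaternion downRotation;
    Quaternion forwardRotation;

    private void Start() {
        rb = GetComponent<Rigidbody2D>();
        downRotation = Quaternion.Euler(0, 0, -90);
        forwardRotation = Quaternion.Euler(0, 0, 35);
    }

    private void Update() {
        if (Input.GetMouseButtonDown(0)) {
            transform.rotation = forwardRotation;
            rb.AddForce(Vector2.up * tapForce, ForceMode2D.Force);
        }
        transform.rotation = Quaternion.Lerp(transform.rotation, downRotation, tiltSmooth * Time.deltaTime);
    }

    // Moved inside the class so the Rigidbody2D field is in scope.
    void OnTriggerEnter2D(Collider2D col) {
        if (col.gameObject.tag == "scoreZone") {
            // register a score event
            // play a sound
        }
        if (col.gameObject.tag == "deadZone") {
            rb.simulated = false;   // Rigidbody2D.simulated exists in Unity 5.5 and newer
            // register a dead event
            // play a sound
        }
    }
}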
