
spazzarama

Member Since 19 Mar 2014

#5261515 take screenshot on secondary screen

Posted by spazzarama on 11 November 2015 - 03:54 AM


Just want to know if it is possible to take a screenshot from my second monitor.

 

Yes, it is possible, however the right approach depends on what is being displayed there. If you want to capture the second screen regardless of what is shown on it, then you might need to look at mirror drivers and the like (or more traditional desktop capture approaches - see the sketch below). If a fullscreen Direct3D window is being shown on the 2nd display then you might need to look at grabbing the image from Direct3D.
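
A minimal sketch of the traditional desktop capture route, using GDI via System.Drawing / System.Windows.Forms (the [1] index assumes a second monitor is actually present):

using System.Drawing;
using System.Windows.Forms;

// Grab whatever the desktop is showing on the second monitor.
// Note: this will NOT capture an exclusive fullscreen Direct3D app.
var screen = Screen.AllScreens[1];
using (var bmp = new Bitmap(screen.Bounds.Width, screen.Bounds.Height))
{
    using (var g = Graphics.FromImage(bmp))
    {
        g.CopyFromScreen(screen.Bounds.Location, Point.Empty, screen.Bounds.Size);
    }
    bmp.Save("screen2.png", System.Drawing.Imaging.ImageFormat.Png);
}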

 

If the 2nd screen is showing the desktop, I don't think the DirectX approach is going to work unless you are working with the DWM.

 

You may need to implement more than one approach to handle all the scenarios, but again we don't know what you are trying to do exactly.

 

Just a tip - capturing from the front buffer is always going to be slow, and capturing from the backbuffer of 3rd party apps is usually done through hooking (take a look at the Direct3DHook project in my signature).

 

Other approaches include mirror display drivers, but I don't have much experience with these.




#5255768 Using whitelisting to run untrusted C# code safely

Posted by spazzarama on 06 October 2015 - 12:28 AM

Could you please elaborate on what you are trying to protect: your application, or your user's system?

 

I would probably read that caution as "don't rely solely on CAS".

 

You can provide a handler for the AppDomain's assembly/type resolve events and prevent particular types/assemblies from being accessed in your AppDomain (see the sketch below). This might also help you prevent reflection from bypassing a whitelisting implementation (e.g. restrict access to the reflection types).
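
A minimal sketch of the AssemblyResolve part, with placeholder names. Note the event only fires when normal assembly probing fails, so the sandbox domain's ApplicationBase should point somewhere empty to force plugin loads through the handler, and the handler should be registered from code already running inside the sandbox domain:

using System;
using System.Reflection;

static void InstallResolveGate()
{
    AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
    {
        var requested = new AssemblyName(args.Name);
        // Only assemblies on the whitelist ever resolve
        string[] whitelist = { "MyGame.PluginApi" };
        foreach (var allowed in whitelist)
        {
            if (string.Equals(requested.Name, allowed, StringComparison.OrdinalIgnoreCase))
                return Assembly.Load(requested);
        }
        return null; // anything else fails to load
    };
}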

 

Btw, reflection can let you mess with non-public members as well, so don't ignore that.
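
For example (the type and member names here are purely illustrative), untrusted code can do this unless reflection is restricted:

using System.Reflection;

// Reflection reaches private state that a whitelist never exposed
var field = typeof(Player).GetField("health",
    BindingFlags.NonPublic | BindingFlags.Instance);
field.SetValue(somePlayer, 9999); // rewrites private data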

 

Maybe all this combined with a "peer review/approved" approach to marking maps/plugins as safe would be enough - then the user can be warned about the dangers of using a particular extension.

 

Add-ins might also be of interest: https://msdn.microsoft.com/en-us/library/bb384200.aspx and https://msdn.microsoft.com/en-us/library/bb355219.aspx
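
A rough sketch of the System.AddIn (MAF) activation model those links describe - discovery, then activation in a sandboxed AppDomain. The IMapPlugin contract and pipeline path are placeholders:

using System.AddIn.Hosting;

string pipelineRoot = @"C:\MyGame\Pipeline";
AddInStore.Update(pipelineRoot); // rebuild the pipeline caches
var tokens = AddInStore.FindAddIns(typeof(IMapPlugin), pipelineRoot);
// Internet-zone permissions keep the add-in on a tight leash
IMapPlugin plugin = tokens[0].Activate<IMapPlugin>(AddInSecurityLevel.Internet);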




#5253256 Direct3D11 Multithreading

Posted by spazzarama on 21 September 2015 - 01:50 AM

As @MJP said, you are unlikely to gain much performance using deferred contexts with D3D11 *unless* you are doing a lot of CPU-intensive work that for whatever reason you cannot separate from your rendering logic *and* that can actually be parallelised.

 

I've got a C#/SharpDX example: https://github.com/spazzarama/Direct3D-Rendering-Cookbook/tree/master/Ch10_01DeferredRendering. You may need to take a good look at the code to work out how to drive it, as it assumes you are reading the book at the same time.
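
For reference, the basic pattern in SharpDX looks something like this (assuming an existing Device "device" and its immediate context "immediateContext"):

using SharpDX.Direct3D11;

// Record commands on a worker thread via a deferred context...
var deferred = new DeviceContext(device); // wraps CreateDeferredContext
// ...issue state changes and Draw calls on "deferred" here...
CommandList commandList = deferred.FinishCommandList(false);

// ...then play them back on the immediate context (render thread)
immediateContext.ExecuteCommandList(commandList, false);
commandList.Dispose();
deferred.Dispose();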




#5252967 Direct3D SharpDX companion projects for Direct3D Rendering Cookbook now on Gi...

Posted by spazzarama on 18 September 2015 - 06:46 PM

I've just uploaded my Direct3D Rendering Cookbook projects to GitHub for ease of access. These projects are written in C# using SharpDX and can be built in VS2012 / VS2013+.

 

I'll be updating these projects to support current SharpDX releases; however, for anyone wanting to try their hand at SharpDX they provide a good starting point as is.

 

Cheers,

J




#5226665 Vector graphics

Posted by spazzarama on 01 May 2015 - 03:18 AM

Hi guys, can I use normal images for Unity 2D or do I have to use vector images? I know that there are a lot of resolutions and screen sizes (especially for smartphones), so I thought that I should use vector... Thank you.

For a 2D app it makes it heaps easier if your original assets are vector images to begin with, so that you can save them off at the different resolutions necessary - assuming that you need to target a range of resolutions. Personally I found it useful even just for getting the icons, splash screens and other assets done that are necessary for release. Of course you don't have to, and I don't think Unity has built-in vector graphic support, so you would be saving the images off as raster graphics anyway (although I think there are various attempts to support vectors on the Unity 3D asset store). I would take a look over at their forum or perhaps ask there too.




#5226461 Need help with Cameras

Posted by spazzarama on 30 April 2015 - 03:25 AM

Has it ever worked?

 

I'm not a whiz with math, so stepping through matrix calculations is always a very time-consuming way for me to check whether they are what is expected or not :)

 

Therefore I like to do a sanity check and create the simplest version first, e.g. just facing forward from the origin or something equally basic (see the sketch below). If that works, then introduce your rotation calculations and so on until it fails again. The problem could be your camera class, or it could be a shader - who knows.
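
Something like this is what I mean by a "known good" starting point in SharpDX (renderWidth/renderHeight stand in for whatever your backbuffer size is):

using SharpDX;

// Eye 5 units back on -Z, looking at the origin, Y up
var view = Matrix.LookAtLH(new Vector3(0, 0, -5), Vector3.Zero, Vector3.UnitY);
var proj = Matrix.PerspectiveFovLH(MathUtil.PiOverFour,
    renderWidth / (float)renderHeight, 0.1f, 100.0f);
// If geometry at the origin shows up with these, reintroduce your own
// camera calculations one step at a time until it breaks.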

 

Btw, is this SharpDX? What version? etc. - you may need to give us more info so we can help you.




#5200705 C# book recommendations for beginners?

Posted by spazzarama on 29 December 2014 - 07:33 PM

Always good to have a few good books on the subject.

Once you know enough to ask the right questions you will find a lot of what you want to know using your favourite search engine.




#5194556 SlimDx - ResolveSubresource

Posted by spazzarama on 25 November 2014 - 12:51 AM

I have almost identical code using SharpDX (and have previously worked with SlimDX), and everything you are doing looks OK. The only differences here are that I always pass 0 as the source and destination subresource index (I'm not sure whether what you have would be coming through as 0 or not), and I have no need to bind the texture to the pipeline so it has no BindFlags. Here is the example in case it helps:

// texture is multi-sampled, lets resolve it down to single sample
textureResolved = new Texture2D(texture.Device, new Texture2DDescription()
{
	CpuAccessFlags = CpuAccessFlags.None,
	Format = texture.Description.Format,
	Height = texture.Description.Height,
	Usage = ResourceUsage.Default,
	Width = texture.Description.Width,
	ArraySize = 1,
	SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0), // Ensure single sample
	BindFlags = BindFlags.None,
	MipLevels = 1,
	OptionFlags = texture.Description.OptionFlags
});
// Resolve into textureResolved
texture.Device.ResolveSubresource(texture, 0, textureResolved, 0, texture.Description.Format);




#5190088 SlimDX to take Fullscreen Game-Screenshots

Posted by spazzarama on 30 October 2014 - 12:53 AM

To capture images from a fullscreen Direct3D application you require access to the underlying Direct3D device and depending upon your approach you may need to hook into the Direct3D methods. To do this you usually would need to perform some form of code injection into the target application.

 

I've played around with this for a few years in C# with both SlimDX and SharpDX. You can find a discussion on "Screen capture and overlays for Direct3D 9, 10 and 11 using API hooks" on my blog, and the Direct3DHook project on GitHub. I now use SharpDX exclusively (it's a bit faster, lighter weight and easier to distribute), but you can fairly easily convert it to use SlimDX (or use the old link to code in the blog post - although I recommend only using the linked zip for reference, as many issues have since been addressed). There is some other relevant information in an older post on my blog, which also describes an alternative approach to hooking using interface wrapping.

 

The Direct3DHook project uses EasyHook to inject managed C# assemblies into the target process and to hook the necessary Direct3D functions.
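
The in-process side of that pattern looks roughly like the EasyHook samples below. Here user32!MessageBeep stands in for the real target; Direct3DHook itself hooks the device's Present/EndScene through the COM vtable rather than a plain Win32 export:

using System;
using System.Runtime.InteropServices;
using EasyHook;

class HookSketch
{
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate bool MessageBeepDelegate(uint uType);

    static bool MessageBeepHook(uint uType)
    {
        // ...capture / overlay work would go here...
        return true; // swallow the original call
    }

    public static void Install()
    {
        LocalHook hook = LocalHook.Create(
            LocalHook.GetProcAddress("user32.dll", "MessageBeep"),
            new MessageBeepDelegate(MessageBeepHook),
            null);
        // Activate on all threads except the current one (EasyHook sample pattern)
        hook.ThreadACL.SetExclusiveACL(new Int32[] { 0 });
    }
}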

 

NOTE: SharpDX only works with C# not VB.NET - so if VB is essential you will need to stay with SlimDX.

 

EDIT: lol just reread that code and noticed it is a translation to VB from one of my earlier projects :)




#5189877 Questions about billboards

Posted by spazzarama on 29 October 2014 - 01:39 AM

What @unbird said, couldn't have said it any better.




#5189179 Questions about billboards

Posted by spazzarama on 26 October 2014 - 12:37 AM

You can expand and align the quads using a vertex shader and instancing without having to resort to using the geometry shader stage.

 

Here is the vertex shader part of a DX11 example of rendering particles using billboards and vertex shader instancing taken from Chapter 8 of my book Direct3D Rendering Cookbook. This doesn't need the geometry shader stage at all and the full example uses append/consume buffers within a compute shader to perform the particle simulation.

 

By using instancing to determine the index into the particle buffer there is no need to provide any input vertex buffer(s) into the vertex shader stage.

// Represents a particle
struct Particle {
    float3 Position;
    float Radius;
    float3 OldPosition;
    float Energy;
};

// Access to the particle buffer
StructuredBuffer<Particle> particles : register(t0);

// Some common structures and constant buffers (e.g. PS_Input, projections and so on)
#include "Common.hlsl"

// Represents the vertex positions for our triangle strips
static const float4 vertexUVPos[4] =
{
    { 0.0, 1.0, -1.0, -1.0 },
    { 0.0, 0.0, -1.0, +1.0 },
    { 1.0, 1.0, +1.0, -1.0 },
    { 1.0, 0.0, +1.0, +1.0 },
};

float4 ComputePosition(in float3 pos, in float size, in float2 vPos)
{
    // Create billboard (quad always facing the camera)
    float3 toEye = normalize(CameraPosition.xyz - pos);
    float3 up    = float3(0.0f, 1.0f, 0.0f);
    float3 right = cross(toEye, up);
    up           = cross(toEye, right);
    pos += (right * size * vPos.x) + (up * size * vPos.y);
    return mul(float4(pos, 1), WorldViewProjection);
}

PS_Input VSMainInstance(in uint vertexID : SV_VertexID, in uint instanceID : SV_InstanceID)
{
    PS_Input result = (PS_Input)0;

    // Load particle using vertex instance Id
    Particle p = particles[instanceID];
    // Vertices in triangle strip
    // 0-1
    //  /
    // 2-3
    // Load vertex pos using the vertexID for the vertex in the strip (i.e. 0, 1, 2 or 3)
    result.UV = vertexUVPos[vertexID].xy;
    result.Position = ComputePosition(p.Position, p.Radius, vertexUVPos[vertexID].zw);
    result.Energy = p.Energy;
    return result;
}

To use this shader you set the input assembler primitive topology to a "triangle strip" and then use DrawInstancedIndirect (assuming that a compute shader determines the number of particles; otherwise use DrawInstanced), as in the sketch below.
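
In SharpDX that looks roughly like the following ("context", "particleSRV" and "argsBuffer" are assumed to already exist):

// No per-vertex input: the shader generates everything from the IDs
context.InputAssembler.InputLayout = null;
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleStrip;
context.VertexShader.SetShaderResource(0, particleSRV); // particles at t0

// Instance count written on the GPU (e.g. via CopyStructureCount into argsBuffer)
context.DrawInstancedIndirect(argsBuffer, 0);
// ...or, if the particle count is known on the CPU:
// context.DrawInstanced(4, particleCount, 0, 0);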




#5179780 1D and 3D textures useless?

Posted by spazzarama on 12 September 2014 - 12:17 AM


I want to create a (100,200,300) 3D texture; how would this look as a 2D texture (?,?)

 

To represent the same number of texels as that 3D texture you would need to create one very large 2D texture, e.g. 30,000 x 200 - of course it won't look exactly the same in memory, and I would think that mipmapping will be very different and probably give undesirable results. Otherwise, 300 separate 100x200 2D textures? See the sketch below for one possible layout.
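
Hypothetically, if you laid the 300 z-slices side by side along x, addressing the flattened atlas would look like:

// Volume: 100 (w) x 200 (h) x 300 (d) -> atlas: 30,000 x 200,
// with the 300 z-slices laid out side by side along the x axis.
static class VolumeAtlas
{
    public static int AtlasX(int x, int z) { return z * 100 + x; } // slice z starts at column z * 100
    public static int AtlasY(int y) { return y; }
}
// e.g. volume texel (x:25, y:50, z:2) lands at atlas (225, 50)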

 

Promit's link on texture tiling is probably what you are after here.




#5179754 1D and 3D textures useless?

Posted by spazzarama on 11 September 2014 - 09:58 PM

It's important to note that textures are not only used to store image data. You can store tables of lookup values, pseudo-random numbers, height maps - all sorts of information - and for some of these a 1D or 3D texture makes the most sense.

 

A 1D texture might be used in cel-shading to map the diffuse reflection component to a colour, or for a range of other lookup tables.
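
For example, creating a small cel-shading ramp as a 1D texture in SharpDX might look like this ("device", "context" and the ramp texel data are assumptions):

using SharpDX.Direct3D11;
using SharpDX.DXGI;

var rampDesc = new Texture1DDescription
{
    Width = 256, // 256 colour bands, sampled with the diffuse term as U
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.R8G8B8A8_UNorm,
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};
var ramp = new Texture1D(device, rampDesc);
context.UpdateSubresource(rampTexels, ramp); // rampTexels: 256 packed RGBA values
var rampSRV = new ShaderResourceView(device, ramp);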

 

A 3D texture might be used for some volumetric effects (e.g. smoke and the like). Pretty good overview of 3D textures here.




#5177825 Compile shaders in build time with common functions

Posted by spazzarama on 03 September 2014 - 04:42 AM

Hodgman's answer is definitely what you are after.

 

Another option that, although not what you're after, is interesting nonetheless: DirectX 11.2 supports HLSL shader linking, adding support for precompiled HLSL functions that can be packaged into libraries and linked into shaders at runtime. This would allow you to build up your shader libraries to support more variations without the cost of runtime HLSL compile times.




#5177809 Domain vs Geometry Shader

Posted by spazzarama on 03 September 2014 - 02:51 AM

You should definitely try to calculate your normals etc. in the domain shader. This is the 3rd and final stage of the optional tessellation stages and is specifically used to calculate the final vertex position and data of the subdivided point. Because you are using the tessellation pipeline, the domain shader is going to be called no matter what, whereas the geometry shader stage is still optional and will incur additional cost (even for an empty shader).

 

Depending on the domain (tri or quad) you may need to use barycentric, bilinear or bicubic interpolation to determine the correct values.

 

You will want to implement backface culling and/or dynamic LoD within the hull shader.

 

Below is an example domain shader taken from Chapter 5: Applying Hardware Tessellation of my book Direct3D Rendering Cookbook. It uses bilinear interpolation and a combination of patch and constant data for the inputs:

// This domain shader applies control point weighting with bilinear interpolation using the SV_DomainLocation
[domain("quad")]
PixelShaderInput DS_Quads( HS_QuadPatchConstant constantData, const OutputPatch<DS_ControlPointInput, 4> patch, float2 uv : SV_DomainLocation )
{
    PixelShaderInput result = (PixelShaderInput)0;

    // Interpolate using bilerp
    float4 c[4];
    float3 p[4];
    [unroll]
    for(uint i=0;i<4;i++) {
        p[i] = patch[i].Position;
        c[i] = patch[i].Diffuse;
    }
    float3 position = Bilerp(p, uv);
    float2 UV = Bilerp(constantData.TextureUV, uv);
    float4 diffuse = Bilerp(c, uv);
    float3 normal = Bilerp(constantData.NormalW, uv);

    // Prepare pixel shader input:
    // Transform world position to view-projection
    result.PositionV = mul( float4(position,1), ViewProjection );
    result.Diffuse = diffuse;
    result.UV = UV;
    result.NormalW = normal;
    result.PositionW = position;
    
    return result;
}

And bilinear interpolation on float2, float3 and float4 properties for the simple quad domain:

//*********************************************************
// QUAD bilinear interpolation
float2 Bilerp(float2 v[4], float2 uv)
{
    // bilerp the float2 values
    float2 side1 = lerp( v[0], v[1], uv.x );
    float2 side2 = lerp( v[3], v[2], uv.x );
    float2 result = lerp( side1, side2, uv.y );
	
    return result;    
}

float3 Bilerp(float3 v[4], float2 uv)
{
    // bilerp the float3 values
    float3 side1 = lerp( v[0], v[1], uv.x );
    float3 side2 = lerp( v[3], v[2], uv.x );
    float3 result = lerp( side1, side2, uv.y );
	
    return result;    
}

float4 Bilerp(float4 v[4], float2 uv)
{
    // bilerp the float4 values
    float4 side1 = lerp( v[0], v[1], uv.x );
    float4 side2 = lerp( v[3], v[2], uv.x );
    float4 result = lerp( side1, side2, uv.y );
	
    return result;    
}


For tri domains you would use barycentric interpolation - something like the following:

//*********************************************************
// TRIANGLE interpolation (using Barycentric coordinates)
/*
    barycentric.xyz == uvw
    w=1-u-v
    P=w*A+u*B+v*C
  C ______________ B
    \.    w    . /
     \  .    .  / 
      \    P   /
       \u  . v/
        \  . /
         \ ./
          \/
          A
*/
float2 BarycentricInterpolate(float2 v0, float2 v1, float2 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

float3 BarycentricInterpolate(float3 v0, float3 v1, float3 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

float4 BarycentricInterpolate(float4 v0, float4 v1, float4 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

Good luck.





