Burnt_Fyr
Member Since 25 Aug 2009 - Offline - Last Active Today, 01:31 PM

Topics I've Started

xbox360 big button controllers for windows
01 December 2015 - 01:24 PM
Does anyone have any experience working with these devices in Windows? I'd like to try integrating them into lessons at my day job as a teacher, and I want to know whether they are a viable option. I've got the Xbox 360 Controller for Windows, which I can use via XInput, but these seem to use their own USB interface, so I assume there is some in-game code to handle routing inputs, on top of or outside the standard wireless controller interface. I'm looking for a link to an SDK, or really any info anyone has on these.
height at a point on a plane defined by 2 points.
15 March 2015 - 02:12 PM
First off... it's not my homework, it's the wife's! She's in landscape architecture and is having trouble understanding a question that was given with little explanation. In fact, the instructor of the course couldn't remember how to do it, so they had no explanation to offer.
The question is:
Given a point, q, on a plane, p, as well as a slope, s, along a vector, v, which lies on the projection of p onto the xy plane, find 3 other points.
p0 __x__ p1
 |       /
 y      / v
 |     /
p2 ___/__ p3

(I have the height at p1.)
So we started out by extending v from p1 until it intersected the line from p0 to p2. Since the points fall on a regular grid, we knew x = p0p1 = 50m, and used the magnitude of v when x = 50 to get y = 22.19m. The projection of v onto the xy plane was measured at 55m. Since we have the slope along v, a bit of Pythagoras gave us dz = 4.565m. So now I have a vector, q = <50, 22.19, 4.565>, that lies on the plane p, and the height at p1. How can I calculate the x and y components of the slope (the gradient?), so that I can find p0.z = p1.z + x*m, p3.z = p1.z + y*n, and p2.z = p1.z + x*m + y*n?
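If v is taken to be the direction of steepest slope on the plane (an assumption; the question doesn't say so explicitly), the per-axis components fall straight out of the vector math: the slope s along v splits into m = s*vhat.x and n = s*vhat.y. A minimal C++ sketch using the numbers above (the helper names `runLength`, `slopeAlong`, `gradX`, `gradY` are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

// Horizontal length of v's projection onto the xy plane.
double runLength(double vx, double vy) { return std::sqrt(vx*vx + vy*vy); }

// Slope along v = rise over horizontal run.
double slopeAlong(double vx, double vy, double dz) {
    return dz / runLength(vx, vy);
}

// Gradient components, assuming v is the steepest-ascent direction:
// m = s * vhat.x, n = s * vhat.y. Heights on the grid then follow from
// p0.z = p1.z + 50*m, p3.z = p1.z + 50*n, p2.z = p1.z + 50*m + 50*n
// (sign depending on which way the grid axes run relative to v).
double gradX(double vx, double vy, double dz) {
    double run = runLength(vx, vy);
    return (dz / run) * (vx / run);
}

double gradY(double vx, double vy, double dz) {
    double run = runLength(vx, vy);
    return (dz / run) * (vy / run);
}
```

With v = <50, 22.19, 4.565> the horizontal run comes out near the measured 55m, which is a useful sanity check on the construction.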
Deferred optimization
10 February 2015 - 01:18 PM
I've gotten deferred rendering working with MRTs in DirectX 9, though the performance so far is quite abysmal.
My G-buffer is a naive implementation, with no compression and a full 4 RTs (diffuse.rgb, normal.xyz, spec.rgb + pow, position.xyz). The G-buffer pass takes about 10% of the frame time, while lighting takes up the rest. It seems that I can double the object count with minimal effect, so I'm primarily looking at ways of improving lighting speed.
I've tried stenciling out point and spot lights, but this considerably increased frame times (about 19 -> 13 fps), and trying a double-sided stencil was even worse (8 fps). I think compressing the G-buffer to 3 RTs is doable (reconstructing position from depth plus screen position, compressing normals to 2 channels, etc.), but I'm worried that this will be wasted effort if it increases pixel shader complexity.
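On the 2-channel normal idea: one commonly used scheme is a Lambert azimuthal (sphere-map) encoding, which decodes with a handful of ALU ops and is exact up to precision, with a single singularity at exactly (0,0,-1). A C++ sketch of just the math (not your engine's code; the struct and function names are invented):

```cpp
#include <cassert>
#include <cmath>

struct N3 { float x, y, z; };   // unit normal
struct N2 { float u, v; };      // two stored channels, in [0,1]

// Lambert azimuthal equal-area encode: packs a unit normal into 2 values.
// Undefined only at n = (0,0,-1).
N2 encodeNormal(N3 n) {
    float f = std::sqrt(8.0f * n.z + 8.0f);
    return { n.x / f + 0.5f, n.y / f + 0.5f };
}

// Decode back to a unit normal.
N3 decodeNormal(N2 e) {
    float fx = e.u * 4.0f - 2.0f;
    float fy = e.v * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = std::sqrt(1.0f - f / 4.0f);
    return { fx * g, fy * g, 1.0f - f / 2.0f };
}
```

The decode cost is small enough that it usually pays for itself in bandwidth saved, but measuring on your hardware is the only real answer.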
Am I just hitting the wall on my laptop? What are some common tricks I can use to reduce the time required for the lighting passes?
I'm about 25% of the way through porting to DX11 and expect to see a large improvement from certain features available there (depth buffer reads, etc.).
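For what it's worth, the position-from-depth reconstruction is just similar triangles: scale the per-pixel view ray by the stored linear view-space depth. A minimal C++ sketch of the round trip, assuming a D3D-style projection with +z forward (`projectToNdc` and `reconstruct` are names invented for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Forward: project a view-space point to NDC x/y, keeping linear view z.
Vec3 projectToNdc(Vec3 p, float tanHalfFovY, float aspect) {
    return { p.x / (p.z * tanHalfFovY * aspect),
             p.y / (p.z * tanHalfFovY),
             p.z };
}

// Reverse: rebuild the view-space position from screen position plus
// linear view-space depth. This is all a position-less G-buffer needs,
// freeing the position.xyz render target.
Vec3 reconstruct(float ndcX, float ndcY, float viewZ,
                 float tanHalfFovY, float aspect) {
    return { ndcX * tanHalfFovY * aspect * viewZ,
             ndcY * tanHalfFovY * viewZ,
             viewZ };
}
```

In a shader the `ndc * tanHalfFov` factors are typically baked into a per-pixel frustum ray computed in the vertex shader, so the pixel shader cost is a single multiply.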
c++ Heap corruption w/ std::vector
20 January 2015 - 10:07 AM
Thanks for taking the time to read this. I'm using a mesh object that contains std::vectors for vertex and index data.
class Mesh2
{
    // snipped for brevity
    std::vector<unsigned short> indices;
};

void Mesh2::AddFace(unsigned short _i1, unsigned short _i2, unsigned short _i3)
{
    indices.push_back(_i1);
    indices.push_back(_i2);
    indices.push_back(_i3);
}
The above code resides in a static lib that contains all rendering code, which is linked to the main executable. Below are the calls in the .exe.
// In main()
Mesh2 cube;
GenerateCube(&cube, true, true);
and the function GenerateCube
void GenerateCube(Mesh2* mesh, bool bGenNorms, bool bGenTangents)
{
    // at some point
    mesh->AddFace(0, 1, 2);
    // .. and so on
}
The issue is that as soon as mesh.indices has to resize past 10 (its initial capacity) I get a heap corruption. I'm not sure what must be done to rectify this. If anyone has a good link or can spare 5 minutes for a thorough explanation it would be much appreciated. Everything I've pulled up on Google so far has to do with crossing DLL boundaries, which I'm not doing, but it might be happening behind the scenes in the std::vector. I'm heading back to Google for now.
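One way to narrow it down: the same push_back pattern compiled in a single translation unit should be rock solid, so if a standalone repro like the sketch below runs clean, the likely suspect is a build-settings mismatch between the static lib and the exe (mixing /MD and /MT runtime libraries, Debug vs Release, or different _ITERATOR_DEBUG_LEVEL values), which makes the two sides disagree about std::vector's internal layout. That's a hedged guess based on the symptoms, not a diagnosis:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal single-translation-unit stand-in for Mesh2::AddFace.
// Growing well past any small initial capacity must not corrupt the heap
// when everything is compiled with consistent settings.
class MeshRepro
{
public:
    void AddFace(unsigned short i1, unsigned short i2, unsigned short i3)
    {
        indices.push_back(i1);
        indices.push_back(i2);
        indices.push_back(i3);
    }
    std::size_t IndexCount() const { return indices.size(); }

private:
    std::vector<unsigned short> indices;
};
```

If this version works but the lib/exe split crashes, comparing the two projects' C/C++ -> Code Generation settings side by side is usually the fastest route to the culprit.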
Shadowmapping woes
07 January 2015 - 09:03 PM
Greetings all, happy new year. I've been working on shadow mapping and have run into an issue that I can't seem to figure out. My scene is simple: a single light and a few cubes in a Cornell-box sort of setup. The shadows are working, but the cubes are coming out black. If this were a bias issue I would expect it to affect the entire scene and not just the cubes, so I'm not sure where to begin without access to the vertex debugging features in PIX.
EDIT: attached test.png (169.85 KB)
// Shadow map generation
struct VS_INPUT
{
    float3 Position : POSITION0;
    float3 Normal   : NORMAL0;
    float2 Texcoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 PositionCS : POSITION0;
    float2 TexCoord   : TEXCOORD0;
    float2 Depth      : TEXCOORD1;
};

VS_OUTPUT VSMain(VS_INPUT input)
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    float4x4 WVmatrix  = mul(Wmatrix, Vmatrix);
    float4x4 WVPmatrix = mul(WVmatrix, Pmatrix);

    // Transform position to clip space
    output.PositionCS = mul(float4(input.Position, 1), WVPmatrix);

    // Output texcoords
    output.TexCoord = input.Texcoord;
    output.Depth = output.PositionCS.zw;
    return output;
}

struct PS_INPUT
{
    float4 PositionCS : POSITION0;
    float2 TexCoord   : TEXCOORD0; // NECESSARY FOR TRANSPARENCY
    float2 Depth      : TEXCOORD1;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

float4 PSMain(PS_INPUT input) : COLOR0
{
    float4 diffuse = tex2D(tex0, input.TexCoord.xy);
    clip(diffuse.a - 0.15f);
    return input.Depth.x / input.Depth.y; // z / w; depth in [0, 1] range (NDC space)
}

// Shadow calculation
float CalcShadowFactor(float4 projTexC)
{
    // Complete projection by doing division by w.
    projTexC.xy /= projTexC.w;

    // Points outside the light volume are in shadow.
    if (projTexC.x < -1.0f || projTexC.x > 1.0f ||
        projTexC.y < -1.0f || projTexC.y > 1.0f ||
        projTexC.z < 0.0f)
        return 0.0f;

    // Transform from NDC space to texture space.
    projTexC.x = +0.5f * projTexC.x + 0.5f;
    projTexC.y = -0.5f * projTexC.y + 0.5f;

    // Depth in NDC space.
    float depth = projTexC.z / projTexC.w;

    // 2x2 percentage closer filter.
    // Sample shadow map to get nearest depth to light.
    float s0 = tex2D(tex1, projTexC.xy).r;
    float s1 = tex2D(tex1, projTexC.xy + float2(ShadowMap_dx, 0)).r;
    float s2 = tex2D(tex1, projTexC.xy + float2(0, ShadowMap_dx)).r;
    float s3 = tex2D(tex1, projTexC.xy + float2(ShadowMap_dx, ShadowMap_dx)).r;

    // Is the pixel depth <= shadow map value?
    float result0 = depth <= s0 + ShadowEpsilon;
    float result1 = depth <= s1 + ShadowEpsilon;
    float result2 = depth <= s2 + ShadowEpsilon;
    float result3 = depth <= s3 + ShadowEpsilon;

    // Transform to texel space.
    float2 texelPos = ShadowMapSize * projTexC.xy;

    // Determine the interpolation amounts.
    float2 t = frac(texelPos);

    // Interpolate results.
    return lerp(lerp(result0, result1, t.x),
                lerp(result2, result3, t.x), t.y);
}
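To sanity-check the filter logic off-GPU, the same 2x2 PCF arithmetic can be run against a plain float array. A C++ sketch (it ignores texel-center addressing and border handling, so it only mirrors the comparison-and-lerp math, not tex2D's exact sampling):

```cpp
#include <cassert>

// One binary shadow test: 1.0 if the pixel is lit at this texel, else 0.0.
static float lit(const float* shadowMap, int size, int x, int y,
                 float depth, float epsilon)
{
    return depth <= shadowMap[y * size + x] + epsilon ? 1.0f : 0.0f;
}

// 2x2 PCF: four binary depth comparisons blended by the fractional
// texel position, matching the lerp(lerp(...)) pattern in the shader.
float pcf2x2(const float* shadowMap, int size, float u, float v,
             float depth, float epsilon)
{
    float texelX = u * size, texelY = v * size;
    int x = (int)texelX, y = (int)texelY;
    float tx = texelX - x, ty = texelY - y;     // frac(texelPos)

    float r0 = lit(shadowMap, size, x,     y,     depth, epsilon);
    float r1 = lit(shadowMap, size, x + 1, y,     depth, epsilon);
    float r2 = lit(shadowMap, size, x,     y + 1, depth, epsilon);
    float r3 = lit(shadowMap, size, x + 1, y + 1, depth, epsilon);

    float top = r0 + (r1 - r0) * tx;            // lerp across x
    float bot = r2 + (r3 - r2) * tx;
    return top + (bot - top) * ty;              // lerp across y
}
```

Feeding hand-picked depths through a harness like this makes it easy to separate "the PCF math is wrong" from "the inputs (bias, projection, sampler state) are wrong", which is handy without PIX.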