
ReaperSMS

Member
  • Content Count

    68
  • Joined

  • Last visited

Community Reputation

1546 Excellent

1 Follower

About ReaperSMS

  • Rank
    Member

Personal Information

  • Interests
    Programming

  1. ReaperSMS

    Projection Offset Problem

    Because that is what makes a perspective projection have perspective. The division is what makes things shrink as they move further from the camera.
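    A tiny C++ illustration of the divide doing that shrinking (a sketch, not from the thread): for a pinhole camera at the origin, the projected size is just the object's size divided by its depth.

    #include <cstdio>

    // Projected (screen-space) half-width of an object of half-width w at depth z,
    // for a pinhole camera at the origin: x_screen = x / z.
    float projected_size(float w, float z) { return w / z; }

    int main() {
        const float w = 1.0f; // one-unit-wide object
        for (float z = 1.0f; z <= 16.0f; z *= 2.0f)
            printf("depth %5.1f -> apparent size %.3f\n", w, projected_size(w, z));
        return 0;
    }

    Each doubling of depth halves the apparent size, which is exactly what the divide buys you.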
  2. ReaperSMS

    Projection Offset Problem

    Taking the example case of Far = 10, Near = 1, just dividing by Far - Near would put points at the far plane at 10/9 and points at the near plane at 1/9. Subtracting Near / (Far - Near) changes that so that points on the far plane become 1 and points on the near plane become 0. The scale by Far is there to counteract the perspective divide.
  3. ReaperSMS

    Projection Offset Problem

    The intended result is to transform the coordinate such that the range [Near, Far] maps to [0, 1], but after the perspective divide.

    Ignoring the divide to start with, we begin by translating by -Near, so that Near maps to 0:

    Zout = Zin - Near

    Now, in the given case, Z values at the near plane become 0 and Z values at the far plane become 9. We rescale by 1/(Far - Near) to bring that into the range [0, 1]:

    Zout = (Zin - Near) / (Far - Near)

    To make this easier to calculate with a matrix, we want it in the form A * Z + D, so we distribute and rearrange things:

    Zout = Zin * 1/(Far - Near) - Near / (Far - Near)

    If it is an orthographic projection, we're done. If it is a perspective projection, we must take into account the divide by Zin that will happen:

    Zclip = Zin * 1/(Far - Near) - Near / (Far - Near)
    Zout = Zclip / Zin

    For Zin = Near, Zclip is 0.0 and nothing would change, but for Zin = Far we would get a result of:

    Zclip = Far * 1/(Far - Near) - Near / (Far - Near) = 10/9 - 1/9 = 9/9 = 1
    Zout = Zclip / Zin = Zclip / Far = 1/10

    To get a Zout of 1, we have to scale things by Far, which gives the correct result of Zin = Near -> 0.0, Zin = Far -> 1.0. Distributing it across:

    Zclip = Zin * Far / (Far - Near) - (Near * Far) / (Far - Near)
    Zout = Zclip / Zin
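    A quick C++ sanity check of that mapping (a sketch, not from the original thread), using the Near = 1, Far = 10 example:

    #include <cstdio>

    int main() {
        const float Near = 1.0f, Far = 10.0f;

        // Z row of the perspective projection as derived above:
        //   Zclip = Zin * A + B,   Zout = Zclip / Zin
        const float A = Far / (Far - Near);
        const float B = -(Near * Far) / (Far - Near);

        const float tests[] = { Near, 5.5f, Far };
        for (float Zin : tests) {
            float Zclip = Zin * A + B;
            float Zout  = Zclip / Zin;   // the perspective divide
            printf("Zin = %5.2f -> Zout = %.4f\n", Zin, Zout);
        }
        // Expect 0 at the near plane, 1 at the far plane,
        // and a non-linear distribution in between.
        return 0;
    }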
  4. ReaperSMS

    Painter's Algorithm

    The algorithm is literally "render things in depth order", but it doesn't work that order out for you; you have to provide it. Things get complicated when the objects start intersecting, and more so when they are concave, but there are plenty of production particle systems that can boil their sorting down to a simple qsort() on Z. These days it is mostly applicable to translucent rendering, as opaque rendering can rely on z-buffering to get correct results without regard to draw order.
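    As an illustration, the sort-on-Z idea boils down to something like the following C++ sketch (a made-up Particle struct, not any particular engine's code):

    #include <algorithm>
    #include <vector>

    // Hypothetical particle record; viewZ is the distance along the camera's forward axis.
    struct Particle {
        float viewZ;
        // ... position, color, etc.
    };

    // Painter's algorithm for translucent particles: draw back-to-front,
    // so sort by decreasing view-space depth before submitting the draw.
    void sort_back_to_front(std::vector<Particle>& particles) {
        std::sort(particles.begin(), particles.end(),
                  [](const Particle& a, const Particle& b) { return a.viewZ > b.viewZ; });
    }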
  5. Welcome to the wonderful world of linear transformations. For the usual weighted skinning approach, this is indeed a valid way to do it.

    The short version is that the matrices in this case are linear transforms, which have the helpful properties that, for any particular linear function F(), values u, v, and scalar c, the following hold true:

    F(c * u) = c * F(u)
    F(u + v) = F(u) + F(v)

    Assuming matrices bone0, bone1, bone2, bone3, weights weight0..3, and shrinking down to only looking at the x value of the result:

    float result = 0.0;
    result += (bone0 * pos).x * weight0;
    result += (bone1 * pos).x * weight1;
    result += (bone2 * pos).x * weight2;
    result += (bone3 * pos).x * weight3;

    (bone0 * pos).x expands out to something like (bone0._11 * pos.x + bone0._21 * pos.y + bone0._31 * pos.z + bone0._41), and similar for the rest (apologies for playing very fast and loose with column vs row major; it doesn't particularly matter for the linearity of things):

    result += (bone0._11 * pos.x + bone0._21 * pos.y + bone0._31 * pos.z + bone0._41) * weight0;
    result += (bone1._11 * pos.x + bone1._21 * pos.y + bone1._31 * pos.z + bone1._41) * weight1;
    result += (bone2._11 * pos.x + bone2._21 * pos.y + bone2._31 * pos.z + bone2._41) * weight2;
    result += (bone3._11 * pos.x + bone3._21 * pos.y + bone3._31 * pos.z + bone3._41) * weight3;

    If you distribute the weight# multiplies through, roll all the sums together, and then pull pos.x, pos.y, and pos.z out accordingly, you get something like:

    result = pos.x * (bone0._11 * weight0 + bone1._11 * weight1 + bone2._11 * weight2 + bone3._11 * weight3)
           + pos.y * (etc...)
           + pos.z * (etc...)

    which is exactly the second formulation.
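    A small standalone C++ sketch of that equivalence (an assumed 3x4 row-major matrix layout, not the post's actual math library): blending the transformed positions and transforming by the blended matrix produce the same result.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Row-major 3x4 affine matrix: rotation/scale in m[r][0..2], translation in m[r][3].
    struct Mat34 { float m[3][4]; };

    Vec3 transform(const Mat34& b, const Vec3& p) {
        return {
            b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
            b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
            b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3],
        };
    }

    int main() {
        Mat34 bone0 = {{{1,0,0, 2},{0,1,0, 0},{0,0,1, 0}}};   // translate +2 along x
        Mat34 bone1 = {{{0,-1,0, 0},{1,0,0, 0},{0,0,1, 1}}};  // rotate 90 degrees about z, lift +1 along z
        float w0 = 0.25f, w1 = 0.75f;
        Vec3  pos = {1, 2, 3};

        // Formulation 1: transform by each bone, then blend the results.
        Vec3 a = transform(bone0, pos), b = transform(bone1, pos);
        Vec3 blendedPos = { a.x*w0 + b.x*w1, a.y*w0 + b.y*w1, a.z*w0 + b.z*w1 };

        // Formulation 2: blend the matrices, then transform once.
        Mat34 blendedMat;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 4; ++c)
                blendedMat.m[r][c] = bone0.m[r][c]*w0 + bone1.m[r][c]*w1;
        Vec3 viaMat = transform(blendedMat, pos);

        printf("transform-then-blend: %g %g %g\n", blendedPos.x, blendedPos.y, blendedPos.z);
        printf("blend-then-transform: %g %g %g\n", viaMat.x,     viaMat.y,     viaMat.z);
        return 0;
    }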
  6. It looks like your segment intersection test is actually an infinite line test. It will only return false if they are parallel or coincident...
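    For comparison, a minimal 2D segment-vs-segment test (hypothetical names, not the poster's code): solve for the parametric positions t and u along each segment and require both to land in [0, 1].

    // 2D segment intersection: segments are p0->p1 and q0->q1.
    struct Vec2 { float x, y; };

    static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

    bool segments_intersect(Vec2 p0, Vec2 p1, Vec2 q0, Vec2 q1) {
        Vec2 r = { p1.x - p0.x, p1.y - p0.y };
        Vec2 s = { q1.x - q0.x, q1.y - q0.y };
        float denom = cross2(r, s);
        if (denom == 0.0f)
            return false;                   // parallel or collinear (overlap not handled here)

        Vec2 qp = { q0.x - p0.x, q0.y - p0.y };
        float t = cross2(qp, s) / denom;    // position along p0->p1
        float u = cross2(qp, r) / denom;    // position along q0->q1

        // Unlike an infinite-line test, both parameters must land inside the segments.
        return t >= 0.0f && t <= 1.0f && u >= 0.0f && u <= 1.0f;
    }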
    That all looks fine, assuming diffuse and ambient are float4s, which they almost certainly should be if you want lights that aren't just white.
  8. ReaperSMS

    C++ Self-Evaluation Metrics

    Assuming it's for a deep magic code ninja type position, ask why, and likely be satisfied with a coherent answer. If it's not a position that involves staring at hex dumps for bugs, it probably doesn't even come up... unless someone claims they have a better grasp of C++ than Stroustrup or Sutter.

    Or, on bad days, be very relieved, as it means I don't have to dig that bit of the standard out of cold storage.
  9. ReaperSMS

    C++ Self-Evaluation Metrics

    Anything over an 8 means one of two things: they've either written a solid, production-ready compiler frontend and runtime support library, or they're a 4. A 7-8 from someone with a background that matches means "I've seen horrible things, and know how to avoid/diagnose them, but there are still fell and terrible things lurking in the dark corners of the earth".

    An approach we used from time to time, at least for people that claim to be Really Good and Technical with it, is to just have them start drawing out the memory layout of an instance of a class object, working up from the trivial case through to the virtual diamond one, and see where the floundering starts. Bonus points for knowing how dynamic_cast and RTTI work (and a slight bit of walking through the process usually serves as a good reminder of why they aren't exactly free).
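    As a rough illustration of the kind of layout exercise meant here (a hedged sketch; exact sizes and layouts are ABI-specific):

    #include <cstdio>

    struct Base                  { virtual ~Base() {} int b; };
    struct Left    : virtual Base { int l; };
    struct Right   : virtual Base { int r; };
    struct Diamond : Left, Right  { int d; };

    int main()
    {
        // Each branch drags in vptr/virtual-base machinery, so the object is
        // noticeably bigger than the sum of its data members.
        printf( "sizeof(Base)    = %zu\n", sizeof(Base) );
        printf( "sizeof(Left)    = %zu\n", sizeof(Left) );
        printf( "sizeof(Diamond) = %zu\n", sizeof(Diamond) );

        Diamond d;
        Base* base = &d;

        // dynamic_cast consults RTTI to locate the Diamond subobject from the Base*;
        // walking through how that lookup works is the "not exactly free" part.
        Diamond* back = dynamic_cast<Diamond*>( base );
        printf( "round trip ok: %d\n", back == &d );
        return 0;
    }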
    It's a 3D scene, but with the view direction restricted to slightly off-axis, and camera motion restricted to a 2D plane.

    The main area of play is about 400 units in front of the camera, with some near-field objects about 200 units past that that can accept shadows. Tons and tons of background objects lie far beyond that; the far plane is set to around 100,000. It isn't particularly ideal.

    That soup gets thrown at a deferred lighting renderer, which is all fine and great up until it needs to light things that don't write depth.
    I was afraid of that.

    The divide by pi is in there on the real code side; I left out some of the normalization to get down to just the SH bits. The lighting model for this project is ridiculously ad-hoc, as we didn't get a real PBS approach set up in the engine until a few months into production. Another project is using a much more well-behaved setup, but it has the advantage of still being in preproduction.

    For this project the scenes are sparse space-scapes, with a strong directional light, an absurd number of relatively small-radius point lights for effects, and only about three layers of objects (ships, foreground, and background). I suppose a brute force iteration over the light list might do the job well enough, as there might not be enough of these around to justify a fancy approach.
  12. We have a game here using a straightforward deferred lighting approach, but we'd like to get some lighting on our translucent objects. In an attempt to avoid recreating all the horrible things that came from shader combinations for every light combination, I've been trying to implement something similar to the technique Bungie described in their presentation on Destiny's lighting.

    The idea is to collapse the light environment at various probe points into a spherical harmonic representation that the shader would then use to compute lighting. Currently it's doing all of this on the CPU, but I've run into what seems to be a fundamental issue with projecting a directional light into SH.

    After digging through all of the fundamental papers, everything seems to agree that the way to project a directional light into SH, convolved with the cosine response, is:

    void project_directional( float3* SH, float3 color, float3 dir )
    {
        SH[0] =  0.282095f * color * pi;
        SH[1] = -0.488603f * color * dir.y * (pi * 2/3);
        SH[2] =  0.488603f * color * dir.z * (pi * 2/3);
        SH[3] = -0.488603f * color * dir.x * (pi * 2/3);
    }

    float3 eval_normal( float3* SH, float3 dir )
    {
        float3 result = 0;

        result  = SH[0] *  0.282095f;
        result += SH[1] * -0.488603f * dir.y;
        result += SH[2] *  0.488603f * dir.z;
        result += SH[3] * -0.488603f * dir.x;
        return result;
    }

    // result is then scaled by diffuse

    There's a normalization term or two, but the problem I've been running into, and that I haven't seen any decent way to avoid, is that ambient term in SH[0]. If I plug in a simple light pointing down Z, normals pointing directly at it or directly away from it behave reasonably, but a normal pointing down, say, the X axis will always be lit by at least 1/4 of the light color. It has produced a directional light that generates significant amounts of light at 90 degrees off-axis.

    I'm not seeing how this could ever behave differently. I can get vaguely reasonable results if I ignore the ambient term while merging diffuse lights in, but that breaks down the moment I try summing in two lights pointing in opposite directions. Expanding out to the 9-term quadratic form does not help much either.

    I get the feeling I've missed some fundamental thing to trim down the off-axis directional light response, but I'll be damned if I can see where it would come from. Is this just a basic artifact of using a single light as a test case? Is this likely to behave better by keeping the main directional lights out, and just using the SH set to collapse point lights in as sphere lights or attenuated directionals? Have I just royally screwed up my understanding of how to project a directional light into SH?

    The usual pile of papers and articles from SCEE, Tom Forsyth, Sebastien Lagarde, etc. have not helped. Someone had a random shadertoy that looked like it worked better in posted screenshots, but actually running it produces results more like what I've seen.
  13. The sites are the ones in the wrong. They're probably implemented in JavaScript, which I believe treats all numbers as floats, and thus are losing precision.

    As an example, your third number, punched into Windows calc, as the first step would be:

    22236810928128038 % 62 = 42, which should be 'g'.

    If we subtract 42 out of there, we get 22236810928127996, which on the second site properly ends up with a final digit of '0'. If you give it 22236810928127997, it still ends in '0', and if you give it 22236810928127998, it jumps to '4'. Double-precision floats only give about 16 digits of precision, so feeding it a 17-digit number means it starts rounding in units of 4.

    The entire idea seems a bit odd, however, as for this to be reasonable you have to convert before encrypting, and need to know exactly where numbers live in the output to parse them back properly. It seems like it would be better to encrypt directly from binary, and base-64 convert the output if you need to send it over a restricted channel.
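    A quick C++ reproduction of that rounding (an illustrative sketch; a JavaScript Number is an IEEE-754 double, so it behaves the same way):

    #include <cstdio>
    #include <cmath>
    #include <cstdint>

    int main() {
        const uint64_t exact = 22236810928128038ULL;
        const double   d     = static_cast<double>(exact);  // what a JS Number would store

        // At this magnitude a double can only represent multiples of 4,
        // so the value rounds to a neighboring multiple of 4 and the modulo shifts.
        printf("exact integer : %llu\n", (unsigned long long)exact);
        printf("as a double   : %.0f\n", d);
        printf("exact %% 62    = %llu\n", (unsigned long long)(exact % 62));
        printf("double mod 62 = %.0f\n", std::fmod(d, 62.0));
        return 0;
    }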
  14. Bleh, I see GL does the same thing. I suppose I shall have to put up with being terribly disappointed in the PC APIs again.
  15. So, I'm trying to use SV_InstanceID as an extra input to a shader, to pick from a small set of vertex colors in code.

    It seems to completely ignore the last argument of DrawIndexedInstanced(), and start at 0 per draw call. This seems less than useful, as it would make it impossible to transparently split up an instanced draw call, and defeat a lot of the purpose of having the system value at all.

    How would one be expected to use SV_InstanceID properly in this case? The vertex shader looks about like so:

    struct VertexInput
    {
        float4 position   : POSITION;
        uint   instanceid : SV_InstanceID;
    };

    struct VertexOutput
    {
        float4 projPos : SV_Position;
        float4 color   : COLOR0;
    };

    VertexOutput vs_main( const VertexInput input )
    {
        VertexOutput output = (VertexOutput)0;

        output.projPos = mul( float4( input.position.xyz, 1.0f ), g_ViewProjection );
        if ( input.instanceid == 0 )
        {
            output.color = float4(1,0,0,1);
        }
        else if ( input.instanceid == 1 )
        {
            output.color = float4(0,1,0,1);
        }
        else
        {
            output.color = float4(0.5,0.5,0.5,1);
        }
        return output;
    }

    This results in it always picking red. If I instead dig a color out of a separate vertex buffer, via D3D11_INPUT_PER_INSTANCE_DATA, it works as expected.

    How do I make d3d useful?
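    For reference, the per-instance-data workaround hinges on an input-layout element like the following (a sketch with assumed slot, format, and semantic choices, not the project's actual layout). Unlike SV_InstanceID, the StartInstanceLocation argument of DrawIndexedInstanced() is honored when fetching per-instance vertex data.

    #include <d3d11.h>

    // Per-instance color stream: one float4 per instance, advanced once per instance drawn.
    // Vertex buffer slot 1 and the COLOR semantic are assumptions for this sketch.
    const D3D11_INPUT_ELEMENT_DESC layout[] = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,
          D3D11_INPUT_PER_VERTEX_DATA,   0 },
        { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
          D3D11_INPUT_PER_INSTANCE_DATA, 1 },   // InstanceDataStepRate = 1
    };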