jbarcz1

  1. Have you tried interpolating the world-space position, using ddx/ddy on that, and then taking the cross product of the result? This should be all you need to do.
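
To make the operand order and the normalization step explicit, here is a small C++ sketch of that idea. In an actual pixel shader, dpdx and dpdy would be ddx(worldPos) and ddy(worldPos); the Vec3 type and function names here are stand-ins invented for the example, not code from the thread.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

// The screen-space derivatives of the interpolated world position span the
// surface at this pixel, so their cross product gives a (faceted) normal.
Vec3 faceNormalFromDerivatives(const Vec3& dpdx, const Vec3& dpdy)
{
    return normalize(cross(dpdx, dpdy));
}
```
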
  2. jbarcz1

    John Carmack on Ray Tracing

    Bilinear filtering in software (from the linked thread): 78 Mtexels/s. Bilinear filtering in hardware from two generations ago (ATI X1900): around 10,000 Mtexels/s. That's more than two orders of magnitude.
  3. jbarcz1

    batching? how?

    Quote: Original post by eschan01
    Quote: Original post by silvermace
    What you describe is not batching in the traditional sense. When you batch, it is usually simply to avoid the state CHANGES; this means that calling your API's low-level draw command multiple times is okay.
    Although, even with no state changes in between each draw call, I'm seeing a significant performance hit when calling D3D DrawPrimitive a few hundred times per frame (maybe it's just a D3D-specific issue).

    How are you testing this, and what is your performance 'hit' relative to? Did you try drawing only the first triangle in each mesh to rule out vertex bottlenecks, or are you just drawing 400 copies of the mesh with the same state? If you're drawing large meshes with DrawPrimitive, instead of DrawIndexedPrimitive with optimized index buffers, then you might very well have a vertex throughput problem.
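
To make that suggested test concrete, here is a minimal D3D9-style C++ sketch, not taken from the thread: the Mesh struct and all variable names are assumptions for illustration. It draws either the full meshes or a single triangle per call, so per-call (CPU/driver) overhead can be separated from vertex throughput.

```cpp
#include <d3d9.h>
#include <vector>

// Hypothetical mesh record for this sketch; field names are assumptions.
struct Mesh {
    IDirect3DVertexBuffer9* vb;
    IDirect3DIndexBuffer9*  ib;
    UINT vertexStride;
    UINT vertexCount;
    UINT triangleCount;
};

// Draw every mesh either in full or as one triangle per call. If frame time
// barely changes with fullMeshes == false, the cost is per-call overhead;
// if it drops a lot, vertex processing is the bottleneck.
void DrawCallOverheadTest(IDirect3DDevice9* device,
                          const std::vector<Mesh>& meshes,
                          bool fullMeshes)
{
    for (size_t i = 0; i < meshes.size(); ++i)
    {
        const Mesh& m = meshes[i];
        device->SetStreamSource(0, m.vb, 0, m.vertexStride);
        device->SetIndices(m.ib);

        UINT primCount = fullMeshes ? m.triangleCount : 1;
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                     0,              // BaseVertexIndex
                                     0,              // MinVertexIndex
                                     m.vertexCount,  // NumVertices
                                     0,              // StartIndex
                                     primCount);     // PrimitiveCount
    }
}
```
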
  4. jbarcz1

    scattering in pixel shader

    Quote: Original post by jollyjeffers
    Whilst I have no solid proof, I have heard more than enough developers claim that VTF isn't exactly the fastest route through a GPU, so you may want to keep a close eye on your performance if you implement your algorithm in this way. hth Jack

    VTF (vertex texture fetch) was apparently pretty bad on the earlier NVIDIA chips. The current crop of ATI chips (R600-based) has full support for vertex texturing, and it's extremely efficient because of the unified architecture. Presumably the G80 has also gotten better at it...
  5. How are you generating your rays? If you have an area light source, you should still be able to get pretty good ray coherence if you're careful about the order in which you generate your sample points. You'll get better results if you make sure that all the rays in your packet are aimed at nearby points on the light than you would if you just bundled them at random.
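
As one possible illustration of "careful ordering", here is a self-contained C++ sketch, with the Vec3 type and function names invented for the example: it stratifies a rectangular area light and emits sample points in 2x2 tiles, so that every consecutive group of four shadow rays aims at neighbouring points on the light and stays coherent.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Point on the light: corner + u*edgeU + v*edgeV, with u, v in [0, 1).
Vec3 lightPoint(const Vec3& corner, const Vec3& edgeU, const Vec3& edgeV,
                float u, float v)
{
    return Vec3{ corner.x + u * edgeU.x + v * edgeV.x,
                 corner.y + u * edgeU.y + v * edgeV.y,
                 corner.z + u * edgeU.z + v * edgeV.z };
}

// Fills 'targets' so that every consecutive group of 4 points comes from one
// 2x2 tile of the stratified grid; a 4-wide ray packet built from one group
// then has very similar directions and traverses similar acceleration-
// structure nodes. gridRes is assumed to be even.
void coherentLightSamples(const Vec3& corner, const Vec3& edgeU, const Vec3& edgeV,
                          int gridRes, std::vector<Vec3>& targets)
{
    for (int ty = 0; ty < gridRes; ty += 2)
        for (int tx = 0; tx < gridRes; tx += 2)
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                {
                    float u = (tx + dx + 0.5f) / gridRes;
                    float v = (ty + dy + 0.5f) / gridRes;
                    targets.push_back(lightPoint(corner, edgeU, edgeV, u, v));
                }
}
```
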
  6. Definitely use instancing, but you should also keep doing your culling on top of that: i.e. cull the instances, and then place the transforms for your visible instances in a dynamic VB. If you need to run on hardware without instancing support, you can fake it by packing multiple copies of your cars into the vertex/index buffer (with an instance ID field added to distinguish them) and shoving their transforms into an array of shader constants.
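
A rough C++/D3D9 sketch of that constant-array fallback might look like the following. The Instance struct, the batch size, the register layout, and the frustum-test callback are all assumptions made for illustration, not engine code from the thread.

```cpp
#include <d3d9.h>
#include <vector>

// Assumed per-car data: a world matrix plus a bounding sphere for culling.
struct Instance {
    float world[4][4];      // row-major 4x4 transform, uploaded as 4 float4s
    float boundsCenter[3];
    float boundsRadius;
};

static const UINT kBatchSize     = 16;  // copies of the car packed into the VB/IB
static const UINT kFirstConstant = 16;  // c16..c79 hold the 16 matrices

void DrawCars(IDirect3DDevice9* dev,
              const std::vector<Instance>& all,
              UINT vertsPerCar, UINT trisPerCar,
              bool (*isVisible)(const Instance&))   // caller-supplied frustum test
{
    // Cull first, then batch the survivors.
    std::vector<const Instance*> vis;
    for (size_t i = 0; i < all.size(); ++i)
        if (isVisible(all[i]))
            vis.push_back(&all[i]);

    for (size_t base = 0; base < vis.size(); base += kBatchSize)
    {
        UINT count = (UINT)(vis.size() - base);
        if (count > kBatchSize) count = kBatchSize;

        // One 4x4 matrix (four float4 registers) per visible instance.
        for (UINT j = 0; j < count; ++j)
            dev->SetVertexShaderConstantF(kFirstConstant + j * 4,
                                          &vis[base + j]->world[0][0], 4);

        // The VB/IB hold kBatchSize copies of the car, each vertex carrying
        // its copy's instance ID; the vertex shader uses that ID to index the
        // constant array. Only the first 'count' copies are drawn here.
        dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                  vertsPerCar * count,   // NumVertices
                                  0,                     // StartIndex
                                  trisPerCar * count);   // PrimitiveCount
    }
}
```
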
  7. Quote: Original post by Sneftel
     Quote: 3. Global Illumination or Lighting in General
     Pat Hanrahan at Stanford and Ravi Ramamoorthi at Columbia. For slower rendering, Henrik Jensen at UCSD. A lot of the researchers in this area are in the private sector.

     Kavita Bala at Cornell has also been doing some interesting GI work lately.
  8. jbarcz1

    Few Questions on Photon Tracing

    Final gathering means that instead of doing a radiance estimate at your shading point, you trace rays in the hemisphere above your shading point and gather photons from the diffuse surfaces that those rays hit. With final gathering, you get noisy Monte Carlo-like lighting instead of the blotchy appearance that you get when you gather photons directly, but the noise is reduced as you increase the number of sample rays, and it will eventually converge to the nice smooth result that you want. One important tidbit that I've found is that it's important to super-sample your pixels when doing final gathering. So, instead of shooting one eye ray and 8000 gather rays, it's better to shoot, say, 8 eye rays with 1000 gather rays apiece. Shooting multiple rays per pixel helps to average down the noise.
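
A minimal sketch of that gather loop in C++ is shown below. The Vec3/Ray/Hit types and the three hooks (trace, cosineSampleHemisphere, radianceEstimate) are assumed to be provided by the surrounding renderer; none of them come from the original post.

```cpp
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 position, normal; bool valid; };

// Assumed renderer hooks, provided elsewhere.
Hit  trace(const Ray& r);                              // nearest-hit ray cast
Vec3 cosineSampleHemisphere(const Vec3& n);            // cosine-weighted direction about n
Vec3 radianceEstimate(const Vec3& p, const Vec3& n);   // photon-map lookup at p

// Indirect light at one shading point: rather than looking up the photon map
// at 'hit' itself, shoot gather rays and do the radiance estimate where they
// land. Averaging over more gather rays reduces the Monte Carlo noise.
Vec3 finalGather(const Hit& hit, int gatherRays)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < gatherRays; ++i)
    {
        Ray gatherRay = { hit.position, cosineSampleHemisphere(hit.normal) };
        Hit gh = trace(gatherRay);
        if (gh.valid)
        {
            Vec3 L = radianceEstimate(gh.position, gh.normal);
            sum.x += L.x; sum.y += L.y; sum.z += L.z;
        }
    }
    // With cosine-weighted sampling, the cosine/pi factors of a diffuse BRDF
    // cancel against the pdf, so the caller only needs to multiply this
    // average by the surface's diffuse reflectance.
    float inv = 1.0f / (float)gatherRays;
    return Vec3{ sum.x * inv, sum.y * inv, sum.z * inv };
}
// Per the post, spend the budget as several eye rays per pixel, calling
// finalGather() with a smaller gatherRays for each, then average the results.
```
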
  9. jbarcz1

    Few Questions on Photon Tracing

    1. You can do that, or you can choose the photon direction with probability proportional to the dot product, and not scale them. Doing the latter will let you get away with using fewer photons while still getting more or less the same answer (in the limit).
    2. If you kill the photon with a probability proportional to the diffuse reflectance, you don't need to scale it at all: you can just store it with its original power, kill it probabilistically, and then, if it survives, just send it off. You may still want to scale it to take color into account, though.
    3. You definitely want to be finding the k nearest photons. If you average the photons in a single cell but don't take neighboring cells into account, you get artifacts like the ones in your image.

    One more thing: in general, simply gathering photons at the pixel positions like you're doing will not give very pretty results (you'll get a lot of noise in the illumination because there aren't enough photons to properly sample the incoming energy). To get that to look smooth, you'll end up needing a ridiculously large number of photons. Final gathering, with a high enough sample count, is much more effective at producing smooth indirect illumination while still using sensible numbers of photons.
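
As a concrete illustration of point 2, here is a small C++ sketch of Russian-roulette survival at a diffuse surface. The Photon/Rgb types and rand01 are made-up helpers, and the final scaling is the optional colour adjustment the post mentions; the photon is assumed to be stored in the map (with its unscaled power) elsewhere.

```cpp
#include <cstdlib>

struct Rgb    { float r, g, b; };
struct Photon { Rgb power; /* plus position, incoming direction, ... */ };

float rand01() { return (float)std::rand() / (float)RAND_MAX; }

// Decide whether a photon hitting a diffuse surface with reflectance
// 'diffuse' keeps bouncing. Returns false if the photon is absorbed.
bool surviveDiffuseBounce(Photon& p, const Rgb& diffuse)
{
    // Survival probability proportional to the (average) diffuse reflectance.
    float pSurvive = (diffuse.r + diffuse.g + diffuse.b) / 3.0f;
    if (pSurvive <= 0.0f || rand01() >= pSurvive)
        return false;                  // absorbed: stop tracing this photon

    // Optional: tint the survivor by the surface colour. Dividing by the
    // survival probability keeps the expected power unchanged.
    p.power.r *= diffuse.r / pSurvive;
    p.power.g *= diffuse.g / pSurvive;
    p.power.b *= diffuse.b / pSurvive;
    return true;
}
```
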
  10. jbarcz1

    Spherical Geometry question

    It sounds like what you really want to do is derive a closed-form equation for each of the edge arcs, and then render the resulting curves. If you know the directions from the sphere center to each vertex, you should be able to look at the tangents at the vertices to derive the curves. If all you're trying to do is render the projected polygon on the sphere, you might have an easier time of it by just repeatedly subdividing the edges of the planar polygon and projecting the resulting points back onto the sphere (with short, straight line segments in between). That method will probably be less efficient, but the math is a bit easier.
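
A short C++ sketch of the subdivide-and-project approach follows; the Vec3 type and parameter names are assumptions for illustration. Each planar edge is split into small segments whose endpoints are pushed out to the sphere, and straight lines between consecutive output points approximate the projected arc.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Scale v so it lies on the sphere of the given radius (v must not be at the
// sphere centre).
Vec3 normalizeTo(const Vec3& v, float radius)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    float s = radius / len;
    return Vec3{ v.x * s, v.y * s, v.z * s };
}

// Given one edge of the planar polygon (a -> b), expressed relative to the
// sphere centre, emit 'segments' + 1 points lying on the sphere.
void projectEdgeOntoSphere(const Vec3& a, const Vec3& b,
                           float radius, int segments,
                           std::vector<Vec3>& out)
{
    for (int i = 0; i <= segments; ++i)
    {
        float t = (float)i / (float)segments;
        Vec3 p{ a.x + (b.x - a.x) * t,           // point on the straight edge
                a.y + (b.y - a.y) * t,
                a.z + (b.z - a.z) * t };
        out.push_back(normalizeTo(p, radius));   // push it out to the sphere
    }
}
```
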
  11. Quote: Original post by Ashkan
      Is the first option inefficient because testing for the torch's (parent) bounding volume adds no extra value to our culling process, since the parent's bounding volume is a subset of the child's? Yann has described a solution to this problem in the original thread, but not only is it overly complicated, it also seems more like an afterthought. Have you found any solutions to this, dmatter?

      The light source's bounding volume doesn't just 'not add value'; it creates extra work to test against when frustum culling, and so is actively removing value. Hidden surface removal isn't the only problem, either; there's also the fact that you have to traverse several irrelevant nodes to get to the light when you're trying to tell whether it influences other objects. This seems like a great example of why transform and spatial hierarchies should be kept separate. The problem disappears if you have two different spatial graphs, one of which is used to answer the question "What can the camera see?" and the other to answer the question "What lights shine on me?".
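
A toy C++ illustration of that "two graphs" split is sketched below; every type and function name is made up for the example and is not an API from the original thread.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Graph 1: the transform (parenting) hierarchy. A torch is a child of the
// character's hand, so it follows the hand's motion, but this tree is never
// walked for visibility queries.
struct SceneNode {
    Vec3                    localOffset;   // stand-in for a full local transform
    Vec3                    worldPosition; // recomputed from the parent each frame
    std::vector<SceneNode*> children;
};

void updateWorld(SceneNode& node, const Vec3& parentWorld)
{
    node.worldPosition = { parentWorld.x + node.localOffset.x,
                           parentWorld.y + node.localOffset.y,
                           parentWorld.z + node.localOffset.z };
    for (SceneNode* child : node.children)
        updateWorld(*child, node.worldPosition);
}

// Graph 2: a flat spatial structure holding only things that matter for
// culling and lighting (renderable bounds and light influence volumes),
// refreshed from the scene nodes. Here it is just a vector; a real engine
// would use an octree, BVH, or similar.
struct SphereBound { Vec3 center; float radius; };
struct LightEntry  { SphereBound influence; int lightId; };

std::vector<int> lightsAffecting(const SphereBound& object,
                                 const std::vector<LightEntry>& lights)
{
    std::vector<int> result;
    for (const LightEntry& l : lights)
    {
        Vec3 d = { l.influence.center.x - object.center.x,
                   l.influence.center.y - object.center.y,
                   l.influence.center.z - object.center.z };
        float r = l.influence.radius + object.radius;
        if (d.x * d.x + d.y * d.y + d.z * d.z <= r * r)
            result.push_back(l.lightId);   // overlapping spheres: light touches object
    }
    return result;
}
```
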
  12. I believe that on most ATI chips, depth, scissor, and stencil testing all happen at the same time. This could be before or after the shader, depending on what the shader does (texkill instructions and depth output can cause it to happen after the shader).
  13. jbarcz1

    Software renderers

    Quote: Original post by C0D1F1ED
    vargatom, that's some very interesting information about offline rendering. Thanks for sharing! I do believe though that sooner or later the complexity of a real-time raytracer would be comparable to that of a rasterizer. However, the important part is that things are easier for application developers. Raytracing offers some very powerful abstractions, and even if the implementation is complex, using them is quite intuitive. Nowadays the implementation of a rasterizer is actually becoming easier (once you have a shader compiler), and all the complexity of creating lifelike scenes out of triangles is completely the application developer's task. If we continue like this the GPU will soon become just a massive SMT SIMD processor and it's up to the developer to write a rasterizer from scratch.

    I'm not sure I agree with this part. GPUs are trending in the direction you describe, but they still have (and will continue to have) fixed-function rasterizers with very high throughput. You simply can't match that kind of performance in code, no matter how good that code might be, so I'd expect most developers to keep using the hardware, since it's already there and faster.

    Quote:
    Raytracing can make it very easy to create scenes equally or better looking than today's cutting-edge rasterized games. And that's what makes it so appealing. The performance benefits of rasterizers could quickly become of lesser importance...

    Again, I'm not sure I agree. Implementing the core raytracer may be fairly straightforward, but you also need programmable shading, high-quality texture filtering, and lots of other things that GPUs already devote most of their die area to. You also need a clean, high-level API, because developers are not going to want to mess around with the low-level raytracing code to adapt it to their specific needs. I expect that the need for a completely new (and different) API will be one of the major stumbling blocks for game developers. Developers already have large existing code bases for doing everything with the standard "bag of hacks", and they know how to use these hacks very convincingly, so, from a practical standpoint, it will take a while before they're willing to scrap it all and commit to a raytracing platform.

    Acceleration structure update for dynamic scenes is another potentially sticky issue. In a raytracer, you'll need to compute new vertex positions for every object in your scene every single frame, or else implement some kind of lazy update scheme to do it on demand (not too tricky in software, but probably painful for hardware). Particle effects are another thing that I think the raytracing community has ignored: for particle systems made of billboards (the usual way), you're going to need a data structure that is tolerant of incoherent motion. The only one I know of is a uniform grid, but those eat way too much memory to be practical for the whole scene, so you'd probably need some kind of hybrid system. Also, if you happen to have large numbers of skinned characters with the same geometry but different animation states (for example, a large army in an RTS game), you need to create distinct copies of all of them to store in your acceleration structure. For situations like this, a rasterizer might be preferable because of the lower memory footprint.

    I'm not denying that raytracing has some real advantages, but I think it will be a lot trickier than people think for game developers to just pick it up and use it.
  14. jbarcz1

    CPU and GPU running in parallel?

    Quote: Original post by transwarp
    I regret buying my Radeon X1950XT now. :(

    ATI (or rather, AMD graphics) has a PIX plugin that gives some fairly useful information about performance on their older cards (hardware utilization, vertex processing utilization, etc.). There is also a new tool, called GPUPerfStudio, which gives essentially the same information but allows you to view it in real time (and display pretty plots). PerfStudio is similar to NvPerfHUD, but will work without any modifications to your application. GPUPerfStudio also lets you do things like forcing simple pixel shaders or forcing 2x2 textures, to help identify bottlenecks. You can write code in your app to do this manually, of course, but it's nice to have someone else doing it for you.
  15. jbarcz1

    Software renderers

    Quote: Original post by Stonemonkey
    Quote:
    Quote: Original post by Stonemonkey
    I'm interested in what it is you're wanting to do in a software rasteriser that you can't do with shaders?
    I got one. Scatter write! :)
    ok, maybe not with shaders, but there is hardware capable of that, and as the general-purpose CPU evolves so will the GPU, which atm is becoming more programmable, shaders being one step in that direction.

    The Xbox 360 GPU already supports scatter writes: the shader cores can write to arbitrary memory addresses. I've never used it myself, and I imagine it's a pain in the neck to use properly because of synchronization issues, but the capability is there. On PC GPUs, you can use vertex and/or geometry shaders to do "pseudo" scatter writes, by rendering point primitives into specified locations in a render target.
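
One way to set up those point primitives on the CPU side is sketched below in C++. ScatterVertex and makeScatterPoint are invented names, and this covers only the half of the technique that computes destination positions; a pass-through vertex shader (or one that does the same mapping on the GPU) completes it.

```cpp
// Each scattered value becomes one point-list vertex whose position
// addresses a specific texel of the render target.
struct ScatterVertex {
    float x, y, z, w;    // clip-space position selecting the destination texel
    float value[4];      // data to write there
};

ScatterVertex makeScatterPoint(int texelX, int texelY,
                               int targetW, int targetH,
                               const float value[4])
{
    ScatterVertex v;
    // Map the texel centre to normalized device coordinates. (With D3D9's
    // half-texel convention you may also need a half-pixel offset here.)
    v.x = ((texelX + 0.5f) / targetW) * 2.0f - 1.0f;
    v.y = 1.0f - ((texelY + 0.5f) / targetH) * 2.0f;
    v.z = 0.0f;
    v.w = 1.0f;
    for (int i = 0; i < 4; ++i) v.value[i] = value[i];
    return v;
}
// Fill a dynamic vertex buffer with these points and draw them as a point
// list into the render target; each point then writes exactly one texel.
```
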