dblack

Member
  • Content Count

    80
  • Joined

  • Last visited

Community Reputation

150 Neutral

About dblack

  • Rank
    Member

Personal Information

  1. Haven't looked at your code in detail, but I wrote something equivalent using OpenCL. The first thing to do is verify each part individually: start with the FFT routine, test some common functions, and compare against Maxima (or another known-working FFT). You could also try implementing parts on the CPU (in fact, IIRC my initial generation is always done on the CPU) and comparing the results to what you expect. David
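A minimal sketch of that validation idea: compare the fast FFT against a naive O(N^2) reference DFT on the CPU (my own routine for illustration, not from the code under discussion).

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive O(N^2) reference DFT, used to validate a fast FFT implementation:
// feed both the same input and compare the outputs bin by bin.
std::vector<std::complex<double>> NaiveDft(const std::vector<std::complex<double>>& in)
{
    const double pi = 3.14159265358979323846;
    const std::size_t n = in.size();
    std::vector<std::complex<double>> out(n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t j = 0; j < n; ++j)
            out[k] += in[j] * std::polar(1.0, -2.0 * pi * double(k * j) / double(n));
    return out;
}
```

A pure cosine of frequency k then concentrates its energy in bins k and N-k (each with magnitude N/2), which is easy to assert before moving on to less trivial inputs.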
  2. No it doesn't, it fails in the edge cases (in the completely literal sense). It works perfectly for the general case. The VAST majority of triangles will not cross UV seams. FYI, real models would avoid multiple repeats across a single triangle. Why? Because it implies that one triangle is using a disproportionate amount of texels within the texture (which in turn implies that some of your triangles aren't using enough). To be honest, I can't see many cases where you'd get a single texture repeating numerous times across the same polygon. Visually the repetition would look a little jarring, and it's not like we have to be that stingy on polygon budgets anymore! Fixing this within code (assuming the OP actually has a requirement to get this fixed) is likely to work without the amount of faffing about you are proposing. Multiple repeats are the only problem area, fixing via subdivision is easy, and let's be honest - it's not something the art team should be encouraged to do anyway. [/quote] I think I would disagree about repeats; there are plenty of cases where repeats can look good. The most common case I can think of is a wall or floor which derives its detail from a normal map (possibly with parallax mapping), or in a more modern setting, tessellation and a displacement map. To avoid repetition, decals, procedural texturing, detail models etc. can be used. If you can perform the resampling in a tool rather than forcing an artist to worry about dividing triangles, unique unwraps etc., then you save them time so they can think about more "arty" things. David
  3. That's perfectly normal if that's how the model was designed; I do this all the time for tileable texture layers. Sometimes you need unique coordinates (e.g. light mapping), which is perhaps what you want? There are two ways to achieve this: either modify the model, or generate coordinates and perhaps resample the texture(s) during import - take a look at UVAtlas in D3D9, it has functions for this. David [/quote] Yeah, I was afraid it is by design. And of course, I discovered it because it messed up my light mapping :-) I might try to modify the coordinates myself directly in MAX - is there some easy, automatic way to do it, or does it have to be done manually? I will also check the UVAtlas you recommended. Thanks MC [/quote] I don't really know much about MAX; in Blender you can just select the texture coordinates and scale+translate them into the range you want, plus baking textures etc. as needed.
  4. You should keep things as simple as possible, but no simpler. For the example you give, I think the object (mesh etc.) knows best how it should be rendered, so it should have a draw call (or multiple draw calls), from which it draws itself using a lower-level API. You can use inheritance and contained classes to factor out common code, and higher- or lower-level code to handle communication (e.g. state sorting, light binding, occluder fusion etc.). Just start with something simple and add features, splitting things out when they get messy or duplicated. David PS I am not a fan of thin API abstraction layers (anymore); I would leave such things until you actually have to port to another API or platform. Otherwise you end up wasting a lot of time on them and they continually grow in scope. (The exception to this is if you have no other option, e.g. a wrapper from C++ to .NET; in this case you should keep the wrapper as small as possible and mold the API to suit your specific needs.)
  5. [source] u = fmod(u, 1.0f); v = fmod(v, 1.0f); [/source] That will mess up if a triangle has two verts, one with a u coord of say 0.99, the other with a u coord of 1.01. You may have to apply some slightly cleverer heuristics to make sure that doesn't happen [/quote] This is useless in the general case... It might work for a few restricted cases, but real models have e.g. multiple or partial repeats within triangles. You might be able to split triangles to a certain degree to fix this sort of thing, but it would be much better to generate a completely new unwrap and perhaps use two texture coordinate sets if you want to keep the original textures. David
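One of the "slightly cleverer heuristics" could be to rebase the whole triangle by a single shared offset instead of wrapping each vertex independently - a minimal sketch of my own (it only handles a whole-period offset, not multiple repeats within one triangle, which is exactly the remaining problem case discussed above):

```cpp
#include <cassert>
#include <cmath>

// Rebase a triangle's U coordinates by one shared offset (the floor of the
// minimum U) instead of calling fmod per vertex. Vertices such as 0.99 and
// 1.01 then stay on the same side of the seam, preserving interpolation.
void RebaseTriangleU(float& u0, float& u1, float& u2)
{
    const float base = std::floor(std::fmin(u0, std::fmin(u1, u2)));
    u0 -= base;
    u1 -= base;
    u2 -= base;
}
```

After rebasing, coordinates lie near [0, 1] plus at most the triangle's own UV extent, which is what per-vertex fmod destroys.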
  6. That's perfectly normal if that's how the model was designed; I do this all the time for tileable texture layers. Sometimes you need unique coordinates (e.g. light mapping), which is perhaps what you want? There are two ways to achieve this: either modify the model, or generate coordinates and perhaps resample the texture(s) during import - take a look at UVAtlas in D3D9, it has functions for this. David
  7. Not quite what I meant, but it makes things clearer (and the normalization to improve accuracy is probably useful for my case). I was thinking more along the lines of maintaining the units of radiance/irradiance and the ability to artistically tweak the results etc.
  8. Is this done in the SH case as well? Where do I get the distance from for sky texels? [/quote] >>> So I need to weight the samples according to the projected area and by the dot product? That's what I would do; it makes sense to consider the AO wrt the solid angle rather than unevenly sized hemicube pixels (not sure what changes you would want with just a single face though). It might not matter much depending on other approximations and tradeoffs you make. AO is very similar to the first gather pass of a radiosity system with just a sky sphere emitting light. >>> "That term needs to be divided by the squared distance between the location of the camera x and the center of the texel" Is this done in the SH case as well? Where do I get the distance from for sky texels? <<< This is the distance from the centre of the hemicube (or cube map) to the texel you are computing the solid angle (area) for, on the surface of the hemicube. It is in the snippet of code you posted: when you plug cos(theta) and r into the solid angle formula and rearrange, you get the above code. David
  9. The code you mention is computing the solid angle which the pixel covers; this is not constant for a cube, since the pixels at the corners are larger when projected onto a sphere contained within the cube. For some more details, take a look at the middle of this page and the links: http://the-witness.n...nd-integration/ If you don't do this, the results can look quite different. I think the 4 in the equation corresponds to the area of the face, since the input is a 2x2 square centred at zero (-1 to 1 texture space). The angle is cos(theta) = 1 / r = 1 / sqrt(1 + u^2 + v^2). Someone correct me if I am wrong, but this seems correct using the solid angle formula mentioned in the radiosity literature (I haven't gotten around to deriving the solid angle formula, though it shouldn't be too difficult). A question of my own: Is it necessary to normalize the result from the above formula somehow? I haven't checked what the scale values sum to... The results seem OK, but I guess the result would just be off by a constant factor (total = 4*pi, 4*pi / 6 per cube face??)... David
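The guess at the end is easy to check numerically: summing the per-texel solid angle dA * cos(theta) / r^2 = (du*dv) / (1 + u^2 + v^2)^(3/2) over one face should converge to 4*pi / 6. A small sketch (my own, under the same -1..1 face parameterization as above):

```cpp
#include <cassert>
#include <cmath>

// Sum the approximate solid angle of every texel on one cube map face,
// sampling each texel at its centre. One face of the cube subtends
// 4*pi / 6 = 2*pi / 3 steradians, so the sum should approach that.
double SumFaceSolidAngles(int faceSize)
{
    const double texel = 2.0 / faceSize; // texel width in [-1, 1] face space
    double total = 0.0;
    for (int y = 0; y < faceSize; ++y)
    {
        for (int x = 0; x < faceSize; ++x)
        {
            const double u = -1.0 + (x + 0.5) * texel;
            const double v = -1.0 + (y + 0.5) * texel;
            const double r2 = 1.0 + u * u + v * v;
            // dA * cos(theta) / r^2, with cos(theta) = 1/r and r = sqrt(r2)
            total += (texel * texel) / (r2 * std::sqrt(r2));
        }
    }
    return total;
}
```

So the scale values do sum to a known constant (4*pi over the whole cube), and any normalization is just dividing by that total.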
  10. dblack

    Instancing question d3d11

    Yeah, but there does seem to be a bit of an API gap. What would be really handy would be a DrawAuto*Array() function which is similar to DrawAuto() but takes multiple sets of draw call parameters; then you can just stream or generate draw call ranges. David
  11. dblack

    Instancing question d3d11

    You could use the primitive/vertex id to select the instance data you want to use based on which mesh it belongs to. But that would limit you to drawing each mesh the same number of times if you only use one draw call. This would be similar to a skinned mesh with a bunch of individual components, except you could perhaps remove the need for a per vertex index/weight. David
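A CPU-side sketch of the lookup the shader would perform (my own illustration, assuming the meshes are packed into one combined vertex buffer with known first-vertex offsets): given a vertex id, find which mesh it belongs to so the right per-mesh instance data can be fetched.

```cpp
#include <cassert>
#include <vector>

// Given the first-vertex offset of each mesh in a combined vertex buffer
// (sorted ascending, first entry 0), return the index of the mesh that a
// vertex id belongs to. A shader would do the equivalent with SV_VertexID.
int MeshIndexFromVertexId(const std::vector<int>& firstVertex, int vertexId)
{
    int mesh = 0;
    while (mesh + 1 < static_cast<int>(firstVertex.size()) &&
           vertexId >= firstVertex[mesh + 1])
        ++mesh;
    return mesh;
}
```

In a real shader this table would live in a constant or structured buffer, and a binary search would be preferable for many meshes.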
  12. dblack

    Instancing question d3d11

    It is probably possible if you keep your vertices and indices in some sort of random access buffer (e.g. a structured buffer) and just instance a single triangle an appropriate number of times, using the instance index to figure out which vertices to fetch. Not sure about the performance implications of this or whether you will run into any limits. David
  13. dblack

    Shadow Mapping d3d11

    Probably 2 if you are not doing filtering on the shadow map (PCF should work with just depth). Otherwise, for something like VSM (Variance Shadow Mapping) or ESM (Exponential Shadow Mapping) you have to write to a texture; for VSM you need depth and depth^2. Which is faster will depend on the situation, e.g. you can use mip-maps with VSM and ESM, and writing to just a depth buffer may also not be much faster if the GPU has unused bandwidth. David
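For reference, the reason VSM stores depth and depth^2 is that the two moments give a variance, and Chebyshev's inequality then bounds the fraction of light reaching the receiver. A minimal sketch of that reconstruction step (a standard formulation, not tied to any particular engine):

```cpp
#include <algorithm>
#include <cassert>

// Variance Shadow Mapping lookup: the filtered map provides E[z] and E[z^2].
// Chebyshev's inequality gives an upper bound on visibility for a receiver
// at depth t behind the mean occluder depth.
float VsmShadow(float meanDepth, float meanDepthSq, float t)
{
    if (t <= meanDepth)
        return 1.0f; // receiver is in front of the stored occluders: fully lit
    const float variance = std::max(meanDepthSq - meanDepth * meanDepth, 1e-6f);
    const float d = t - meanDepth;
    return variance / (variance + d * d); // Chebyshev upper bound
}
```

Because the two moments filter linearly, this is what makes mip-mapping and pre-blurring the VSM texture legal, unlike a raw depth buffer.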
  14. dblack

    FBX format loader

    I found it easier to write a general method which computes the direct index based on the ref and mapping mode, then looks up the value (normal) in the direct array. Something like: template<class T> Nullable<int> WindFbxImporter::ComputeDirectIndex(KFbxLayerElementTemplate<T>* element, int polygonIndex, int polygonItemIndex, int polygonVertexIndex); and Vector3 WindFbxImporter::GetVector3Element(KFbxLayerElementTemplate<KFbxVector4>* layerElement, int polygonIndex, int polygonItemIndex, int polygonVertexIndex); This means less special-case code for dealing with each element type and makes things easier to extend. David
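The core of such a routine might look like the following sketch. Note this deliberately uses my own hypothetical enums rather than the FBX SDK types, and only covers two of the SDK's mapping modes; the real KFbxLayerElement has more modes, but the shape of the logic is the same.

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified stand-ins for the FBX SDK's mapping and
// reference modes (NOT the real SDK enums).
enum class MappingMode { ByControlPoint, ByPolygonVertex };
enum class ReferenceMode { Direct, IndexToDirect };

// Resolve the index into the direct (value) array for one polygon vertex.
// With Direct reference the raw index is used as-is; with IndexToDirect it
// is first remapped through the element's index array.
int ComputeDirectIndex(MappingMode mapping, ReferenceMode reference,
                       const std::vector<int>& indexArray,
                       int controlPointIndex, int polygonVertexIndex)
{
    const int raw = (mapping == MappingMode::ByControlPoint)
                        ? controlPointIndex
                        : polygonVertexIndex;
    return (reference == ReferenceMode::Direct) ? raw : indexArray[raw];
}
```

A typed getter (for normals, UVs, colours...) then just calls this and indexes the direct array, which is the factoring described above.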
  15. Nice paper... looks like a solution to my DOF problems on a quick skim (maybe motion blur too). When I finish fixing/optimizing my engine after the transition to DX11, I will definitely give it a look. Might be a while though :-( [/quote] Hmmm, that paper makes no sense... There is not really a proper description of the algorithm or why it works... David