dblack

Members
  • Content count

    80
Community Reputation

150 Neutral

About dblack

  • Rank
    Member


  1. Haven't looked at your code in detail, but I wrote something equivalent using OpenCL. The first thing to do is verify each part individually. Start with the FFT routine: test some common functions and compare against Maxima (or another known-working FFT). You could also try implementing parts on the CPU (in fact, IIRC my initial generation is always done on the CPU) and comparing the results to what you expect. David
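A minimal sketch of the "verify against a known-good reference" idea: a naive O(N^2) DFT on the CPU is slow but easy to trust, and makes a convenient comparison for whichever FFT routine you are testing (the function and type names here are just illustrative, not from the OP's code):

[source]
// Compare an FFT under test against a naive CPU DFT reference.
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

typedef std::complex<double> cpx;

// Naive O(N^2) DFT: slow, but simple enough to be obviously correct.
std::vector<cpx> ReferenceDft(const std::vector<cpx>& in)
{
    const size_t n = in.size();
    const double pi = 3.14159265358979323846;
    std::vector<cpx> out(n);
    for (size_t k = 0; k < n; ++k)
    {
        cpx sum(0.0, 0.0);
        for (size_t j = 0; j < n; ++j)
        {
            const double angle = -2.0 * pi * double(k * j) / double(n);
            sum += in[j] * cpx(std::cos(angle), std::sin(angle));
        }
        out[k] = sum;
    }
    return out;
}

// Largest absolute difference between the reference and the FFT under test.
double MaxError(const std::vector<cpx>& reference, const std::vector<cpx>& tested)
{
    double maxErr = 0.0;
    for (size_t i = 0; i < reference.size(); ++i)
        maxErr = std::max(maxErr, std::abs(reference[i] - tested[i]));
    return maxErr;
}
[/source]

Feed both implementations simple inputs first (a constant, a single sine) and then random data; the error should stay within a small multiple of machine epsilon times N.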
  2. [quote name='RobTheBloke' timestamp='1314793384' post='4855844'] [quote name='dblack' timestamp='1314726626' post='4855539'] This is useless in the general case... [/quote] No it doesn't, it fails in the edge cases (in the completely literal sense). It works perfectly for the general case. [quote name='dblack' timestamp='1314726626' post='4855539'] Might work for a few restricted cases, but real models have e.g. multiple or partial repeats within triangles etc. [/quote] The VAST majority of triangles will not cross UV seams. FYI, real models would avoid multiple repeats across a single triangle. Why? Because it implies that one triangle is using a disproportionate amount of texels within the texture (which in turn implies that some of your triangles aren't using enough). To be honest, I can't see many cases where you'd get a single texture repeating numerous times across the same polygon. Visually the repetition would look a little jarring, and it's not like we have to be that stingy on polygon budgets anymore! [quote name='dblack' timestamp='1314726626' post='4855539'] You might be able to split triangles etc. to a certain degree to fix this sort of thing, but it would be much better to generate a completely new unwrap and perhaps use two texture coordinate sets if you want to keep the original textures.[/quote] Fixing this within code (assuming the OP actually has a requirement to get this fixed) is likely to work without the amount of faffing about you are proposing. Multiple repeats are the only problem area, fixing via subdivision is easy, and let's be honest - it's not something the art team should be encouraged to do anyway. [/quote] I think I would disagree about repeats; there are plenty of cases where repeats can look good. The most common case I can think of is a wall or floor which derives its detail from a normal map (possibly with parallax mapping) or, in a more modern setting, tessellation and a displacement map. To avoid repetition, decals, procedural texturing, detail models etc. can be used. If you can perform the resampling in a tool rather than forcing an artist to worry about splitting triangles, unique unwraps etc., then you save them time so they can think about more "arty" things. David
  3. [quote name='MirekCerny_' timestamp='1314749130' post='4855681'] [quote name='dblack' timestamp='1314723737' post='4855523'] [quote name='MirekCerny_' timestamp='1314720484' post='4855508'] Hello, I have some trouble importing UV (texture) coordinates from 3DS MAX's FBX file using the FBX SDK. I use pretty much the algorithm they recommend; however, some of the resulting coordinates are either higher than 1, or well below 0. (Cf. -8.123...) Is this normal in 3DS MAX? Or is the FBX file corrupted? Is there a way to convert it to a traditional 0..1 mapping? Thanks MC [/quote] That's perfectly normal if that's how the model was designed; I do this all the time for tileable texture layers. Sometimes you need unique coordinates (e.g. light mapping), which is perhaps what you want? There are two ways to achieve this: either modify the model, or generate coordinates and perhaps resample the texture(s) during import; take a look at UVAtlas in D3D9, it has functions for this. David [/quote] Yeah, I was afraid it is by design. And of course, I discovered it because it messed up my light mapping :-) I might try to modify the coordinates myself directly in MAX - is there some easy, automatic way to do it, or does it have to be done manually? I will also check the UVAtlas you recommended. Thanks MC [/quote] I don't really know much about MAX; in Blender you can just select the texture coordinates and scale+translate them into the range you want, plus bake textures etc. as needed.
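If you would rather fix the coordinates at import time than in the DCC tool, the scale+translate step is trivial to automate; a rough sketch (not FBX SDK specific, and only sensible when the layout is unique per face and you are not relying on tiling):

[source]
// Translate/scale a UV set so its bounding box fits into [0,1].
#include <algorithm>
#include <vector>

struct Uv { float u, v; };

void NormalizeUvsToUnitRange(std::vector<Uv>& uvs)
{
    if (uvs.empty())
        return;

    // Find the bounding box of the UV set.
    float minU = uvs[0].u, maxU = uvs[0].u;
    float minV = uvs[0].v, maxV = uvs[0].v;
    for (size_t i = 1; i < uvs.size(); ++i)
    {
        minU = std::min(minU, uvs[i].u); maxU = std::max(maxU, uvs[i].u);
        minV = std::min(minV, uvs[i].v); maxV = std::max(maxV, uvs[i].v);
    }

    // Remap so the box becomes [0,1] x [0,1]; degenerate axes are left alone.
    const float scaleU = (maxU > minU) ? 1.0f / (maxU - minU) : 1.0f;
    const float scaleV = (maxV > minV) ? 1.0f / (maxV - minV) : 1.0f;
    for (size_t i = 0; i < uvs.size(); ++i)
    {
        uvs[i].u = (uvs[i].u - minU) * scaleU;
        uvs[i].v = (uvs[i].v - minV) * scaleV;
    }
}
[/source]

If the original UVs actually repeat (as in the OP's case), this only squashes the layout; you would still need to resample or re-bake the textures afterwards.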
  4. [quote name='Althar' timestamp='1314724641' post='4855526'] Hey everyone, I've been working on a personal 3D "game engine" for several months now, for learning purposes mainly. My engine is split into two separate libraries: [b]core[/b] and [b]game[/b]. [b]Core[/b] is the low level wrappers and classes which abstract specific APIs (such as DirectX for graphics, FMOD for sound, etc.) into a unified system to use for high level components. This part is pretty much complete, and I have something I am happy to work with, and which offers all the components required for rendering (primitive drawing, texturing, shader handling, etc.), gathering input, as well as managing memory and resources. Now lately, I have been focusing on the [b]game[/b] part, which is supposed to feature high level components, such as scene management components (octrees, scene graphs, whatever you like), state management (game states), and the rendering pipeline. I have made several attempts at designing my rendering pipeline, from the dirty/naive rendering system within scene nodes (with simple "render()" calls), to attempting to have a unified system (specific render managers for meshes, particles, etc., which receive basic data prior to rendering, and treat it later on) which I then turned into a deferred rendering pipeline. Unfortunately, I haven't been able to find a way which I found elegant and modular/flexible enough. I have been reading through the forums, and gathered a few techniques and designs on providing the data, such as cbuffers, or render managers which expect pre-defined structures (similar to what I attempted with my deferred renderer), but I have found very little information on what the code looks like and how it links up with the pipeline (with shading, shadowing, post-processing effects, etc.). --------------- Would you happen to know where I could find resources on an actual pipeline? I understand the principle of the techniques mentioned above; however, I do not understand how they actually relate to the complete process. In my deferred rendering design, I essentially had several render managers which would take a list of pre-defined structures, and were able to output a render. Basically my main RenderManager would allocate RenderInstances on demand (which could be of type RenderMeshInstances for meshes, expecting a mesh, matrices, and materials), and register it, so that during the actual rendering of the scene, the instance can be dispatched to its corresponding manager (in that case RenderMeshManager), which actually performs the job. Now this method worked pretty well for a bit, since I could centralize rendering; however, as soon as I attempt to add an extra stage (say shadowing), I am lost. I thought I could potentially add a new RenderShadowManager, and its corresponding RenderShadowInstance, which would require the mesh to be registered twice (once for default rendering, and once for shadowing). In effect that could work, but I can hardly imagine it being viable once I start to add more effects and have to explicitly re-register instances for particular effects. How did you guys design your rendering pipeline? Is it monolithic? Has anyone managed to design it in a somewhat modular and dynamic fashion (e.g. allowing post-effects to be stacked on the fly at runtime, or new stages to be injected, without further changes, based on existing info: for instance a mesh might still register for an AO pass)?
Does it sound wrong to have to hardcode parts of the pipeline like I did? I have had a look at several engines/frameworks, such as Torque3D, Irrlicht, Ogre (which I know the least about), and they all seem to have to do it to some level. Am I delusional to think a pipeline can be modular? Should I just drop it and design a fixed pipeline revolving around my game's needs? Thanks in advance for your input and advice!! [/quote] You should keep things as simple as possible, but no simpler. For the example you give, I think the object (mesh etc.) knows best how it should be rendered, so it should have a draw call (or multiple draw calls), from which it draws itself using a lower level API. You can use inheritance and contained classes to factor out common code, and higher or lower level code to handle communication (e.g. state sorting, light binding, occluder fusion etc.). Just start with something simple and add features, splitting things out when they get messy or duplicated. David PS I am not a fan of thin API abstraction layers (anymore); I would leave such things until you actually have to port to another API or platform. Otherwise you end up wasting a lot of time on them and they continually grow in scope. (The exception to this is if you have no other option, e.g. a wrapper from C++ to .NET; in this case you should keep the wrapper as small as possible and mold the API to suit your specific needs).
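To make the "object knows how to draw itself" suggestion a bit more concrete, here is a very rough sketch of what the simple version might look like; every name here is made up for illustration, not taken from any particular engine:

[source]
// Objects draw themselves; the renderer only collects and orders them.
#include <algorithm>
#include <vector>

class RenderContext; // would wrap the low level API state (device, pass, ...)

class Renderable
{
public:
    virtual ~Renderable() {}
    // A sort key lets the renderer batch by shader/material/depth etc.
    virtual unsigned int SortKey() const { return 0; }
    // The object issues its own lower level draw calls here.
    virtual void Draw(RenderContext& context) = 0;
};

class Renderer
{
public:
    void Submit(Renderable* item) { m_items.push_back(item); }

    void Flush(RenderContext& context)
    {
        std::sort(m_items.begin(), m_items.end(), CompareByKey);
        for (size_t i = 0; i < m_items.size(); ++i)
            m_items[i]->Draw(context);
        m_items.clear();
    }

private:
    static bool CompareByKey(const Renderable* a, const Renderable* b)
    {
        return a->SortKey() < b->SortKey();
    }

    std::vector<Renderable*> m_items;
};
[/source]

An extra stage such as a shadow or AO pass can start out as just another Flush with a different context/state; only split it into its own manager when the simple version actually becomes messy or duplicated.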
  5. [quote name='RobTheBloke' timestamp='1314725119' post='4855527'] [quote name='MirekCerny_' timestamp='1314720484' post='4855508']Is there a way to convert it to a traditional 0..1 mapping? [/quote] [source] u = fmod(u, 1.0f); v = fmod(v, 1.0f); [/source] That will mess up if a triangle has two verts, one with a u coord of say 0.99, the other with a u coord of 1.01. You may have to apply some slightly cleverer heuristics to make sure that doesn't happen [/quote] This is useless in the general case... Might work for a few restricted cases, but real models have e.g. multiple or partial repeats within triangles etc. You might be able to split triangles etc. to a certain degree to fix this sort of thing, but it would be much better to generate a completely new unwrap and perhaps use two texture coordinate sets if you want to keep the original textures. David
  6. [quote name='MirekCerny_' timestamp='1314720484' post='4855508'] Hello, I have some trouble importing UV (texture) coordinates from 3DS MAX's FBX file using the FBX SDK. I use pretty much the algorithm they recommend; however, some of the resulting coordinates are either higher than 1, or well below 0. (Cf. -8.123...) Is this normal in 3DS MAX? Or is the FBX file corrupted? Is there a way to convert it to a traditional 0..1 mapping? Thanks MC [/quote] That's perfectly normal if that's how the model was designed; I do this all the time for tileable texture layers. Sometimes you need unique coordinates (e.g. light mapping), which is perhaps what you want? There are two ways to achieve this: either modify the model, or generate coordinates and perhaps resample the texture(s) during import; take a look at UVAtlas in D3D9, it has functions for this. David
  7. [quote name='MJP' timestamp='1314126032' post='4852896'] dblack: there is a final weighting that you apply as a final step...I believe it's included in that PDF B_old linked. Something like (4 * pi) / weightSum, where weightSum is the sum of all of the texel weights. [/quote] Not quite what I meant, but it makes things clearer (and the normalization to improve accuracy is probably useful for my case); I was thinking more along the lines of maintaining the units of radiance/irradiance and the ability to artistically tweak the results etc.
  8. [quote name='B_old' timestamp='1314128392' post='4852913'] So I need to weight the samples according to the projected area [b]and[/b] by the dot product? In the link provided by dblack it says [quote] That term needs to be divided by the squared distance between the location of the camera x and the center of the texel [/quote] Is this done in the SH case as well? Where do I get the distance from for sky texels? [/quote] >>> So I need to weight the samples according to the projected area [b]and[/b] by the dot product? That's what I would do; it makes sense to consider the AO with respect to the solid angle rather than unevenly sized hemicube pixels (not sure what changes you would want with just a single face though). It might not matter much depending on other approximations and tradeoffs you make. AO is very similar to the first gather pass of a radiosity system with just a sky sphere emitting light. >>> "That term needs to be divided by the squared distance between the location of the camera x and the center of the texel" Is this done in the SH case as well? Where do I get the distance from for sky texels? This is the distance from the centre of the hemicube (or cube map) to the texel you are computing the solid angle (area) for (on the surface of the hemicube). It is in the snippet of code you posted: when you plug cos(theta) and r into the solid angle formula and rearrange, you get the above code. David
  9. [quote name='B_old' timestamp='1314090024' post='4852706'] I am trying to get a per-vertex AO value by rendering the scene from the vertex point of view one or more times. (I am not rendering a hemicube but only 1 face of it, but I don't think it matters for the question.) Currently I am only averaging the values of the texture till I have one value, which is a problem because the angle is not taken into account. In [url="http://www.ppsloan.org/publications/StupidSH36.pdf"]stupid SH tricks[/url] there is pseudo code to integrate the radiance cube and I don't understand it. Where is the angle between the position for which I am evaluating and the sample? It makes sense that it is implicitly encoded in the texture coordinates, but can I really use float fTmp = 1 + u^2+v^2; float fWt = 4/(sqrt(fTmp)*fTmp); in my case? I probably have to change the factor 4, as I am only rendering 1/6 of the cube (1/3 of the hemi-cube). Even then I don't see how this compares to taking the dot product of the vertex normal and the direction to the sample. Can you help me understand how I should compute the weight for each of my samples? EDIT: What I think I should be doing: sum up all samples weighted by the dot product (as described above) and divide the sum by the sum of all weights. But what is the cosine kernel I keep hearing about in this regard? Or does that only have to do with spherical harmonics? [/quote] The code you mention is computing the solid angle which the pixel covers; this is not constant for a cube, since pixels near the corners cover a smaller solid angle when projected onto a sphere contained within the cube. For some more details, take a look at the middle of this page and the links: [url="http://the-witness.net/news/2010/09/hemicube-rendering-and-integration/"]http://the-witness.n...nd-integration/[/url] If you don't do this, the results can look quite different. I think the 4 in the equation corresponds to the area of the face, since the input is a 2x2 square centred at zero (-1 to 1 texture space). The angle is cos(theta) = 1 / r = 1 / sqrt(1 + u^2 + v^2). Someone correct me if I am wrong, but this seems correct using the solid angle formula mentioned in the radiosity literature (I haven't gotten around to deriving the solid angle formula though; it shouldn't be too difficult). A question of my own: is it necessary to normalize the result from the above formula somehow? I haven't checked what the scale values sum to... The results seem OK, but I guess the result would just be off by a constant factor (total = 4*pi, 4*pi / 6 for a cube face??)... David
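For what it's worth, here is a small CPU-side sketch of the weighting being discussed, restricted to the single front face case (normal along +Z); this is my own reading of the formula, so treat it as an illustration rather than a reference:

[source]
// Integrate AO over the front face of a hemicube: weight each texel by its
// solid angle times cos(theta) and normalise by the summed weights.
#include <cmath>
#include <vector>

float IntegrateAoFrontFace(const std::vector<float>& aoTexels, int faceSize)
{
    double weightedSum = 0.0;
    double weightSum   = 0.0;
    // Each texel covers (2/N)^2 of the face, which spans [-1,1]^2 at z = 1.
    const double texelArea = (2.0 / faceSize) * (2.0 / faceSize);

    for (int y = 0; y < faceSize; ++y)
    {
        for (int x = 0; x < faceSize; ++x)
        {
            // Texel centre in [-1,1] face coordinates.
            const double u = ((x + 0.5) / faceSize) * 2.0 - 1.0;
            const double v = ((y + 0.5) / faceSize) * 2.0 - 1.0;

            const double tmp        = 1.0 + u * u + v * v;       // r^2
            const double solidAngle = texelArea / (tmp * std::sqrt(tmp));
            const double cosTheta   = 1.0 / std::sqrt(tmp);       // N = +Z, dir = (u,v,1)/r

            const double w = solidAngle * cosTheta;
            weightedSum += aoTexels[y * faceSize + x] * w;
            weightSum   += w;
        }
    }
    // Dividing by the summed weights answers the normalisation question above:
    // any constant factor (the 4, 1/N^2, pi, ...) cancels out here.
    return (weightSum > 0.0) ? float(weightedSum / weightSum) : 0.0f;
}
[/source]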
  10. Instancing question d3d11

    [quote name='Jason Z' timestamp='1310013828' post='4832089'] I think just doing three separate draw calls will be much faster [/quote] Yeah, but there does seem to be a bit of an API gap: what would be really handy is a DrawAuto*Array() function, similar to DrawAuto() but taking multiple sets of draw call parameters. Then you could just stream or generate draw call ranges. David
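Lacking that, the closest thing is a tiny CPU-side helper that loops over an array of draw arguments; a hypothetical sketch (the struct and helper names are made up, only the DrawIndexedInstanced call is real D3D11):

[source]
// Hypothetical "multi draw" helper: roughly what a DrawAuto*Array() would hide.
#include <d3d11.h>
#include <vector>

struct DrawRange
{
    UINT indexCount;
    UINT instanceCount;
    UINT startIndex;
    INT  baseVertex;
    UINT startInstance;
};

void DrawRanges(ID3D11DeviceContext* context, const std::vector<DrawRange>& ranges)
{
    for (size_t i = 0; i < ranges.size(); ++i)
    {
        const DrawRange& r = ranges[i];
        context->DrawIndexedInstanced(r.indexCount, r.instanceCount,
                                      r.startIndex, r.baseVertex, r.startInstance);
    }
}
[/source]

The ranges array can be streamed or generated on the CPU; the per-call overhead is of course still there, which is exactly the gap being complained about.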
  11. Instancing question d3d11

    [quote name='Quat' timestamp='1309988349' post='4831988'] I can merge them into one vertex/index buffer. But still, how would you draw them in one draw call? If you drew the entire buffers, then each "instance" is really 3 meshes, which is not what I want, since each of the 3 needs its own world matrix. I can, of course, only draw a subset of the buffers, but then I am back to breaking this up over 3 draw calls. [/quote] You could use the primitive/vertex ID to select the instance data you want, based on which mesh it belongs to. But that would limit you to drawing each mesh the same number of times if you only use one draw call. This would be similar to a skinned mesh with a bunch of individual components, except you could perhaps remove the need for a per-vertex index/weight. David
  12. Instancing question d3d11

    [quote name='Quat' timestamp='1309977795' post='4831903'] Suppose I have a vertex buffer that stores 3 meshes. I want to draw the first mesh 5 times, the second mesh 4 times, and the third mesh 7 times, with different world space transformations. Is it possible to use instancing to do this in 1 draw call? I think I will need at least 3 draw calls. [/quote] It is probably possible if you keep your vertices and indices in some sort of random-access buffer (e.g. a structured buffer) and just instance a single triangle an appropriate number of times, using the instance index to figure out which vertices to fetch. Not sure about the performance implications of this, or whether you will run into any limits. David
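A rough sketch of the buffer setup for that idea; the per-triangle lookup itself would live in the vertex shader (using SV_VertexID and SV_InstanceID), so only the standard D3D11 resource creation is shown here, with illustrative names:

[source]
// All vertices of all meshes packed into one structured buffer, readable from
// the vertex shader; the shader decides which mesh/world matrix each instance
// (i.e. each triangle) belongs to from SV_InstanceID.
#include <d3d11.h>

struct PackedVertex { float pos[3]; float normal[3]; float uv[2]; };

HRESULT CreateVertexStructuredBuffer(ID3D11Device* device,
                                     const PackedVertex* vertices, UINT count,
                                     ID3D11Buffer** bufferOut,
                                     ID3D11ShaderResourceView** srvOut)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = sizeof(PackedVertex) * count;
    desc.Usage               = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(PackedVertex);

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = vertices;

    HRESULT hr = device->CreateBuffer(&desc, &init, bufferOut);
    if (FAILED(hr))
        return hr;

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format              = DXGI_FORMAT_UNKNOWN; // required for structured buffers
    srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements  = count;
    return device->CreateShaderResourceView(*bufferOut, &srvDesc, srvOut);
}
[/source]

The draw would then be a single DrawInstanced(3, totalTriangleCount, 0, 0), with no vertex buffers bound at all.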
  13. Shadow Mapping d3d11

    [quote name='Quat' timestamp='1309970060' post='4831856'] What strategy is better: 1. Draw linear depth to shadow map (requires color and depth writes). 2. Draw nonlinear depth to shadow map (requires only depth writes). Then create "read-only" view to it so it can be (2) sounds faster, but I am wondering if better shadow test accuracy is achieved by using (1). What is the recommended practice? [/quote] Probably 2 if you are not doing filtering on the shadow map (PCF should work with just depth). Otherwise, for something like VSM (Variance Shadow Mapping) or ESM (Exponential Shadow Mapping) you have to write to a texture; for VSM you need depth and depth^2. Which is faster will depend on the situation, e.g. you can use mip-maps with VSM and ESM, and writing to just a depth buffer may also not be much faster if the GPU has unused bandwidth. David
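For option 2, the usual D3D11 pattern is to create the shadow map with a typeless format so the same texture can have both a depth-stencil view for the shadow pass and a shader resource view for the PCF lookup; a minimal sketch (24-bit depth chosen arbitrarily here):

[source]
// Depth-only shadow map that can also be bound as a shader resource.
#include <d3d11.h>

HRESULT CreateShadowMap(ID3D11Device* device, UINT size,
                        ID3D11Texture2D** texOut,
                        ID3D11DepthStencilView** dsvOut,
                        ID3D11ShaderResourceView** srvOut)
{
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width            = size;
    texDesc.Height           = size;
    texDesc.MipLevels        = 1;
    texDesc.ArraySize        = 1;
    texDesc.Format           = DXGI_FORMAT_R24G8_TYPELESS; // typeless storage
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage            = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

    HRESULT hr = device->CreateTexture2D(&texDesc, NULL, texOut);
    if (FAILED(hr)) return hr;

    // View used while rendering the shadow pass (depth writes only).
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
    dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    hr = device->CreateDepthStencilView(*texOut, &dsvDesc, dsvOut);
    if (FAILED(hr)) return hr;

    // Read-only view used when sampling/comparing in the lighting pass.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
    srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    return device->CreateShaderResourceView(*texOut, &srvDesc, srvOut);
}
[/source]

VSM/ESM would instead render depth (and depth^2 for VSM) into an ordinary colour render target, which is what allows mip-mapping and hardware filtering.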
  14. FBX format loader

    [quote name='TiagoCosta' timestamp='1309618244' post='4830350'] I'm writing a FBX format mesh importer, but I'm having some trouble... I need to get vertex position, normal, tangent and texture coordinate data. After reading the ImportScene sample it seems that there are many ways to get normal data (getPolygonVertexNormal(), getElementNormal()) and I don't know which one I should use... Also, I must separate the vertices (and other data) into buffers by the texture maps used, which makes the whole thing harder... [/quote] I found it easier to write a general method which computes the direct index based on the reference and mapping mode, then looks up the value (e.g. the normal) in the direct array. Something like: template<class T> Nullable<int> WindFbxImporter::ComputeDirectIndex(KFbxLayerElementTemplate<T>* element, int polygonIndex, int polygonItemIndex, int polygonVertexIndex); and Vector3 WindFbxImporter::GetVector3Element(KFbxLayerElementTemplate<KFbxVector4>* layerElement, int polygonIndex, int polygonItemIndex, int polygonVertexIndex); This means less special-case code for dealing with each element type and makes things easier to extend. David
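A sketch of what such a helper might look like; this follows the FBX SDK naming of that era (KFbx*) as I remember it, so check the enum and method names against your SDK version, and note that the Nullable<int> from the post is replaced by -1 for "unsupported" here:

[source]
// Compute the index into a layer element's direct array from its mapping and
// reference modes. The parameter split differs slightly from the signature in
// the post above; adapt to however you walk polygons in your importer.
#include <fbxsdk.h>

template <class T>
int ComputeDirectIndex(KFbxLayerElementTemplate<T>* element,
                       int polygonIndex,         // which polygon
                       int controlPointIndex,    // mesh control point for this corner
                       int polygonVertexCounter) // running polygon-vertex count
{
    int index;
    switch (element->GetMappingMode())
    {
    case KFbxLayerElement::eBY_CONTROL_POINT:  index = controlPointIndex;    break;
    case KFbxLayerElement::eBY_POLYGON_VERTEX: index = polygonVertexCounter; break;
    case KFbxLayerElement::eBY_POLYGON:        index = polygonIndex;         break;
    case KFbxLayerElement::eALL_SAME:          index = 0;                    break;
    default:                                   return -1; // unsupported mapping
    }

    // Indirect reference modes add one more hop through the index array.
    if (element->GetReferenceMode() != KFbxLayerElement::eDIRECT)
        index = element->GetIndexArray().GetAt(index);

    return index;
}

// Usage idea: once you have the index, fetch from the direct array, e.g.
//   KFbxVector4 n = normals->GetDirectArray().GetAt(directIndex);
[/source]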
  15. [quote name='dblack' timestamp='1308158952' post='4823698'] [quote name='pcmaster' timestamp='1308144659' post='4823592'] Hi community, I wonder if any of you have read the 2009-2010 papers from Kosloff and Barsky on rectangle spreading. I'm having problems with some small details in the "Depth of Field Postprocessing For Layered Scenes Using Constant-Time Rectangle Spreading" paper ([url="http://www.cs.berkeley.edu/%7Ebarsky/Blur/kosloff.pdf"]http://www.cs.berkel...lur/kosloff.pdf[/url]). Concretely, Fig. 3 bottom, which represents the normalisation table, and then (therefore) with variable per-pixel blur radii (e.g. coming from CoC), and then in general with arbitrary PSFs (but that's another story). I need to understand why the normalisation image is a pixel wider (in each direction) than the original input image, how this will change if a smaller or larger kernel is used, and ultimately what will happen with these extra pixels, which are in fact outside the input image, when variable blur is used (Fig. 3 has a constant 3x3 PSF "kernel"). I'm unable to find any implementation of any spreading (scattering) blur algorithm, including their DX10 implementation, which they mention (DX, GL, C++, Matlab, ... anything would be helpful). Anyone feeling like reading the paper and helping me out by discussing it here? [/quote] Nice paper... looks like a solution to my DOF problems on a quick skim (maybe motion blur too). When I finish fixing/optimizing my engine due to the transition to DX11 I will definitely give it a look. Might be a while though :-( [/quote] Hmmm, that paper makes no sense... There is not really a proper description of the algorithm or why it works... David