# rumble


Other than increasing tessellation, would storing at the vertex level improve the problem much? I'm reminded of spherical harmonics lighting, where you typically calculate lighting at the vertex level and let the graphics card interpolate across the triangle. Unless the lighting/shadow variation on your surfaces changes very slowly, the outlines of typical shadows become zigzagged: if a shadow edge cuts through the middle of a triangle, vertex interpolation can't reproduce that edge. Might there be some work addressing this? I'm really interested in seeing demos showcasing how good either radiosity or spherical harmonics can look, and the amount of tessellation required.
2. ## Procedural roads and race tracks

[quote name='ApochPiQ' timestamp='1314318351' post='4853884'] What part are you specifically needing help with? Generating the geometry? Simplifying the generated meshes? Stitching meshes procedurally? Generating good procedural texture coordinates? [/quote]

> Generating the geometry

I'm thinking of starting with the shape of a road cross section and replicating it along curves. For example, replicating a trough-shaped cross section to simulate Hot Wheels race tracks, but the cross sections could be much more complex.

> Simplifying the generated meshes

I'd prefer to keep fine details.

> Stitching meshes procedurally

Yes, I need some ideas here, i.e., an elegant method to stitch multiple trough-shaped cross sections together.

> Generating good procedural texture coordinates

Yes, especially for the intersection case mentioned above. Texture coordinates for a three-track intersection would be different from those for a four-track intersection.
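The geometry-generation step can be sketched as sweeping the cross-section profile along the path: build a local frame (tangent, side, up) at each path sample and emit one copy of the profile per sample; consecutive rings are then connected into quads. A minimal sketch, assuming a polyline path and a fixed world-up vector (the names and the non-vertical-path assumption are mine, not from any particular library):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add3(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return scale(a, 1.0f / l);
}

// Sweep a 2D cross-section profile (x = sideways, y = up) along a polyline
// path with at least two samples. Emits one ring of vertices per sample;
// ring i and ring i+1 can then be stitched into quads.
std::vector<Vec3> sweepProfile(const std::vector<Vec3>& path,
                               const std::vector<Vec3>& profile)
{
    std::vector<Vec3> verts;
    for (size_t i = 0; i < path.size(); ++i) {
        // Tangent from neighboring samples (one-sided at the ends).
        Vec3 t = (i + 1 < path.size()) ? sub(path[i + 1], path[i])
                                       : sub(path[i], path[i - 1]);
        t = normalize(t);
        Vec3 up = {0, 1, 0};                  // assumes the path is never vertical
        Vec3 side = normalize(cross(up, t));  // points to the left of travel
        Vec3 lift = cross(t, side);           // re-orthogonalized up vector
        for (const Vec3& p : profile)
            verts.push_back(add3(path[i], add3(scale(side, p.x), scale(lift, p.y))));
    }
    return verts;
}
```

For a trough profile like {(-1,1), (-1,0), (1,0), (1,1)} this produces Hot Wheels-style track segments; sharper turns need denser path sampling to avoid self-intersection on the inside edge.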
3. ## Procedural roads and race tracks

It's not too hard to generate race track or road geometry procedurally, but how do you nicely handle intersections? E.g.:

- a four-way street intersection
- one road branching out into two or more

Put another way: given three or more street-block meshes, all with the same cross-section shape, how should I stitch them together and fill any gap in the middle, preferably generating uniform tessellation and reasonable texture coordinates for the intersection geometry? Maybe there are papers or techniques that deal with this issue? Thanks
4. ## Wikileaks: no bloodshed inside Tiananmen Square

Anyone catch this? [url="http://www.telegraph.co.uk/news/worldnews/wikileaks/8555142/Wikileaks-no-bloodshed-inside-Tiananmen-Square-cables-claim.html"]http://www.telegraph.co.uk/news/worldnews/wikileaks/8555142/Wikileaks-no-bloodshed-inside-Tiananmen-Square-cables-claim.html[/url]

[i]The cables, obtained by WikiLeaks and released exclusively by The Daily Telegraph, partly confirm the Chinese government's account of the early hours of June 4, 1989, which has always insisted that soldiers did not massacre demonstrators inside Tiananmen Square. Instead, the cables show that [url="http://www.telegraph.co.uk/news/worldnews/asia/china/"]Chinese[/url] soldiers opened fire on protesters outside the centre of Beijing, as they fought their way towards the square from the west of the city.[/i]
5. ## Software rasterizer and float/int color

[quote name='Krypt0n' timestamp='1306234232' post='4815060'] storage using as small data as possible, computations using float (SIMDfied) [/quote] In my scene I typically have cube maps, so I use reflection vectors to index into the cube map. If the cube map stores integer RGBAs, do you think the operations to convert each RGBA into a float4 are worth it? Essentially four extra multiplications by 1/255.0 each time a texel is read. This makes me want to convert everything to 16:16 fixed point math, but, similar to the int/float dilemma with color, I ask myself whether it's worth doing. [quote name='Hodgman' timestamp='1306238802' post='4815079'] Yes, using smaller data means you can fit more in the CPUs cache at once. Moving data from RAM into the cache is the most expensive operation you can perform ([i]10-1000 times slower than mathematical operations[/i]). Your L1 cache is probably about 32KB, and your L2 cache around 2MB. If you're operating on more data than that and you're trying to optimise for performance, then you really want to think about how much data you're transferring between RAM and cache. [/quote] Would there be a worthwhile speed increase if we restricted uncompressed texture/model data to <2MB? If the rasterizer were written in Java or some other managed language, where you don't know what's going on with memory, perhaps this suggestion is even less workable?
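One common way to keep the small RGBA8 storage while avoiding the four multiplies per fetch is a one-time 256-entry byte-to-float table, so each unpack becomes four table lookups. A minimal sketch (the names and the ARGB channel layout are my assumptions):

```cpp
#include <cstdint>

struct Float4 { float r, g, b, a; };

// Precomputed byte -> float table: a one-time cost that replaces the
// four multiplies by 1/255.0 with four table lookups per texel fetch.
static float kByteToFloat[256];
static bool initTable() {
    for (int i = 0; i < 256; ++i) kByteToFloat[i] = i * (1.0f / 255.0f);
    return true;
}
static bool kTableReady = initTable();

// Unpack a 0xAARRGGBB texel into normalized floats.
Float4 unpackRGBA8(uint32_t texel) {
    return { kByteToFloat[(texel >> 16) & 0xFF],   // R
             kByteToFloat[(texel >>  8) & 0xFF],   // G
             kByteToFloat[ texel        & 0xFF],   // B
             kByteToFloat[(texel >> 24) & 0xFF] }; // A
}
```

The table is 1 KB, small enough to stay resident in L1 alongside the working set; whether it actually beats four multiplies depends on the CPU, so it's worth benchmarking both.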
6. ## Software rasterizer and float/int color

[quote name='rarelam' timestamp='1306222851' post='4815004'] I would use a integer's unless you need the extra precision of floats. One of the largest bottlenecks in a software rasterizer is memory bandwidth and using floats instead of bytes is 4x the bandwidth(Even using 16bits per channel would be an advantage over floats). It does make the code more complicated to read, one thing you can do is use floats for your calculations and just pack to ints for storing. [/quote] I don't quite get the memory bandwidth part. Do you mean things like whether the texture data is in cache? That is, if texture colors are ints rather than floats, you get more data loaded into cache? If I limit my scene to one .3ds file (size 400k-2000k) and twelve 512x512 images, would memory bandwidth issues be insignificant? At what data size does this start to matter?
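For scale, a quick back-of-the-envelope computation for the scene described (the helper function is just illustrative):

```cpp
#include <cstddef>

// Uncompressed footprint of a set of same-sized textures.
std::size_t textureBytes(std::size_t count, std::size_t w, std::size_t h,
                         std::size_t bytesPerTexel) {
    return count * w * h * bytesPerTexel;
}
```

Twelve 512x512 textures at 4 bytes/texel (RGBA8) are already 12 MiB, well past a 2 MB L2, and storing float4 colors (16 bytes/texel) quadruples that to 48 MiB, so bandwidth would likely matter for this scene even before counting the model and framebuffer.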
7. ## Software rasterizer and float/int color

For a 3D software rasterizer nowadays, is there much speed difference between using floating point or integers for color? For example, pseudo-code implementations of the two color classes are below. The float one is easier to implement and use, but I don't know how much slower it would be.

```cpp
class Color3f {
public:
    float r, g, b;
    Color3f(float r, float g, float b) : r(r), g(g), b(b) {}
    Color3f mix(const Color3f& c) const { return Color3f(r * c.r, g * c.g, b * c.b); }
    Color3f add(const Color3f& c) const { return Color3f(r + c.r, g + c.g, b + c.b); }
};

class Color3i {
public:
    int rgba;
    explicit Color3i(int rgba) : rgba(rgba) {}
    int getRed()   const { return (rgba & 0xFF0000) >> 16; }
    int getGreen() const { return (rgba & 0x00FF00) >> 8; }
    int getBlue()  const { return  rgba & 0x0000FF; }
    Color3i mix(const Color3i& c) const {
        // Dividing by 256 (a shift) is a cheap approximation of the exact /255.
        return Color3i( ((getRed()   * c.getRed())   / 256) << 16
                      | ((getGreen() * c.getGreen()) / 256) << 8
                      | ((getBlue()  * c.getBlue())  / 256) );
    }
    // An add() function would have to check for overflow/underflow. Omitted here.
};
```
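For reference, the omitted integer add() is usually done by unpacking, adding, and clamping each channel before repacking; a sketch of one straightforward (non-SIMD) way:

```cpp
#include <algorithm>

// Per-channel saturating add of two 0xRRGGBB colors: unpack, add,
// clamp each channel to [0, 255], repack.
int addClamped(int a, int b) {
    int r  = std::min(((a >> 16) & 0xFF) + ((b >> 16) & 0xFF), 255);
    int g  = std::min(((a >>  8) & 0xFF) + ((b >>  8) & 0xFF), 255);
    int bl = std::min(( a        & 0xFF) + ( b        & 0xFF), 255);
    return (r << 16) | (g << 8) | bl;
}
```

This is part of the trade-off the question is about: the float version adds three numbers, while the int version pays for unpack/clamp/repack in exchange for 4x less storage.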
8. ## Floating point dds

Is there a DirectX API function that saves a buffer in memory to disk as a .dds? I have a 2D array of positive and negative floating point values generated by my program. I wish to save this as a .dds image, which I've read supports floats. I can only find information on saving/exporting to .dds via the DirectX Texture conversion tool or other standalone programs. I searched the DirectX SDK help file for dds and there are only samples for reading it. Also, I'm curious whether floating point .dds files are fully supported by popular image processing or shader development software. Thanks
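If the data were already in a D3D texture, D3DX's D3DXSaveTextureToFile with D3DXIFF_DDS would be the usual route; for a plain memory buffer, the legacy DDS container is simple enough to write by hand. A minimal sketch for a single-mip R32F surface, based on the documented DDS header layout (function name and error handling are mine):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Writes a 2D float buffer as a legacy DDS file using FourCC code 114
// (D3DFMT_R32F): "DDS " magic, 124-byte header, then raw pixel data.
bool writeFloatDDS(const char* path, const std::vector<float>& data,
                   uint32_t width, uint32_t height)
{
    if (data.size() != static_cast<size_t>(width) * height) return false;
    uint32_t hdr[32] = {0};
    hdr[0]  = 0x20534444;  // "DDS " magic
    hdr[1]  = 124;         // header size
    hdr[2]  = 0x1007;      // DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PIXELFORMAT
    hdr[3]  = height;
    hdr[4]  = width;
    hdr[19] = 32;          // pixel-format struct size
    hdr[20] = 0x4;         // DDPF_FOURCC
    hdr[21] = 114;         // legacy FourCC code for D3DFMT_R32F
    hdr[27] = 0x1000;      // DDSCAPS_TEXTURE
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(hdr, sizeof(hdr), 1, f);
    std::fwrite(data.data(), sizeof(float), data.size(), f);
    std::fclose(f);
    return true;
}
```

Negative values round-trip fine since the payload is raw IEEE floats; support in image tools varies, though DirectX's own texture tools read this legacy float encoding.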
9. ## subsurface scattering

Just to share: I too looked at the Jensen 2001 SSS paper in the past and realized I couldn't understand the steps between successive equations without already being familiar with the classic 1960s radiative transfer text it heavily references. I.e., quite a few important equations are given without proof and used as the basis for further derivations. I don't have that book, and I doubt I would understand it, as it's written for physicists, who may not use radiance as the energy unit and who use more dx dt in integrals than I need. I don't think there is a computer graphics book that discusses it in great detail either. Maybe there's a recommended course site/notes somewhere that spells out the math between successive equations? Or someone's dissertation? I get stuck because papers say "integrate with respect to some gradient and you get the next equation," but my math isn't strong enough to arrive at the same result, if at all. More hand-holding and spoon-feeding is needed.
10. ## OpenGL Vertices with multiple normals/uvs

Thanks. I came across the Assimp viewer tool some time ago and couldn't save anything after modification, probably because it's just a viewer? Looking again at the viewer and the command line tool options, I don't see an obvious option that accomplishes what I need. Does this mean the function that converts .obj model data to VBO-friendly data is in the SDK? Also, the FAQ states the library doesn't export. I wish it did, since I feed the 'fixed' .obj file into other tools.
11. ## OpenGL Vertices with multiple normals/uvs

The .obj 3D model format is still widely used. Looking at it, the format allows multiple normals and uvs to be associated with each vertex (and I'm sure other formats do too). That is, this is possible:

```
f 1/1/1 2/2/2 3/3/3
f 2/5/5 3/7/7 4/4/4
```

Notice vertex 2 is associated with both the 2nd and 5th normal/uv, and vertex 3 with both the 3rd and 7th. When translating data in this format to, for example, OpenGL vertex/normal/uv streams to be drawn by glDrawElements(), the proper thing to do is to duplicate the vertices that have multiple normal/uv associations. Is there a tool that does this already? The few model parsing code snippets I've used before don't seem that careful about this. What are some tools/libraries that are? And what can be expected from most commercial modeling software? Do they export .obj files with a one-to-one mapping between vertices and normals/uvs? Thanks
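The duplication step itself can be sketched as mapping each distinct v/vt/vn index triple to one output vertex: shared triples are reused, and a position referenced with two different normals/uvs gets split into two vertices. A minimal sketch, assuming the face corners have already been parsed into index triples (struct and function names are mine):

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <vector>

// Collapses OBJ-style face corners ("v/vt/vn" index triples) into a single
// index stream suitable for glDrawElements: each distinct triple becomes one
// output vertex; a position referenced with different normals/uvs is duplicated.
struct IndexedMesh {
    std::vector<std::array<int, 3>> vertices;  // unique v/vt/vn triples
    std::vector<uint32_t> indices;             // one index per face corner
};

IndexedMesh buildIndexStream(const std::vector<std::array<int, 3>>& corners) {
    IndexedMesh mesh;
    std::map<std::array<int, 3>, uint32_t> seen;
    for (const auto& c : corners) {
        auto it = seen.find(c);
        if (it == seen.end()) {
            it = seen.emplace(c, static_cast<uint32_t>(mesh.vertices.size())).first;
            mesh.vertices.push_back(c);  // new unique corner -> new vertex slot
        }
        mesh.indices.push_back(it->second);
    }
    return mesh;
}
```

The resulting unique triples then drive the final positions/normals/uvs arrays by indexing into the raw v/vt/vn lists. Assimp does this splitting on import (its meshes always have one normal/uv per vertex), and its aiProcess_JoinIdenticalVertices step re-welds corners that end up truly identical.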