# AndyFirth

1. ## How to find nearest vertex of a mesh to some other point?

if you're trying to deform mesh A based on proximity to a plane, then do it in the vertex shader. Use the planar distance and smoothstep towards the centre of the model along the vertex normal... this will also give you a "bouncy" look.
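The planar-distance + smoothstep idea could be sketched like this on the CPU side (in practice it would live in the vertex shader). The vector type, `falloff` parameter, and function names are illustrative assumptions, not from the post:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Classic Hermite smoothstep, as provided by shader languages.
float smoothstepf(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// How far to push a vertex along its normal toward the model centre:
// 1 at the plane, easing to 0 once the vertex is 'falloff' away.
// The plane is given as a unit normal planeN and offset planeD.
float deformAmount(const Vec3& p, const Vec3& planeN, float planeD, float falloff) {
    float dist = std::fabs(p.x * planeN.x + p.y * planeN.y + p.z * planeN.z + planeD);
    return 1.0f - smoothstepf(0.0f, falloff, dist);
}
```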
2. ## How to find nearest vertex of a mesh to some other point?

whats the actual problem you're trying to solve? collision detection?
3. ## Ideas on how to implement a flexible Triangle and Mesh class

is this for processing of mesh data or rendering of it?
4. ## 2D Quads, sort by depth or by texture?

Over the years i've used many sort algorithms and techniques for sorting drawlists... What i use these days removes ALL complexity from the sorting itself: i make the core sort algorithm use a simple uint32_t and set up various methods of initializing it. In your case you mention having transparent planes and a number of materials, and you wish to render them in a close-to-optimal order.

```cpp
union sortkey_u
{
    struct sortkey_opaque
    {
        // first-declared field lands in the most significant bits on the
        // big-endian consoles this was written for; bit-field layout is
        // implementation-defined, so check your target
        uint32_t mTrans    : 1;
        uint32_t mMaterial : MATERIAL_SORT_BITS;
        uint32_t mZSort    : ZSORT_BITS;
    } m_opaque;

    struct sortkey_trans
    {
        uint32_t mTrans    : 1;
        uint32_t mZSort    : ZSORT_BITS;
        uint32_t mMaterial : MATERIAL_SORT_BITS;
    } m_trans;

    uint32_t mValue;
};
```

so each object would fill out the union, choosing which side of the union to initialize based on the mTrans flag itself. In this case, the higher mValue is, the later in the final list it will appear. You would then initialize mZSort as:

```cpp
if (mTrans)
{
    mZSort = MAX_Z * (1 - RealZ / MaxZRange);
}
else
{
    mZSort = MAX_Z * (RealZ / MaxZRange);
}
```

which would mean opaque objects render sorted by material THEN Z front to back, and transparent objects render sorted by Z back to front THEN material. I have an algorithm based on this running in our engine right now (with various other elements) and it works well... requires only one sort of all objects, and if you insert object address bits into the heuristic (low) the sort is also stable using std::qsort
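Because bit-field layout is implementation-defined, a portable version of the same single-key idea can build the key with explicit shifts instead. The bit widths below (10 material bits, 21 Z bits) are illustrative assumptions, not values from the post:

```cpp
#include <cstdint>

// Illustrative bit budget: 1 trans bit + 10 material bits + 21 Z bits = 32.
constexpr uint32_t kMaterialBits = 10;
constexpr uint32_t kZBits        = 21;
constexpr uint32_t kMaxZ         = (1u << kZBits) - 1;
constexpr uint32_t kMaxMaterial  = (1u << kMaterialBits) - 1;

// Builds one uint32_t sort key. Sorting keys ascending then yields:
// all opaque objects first (material major, Z near-to-far minor),
// then transparent objects (Z far-to-near major, material minor).
uint32_t makeKey(bool trans, uint32_t material, float realZ, float maxZRange) {
    float t = realZ / maxZRange;
    uint32_t z = static_cast<uint32_t>(kMaxZ * (trans ? 1.0f - t : t));
    uint32_t key = trans ? 0x80000000u : 0u;              // trans flag in the top bit
    if (trans)  key |= (z << kMaterialBits) | material;   // Z major, material minor
    else        key |= (material << kZBits) | z;          // material major, Z minor
    return key;
}
```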
5. ## Software triangle rasterization trouble

Quote: "then optimize the code"

basic rule of programming... optimize once it's working, and only if required.
6. ## Conceptual questions about designing D3D render framework

generally speaking you should abstract from the top down. A typical example would be:

- Final Target
  - Target X
    - Viewport X
      - Camera X
  - Target Y
    - Viewport Y
      - Camera Y

however many systems have many targets within a single viewport for, say, deferred + forward + full screen effects + particles etc., combining them in the final "draw". there really isn't a "right" way, however... to each their own... there is only a way to optimize your personal preference, and that can sometimes remove certain options.
7. ## Software triangle rasterization trouble

have you debugged to figure out if your gaps are between triangles or within triangles? if they are within triangles then debug a single triangle to figure out why. if they are between triangles then find out why your snapping to top left isn't working.
8. ## Software triangle rasterization trouble

just a guess but i'd check that conversion direct from float to int is what you want... a straight cast truncates toward zero; you should almost always do a +0.5f then floor.
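A minimal illustration of the difference (helper names are hypothetical):

```cpp
#include <cmath>

// A bare cast truncates toward zero, which is usually not the rounding
// you want when snapping vertex coordinates to pixel centres.
int truncToInt(float x) { return static_cast<int>(x); }

// The +0.5f-then-floor approach rounds to the nearest integer.
int roundToInt(float x) { return static_cast<int>(std::floor(x + 0.5f)); }
```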
9. ## How to perform software clipping properly?

http://en.wikipedia.org/wiki/Clipping_(computer_graphics) see Sutherland–Hodgman... probably the easiest to understand. been a LONG time since i did this tho (~15 years) so expect that something better/faster is now out there.
10. ## Lens distortion

there are several ways to do it:

- full screen effect
  - using a simple formula to generate "offsets" that warp the main image
  - using a full screen texture that contains the offsets
  - generating a full screen texture during the game as a new pass; more often used for distortion for heat haze
- perspective matrix warping
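The "simple formula" offset approach could look like this radial (barrel) distortion, which would normally run per-pixel in a shader. The `k` coefficient and the r' = r * (1 + k * r^2) formula are a common choice, not something specified in the post:

```cpp
// Warps a UV coordinate radially away from the screen centre.
// k > 0 gives barrel distortion, k < 0 pincushion, k = 0 identity.
void distortUV(float u, float v, float k, float& uOut, float& vOut) {
    float du = u - 0.5f, dv = v - 0.5f;   // centre the coordinates
    float r2 = du * du + dv * dv;         // squared radius from centre
    float s  = 1.0f + k * r2;             // radial scale factor
    uOut = 0.5f + du * s;
    vOut = 0.5f + dv * s;
}
```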
11. ## Metal Gear Solid 1 PSX - Graphics Discussion

been a VERY long time since this game came out... probably a good idea to remind people with screens (me especially :D)
12. ## For Faster HLSL Sky Gradient :Texture or Color Lerping?

it is definitely true that when it boils down to it... having the option to switch between ALU & Texture on the fly provides the best configurability. has to be said tho... most people on here should not be hitting GPU bound unless they have a degenerate scene/approach.
13. ## For Faster HLSL Sky Gradient :Texture or Color Lerping?

Quote: Original post by samoth
> Texture lookups as optimisation are something you'd usually have done 4-6 years ago; today you would only use that for really, really complex calculations (Oren-Nayar lighting or such). For pretty much anything else, ALU is as fast or faster, and will grow faster on future hardware. Texture reads (and memory bandwidth) have improved somewhat during the last years and will keep improving slowly and steadily, but ALU power has been and (probably) will be growing for a while far beyond Moore's law. (Actually, Moore's law isn't a law at all, more like an observation that was done once and misquoted many times, and rather seems to be a self-fulfilling prophecy today... you get the impression chip manufacturers work hard to fulfill that "law" and not go beyond it... but anyway, I'm getting esoteric :-) When you talk about "doubling power every 18 months" for CPUs, it is more like 9-10 months when it comes to graphics cards. So the ALU-texture gap will only become much, much bigger.)

all true... most games are written for tech that is 3+, sometimes 5+, years old. personally i only work on ps3/360.
14. ## For Faster HLSL Sky Gradient :Texture or Color Lerping?

it is a fact that hardware is well optimized for reading textures; the operation is extremely well defined and so every possible shortcut has been taken already. Given the requirements, and assuming it's heading for a production game, i would tend towards authoring to the final intent rather than the development side. So if i were implementing this i would:

- write an interface to allow generation of the sky using a shader (as complex as you like).
- generate the texture using said shader as and when things change.
- write the base use case using the texture.

this provides the best of both worlds: the optimal production use case and ease of production iteration.
15. ## CPU skinning, generating normals? [SOLVED]

skinning a single instance is always faster on the GPU; however, you have to jump through some hoops to get it working. it's almost always faster to use the CPU should you need the results more than once (say deferred + shadow * 3). personally i came up with a nifty algorithm that, when implemented correctly, comes out at zero cache-miss waits and (on x360) can do 100+ full 5k-vert meshes easily within 5-7ms. out-of-order execution CPUs may do better with a different algorithm however