blackpawn

Members

  • Content count: 57
  • Joined
  • Last visited

Community Reputation

195 Neutral

About blackpawn

  • Rank: Member

Recent Posts

  1. Vertex attributes and buffers.

    yeah when blasting geometry at the GPU you generally format it so you have fat vertices, each with their attributes (like position, normal, texture coordinate, color, etc.) packed together one after the other.  so you have one buffer of pos0, normal0, texcoord0, color0, pos1, normal1, texcoord1, color1, pos2, normal2... you get the idea.  then another buffer of indices for your triangles, like 0, 1, 2, 0, 2, 3, ... where each index grabs the contiguous blob of data for the vertex at that index.

    you'll find a lot of software like max and maya, and formats like OBJ, don't specify meshes this way.  they specify them as you describe, where each face or triangle has separate indices for each vertex component: positions 0,1,2, normals 6,1,3, texcoords 4,3,1.  when you load data like this you do indeed have to duplicate vertices when the same vertex is referenced with different attributes.  while this does mean potentially having many duplicate copies of attributes (like the texture coordinate 0,0 used by many vertices), it keeps things a bit simpler, avoids the GPU running all around memory gathering up attributes from many different locations, and keeps the APIs a bit simpler as well.  in the general case of a big complex mesh created by an artist there's unlikely to be much savings in indexing attributes separately.  (there's a small code sketch of this layout and the de-duplication step after this list.)
  2. "hm i heard about people doing something like this, but to be honest i couldn't figure out how. doing a bounding volume test on a pixel's world position seems rather expensive compared to rendering a low poly sphere twice, since it involves trigonometry, and has to be done for every pixel covered by the sphere (which still has to be rendered), not only the lit areas. but i probably am missing some integral part of the algorithm. got any links? googling something like 'deferred lighting without stencil' lists a bunch of stencil algorithms -.-' but as soon as i find some info on this ill definitely look into it."

     so in a point light rendered as a sphere you're probably already computing the attenuation with some linear falloff that hits zero at the light radius.  if the distance between the light pos and the world pos is greater than the light radius then you don't want to light that point, so you either fully attenuate or discard (depending on the expense of the rest of your shader).  (the sketch after this list shows the math.)
  3. Losing "Coding Inertia" and how to deal with it?

    i'm also in the camp of, when i'm really on a roll, just putting off sleep as long as i can :)  i wind up staying up well into the next day then totally crashing, and sometimes it takes a couple days to get back to normal, but the output of those long all-night stretches is always my best and most inspired code.
  4. that is a lot of render target switching.  why not drop the stencil buffer and just handle it in the light shader: unproject from the screen space coord and depth value to a world space position and discard the fragment if it's outside the bounds of the light (super easy to calculate for boxes, spheres, and cones, and you're probably doing it anyway when computing attenuation).  then you can just set the render target once and render all your lights in one go.  (see the unproject sketch after this list.)
  5. Flow Maps + Dynamic Terrain

    do you mean analyze the terrain height data and create a flowmap from that?  sounds doable. something like gradient descent perhaps, going in the steepest downhill direction from each pixel and then smoothing and encoding those directions into the flowmap.  could get some weird vortices i guess with a noisy height map, but it could be a nice start.  (a small sketch of this is after the list.)
  6. i think every DCC like max, maya, modo, houdini, xsi, etc. is right handed. can't remember ever using one that was left handed. they may still assume different winding orders though.
  7. hmm, not sure flash and javascript are good routes if you're targeting mobile.  desktops are crazy powerful and can yield decent performance with javascript games, but mobile devices are much more constrained, with less room for inefficiencies.  of the options you listed, i'd go for Unity and give your players a nice steady frame rate :)
  8. DX11 and multiple shader Textures

    ah that's cool, it looks like they have a lot of flexibility with all those masks.  i think you may misunderstand though: those channels don't mean they are using different shaders for different parts of the character.  the character could be rendered with one draw call and a shader that computes all 8 material properties and uses the 8 masks to control those properties.  so using the same shader and different weights in those mask channels you can achieve very different looking materials on the different parts of the character (sketched after this list).  hope this helps
  9. Anyone else run into the "idk what programs to make" issue?

    sure, kinda depends where your interests are.  if you're interested in graphics and looking for something to make, a game could turn into a massive undertaking involving not much of the fun graphics you were hoping to spend your time on. with small demos and intros nearly 100% of your time is on the visuals and cool stuff. code up a tunnel, add some glow, put some music to it with lots of bass, show it off to your friends, and you may find that much more rewarding than mucking about on a more complicated project that you may never finish and never get that self-motivating payoff from. all depends on personal interest i suppose, but that's my recommendation for Bill. :)
  10. DirectX + MFC source?

    here's another vote for you to stay as far away from MFC as you can!!  
  11. Anyone else run into the "idk what programs to make" issue?

    (1) watch the demoscene documentary and get inspired to create cool effects https://vimeo.com/40192433 (2) make some demos! (3) profit!  you can start with a 4k intro that's pretty much just a GLSL or HLSL shader on a fullscreen quad, take it to a demoparty (if you're in the USA, @party is in June http://atparty-demoscene.net/), show it on the big screen, and win fabulous prizes!
  12. Dynamic Texture Blending

    that's cool!  so it looked like as they moved the props around the map the projected textures updated nicely.  i couldn't tell for sure, but it looked like it wasn't just using one of the terrain tiles but had blending between the tiles as well.  so assuming the terrain is 2D XY + height, the shader for the props would need to transform from its world space XY position to the UV coordinates of the terrain block it's sitting on, then run the terrain shader to figure out the color to blend to (likely sampling a blend weights texture and a terrain tile sheet or 4 terrain tile textures).  (that mapping is sketched after this list.)

    seems that would work well for surfaces that, like the terrain, are pretty horizontal. to handle things like vertical surfaces, triplanar blending would probably be needed.

    it could be possible to do this in screen space as well with an inferred or deferred pipeline.  like if the material shader wants to blend with terrain it could do the mapping from world position to terrain UV, then sample a virtual texture/megatexture/clipmap for the terrain blend map and tiles texture array, and again triplanar map to handle non-horizontal surfaces.
  13. no, floats have plenty of precision to address the pixel centers for those texture sizes. make sure to disable multisampling while rendering stuff like this so the gpu doesn't sample outside the glyph rects at the quad edges.
  14. you could pack your glyphs together into a larger texture using this technique: www.blackpawn.com/texts/lightmaps  (a condensed version of that packer is sketched after this list.)
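
Code Sketches

The sketches below illustrate some of the techniques discussed in the posts above. They are minimal, self-contained C++ sketches; the struct layouts, function names, and parameters are illustrative assumptions, not any particular engine's or API's actual interface.

For post 1: a "fat" interleaved vertex plus the de-duplication step used when loading OBJ-style data, where each face corner carries separate position/normal/texcoord indices. A map keyed on the (position, normal, texcoord) index triple is just one simple way to spot repeated corners.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// One "fat" vertex: all attributes packed together, one after the other.
struct Vertex {
    float        position[3];
    float        normal[3];
    float        texcoord[2];
    std::uint8_t color[4];
};

// OBJ-style face corner: separate indices into the position/normal/texcoord pools.
struct Corner {
    int positionIndex;
    int normalIndex;
    int texcoordIndex;
};

struct Mesh {
    std::vector<Vertex>        vertices;  // interleaved vertex buffer
    std::vector<std::uint32_t> indices;   // index buffer, 3 entries per triangle
};

// Build an interleaved vertex/index buffer pair from OBJ-style data, duplicating
// a vertex whenever the same position is used with a different normal or texcoord.
Mesh buildMesh(const std::vector<Corner>& corners,  // 3 per triangle
               const std::vector<std::array<float, 3>>& positions,
               const std::vector<std::array<float, 3>>& normals,
               const std::vector<std::array<float, 2>>& texcoords)
{
    Mesh mesh;
    std::map<std::tuple<int, int, int>, std::uint32_t> cache;  // corner -> output vertex index

    for (const Corner& c : corners) {
        auto key = std::make_tuple(c.positionIndex, c.normalIndex, c.texcoordIndex);
        auto it = cache.find(key);
        if (it == cache.end()) {
            Vertex v{};
            std::copy_n(positions[c.positionIndex].data(), 3, v.position);
            std::copy_n(normals[c.normalIndex].data(), 3, v.normal);
            std::copy_n(texcoords[c.texcoordIndex].data(), 2, v.texcoord);
            v.color[0] = v.color[1] = v.color[2] = v.color[3] = 255;  // default white
            it = cache.emplace(key, static_cast<std::uint32_t>(mesh.vertices.size())).first;
            mesh.vertices.push_back(v);
        }
        mesh.indices.push_back(it->second);
    }
    return mesh;
}
```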
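
For posts 2 and 4: a CPU-side mirror of the per-fragment logic — reconstruct the world position from the screen coordinate and depth value with an inverse view-projection matrix, then attenuate or reject based on the distance to the light. In an actual light shader the zero return would become a discard (or simply zero contribution); the column-major matrix layout and the [0,1] depth range are assumptions here.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Multiply a point by a 4x4 column-major matrix and divide by w.
Vec3 transformPoint(const float m[16], float x, float y, float z)
{
    float ox = m[0] * x + m[4] * y + m[8]  * z + m[12];
    float oy = m[1] * x + m[5] * y + m[9]  * z + m[13];
    float oz = m[2] * x + m[6] * y + m[10] * z + m[14];
    float ow = m[3] * x + m[7] * y + m[11] * z + m[15];
    return { ox / ow, oy / ow, oz / ow };
}

// Reconstruct the world position from the screen UV and depth-buffer value, then
// decide how strongly this point is lit.  Returns the attenuation factor, where
// 0 means "outside the light" (a discard in an actual fragment shader).
float pointLightAttenuation(const float invViewProj[16],  // clip -> world (assumed)
                            float screenU, float screenV, float depth,
                            Vec3 lightPos, float lightRadius)
{
    // Screen UV in [0,1] -> normalized device coordinates in [-1,1].
    float ndcX = screenU * 2.0f - 1.0f;
    float ndcY = screenV * 2.0f - 1.0f;
    float ndcZ = depth   * 2.0f - 1.0f;  // adjust if your API keeps clip z in [0,1]

    Vec3 world = transformPoint(invViewProj, ndcX, ndcY, ndcZ);

    float dx = world.x - lightPos.x;
    float dy = world.y - lightPos.y;
    float dz = world.z - lightPos.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    if (dist >= lightRadius)
        return 0.0f;                     // outside the light volume: discard / skip

    return 1.0f - dist / lightRadius;    // simple linear falloff, zero at the radius
}
```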
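
For post 5: building a flowmap from a heightmap by pointing each texel in the steepest downhill direction (the negative gradient via central differences) and encoding the direction into two 8-bit channels. The smoothing pass suggested in the post is left out to keep the sketch short.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Build a flowmap (two bytes per texel) from a row-major heightmap of size w x h.
std::vector<std::uint8_t> buildFlowMap(const std::vector<float>& height, int w, int h)
{
    auto at = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);  // clamp sampling at the borders
        y = std::clamp(y, 0, h - 1);
        return height[y * w + x];
    };

    std::vector<std::uint8_t> flow(w * h * 2);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Downhill direction = negative gradient of the height field.
            float dirX = -(at(x + 1, y) - at(x - 1, y)) * 0.5f;
            float dirY = -(at(x, y + 1) - at(x, y - 1)) * 0.5f;
            float len = std::sqrt(dirX * dirX + dirY * dirY);
            if (len > 1e-6f) { dirX /= len; dirY /= len; }

            // Encode [-1,1] into [0,255]; the shader decodes with value * 2 - 1.
            flow[(y * w + x) * 2 + 0] = static_cast<std::uint8_t>((dirX * 0.5f + 0.5f) * 255.0f);
            flow[(y * w + x) * 2 + 1] = static_cast<std::uint8_t>((dirY * 0.5f + 0.5f) * 255.0f);
        }
    }
    return flow;
}
```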
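
For post 8: the single-shader idea — one set of material definitions, eight mask weights (for example two RGBA mask textures) blended per pixel. The specific material properties and the weight normalization are assumptions for illustration.

```cpp
#include <array>

struct Material {
    float baseColor[3];
    float roughness;
    float metalness;
};

// One shader / one draw call: blend 8 material definitions with the 8 mask
// weights sampled for this pixel (the texture fetches are assumed done already).
Material blendMaterials(const std::array<Material, 8>& materials,
                        const std::array<float, 8>& maskWeights)
{
    Material out{};
    float total = 0.0f;
    for (float w : maskWeights) total += w;
    if (total <= 0.0f) return materials[0];  // nothing painted here: fall back

    for (int i = 0; i < 8; ++i) {
        float w = maskWeights[i] / total;    // normalize so the weights sum to 1
        out.baseColor[0] += materials[i].baseColor[0] * w;
        out.baseColor[1] += materials[i].baseColor[1] * w;
        out.baseColor[2] += materials[i].baseColor[2] * w;
        out.roughness    += materials[i].roughness * w;
        out.metalness    += materials[i].metalness * w;
    }
    return out;
}
```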
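
For post 12: mapping a prop's world-space XY onto the terrain block's texture space, then blending the terrain tiles with the sampled blend weights. The terrain origin/size parameters and the four-tile blend are assumptions; the texture sampling itself is left out.

```cpp
struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

// Map a prop's world-space XY onto the terrain block's [0,1] texture space so
// the prop shader can evaluate the same blend the terrain does underneath it.
Float2 worldToTerrainUV(Float2 worldXY, Float2 terrainOrigin, Float2 terrainSize)
{
    return { (worldXY.x - terrainOrigin.x) / terrainSize.x,
             (worldXY.y - terrainOrigin.y) / terrainSize.y };
}

// Blend four terrain tile colors with the blend weights sampled at that UV
// (the texture fetches themselves are assumed to have happened already).
Float3 blendTerrainTiles(const Float3 tileColors[4], const float blendWeights[4])
{
    Float3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        out.r += tileColors[i].r * blendWeights[i];
        out.g += tileColors[i].g * blendWeights[i];
        out.b += tileColors[i].b * blendWeights[i];
    }
    return out;
}
```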
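
For post 14: a condensed C++ version of the binary-tree packing approach described at that link — each node either holds a glyph or splits its free space into two children.

```cpp
#include <memory>

struct Rect { int x, y, w, h; };

// Node of the packing tree from www.blackpawn.com/texts/lightmaps: a node is
// either a free/occupied leaf or has two children that partition its rect.
struct Node {
    Rect rect{};
    std::unique_ptr<Node> child[2];
    bool occupied = false;

    // Try to place a w x h glyph in this subtree; returns the node used or null.
    Node* insert(int w, int h)
    {
        if (child[0]) {                            // not a leaf: try both children
            if (Node* n = child[0]->insert(w, h)) return n;
            return child[1]->insert(w, h);
        }
        if (occupied || w > rect.w || h > rect.h)  // already used, or too small
            return nullptr;
        if (w == rect.w && h == rect.h) {          // perfect fit: take this leaf
            occupied = true;
            return this;
        }
        // Otherwise split the leftover space along its larger dimension.
        child[0] = std::make_unique<Node>();
        child[1] = std::make_unique<Node>();
        int dw = rect.w - w;
        int dh = rect.h - h;
        if (dw > dh) {
            child[0]->rect = { rect.x,     rect.y, w,          rect.h };
            child[1]->rect = { rect.x + w, rect.y, rect.w - w, rect.h };
        } else {
            child[0]->rect = { rect.x, rect.y,     rect.w, h          };
            child[1]->rect = { rect.x, rect.y + h, rect.w, rect.h - h };
        }
        return child[0]->insert(w, h);             // the glyph goes into the first child
    }
};
```

Usage: create a root Node whose rect covers the whole atlas, insert the glyph rects (ideally sorted largest first), and use each returned node's rect as that glyph's position in the texture.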