About bandages


  1. bandages

    Dirt And Rock Textures Using Blender Particle Systems

    Thanks a lot for this. I'm a little confused by the layout of the planes and scaling. Is A resized so that it no longer fills the camera's view? In trying to follow along, I notice that the particles extend past the border of the plane/viewport. Won't this create some problems with seams?
  2. Thank you! I'd been working on this since writing but wasn't getting anywhere. I'd just given up when I read your message, figuring I'd wait until I'm smarter. I replaced my ridiculous, non-functional code and it works. Now I just have to figure out when to actually use + and -, and .zy vs. .yz, since I just trial-and-errored it. I'm sure there's a reason cubemap filtering goes so slowly. But at least I've already found things to read and try when it comes to that, so hopefully I won't get stuck.
  3. I'm working in an old application (DX9-based) where I don't have access to the C code, but I can write any (model 3.0) HLSL shaders I want. I'm trying to mess with some cube mapping concepts. I've gotten to the point where I'm rendering a cube map of the scene to a cross cube that I can plug directly into ATI cubemapgen for filtering, which is already easier than trying to make one in Blender, so I'm pretty happy so far.

     But I would like to do my own filtering and lookups for two purposes: one, to effortlessly render directly to a sphere map (which is the out-of-the-box environment mapping for the renderer I'm using), and two, to try out dynamic cube mapping so I can play with something approaching real-time reflections. Also, eventually, I'd like to do real-ish-time angular Gaussian blur on the cube map, so that I can get a good feel for how to map specular roughness values to Gaussian-blurred environment mip levels. It's hard to get a feel for that when it requires processing through several independent, slow applications.

     Unfortunately, the math to do lookups and filtering is challenging, and I can't find anybody else online doing the same thing. It seems to me that I'm going to need a world-vector-to-cube-cross-UV function for the lookup, then a cube-cross-UV-to-world-vector function for the filtering (so I can point-sample four or more adjacent texels, then interpolate on the basis of angular distance rather than UV distance).

     First, I'm wondering if there's any kind of matrix that I can use here to transform a vector to the cube-cross map, rather than doing a bunch of conditionals based on which cube face I want to read. This seems like maybe it would be possible? But I'm not really sure; it's kind of a weird transformation. Right now, my cube cross is a 3:4 portrait, going top/front/bottom/back from top to bottom, because that's what cubemapgen wants to see. I suppose I could make another texture from it with a different orientation, if that would mean I could skip a bunch of conditionals on every lookup.

     Second, it seems like once I have the face, I could just use something like my rendering matrix for that face to transform a vector to UV space, but I'm not sure that I could use the inverse of that matrix to get a vector from an arbitrary cube texel for filtering, because it involves a projection matrix. I know those are kind of special, but I'm still wrapping my head around a lot of these concepts. I'm not even sure I could build the inverse very easily; I can grab an inverseProj from the engine, but I'm writing to projM._11_22 to set the FOV to 90 degrees, and I'm not sure how that would affect the inverse.

     Really interested in any kind of discussion of the techniques involved, as well as any free resources. I'd like to solve the problem, but it's much more important to me to use the problem as a way to learn more.
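     A minimal sketch of the face-selection half of the lookup (world vector to face index plus UV), written in Python rather than HLSL just to pin down the math. The face indices and UV orientations here are one common major-axis convention, not necessarily cubemapgen's, and mapping a face index to a tile offset in the 3:4 cross would be an extra step:

```python
# Hypothetical sketch: pick the cube face a direction vector hits, and
# the [0,1] UV within that face, by selecting the dominant axis.
def dir_to_cube_uv(x, y, z):
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +X or -X face
        face = 0 if x > 0 else 1
        u, v, m = (-z if x > 0 else z), -y, ax
    elif ay >= az:                       # +Y or -Y face
        face = 2 if y > 0 else 3
        u, v, m = x, (z if y > 0 else -z), ay
    else:                                # +Z or -Z face
        face = 4 if z > 0 else 5
        u, v, m = (x if z > 0 else -x), -y, az
    # remap from [-1,1] on the dominant face to [0,1] texture coords
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

print(dir_to_cube_uv(0.0, 0.0, 1.0))  # center of the +Z face: (4, 0.5, 0.5)
```

     The branch on the dominant axis is exactly the "bunch of conditionals" in question; as far as I know there is no single linear matrix that replaces it, because face selection is inherently piecewise.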
  4. Thanks, that's easy to believe, and useful to know. I actually made my own shader to bake lighting/materials/etc. to textures in-engine before I discovered that in reality, there was nearly no difficulty involved in Blender baking. So I can imagine the same is true with normals too. (But it wasn't bad HLSL practice either, not a bad way to get more comfortable with the concepts, not something I regret doing.) So much just seems to be about finding the time to learn it, when there's so much to be learned, and it's difficult to know beforehand what's going to be hard to learn and what's going to be easy.
  5. Thanks for your responses, I think I understand better now. It sounds like it should be acceptable (it's nice to know that some awful artifact isn't going to jump out at me), and the real issue is matching UV coords, matching corresponding points/spaces. There are certainly situations where this is easy via UV correspondence: if your high poly is just a subdivided low poly, it seems like it would be trivial, and what I was doing with planes was trivial. But I can see now how there are situations where it wouldn't be trivial.
  6. I've built some simple normal maps out of meshes and a custom HLSL shader that writes their normals to the screen. While I've only used this for creating tiling normal maps, where I control the orientation of the mesh used to generate normals, I don't see why I couldn't do this for a full-model normal map, placing the models in screen space based on their UV rather than world-space coords, writing the normals of the low-poly to one image, the high-poly to another, and the vector necessary to transform the normals of the first to the second onto a third image. With the tiling normal maps I've made, I haven't seen any artifacts or weirdnesses. All it takes is one or two models, a relatively simple shader, and a single frame of computer time. But when I visit modelling sites, baking normals sounds like a major headache, involving the creation of a cage and a lengthy bake process. It sounds like the modelling packages are using some kind of raycasting algorithm. There must be a reason not to be doing things the way that I've been doing them. Can anyone explain to me the problems with creating normal maps via shader?
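     For what it's worth, the screen-space placement described above comes down to one remap in the vertex shader: output each vertex at its UV coordinate instead of its world position, so rasterizing the mesh "unwraps" it into texture space. A tiny Python sketch of just that remap, assuming D3D-style texture coordinates with v pointing down:

```python
def uv_to_clip(u, v):
    # Map texture coords ([0,1] range, v down) to clip-space x,y ([-1,1], y up)
    # so that rasterizing the mesh paints each triangle at its UV location.
    return 2.0 * u - 1.0, 1.0 - 2.0 * v

print(uv_to_clip(0.5, 0.5))  # (0.0, 0.0): the center of the texture
```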
  7. Thanks. I'm doing everything but the divide in the VS now and it looks good. I'm having some issues implementing my shadow buffer, but it's a situation where I'll probably need to play with it for a few days (sleeping on it always seems to help). I appreciate the help regarding mat3tomat4: seeing that example will help me simplify other things that I do as well.

     Edit: Oh, I misunderstood something, but I see now. Rather than trying to fit my screen into the (0-1, 0-1) range, I should try to fit my texture into the -1 to 1 range. Doing this after the w-divide is appropriate.

     Edit 2: I believe everything is working, but I need to do more testing to be sure, and make sure I'm handling things like alpha. There is something extremely magical about making my own shadow buffer for the first time. Thank you again for all of your help.
  8. Thanks, I think I see what you're saying. If I understand correctly, the UV coordinates won't quite be the same if I apply them before the w-divide, but that's probably an error in my current version. (This is the first time I've ever made a projection matrix, or even an inverse matrix, and I wanted to keep them clean, because I guess I'm scared I'll never be able to get close again. But I'll make a new matrix and multiply it in before the w-divide, which should let me make a shadow buffer.) Do you have any comments on anything else? With that 3x3-matrix-to-4x4 function, it feels like there should be a better way, one that I just don't know about. (The FOV is also not what I'm treating it as, but that may be related to my scale+shift after the divide.)
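     The "multiply it in before the w-divide" trick works because a matrix translation entry gets multiplied by w, so it survives the divide: (s*x + b*w)/w == s*(x/w) + b. A quick numeric check with illustrative numbers only:

```python
# One clip-space coordinate before the perspective divide.
x, w = 3.0, 2.0
s, b = 0.5, 0.5   # e.g. the NDC [-1,1] -> texture [0,1] remap

after_divide = s * (x / w) + b        # scale+bias applied after the divide
before_divide = (s * x + b * w) / w   # same remap baked into the matrix

print(after_divide, before_divide)  # identical: 1.25 1.25
```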
  9. I'm an amateur trying to learn HLSL techniques. I'm currently trying to implement texture projection (making a movie projector) in a DX9 environment. I'm running my vertices through an alternate view and projection and using the result as UV coordinates on a texture. However, I find that the coordinates are very different depending on whether I convert them from screen coordinates to texture coordinates in the vertex shader or in the pixel shader, and I don't know why. I suspect it may have something to do with some kind of automatic conversion going on between the vertex shader and the pixel shader?

     I don't care much about performance, but I really want to use the vertex shader for this calculation so that I can shadow the projection, shadow-buffer style. But there are artifacts and clones that I can't live with. I'm attaching two pics: one showing the artifacts when calculating UV coordinates in the vertex shader, one when calculating the UV coordinates in the pixel shader (which, other than the lack of shadowing, I'm happy with).

     Here is the almost-complete code (I'm leaving out the wide variety of technique calls that all look the same). I'm never sure whether to whittle this down to what's relevant in order to save you some effort in understanding, or to leave it complete in case I turn out unqualified to be the one-that-whittles. Here, there is a single line in the pixel shader that I'm uncommenting in order to replace the UV coordinates with those computed in the vertex shader. I'm certain that there are a lot of other things that I'm doing poorly as well, and I appreciate any extra recommendations. I don't have access to the main executable, just the HLSL. I greatly appreciate any help anyone is willing to offer. Thanks for looking.
#define MOVIETEX "b.png"
//#define MOVIETEX "test.gif"
//#define MOVIETEX "NT.gif"
#define VSVRS vs_2_0
#define PSVRS ps_2_0 //animated textures don't work in v3.0
#define PI 3.14159265f
#define IDENTITYMATRIX {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}
#define BLACK float4(0,0,0,1)
#define CONT_MODEL_INSTANCE "Projector.pmx"

float4x4 cProjector : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Projector"; >;
float4 cFOV : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "FOV"; >;
float4 cBrightness : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Brightness"; >;
float4 cCol : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Color"; >;
float4 cNearFar : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;
float3 cZVec : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;

static float3 projWPos = float3(cProjector._41, cProjector._42, cProjector._43);

float4x4 WorldMatrix : WORLD;
float4x4 ViewMatrix : VIEW;
float4x4 ViewProjMatrix : VIEWPROJECTION;
float4x4 WorldViewProjMatrix : WORLDVIEWPROJECTION;
float4x4 ProjMatrix : PROJECTION;
float4 MaterialDiffuse : DIFFUSE < string Object = "Geometry"; >;
float3 MaterialAmbient : AMBIENT < string Object = "Geometry"; >;
float4 TextureAddValue : ADDINGTEXTURE;
float4 TextureMulValue : MULTIPLYINGTEXTURE;

texture MovieTex : ANIMATEDTEXTURE < string ResourceName = MOVIETEX; >;
sampler MovieSamp = sampler_state {
    texture = <MovieTex>;
    MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
    ADDRESSU = BORDER; ADDRESSV = BORDER; BORDERCOLOR = BLACK;
};
texture ObjectTexture : MATERIALTEXTURE;
sampler ObjTexSampler = sampler_state {
    texture = <ObjectTexture>;
    MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
    ADDRESSU = WRAP; ADDRESSV = WRAP;
};

technique EdgeTec < string MMDPass = "edge"; > { //disable
}
technique ShadowTec < string MMDPass = "shadow"; > { //disable
}
technique ZplotTec < string MMDPass = "zplot"; > { //disable
}

float4x4 mat3tomat4(float3x3 inpM) {
    float4x4 outp = IDENTITYMATRIX;
    outp._11 = inpM._11; outp._12 = inpM._12; outp._13 = inpM._13;
    outp._21 = inpM._21; outp._22 = inpM._22; outp._23 = inpM._23;
    outp._31 = inpM._31; outp._32 = inpM._32; outp._33 = inpM._33;
    outp._41 = 0.0f; outp._42 = 0.0f; outp._43 = 0.0f;
    outp._14 = 0.0f; outp._24 = 0.0f; outp._34 = 0.0f;
    return outp;
}

float4x4 invertTR4x4(float4x4 inpM) {
    //inverts a typical 4x4 matrix composed of only translations and rotations
    float4x4 invTr = IDENTITYMATRIX;
    invTr._41 = -inpM._41; invTr._42 = -inpM._42; invTr._43 = -inpM._43;
    float3x3 invRot3x3 = transpose((float3x3)inpM);
    float4x4 invRot4x4 = mat3tomat4(invRot3x3);
    float4x4 outpM = mul(invTr, invRot4x4);
    return outpM;
}

float4x4 getPerspProj(float2 Fov, float near, float far) {
    //http://www.codinglabs.net/article_world_view_projection_matrix.aspx
    //receives FOV in degrees
    Fov *= PI / 180.0f;
    Fov = 1.0f/Fov;
    float4x4 outp = IDENTITYMATRIX;
    outp._11 = atan(Fov.x/2.0f);
    outp._22 = atan(Fov.y/2.0f);
    outp._33 = -(far+near)/(far-near);
    outp._43 = (-2.0f*near*far)/(far-near);
    outp._34 = -1.0f;
    outp._44 = 0.0f;
    return outp;
}

struct BufferShadow_OUTPUT {
    float4 Pos : POSITION;
    float4 PTex : TEXCOORD0; //texture coordinates in alternate projection
    float4 UV : TEXCOORD1;
    float3 Normal : TEXCOORD2;
    float3 PEye : TEXCOORD3;
    float2 Tex : TEXCOORD4;
    float4 wPos : TEXCOORD5;
    float4 Color : COLOR0;
};

BufferShadow_OUTPUT BufferShadow_VS(float4 Pos : POSITION, float3 Normal : NORMAL, float2 Tex : TEXCOORD0, float2 Tex2 : TEXCOORD1, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) {
    BufferShadow_OUTPUT Out = (BufferShadow_OUTPUT)0;
    Pos = mul(Pos, WorldMatrix);
    Out.PEye = cZVec - projWPos.xyz; //easier than transforming Zvec
    Out.wPos = Pos;
    Out.Pos = mul(Pos, ViewProjMatrix);
    float4x4 invTR = invertTR4x4(cProjector);
    Out.PTex = mul(Pos, invTR);
    float4x4 altProj = getPerspProj((cFOV.xy)*cFOV.z, cNearFar.x, cNearFar.y);
    Out.PTex = mul(Out.PTex, altProj);
    Out.UV = Out.PTex;
    Out.UV.xyz /= Out.UV.w;
    Out.UV.x = (Out.UV.x + 0.5f)*2.0f;
    Out.UV.y = (-Out.UV.y + 0.5f)*2.0f;
    Out.UV.xy -= 0.5f; //texture is centered on 0,0
    Out.Normal = normalize(mul(Normal, (float3x3)WorldMatrix));
    Out.Tex = Tex;
    Out.Color.rgb = MaterialAmbient;
    Out.Color.a = MaterialDiffuse.a;
    return Out;
}

float4 BufferShadow_PS(BufferShadow_OUTPUT IN, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) : COLOR {
    float4 Color = IN.Color;
    float3 PEn = normalize(IN.PEye);
    float3 Nn = normalize(IN.Normal);
    if (useTexture) {
        float4 TexColor = tex2D(ObjTexSampler, IN.Tex);
        TexColor.rgb = lerp(1, TexColor * TextureMulValue + TextureAddValue, TextureMulValue.a + TextureAddValue.a).rgb;
        Color *= TexColor;
    }
    float4 UV = IN.PTex;
    UV.xyz /= UV.w;
    UV.x = (UV.x + 0.5f) * 2.0f;
    UV.y = (-UV.y + 0.5f) * 2.0f;
    UV.xy -= 0.5f;
    //uncommenting seems like it should provide same output yet doesn't
    //UV = IN.UV;
    float4 projTex = tex2D(MovieSamp, UV.xy);
    Color *= projTex;
    Color = projTex;
    Color.rgb *= pow(dot(Nn, PEn), 0.6f);
    Color.rgb *= cCol.rgb;
    Color.rgb *= cBrightness.x;
    if ((UV.z < 0.0f) || (UV.z > 1.0f) || (UV.x < 0.0f) || (UV.x > 1.0f) || (UV.y < 0.0f) || (UV.y > 1.0f)) {
        return BLACK; //outside range; using border mode giving me artifacts i don't understand
    } else {
        return Color;
    }
}

technique MainTecBS0 < string MMDPass = "object_ss"; bool UseTexture = false; bool UseSphereMap = false; bool UseToon = false; > {
    pass DrawObject {
        VertexShader = compile vs_3_0 BufferShadow_VS(false, false, false);
        PixelShader = compile ps_3_0 BufferShadow_PS(false, false, false);
    }
}
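One hedged observation on the question itself: the usual culprit for VS-vs-PS differences like this is interpolation of vertex-shader outputs across the triangle. A Python sketch with made-up numbers (linear interpolation used for clarity) of why a w-divide done per vertex doesn't survive interpolation:

```python
# The perspective-correct per-pixel value interpolates u/w and 1/w and then
# divides; dividing at the vertices and interpolating the result is different
# whenever the two vertices have different w.
def lerp(a, b, t):
    return a + (b - a) * t

# projector-space (u, w) at two vertices of an edge (illustrative numbers)
u0, w0 = 0.2, 1.0   # near vertex
u1, w1 = 0.9, 4.0   # far vertex
t = 0.5             # midpoint of the edge in screen space

# divide per pixel: interpolate u/w and 1/w, then divide
u_pixel = lerp(u0 / w0, u1 / w1, t) / lerp(1.0 / w0, 1.0 / w1, t)

# divide at the vertices, then interpolate the already-divided value
u_vertex = lerp(u0 / w0, u1 / w1, t)

print(u_vertex, u_pixel)  # 0.2125 vs 0.34: they disagree because w0 != w1
```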
  10. After a night's rest, it seems to me that I shouldn't be using angle = PI*(1.0f-((dotProd + 1.0f)/2.0f)); but should instead be using angle = acos(dotProd);. However, this gives me an apparently identical response for angles up to Pi/2 radians, and breaks down when it reaches something like Pi*4/3. It seems right theoretically, but looks entirely wrong. The relationship is not a power relationship. Nevertheless, using the code I provided above, but adding dotProd = 1.0f - pow(1.0f-dotProd, 0.25f); gives me something very close to correct. Currently I'm just hacking my way through it with this correction, creating a new node at the not-quite-right angle in order to approach the correct path.
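     A quick numeric comparison of the two formulas (in Python, just to see where they agree): they match exactly at dot = -1, 0, and +1, which makes the linear remap look right at 0, 90, and 180 degrees while lagging in between.

```python
import math

def angle_linear(d):
    # the original remap: angle = PI*(1 - (dot + 1)/2)
    return math.pi * (1.0 - (d + 1.0) / 2.0)

def angle_acos(d):
    # the exact angle between two unit vectors with dot product d
    return math.acos(d)

for d in (1.0, 0.7071, 0.0, -0.7071, -1.0):
    print(d, round(angle_linear(d), 4), round(angle_acos(d), 4))
# at d = cos(45 deg) the linear remap gives ~0.46 rad instead of ~0.79 rad
```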
  11. I'm trying to implement animation by deformation along a path in Miku Miku Dance. This problem is interesting to me, it opens up new options to animators, and it seems like a good way for me to learn more about transformations. I'm doing my deformation in an HLSL vertex shader, using bones as path nodes, using quaternions to create matrices to rotate my vertices, traveling down the path and rotating as I go. I don't understand quaternion math, but I found this code online, and it's worked for me in other places. It's almost right. I really think I'm doing the right thing. Almost. But my angles aren't right. Demonstrated in the picture (yes, I'm using an actual arrow model to test). At 90 degree intervals, the angles are correct. As I go from no transformation to 90 degrees, the transformation lags the vector. From 90 degrees to 180 degrees, the transformation overtakes the vector. This is symmetrical; the transformation lags the -45 degree vector the same as the +45 degree vector.

     Here is the code I've written. I'm trying to include only the relevant bits. I can include everything if anybody wants, just trying to spare you. This is for shader model 3.0/DX9.

...
float4 pos0 : CONTROLOBJECT < string name = PATHMODEL; string item = "0"; >; //leave at origin, indicates beginning of deformation
float4 pos1 : CONTROLOBJECT < string name = PATHMODEL; string item = "1"; >; //first node, proceeding from origin
...
float3 rotateAxis(float3 pos, float3 origin, float3 axis, float angle) {
    //rotates pos around origin in axis by angle in rads using quaternion
    pos -= origin;
    float4 q;
    q.xyz = axis*sin(angle/2.0f);
    q.w = cos(angle/2.0f);
    q = normalize(q);
    float3 temp = cross(q.xyz, pos) + q.w * pos;
    pos = (cross(temp, -q.xyz)+dot(q.xyz,pos)*q.xyz+q.w*temp);
    pos += origin;
    return pos;
}
...
VS_OUTPUT Basic_VS...
float4 wPos = mul( Pos, WorldMatrix );
float3 vec0 = YVEC; //primary axis; as vertices travel in the positive Y axis they are deformed
float3 vec1 = normalize(pos1.xyz - pos0.xyz);
float extent = wPos.y;
extent -= pos0.y;
if (extent > 0.0f) {
    float3 axis = cross(vec0, vec1);
    float angle = (PI*(1.0f-((dot(vec0,vec1)) + 1.0f)/2.0f));
    wPos.xyz = rotateAxis(wPos.xyz, pos0.xyz, axis, angle);
}
Out.Pos = mul( wPos, ViewProjMatrix );
...

     Am I misunderstanding the dot product here? Does my function not do what I think it does? Something else? Any help is greatly appreciated. I'm an amateur; I try to read and learn, but I have no formal education, no experience, and no people around me studying the same things, and I'm really grateful for the people on this forum who provide help.
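A plain-Python translation of the quaternion rotation (my translation, not checked byte-for-byte against the shader) makes it easy to test the pieces separately. With a normalized axis and angle = acos(dot(vec0, vec1)), rotating vec0 lands exactly on vec1, which points at the dot-product-to-angle remap, rather than the quaternion code, as the source of the lag:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_axis(pos, origin, axis, angle):
    # quaternion rotation of pos about an axis through origin,
    # via v' = v + qw*t + q x t, where t = 2*(q x v)
    v = tuple(p - o for p, o in zip(pos, origin))
    s, c = math.sin(angle / 2.0), math.cos(angle / 2.0)
    q = tuple(a * s for a in axis)          # axis is assumed unit length here
    t = tuple(2.0 * e for e in cross(q, v))
    qxt = cross(q, t)
    v2 = tuple(v[i] + c * t[i] + qxt[i] for i in range(3))
    return tuple(v2[i] + origin[i] for i in range(3))

# 90 degrees about +Z takes +X to +Y, as expected
print(rotate_axis((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

One thing worth checking in the shader: cross(vec0, vec1) has length sin(theta), not 1. The shader normalizes q afterwards instead of the axis, and when the axis isn't unit length that rescales q.w too, which changes the effective rotation angle.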
  12. Thanks MJP, that's good to know. DLL workaround sounds beyond my current ability.
  13. Thanks, that might be a start. I'll try to figure out what I can get out of fxc. Just to be clear about my limits, my "development environment" is Notepad++ and I don't have any access to C code from the renderer. I believe that the renderer will only load uncompiled HLSL files (typically with the .fx extension) rather than compiled shaders.
  14. Hi, I'm just a self-taught amateur exploring HLSL (among other things). I'm using MikuMikuDance to render my models and effects. It's closed source but free, based on DX9 (so shader model 3.0). One of my problems with this renderer is that effects are always compiled at run time. This is handy for debugging, of course, but when I get something finished, load times can be irritating. I very much appreciate the intelligence of the compiler in terms of optimization, but loading times are the price to be paid for it. Since I usually attach shaders to models that I make for public use (public domain to the extent made possible by any other sources I might use), shader load times can limit my audience.

      I believe that if I could insert assembly into my .fx files, I could bypass most of the compiler's thought processes. In order to do that, I'd need to know how to output my HLSL to ASM (I can't write ASM and have a lot of other priorities) and how to replace my HLSL with the outputted ASM. Maybe that's bad thinking? Like I said, I'm just a beginner, trying to work within the limits of my knowledge and my environment. I'm always happy to hear if I'm pursuing something unwise. But otherwise, this strikes me as something that is probably possible, and that would probably help load times quite a bit. Any help or advice? I wasn't able to refine my Googling enough to get anything useful.
  15. Thank you so much for the explanation. It means a lot. I am seeing that CameraPosition == float3(ViewInv._41, ViewInv._42, ViewInv._43). I think I was confused because ViewInv._41_42_43 does not equal -ViewMatrix._41_42_43, not when there's any rotation. It's clear that I have to be more careful about how I think about my matrices. I understand now why I was getting different results for all three. I'll try to give myself a little more time to solve my own problems in the future :)
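     A small numeric check of that relationship (Python, assuming D3D's row-vector convention, where the view matrix's translation row is -cam times the rotation block): the inverse's translation row reproduces the camera position, while simply negating the view's translation row does not once rotation is involved.

```python
import math

def rowvec_mul(v, M):
    # row vector times 3x3 matrix (D3D row-vector convention)
    return tuple(sum(v[k] * M[k][i] for k in range(3)) for i in range(3))

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

theta = math.pi / 3  # some camera rotation about Z
R = [[math.cos(theta), math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]
cam = (3.0, 4.0, 5.0)

# View._41_42_43: the negated camera position pushed through the rotation
view_t = rowvec_mul(tuple(-c for c in cam), R)

# ViewInv._41_42_43: -view_t * R^T, which lands back on the camera position
viewinv_t = rowvec_mul(tuple(-c for c in view_t), transpose(R))

print(view_t)     # not (-3, -4, -5): the rotation has mixed the components
print(viewinv_t)  # (3.0, 4.0, 5.0) up to float error
```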