About bandages
  1. Thanks, that's easy to believe, and useful to know. I actually made my own shader to bake lighting/materials/etc. to textures in-engine before I discovered that, in reality, there was nearly no difficulty involved in Blender baking. So I can imagine the same is true with normals too. (But it wasn't bad HLSL practice either, not a bad way to get more comfortable with the concepts, and not something I regret doing.) So much just seems to be about finding the time to learn, when there's so much to be learned, and it's difficult to know beforehand what's going to be hard to learn and what's going to be easy.
  2. Thanks for your responses, I think I understand better now. It sounds like it should be acceptable-- it's nice to know that some awful artifact isn't going to jump out at me-- and the real issue is matching UV coords, matching corresponding points/spaces. There are certainly situations where this is easy via UV correspondence, like if your high poly is just a subdivided low poly, it seems like it would be trivial; what I was doing with planes was trivial. But I can see now how there are situations where it wouldn't be trivial.
  3. I've built some simple normal maps out of meshes and a custom HLSL shader that writes their normals to the screen. While I've only used this for creating tiling normal maps, where I control the orientation of the mesh used to generate normals, I don't see why I couldn't do this for a full-model normal map, placing the models in screen space based on their UV rather than world-space coords, writing the normals of the low-poly to one image, the high-poly to another, and the vector necessary to transform the normals of the first to the second onto a third image. With the tiling normal maps I've made, I haven't seen any artifacts or weirdnesses. All it takes is one or two models, a relatively simple shader, and a single frame of computer time. But when I visit modelling sites, baking normals sounds like a major headache, involving the creation of a cage and a lengthy bake process. It sounds like the modelling packages are using some kind of raycasting algorithm. There must be a reason not to be doing things the way that I've been doing them. Can anyone explain to me the problems with creating normal maps via shader?
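The per-texel delta described above (a low-poly normal, a high-poly normal, and the vector transform between them) is just an axis-angle rotation, and the math can be sanity-checked outside any shader. A minimal Python sketch (function names and sample normals are my own, hypothetical; it assumes the two normals are not parallel):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def rotation_between(n_low, n_high):
    # The axis-angle rotation carrying the low-poly normal onto the
    # high-poly one -- the per-texel "transform" the post describes.
    axis = normalize(cross(n_low, n_high))
    angle = math.acos(max(-1.0, min(1.0, dot(n_low, n_high))))
    return axis, angle

def rotate(v, axis, angle):
    # Rodrigues' rotation formula about a unit axis
    c, s = math.cos(angle), math.sin(angle)
    return tuple(v[i]*c + cross(axis, v)[i]*s + axis[i]*dot(axis, v)*(1 - c)
                 for i in range(3))

n_low = (0.0, 0.0, 1.0)
n_high = normalize((0.0, 0.6, 0.8))
axis, angle = rotation_between(n_low, n_high)
restored = rotate(n_low, axis, angle)   # should land back on n_high
```

The hard part, as the replies note, isn't this per-texel math but establishing which high-poly point corresponds to each low-poly texel in the first place.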
  4. Thanks. I'm doing everything but the divide in the VS now and it looks good. I'm having some issues implementing my shadow buffer, but it's a situation where I'll probably need to play with it for a few days (sleeping on it always seems to help). Appreciate the help regarding mat3tomat4: seeing that example will help me simplify other things that I do as well. Edit: Oh, I misunderstood something, but I see now. Rather than trying to fit my screen into the (0-1, 0-1) range, I should try to fit my texture into the -1 to 1 range. Doing this after the w-divide is appropriate. Edit2: I believe everything is working, but I need to do more testing to be sure, and make sure I'm handling things like alpha. There is something extremely magical about making my own shadow buffer for the first time. Thank you again for all of your help.
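The mapping in the edit (w-divide first, then fit the [-1, 1] result into texture space) can be sketched in Python; the values here are hypothetical, and the y flip follows the usual D3D texture convention:

```python
def clip_to_uv(clip):
    # clip = (x, y, z, w) straight out of the projection matrix.
    # Divide by w first, then map NDC x,y from [-1, 1] into [0, 1];
    # D3D texture space has y increasing downward, hence the flip.
    x, y, z, w = clip
    u = (x / w) * 0.5 + 0.5
    v = (-y / w) * 0.5 + 0.5
    return u, v

print(clip_to_uv((0.0, 0.0, 0.5, 1.0)))   # center of the map: (0.5, 0.5)
print(clip_to_uv((1.0, 1.0, 0.5, 1.0)))   # top-right corner: (1.0, 0.0)
```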
  5. Thanks, I think I see what you're saying. If I understand correctly, the UV coordinates won't quite be the same if I apply them before the w divide, but that's probably an error in my current version. (This is the first time I've ever made a projection matrix, or even an inverse matrix, and I wanted to keep them clean, because I guess I'm scared I'll never be able to get close again. But I'll make a new matrix and multiply it in before the w divide, which should let me make a shadow buffer.) I don't know if you have any comments on anything else? With that 3x3-matrix-to-4x4 function, it feels like there should be a better way, one that I just don't know about. (The fov is also not what I'm treating it as, but that may be related to my scale+shift after divide.)
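For comparison with the projection matrix under discussion, here is a conventional D3D-style perspective matrix sketched in Python (row-vector convention matching the shader's _34 = -1 / _44 = 0 layout; fov/near/far values are hypothetical). The diagonal terms are the cotangent of the half-FOV, i.e. 1/tan, and near/far should map to NDC -1 and +1 after the divide, which makes "the fov is not what I'm treating it as" easy to test for:

```python
import math

def persp_proj(fov_deg, near, far):
    # Square aspect for brevity; _11/_22 = cot(fov/2) = 1/tan(fov/2)
    f = 1.0 / math.tan(math.radians(fov_deg) * 0.5)
    return [
        [f,   0.0, 0.0,                                0.0],
        [0.0, f,   0.0,                                0.0],
        [0.0, 0.0, -(far + near) / (far - near),      -1.0],
        [0.0, 0.0, (-2.0 * near * far) / (far - near), 0.0],
    ]

def project_z(m, z):
    # Transform (0, 0, z, 1) by the matrix and perform the w-divide.
    zc = z * m[2][2] + m[3][2]
    wc = z * m[2][3]
    return zc / wc

m = persp_proj(60.0, 1.0, 100.0)
print(project_z(m, -1.0))    # near plane: approximately -1.0
print(project_z(m, -100.0))  # far plane:  approximately +1.0
```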
  6. I'm an amateur, trying to learn HLSL techniques. I'm currently trying to implement texture projection (making a movie projector) in a DX9 environment. I'm running my vertices through an alternate view and projection and using the result as UV coordinates on a texture. However, I find that the coordinates are very different depending on whether I convert them from screen coordinates to texture coordinates in the vertex shader or the pixel shader, and I don't know why. I suspect it may have something to do with some kind of automatic conversion going on between the vertex shader and the pixel shader? I don't care much about performance, but I really want to use the vertex shader for this calculation so that I can shadow the projection, shadow-buffer style. But there are artifacts and clones that I can't live with. I'm attaching two pics, one showing the artifacts when calculating UV coordinates in the vertex shader, and one when calculating the UV coordinates in the pixel shader (which, other than shadowing, I'm happy with). Here is the almost-complete code (I'm leaving out the wide variety of technique calls that all look the same). I'm never sure whether to whittle this down to what's relevant in order to save you some effort in understanding, or to leave it complete in case I turn out unqualified to be the one-that-whittles. Here, there is a single line in the pixel shader that I'm uncommenting in order to replace the UV coordinates with those computed in the vertex shader. I'm certain that there are a lot of other things that I'm doing poorly as well, and I appreciate any extra recommendations. I don't have access to the main executable, just the HLSL. I greatly appreciate any help anyone is willing to offer. Thanks for looking.
#define MOVIETEX "b.png"
//#define MOVIETEX "test.gif"
//#define MOVIETEX "NT.gif"
#define VSVRS vs_2_0
#define PSVRS ps_2_0 //animated textures don't work in v3.0
#define PI 3.14159265f
#define IDENTITYMATRIX {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}
#define BLACK float4(0,0,0,1)
#define CONT_MODEL_INSTANCE "Projector.pmx"

float4x4 cProjector : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Projector"; >;
float4 cFOV : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "FOV"; >;
float4 cBrightness : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Brightness"; >;
float4 cCol : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Color"; >;
float4 cNearFar : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;
float3 cZVec : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;

static float3 projWPos = float3(cProjector._41, cProjector._42, cProjector._43);

float4x4 WorldMatrix : WORLD;
float4x4 ViewMatrix : VIEW;
float4x4 ViewProjMatrix : VIEWPROJECTION;
float4x4 WorldViewProjMatrix : WORLDVIEWPROJECTION;
float4x4 ProjMatrix : PROJECTION;
float4 MaterialDiffuse : DIFFUSE < string Object = "Geometry"; >;
float3 MaterialAmbient : AMBIENT < string Object = "Geometry"; >;
float4 TextureAddValue : ADDINGTEXTURE;
float4 TextureMulValue : MULTIPLYINGTEXTURE;

texture MovieTex : ANIMATEDTEXTURE < string ResourceName = MOVIETEX; >;
sampler MovieSamp = sampler_state {
    texture = <MovieTex>;
    MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
    ADDRESSU = BORDER; ADDRESSV = BORDER; BORDERCOLOR = BLACK;
};
texture ObjectTexture : MATERIALTEXTURE;
sampler ObjTexSampler = sampler_state {
    texture = <ObjectTexture>;
    MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
    ADDRESSU = WRAP; ADDRESSV = WRAP;
};

technique EdgeTec < string MMDPass = "edge"; > { //disable
}
technique ShadowTec < string MMDPass = "shadow"; > { //disable
}
technique ZplotTec < string MMDPass = "zplot"; > { //disable
}

float4x4 mat3tomat4 (float3x3 inpM) {
    float4x4 outp = IDENTITYMATRIX;
    outp._11 = inpM._11; outp._12 = inpM._12; outp._13 = inpM._13;
    outp._21 = inpM._21; outp._22 = inpM._22; outp._23 = inpM._23;
    outp._31 = inpM._31; outp._32 = inpM._32; outp._33 = inpM._33;
    outp._41 = 0.0f; outp._42 = 0.0f; outp._43 = 0.0f;
    outp._14 = 0.0f; outp._24 = 0.0f; outp._34 = 0.0f;
    return outp;
}

float4x4 invertTR4x4 (float4x4 inpM) {
    //inverts a typical 4x4 matrix composed of only translations and rotations
    float4x4 invTr = IDENTITYMATRIX;
    invTr._41 = -inpM._41; invTr._42 = -inpM._42; invTr._43 = -inpM._43;
    float3x3 invRot3x3 = transpose((float3x3)inpM);
    float4x4 invRot4x4 = mat3tomat4(invRot3x3);
    float4x4 outpM = mul(invTr, invRot4x4);
    return outpM;
}

float4x4 getPerspProj (float2 Fov, float near, float far) {
    //receives FOV in degrees
    Fov *= PI / 180.0f;
    float4x4 outp = IDENTITYMATRIX;
    outp._11 = 1.0f / tan(Fov.x / 2.0f); //cotangent of the half-angle (the original atan() of the reciprocal here skewed the effective fov)
    outp._22 = 1.0f / tan(Fov.y / 2.0f);
    outp._33 = -(far + near) / (far - near);
    outp._43 = (-2.0f * near * far) / (far - near);
    outp._34 = -1.0f;
    outp._44 = 0.0f;
    return outp;
}

struct BufferShadow_OUTPUT {
    float4 Pos : POSITION;
    float4 PTex : TEXCOORD0; //texture coordinates in alternate projection
    float4 UV : TEXCOORD1;
    float3 Normal : TEXCOORD2;
    float3 PEye : TEXCOORD3;
    float2 Tex : TEXCOORD4;
    float4 wPos : TEXCOORD5;
    float4 Color : COLOR0;
};

BufferShadow_OUTPUT BufferShadow_VS(float4 Pos : POSITION, float3 Normal : NORMAL, float2 Tex : TEXCOORD0, float2 Tex2 : TEXCOORD1, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) {
    BufferShadow_OUTPUT Out = (BufferShadow_OUTPUT)0;
    Pos = mul( Pos, WorldMatrix );
    Out.PEye = cZVec - Pos.xyz; //easier than transforming Zvec
    Out.wPos = Pos;
    Out.Pos = mul(Pos, ViewProjMatrix);
    float4x4 invTR = invertTR4x4(cProjector);
    Out.PTex = mul(Pos, invTR);
    float4x4 altProj = getPerspProj((cFOV.xy)*cFOV.z, cNearFar.x, cNearFar.y);
    Out.PTex = mul(Out.PTex, altProj);
    Out.UV = Out.PTex;
    Out.UV.xyz /= Out.UV.w;
    Out.UV.x = (Out.UV.x + 0.5f)*2.0f;
    Out.UV.y = (-Out.UV.y + 0.5f)*2.0f;
    Out.UV.xy -= 0.5f; //texture is centered on 0,0
    Out.Normal = normalize( mul( Normal, (float3x3)WorldMatrix ) );
    Out.Tex = Tex;
    Out.Color.rgb = MaterialAmbient;
    Out.Color.a = MaterialDiffuse.a;
    return Out;
}

float4 BufferShadow_PS(BufferShadow_OUTPUT IN, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) : COLOR {
    float4 Color = IN.Color;
    float3 PEn = normalize(IN.PEye);
    float3 Nn = normalize(IN.Normal);
    if ( useTexture ) {
        float4 TexColor = tex2D( ObjTexSampler, IN.Tex );
        TexColor.rgb = lerp(1, TexColor * TextureMulValue + TextureAddValue, TextureMulValue.a + TextureAddValue.a).rgb;
        Color *= TexColor;
    }
    float4 UV = IN.PTex;
    UV.xyz /= UV.w;
    UV.x = (UV.x + 0.5f) *2.0f;
    UV.y = (-UV.y+0.5f) * 2.0f;
    UV.xy -= 0.5f;
    //uncommenting seems like it should provide same output yet doesn't
    //UV = IN.UV;
    float4 projTex = tex2D(MovieSamp, UV.xy);
    Color *= projTex;
    Color = projTex;
    Color.rgb *= pow(dot(Nn, PEn), 0.6f);
    Color.rgb *= cCol.rgb;
    Color.rgb *= cBrightness.x;
    if ((UV.z < 0.0f) || (UV.z > 1.0f) || (UV.x < 0.0f) || (UV.x > 1.0f) || (UV.y < 0.0f) || (UV.y > 1.0f)) {
        return BLACK; //outside range; using border mode giving me artifacts i don't understand
    } else {
        return Color;
    }
}

technique MainTecBS0 < string MMDPass = "object_ss"; bool UseTexture = false; bool UseSphereMap = false; bool UseToon = false; > {
    pass DrawObject {
        VertexShader = compile vs_3_0 BufferShadow_VS(false, false, false);
        PixelShader = compile ps_3_0 BufferShadow_PS(false, false, false);
    }
}
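The difference between the two code paths above comes down to interpolation order: dividing by w per vertex and letting the rasterizer interpolate the result is not the same as interpolating the undivided coordinates and dividing per pixel. A small Python illustration with made-up clip-space values:

```python
def lerp(a, b, t):
    return (1 - t) * a + t * b

# Projector-space clip coordinates (x, w) at two vertices of an edge:
x0, w0 = 0.0, 1.0
x1, w1 = 1.0, 4.0
t = 0.5  # halfway along the interpolated edge

per_pixel = lerp(x0, x1, t) / lerp(w0, w1, t)  # interpolate, then divide
per_vertex = lerp(x0 / w0, x1 / w1, t)         # divide, then interpolate

print(per_pixel, per_vertex)  # the two disagree whenever w varies
```

This is why projective texturing conventionally passes the full float4 to the pixel shader and divides there; the per-vertex divide only matches when w is constant across the primitive.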
  7. On a night's rest, it seems to me that I shouldn't be using angle = PI*(1.0f-((dotProd + 1.0f)/2.0f)); but should instead be using angle = acos(dotProd);. However, this gives me an apparently identical response through angles up to Pi/2 radians, and breaks down when it reaches something like Pi*4/3. It seems right theoretically, but looks entirely wrong. The relationship is not a power relationship. Nevertheless, using the code I provided above, but adding dotProd = 1.0f - pow(1.0f-dotProd, 0.25f); gives me something very close to correct. Currently I'm just hacking my way through it with this correction, creating a new node at the not-quite-right angle in order to approach the correct path.
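The two formulas can be compared directly: they agree exactly at dot products of 1, 0, and -1 (i.e. at 0, 90, and 180 degrees) and diverge in between, which matches the observation that things look correct only at 90-degree intervals. A quick Python check:

```python
import math

def linear_angle(d):
    # The original mapping: dot in [-1, 1] remapped linearly onto [PI, 0]
    return math.pi * (1.0 - (d + 1.0) / 2.0)

for d in (1.0, 0.5, 0.0, -0.5, -1.0):
    # acos is the true angle between unit vectors; the linear remap
    # undershoots for d in (0, 1) and overshoots for d in (-1, 0)
    print(d, round(linear_angle(d), 4), round(math.acos(d), 4))
```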
  8. I'm trying to implement animation by deformation along a path in MikuMikuDance. This problem is interesting to me, it opens up new options to animators, and it seems like a good way for me to learn more about transformations. I'm doing my deformation in an HLSL vertex shader, using bones as path nodes, using quaternions to create matrices to rotate my vertices, traveling down the path and rotating as I go. I don't understand quaternion math, but I found this code online, and it's worked for me in other places. It's almost right. I really think I'm doing the right thing. Almost. But my angles aren't right. Demonstrated in the picture (yes, I'm using an actual arrow model to test). At 90 degree intervals, the angles are correct. As I go from no transformation to 90 degrees, the transformation lags the vector. From 90 degrees to 180 degrees, the transformation overtakes the vector. This is symmetrical; the transformation lags the -45 degree vector the same as +45 degrees. Here is the code I've written. I'm trying to include only relevant bits. I can include everything if anybody wants, just trying to spare you. This is for shader model 3.0/DX9.

...
float4 pos0 : CONTROLOBJECT < string name = PATHMODEL; string item = "0"; >; //leave at origin, indicates beginning of deformation
float4 pos1 : CONTROLOBJECT < string name = PATHMODEL; string item = "1"; >; //first node, proceeding from origin
...
float3 rotateAxis(float3 pos, float3 origin, float3 axis, float angle) {
    //rotates pos around origin about axis by angle in rads using quaternion
    pos -= origin;
    float4 q;
    q.xyz = axis * sin(angle/2.0f);
    q.w = cos(angle/2.0f);
    q = normalize(q);
    float3 temp = cross(q.xyz, pos) + q.w * pos;
    pos = cross(temp, -q.xyz) + dot(q.xyz, pos) * q.xyz + q.w * temp;
    pos += origin;
    return pos;
}
...
VS_OUTPUT Basic_VS...
float4 wPos = mul( Pos, WorldMatrix );
float3 vec0 = YVEC; //primary axis; as vertices travel in the positive Y axis they are deformed
float3 vec1 = normalize(pos1.xyz - pos0.xyz);
float extent = wPos.y;
extent -= pos0.y;
if (extent > 0.0f) {
    float3 axis = cross(vec0, vec1);
    float angle = (PI*(1.0f-((dot(vec0,vec1)) + 1.0f)/2.0f));
    wPos.xyz = rotateAxis(wPos.xyz, pos0.xyz, axis, angle);
}
Out.Pos = mul( wPos, ViewProjMatrix );
...

Am I misunderstanding the dot product here? Does my function not do what I think it does? Something else? Any help is greatly appreciated. I'm an amateur; I try to read and learn, but I have no formal education, no experience, and no people around me studying the same things, and I'm really grateful for the people on this forum who provide help.
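The quaternion rotation inside rotateAxis can be mirrored and verified outside the shader. A Python sketch of the same standard axis-angle quaternion rotation (helper names are mine; the axis must be unit length):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def rotate_axis(pos, origin, axis, angle):
    # Quaternion rotation of pos about a unit axis through origin,
    # angle in radians; equivalent to v' = q v q^-1 expanded out.
    v = tuple(p - o for p, o in zip(pos, origin))
    s, w = math.sin(angle / 2.0), math.cos(angle / 2.0)
    q = tuple(a * s for a in axis)
    t = tuple(2.0 * c for c in cross(q, v))       # t = 2 (q.xyz x v)
    v = tuple(v[i] + w * t[i] + cross(q, t)[i] for i in range(3))
    return tuple(v[i] + o for i, o in enumerate(origin))

# Rotating +X a quarter turn about +Z should land on +Y:
p = rotate_axis((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2.0)
```

If this check passes, the rotation itself is sound, which points the finger at the angle formula (the linear dot-product remap discussed in the neighboring post) rather than the quaternion code.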
  9. Thanks MJP, that's good to know. DLL workaround sounds beyond my current ability.
  10. Thanks, that might be a start. I'll try to figure out what I can get out of fxc. Just to be clear about my limits, my "development environment" is Notepad++ and I don't have any access to C code from the renderer. I believe that the renderer will only load uncompiled HLSL files (typically with the .fx extension) rather than compiled shaders.
  11. Hi, I'm just a self-taught amateur exploring HLSL (among other things). I'm using MikuMikuDance to render my models and effects. It's closed source but free, based on DX9 (so shader model 3.0). One of my problems with this renderer is that effects are always compiled at run time. This is handy for debugging, of course. But when I get something finished, load times can be irritating. I very much appreciate the intelligence of the compiler in terms of optimization, but loading times are the price to be paid for that. Since I usually attach shaders to models that I make for public use (public domain to the extent made possible by any other sources I might use), shader load times can limit my audience. I believe that if I could insert assembly into my .fx files, I could bypass most of the compiler's thought processes. In order to do that, I'd need to know how to output my HLSL to ASM (I can't write ASM and have a lot of other priorities) and how to replace my HLSL with the outputted ASM. Maybe that's bad thinking? Like I said, I'm just a beginner, trying to work within the limits of my knowledge and my environment. Always happy to hear if I'm pursuing something unwise. But otherwise, this strikes me as something that is probably possible, and that would probably help load times quite a bit. Any help or advice? I wasn't able to refine my Googling enough to get anything useful.
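If fxc (the HLSL compiler that ships with the DirectX SDK, later the Windows SDK) is available, producing a readable assembly listing from an effect file looks like the following; the file and entry-point names are hypothetical, and whether MME's loader then accepts an inline asm block pasted back into the .fx is something to verify, though DX9-era effect files do support asm shader blocks in principle:

```python
import shutil
import subprocess

# /T = target profile, /E = entry point, /Fc = write a text assembly listing.
# "effect.fx", "BufferShadow_VS", and "out.vsasm" are placeholder names.
cmd = ["fxc", "/T", "vs_3_0", "/E", "BufferShadow_VS", "/Fc", "out.vsasm", "effect.fx"]

if shutil.which("fxc"):            # only run where the SDK is installed
    subprocess.run(cmd, check=True)
else:
    print(" ".join(cmd))           # the command line to run by hand
```

Each technique/pass entry point would need its own invocation, since fxc compiles one entry point per run.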
  12. Thank you so much for the explanation. It means a lot. I am seeing that CameraPosition == float3(ViewInv._41, ViewInv._42, ViewInv._43). I think I was confused because ViewInv._41_42_43 does not equal -ViewMatrix._41_42_43, not when there's any rotation. It's clear that I have to be more careful about how I think about my matrices. I understand now why I was getting different results for all three. I'll try to give myself a little more time to solve my own problems in the future :)
  13. Thank you for your response.

  Apparently not. But I was under the impression that the view matrix contained only rotation and translation data (no scale or skew), and that under those circumstances, translations could be accessed out of the fourth row of the matrix?

  My sources have indeed been using CameraPosition - PosWorld for their Eye variable. It's more typical in your experience to call the opposite of this vector Eye? (Same length anyways.)

  I've been using Eye as a world-space vector, where you wouldn't want to take camera orientation into account. For example, to get the halfvector in world space, so you don't have to do extra transformations on normals and light vectors. And, of course, depth is depth, regardless of its orientation. Is Eye used more commonly as a view-space vector in your experience?

  Yeah, thanks :)

  And yeah, I can see now how, even if I wasn't screwing it up with the float3 = float conversion, I'd be dropping the x/y camera-space component with this.
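The fourth-row point can be made concrete: in the row-vector D3D layout, the view matrix's translation row is the camera position rotated into view space and negated, not simply the negated camera position, which is why ViewInv._41_42_43 and -ViewMatrix._41_42_43 disagree whenever there is any rotation. A Python sketch with a made-up camera:

```python
def make_view(right, up, fwd, cam):
    # Row-vector D3D look-at layout: rotation in the upper 3x3,
    # translation row = (-cam.right, -cam.up, -cam.fwd) -- NOT -cam.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [
        [right[0], up[0], fwd[0], 0.0],
        [right[1], up[1], fwd[1], 0.0],
        [right[2], up[2], fwd[2], 0.0],
        [-dot(cam, right), -dot(cam, up), -dot(cam, fwd), 1.0],
    ]

cam = (3.0, 0.0, 5.0)
# Camera yawed 90 degrees: world +Z becomes camera right, world -X becomes forward
view = make_view((0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0), cam)
print(view[3][:3])  # translation row is (-5, 0, 3), while -cam would be (-3, 0, -5)
```

Inverting the matrix (transpose the rotation, re-derive the translation) puts the plain camera position back in the fourth row, which is exactly the ViewInv._41_42_43 == CameraPosition observation.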
  14. Hi! I'm a beginner and an amateur, just trying to learn, explore, and play. I'm using MikuMikuDance in conjunction with MikuMikuEffect as an engine in order to explore HLSL (although I'm on the lookout for any other engines that will let me play with shaders as easily as I can with MMD). The engine uses DX9 and I'm using shader model 3.0. MMD and MME are not open source, and I'm not familiar with other frameworks, so I never know if any problems I run into are entirely my own fault or if they involve anything specific to the engine.

  I'm currently exploring techniques to write depth to an offscreen render target. I ended up playing with three different ways to write camera-space depth, but they all give me slightly different output, and I don't know why. I believe that understanding why would help me understand more than just the writing of depth.

  Here's a section of my depth_VS. I write the length of Out.Eye to a R32F render target and read it with a post (that rescales it by a constant). Matrices and CameraPosition are provided to the shader by the engine.

    Pos = mul( Pos, WorldMatrix );
    float3 Eye;
    float3 CamPos = CameraPosition;
    //CamPos = float3(ViewMatrix._41, ViewMatrix._42, ViewMatrix._43);
    Eye = CamPos - Pos.xyz;
    Pos = mul(Pos, ViewMatrix);
    //Eye = Pos.z;
    Out.Pos = mul(Pos, ProjMatrix);
    Out.DepthV = length(Eye);

  The two commented lines are alternate ways of determining depth that seem to me like they should give the same output. All three techniques give something that looks like view depth-- they have similar values, they change roughly appropriately as I move the camera in the scene. Yet all three give slightly different output from each other.

  Thanks in advance for any help. I've been looking for good forums to ask for help regarding HLSL for a while. If this isn't a good forum for it, I apologize, and please let me know!
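One observation that may explain part of the mismatch: length(Eye) is the radial distance to the camera, while view-space Pos.z is the distance along the view axis; the two agree only for points on that axis. A tiny Python check (camera at the origin looking down +Z, with an off-axis point, both values made up):

```python
import math

# Working directly in view space for simplicity: camera at origin, +Z forward.
p = (3.0, 0.0, 4.0)  # a point off to the side of the view axis

radial = math.sqrt(sum(c * c for c in p))  # length(CamPos - wPos): 5.0
planar = p[2]                              # view-space Pos.z:      4.0
print(radial, planar)
```

The third variant (reading ViewMatrix._41_42_43 as the camera position) differs again because that row holds the rotated, negated camera position, so all three techniques genuinely measure slightly different quantities.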