
bandages

Member
  • Content Count

    23
  • Joined

  • Last visited

Community Reputation

104 Neutral

2 Followers

About bandages

  • Rank
    Member

Personal Information

  • Role
    Amateur / Hobbyist
  • Interests
    Art
    Education
    Programming

Social

  • Github
    nathanvasil


  1. When I just use tex2D and let the hardware pick the mip level, I get a nasty seam:

     texture2D envTex < string ResourceName = ENVTEX; >;
     sampler envSamp = sampler_state {
         texture = <envTex>;
         MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
         AddressU = WRAP; AddressV = CLAMP;
     };

     ...

     float2 vecToLatLong(float3 inp)
     {
         float lat = (normalize(float3(inp.x, inp.z, 0))).x;
         lat = acos(lat);
         lat = remap(lat, 0, PI, 0, 0.5f);
         lat = lerp(lat, 1.0f - lat, inp.z < 0);
         float lon = acos(inp.y);   // renamed from "long", which is a reserved word in HLSL
         lon = remap(lon, 0, PI, 0, 1);
         return float2(lat, lon);
     }

     float4 BufferShadow_PS(BufferShadow_OUTPUT IN, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) : COLOR
     {
         float3 normal = normalize(IN.Normal);
         float3 eye = normalize(IN.Eye);
         float3 diffUV = normal.xyz;
         diffUV.xy = vecToLatLong(diffUV.xyz);
         float4 envColDiff = tex2D(envSamp, diffUV.xy);
         return envColDiff;
     }

     The artifact disappears when point sampling or using tex2Dlod, and Googling finds people telling me I have to adjust the mip level, which makes sense -- there's a fragment where the UV stretches all the way across the texture. But maybe there's some other trick? I don't know. (Beyond the fact that my trig is always kind of held together with string and chewing gum.) Thanks for the hint regarding tex2Dgrad; I'll look into that. I'm still an HLSL baby -- I just get into it when modelling inspiration isn't hitting. Thanks for the advice regarding AF and hardware; it's nice to have a heads-up about those kinds of issues, but I'll probably still play with it, since I'm mostly just making things for myself.
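     One thing I'm considering (a rough, untested sketch, not something from this thread -- the helper name is made up, and it assumes the envSamp and vecToLatLong above): compute the UV derivatives myself, pick the shorter way around the wrap for the U component, and hand them to tex2Dgrad so the hardware chooses a sane mip level instead of seeing a near-full-texture stretch at the seam.

     // Sketch: wrap-aware gradients for a lat-long lookup (assumes AddressU = WRAP).
     float4 sampleLatLongSeamless(sampler samp, float3 dir)
     {
         float2 uv = vecToLatLong(dir);

         // Raw screen-space derivatives of the UV.
         float2 dx = ddx(uv);
         float2 dy = ddy(uv);

         // At the 0/1 wrap the raw U derivative jumps by ~1; the wrapped
         // alternative (d - sign(d)) is the short way around. Keep the smaller.
         dx.x = abs(dx.x) < abs(dx.x - sign(dx.x)) ? dx.x : dx.x - sign(dx.x);
         dy.x = abs(dy.x) < abs(dy.x - sign(dy.x)) ? dy.x : dy.x - sign(dy.x);

         // Let the hardware filter with the corrected gradients.
         return tex2Dgrad(samp, uv, dx, dy);
     }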
  2. You need to mark off the area you want to change color somehow. You can do that by separating it into a new material (not recommended) or by using a texture mask. Let's say texture mask. If you do it with a texture mask, you do it in the shader: you blend the output depending on the state and the mask value:

     #define STATECOLOR float3(1, 0.4, 0)

     if (state)
     {
         float maskVal = tex2D(maskSamp, UV).r;
         color.rgb = lerp(color.rgb, STATECOLOR, maskVal);
     }

     And, of course, you make a texture for the model defining the area you want to change color. In the example above, red indicates the area that should change color, while black indicates the area that shouldn't. Green and blue are unused.
  3. I'm playing with equirectangular environment maps in a DX9 framework and am curious about proper sampling. Any help is appreciated. An equirectangular map represents 360 degrees on the X axis but only 180 degrees on the Y axis, so if I sample a mip level, I get a box blur that's effectively stretched along X. And it looks like I need to figure out my own mip levels and use tex2Dlod to avoid seaming problems. Is there a way to do anisotropic mipmapping with tex2Dlod so that I can get the hardware to "stretch" the texture for me, in order to get a more appropriate aspect on the sample? I'm curious too -- if this is possible, could it be used (with multiple, rotated copies of a texture) to approximate a tunable Gaussian blur? It seems like you could do a lot to eliminate box-blur artifacts, and being able to set two radii independently seems like it might be sufficient for real-time environment map filtering. It also seems like it could potentially be done in hardware from a single texture instance.
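     To make it concrete, the kind of thing I'm imagining (a rough, untested sketch with a made-up texture width -- not working code): spread a few tex2Dlod taps along U at a manually chosen mip level, so the effective footprint is wider in X than in Y.

     #define LATLONG_WIDTH 1024.0f   // hypothetical texture width

     float4 sampleLatLongStretched(sampler samp, float2 uv, float lod, float stretch)
     {
         const int TAPS = 5;
         float texelU = exp2(lod) / LATLONG_WIDTH;   // approximate texel size at this mip
         float4 sum = 0;
         for (int i = 0; i < TAPS; i++)
         {
             float t = i - (TAPS - 1) * 0.5f;        // -2..2 for five taps
             sum += tex2Dlod(samp, float4(uv + float2(t * texelU * stretch, 0), 0, lod));
         }
         return sum / TAPS;
     }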
  4. Sorry Chuck, see the edit. I was getting different (normalized) vectors at different depths, but it wasn't a problem with your code; it was a letter I forgot to type.
  5. Thanks a lot! Got it working. But it required sending a 0 for scrPos.z (depth) instead of a 1. Afraid I don't understand why, if ChuckNovice's code is correct -- the normalized vector from the camera shouldn't change with depth.

     float4 worldPos = mul(float4(IN.ScrPos.xy, 0, 1), InverseViewProjMatrix);

     Checked visually against writing the eye vector manually for each object in the scene; no apparent difference. Since it's a closed-source engine, I can't control what gets sent to the shader, so I'd have to recompute my frustum corners from my matrices in the shader if I wanted those. I'm doing forward rendering right now, but may adapt to somebody else's deferred renderer if my forward experiments look okay. I may check out their shader looking for recomputation of frustum corners, since the last time I tried to do that, I must've screwed up my math somewhere. In case anyone's curious, here's a camera staring roughly down the positive Z, where I'm either sending z=1 or z=0. The pimple pic is for z=1. The positive/negative Z axis is where the difference is most apparent. Edit: Oh, duh, I was only dividing worldPos.xy by w, not xyz. There's my problem. Thanks again!
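     For anyone finding this later, the working version looks roughly like this (a sketch under my assumptions -- CameraPosition stands in for however you get the camera's world position, which this engine doesn't hand me directly):

     // Reconstruct the world-space eye vector for a screen position in [-1, 1].
     float3 eyeVectorFromScreen(float2 scrPos)
     {
         // Unproject at an arbitrary depth (z = 0, the near plane in DX9).
         float4 worldPos = mul(float4(scrPos, 0.0f, 1.0f), InverseViewProjMatrix);
         worldPos.xyz /= worldPos.w;   // the step I was missing: divide xyz, not just xy

         // Any depth along the same ray gives the same direction after normalization.
         return normalize(worldPos.xyz - CameraPosition);
     }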
  6. I'm looking to find the eye vector (the world-space normalized vector from the camera) to any given pixel on the screen, given the (-1, 1) screen position and a full set of matrices (world, view, projection, and inverses of the same). It seemed to me that I couldn't just multiply the screen position by inverseProj, because proj isn't really reversible, and sure enough, I have some weird behavior suggesting that to be the case (although I'm having a hard time figuring out how to "print debug" this in a way where I can be sure what's happening). I've done some Googling but haven't been able to find anything -- maybe it's a weird problem that nobody cares about, maybe it's obvious to everyone except me. This is kind of an idle question, because I know there are some other well-documented techniques (recovering the vector from a depth read, as in rebuilding world position from a depth map), but for my purposes, recovering the eye vector without accessing depth would be preferable. I'm working in HLSL, in DX9, in a closed-source engine, but I don't imagine that matters. I'm trying to create pseudo-geometry in post -- concentric spheres centered on the camera -- for playing with fogging techniques. I want to get world positions for various vectors at multiple, arbitrary depths and use those for simplex noise look-ups. I'm just a hobbyist, and an out-of-practice one at that. Any kind of help, or pointing me to a source I missed, would be appreciated.
  7. bandages

    Dirt And Rock Textures Using Blender Particle Systems

    Thanks a lot for this. I'm a little confused by the layout of the planes and scaling. Is A resized so that it no longer fills the camera's view? In trying to follow along, I notice that the particles extend past the border of the plane/viewport-- won't this create some problems with seams?
  8. Thank you! I'd been working on this since writing but wasn't getting anywhere, and had just given up when I read your message, figuring I'd wait until I'm smarter. Replaced my ridiculous, non-functional code and it works. Now I just have to figure out why to actually use + vs - and .zy vs .yz, since I just trial-and-errored it. I'm sure there's a reason cubemap filtering goes so slowly. But at least I've already found things to read and try when it comes to that, so hopefully I won't get stuck.
  9. I'm working in an old application (DX9-based) where I don't have access to the C code, but I can write any (model 3.0) HLSL shaders I want. I'm trying to mess with some cube mapping concepts. I've gotten to the point where I'm rendering a cube map of the scene to a cross cube that I can plug directly into ATI cubemapgen for filtering, which is already easier than trying to make one in Blender, so I'm pretty happy so far.

     But I would like to do my own filtering and lookups for two purposes: one, to effortlessly render directly to a sphere map (which is the out-of-the-box environment mapping for the renderer I'm using), and two, to try out dynamic cube mapping so I can play with something approaching real-time reflections. Also, eventually, I'd like to do realish-time angular Gaussian blur on the cube map so that I can get a good feel for how to map specular roughness values to Gaussian-blurred environment mip levels. It's hard to get a feel for that when it requires processing through several independent, slow applications.

     Unfortunately, the math to do lookups and filtering is challenging, and I can't find anybody else online doing the same thing. It seems to me that I'm going to need a world-vector-to-cube-cross-UV function for the lookup, then a cube-cross-UV-to-world-vector function for the filtering (so I can point sample four or more adjacent texels, then interpolate on the basis of angular distance rather than UV distance).

     First, I'm wondering if there's any kind of matrix that I can use here to transform a vector to the cube-cross map, rather than doing a bunch of conditionals on the basis of which cube face I want to read. This seems like maybe it would be possible? But I'm not really sure; it's kind of a weird transformation. Right now, my cube cross is a 3:4 portrait, going top/front/bottom/back from top to bottom, because that's what cubemapgen wants to see. I suppose I could make another texture from it with a different orientation, if that would mean I could skip a bunch of conditionals on every lookup.

     Second, it seems like once I have the face, I could just use something like my rendering matrix for that face to transform a vector to UV space, but I'm not sure that I could use the inverse of that matrix to get a vector from an arbitrary cube texel for filtering, because it involves a projection matrix -- I know those are kind of special, but I'm still wrapping my head around a lot of these concepts. I'm not even sure I could make the inverse very easily; I can grab an inverseProj from the engine, but I'm writing to projM._11_22 to set the FOV to 90, and I'm not sure how that would affect the inverse.

     Really interested in any kind of discussion of the techniques involved, as well as any free resources. I'd like to solve the problem, but it's much more important to me to use the problem as a way to learn more.
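     For the lookup half, the shape I have in mind is roughly this (a rough, untested sketch -- the face placements and sign flips are guesses that would have to be checked against the actual cubemapgen cross): pick the major axis with conditionals, compute face-local coordinates, then offset and scale into the 3-wide by 4-tall cross.

     float2 dirToCubeCrossUV(float3 v)
     {
         float3 a = abs(v);
         float2 st;     // face-local coords in [-1, 1]
         float2 cell;   // (column, row) of the face inside the 3x4 cross
         float  ma;     // magnitude along the major axis

         if (a.y >= a.x && a.y >= a.z)       // +Y / -Y: top and bottom of the cross column
         {
             ma = a.y;
             cell = v.y > 0 ? float2(1, 0) : float2(1, 2);
             st = float2(v.x, v.y > 0 ? v.z : -v.z);
         }
         else if (a.z >= a.x)                // +Z / -Z: front and back of the cross column
         {
             ma = a.z;
             cell = v.z > 0 ? float2(1, 1) : float2(1, 3);
             st = float2(v.z > 0 ? v.x : -v.x, -v.y);
         }
         else                                // +X / -X: the side arms
         {
             ma = a.x;
             cell = v.x > 0 ? float2(2, 1) : float2(0, 1);
             st = float2(v.x > 0 ? -v.z : v.z, -v.y);
         }

         st = st / ma * 0.5f + 0.5f;         // to [0, 1] within the face
         return (cell + st) * float2(1.0f / 3.0f, 0.25f);
     }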
  10. Thanks, that's easy to believe, and useful to know. I actually made my own shader to bake lighting/materials/etc. to textures in-engine before I discovered that, in reality, there was nearly no difficulty involved in Blender baking. So I can imagine the same is true with normals too. (But it wasn't bad HLSL practice either, not a bad way to get more comfortable with the concepts, not something I regret doing.) So much just seems to be about finding the time to learn it, when there's so much to be learned, and it's difficult to know beforehand what's going to be hard to learn and what's going to be easy.
  11. Thanks for your responses, I think I understand better now. It sounds like it should be acceptable-- it's nice to know that some awful artifact isn't going to jump out at me-- and the real issue is matching UV coords, matching corresponding points/spaces. There are certainly situations where this is easy via UV correspondence, like if your high poly is just a subdivided low poly, it seems like it would be trivial; what I was doing with planes was trivial. But I can see now how there are situations where it wouldn't be trivial.
  12. I've built some simple normal maps out of meshes and a custom HLSL shader that writes their normals to the screen. While I've only used this for creating tiling normal maps, where I control the orientation of the mesh used to generate normals, I don't see why I couldn't do this for a full-model normal map: placing the models in screen space based on their UV rather than world-space coords, writing the normals of the low-poly to one image, the high-poly to another, and the vector necessary to transform the normals of the first to the second onto a third image. With the tiling normal maps I've made, I haven't seen any artifacts or weirdnesses. All it takes is one or two models, a relatively simple shader (roughly sketched below), and a single frame of computer time. But when I visit modelling sites, baking normals sounds like a major headache, involving the creation of a cage and a lengthy bake process. It sounds like the modelling packages are using some kind of raycasting algorithm. There must be a reason not to be doing things the way that I've been doing them. Can anyone explain to me the problems with creating normal maps via shader?
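     For reference, the shader I've been using is roughly this shape (a simplified sketch, not the actual file -- WorldMatrix is the usual WORLD semantic, and the other names are made up):

     struct NormalBake_OUTPUT { float4 Pos : POSITION; float3 Normal : TEXCOORD0; };

     NormalBake_OUTPUT NormalBake_VS(float4 Pos : POSITION, float3 Normal : NORMAL, float2 Tex : TEXCOORD0)
     {
         NormalBake_OUTPUT Out;
         // Place the vertex at its UV coordinate: map UV [0,1] to clip space [-1,1],
         // flipping V so the image isn't upside down.
         Out.Pos = float4(Tex.x * 2.0f - 1.0f, 1.0f - Tex.y * 2.0f, 0.0f, 1.0f);
         Out.Normal = normalize(mul(Normal, (float3x3)WorldMatrix));
         return Out;
     }

     float4 NormalBake_PS(NormalBake_OUTPUT IN) : COLOR
     {
         // Pack the [-1, 1] normal into [0, 1] color, like a standard normal map.
         return float4(normalize(IN.Normal) * 0.5f + 0.5f, 1.0f);
     }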
  13. Thanks. I'm doing everything but the divide in the VS now and it looks good. I'm having some issues implementing my shadow buffer, but it's a situation where I'll probably need to play with it for a few days (sleeping on it always seems to help). Appreciate the help regarding mat3tomat4: seeing that example will help me simplify other things that I do as well. Edit: Oh, I misunderstood something, but I see now. Rather than trying to fit my screen into the (0-1, 0-1) range, I should try to fit my texture into the -1 to 1 range; doing that after the w-divide is appropriate. Edit 2: I believe everything is working, but I need to do more testing to be sure, and make sure I'm handling things like alpha. There is something extremely magical about making my own shadow buffer for the first time. Thank you again for all of your help.
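     For my own notes, the arrangement I'm aiming for looks something like this (a sketch of the idea, not my exact code -- the constant matrix assumes the usual clip-space-to-texture-space remap, and the scale factors would need adjusting to match what my shader actually does):

     // Fold the screen-to-texture remap into a constant matrix, keep the
     // coordinate undivided through the VS, and do the single divide per pixel.
     // (Row-vector convention, i.e. mul(Pos, Matrix), as elsewhere in the effect.)
     static const float4x4 ClipToTex = {
         0.5f,  0.0f, 0.0f, 0.0f,
         0.0f, -0.5f, 0.0f, 0.0f,
         0.0f,  0.0f, 1.0f, 0.0f,
         0.5f,  0.5f, 0.0f, 1.0f
     };

     // VS: Out.PTex = mul(mul(mul(Pos, invTR), altProj), ClipToTex);  // no divide here
     // PS: float2 uv = IN.PTex.xy / IN.PTex.w;                        // one divide, per pixel
     //     float4 projTex = tex2D(MovieSamp, uv);
     // (or tex2Dproj(MovieSamp, IN.PTex) to let the hardware do the divide)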
  14. Thanks, I think I see what you're saying. If I understand correctly, the UV coordinates won't quite be the same if I apply them before the w divide, but that's probably an error in my current version. (This is the first time I've ever made a projection matrix, or even an inverse matrix, and I wanted to keep them clean, because I guess I'm scared I'll never be able to get close again. But I'll make a new matrix and multiply it in before the w divide, which should let me make a shadow buffer.) I don't know if you have any comments on anything else? With that 3x3-matrix-to-4x4 function, it feels like there should be a better way, one that I just don't know about. (The FOV is also not what I'm treating it as, but that may be related to my scale+shift after the divide.)
  15. I'm an amateur trying to learn HLSL techniques. I'm currently trying to implement texture projection (making a movie projector) in a DX9 environment. I'm running my vertices through an alternate view and projection and using the result as UV coordinates on a texture. However, I find that the coordinates are very different depending on whether I convert them from screen coordinates to texture coordinates in the vertex shader or in the pixel shader, and I don't know why. I suspect it may have something to do with some kind of automatic conversion going on between the vertex shader and the pixel shader?

     I don't care much about performance, but I really want to use the vertex shader for this calculation so that I can shadow the projection, shadow-buffer style. But there are artifacts and clones that I can't live with. I'm attaching two pics: one showing the artifacts when calculating UV coordinates in the vertex shader, one when calculating the UV coordinates in the pixel shader (which, other than shadowing, I'm happy with).

     Here is the almost-complete code (I'm leaving out the wide variety of technique calls that all look the same). I'm never sure whether to whittle this down to what's relevant in order to save you some effort in understanding, or to leave it complete in case I turn out unqualified to be the one-that-whittles. Here, there is a single line in the pixel shader that I'm uncommenting in order to replace the UV coordinates with those computed in the vertex shader. I'm certain that there are a lot of other things that I'm doing poorly as well, and I appreciate any extra recommendations. I don't have access to the main executable, just the HLSL. I greatly appreciate any help anyone is willing to offer. Thanks for looking.

     #define MOVIETEX "b.png"
     //#define MOVIETEX "test.gif"
     //#define MOVIETEX "NT.gif"
     #define VSVRS vs_2_0
     #define PSVRS ps_2_0 //animated textures don't work in v3.0
     #define PI 3.14159265f
     #define IDENTITYMATRIX {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}
     #define BLACK float4(0,0,0,1)
     #define CONT_MODEL_INSTANCE "Projector.pmx"

     float4x4 cProjector : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Projector"; >;
     float4 cFOV : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "FOV"; >;
     float4 cBrightness : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Brightness"; >;
     float4 cCol : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "Color"; >;
     float4 cNearFar : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;
     float3 cZVec : CONTROLOBJECT < string name = CONT_MODEL_INSTANCE; string item = "NearFar"; >;

     static float3 projWPos = float3(cProjector._41, cProjector._42, cProjector._43);

     float4x4 WorldMatrix : WORLD;
     float4x4 ViewMatrix : VIEW;
     float4x4 ViewProjMatrix : VIEWPROJECTION;
     float4x4 WorldViewProjMatrix : WORLDVIEWPROJECTION;
     float4x4 ProjMatrix : PROJECTION;
     float4 MaterialDiffuse : DIFFUSE < string Object = "Geometry"; >;
     float3 MaterialAmbient : AMBIENT < string Object = "Geometry"; >;
     float4 TextureAddValue : ADDINGTEXTURE;
     float4 TextureMulValue : MULTIPLYINGTEXTURE;

     texture MovieTex : ANIMATEDTEXTURE < string ResourceName = MOVIETEX; >;
     sampler MovieSamp = sampler_state {
         texture = <MovieTex>;
         MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
         ADDRESSU = BORDER; ADDRESSV = BORDER; BORDERCOLOR = BLACK;
     };
     texture ObjectTexture : MATERIALTEXTURE;
     sampler ObjTexSampler = sampler_state {
         texture = <ObjectTexture>;
         MINFILTER = LINEAR; MAGFILTER = LINEAR; MIPFILTER = LINEAR;
         ADDRESSU = WRAP; ADDRESSV = WRAP;
     };

     technique EdgeTec < string MMDPass = "edge"; > {
         //disable
     }
     technique ShadowTec < string MMDPass = "shadow"; > {
         //disable
     }
     technique ZplotTec < string MMDPass = "zplot"; > {
         //disable
     }

     float4x4 mat3tomat4 (float3x3 inpM)
     {
         float4x4 outp = IDENTITYMATRIX;
         outp._11 = inpM._11; outp._12 = inpM._12; outp._13 = inpM._13;
         outp._21 = inpM._21; outp._22 = inpM._22; outp._23 = inpM._23;
         outp._31 = inpM._31; outp._32 = inpM._32; outp._33 = inpM._33;
         outp._41 = 0.0f; outp._42 = 0.0f; outp._43 = 0.0f;
         outp._14 = 0.0f; outp._24 = 0.0f; outp._34 = 0.0f;
         return outp;
     }

     float4x4 invertTR4x4 (float4x4 inpM)
     {
         //inverts a typical 4x4 matrix composed of only translations and rotations
         float4x4 invTr = IDENTITYMATRIX;
         invTr._41 = -inpM._41;
         invTr._42 = -inpM._42;
         invTr._43 = -inpM._43;
         float3x3 invRot3x3 = transpose((float3x3)inpM);
         float4x4 invRot4x4 = mat3tomat4(invRot3x3);
         float4x4 outpM = mul(invTr, invRot4x4);
         return outpM;
     }

     float4x4 getPerspProj (float2 Fov, float near, float far)
     {
         //http://www.codinglabs.net/article_world_view_projection_matrix.aspx
         //receives FOV in degrees
         Fov *= PI / 180.0f;
         Fov = 1.0f / Fov;
         float4x4 outp = IDENTITYMATRIX;
         outp._11 = atan(Fov.x / 2.0f);
         outp._22 = atan(Fov.y / 2.0f);
         outp._33 = -(far + near) / (far - near);
         outp._43 = (-2.0f * near * far) / (far - near);
         outp._34 = -1.0f;
         outp._44 = 0.0f;
         return outp;
     }

     struct BufferShadow_OUTPUT
     {
         float4 Pos : POSITION;
         float4 PTex : TEXCOORD0; //texture coordinates in alternate projection
         float4 UV : TEXCOORD1;
         float3 Normal : TEXCOORD2;
         float3 PEye : TEXCOORD3;
         float2 Tex : TEXCOORD4;
         float4 wPos : TEXCOORD5;
         float4 Color : COLOR0;
     };

     BufferShadow_OUTPUT BufferShadow_VS(float4 Pos : POSITION, float3 Normal : NORMAL, float2 Tex : TEXCOORD0, float2 Tex2 : TEXCOORD1, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon)
     {
         BufferShadow_OUTPUT Out = (BufferShadow_OUTPUT)0;
         Pos = mul( Pos, WorldMatrix );
         Out.PEye = cZVec - projWPos.xyz; //easier than transforming Zvec
         Out.wPos = Pos;
         Out.Pos = mul(Pos, ViewProjMatrix);
         float4x4 invTR = invertTR4x4(cProjector);
         Out.PTex = mul(Pos, invTR);
         float4x4 altProj = getPerspProj((cFOV.xy)*cFOV.z, cNearFar.x, cNearFar.y);
         Out.PTex = mul(Out.PTex, altProj);
         Out.UV = Out.PTex;
         Out.UV.xyz /= Out.UV.w;
         Out.UV.x = (Out.UV.x + 0.5f)*2.0f;
         Out.UV.y = (-Out.UV.y + 0.5f)*2.0f;
         Out.UV.xy -= 0.5f; //texture is centered on 0,0
         Out.Normal = normalize( mul( Normal, (float3x3)WorldMatrix ) );
         Out.Tex = Tex;
         Out.Color.rgb = MaterialAmbient;
         Out.Color.a = MaterialDiffuse.a;
         return Out;
     }

     float4 BufferShadow_PS(BufferShadow_OUTPUT IN, uniform bool useTexture, uniform bool useSphereMap, uniform bool useToon) : COLOR
     {
         float4 Color = IN.Color;
         float3 PEn = normalize(IN.PEye);
         float3 Nn = normalize(IN.Normal);
         if ( useTexture )
         {
             float4 TexColor = tex2D( ObjTexSampler, IN.Tex );
             TexColor.rgb = lerp(1, TexColor * TextureMulValue + TextureAddValue, TextureMulValue.a + TextureAddValue.a).rgb;
             Color *= TexColor;
         }
         float4 UV = IN.PTex;
         UV.xyz /= UV.w;
         UV.x = (UV.x + 0.5f) *2.0f;
         UV.y = (-UV.y+0.5f) * 2.0f;
         UV.xy -= 0.5f;
         //uncommenting seems like it should provide same output yet doesn't
         //UV = IN.UV;
         float4 projTex = tex2D(MovieSamp, UV.xy);
         Color *= projTex;
         Color = projTex;
         Color.rgb *= pow(dot(Nn, PEn), 0.6f);
         Color.rgb *= cCol.rgb;
         Color.rgb *= cBrightness.x;
         if ((UV.z < 0.0f) || (UV.z > 1.0f) || (UV.x < 0.0f) || (UV.x > 1.0f) || (UV.y < 0.0f) || (UV.y > 1.0f))
         {
             return BLACK; //outside range; using border mode giving me artifacts i don't understand
         }
         else
         {
             return Color;
         }
     }

     technique MainTecBS0 < string MMDPass = "object_ss"; bool UseTexture = false; bool UseSphereMap = false; bool UseToon = false; >
     {
         pass DrawObject
         {
             VertexShader = compile vs_3_0 BufferShadow_VS(false, false, false);
             PixelShader = compile ps_3_0 BufferShadow_PS(false, false, false);
         }
     }