Everything posted by CDeniz

  1. I'm trying to make an outline pixel shader that grows inwards depending on the specified thickness. Here is what I've tried:

     [unroll(MaxVal)]
     for (int i = -thickness; i < thickness; i++)
     {
         [unroll(MaxVal)]
         for (int j = -thickness; j < thickness; j++)
         {
             float2 pixelOffsetUV = uv.xy + float2(i / width, j / height);
             float pixelOffset = tex2D(input, pixelOffsetUV).a;
             if (pixelOffset <= 0.5)
                 return float4(0, 0, 0, color.a);
         }
     }
     return original;

     It simply checks the neighbouring pixels, and if it finds one whose alpha component is below a specified value, it returns black. It works fine, but it takes forever to compile. I need a maximum thickness of about 100-200, and the above implementation is obviously overkill for that: the unrolled double loop samples the texture O(thickness²) times per pixel. What are some other, more efficient outline algorithms? By the way, this is strictly 2D (WPF effects); no vertex shaders or geometry attributes, just a pixel shader.
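One standard way to cut that cost (a sketch under assumptions, not from the post: the register layout and the MaxVal bound are made up, only the technique itself is standard) is to exploit that a square min filter over the alpha channel is separable. The (2·thickness)² neighbourhood test splits into a horizontal pass followed by a vertical pass, for example by chaining two WPF effects, reducing the work from O(thickness²) samples per pixel to O(thickness) per pass:

```hlsl
// Pass 1 of a separable erosion of the alpha channel. A second,
// identical pass with the offset on the y axis reads this pass's
// output; together they cover the full square neighbourhood.
sampler2D input : register(s0);
float width     : register(C0);   // assumed register layout
float height    : register(C1);
float thickness : register(C2);
#define MaxVal 200                // assumed compile-time unroll bound

float4 mainHorizontal(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);
    float minAlpha = color.a;

    [unroll(MaxVal)]
    for (int i = -thickness; i <= thickness; i++)
    {
        float a = tex2D(input, uv + float2(i / width, 0)).a;
        minAlpha = min(minAlpha, a);
    }

    // Store the horizontal minimum in alpha; the vertical pass takes the
    // minimum of these and outputs black wherever it falls at or below 0.5.
    return float4(color.rgb, minAlpha);
}
```

For thickness in the 100-200 range, even two passes get expensive; precomputing a distance transform of the alpha channel on the CPU and thresholding it in the shader would be cheaper still.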
  2. I already have a fragment shader that generates a normal map from a height map. Now I need to apply the effect to all 360 degrees of the rotated height map. Of course I could just apply the effect 360 times, once per rotated heightmap, but since I have already done it once, is it possible to use what I have to efficiently generate the remaining 359 maps? I'm guessing that each pixel gets offset (in r, g, b) by some constant amount depending on the rotation:
  3. Awesome. It ended up being much simpler than I made it out to be, thanks!

     sampler2D input : register(s0);
     float degree : register(C0);

     float3 rotatedVec(float3 original)
     {
         float rad = radians(degree);
         float3x3 rotationMat =
         {
             cos(rad), -sin(rad), 0,
             sin(rad),  cos(rad), 0,
             0,         0,        1
         };
         return mul(rotationMat, original);
     }

     float4 main(float2 uv : TEXCOORD) : COLOR
     {
         float3 norm = normalize(tex2D(input, uv.xy).rgb * 2.0 - 1.0);
         float3 newColor = rotatedVec(norm);
         newColor = newColor * 0.5 + 0.5;
         return float4(newColor, 1.0);
     }
  4. I'm a third-year computer science student. Last semester I created a painting app for my user interface course.

     [media] http: [/media]

     The main feature of my app is that instead of painting with plain brushes, you paint with images, or what I call 'stamps'. The stamps are just images of objects with the background removed (or not, depending on what you want), which can be done in my app or any other image editing app.

     I have implemented several tools and features. The static brush gives you precise control of your stamp: orienting, scaling and positioning are very simple and fast.

     The dynamic brush allows you to randomize a lot of properties, such as size, hue, rotation angle, opacity, brush step, density, position radius, and opacity age. ex. randomly controlling size: ex. randomly controlling position radius: ex. randomly controlling hue: As you can see, the dynamic brush is very powerful. Multiple stamps can be active at the same time as well, and the order you choose them in is reflected, i.e. if you select a grass stamp, then a rock stamp, the grass stamp will be rendered below the rock stamp. You can combine all these features of the dynamic brush and control how random each property is.

     The line tool is a work in progress. It is what it sounds like: a line tool. Like the dynamic brush, you can control many properties such as hue, density, size, etc. In addition, the direction of the generated stamps can follow either the line's direction or a random rotation.

     The generate tool automates everything for you. Set the properties such as density, size variation, hue variation, etc., and hit the generate button to generate a texture map. You can specify whether you want the texture to tile automatically.

     Basic features such as a layer system, stamp and layer adjustments, stamp and layer effects (drop shadow, inner shadow, etc.), creating a stamp from the existing canvas, and masking/unmasking are all there.

     You can also generate a normal map from the existing canvas. You can control the normal map's large, medium, and small details. It's still a work in progress; I plan to add more control, such as fine detail and very large detail.

     Or, if you want to manually paint a normal map, you can do that. If you change the painting mode from diffuse to normal, you can use all the features I described above, but in normal mode. ex. using the static brush in normal mode: ex. using the dynamic brush in normal mode: In normal mode you also have control over the normal's large, medium and small details.

     -------------------------------------------------------------------------------------------------------------

     These are the main features of the app. There are many more features that I would like to implement, mainly being able to import a 3D obj file into a 3D viewer and use it as a source for generating the stamps. Also, perhaps painting onto the 3D model itself instead of a flat canvas, like ZBrush. Speaking of ZBrush, maybe sculpting features, but that's quite a stretch, haha.

     Is the project worth pursuing? Is anyone interested in being part of the development? I was thinking of making it open source. The project is a WPF app (I know it's not the best choice), so it's Windows only. Also, since WPF is C#, OpenGL is not natively supported; however, I was looking into SharpGL, which allows you to use OpenGL in WPF apps.
  5. I am trying to create a normal map from a height map in HLSL. I followed this http://stackoverflow.com/a/5284527/451136 which is for GLSL. Here is how I translated the GLSL to HLSL:

     GLSL:

     uniform sampler2D unit_wave
     noperspective in vec2 tex_coord;
     const vec2 size = vec2(2.0,0.0);
     const ivec3 off = ivec3(-1,0,1);

     vec4 wave = texture(unit_wave, tex_coord);
     float s11 = wave.x;
     float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
     float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
     float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
     float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
     vec3 va = normalize(vec3(size.xy,s21-s01));
     vec3 vb = normalize(vec3(size.yx,s12-s10));
     vec4 bump = vec4( cross(va,vb), s11 );

     HLSL:

     sampler2D image : register(s0);

     noperspective float2 TEXCOORD;
     static const float2 size = (2.0,0.0);
     static const int3 off = (-1,0,1);

     float4 main(float2 uv : TEXCOORD) : COLOR
     {
         float4 color = tex2D(image, uv);
         float s11 = color.x;
         float s01 = tex2D(image, uv + off.xy).x;
         float s21 = tex2D(image, uv + off.zy).x;
         float s10 = tex2D(image, uv + off.yx).x;
         float s12 = tex2D(image, uv + off.yz).x;
         float3 va = normalize((size.xy,s21-s01));
         float3 vb = normalize((size.yx,s12-s10));
         float4 bump = (cross(va,vb), s11);
         return bump;
     }

     The output is a black and white image, with the darker pixels being more transparent (since the alpha is the height). How can I generate a normal map like this http://www.filterforge.com/filters/3051-normal.jpg from a height map?
  6. Normal map from height map

     Just sampling the neighbouring pixels wasn't giving me the results I desired. I ended up using the Sobel operator method instead. This still wasn't giving me enough control, though. I had to do three passes on the heightmap, one each for the large, medium and small details, which I combined at the end. I am pretty happy with the results.
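For reference, a Sobel-based normal pass of the general kind described might look like the following. This is a sketch, not the poster's code; the sampler and register names, the texture size, and the intensity parameter are all assumptions:

```hlsl
sampler2D image : register(s0);
float intensity : register(C0);            // assumed height-scale parameter
static const float2 nTex = {512.0, 512.0}; // assumed texture dimensions

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float2 px = 1.0 / nTex;

    // Sample the 3x3 neighbourhood of heights around the current pixel.
    float tl = tex2D(image, uv + float2(-px.x, -px.y)).x;
    float t  = tex2D(image, uv + float2( 0.0,  -px.y)).x;
    float tr = tex2D(image, uv + float2( px.x, -px.y)).x;
    float l  = tex2D(image, uv + float2(-px.x,  0.0)).x;
    float r  = tex2D(image, uv + float2( px.x,  0.0)).x;
    float bl = tex2D(image, uv + float2(-px.x,  px.y)).x;
    float b  = tex2D(image, uv + float2( 0.0,   px.y)).x;
    float br = tex2D(image, uv + float2( px.x,  px.y)).x;

    // Sobel gradients of the height in x and y.
    float dx = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
    float dy = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);

    // Build the normal and remap from [-1, 1] to [0, 1] for storage.
    float3 n = normalize(float3(-dx * intensity, -dy * intensity, 1.0));
    return float4(n * 0.5 + 0.5, 1.0);
}
```

The large/medium/small control described above could then come from running a pass like this over differently blurred copies of the heightmap and blending the resulting normals.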
  7. I'm having difficulties getting this inner shadow effect to work correctly. Basically, I check the neighbouring pixels' (current pixel + an adjustable offset) alpha component; if it is 0, then I set the current pixel to the shadow color. I want to control the shadow falloff, and I was thinking of lerping based on the distance to the centre of the texture. But now that I think about it, this will just give me a vignette effect. So really I need to lerp based on the distance from the edge of the texture (the nearest pixel whose alpha == 0, I guess). Here is what I have so far:

     static const float3 off = {-Radius, 0.0, Radius};
     static const float2 nTex = {dim[0], dim[1]};                     // texture width and height
     static const float2 center = {nTex.x / 2.0, nTex.y / 2.0};       // center of texture
     static const float hype = sqrt(pow(nTex.x, 2) + pow(nTex.y, 2)); // hypotenuse of texture

     float4 main(float2 uv : TEXCOORD) : COLOR
     {
         //float dist = distance(uv, center) / (hype / 2.0);
         float4 color = tex2D(image, uv.xy);

         float2 offxy = {off.x / nTex.x, off.y / nTex.y};
         float2 offzy = {off.z / nTex.x, off.y / nTex.y};
         float2 offyx = {off.y / nTex.x, off.x / nTex.y};
         float2 offyz = {off.y / nTex.x, off.z / nTex.y};

         float s11 = color.a;
         float s01 = tex2D(image, uv.xy + offxy).a;
         float s21 = tex2D(image, uv.xy + offzy).a;
         float s10 = tex2D(image, uv.xy + offyx).a;
         float s12 = tex2D(image, uv.xy + offyz).a;

         if (s01 == 0 || s21 == 0 || s10 == 0 || s12 == 0)
         {
             //color.rgb = lerp(float3(0,0,0), color.rgb, dist);
             color.rgb = float3(0, 0, 0);
         }
         return color;
     }
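One rough way to get a distance-from-edge falloff (a sketch, not a drop-in fix: it only searches along the four axis directions, and the texture size, Radius register, and MaxRadius bound are assumptions) is to step outward until a transparent sample is found and lerp on that distance:

```hlsl
sampler2D image : register(s0);
float Radius : register(C0);               // assumed shadow radius, in pixels
static const float2 nTex = {512.0, 512.0}; // assumed texture dimensions
#define MaxRadius 32                       // assumed compile-time loop bound

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(image, uv);
    float2 px = 1.0 / nTex;

    // Approximate the distance (in pixels) to the nearest transparent
    // sample by stepping outward along the four axis directions.
    float edgeDist = Radius;
    [unroll(MaxRadius)]
    for (int i = 1; i <= MaxRadius; i++)
    {
        bool hit =
            tex2D(image, uv + float2( i * px.x, 0)).a == 0 ||
            tex2D(image, uv + float2(-i * px.x, 0)).a == 0 ||
            tex2D(image, uv + float2(0,  i * px.y)).a == 0 ||
            tex2D(image, uv + float2(0, -i * px.y)).a == 0;
        if (hit)
            edgeDist = min(edgeDist, (float)i);
    }

    // Fade the shadow with distance from the edge: fully dark at the
    // edge, no shadow once we are Radius pixels (or more) inside.
    float falloff = saturate(edgeDist / Radius);
    color.rgb = lerp(float3(0, 0, 0), color.rgb, falloff);
    return color;
}
```

Searching only along the axes underestimates diagonal distances and costs 4·MaxRadius samples per pixel; for an accurate, cheap falloff, a precomputed distance transform of the alpha channel is the usual approach.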
  8. Thanks. Here is the working HLSL code:

     float Width : register(C0);
     float Height : register(C1);
     sampler2D image : register(s0);
     noperspective float2 TEXCOORD;
     static const float2 size = {2.0, 0.0};
     static const float3 off = {-1.0, 0.0, 1.0};
     static const float2 nTex = {Width, Height};

     float4 main(float2 uv : TEXCOORD) : COLOR
     {
         float4 color = tex2D(image, uv.xy);

         float2 offxy = {off.x / nTex.x, off.y / nTex.y};
         float2 offzy = {off.z / nTex.x, off.y / nTex.y};
         float2 offyx = {off.y / nTex.x, off.x / nTex.y};
         float2 offyz = {off.y / nTex.x, off.z / nTex.y};

         float s11 = color.x;
         float s01 = tex2D(image, uv.xy + offxy).x;
         float s21 = tex2D(image, uv.xy + offzy).x;
         float s10 = tex2D(image, uv.xy + offyx).x;
         float s12 = tex2D(image, uv.xy + offyz).x;

         float3 va = {size.x, size.y, s21 - s01};
         float3 vb = {size.y, size.x, s12 - s10};
         va = normalize(va);
         vb = normalize(vb);

         float4 bump = {(cross(va, vb)) / 2 + 0.5, 1.0};
         return bump;
     }

     Where should I increase the intensity? On the heightmap, or after generating the normal, like (cross(va,vb)) * n + 0.5, and just playing with n? In general, if I want to add more effects like sharpen, noise, or large/medium/small detail, should I do it before generating the normal map or after?

     Edit: Okay, I was doing some testing, and applying the normal map effect last seems to work nicely. The brightness/contrast and sharpen effects are being applied on the height map.
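On the intensity question, one common variation (a sketch, not from the thread; the n parameter is an assumption) is to scale the height differences before normalizing, rather than multiplying the finished normal, so the stored vector stays unit length:

```hlsl
// Variation on the va/vb/bump lines above: scale the slopes by n before
// normalizing. Multiplying cross(va, vb) by n after the fact can push
// the remapped components outside [0, 1]; scaling the height deltas and
// renormalizing the result keeps the encoded normal a unit vector.
float n = 2.0; // assumed strength parameter; values > 1 exaggerate relief

float3 va = normalize(float3(size.x, size.y, (s21 - s01) * n));
float3 vb = normalize(float3(size.y, size.x, (s12 - s10) * n));
float3 normal = normalize(cross(va, vb));
float4 bump = float4(normal * 0.5 + 0.5, 1.0);
```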
  9. If I set the alpha component to 1, I get a blank white image.