Custom Shaders


I found Shadertoy and have been playing around with it to kind of get a feel for things. My next big goal as far as shaders go is to figure out the code necessary to draw a line between each vertex on a given face.

Does anyone know of vertex or fragment shader code that can achieve the desired effect?

Also, I noticed I can have different materials respond to different light sources. Does anyone know off the top of their head how to reference a light source within the fragment shader code?

This is all brand spanking new to me so I'm just learning the basics right now. Been going through a couple tutorials.

Cheers Everyone.


Concerning line drawing: are you wanting to do this specifically with ShaderToy? Or in your own program? The reason I ask is that ShaderToy has some limitations that won't apply to your own program. To understand why, you need to understand a little about how the graphics pipeline works. A simplified version is:

  1. You issue a command to draw a series of vertices, using a specific shader program (including both vertex and fragment shaders), specifying some other important details like shader constants and what type of primitive (triangle, quad, etc) you are drawing.
  2. The GPU invokes your vertex shader once per vertex. Inside your vertex shader, you determine where the vertex should appear on the screen (not exactly, but close enough for this). You output this on-screen coordinate.
  3. The GPU then decides which screen pixels fall within the triangles specified by the verts output in step 2. It invokes your fragment shader for each of these pixels. Your fragment shader determines what color the pixel in question should be. (A bare-bones example of both shaders follows this list.)
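
To make those steps concrete, here is a bare-bones sketch of both shaders in GLSL. It's only an illustration: the matrices, the position attribute, and the uColor uniform are all things your own program would have to supply.

// Step 2: the vertex shader runs once per vertex and outputs a clip-space position.
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;

void main() {
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// Step 3: the fragment shader runs once per covered pixel and outputs its color.
uniform vec3 uColor; // hypothetical uniform set by the host program

void main() {
	gl_FragColor = vec4(uColor, 1.0);
}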

ShaderToy has very specific limitations. In particular, you have no control over steps 1 & 2. The framework simply draws a full-screen quad (two triangles, arranged in a rectangle that covers the whole screen). Anything you want to draw using ShaderToy has to be done procedurally in the fragment shader. If you look at all of the various shaders that people have come up with, they all deal with this limitation, even the ones that look like they have thousands of custom polygons. The 3D scenes will often use ray-tracing or a technique called ray-marching with distance fields or height fields or something like that.

With your own program, you have a lot more control. You can specify whatever polygons you want to draw, and you have complete control over steps 1 & 2. The downside is that it's a lot harder to get something started -- unfortunately there is so much boilerplate involved in setting up the graphics API (OpenGL, in this case) just to draw a single triangle that this option is a huge pain in the butt. You'll want to find a decent OpenGL or DirectX tutorial (this one looks alright, but I haven't checked it out in detail yet), and work through it until you can draw a triangle and you understand how it works. Don't expect to finish in one sitting!

With that said, how you solve your problem depends on whether you're using ShaderToy or not.

If you're using ShaderToy, then you'll want to look at your fragCoord input (this tells you which pixel is currently being drawn) and use math to determine how far away this coordinate is from your line. If the distance is less than half the thickness of your line, then it is within the line and should be shaded with the line color. Otherwise, it should be shaded with the background color. If you want to do something fancy, you can anti-alias the line by performing this operation 4 times (each time slightly offset from the others by 1/2 pixel width/height, which you can determine with the iResolution input) and blend the results.
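
Just as a rough sketch of that idea in ShaderToy's mainImage entry point (the endpoints and thickness below are made-up values, and distToSegment is ordinary closest-point-on-a-segment math):

float distToSegment(vec2 p, vec2 a, vec2 b) {
	// project p onto the segment ab and clamp to the endpoints
	vec2 ab = b - a;
	float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
	return length(p - (a + t * ab));
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
	// made-up endpoints (in pixels) and line thickness
	vec2 pointA = vec2(100.0, 100.0);
	vec2 pointB = iResolution.xy - vec2(100.0, 100.0);
	float thickness = 3.0;

	float d = distToSegment(fragCoord, pointA, pointB);

	// inside the line if we're within half its thickness, otherwise background
	vec3 color = d < 0.5 * thickness ? vec3(1.0) : vec3(0.0);
	fragColor = vec4(color, 1.0);
}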

If you're using your own program, simply draw your triangle with glPolygonMode set to GL_LINE. You can set the thickness of the line with glLineWidth. Anti-aliasing is a little more advanced, but this article will tell you how to set up your OpenGL context so that it uses MSAA.

So, it's much simpler within the context of your own program, but of course setting up your own program is a huge pain in the butt. It's something you'll want to tackle at some point anyway, though. ShaderToy is fun, but doing something non-trivial in ShaderToy is usually done very differently than how it would be done in a typical 3D game or application. If you go very far down the ShaderToy route, you'll be messing with ray-marching and distance fields and stuff, which believe me, is very fun, but it might not be what you're after.

@CDProp, thank you very much for the reply. I'm wanting to write my own vertex and fragment shader code for my own application. I'm currently using THREE.js, working in Chrome with WebGL. Below is a sample of code that I've borrowed from other sources on the internet. Basically it's a material applied to a mesh that responds to a fixed light direction.


<script type="x-shader/x-vertex" id="vertexShader">

	varying vec3 vNormal;

	void main() {

		// set the vNormal value with
		// the attribute value passed
		// in by Three.js
		vNormal = normal;

		gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
	}

</script>
<script type="x-shader/x-fragment" id="fragmentShader">

	varying vec3 vNormal;

	void main() {

		// a fixed light direction (the dot product below is
		// clamped to 0 -> 1 rather than -1 -> 1)
		vec3 light = vec3(0.5, 0.2, 1.0);

		// ensure it's normalized
		light = normalize(light);

		// calculate the dot product of
		// the light to the vertex normal
		float dProd = max( 0.0 , dot(vNormal, light) );

		// feed into our frag colour
		gl_FragColor = vec4(dProd, dProd, dProd, 0.1);
	}

</script>

This gives the following effect, which I think is really neat: two independent light sources affecting different materials. In the picture below it's the ocean material that has the custom shader.

[attachment=35224:Screenshot from 2017-03-07 10-09-54.png]

So a couple of things I gather are that the normal in the vertex shader is referenced via THREE.Face3().normal, and that in the fragment shader code the light source is fixed.

A few things I'm trying to figure out currently are the following.

- How can I reference a moving light source via the THREE.DirectionalLight() class?

- The original ocean material was transparent, but when I set the alpha value to anything, it doesn't seem to do much. How would I make the texture more transparent?

I'm kind of under the impression that in order to achieve a transparent look you need to do multiple passes? Does that sound right? If so, does that mean doubling the computation time for a single render?

- And lastly, within the fragment shader I am under the impression that you can only retrieve the position of the current pixel. Is it possible to determine what the current pixel should do based on what the surrounding pixels have already done? To add to that, is it also possible to know the previous value of the pixel currently being rendered?

The intended effect would be to make grass that extends upwards outside the constraints of the current triangular face, which would be awesome!!!

I am not familiar with three.js, but the properties of lights are often specified using uniforms. Uniforms are shader inputs, basically. You set them once before drawing, and their value stays the same for every vertex or fragment in that draw call. Hence the name "uniform". Typically, for a directional light you'll need a vec3 uniform for the light direction, and another vec3 uniform for the light color. Since many shaders support multiple directional lights, these uniforms might be declared as arrays like so:


uniform vec3 directionalLightColor[MAX_DIRECTIONAL_LIGHTS];
uniform vec3 directionalLightDirection[MAX_DIRECTIONAL_LIGHTS];

Chances are, three.js uses something like this. They will use the glUniform functions to set the directional light uniforms before each frame, and then all you have to do is declare the right uniforms in your shader, and then your shader can use those uniforms to access the light values.

This is just an educated guess, though. You'll need to find out what uniforms three.js is trying to set, and what their names and data types are. Unfortunately, the three.js documentation is completely unclear on this, so you might have to do some digging.
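
Purely as an illustration of the shader side, assuming arrays like the hypothetical ones above really are being filled in, the diffuse term from your earlier fragment shader could be accumulated per light something like this (the uniform names and the light count are assumptions you'd need to verify against three.js):

#define MAX_DIRECTIONAL_LIGHTS 2

uniform vec3 directionalLightColor[MAX_DIRECTIONAL_LIGHTS];
uniform vec3 directionalLightDirection[MAX_DIRECTIONAL_LIGHTS];

varying vec3 vNormal;

void main() {
	vec3 lighting = vec3(0.0);
	for (int i = 0; i < MAX_DIRECTIONAL_LIGHTS; i++) {
		// same N-dot-L diffuse term as before, once per light
		vec3 lightDir = normalize(directionalLightDirection[i]);
		lighting += directionalLightColor[i] * max(0.0, dot(normalize(vNormal), lightDir));
	}
	gl_FragColor = vec4(lighting, 1.0);
}

If three.js doesn't fill these in for you automatically, you can always declare your own light-direction uniform and update it from your JavaScript every frame; that sidesteps the naming question entirely.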

Concerning alpha blending, it's possible that you have blending disabled. In standard OpenGL, one would enable blending by calling glEnable with GL_BLEND passed in. You'll have to consult the three.js documentation to find out how to do this through their framework.

You'll also want to look into glBlendFunc to make sure you're using the right blend function (use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA as the source and destination blend factors, respectively). Again, you may need to find the three.js equivalent of this.

Using alpha blending will not require multiple passes. However, if you get to a point where you're using multiple translucent objects in your scene, you'll need to sort them from back-to-front (so the distant ones are drawn first). The alpha blending function is not commutative, and so it has to be done in that order. This means it won't double your rendering time, but since you're sorting your draw calls by distance instead of by state, you'll incur some penalty there.

At some point, you may want to look into using premultiplied alpha.
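
In shader terms, premultiplying just means scaling the output color by its alpha before writing it out (the blend factors then become GL_ONE, GL_ONE_MINUS_SRC_ALPHA). A minimal sketch, with a made-up alpha value:

// inside the fragment shader's main():
vec3 color = vec3(dProd);  // the same grayscale value as before
float alpha = 0.5;         // made-up translucency value
gl_FragColor = vec4(color * alpha, alpha);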

It is not possible to know the values of the adjacent pixels. One big reason for this is that we can't be sure if those pixels have even been calculated yet. The pixel shader is executed in parallel for large batches of pixels at a time, and there's no way of knowing which pixels have been drawn and which haven't. Generally speaking, you can never read from the same buffer that you're drawing to.

As for reading the previous frame, there is no way to do that directly. You can do it indirectly, however, if you draw each frame to an off-screen texture. So, you would need two such textures (call them texture A and texture B). During the first frame, you render the scene to texture A. Then you copy that texture to the main back buffer. Then, on the next frame, you bind texture A so that you can sample from it, and you draw the scene to texture B. This allows you to sample from texture A while you draw the scene. Then, you copy texture B to the back buffer. On the third frame, you once again draw to texture A. At this point, texture B contains the previous frame, so you can sample from it. And so on, back and forth. Kind of a pain.
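
The shader side of that ping-pong scheme is just an ordinary texture lookup; your program is responsible for binding whichever off-screen texture holds the previous frame. The uniform names below are made up, and this sketch assumes you also pass in the viewport resolution:

uniform sampler2D uPreviousFrame; // texture A or B, whichever was rendered last frame
uniform vec2 uResolution;         // viewport size in pixels

void main() {
	// gl_FragCoord is in pixels; divide by the resolution to get 0..1 texture coordinates
	vec2 uv = gl_FragCoord.xy / uResolution;
	vec4 previous = texture2D(uPreviousFrame, uv);

	// e.g. blend a bit of this frame's color over last frame's for a simple trail effect
	vec3 current = vec3(1.0, 0.0, 0.0); // placeholder for whatever you're drawing this frame
	gl_FragColor = vec4(mix(previous.rgb, current, 0.1), 1.0);
}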

I'm not sure what grass effect you're after, but your fragment shader is only capable of drawing within the boundaries of the triangle that it is shading. So, it can't make anything seem to "pop out" of that triangle.

By the way, notice that your shaders use projectionMatrix and modelViewMatrix. These are examples of uniforms. In a full shader, you would see something like this at the top:


uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

Your shaders don't have this, so I suppose that three.js is adding it for you. I'm guessing that the names of these uniforms are chosen by three.js and so you couldn't use something like "projMatrix". The names have to match what three.js uses when they set the uniform data. Similarly with the light uniforms, you'll need to find out what names they're using and what their data types are.

Other variables like position and normal are not uniforms. They are vertex attributes, because they change once per vertex. Chances are, these names were decided by three.js as well. One thing that I noticed is that you're not transforming your normal. You'll want to make sure that you multiply the normal by the inverse-transpose of the modelView matrix. That will ensure the normal is rotated correctly without being distorted by any scaling. If you build that matrix yourself as a mat4, put the normal in a vec4 like you do with position, but use 0.0 instead of 1.0 as the w coordinate (so that the normal isn't translated). It looks like three.js provides this inverse-transpose modelView matrix for you as a mat3 called normalMatrix.
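
Assuming three.js really does expose that matrix as a mat3 named normalMatrix, the change to your vertex shader is small; something along these lines:

varying vec3 vNormal;

void main() {
	// rotate the normal into view space; the inverse-transpose keeps it correct under scaling
	vNormal = normalize(normalMatrix * normal);

	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}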

The vNormal variable is declared as a varying because its value will be interpolated across the triangle being rendered. Note: in later versions of GLSL, varying is not used. Instead, vertex shader outputs are declared using the out keyword, and they are matched with fragment shader inputs that are declared using the in keyword.
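
For comparison, in those later GLSL versions the same declaration would be written roughly as follows (WebGL 1 sticks with the varying form shown above):

// vertex shader (newer GLSL): output to the fragment stage
out vec3 vNormal;

// fragment shader (newer GLSL): matching input from the vertex stage
in vec3 vNormal;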

@CDProp, thank you again. You've given me more than enough background information to get started and begin experimenting. I'll post back within a week or two with some stuff I've come up with.

In my vertexShader code I have gl_Position.x += 1.0; and this seems to offset the position of the material by more than one pixel. Any ideas why?

The output of the vertex shader (gl_Position) is not actually the screen coordinate. It's a coordinate in clip space, which is a sort of abstract 4D space that is hard to visualize. It's the coordinate space where OpenGL does all of the polygon clipping. There are actually a couple more transformations that OpenGL does before you get a screen coordinate:

The first is often called the w-divide or perspective divide, where the x/y/z clip space coordinates are divided by the w coordinate in each vertex. This transforms the verts from clip space to Normalized Device Coordinates (a 3D space where every vertex now fits within the -1 to 1 range for the x, y, and z axes).

Then, OpenGL does a viewport transform to convert the Normalized Device Coordinates to screen coordinates based on what you specified using glViewport (or three.js equivalent).

All this happens after your vertex shader returns, but before your fragment shader is called. In your fragment shader, the gl_FragCoord input will contain these screen coordinates. At this point, they've already been decided and you cannot change them.

So, to shift your entire object over by one pixel, you'll probably need to either shift your viewport over 1 pixel when you call glViewport for that object, or do some crazy math in the vertex shader to do the shift in clip space. Something like:

gl_Position.x += 2.0 / viewportWidth * gl_Position.w;

Note: I haven't tried this and even if it works it might be weird. And you'll have to pass in viewportWidth yourself as a uniform.
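
Spelled out as a full vertex shader, with the same caveat that this is untested, it might look something like this (uViewportWidth is a uniform you would have to supply yourself):

uniform float uViewportWidth; // e.g. the canvas width in pixels, set by your program

void main() {
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

	// NDC spans -1..1 (a width of 2) across uViewportWidth pixels, so one pixel is
	// 2.0 / uViewportWidth in NDC; multiply by w to apply the shift before the perspective divide
	gl_Position.x += (2.0 / uViewportWidth) * gl_Position.w;
}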

CDProp has some really good advice there, although I would point out that he's discussing things from an OpenGL/GLSL perspective, which is a slightly different perspective than a DirectX/HLSL perspective. I believe Uniforms, for example, are called Constant Buffers in HLSL. And a fragment shader is called a pixel shader in HLSL. A minor difference really.

But you may wish to watch my HLSL series. I don't do fancy shaders, just the basics. I've met some guys who do incredible things with shaders, which has opened up some ideas about what is possible, although I still haven't tried most of it. But if you want a foundational lesson on shaders, I think my video series is well worth the time. There are even videos on vectors and matrices, if you are not very familiar with them. The series starts with a video explaining the differences between HLSL for XNA and HLSL for DirectX, and covers the code that calls the shader code; you can probably skip that one, since you are probably using neither. You might start with the triangles video and go through in numeric order, assuming you don't want to watch the vector and matrix videos first. The whole series builds up a single Blinn-Phong shader from scratch, explaining every part of the code line by line. Nothing fancy, but these concepts are used in practically every shader you will build. It basically gives you a foundation of understanding to build more things on.

@BBeck, thank you for the link to your handy tutorials, I'll be looking to them for guidance. Is there a way to determine the location of the 3 vertices making up the current polygon, along with the current pixel's gl_FragCoord? I want to draw a straight line between each vertex as an outline of the face.

[ EDIT ] :: I'm sorry, this question was already answered by fpsulli3 / CDProp.

For anyone interested in learning more about WebGL custom shaders, check out this link I've found. It addresses most topics.

https://www.tutorialspoint.com/webgl/index.htm
