Awoken

Custom Shaders



I found Shadertoy and have been playing around with it to get a feel for things.  My next big goal, as far as shaders go, is to figure out the code necessary to draw a line between each vertex on a given face.

 

Anyone know of vertex or fragment shader code that can achieve the desired effect?

Also, I noticed I can have different materials respond to different light sources. Anyone know off the top of their head how to reference a light source within the fragment shader code?

 

This is all brand spanking new to me, so I'm just learning the basics right now.  I've been going through a couple of tutorials.

 

Cheers Everyone.


@CDProp, thank you very much for the reply.  I want to write my own vertex and fragment shader code for my own application.  I'm currently using THREE.js with Chrome and WebGL.  Below is a sample of code I've borrowed from other sources on the internet.  Basically, it's a material applied to a mesh that responds to a fixed light point.

<script type="x-shader/x-vertex" id="vertexShader">

	varying vec3 vNormal;

	void main() {

		// set the vNormal value with
		// the attribute value passed
		// in by Three.js
		vNormal = normal;

		gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
	}

</script>
<script type="x-shader/x-fragment" id="fragmentShader">

	varying vec3 vNormal;

	void main() {

		// a fixed light direction (hard-coded for now)
		vec3 light = vec3(0.5, 0.2, 1.0);

		// ensure it's normalized
		light = normalize(light);

		// dot product of the light direction and the
		// vertex normal, clamped to 0 -> 1 rather than -1 -> 1
		float dProd = max( 0.0 , dot(vNormal, light) );

		// feed into our frag colour
		gl_FragColor = vec4(dProd, dProd, dProd, 0.1);
	}

</script>

Which gives the following effect, which I think is really neat: two independent light sources affecting different materials.  In the picture below it's the ocean material that has the custom shader.

 

[attachment=35224:Screenshot from 2017-03-07 10-09-54.png]

 

A few things I gather: the normal in the vertex shader is referenced via THREE.Face3().normal, and in the fragment shader code the light source is fixed.
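The lighting math in that fragment shader is easy to sanity-check outside GLSL. Here's a plain-JavaScript sketch of the same computation (not three.js code; the light vector is copied from the shader above):

```javascript
// Sketch of the fragment shader's Lambert term in plain JavaScript,
// so the numbers can be checked by hand.

function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return v.map((c) => c / len);
}

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Same fixed light direction as the shader.
const light = normalize([0.5, 0.2, 1.0]);

// A surface normal pointing straight at the viewer.
const vNormal = [0.0, 0.0, 1.0];

// max(0.0, dot(N, L)) -- clamped so back-facing surfaces go black
// instead of negative.
const dProd = Math.max(0.0, dot(vNormal, light));

console.log(dProd); // brightness of this fragment, in [0, 1]
```

A surface facing the viewer but not the light gets a value below 1; a surface facing away from the light gets exactly 0, which is the clamp at work.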

 

A few things I'm trying to figure out currently are the following.

- How can I reference a moving light source via the THREE.DirectionalLight class?

- The original ocean material was transparent, but when I set the alpha value it doesn't seem to do much. How would I make the texture more transparent?

 

I'm kind of under the impression that in order to achieve a transparent look you need to do multiple passes? Does that sound right? If so, does that mean doubling the computation time for a single render?

 

- And lastly, within the fragment shader I am under the impression that you can only retrieve the position of a pixel. Is it possible to determine what the current pixel should do according to what the surrounding pixels have already done? To add to that, is it also possible to know the previous value of the pixel currently being rendered?

 

The intended effect would be to make grass that extends upwards outside the constraints of the current triangular face, which would be awesome!!!

Edited by Awoken


I am not familiar with three.js, but the properties of lights are often specified using uniforms. Uniforms are shader inputs, basically. You set them once before drawing, and their value stays the same for every vertex or fragment in that draw call. Hence the name "uniform". Typically, for a directional light you'll need a vec3 uniform for the light direction, and another vec3 uniform for the light color. Since many shaders support multiple directional lights, these uniforms might be declared as arrays like so:

uniform vec3 directionalLightColor[MAX_DIRECTIONAL_LIGHTS];
uniform vec3 directionalLightPosition[MAX_DIRECTIONAL_LIGHTS];

Chances are, three.js uses something like this. It will use the glUniform functions to set the directional-light uniforms before each frame; all you have to do is declare the right uniforms in your shader, and your shader can then use them to access the light values.

This is just an educated guess, though. You'll need to find out what uniforms three.js is trying to set, and what their names and data types are.  Unfortunately, the three.js documentation is completely unclear on this, so you might have to do some digging.
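As a rough illustration of what the shader-side loop over those uniform arrays computes, here's the same accumulation in plain JavaScript (a sketch, not three.js code; the light directions and colors are invented):

```javascript
// What a fragment shader's for-loop over directional-light uniform
// arrays computes, written in plain JavaScript. Light values invented.

function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return v.map((c) => c / len);
}

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Stand-ins for the uniform arrays.
const directionalLightDirection = [
  normalize([0.5, 0.2, 1.0]),  // key light
  normalize([-1.0, 0.0, 0.0]), // side light
];
const directionalLightColor = [
  [1.0, 1.0, 1.0], // white
  [0.2, 0.0, 0.0], // dim red
];

const vNormal = [0.0, 0.0, 1.0]; // surface facing the viewer

// Accumulate max(0, N . L) * lightColor over every light, as the
// GLSL loop over the uniform arrays would.
let color = [0, 0, 0];
for (let i = 0; i < directionalLightDirection.length; i++) {
  const d = Math.max(0.0, dot(vNormal, directionalLightDirection[i]));
  color = color.map((c, k) => c + d * directionalLightColor[i][k]);
}

console.log(color);
```

Here the side light contributes nothing (the normal faces away from it), so only the key light shows up in the result.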

Concerning alpha blending, it's possible that you have blending disabled. In standard OpenGL, one would enable blending by calling glEnable with GL_BLEND passed in. You'll have to consult the three.js documentation to find out how to do this through their framework.

You'll also want to look into glBlendFunc to make sure you're using the right blend function (use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA as the source and destination blend factors, respectively). Again, you may need to find the three.js equivalent of this.

Using alpha blending will not require multiple passes. However, if you get to a point where you're using multiple translucent objects in your scene, you'll need to sort them from back-to-front (so the distant ones are drawn first). The alpha blending function is not commutative, and so it has to be done in that order. This means it won't double your rendering time, but since you're sorting your draw calls by distance instead of by state, you'll incur some penalty there.
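The back-to-front sort described above can be sketched in a few lines of JavaScript (the object positions and camera here are invented for illustration):

```javascript
// Sketch: sorting translucent draw calls back-to-front before
// rendering, so each object blends over everything behind it.

const camera = { x: 0, y: 0, z: 10 };

const translucentObjects = [
  { name: "near", x: 0, y: 0, z: 8 },
  { name: "far",  x: 0, y: 0, z: -5 },
  { name: "mid",  x: 0, y: 0, z: 2 },
];

// Squared distance to the camera (no need for the sqrt just to sort).
function distSq(o) {
  const dx = o.x - camera.x, dy = o.y - camera.y, dz = o.z - camera.z;
  return dx * dx + dy * dy + dz * dz;
}

// Farthest first: distant objects are drawn before near ones.
translucentObjects.sort((a, b) => distSq(b) - distSq(a));

console.log(translucentObjects.map((o) => o.name)); // ["far", "mid", "near"]
```

In a real engine this sort runs each frame, since the camera and objects move; that per-frame sort, plus losing state-based batching, is the penalty mentioned above.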

At some point, you may want to look into using premultiplied alpha

It is not possible to know the values of the adjacent pixels. One big reason for this is that we can't be sure if those pixels have even been calculated yet. The pixel shader is executed in parallel for large batches of pixels at a time, and there's no way of knowing which pixels have been drawn and which haven't. Generally speaking, you can never read from the same buffer that you're drawing to.

As for reading the previous frame, there is no way to do that directly. You can do it indirectly, however, if you draw each frame to an off-screen texture. You would need two such textures (call them texture A and texture B). During the first frame, you render the scene to texture A, then copy that texture to the main back buffer. On the next frame, you bind texture A so that you can sample from it, and you draw the scene to texture B; this lets you sample from texture A while you draw. Then you copy texture B to the back buffer. On the third frame, you once again draw to texture A; at this point, texture B contains the previous frame, so you can sample from it. And so on, back and forth. Kind of a pain.
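The bookkeeping for that ping-pong scheme is just a two-way swap. A minimal sketch (texture names are placeholders, not real WebGL objects):

```javascript
// Sketch of the ping-pong scheme: two render targets trade roles each
// frame, so the shader can always sample last frame's result.

let writeTarget = "textureA";
let readTarget = "textureB";

const history = [];

function renderFrame(frameNumber) {
  // Draw the scene into writeTarget; readTarget (last frame's result)
  // is bound for sampling. On frame 0, readTarget is still empty.
  history.push({
    frame: frameNumber,
    drawTo: writeTarget,
    sampleFrom: readTarget,
  });

  // Swap roles for the next frame.
  [writeTarget, readTarget] = [readTarget, writeTarget];
}

for (let f = 0; f < 3; f++) renderFrame(f);

console.log(history);
// frame 0 draws to textureA; frame 1 draws to textureB sampling
// textureA; frame 2 draws to textureA again sampling textureB.
```

In WebGL/three.js the two "textures" would be framebuffer-backed render targets, but the swap logic is exactly this.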

I'm not sure what grass effect you're after, but your fragment shader is only capable of drawing within the boundaries of the triangle that it is shading. So, it can't make anything seem to "pop out" of that triangle. 


By the way, notice that your shaders use projectionMatrix  and modelViewMatrix. These are examples of uniforms. In a full shader, you would see something like this at the top:

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

Your shaders don't have this, so I suppose that three.js is adding it for you. I'm guessing that the names of these uniforms are chosen by three.js and so you couldn't use something like "projMatrix". The names have to match what three.js uses when they set the uniform data. Similarly with the light uniforms, you'll need to find out what names they're using and what their data types are.

Other variables, like position and normal, are not uniforms. They are vertex attributes, because they change once per vertex. Chances are, these names were decided by three.js as well. One thing I noticed is that you're not transforming your normal. You'll want to make sure that you multiply the normal by the inverse-transpose of the modelView matrix; that will ensure the normal is rotated without being scaled. Put the normal in a vec4, like you do with position, but use 0.0 instead of 1.0 as the w coordinate (so that the normal isn't translated). It looks like three.js calls this inverse-transpose modelView matrix the normalMatrix.
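A quick numeric check of why the inverse-transpose matters. With a diagonal (non-uniform scale) model matrix, the inverse-transpose is just the reciprocal diagonal, so the whole thing fits in a few lines (values invented for illustration):

```javascript
// Why normals need the inverse-transpose: a non-uniform scale that
// doubles x. Diagonal matrix, so its inverse-transpose is just the
// reciprocal diagonal.

const scale = [2, 1, 1];                       // M = diag(2, 1, 1)
const invTranspose = scale.map((s) => 1 / s);  // diag(1/2, 1, 1)

const mul = (m, v) => v.map((c, i) => m[i] * c);
const dot = (a, b) => a.reduce((s, c, i) => s + c * b[i], 0);

// A surface tangent and its normal (perpendicular: dot = 0).
const tangent = [1, 1, 0];
const normal = [1, -1, 0];

const tangentM = mul(scale, tangent); // tangents transform by M itself

// Naively transforming the normal by M breaks perpendicularity...
const normalWrong = mul(scale, normal);
console.log(dot(tangentM, normalWrong)); // 3 -- no longer perpendicular

// ...while the inverse-transpose keeps the normal perpendicular.
const normalRight = mul(invTranspose, normal);
console.log(dot(tangentM, normalRight)); // 0
```

For a pure rotation (or uniform scale followed by normalize) the two give the same direction, which is why the bug often goes unnoticed until non-uniform scaling shows up.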

The vNormal variable is declared as a varying because its value will be interpolated across the triangle being rendered. Note: in later versions of GLSL, varying is not used. Instead, vertex shader outputs are declared using the out keyword, and they are matched with fragment shader inputs that are declared using the in keyword.


@CDProp, thank you again.  You've given me more than enough background information to get started and begin experimenting.  I'll post back within a week or two with some stuff I've come up with.

The output of the vertex shader (gl_Position) is not actually the screen coordinate. It's a coordinate in clip space, which is a sort of abstract 4D space that is hard to visualize. It's the coordinate space where OpenGL does all of the polygon clipping. There are actually a couple more transformations that OpenGL does before you get a screen coordinate:

The first is often called the w-divide or perspective divide, where the x/y/z clip space coordinates are divided by the w coordinate in each vertex. This transforms the verts from clip space to Normalized Device Coordinates (a 3D space where every vertex now fits within the -1 to 1 range for the x, y, and z axes).

Then, OpenGL does a viewport transform to convert the Normalized Device Coordinates to screen coordinates based on what you specified using glViewport (or three.js equivalent).

All this happens after your vertex shader returns, but before your fragment shader is called. In your fragment shader, the gl_FragCoord input will contain these screen coordinates. At this point, they've already been decided and you cannot change them.

So, to shift your entire object over by one pixel, you'll probably need to either shift your viewport over 1 pixel when you call glViewport for that object, or do some crazy math in the vertex shader to do the shift in clip space. Something like:

gl_Position.x += (2.0 / viewportWidth) * gl_Position.w;

Note: I haven't tried this and even if it works it might be weird. And you'll have to pass in viewportWidth yourself as a uniform.
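The idea can at least be sanity-checked with plain arithmetic. Here's a JavaScript sketch of the clip space -> NDC -> screen steps, with the suggested shift applied (viewport size and the vertex values are invented):

```javascript
// Sketch of what happens after the vertex shader:
// clip space -> perspective divide -> NDC -> viewport transform -> pixels.

const viewportWidth = 800;
const viewportHeight = 600;

function clipToScreen([x, y, z, w]) {
  // Perspective divide: clip space -> Normalized Device Coordinates.
  const ndc = [x / w, y / w, z / w];
  // Viewport transform: NDC range [-1, 1] -> pixel coordinates.
  return [
    (ndc[0] + 1) * 0.5 * viewportWidth,
    (ndc[1] + 1) * 0.5 * viewportHeight,
  ];
}

const clipPos = [1.0, 0.5, 2.0, 4.0]; // some clip-space vertex

const before = clipToScreen(clipPos);

// The suggested shift: gl_Position.x += (2.0 / viewportWidth) * w
clipPos[0] += (2.0 / viewportWidth) * clipPos[3];

const after = clipToScreen(clipPos);

console.log(after[0] - before[0]); // ~1: the vertex moved one pixel right
```

The w factor is the key: the later divide by w cancels it, so the shift comes out as a fixed 2/viewportWidth in NDC, i.e. one pixel, regardless of the vertex's depth.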


CDProp has some really good advice there, although I would point out that he's discussing things from an OpenGL/GLSL perspective, which is a slightly different perspective than a DirectX/HLSL perspective. I believe Uniforms, for example, are called Constant Buffers in HLSL. And a fragment shader is called a pixel shader in HLSL. A minor difference really.

 

But you may wish to watch my HLSL series. I don't do fancy shaders, just the basics. I've met some guys who do incredible things with shaders, which has opened up some ideas on what is possible, although I still haven't tried most of it. But if you want a foundational lesson on shaders, I think my video series is well worth the time. There are even videos on vectors and matrices, if you are not very familiar with them.

The series starts with a video explaining the differences between HLSL for XNA and for DirectX, and talks about the code that calls the shader code. You can probably skip that video, since you are probably doing neither. You might start with the triangles video and go through in numeric order, assuming you don't want to watch the vector and matrix videos first.

The whole series builds up a single Blinn-Phong shader from scratch, explaining every part of the code line by line. Nothing fancy, but these concepts are used in practically every shader you will build. It basically gives a foundation of understanding to then build more things on.

Edited by BBeck


@BBeck, thank you for the link to your handy tutorials; I'll be looking to them for guidance.  Is there a way to determine the locations of the three vertices making up the current polygon, and the current pixel via gl_FragCoord?  I want to draw a straight line between each vertex as an outline of the said face.

 

[ EDIT ] :: I'm sorry, this question was already answered by fpsulli3 / CDProp.

 

For anyone interested in learning more about WebGL custom shaders, check out this link I've found.  It addresses most topics.

 

https://www.tutorialspoint.com/webgl/index.htm

Edited by Awoken
