# OpenGL ES Problems with directional lighting shader

## Recommended Posts

Hi Guys,

I have been struggling for a number of hours trying to make a directional (per fragment) lighting shader.

I have been following this tutorial in the 'per fragment' section of the page - http://www.learnopengles.com/tag/per-vertex-lighting/ along with tutorials from other sites.

This is what I have at this point.

// Vertex shader

```glsl
attribute vec3 in_Position;
attribute vec3 in_Normal;
attribute vec4 in_Colour;

varying vec3 v_Position;
varying vec3 v_Normal;
varying vec4 v_Colour;
varying vec3 v_LightPos;

uniform vec3 u_LightPos;
uniform mat4 worldMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    vec4 object_space_pos = vec4(in_Position, 1.0);
    gl_Position = worldMatrix * vec4(in_Position, 1.0);
    gl_Position = viewMatrix * gl_Position;     // WV
    gl_Position = projectionMatrix * gl_Position;

    mat4 WV = worldMatrix * viewMatrix;

    v_Position = vec3(WV * object_space_pos);
    v_Normal = vec3(WV * vec4(in_Normal, 0.0));
    v_Colour = in_Colour;
    v_LightPos = u_LightPos;
}
```

And

// Fragment

```glsl
varying vec3 v_Position;
varying vec3 v_Normal;
varying vec4 v_Colour;
varying vec3 v_LightPos;

void main()
{
    float dist = length(v_LightPos - v_Position);
    vec3 lightVector = normalize(v_LightPos - v_Position);
    float diffuse_light = max(dot(v_Normal, lightVector), 0.1);
    diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));

    gl_FragColor = v_Colour * diffuse_light;
}
```

If I change the last line of the fragment shader to 'gl_FragColor = v_Colour;' the model (a white sphere) will render to the screen in solid white, as expected.

But if I leave the shader as is above, the object is invisible.

I suspect it is something to do with this line in the vertex shader, but I am at a loss as to what is wrong.

v_Position = vec3(WV * object_space_pos);

If I comment the above line out, I get some sort of shading going on which looks like it is trying to light the subject (with the normals calculating etc.)

Any help would be hugely appreciated.

##### Share on other sites

Hi, lonewolff!

My initial guess would be that your v_Position and v_LightPos are in different coordinate spaces, making the distance and light direction computation meaningless. v_Position is in view space:

v_Position = vec3(WV * object_space_pos);

Can you make sure that you also transform v_LightPos into view space somewhere in your app?
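In case it helps, here's a rough CPU-side sketch of what that transform looks like (Python/NumPy with a hand-rolled look-at helper and made-up example positions, not your actual app code):

```python
import numpy as np

# Hypothetical sketch: transform a world-space light position into view space
# on the CPU before uploading it as u_LightPos. The look_at helper and the
# example positions are assumptions for illustration only.

def look_at(eye, target, up):
    """Right-handed look-at view matrix, column-vector convention."""
    f = target - eye
    f = f / np.linalg.norm(f)              # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)              # right
    u = np.cross(s, f)                     # corrected up
    view = np.identity(4)
    view[0, :3] = s
    view[1, :3] = u
    view[2, :3] = -f
    view[:3, 3] = -view[:3, :3] @ eye      # rotate eye into the new basis
    return view

eye = np.array([0.0, 0.0, -5.0])
view = look_at(eye, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))

light_world = np.array([-5.0, 2.0, 5.0, 1.0])   # w = 1.0: a position, not a direction
light_view = view @ light_world                  # upload light_view[:3] as u_LightPos
print(light_view[:3])                            # light ends up at (5, 2, -10) in view space
```

With the camera at (0, 0, -5) looking at the origin, a light at (-5, 2, 5) lands at (5, 2, -10) in view space, i.e. farther from the camera than the geometry, which is what "behind the object" should look like in that space.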

Does it help if you comment out attenuation?

diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));

Also, probably just a formality, but directional lighting doesn't use a light position or attenuation. It's an approximation for the case when the light source is very far away relative to the scale of the scene, e.g. the sun illuminating a building: we assume that the light rays all travel in the same direction and that the light's intensity falloff with distance is negligible. Point lighting would be a better name for what you have here.
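To illustrate the difference (a hedged sketch in Python with made-up names and example values, not shader code):

```python
import numpy as np

# Hedged illustration of point vs. directional diffuse; all values are examples.

def point_diffuse(normal, frag_pos, light_pos):
    # Point light: direction varies per fragment, intensity falls off with distance.
    to_light = light_pos - frag_pos
    dist = np.linalg.norm(to_light)
    n_dot_l = max(np.dot(normal, to_light / dist), 0.0)
    return n_dot_l / (1.0 + 0.25 * dist * dist)

def directional_diffuse(normal, light_dir):
    # Directional light: one fixed travel direction for the whole scene, no falloff.
    l = -light_dir / np.linalg.norm(light_dir)   # vector pointing *towards* the light
    return max(np.dot(normal, l), 0.0)

normal = np.array([0.0, 0.0, 1.0])
print(directional_diffuse(normal, np.array([0.0, 0.0, -1.0])))        # surface faces the light: 1.0
print(point_diffuse(normal, np.zeros(3), np.array([0.0, 0.0, 2.0])))  # attenuated by distance: 0.5
```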

##### Share on other sites

So basically, the tutorial shows how to do per-pixel lighting, which is really easy to implement.

You pass the verts to the shader and test the distance between the light and the pixel fragment; whenever a fragment is within the light's radius, you colour it with the diffuse light colour.

Let me show the code first

Vs

```glsl
attribute vec3 Vpos;

uniform vec4 MVP1;
uniform vec4 MVP2;
uniform vec4 MVP3;
uniform vec4 MVP4;

uniform vec4 WM1;
uniform vec4 WM2;
uniform vec4 WM3;
uniform vec4 WM4;

vec4 vertexClip;

varying highp vec3 vertex_pos;

// Dot product of one matrix row with a point (w assumed to be 1.0)
float dp43(vec4 matrow, vec3 p)
{
    return (matrow.x * p.x) + (matrow.y * p.y) + (matrow.z * p.z) + matrow.w;
}

void main()
{
    vertexClip.x = dp43(MVP1, Vpos);
    vertexClip.y = dp43(MVP2, Vpos);
    vertexClip.z = dp43(MVP3, Vpos);
    vertexClip.w = dp43(MVP4, Vpos);

    vertex_pos.x = dp43(WM1, Vpos);
    vertex_pos.y = dp43(WM2, Vpos);
    vertex_pos.z = dp43(WM3, Vpos);

    gl_Position = vertexClip;
}
```



Fs

```glsl
varying highp vec3 vertex_pos;
uniform highp vec3 LPOS;
uniform highp vec3 LDIFF;
uniform highp float LRadius;   // light radius

highp float n3ddistance(highp vec3 first_point, highp vec3 second_point)
{
    highp float x = first_point.x - second_point.x;
    highp float y = first_point.y - second_point.y;
    highp float z = first_point.z - second_point.z;
    return sqrt(x*x + y*y + z*z);
}

void main()
{
    highp float dst = n3ddistance(LPOS, vertex_pos);
    highp float intensity = clamp(1.0 - dst / LRadius, 0.0, 1.0);
    gl_FragColor = vec4(LDIFF, 1.0) * intensity;
}
```

Now, a short explanation: you pass the vertex's world coordinate to the fragment shader, and then you can test it against the light position. That means your object could be centred at position (0, 0, 0), and you would apply something like this:

```cpp
Matrix44<float> wrld;
wrld.TranslateP(ship[i]->pos);          // make a translation matrix
wrld = ship[i]->ROTATION_MAT * wrld;
MVP = (wrld * ACTUAL_VIEW) * ACTUAL_PROJECTION;
```

Then we are ready to make a directional light. Whenever you pass a normal, it is either transformed by the object's rotation matrix or you already have it baked into the buffer.

Then take dot(normal, lightdir): whenever it is zero or positive, the fragment only gets the ambient lighting colour; otherwise we can apply diffuse lighting.

The idea is to find which fragments are inside the cone. To do that, you can simply define a light radius and the radius of the base of the cone, bR.

Since we have the base radius, the light radius and the light direction, we can simply check whether a fragment is inside the spotlight. To do that, you need a closest-point-on-line function.

First define the two line ends: the first is your light position, the second is lightpos + lightdir * radius.

Now you test the fragment position against this line and get the distance between the fragment and the closest point on the line.

Having this, we need to test whether the fragment falls inside the cone's cross-section (the "triangle").

Given the closest point on the line, we can find the distance from the light position to it; call it cvX. Divide it by the light radius to get the 'percentage of the distance' along the cone, then multiply that percentage by the base radius. If the fragment's distance to the closest point on the line is less than this, you can colour the fragment with the diffuse colour.
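The whole test could be sketched like this (Python, with hypothetical names and example values, just to illustrate the idea rather than as working shader code):

```python
import numpy as np

# Hedged sketch of the cone test described above; all names and values are
# made up. light_pos/light_dir/radius/base_radius define the cone, frag is
# the fragment's world-space position.

def closest_point_on_segment(a, b, p):
    # Project p onto segment a..b, clamping to the segment's ends.
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    t = np.clip(t, 0.0, 1.0)
    return a + t * ab

def in_spot_cone(frag, light_pos, light_dir, radius, base_radius):
    d = light_dir / np.linalg.norm(light_dir)
    a = light_pos                     # cone tip
    b = light_pos + d * radius        # centre of the cone's base
    c = closest_point_on_segment(a, b, frag)
    along = np.linalg.norm(c - a)     # distance from tip to the closest point (cvX)
    allowed = (along / radius) * base_radius   # cone half-width at that distance
    return np.linalg.norm(frag - c) <= allowed

light_pos = np.array([0.0, 0.0, 0.0])
light_dir = np.array([0.0, 0.0, 1.0])
print(in_spot_cone(np.array([0.1, 0.0, 5.0]), light_pos, light_dir, 10.0, 2.0))  # inside
print(in_spot_cone(np.array([5.0, 0.0, 1.0]), light_pos, light_dir, 10.0, 2.0))  # outside
```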

end of story

##### Share on other sites

Thanks for the replies guys.

@dietrich - Right you are! I have moved the light into view space and taken away attenuation for the time being, to turn it into a directional light only.

Things are looking better, but still some anomalies.

The only thing I can see is that the light position is off on the z-axis.

Sphere is at 0,0,0
light is at -5, 2, 5
camera is at 0, 0, -5

Going by this, the light should be behind the object, not in front.

Does this give a hint at any calculation I may have missed?

Here is the shader in its current form.

```glsl
attribute vec3 in_Position;
attribute vec3 in_Normal;
attribute vec4 in_Colour;

varying vec3 v_Position;
varying vec3 v_Normal;
varying vec4 v_Colour;
varying vec3 v_LightPos;

uniform vec3 u_LightPos;
uniform mat4 worldMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    vec4 object_space_pos = vec4(in_Position, 1.0);
    gl_Position = worldMatrix * vec4(in_Position, 1.0);
    gl_Position = viewMatrix * gl_Position;
    gl_Position = projectionMatrix * gl_Position;

    mat4 WV = worldMatrix * viewMatrix;

    v_Position = vec3(WV * object_space_pos);
    v_Normal = vec3(WV * vec4(in_Normal, 0.0));
    v_Colour = in_Colour;

    vec4 object_space_light = vec4(u_LightPos, 1.0);
    v_LightPos = vec3(WV * object_space_light);
}
```

and

```glsl
varying vec3 v_Position;
varying vec3 v_Normal;
varying vec4 v_Colour;
varying vec3 v_LightPos;

void main()
{
    float dist = length(v_LightPos - v_Position);
    vec3 lightVector = normalize(v_LightPos - v_Position);
    float diffuse_light = max(dot(v_Normal, lightVector), 0.1);
    //  diffuse_light = diffuse_light * (1.0 / (1.0 + (0.0000001 * dist * dist)));

    //  gl_FragColor = v_Colour * diffuse_light;
    gl_FragColor = vec4(v_Colour.rgb * diffuse_light, v_Colour.a);
}
```

Thanks again, this is hugely appreciated

##### Share on other sites

No problem, @lonewolff

That looks interesting:) worldMatrix is at the moment an identity matrix, correct?

##### Share on other sites

Yes, I believe so.

I am setting it as an identity matrix and am not applying any translations.

##### Share on other sites

Then I'd suggest simplifying things a bit and moving the lighting computations into world space, to see if it changes anything.

v_Position will then become

v_Position = vec3(worldMatrix * object_space_pos);

and light position will remain unchanged,

v_LightPos = u_LightPos;

##### Share on other sites

Oddly enough, the result is the same.

##### Share on other sites

Arggh! Turns out I was working with a screwy model. I have enabled backface culling and all is now great.

A huge thanks to you though, I have learned a lot today!

##### Share on other sites

Ah, well, that happens too, glad to hear it's fixed:)

One more thing: you're currently using v_Normal as-is, but you really want to normalize it again in the fragment shader before doing any computations with it (the tutorial seems to be missing this step). Each fragment receives an interpolated normal, and a linear interpolation of unit vectors is not necessarily a unit vector itself.
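A quick numeric check of why (plain Python/NumPy, example vectors only):

```python
import numpy as np

# Interpolating two unit normals gives a vector shorter than unit length,
# which is why the fragment shader should re-normalize per fragment.

n0 = np.array([1.0, 0.0, 0.0])
n1 = np.array([0.0, 1.0, 0.0])

halfway = 0.5 * n0 + 0.5 * n1           # what the varying interpolation produces
print(np.linalg.norm(halfway))           # ~0.707, not 1.0

renormalized = halfway / np.linalg.norm(halfway)
print(np.linalg.norm(renormalized))      # back to unit length
```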
