
I have a 9-slice shader that works mostly as intended:

[Attached image: 9-slice.png]

[Attached images: slice-scifi.png, slice-dashed.png]

Here, the two sprites are separate images, so the shader code works well:

varying vec4 color;
varying vec2 texCoord;

uniform sampler2D tex;

uniform vec2 u_dimensions;
uniform vec2 u_border;

float map(float value, float originalMin, float originalMax, float newMin, float newMax) {
    return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
}

// Helper function, because WET code is bad code.
// Takes the coordinate on the current axis plus the texture- and window-space border fractions.
// (Float literals throughout: GLSL ES has no implicit int-to-float conversion.)
float processAxis(float coord, float textureBorder, float windowBorder) {
    if (coord < windowBorder)
        return map(coord, 0.0, windowBorder, 0.0, textureBorder);
    if (coord < 1.0 - windowBorder)
        return map(coord, windowBorder, 1.0 - windowBorder, textureBorder, 1.0 - textureBorder);
    return map(coord, 1.0 - windowBorder, 1.0, 1.0 - textureBorder, 1.0);
}

void main(void) {
    vec2 newUV = vec2(
        processAxis(texCoord.x, u_border.x, u_dimensions.x),
        processAxis(texCoord.y, u_border.y, u_dimensions.y)
    );
    // Output the color
    gl_FragColor = texture2D(tex, newUV);
}
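For anyone who wants to sanity-check the remapping outside of GLSL, here is a direct Python port of map/processAxis (the snake_case names are mine). The important behavior: 0 maps to 0, the window border maps to the texture border, and 1 maps to 1.

```python
def remap(value, old_min, old_max, new_min, new_max):
    # Same as the GLSL map(): linearly remap value from [old_min, old_max]
    # to [new_min, new_max].
    return (value - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

def process_axis(coord, texture_border, window_border):
    # Piecewise remap: left/top border, stretched middle, right/bottom border.
    if coord < window_border:
        return remap(coord, 0.0, window_border, 0.0, texture_border)
    if coord < 1.0 - window_border:
        return remap(coord, window_border, 1.0 - window_border,
                     texture_border, 1.0 - texture_border)
    return remap(coord, 1.0 - window_border, 1.0, 1.0 - texture_border, 1.0)
```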

Outside the shader, I upload vec2(slice/box.w, slice/box.h) into the u_dimensions variable, and vec2(slice/clip.w, slice/clip.h) into u_border. In this scenario, box is the on-screen box being drawn, clip is the 24x24 image to be 9-sliced, and slice is 8 (the size of each border slice in pixels).
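The uniform setup described above can be sketched on the CPU side like this (a Python sketch; the function name and parameter names are mine, following the box/clip/slice terminology above):

```python
def nine_slice_uniforms(box_w, box_h, clip_w, clip_h, slice_px):
    # u_dimensions: border slice as a fraction of the on-screen box.
    u_dimensions = (slice_px / box_w, slice_px / box_h)
    # u_border: border slice as a fraction of the source texture.
    u_border = (slice_px / clip_w, slice_px / clip_h)
    return u_dimensions, u_border
```

For example, drawing the 24x24 image with 8-pixel slices as a 96x48 box gives u_dimensions = (1/12, 1/6) and u_border = (1/3, 1/3).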

This is great and all, but it falls apart if I decide to pack the various 9-slice images into a single sprite sheet.

[Attached image: 9-slice-fail.png]

Because OpenGL texture coordinates run from 0.0 to 1.0 rather than true pixel coordinates, and the shader samples the full image rather than just the contents of the clipping rectangle, I'm stumped about how to tell the shader what I need it to do. Anyone have pro advice on how to make it more sprite-sheet-friendly? Thank you! :)


You just need to replace your hard-coded 0s and 1s with subspriteX/Y and subspriteW/H.

uniform vec4 subsprite; // [0,1] aka x,y are top-left, [2,3] aka z,w are bottom-right.

// all your code except main...

vec2 subspriteMap(vec2 inner) {
   return mix(subsprite.xy, subsprite.zw, inner.xy);
}

void main(void) {
  vec2 newUV = subspriteMap(vec2(processAxis(...), processAxis(...)));
  gl_FragColor = texture2D(tex, newUV);
}

Untested, quickly banged together. But should do the trick.
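To unpack what mix is doing in that snippet: GLSL's mix(a, b, t) is a + (b - a) * t, applied component-wise. A small Python analogue of subspriteMap (function names are mine, tuples stand in for GLSL vectors):

```python
def mix(a, b, t):
    # GLSL-style linear interpolation, component-wise over tuples.
    return tuple(ai + (bi - ai) * ti for ai, bi, ti in zip(a, b, t))

def subsprite_map(inner_uv, subsprite):
    # subsprite = (x0, y0, x1, y1): top-left and bottom-right corners of the
    # subsprite, in normalized [0, 1] texture coordinates.
    top_left = subsprite[0:2]
    bottom_right = subsprite[2:4]
    return mix(top_left, bottom_right, inner_uv)
```

So inner_uv = (0, 0) lands on the subsprite's top-left corner, (1, 1) on its bottom-right, and everything in between interpolates linearly.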

Posted (edited)

Thanks, @Wyrframe! Here is my current result:

[Attached image: 9-slice-2.png]

It seems a little nearer to correct, but there's now some visible stretching on the left, and the right border is still missing. I definitely think you're onto something with the use of subsprites, though! Also, what does the mix function do here? Do you have any links to further reading on subsprites and how mix plays into them? Thanks! :)

Update:

With some additional fiddling around with the shader code, I was able to get this result:

[Attached images: 9-slice-3.png, slice-test.png]

Current shader code, for anyone in the future trying to do the same thing:

varying vec4 color;
varying vec2 texCoord;

uniform sampler2D tex;

uniform vec2 u_dimensions;
uniform vec2 u_border;

float map(float value, float originalMin, float originalMax, float newMin, float newMax) {
    return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
}

// Helper function, because WET code is bad code.
// Takes the coordinate on the current axis plus the texture- and window-space border fractions.
float processAxis(float coord, float textureBorder, float windowBorder) {
    if (coord < windowBorder)
        return map(coord, 0.0, windowBorder, 0.0, textureBorder);
    if (coord < 1.0 - windowBorder)
        return map(coord, windowBorder, 1.0 - windowBorder, textureBorder, 1.0 - textureBorder);
    return map(coord, 1.0 - windowBorder, 1.0, 1.0 - textureBorder, 1.0);
}

void main(void) {
    vec2 newUV = vec2(
        processAxis(texCoord.x, u_border.x, u_dimensions.x),
        processAxis(texCoord.y, u_border.y, u_dimensions.y)
    );
    newUV.x += 2.0;          // which image to use: 0, 1, or 2
    newUV.x *= 24.0 / 120.0; // clip.w / texture.w
    newUV.y *= 24.0 / 48.0;  // clip.h / texture.h
    gl_FragColor = texture2D(tex, newUV);
}

I still have to figure out how to use the 48x48 sprite instead of one of the three 24x24 ones, and update the code to remove the hard-coded math, but it looks like the most confusing part is finally out of the way! Fun stuff! :)
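The hard-coded constants above generalize straightforwardly. A hypothetical Python check (tile_index, clip size, and sheet size stand in for the literals 2.0, 24, 120, and 48; this assumes a row of equally sized tiles):

```python
def sheet_uv(uv, tile_index, clip_w, clip_h, sheet_w, sheet_h):
    # Shift the 9-sliced UV into the tile_index-th column of same-size tiles,
    # then scale from tile-local [0, 1] down to sheet-relative coordinates.
    u = (uv[0] + tile_index) * clip_w / sheet_w
    v = uv[1] * clip_h / sheet_h
    return (u, v)
```

With the numbers from the shader (tile 2 of a 120x48 sheet of 24x24 tiles), tile-local (0, 0) maps to sheet UV (0.4, 0.0) and (1, 1) maps to (0.6, 0.5).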

Edited by DelicateTreeFrog

13 hours ago, DelicateTreeFrog said:

Also, what does the mix function do in this?

http://lmgtfy.com/?q=glsl+mix

tl;dr: mix(a, b, q), given arbitrary values A and B and a value Q in 0..1, returns the value Q of the way from A to B. I use it there to remap a 0..1 range (the requested coordinate within the subsprite, i.e. the output of your original code) into a range within the texture (the bounding coordinates of the subsprite). To extract subsprite #0 from the 4-subsprite texture map shown in your latest update, were you setting subsprite to the vec4 value [0, 0, texWidth/5, texHeight/2]?

Quote

I still have to figure out how to go about using the 48x48 sprite instead of one of the three 24x24 ones, and update the code to omit the need for hard-coded math [...]

That's what passing the subsprite as a shading parameter is meant to do (a uniform if you're drawing all boxes of one layer and type in one primitives call, or a varying if you want to draw a different box with each pair of triangles). You calculate the bounds once (or at most once per draw call) in your program, and pass them into your shader.
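That "calculate the bounds once" step might look like this on the CPU side (a Python sketch; the function name and the assumption of a cols x rows grid of equally sized subsprites are mine):

```python
def subsprite_bounds(col, row, cols, rows):
    # Normalized (x0, y0, x1, y1) bounds for the tile at (col, row)
    # in a cols x rows grid of equally sized subsprites.
    return (col / cols, row / rows, (col + 1) / cols, (row + 1) / rows)
```

The resulting vec4 is uploaded once per draw call and consumed by subspriteMap in the shader.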

Posted (edited)

Thanks for clarifying what mix does in your code, @Wyrframe! :)

I never had any luck figuring out how to set it up that way, so I just did this:

void main(void) {
    vec2 newUV = vec2(
        processAxis(texCoord.x, u_border.x, u_dimensions.x),
        processAxis(texCoord.y, u_border.y, u_dimensions.y)
    );
    newUV.xy += u_clip.xy / u_clip.zw; // u_clip = (x, y, w, h) of the clip rect, in pixels
    newUV.xy *= u_clip.zw / u_texsize.xy; // u_texsize = sprite sheet size in pixels
    // (note the .zw swizzle: dividing by .wz only happens to work for square clips)
    gl_FragColor = texture2D(tex, newUV);
}

Now it works for all possible clipping coordinates and sizes. I'll probably move the division from the shader into the CPU side of the program so it's evaluated once per draw call instead of once per pixel, but yeah, very cool!
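As a sanity check, those two added lines are algebraically the same as computing the pixel rectangle directly and then normalizing. A quick Python comparison (assuming u_clip = (x, y, w, h) in pixels and u_texsize = sheet size in pixels; names are mine):

```python
def shader_style(uv, clip, tex):
    # (uv + clip.xy / clip.zw) * clip.zw / tex, as in the shader above.
    u = (uv[0] + clip[0] / clip[2]) * clip[2] / tex[0]
    v = (uv[1] + clip[1] / clip[3]) * clip[3] / tex[1]
    return (u, v)

def pixel_rect_style(uv, clip, tex):
    # (clip.xy + uv * clip.zw) / tex -- the same expression, factored differently.
    return ((clip[0] + uv[0] * clip[2]) / tex[0],
            (clip[1] + uv[1] * clip[3]) / tex[1])
```

Both forms give identical UVs; folding the arithmetic into a single offset and scale computed per draw call is the once-per-draw optimization mentioned above.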

Edited by DelicateTreeFrog

