Search the Community

Showing results for tags 'OpenGL ES'.

Found 51 results

  1. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one gives better performance on desktop and/or mobile devices? I've read somewhere that OpenGL performance depends on how the code uses it, but we can still compare the performance of some games across OpenGL versions, so I don't know. Which of the two uses less CPU and GPU, i.e. performs better? Thanks
  2. Originally I just used GL_LINE_STRIP to render lines, but the line width visibly differs between devices, and lines can't be textured! So I decided to render my lines with triangles instead, so I can control their width and apply textures. I can already convert a line segment from two given points (or two joined lines from three points) into textured quads. Next I want to handle the joints of these quads: since the app draws from mobile touchscreen input, circular caps/joints fit better than pointy ones. The lessons and tutorials I've seen suggest it's as simple as adding a circle at the joint end - is it really done that simply? Let me know if you guys have any tips or further suggestions, or a link to a source/tutorial (OpenGL/OpenGL ES). Much appreciated!
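     For reference, the "add a circle at the joint" suggestion really does boil down to emitting a triangle fan around the joint point. A minimal Java sketch (the segment count and the flat x/y vertex layout are illustrative assumptions, not from the post):

        // Builds a filled disc (triangle fan) around a joint.
        // cx, cy: joint position; radius: half the line width.
        static float[] buildRoundCap(float cx, float cy, float radius, int segments) {
            // 2 floats (x, y) per vertex: 1 center + (segments + 1) rim vertices
            float[] verts = new float[(segments + 2) * 2];
            verts[0] = cx; // fan center
            verts[1] = cy;
            for (int i = 0; i <= segments; i++) { // <= closes the loop
                double a = 2.0 * Math.PI * i / segments;
                verts[(i + 1) * 2]     = cx + (float) (Math.cos(a) * radius);
                verts[(i + 1) * 2 + 1] = cy + (float) (Math.sin(a) * radius);
            }
            return verts; // draw with GL_TRIANGLE_FAN, segments + 2 vertices
        }

     With 16 or so segments the cap looks round at typical line widths; texturing it is then just a matter of assigning UVs the same way as for the quads.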
  3. In early 2017 I had this idea: if I can stream an HD movie without downloading the whole thing, I could stream a massive open-world game without downloading it as well. Most people are under the impression that a game running in the browser can't look good, because then it would take forever to load. Well, in game dev the concept of LOD - Level Of Detail - has existed for quite a while now, and there is no reason why we couldn't apply it in the browser as well. The game I developed loads in a matter of seconds. It takes a couple of seconds to load the engine itself, then when you press play: - It loads the terrain pieces closest to you first, then what's further away. - It loads the low-poly versions of models like trees, rocks and bushes first, then the high-poly versions when available. - Structures... - Animals... - Sound... - Etc... you get the idea. My point is, it is POSSIBLE to build a 3D version of the Internet, where instead of browsing through websites, we could jump from one 3D space to the next. I "invite" everyone to make this happen. I've made a 3D survival game with a massive terrain to prove the tech works. I want you guys to build your own 3D spaces implementing your own ideas of what the web should look like in the future. We could just link them all together and make this Interconnected Virtual Space happen - yeah, the Metaverse, for the Snow Crash fans out there. I would love to hear what you think about the applications of 3D spaces on the Web. Please leave me a comment if you are as excited about the possibilities as I am. Backing up my claims: a live tech demo is available at https://plainsofvr.com Watch the open world load instantly, then gradually improve:
  4. Hello, I'm trying to make a bloom effect in my 2D game made with LibGDX. I have a problem on the shader side: in short, the glow effect should extend past the bounds of the texture - if I have a 40x40 texture, the shader should cover 60x60, for example. This image explains the problem better; as you can see, the bloom effect is cut off where the texture ends. I suppose I should modify the default vertex shader to get what I need, but I'm not very familiar with GLSL programming, so I'm asking if someone could help me. Vertex:

        #ifdef GL_ES
        precision highp float;
        precision highp int;
        #endif

        attribute vec4 a_position;
        attribute vec4 a_color;
        attribute vec2 a_texCoord0;

        uniform mat4 u_projTrans;

        varying vec4 v_color;
        varying vec2 v_texCoords;

        void main() {
            v_color = a_color;
            v_texCoords = a_texCoord0;
            gl_Position = u_projTrans * a_position;
        }

     Fragment:

        #ifdef GL_ES
        precision highp float;
        precision highp int;
        #endif

        varying vec4 v_color;
        varying vec2 v_texCoords;

        uniform sampler2D u_texture;
        uniform mat4 u_projTrans;
        uniform float u_blurSize;
        uniform float u_intensity;

        void main() {
            vec4 sum = vec4(0);

            // blur in x (horizontal)
            // take nine samples, with the distance blurSize between them
            sum += texture2D(u_texture, vec2(v_texCoords.x - 4.0*u_blurSize, v_texCoords.y)) * 0.05;
            sum += texture2D(u_texture, vec2(v_texCoords.x - 3.0*u_blurSize, v_texCoords.y)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x - 2.0*u_blurSize, v_texCoords.y)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x - u_blurSize, v_texCoords.y)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
            sum += texture2D(u_texture, vec2(v_texCoords.x + u_blurSize, v_texCoords.y)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 2.0*u_blurSize, v_texCoords.y)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 3.0*u_blurSize, v_texCoords.y)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 4.0*u_blurSize, v_texCoords.y)) * 0.05;

            // blur in y (vertical)
            // take nine samples, with the distance blurSize between them
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 4.0*u_blurSize)) * 0.05;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 3.0*u_blurSize)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 2.0*u_blurSize)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - u_blurSize)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + u_blurSize)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 2.0*u_blurSize)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 3.0*u_blurSize)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 4.0*u_blurSize)) * 0.05;

            gl_FragColor = sum * u_intensity + texture2D(u_texture, v_texCoords);
        }
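     A common way to let the blur spill past the sprite's own bounds is to give it room first: draw the sprite centered into a larger offscreen target with a transparent border, then run the blur shader over that target. A hedged libGDX sketch - batch, spriteTexture, bloomShader, x, y and the 10-pixel padding are illustrative assumptions, not from the post:

        // Draw the 40x40 sprite centered into a padded offscreen buffer.
        int pad = 10; // should cover at least 4 * u_blurSize in pixels
        FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 40 + 2 * pad, 40 + 2 * pad, false);

        fbo.begin();
        // match the batch projection to the FBO's pixel size while it is bound
        batch.getProjectionMatrix().setToOrtho2D(0, 0, fbo.getWidth(), fbo.getHeight());
        Gdx.gl.glClearColor(0f, 0f, 0f, 0f);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(spriteTexture, pad, pad); // transparent border all around
        batch.end();
        fbo.end(); // (restore the screen projection afterwards)

        // Sampling up to 4 * u_blurSize past the sprite now stays inside the texture.
        batch.setShader(bloomShader);
        batch.begin();
        // FrameBuffer color textures are y-flipped in libGDX, hence the flipY argument.
        batch.draw(fbo.getColorBufferTexture(), x, y, fbo.getWidth(), fbo.getHeight(),
                0, 0, fbo.getWidth(), fbo.getHeight(), false, true);
        batch.end();
        batch.setShader(null);

     The vertex shader itself doesn't need changing with this approach; the quad simply becomes 60x60 worth of texture instead of 40x40.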
  5. Psychopathetica

    Raycast From Camera To Mouse Pointer

     Hello. For the last two weeks I've been struggling with one thing: 3D object picking. And I'm close to getting it right! It works great when facing front: with a first-person style camera I can go up, down, forward, backward, strafe left, strafe right, and it works. The problem is that when I rotate the camera, the end of the ray that is not the mouse end goes off somewhere other than the camera, completely throwing the picking off! So I'm going to go step by step and see if you guys can spot what went wrong. The first step was to normalize the mouse device coordinates, or in my case, touch coordinates:

        public static float[] getNormalizedDeviceCoords(float touchX, float touchY) {
            float[] result = new float[2];
            result[0] = (2f * touchX) / Render.camera.screenWidth - 1f;
            result[1] = 1f - (2f * touchY) / Render.camera.screenHeight;
            return result;
        }

     which in turn is converted into homogeneous clip coordinates:

        float[] homogeneousClipCoords = new float[]{normalizedDeviceCoords[0], normalizedDeviceCoords[1], -1f, 1f};

     The next step was to convert the homogeneous clip coordinates into eye coordinates:

        public static float[] getEyeCoords(float[] clipCoords) {
            float[] invertedProjection = new float[16];
            Matrix.invertM(invertedProjection, 0, Render.camera.projMatrix, 0);
            float[] eyeCoords = new float[4];
            Matrix.multiplyMV(eyeCoords, 0, invertedProjection, 0, clipCoords, 0);
            float[] result = new float[]{eyeCoords[0], eyeCoords[1], -1f, 0f};
            return result;
        }

     Next was to convert the eye coordinates into world coordinates and normalize:

        public static float[] getWorldCoords(float[] eyeCoords) {
            float[] invertedViewMatrix = new float[16];
            Matrix.invertM(invertedViewMatrix, 0, Render.camera.viewM, 0);
            float[] rayWorld = new float[4];
            Matrix.multiplyMV(rayWorld, 0, invertedViewMatrix, 0, eyeCoords, 0);
            float length = (float) Math.sqrt(rayWorld[0] * rayWorld[0] + rayWorld[1] * rayWorld[1] + rayWorld[2] * rayWorld[2]);
            if (length != 0) {
                rayWorld[0] /= length;
                rayWorld[1] /= length;
                rayWorld[2] /= length;
            }
            return rayWorld;
        }

     Putting this all together gives me a method to get the ray direction I need:

        public static float[] calculateMouseRay() {
            float touchX = MainActivity.touch.x;
            float touchY = MainActivity.touch.y;
            float[] normalizedDeviceCoords = getNormalizedDeviceCoords(touchX, touchY);
            float[] homogeneousClipCoords = new float[]{normalizedDeviceCoords[0], normalizedDeviceCoords[1], -1f, 1f};
            float[] eyeCoords = getEyeCoords(homogeneousClipCoords);
            float[] worldCoords = getWorldCoords(eyeCoords);
            return worldCoords;
        }

     I then test for the ray / sphere intersection using this, with double precision:

        public static boolean getRaySphereIntersection(float[] rayOrigin, float[] spherePosition, float[] rayDirection, float radius) {
            double[] v = new double[4];
            double[] dir = new double[4];

            // Calculate the a, b, c and d coefficients.
            // a = (XB-XA)^2 + (YB-YA)^2 + (ZB-ZA)^2
            // b = 2 * ((XB-XA)(XA-XC) + (YB-YA)(YA-YC) + (ZB-ZA)(ZA-ZC))
            // c = (XA-XC)^2 + (YA-YC)^2 + (ZA-ZC)^2 - r^2
            // d = b^2 - 4*a*c
            v[0] = (double) rayOrigin[0] - (double) spherePosition[0];
            v[1] = (double) rayOrigin[1] - (double) spherePosition[1];
            v[2] = (double) rayOrigin[2] - (double) spherePosition[2];
            dir[0] = (double) rayDirection[0];
            dir[1] = (double) rayDirection[1];
            dir[2] = (double) rayDirection[2];

            double a = (dir[0] * dir[0]) + (dir[1] * dir[1]) + (dir[2] * dir[2]);
            double b = (dir[0] * v[0] + dir[1] * v[1] + dir[2] * v[2]) * 2.0;
            double c = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]) - ((double) radius * (double) radius);

            // Find the discriminant.
            //double d = (b * b) - c;
            double d = (b * b) - (4.0 * a * c);
            Log.d("d", String.valueOf(d));

            if (d == 0.0) {
                // one root
            } else if (d > 0.0) {
                // two roots; parenthesize the numerator so the whole of
                // -b +/- sqrt(d) is divided by 2a, not just sqrt(d)
                double x1 = (-b - Math.sqrt(d)) / (2.0 * a);
                double x2 = (-b + Math.sqrt(d)) / (2.0 * a);
                Log.d("X1 X2", String.valueOf(x1) + ", " + String.valueOf(x2));
                if ((x1 >= 0.0) || (x2 >= 0.0)) {
                    return true;
                }
                if ((x1 < 0.0) || (x2 >= 0.0)) {
                    return true;
                }
            }
            return false;
        }

     After a week and a half of playing around with this chunk of code and researching everything I could on Google, I found out by sheer accident that the sphere position to use in this method must be the transformed sphere position extracted from the model matrix, not the raw position - which not one damn tutorial or forum article mentioned! It works great using this. I haven't tested against the object's modelView yet, though. To visually see the ray, I made a class to draw a 3D line, and noticed that the end at my mouse cursor causes no trouble at all. The other end, which is the origin, is sort of working: it only messes up when I rotate left or right as I move around with an FPS-style camera. Which brings me to my next point: I have no idea what the ray origin should be for the camera, and I have 4 choices. 3 of them worked, but gave me the same results.

     Ray origin choices:

     1. Using just camera.position.x, camera.position.y, and camera.position.z for my ray origin worked flawlessly facing straight, because the ray origin remained in the center of the screen, but it messed up when I rotated the camera, moving off screen as I rotated. Now theoretically, even if you are facing at an angle, you are still fixated at that point, and the ray origin shouldn't fly away from the center of the screen at all. A point is a point, after all.

     2. Using the camera's model matrix (used for translating and rotating the camera, and later multiplied into the camera's view matrix), specifically -modelMatrix[12], -modelMatrix[13], and -modelMatrix[14] (note I am using negatives), gave me nearly the same results. The only difference is that camera rotations play a role in the camera's position. Great facing straight, but the ray origin is no longer centered at other angles.

     3. Using the camera's view matrix didn't work at all, positive or negative, using elements 12, 13, and 14 of the matrix.

     4. Using the camera's inverted view matrix (positive invertedViewMatrix[12], invertedViewMatrix[13], and invertedViewMatrix[14]) did work, but gave me what seemed like the same results as #2.

     So basically, I'm having difficulty getting the other end of the ray, the ray origin. Shooting the ray to the mouse pointer was no problem, like I said, but with the camera end of the ray being off, the accuracy is thrown off a lot at camera angles other than straight ahead. If anyone has any ideas, please let me know. I'm sure my math is correct. If you need any more information, such as the camera, or how I render the ray (which I don't think is needed), I can show that too. Thanks in advance!
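     For reference, the ray origin that corresponds to the camera's world-space position is the translation column of the inverted view matrix - i.e. option 4 above. A minimal sketch in the same style as the post's own helpers (Render.camera.viewM is the view matrix already used in getWorldCoords):

        public static float[] getRayOrigin() {
            float[] invertedViewMatrix = new float[16];
            Matrix.invertM(invertedViewMatrix, 0, Render.camera.viewM, 0);
            // The translation column of the inverse view matrix is the
            // camera's position in world space.
            return new float[]{ invertedViewMatrix[12], invertedViewMatrix[13], invertedViewMatrix[14] };
        }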
  6. Hi, I'm trying to solve a problem where I need to get all the colors from an image. I see only one way: walk through the raster data in a loop and collect all the bytes, but I'm sure there is a better way to get the colors from an image. I'm thinking about somehow collecting the colors into a result texture... Is this an ordinary situation? Could you help me? I didn't find anything on the internet... Thanks.
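     If the image is accessible CPU-side, the loop really is the usual approach; a set takes care of the "all distinct colors" part in a single pass. A minimal Java sketch, assuming the image is an Android Bitmap (purely illustrative - the same idea applies to any raster source):

        // Collects every distinct ARGB color present in a Bitmap.
        static java.util.Set<Integer> distinctColors(android.graphics.Bitmap bmp) {
            int w = bmp.getWidth(), h = bmp.getHeight();
            int[] pixels = new int[w * h];
            // one bulk copy instead of a per-pixel getPixel() call
            bmp.getPixels(pixels, 0, w, 0, 0, w, h);
            java.util.Set<Integer> colors = new java.util.HashSet<>();
            for (int p : pixels) colors.add(p);
            return colors;
        }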
  7. I have a fullscreen-sized quad. I draw it on screen, and then in the fragment shader I decide whether a grid line is visible or not and draw it (2D grid rendering). However, when I zoom out the lines disappear, not to mention I can't achieve a constant thickness regardless of the zoom. Left: zoomed in; right and below: zoomed out. And here's the shader:

        precision highp float;

        uniform vec3 translation;
        uniform float grid_size;
        uniform vec3 bg_color;
        uniform vec3 grid_color;
        uniform float lthick;
        uniform float sw;
        uniform float sh;
        uniform float scale;

        varying vec3 vc;

        int modulo(float x, float y) {
            return int(x - y * float(int(x / y)));
        }

        float modulof(float x, float y) {
            return x - y * float(int(x / y));
        }

        void main() {
            bool found = false;
            vec2 fragcoord = vc.xy * 0.5 + 0.5;

            // find how many units we have in worldspace for a halfspace
            vec2 sc = (vec2(sw, sh) / 2.0) / scale;
            sc = sc * vc.xy + translation.xy; // world position

            int px = modulo(float(sc.x), grid_size);
            int py = modulo(float(sc.y), grid_size);

            if ((px == 0) || (py == 0))
                found = true;

            if (found)
                gl_FragColor = vec4(grid_color, 1.0);
            else
                gl_FragColor = vec4(bg_color, 1.0);
        }

     I know that when zooming out, a single fragment can no longer land exactly on a grid line.
  8. I wonder how one could achieve that. Personally, I could pass additional vertex data: the first attribute would be the actual geometric position, and the second would be the next vertex in the array. But that's way too overcomplicated - I'd have to build two sets of arrays, so I just don't. I can't think of anything else: something that would not force me to pass another attribute to the shaders, and that won't force me to change my internal model structure at all. By the way, I'm drawing the lines using GL_LINE_LOOP. Any thoughts?
  9. Hello everyone. I'm trying to display a 2D texture on screen, but the rendering isn't working correctly. First of all, I followed this tutorial to be able to render text to the screen (I adapted it to render with OpenGL ES 2.0): https://learnopengl.com/code_viewer.php?code=in-practice/text_rendering So here are the shaders I'm using:

        const char gVertexShader[] =
            "#version 320 es\n"
            "layout (location = 0) in vec4 vertex;\n"
            "out vec2 TexCoords;\n"
            "uniform mat4 projection;\n"
            "void main() {\n"
            "    gl_Position = projection * vec4(vertex.xy, 0.0, 1.0);\n"
            "    TexCoords = vertex.zw;\n"
            "}\n";

        const char gFragmentShader[] =
            "#version 320 es\n"
            "precision mediump float;\n"
            "in vec2 TexCoords;\n"
            "out vec4 color;\n"
            "uniform sampler2D text;\n"
            "uniform vec3 textColor;\n"
            "void main() {\n"
            "    vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);\n"
            "    color = vec4(textColor, 1.0) * sampled;\n"
            "}\n";

     The text rendering works very well, so I would like to keep this shader program to render a texture loaded from a PNG. For that I'm using libpng to load the PNG into a texture; here is my code:

        GLuint Cluster::loadPngFromPath(const char *file_name, int *width, int *height)
        {
            png_byte header[8];

            FILE *fp = fopen(file_name, "rb");
            if (fp == 0) {
                return 0;
            }

            fread(header, 1, 8, fp);
            if (png_sig_cmp(header, 0, 8)) {
                fclose(fp);
                return 0;
            }

            png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
            if (!png_ptr) {
                fclose(fp);
                return 0;
            }

            png_infop info_ptr = png_create_info_struct(png_ptr);
            if (!info_ptr) {
                png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
                fclose(fp);
                return 0;
            }

            png_infop end_info = png_create_info_struct(png_ptr);
            if (!end_info) {
                png_destroy_read_struct(&png_ptr, &info_ptr, (png_infopp)NULL);
                fclose(fp);
                return 0;
            }

            if (setjmp(png_jmpbuf(png_ptr))) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                fclose(fp);
                return 0;
            }

            png_init_io(png_ptr, fp);
            png_set_sig_bytes(png_ptr, 8);
            png_read_info(png_ptr, info_ptr);

            int bit_depth, color_type;
            png_uint_32 temp_width, temp_height;
            png_get_IHDR(png_ptr, info_ptr, &temp_width, &temp_height, &bit_depth, &color_type, NULL, NULL, NULL);
            if (width)  { *width  = temp_width; }
            if (height) { *height = temp_height; }

            png_read_update_info(png_ptr, info_ptr);

            int rowbytes = png_get_rowbytes(png_ptr, info_ptr);
            rowbytes += 3 - ((rowbytes - 1) % 4); // rows are padded up to a 4-byte boundary here

            png_byte *image_data;
            image_data = (png_byte *)malloc(rowbytes * temp_height * sizeof(png_byte) + 15);
            if (image_data == NULL) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                fclose(fp);
                return 0;
            }

            png_bytep *row_pointers = (png_bytep *)malloc(temp_height * sizeof(png_bytep));
            if (row_pointers == NULL) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                free(image_data);
                fclose(fp);
                return 0;
            }

            int i;
            for (i = 0; i < temp_height; i++) {
                // rows are filled bottom-up to flip the image for GL
                row_pointers[temp_height - 1 - i] = image_data + i * rowbytes;
            }

            png_read_image(png_ptr, row_pointers);

            GLuint texture;
            glGenTextures(1, &texture);
            glBindTexture(GL_TEXTURE_2D, texture);
            // note: unpack alignment is 1, while the rows above were padded to 4 bytes
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexImage2D(GL_TEXTURE_2D, GL_ZERO, GL_RGB, temp_width, temp_height, GL_ZERO, GL_RGB, GL_UNSIGNED_BYTE, image_data);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
            free(image_data);
            free(row_pointers);
            fclose(fp);
            return texture;
        }

     This code just generates the texture, and I store its id in memory. Then I want to display my texture at any position (X, Y) on my screen, so I did the following (this works, at least the positioning does):

        // MY TEXTURE IS 32x32 PIXELS!
        void Cluster::printTexture(GLuint idTexture, GLfloat x, GLfloat y)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindVertexArray(VAO);

            GLfloat vertices[6][4] = {
                { x,      y + 32, 0.0, 0.0 },
                { x,      y,      0.0, 1.0 },
                { x + 32, y,      1.0, 1.0 },

                { x,      y + 32, 0.0, 0.0 },
                { x + 32, y,      1.0, 1.0 },
                { x + 32, y + 32, 1.0, 0.0 }
            };

            glBindTexture(GL_TEXTURE_2D, idTexture);
            glBindBuffer(GL_ARRAY_BUFFER, VBO);
            glBufferSubData(GL_ARRAY_BUFFER, GL_ZERO, sizeof(vertices), vertices);
            glBindBuffer(GL_ARRAY_BUFFER, GL_ZERO);

            glUniform1i(this->mTextShaderHandle, GL_ZERO);
            // the vertices above are laid out as two independent triangles
            glDrawArrays(GL_TRIANGLE_STRIP, GL_ZERO, 6);
        }

     My .png is a blue square. The result is that my texture is not loaded correctly: it is incomplete and there are many small black spots. I don't know what's going on. Could it be the vertices or the loading code? Or maybe I need to add something to the shader? I don't know; I really need help. Thanks!
  10. DelicateTreeFrog

    OpenGL GLSL: 9-slicing

     I have a 9-slice shader working mostly nicely. Here, both the sprites are separate images, so the shader code works well:

        varying vec4 color;
        varying vec2 texCoord;

        uniform sampler2D tex;
        uniform vec2 u_dimensions;
        uniform vec2 u_border;

        float map(float value, float originalMin, float originalMax, float newMin, float newMax) {
            return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
        }

        // Helper function, because WET code is bad code
        // Takes in the coordinate on the current axis and the borders
        float processAxis(float coord, float textureBorder, float windowBorder) {
            if (coord < windowBorder)
                return map(coord, 0, windowBorder, 0, textureBorder);
            if (coord < 1 - windowBorder)
                return map(coord, windowBorder, 1 - windowBorder, textureBorder, 1 - textureBorder);
            return map(coord, 1 - windowBorder, 1, 1 - textureBorder, 1);
        }

        void main(void) {
            vec2 newUV = vec2(
                processAxis(texCoord.x, u_border.x, u_dimensions.x),
                processAxis(texCoord.y, u_border.y, u_dimensions.y)
            );
            // Output the color
            gl_FragColor = texture2D(tex, newUV);
        }

     External from the shader, I upload vec2(slice/box.w, slice/box.h) into the u_dimensions variable, and vec2(slice/clip.w, slice/clip.h) into u_border. In this scenario, box represents the box dimensions, clip represents the dimensions of the 24x24 image to be 9-sliced, and slice is 8 (the size of each slice in pixels). This is great and all, but it's very disagreeable if I decide I'm going to organize the various 9-slice images into a single image sprite sheet. Because OpenGL works between 0.0 and 1.0 instead of true pixel coordinates, and processes the full images rather than just the contents of the clipping rectangles, I'm kind of stumped about how to tell the shader to do what I need it to do. Anyone have pro advice on how to get it to be more sprite-sheet-friendly? Thank you!
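     One common route for the sprite-sheet case is to keep the 9-slice math in the cell's local 0..1 space and remap into sheet UVs only at the sampling step. A hedged host-side sketch of the two extra uniforms that requires (the cell/sheet parameter names are illustrative):

        // Normalized offset and size of one sprite-sheet cell, to upload as
        // e.g. u_cellOffset and u_cellSize. u_dimensions / u_border keep their
        // existing meanings, still computed against the 24x24 cell.
        static float[] cellUniforms(int cellX, int cellY, int clipW, int clipH,
                                    int sheetW, int sheetH) {
            return new float[] {
                cellX / (float) sheetW, cellY / (float) sheetH,  // u_cellOffset
                clipW / (float) sheetW, clipH / (float) sheetH   // u_cellSize
            };
        }

     The fragment shader then ends with something like gl_FragColor = texture2D(tex, u_cellOffset + newUV * u_cellSize); so newUV never has to know the sheet exists. texCoord would likewise run 0..1 over the quad, as it does now.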
  11. Not long ago, I created a nice OBJ loader that loads models exported from 3D Studio Max. The only problem is that although it works, and works great, I wasn't using vertex buffers. Now that I've applied vertex buffers, it seems to only use the first color of the texture and spread it all across the polygon. I examined my code over and over again, and the vertex buffer code is correct. But when I comment out all of my vertex buffer code, it works as intended. I've practically given up on fixing it on my own, so hopefully you guys will be able to figure out what is wrong.

        public static final int BYTES_PER_FLOAT = 4;
        public static final int POSITION_COMPONENT_COUNT_3D = 4;
        public static final int COLOR_COMPONENT_COUNT = 4;
        public static final int TEXTURE_COORDINATES_COMPONENT_COUNT = 2;
        public static final int NORMAL_COMPONENT_COUNT = 3;
        public static final int POSITION_COMPONENT_STRIDE_2D = POSITION_COMPONENT_COUNT_2D * BYTES_PER_FLOAT;
        public static final int POSITION_COMPONENT_STRIDE_3D = POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT;
        public static final int COLOR_COMPONENT_STRIDE = COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT;
        public static final int TEXTURE_COORDINATE_COMPONENT_STRIDE = TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT;
        public static final int NORMAL_COMPONENT_STRIDE = NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT;

        int loadFile() {
            ArrayList<Vertex3D> tempVertexArrayList = new ArrayList<Vertex3D>();
            ArrayList<TextureCoord2D> tempTextureCoordArrayList = new ArrayList<TextureCoord2D>();
            ArrayList<Vector3D> tempNormalArrayList = new ArrayList<Vector3D>();
            ArrayList<Face3D> tempFaceArrayList = new ArrayList<Face3D>();
            StringBuilder body = new StringBuilder();

            try {
                InputStream inputStream = context.getResources().openRawResource(resourceID);
                InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
                BufferedReader bufferedReader = new BufferedReader(inputStreamReader);

                String nextLine;
                String subString;
                String[] stringArray;
                String[] stringArray2;
                int[] indexNumberList = new int[3];
                int[] textureCoordNumberList = new int[3];
                int[] normalNumberList = new int[3];
                int i = 0;
                int j = 0;
                int k = 0;

                try {
                    while ((nextLine = bufferedReader.readLine()) != null) {
                        if (nextLine.startsWith("v ")) {
                            subString = nextLine.substring(1).trim();
                            stringArray = subString.split(" ");
                            try {
                                tempVertexArrayList.add(new Vertex3D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2]), 1f));
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading vertex list");
                                return 0;
                            }
                            String x = String.valueOf(tempVertexArrayList.get(i).x);
                            String y = String.valueOf(tempVertexArrayList.get(i).y);
                            String z = String.valueOf(tempVertexArrayList.get(i).z);
                            //Log.d(TAG, "vertex " + String.valueOf(i) + ": " + x + ", " + y + ", " + z);
                            i++;
                        }

                        if (nextLine.startsWith("vn ")) {
                            subString = nextLine.substring(2).trim();
                            stringArray = subString.split(" ");
                            try {
                                if (reverseNormals) {
                                    tempNormalArrayList.add(new Vector3D(-Float.parseFloat(stringArray[0]), -Float.parseFloat(stringArray[1]), -Float.parseFloat(stringArray[2])));
                                } else {
                                    tempNormalArrayList.add(new Vector3D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2])));
                                }
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading normal list");
                                return 0;
                            }
                            String nx = String.valueOf(tempNormalArrayList.get(j).x);
                            String ny = String.valueOf(tempNormalArrayList.get(j).y);
                            String nz = String.valueOf(tempNormalArrayList.get(j).z);
                            //Log.d(TAG, "normal " + String.valueOf(j) + ": " + nx + ", " + ny + ", " + nz);
                            j++;
                        }

                        if (nextLine.startsWith("vt ")) {
                            subString = nextLine.substring(2).trim();
                            stringArray = subString.split(" ");
                            try {
                                tempTextureCoordArrayList.add(new TextureCoord2D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1])));
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading texture coordinate list");
                                return 0;
                            }
                            String tu = String.valueOf(tempTextureCoordArrayList.get(k).tu);
                            String tv = String.valueOf(tempTextureCoordArrayList.get(k).tv);
                            //Log.d(TAG, "texture coord " + String.valueOf(k) + ": " + tu + ", " + tv);
                            k++;
                        }

                        if (nextLine.startsWith("f ")) {
                            subString = nextLine.substring(1).trim();
                            stringArray = subString.split(" ");
                            for (int index = 0; index <= 2; index++) {
                                stringArray2 = stringArray[index].split("/");
                                try {
                                    indexNumberList[index] = Integer.parseInt(stringArray2[0]) - 1;
                                    if (indexNumberList[index] < 0) {
                                        Log.d(TAG, "Error: indexNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading indexNumberList[]");
                                    return 0;
                                }
                                try {
                                    textureCoordNumberList[index] = Integer.parseInt(stringArray2[1]) - 1;
                                    if (textureCoordNumberList[index] < 0) {
                                        Log.d(TAG, "Error: textureCoordNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading textureCoordNumberList[]");
                                    return 0;
                                }
                                try {
                                    normalNumberList[index] = Integer.parseInt(stringArray2[2]) - 1;
                                    if (normalNumberList[index] < 0) {
                                        Log.d(TAG, "Error: normalNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading normalNumberList[]");
                                    return 0;
                                }
                            }
                            tempFaceArrayList.add(new Face3D(indexNumberList[0], textureCoordNumberList[0], normalNumberList[0], indexNumberList[1], textureCoordNumberList[1], normalNumberList[1], indexNumberList[2], textureCoordNumberList[2], normalNumberList[2]));
                        }

                        body.append(nextLine);
                        body.append('\n');
                    }

                    // Now that everything has successfully loaded, populate the public variables.
                    if (tempVertexArrayList != null && tempVertexArrayList.size() != 0)
                        vertexArrayList.addAll(tempVertexArrayList);
                    if (tempTextureCoordArrayList != null && tempTextureCoordArrayList.size() != 0)
                        textureCoordArrayList.addAll(tempTextureCoordArrayList);
                    if (tempNormalArrayList != null && tempNormalArrayList.size() != 0)
                        normalArrayList.addAll(tempNormalArrayList);
                    if (tempFaceArrayList != null && tempFaceArrayList.size() != 0)
                        faceArrayList.addAll(tempFaceArrayList);

                    vertexList = new float[faceArrayList.size() * POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE];
                    indexList = new short[faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE];
                    colorList = new float[faceArrayList.size() * COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];
                    textureCoordList = new float[faceArrayList.size() * TEXTURE_COORDINATES_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];
                    normalList = new float[faceArrayList.size() * NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];

                    int nextFace = 0;
                    int step = POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < vertexList.length; currentVertex += step) {
                        vertexList[currentVertex + 0] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).x;
                        vertexList[currentVertex + 1] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).y;
                        vertexList[currentVertex + 2] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).z;
                        vertexList[currentVertex + 3] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).w;
                        vertexList[currentVertex + 4] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).x;
                        vertexList[currentVertex + 5] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).y;
                        vertexList[currentVertex + 6] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).z;
                        vertexList[currentVertex + 7] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).w;
                        vertexList[currentVertex + 8] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).x;
                        vertexList[currentVertex + 9] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).y;
                        vertexList[currentVertex + 10] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).z;
                        vertexList[currentVertex + 11] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).w;
                        nextFace++;
                    }
                    numberOfVertices = vertexList.length / POSITION_COMPONENT_COUNT_3D;

                    nextFace = 0;
                    for (int currentIndex = 0; currentIndex < indexList.length; currentIndex += NUMBER_OF_SIDES_PER_FACE) {
                        indexList[currentIndex + 0] = faceArrayList.get(nextFace).indexNumberList.get(0).shortValue();
                        indexList[currentIndex + 1] = faceArrayList.get(nextFace).indexNumberList.get(1).shortValue();
                        indexList[currentIndex + 2] = faceArrayList.get(nextFace).indexNumberList.get(2).shortValue();
                        nextFace++; // advance to the next face
                    }

                    step = COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < colorList.length; currentVertex += step) {
                        colorList[currentVertex + 0] = red;
                        colorList[currentVertex + 1] = green;
                        colorList[currentVertex + 2] = blue;
                        colorList[currentVertex + 3] = alpha;
                        colorList[currentVertex + 4] = red;
                        colorList[currentVertex + 5] = green;
                        colorList[currentVertex + 6] = blue;
                        colorList[currentVertex + 7] = alpha;
                        colorList[currentVertex + 8] = red;
                        colorList[currentVertex + 9] = green;
                        colorList[currentVertex + 10] = blue;
                        colorList[currentVertex + 11] = alpha;
                    }

                    nextFace = 0;
                    step = TEXTURE_COORDINATES_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < textureCoordList.length; currentVertex += step) {
                        textureCoordList[currentVertex + 0] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tu * mult;
                        textureCoordList[currentVertex + 1] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tv * mult;
                        textureCoordList[currentVertex + 2] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tu * mult;
                        textureCoordList[currentVertex + 3] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tv * mult;
                        textureCoordList[currentVertex + 4] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tu * mult;
                        textureCoordList[currentVertex + 5] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tv * mult;
                        nextFace++;
                    }

                    nextFace = 0;
                    step = NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < normalList.length; currentVertex += step) {
                        normalList[currentVertex + 0] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).x;
                        normalList[currentVertex + 1] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).y;
                        normalList[currentVertex + 2] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).z;
                        normalList[currentVertex + 3] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).x;
                        normalList[currentVertex + 4] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).y;
                        normalList[currentVertex + 5] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).z;
                        normalList[currentVertex + 6] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).x;
                        normalList[currentVertex + 7] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).y;
                        normalList[currentVertex + 8] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).z;
                        nextFace++;
                    }

                    vertexBuffer = ByteBuffer.allocateDirect(vertexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    indexBuffer = ByteBuffer.allocateDirect(indexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asShortBuffer();
                    colorBuffer = ByteBuffer.allocateDirect(colorList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    textureCoordBuffer = ByteBuffer.allocateDirect(textureCoordList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    normalBuffer = ByteBuffer.allocateDirect(normalList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();

                    vertexBuffer.put(vertexList).position(0);
                    indexBuffer.put(indexList).position(0);
                    colorBuffer.put(colorList).position(0);
                    textureCoordBuffer.put(textureCoordList).position(0);
                    normalBuffer.put(normalList).position(0);

                    createVertexBuffer();

                    uMVPMatrixHandle = glGetUniformLocation(program, U_MVPMATRIX);
                    uMVMatrixHandle = glGetUniformLocation(program, U_MVMATRIX);
                    uTextureUnitHandle = glGetUniformLocation(program, U_TEXTURE_UNIT);
                    aPositionHandle = glGetAttribLocation(program, A_POSITION);
                    aColorHandle = glGetAttribLocation(program, A_COLOR);
                    aTextureCoordinateHandle = glGetAttribLocation(program, A_TEXTURE_COORDINATES);
                    aNormalHandle = glGetAttribLocation(program, A_NORMAL);

                    glEnableVertexAttribArray(aPositionHandle);
                    glEnableVertexAttribArray(aColorHandle);
                    glEnableVertexAttribArray(aTextureCoordinateHandle);
                    glEnableVertexAttribArray(aNormalHandle);

                    glActiveTexture(GL_TEXTURE0);
                    glUniform1i(uTextureUnitHandle, 0);
                } catch (IOException e) {
                }
            } catch (Resources.NotFoundException nfe) {
                throw new RuntimeException("Resource not found: " + resourceID, nfe);
            }
            return 1;
        }

        public void draw() {
            glEnable(GL_DEPTH_TEST);
            bindData();
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glDrawArrays(GL_TRIANGLES, 0, faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        public void bindData() {
            int offset = 0;
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false, POSITION_COMPONENT_STRIDE_3D, offset);
            offset += POSITION_COMPONENT_COUNT_3D;
            glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false, COLOR_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            offset += COLOR_COMPONENT_COUNT;
            glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false, TEXTURE_COORDINATE_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            offset += TEXTURE_COORDINATES_COMPONENT_COUNT;
            glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false, NORMAL_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            glBindBuffer(GL_ARRAY_BUFFER, 0);

            /////////////////////////////////////////////////////
            /*
            vertexBuffer.position(0);
            glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false, POSITION_COMPONENT_STRIDE_3D, vertexBuffer);
            colorBuffer.position(0);
            glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false, COLOR_COMPONENT_STRIDE, colorBuffer);
            textureCoordBuffer.position(0);
            glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false, TEXTURE_COORDINATE_COMPONENT_STRIDE, textureCoordBuffer);
            normalBuffer.position(0);
            glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false, NORMAL_COMPONENT_STRIDE, normalBuffer);
            */
        }

        public void createVertexBuffer() {
            glGenBuffers(1, vertexBufferObject, 0);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glBufferData(GL_ARRAY_BUFFER, numberOfVertices * (POSITION_COMPONENT_COUNT_3D + COLOR_COMPONENT_COUNT + TEXTURE_COORDINATES_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT) * BYTES_PER_FLOAT, null, GL_STATIC_DRAW);

            int offset = 0;
            glBufferSubData(GL_ARRAY_BUFFER, offset, numberOfVertices * POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT, vertexBuffer);
            offset += POSITION_COMPONENT_COUNT_3D;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT, colorBuffer);
            offset += COLOR_COMPONENT_COUNT;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT, textureCoordBuffer);
            offset += TEXTURE_COORDINATES_COMPONENT_COUNT;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT, normalBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }
  12. lawnjelly

    Android Build and Performance

     In the last few weeks I've been focusing on getting the Android build of my jungle game working and tested. Last time I did this I was working from Windows, but now that I've totally migrated to Linux I wasn't sure how easily everything would go. In the end, it turns out that support for Linux is great - in fact it was easier than getting things up and running on Windows, with no special drivers needed. Android Studio, and particularly the emulators, definitely seem to be better than last time, with x86 emulators running at near native speed, and much quicker APK uploads to the emulators (although uploads to the devices are still slow; I gather I can improve this by updating them to a higher Android version, but then they're less good for testing). The devices I have at home are an old Cat B15 phone, 800x480 with a GPU that seems to date from 2006(!), a Nexus 7 2012 tablet, and finally an Amlogic S905X TV media player (2017). Funnily enough, the TV box has been the most involved to get working.

     CPU issues

     My first issue to contend with was a 'SIGBUS illegal alignment' error when running on the phone. After tracking it down, it turns out this particular ARM CPU is very picky about the alignment of data. It is usually good practice to keep structures well aligned, but x86 is very forgiving, and I use quite a few structs #pragma packed to 1 byte, particularly in serialization. Some padding in the structures sorted this. Next, I had spent many hours trying to figure out a strange bug whereby the object lighting worked fine on emulators but looked wrong on the device. I had a suspicion it was a signed / unsigned issue in the values for diffuse light in a shader input, but I couldn't see anything wrong with the code. Almost unbelievably, when I tracked it down, it turned out there wasn't anything wrong with the code. The problem was that on the x86 compiler a 'char' defaults to signed, but on the ARM compiler 'char' defaults to unsigned!! This is an interesting choice (apparently unsigned may be faster on the ARM chip), but it goes against the usual convention for short, int etc. It was easy enough to fix by flipping a compiler switch. I guess I should really be using explicit signed / unsigned types. It has always struck me as somewhat weird that C is so vague with the built-in types, with the number of bits and the sign, given that changing these usually produces bugs.

     GPU issues

     The biggest problem I tend to have with OpenGL ES devices is the 'precision' specifiers in shaders. You can fill them in however you want on the desktop, but it just ignores them and uses high precision. However, different devices have different capabilities for lowp, mediump and highp in both vertex and fragment shaders. What would be really helpful is if the guys making the emulators / desktop OpenGL ES could emulate the lower precision, allowing us to debug precision on the desktop. Alas, I couldn't figure out a way to get this to work. It may be impossible using hardware OpenGL ES, but the emulator can also use SwiftShader, so maybe they could implement it there? My biggest problem was that my worst performing device for precision was actually my newest, the TV box. It is built for decoding video super fast at high resolution, but the fragment shaders are a minimal 10-bit precision affair, and the fill rate is poor for a 1080p device.
     This was coupled with the problem that I couldn't connect over USB to the desktop for debugging: I was literally compiling an APK, putting it on a USB stick (or Dropbox), taking it to the bedroom, installing, and running. This is not ideal, and I will look into either seeing if ADB will run over my LAN or getting another low-precision device for testing. I won't go into detail on the precision issues; I wrote more on them in a post here: https://www.gamedev.net/forums/topic/694188-debugging-precision-issues-in-opengl-es-2 As a quick summary, 10 bits of precision in the fragment shader can lead to sampling error in any maths done there, especially in texture coordinate math. I was able to fix some of my problems by moving the tex coordinate calculations to the vertex shader, which has more precision. Then, it turns out that my TV box (and presumably many such chipsets) supports an extra high precision path in the fragment shader, *as long as you don't touch the input data*. This allows them to do accurate uv coords on large texture maps, because those don't go through the 10-bit precision.

     Menus

     I've written a rudimentary menu system for the game, with tickboxes, sliders and listboxes. This has enabled me to put in a bunch of debugging features I can turn on and off on devices, to try and find out what affects performance, without recompiling. Another trick from my console days is that I have put in some simple graphical performance bars. I record the last 60 frames into a circular buffer and store things like the frame duration and when certain game tasks took place. In my case the big issue is when a 'scroll' event takes place, as I render horizontal and vertical tiles of the landscape as you move about it. In the diagram, the blue bar is where a scroll happens, a green bar is where the ground scroll happens, and the red is the frame duration. It doesn't show much on the desktop, as the GPU is fast, but on the slow devices I often get a dropped frame on the scrolls, so I am trying to reduce this. I can turn various aspects of the scrolling / rendering on and off to track down what causes performance issues. Certainly PCF shadows are a big ask on mobiles, as is the ground (terrain) shader. On my first incarnation of the game I pre-rendered everything (graphics + shadows) out to a massive texture at loadup and just scrolled through it as you moved. This is great for performance, but unfortunately uses a shedload of memory if you want big maps. And phones don't have lots of memory. So a lot of technical effort has gone into writing the scrolling system, which redraws the background in horizontal and vertical tiles as you move about. This is much more tricky with an angled landscape than with a top-down 90 degree view, and even more tricky when you have to render shadow maps as you move. Having identified the shadow map pass as a bottleneck, I did some quick calculations for my max map size (approx 16384x16384) and decided that I could probably get away with pre-rendering the shadow map to a 2048x2048 texture. Alright, it isn't very high resolution, but it beats turning shadows off completely. This is working fine, and avoids a lot of ugly issues from scrolling the shadow map. To render out the shadow map I render a bunch of 256x256 tiles and copy them to the final shadow map. This fixed some of the slowness; then I realised I could go a step further. Much of the PCF shadows slowdown was from rendering the landscape shadows.
     The buildings and objects are much rarer, so I figured I could pre-render a low-res landscape shadow texture and use it when scrolling, then only do the expensive PCF / simple shadows for the static and dynamic objects. This worked a treat, and incidentally solves at a stroke the precision issues I was having with the shadow shader on the 10-bit hardware.

     Joysticks

     As well as supporting touchscreens and keyboards, I want to support gamepads, so I bought a bluetooth / wireless gamepad for Christmas. It works great with the TV box via the wireless dongle; unfortunately, the bluetooth doesn't seem to work with my old phone and tablet, or my desktop. So it has been very difficult / impossible to debug getting the analog joystick working. And, in an oversight(?) in the emulator, there doesn't seem to be an option for emulating a gamepad - I can get a D-pad, but I don't think it is analog. So after some stabs in the dark with the docs, I am still facing gamepad focus issues and will have to wait till I have a suitable device to debug this. That's all for now folks!
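     For reference, the frame-history bars described in the Menus section boil down to a small circular buffer. A minimal sketch (in Java here purely for illustration - the game itself is C++, and all names are invented):

        // 60-frame ring buffer of frame timings plus per-frame event flags.
        class FrameHistory {
            static final int FRAMES = 60;
            final float[] durationMs = new float[FRAMES];
            final boolean[] scrolled = new boolean[FRAMES]; // e.g. "scroll event this frame"
            int head = 0;

            void record(float frameMs, boolean scrollHappened) {
                durationMs[head] = frameMs;
                scrolled[head] = scrollHappened;
                head = (head + 1) % FRAMES; // overwrite the oldest entry
            }
        }

     Drawing it is then one bar per slot, with color keyed off the flags - cheap enough to leave enabled on device builds.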
  13. Hi all, how would I implement this type of effect? Also, what is this effect called - is this considered volumetric lighting? What are the options for doing it? a. A billboard? But I want this to have a 3D effect, so that when we rotate the camera we still get that 3D feel. b. A transparent 3D mesh? And can we animate it as well? I need your expert advice. Additionally: how would I implement things like a fireball projectile (shot from a monster) - a billboarded texture or a 3D mesh? Note: I'm using OpenGL ES 2.0 on mobile. Thanks!
  14. Hey guys. Wow, it's been super long since I've been here. Anyway, I'm having trouble with my 2D OrthoM matrix setup for phones / tablets. Basically, I want my coordinates to start at the top left of the screen. I also want my polygons to remain square regardless of whether the device is in portrait or landscape orientation. At the same time, if I translate a polygon to the middle of the screen, I want it to end up in the middle regardless of portrait or landscape mode. So far I'm pretty close with this setup:

        private float aspectRatio;

        @Override
        public void onSurfaceChanged(GL10 glUnused, int width, int height) {
            Log.d("Result", "onSurfacedChanged()");
            glViewport(0, 0, width, height);

            if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
                Log.d("Result", "onSurfacedChanged(PORTRAIT)");
                aspectRatio = ((float) height / (float) width);
                orthoM(projectionMatrix, 0, 0f, 1f, aspectRatio, 0f, -1f, 1f);
            } else {
                Log.d("Result", "onSurfacedChanged(LANDSCAPE)");
                aspectRatio = ((float) width / (float) height);
                orthoM(projectionMatrix, 0, 0f, aspectRatio, 1f, 0f, -1f, 1f);
            }
        }

     When I translate the polygon using translateM(), however, it goes to the middle in portrait mode, but in landscape it only moves partially to the right, as though portrait mode occupied some of the left of the screen. The only time I can get the translation to match is if, in landscape, I move the aspectRatio variable in orthoM() from the right argument to the bottom argument and make right 1f. That works, but then the polygon is stretched. Do I just multiply the aspectRatio into the translation values only when in landscape mode to fix this, or is there a better way?

        if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
            Matrix.translateM(modelMatrix, 0, 0.5f, 0.5f * aspectRatio, 0f);
        } else {
            Matrix.translateM(modelMatrix, 0, 0.5f * aspectRatio, 0.5f, 0f);
        }

     Thanks in advance.
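     One hedged alternative that removes the per-orientation special cases entirely: keep the virtual space a fixed 1 unit tall with a top-left origin, and extend only the width by the aspect ratio. Units are then the same scale on both axes (so polygons stay square), and "the middle of the screen" is the same expression in both orientations. A sketch reusing the post's own orthoM/translateM calls:

        @Override
        public void onSurfaceChanged(GL10 glUnused, int width, int height) {
            glViewport(0, 0, width, height);
            // Always width/height: < 1 in portrait, > 1 in landscape.
            aspectRatio = (float) width / (float) height;
            // x runs 0..aspectRatio, y runs 0 (top) .. 1 (bottom); both axes
            // map to the same pixels-per-unit, so no stretching either way.
            orthoM(projectionMatrix, 0, 0f, aspectRatio, 1f, 0f, -1f, 1f);
        }

        // Center of the screen in either orientation, no branching needed:
        Matrix.translateM(modelMatrix, 0, aspectRatio / 2f, 0.5f, 0f);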
  15. I'm trying to capture a frame with gl.readPixels and send the data to my server. For testing purposes, I tried rendering a texture with the same Uint8Array I used with gl.readPixels, but unfortunately I can't get the texture to show an image. Let me share the steps I'm taking. I made sure to allocate memory outside of the game loop:

        const width = Game.Renderer.width;
        const height = Game.Renderer.height;
        let pixels = new Uint8Array(4 * width * height);

     And before I unbind the frame buffer in the drawing function, I pick up the pixels:

        gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
        if (stream) {
            if (stream.ready) stream.socket.send(pixels);
        }

     This is also where I send the pixels to the server. In my render function I have a function updating the texture I use for displaying video, or in this case a different image every frame:

        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this._video)

     This works perfectly with a video or an image element, but if I pass in my Uint8Array, no image is rendered. The plan is to have the server send that same array to the other clients so they can use it to update their textures. Hopefully this makes sense. Thanks! BTW: Not sure why my thread appeared two times; my connection timed out and I guess I pressed it twice. My apologies mods, I hid the duplicate thread.
  16. Falken42

    Crystal Clash Build UI

    From the album: Crystal Clash

  17. Falken42

    Crystal Clash Main Menu UI

    From the album: Crystal Clash

  18. Falken42

    Crystal Clash In-game Screenshot #2

    From the album: Crystal Clash

  19. Falken42

    Crystal Clash In-game Screenshot #1

    From the album: Crystal Clash

  20. Falken42

    Crystal Clash Tutorial UI

    From the album: Crystal Clash

  21. My light is positioned at vec3(0, 0, 2), which is in front of an object at vec3(0, 0, 0). If I don't rotate the object, everything looks fine. The problem occurs when I rotate the object: the object's lit area rotates with the object, instead of the light simply shining on the faces looking at it. In fact, here's another strange example, with specular added: the effect is correct in the first rotation, in the second it's dark, and in the third it's back to good again. I can't figure out what the problem is with my shader. I even tried calculating the normal matrix in GLSL, in case my implementation was wrong, but I get the same results:

        nrms = mat3(transpose(inverse(model))) * normals;
        // and
        nrms = normalMat * normals;
        // both give the same results

     I really don't think it has to do with the normals; the light calculations visually seem okay, as long as I don't rotate the object. In fact, I can translate and rotate the camera and the lighting is still good - again, as long as I don't rotate the object. By the way, camera rotation is not considered in the calculations, since I'm passing the camera.transform.position vec3 to calculate the toCam vector I use in my lighting calculations. There's clearly something I'm doing wrong, and I'm guessing it has to do with which space I'm calculating in. It's almost like I'm calculating based on the model's local space instead of world space. Hopefully somebody can identify what it is, though. I'll share the vertex and fragment shaders below (without the specular portion). Thanks a lot!

     Vertex:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        layout (location = 0) in vec3 vertex;
        layout (location = 1) in vec3 normals;
        layout (location = 2) in vec2 uv;
        layout (location = 3) in vec3 colors;

        out vec3 fragPos;
        out vec3 baseColors;
        out vec3 nrms;
        out vec3 camPosition;

        uniform vec3 camera;
        uniform mat3 normalMat;
        uniform mat4 model;
        uniform mat4 projection;
        uniform mat4 view;
        uniform mat4 mvp;

        void main() {
            // nrms = mat3(transpose(inverse(model))) * normals;
            baseColors = colors;
            nrms = normalMat * normals;
            fragPos = vec3(model * vec4(vertex, 1.0));
            camPosition = camera;
            gl_Position = mvp * vec4(fragPos, 1.0);
        }

     Frag:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        #define PI 3.14159265359
        #define TWO_PI 6.28318530718
        #define NUM_LIGHTS 2

        in vec3 fragPos;
        in vec3 baseColors;
        in vec3 nrms;
        in vec3 camPosition;

        out vec4 color;

        struct Light {
            vec3 position;
            vec3 intensities;
            float attenuation;
            float ambient;
        };

        Light light;

        void main() {
            light.position.x = 0.0;
            light.position.y = 0.0;
            light.position.z = 2.0;
            light.intensities.r = 1.0;
            light.intensities.g = 1.0;
            light.intensities.b = 1.0;
            light.ambient = 0.005;

            vec4 base = vec4(baseColors, 1.0);
            vec3 normals = normalize(nrms);
            vec3 toLight = normalize(light.position - fragPos);
            vec3 toCamera = normalize(camPosition - fragPos);

            // Ambient
            vec3 ambient = light.ambient * base.rgb * light.intensities;

            // Diffuse
            float diffuseBrightness = max(0.0, dot(normals, toLight));
            vec3 diffuse = diffuseBrightness * base.rgb * light.intensities;

            // Composition
            vec3 linearColor = ambient + (diffuse);
            vec3 gamma = vec3(1.0 / 2.2);
            color = vec4(pow(linearColor, gamma), base.a);
        }
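     A "lit area rotates with the object" symptom very often means the normalMat uniform is not derived from the same model matrix used for that draw (e.g. built once from an identity or stale matrix), since the shader math above looks consistent. A hedged host-side sketch of rebuilding it each frame, assuming android.opengl.Matrix and GLES20 on the Java side (model and normalMatLoc are illustrative names):

        // Rebuild normalMat = transpose(inverse(model)) from the current
        // model matrix, then upload the upper-left 3x3 (column-major).
        float[] inv = new float[16], invT = new float[16];
        android.opengl.Matrix.invertM(inv, 0, model, 0);
        android.opengl.Matrix.transposeM(invT, 0, inv, 0);
        float[] normalMat3 = {
            invT[0], invT[1], invT[2],   // column 0
            invT[4], invT[5], invT[6],   // column 1
            invT[8], invT[9], invT[10]   // column 2
        };
        android.opengl.GLES20.glUniformMatrix3fv(normalMatLoc, 1, false, normalMat3, 0);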
  22. I'm trying to learn how to make my own model, view, projection setup. I've managed to translate, rotate, and scale my models, but I have an issue with my perspective projection matrix. Even though I'm multiplying halfFOV by my aspect ratio, the image looks squished unless my window is a perfect square. If it's not a perfect square, the wider my window, the more stretched my object looks along the Z axis, like an egg. So it definitely has to do with my projection matrix, specifically with my aspect ratio. The way I'm multiplying my matrices is as follows; I'll show you my translation and projection to keep it simple.

     Translation:

        [1, 0, 0, 0,
         0, 1, 0, 0,
         0, 0, 1, 0,
         x, y, z, 1]

     Projection:

        let halfFOV = Math.tan(toRad(FOV / 2.0));
        let zRange = (NEAR - FAR);
        let x = 1.0 / (halfFOV * aspect);
        let y = 1.0 / (halfFOV);
        let z = (NEAR + FAR) / zRange;
        let w = 2 * FAR * NEAR / zRange;

        [x, 0, 0,  0,
         0, y, 0,  0,
         0, 0, z, -1,
         0, 0, w,  0]

     I normally see the -1 where the w is, but for some reason I need to set it up the way you see it in my matrix; otherwise it won't work. I'll also quickly share how I'm multiplying matrices:

        function (r, a, b) {
            r.mat[0] = (a.mat[0] * b.mat[0]) + (a.mat[1] * b.mat[4]) + (a.mat[2] * b.mat[8]) + (a.mat[3] * b.mat[12]);
            r.mat[1] = (a.mat[0] * b.mat[1]) + (a.mat[1] * b.mat[5]) + (a.mat[2] * b.mat[9]) + (a.mat[3] * b.mat[13]);
            r.mat[2] = (a.mat[0] * b.mat[2]) + (a.mat[1] * b.mat[6]) + (a.mat[2] * b.mat[10]) + (a.mat[3] * b.mat[14]);
            r.mat[3] = (a.mat[0] * b.mat[3]) + (a.mat[1] * b.mat[7]) + (a.mat[2] * b.mat[11]) + (a.mat[3] * b.mat[15]);
            // don't need to add the rest of it...
        }

        // How I use it
        Mathf.mul(resultMat, position, projection);

     So I take the left row and multiply it against the right column, and I get rows as a result, as you can see. If you want to check out the full implementation, it's here. I've also checked that I'm getting the correct window size; I divide width/height to get the aspect ratio. Not sure what I'm doing wrong. I also tried multiplying my matrices on the GPU (GLSL) and I get the same results, so it's definitely my projection matrix. Hope this all makes sense. Edit: I probably should have posted this thread in the Math category, my apologies.
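     For comparison, here is a standard OpenGL-style perspective matrix in Java (column-major, column-vector convention - the transpose of the row-vector layout used above; fovY, near and far are illustrative parameter names). The key property is that fovY is the vertical field of view and the aspect ratio divides only the x scale, so resizing the window should never stretch anything in z:

        static float[] perspective(float fovYDeg, float aspect, float near, float far) {
            float f = (float) (1.0 / Math.tan(Math.toRadians(fovYDeg) / 2.0));
            float zRange = near - far;
            float[] m = new float[16]; // column-major, rest stays zero
            m[0]  = f / aspect;                // x scale (only place aspect appears)
            m[5]  = f;                         // y scale
            m[10] = (near + far) / zRange;     // z remap into clip space
            m[11] = -1f;                       // puts -z into w for the perspective divide
            m[14] = 2f * far * near / zRange;  // z offset
            return m;
        }

     Note this uses 1/tan(fov/2) directly for the y scale; multiplying tan(fov/2) by the aspect ratio before inverting, as in the post, achieves the same x scale but makes it easy to apply the aspect to the wrong axis.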
  23. Hi, I'm trying to pack 4 color values into a single 32-bit float, but I'm having some issues: the resulting color values I'm getting are not correct. What could be wrong here? This is the part of the code where I pack the 4 bytes into a single float in Java:

        byte[] colorBytes = new byte[4];
        colorBytes[0] = (byte) (color.x * 256);
        colorBytes[1] = (byte) (color.y * 256);
        colorBytes[2] = (byte) (color.z * 256);
        colorBytes[3] = (byte) (color.w * 256);
        vertexManager.appendVertexColorData(ByteBuffer.wrap(colorBytes).order(ByteOrder.LITTLE_ENDIAN).getFloat());

     I also tried this:

        bitSh.x = 1.0f / (256.0f * 256.0f * 256.0f);
        bitSh.y = 1.0f / (256.0f * 256.0f);
        bitSh.z = 1.0f / (256.0f);
        bitSh.w = 1.0f;
        color.x = object.vertexColorData[i * 4 + 0] * r;
        color.y = object.vertexColorData[i * 4 + 1] * g;
        color.z = object.vertexColorData[i * 4 + 2] * b;
        color.w = object.vertexColorData[i * 4 + 3] * a;
        vertexManager.appendVertexColorData(color.dot(bitSh));

     But it didn't work either; it gave me different results, but both are incorrect. This is the vertex shader:

        uniform mat4 MVPMatrix; // model-view-projection matrix

        attribute vec4 position;
        attribute vec2 textureCoords;
        attribute float color;

        varying vec4 outColor;
        varying vec2 outTexCoords;

        const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
        const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);

        vec4 unpack_float(const float value) {
            vec4 res = fract(value * bitSh);
            res -= res.xxyz * bitMsk;
            return res;
        }

        void main() {
            outTexCoords = textureCoords;
            outColor = unpack_float(color);
            gl_Position = MVPMatrix * position;
        }

     And this is the fragment shader:

        precision lowp float;

        uniform sampler2D texture;

        varying vec4 outColor;
        varying vec2 outTexCoords;
        varying vec3 outNormal;

        void main() {
            gl_FragColor = texture2D(texture, outTexCoords) * outColor;
        }

     Thanks in advance.
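     Two common culprits worth checking here: scaling by 256 sends a component of 1.0 to the byte value 256, which wraps to 0 (255 is the usual factor), and not every arbitrary 4-byte pattern survives being reinterpreted as a float (NaN payloads in particular can be mangled on the way to the GPU). A hedged alternative that sidesteps packing entirely is to hand GL the four bytes directly as a normalized unsigned-byte attribute; a sketch for Android GLES2 ("a_color", program and vertexCount are illustrative):

        // One byte each for r, g, b, a per vertex, in a direct buffer.
        java.nio.ByteBuffer colors = java.nio.ByteBuffer
                .allocateDirect(vertexCount * 4)
                .order(java.nio.ByteOrder.nativeOrder());
        // ... fill with r, g, b, a bytes (0..255 each) per vertex ...
        colors.position(0);

        int loc = android.opengl.GLES20.glGetAttribLocation(program, "a_color");
        android.opengl.GLES20.glEnableVertexAttribArray(loc);
        // normalized = true: 0..255 arrives in the shader as 0.0..1.0, so the
        // attribute can simply be "attribute vec4 color;" with no unpack_float.
        android.opengl.GLES20.glVertexAttribPointer(loc, 4,
                android.opengl.GLES20.GL_UNSIGNED_BYTE, true, 0, colors);

     This keeps the per-vertex cost at 4 bytes, the same as the packed float, without the precision hazards.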
  24. Hi, I am having a problem where I am drawing 4000 squares on screen, using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I find it hard to believe it isn't capable of drawing 4000 textured squares at 60 FPS. I checked with DDMS and found that most of the time is spent in the put() method of the FloatBuffer. The strange thing is that if I draw these squares outside of the view frustum, the FPS increases - and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
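     A put() hotspot usually points at per-element FloatBuffer writes in the per-frame path. A hedged sketch of the common mitigation - write into a plain float[] and bulk-copy once per frame into a persistent direct buffer (quad count and floats-per-vertex are illustrative):

        // Allocated once, outside the game loop.
        float[] scratch = new float[4000 * 4 * 5]; // e.g. 4 verts * 5 floats per quad
        java.nio.FloatBuffer vb = java.nio.ByteBuffer
                .allocateDirect(scratch.length * 4)
                .order(java.nio.ByteOrder.nativeOrder())
                .asFloatBuffer();

        // Per frame:
        // 1. write vertex data into scratch[] - plain array stores are cheap
        // 2. one bulk copy into the direct buffer:
        vb.clear();
        vb.put(scratch); // single bulk transfer instead of ~80000 put(float) calls
        vb.flip();
        // 3. upload once, e.g. with glBufferSubData(GL_ARRAY_BUFFER, 0, ..., vb)

     If the geometry doesn't actually change every frame, the stronger fix is to fill the VBO once and skip the per-frame refill altogether.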
  25. I need to pass 24 vec3s to a shader. However, glUniform3fv requires an array of GLfloat. Since my vec3 structure looks like

        struct vec3 { float x; float y; float z; };

     can I just safely copy that into a float parray[24 * 3] using memcpy? Don't worry about GLfloat vs float sizes - this is just pseudocode. I just need to know whether the sent array will have the first vertex at position 0, the second at position 3, the third at 6, and so on, because I'm not sure that, even when float and GLfloat sizes match, I won't get some extra padding bytes somewhere. And another question: how do I then define the uniform in the shader - uniform vec3 box[24];? Cheers
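     For what it's worth, a C struct of three floats is normally tightly packed, so the memcpy approach typically yields exactly the layout glUniform3fv expects - though padding is ultimately compiler-dependent, so it's worth a static_assert on sizeof. And yes, uniform vec3 box[24]; is the matching GLSL declaration. A hedged Java sketch of the same flat layout on Android, just to make the expected packing explicit (program and the placeholder values are illustrative):

        // 24 vec3s tightly packed as 72 floats: vec3 i at indices [3i .. 3i+2].
        float[] box = new float[24 * 3];
        for (int i = 0; i < 24; i++) {
            box[i * 3]     = /* x */ 0f;
            box[i * 3 + 1] = /* y */ 0f;
            box[i * 3 + 2] = /* z */ 0f;
        }
        int loc = android.opengl.GLES20.glGetUniformLocation(program, "box");
        // count = number of vec3 elements, not number of floats
        android.opengl.GLES20.glUniform3fv(loc, 24, box, 0);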