Search the Community

Showing results for tags 'OpenGL ES'.



Found 54 results

  1. Hi, I'm trying to measure render time (per render pass, or ideally per draw call) on Android 9 using OpenGL ES 3 together with the EXT_disjoint_timer_query extension. I use queries for the absolute GPU/GL timestamp via glQueryCounter(qid, GL_TIMESTAMP_EXT). According to the spec this returns nanoseconds in GL/GPU "time". To correlate the GPU/GL timeline with the CPU timeline and determine the latency from draw-call dispatch until completion, I proceed as follows:

        // initially sync the GL/CPU timelines
        uint64_t cpu_time_base_ns = getCpuTimeInNs();
        uint64_t gl_time_base_ns = 0;
        glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns);

        // for every frame (queries are pooled etc. to avoid stalls)
        glQueryCounter(query_start, GL_TIMESTAMP_EXT);
        bind(fbo);
        glClear(..);
        glDrawBla(...);
        glQueryCounter(query_end, GL_TIMESTAMP_EXT);
        eglSwapBuffers();

        // check if query results are available
        if (queries_available) {
            uint64_t query_start_result_timestamp_ns = /* result from query_start */;
            // project GL times onto the CPU timeline
            uint64_t query_start_in_cpu_time = cpu_time_base_ns + (query_start_result_timestamp_ns - gl_time_base_ns);
        }

    The problem is that the projected GL times, expressed in CPU time, are BEFORE the CPU timestamp at which the draw call was actually issued. Do you have any idea what's wrong with my approach? I fear glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns) returns a time base which is not suitable for what I'm trying to do, but I don't understand why. I need absolute times to get command latency, not just the duration of GL work on the GPU. Thanks a lot for any hints!
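    For reference, a minimal C sketch of how the disjoint check fits into this workflow; the EXT spec requires discarding measurements when a disjoint event occurred, and resampling both clocks right before use avoids drift between the GL and CPU clocks (query object names follow the post; everything else is standard EXT_disjoint_timer_query API):

        /* Poll a timestamp query; discard results across disjoint events. */
        GLuint available = 0;
        glGetQueryObjectuivEXT(query_start, GL_QUERY_RESULT_AVAILABLE_EXT, &available);
        if (available) {
            GLint disjoint = 0;
            glGetIntegerv(GL_GPU_DISJOINT_EXT, &disjoint);   /* flag resets on read */
            if (!disjoint) {
                GLuint64 gpu_ns = 0;
                glGetQueryObjectui64vEXT(query_start, GL_QUERY_RESULT_EXT, &gpu_ns);
                /* re-sample cpu_time_base_ns and GL_TIMESTAMP_EXT here rather than
                   relying on a one-time base taken at startup */
            } else {
                /* clocks changed (e.g. GPU frequency scaling): resync the base */
            }
        }

    A one-time base taken at startup can easily make results look like they happened "before" dispatch once the two clocks drift or a disjoint event fires.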
  2. I just wrote a shader that samples a texture only when a bool uniform is set to true, so I can either use texture mapping in the shader or not. Since I pass a texture-coordinate attribute in the vertex buffer and forward it from the vertex shader to the fragment shader, where I choose via the uniform bool whether to use it, I wonder: will drawing crash? After some time developing I've encountered problems with various phones that seem to accept only 'correct' shaders, which would mean I need to make two different shaders (one that uses a texture and one that doesn't). But that's overkill and a waste of time; I could put #ifdefs in the shader and load two variants, but I could also just switch the uniform bool and be done with it. So, are phones/tablets still that strict?
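    If two variants do turn out to be necessary on some drivers, one common compromise is to keep a single source file and inject a #define at compile time, so nothing is duplicated. A minimal sketch of the idea (plain GL ES 2.0 C; the helper name and error handling are mine):

        /* Compile one shader source into two variants by prepending a #define.
           Note: if the source starts with a #version line, the define must be
           spliced in after it instead. */
        GLuint compile_variant(const char *src, GLenum type, int use_texture)
        {
            const char *parts[2];
            parts[0] = use_texture ? "#define USE_TEXTURE\n" : "";
            parts[1] = src;
            GLuint shader = glCreateShader(type);
            glShaderSource(shader, 2, parts, NULL);  /* strings are concatenated */
            glCompileShader(shader);
            return shader;  /* caller should check GL_COMPILE_STATUS */
        }

    The shader then wraps the sampling in #ifdef USE_TEXTURE ... #endif, so dead texture fetches are compiled out entirely rather than branched over at runtime.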
  3. Hi all, I'm trying to implement a textbox with a scroll bar, where you can display as much text as you want and just use the scroll bar to see the rest, the same way a listbox works. I implemented this in 2D SDL by rendering the messages into an extra framebuffer/texture and blitting a portion of it to the main screen depending on the scroll offset. I'm porting my 2D SDL code straight to OpenGL ES 2.0 by creating an extra framebuffer object (FBO) and rendering to a texture. My question is how to select only a portion of that texture to be rendered in OpenGL ES 2.0 (roughly, how can BitBlt be implemented in OpenGL ES 2.0)? I was thinking of using scissoring, but I'm not sure that's the right solution. Also, since I'm using OpenGL ES 2.0 (mobile), not everything from desktop OpenGL is available. In summary: how do I blit a sub-rectangle of a texture rendered in an orthographic (2D) projection in OpenGL ES 2.0?
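    In ES 2.0 the usual replacement for a blit is to draw a textured quad whose texture coordinates select the source sub-rectangle; no scissor needed. A small sketch of the UV math (plain C, hypothetical helper; the quad itself is drawn at the destination position under the orthographic projection):

        /* Map a source rectangle (in texels) of a tex_w x tex_h texture
           to UVs for a quad. uv holds 4 corners as x,y pairs in
           triangle-strip order. */
        void blit_uvs(float src_x, float src_y, float src_w, float src_h,
                      float tex_w, float tex_h, float uv[8])
        {
            float u0 = src_x / tex_w,           v0 = src_y / tex_h;
            float u1 = (src_x + src_w) / tex_w, v1 = (src_y + src_h) / tex_h;
            uv[0] = u0; uv[1] = v0;   /* bottom-left  */
            uv[2] = u1; uv[3] = v0;   /* bottom-right */
            uv[4] = u0; uv[5] = v1;   /* top-left     */
            uv[6] = u1; uv[7] = v1;   /* top-right    */
        }

    Scrolling then just means sliding src_y; the FBO texture never has to be touched again.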
  4. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one performs better on desktop and/or mobile devices? I've read somewhere that OpenGL performance depends on the code, but for some games we can compare performance across OpenGL versions, so I don't know. Which of the two uses less CPU and GPU, i.e., which performs better? Thanks
  5. Originally I just used GL_LINE_STRIP to render lines, but the width of the lines visibly differs between devices, and I can't texture them! So I decided to render my lines with triangles instead, so I have control over their width and can add textures. I can already convert a line segment from two given points (or two joined lines from three points) into a textured quad. Next I want to do the joints between these quads: since the app draws using the mobile touchscreen, circular caps/joints fit better than pointy ones. Some lessons and tutorials suggest it's as simple as adding a circle at the joint; is it really done that simply? Let me know if you have any tips and further suggestions, or a link to a source/tutorial (OpenGL / OpenGL ES). Much appreciated!
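    For reference, a minimal sketch of expanding one segment into a quad by offsetting along the segment's normal (plain C, no GL calls; positions in screen units, function name mine):

        #include <math.h>

        /* Expand segment (x0,y0)-(x1,y1) into a quad of the given width.
           out holds 4 corners as x,y pairs in triangle-strip order. */
        void segment_to_quad(float x0, float y0, float x1, float y1,
                             float width, float out[8])
        {
            float dx = x1 - x0, dy = y1 - y0;
            float len = sqrtf(dx * dx + dy * dy);
            if (len == 0.0f) len = 1.0f;            /* avoid division by zero */
            float nx = -dy / len * 0.5f * width;    /* unit normal * half width */
            float ny =  dx / len * 0.5f * width;
            out[0] = x0 + nx; out[1] = y0 + ny;
            out[2] = x0 - nx; out[3] = y0 - ny;
            out[4] = x1 + nx; out[5] = y1 + ny;
            out[6] = x1 - nx; out[7] = y1 - ny;
        }

    And yes, a round join really is commonly done by drawing a small disc (a triangle fan, or a textured circle sprite) centered on the shared endpoint, sized to the line width so it fills the wedge between the two quads.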
  6. In early 2017 I had this idea: if I can stream an HD movie without downloading the whole thing, I could stream a massive open-world game without downloading it as well. Most people are under the impression that a game running in the browser can't look good, because it would take forever to load. Well, in game dev the concept of LOD (Level of Detail) has existed for quite a while now; there is no reason why we couldn't apply it in the browser as well. The game I developed loads in a matter of seconds. It takes a couple of seconds to load the engine itself, then when you press play:
    - It loads the terrain pieces closest to you first, then what's further away.
    - It loads the low-poly versions of models like trees, rocks and bushes first, then the high-poly versions when available.
    - Structures...
    - Animals...
    - Sound...
    - Etc. You get the idea.
    My point is, it is POSSIBLE to build a 3D version of the Internet, where instead of browsing through websites we could jump from one 3D space to the next. I "invite" everyone to make this happen. I've made a 3D survival game with a massive terrain to prove the tech works. I want you to build your own 3D spaces implementing your own ideas of what the web should look like in the future. We could just link them all together and make this interconnected virtual space happen: yeah, the Metaverse, for the Snow Crash fans out there. I would love to hear what you think about the applications of 3D spaces on the web. Please leave me a comment if you are as excited about the possibilities as I am. Backing up my claims: a live tech demo is available at https://plainsofvr.com. Watch the open world load instantly, then gradually improve:
  7. Hello, I'm trying to make a bloom effect in my 2D game made with LibGDX. I have a problem on the shader side: in short, the glow effect should extend beyond the bounds of the texture. If I have a 40x40 texture, the shader should cover 60x60, for example. This image explains the problem better: as you can see, the bloom effect is cut off where the texture ends. I suppose I should modify the default vertex shader to get what I need, but I'm not very familiar with GLSL programming, so I'm asking if someone could help me.

    Vertex:

        #ifdef GL_ES
        precision highp float;
        precision highp int;
        #endif

        attribute vec4 a_position;
        attribute vec4 a_color;
        attribute vec2 a_texCoord0;
        uniform mat4 u_projTrans;
        varying vec4 v_color;
        varying vec2 v_texCoords;

        void main() {
            v_color = a_color;
            v_texCoords = a_texCoord0;
            gl_Position = u_projTrans * a_position;
        }

    Fragment:

        #ifdef GL_ES
        precision highp float;
        precision highp int;
        #endif

        varying vec4 v_color;
        varying vec2 v_texCoords;
        uniform sampler2D u_texture;
        uniform mat4 u_projTrans;
        uniform float u_blurSize;
        uniform float u_intensity;

        void main() {
            vec4 sum = vec4(0);

            // blur in x (horizontal)
            // take nine samples, with the distance blurSize between them
            sum += texture2D(u_texture, vec2(v_texCoords.x - 4.0*u_blurSize, v_texCoords.y)) * 0.05;
            sum += texture2D(u_texture, vec2(v_texCoords.x - 3.0*u_blurSize, v_texCoords.y)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x - 2.0*u_blurSize, v_texCoords.y)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x - u_blurSize, v_texCoords.y)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
            sum += texture2D(u_texture, vec2(v_texCoords.x + u_blurSize, v_texCoords.y)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 2.0*u_blurSize, v_texCoords.y)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 3.0*u_blurSize, v_texCoords.y)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x + 4.0*u_blurSize, v_texCoords.y)) * 0.05;

            // blur in y (vertical)
            // take nine samples, with the distance blurSize between them
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 4.0*u_blurSize)) * 0.05;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 3.0*u_blurSize)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 2.0*u_blurSize)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - u_blurSize)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + u_blurSize)) * 0.15;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 2.0*u_blurSize)) * 0.12;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 3.0*u_blurSize)) * 0.09;
            sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 4.0*u_blurSize)) * 0.05;

            gl_FragColor = sum * u_intensity + texture2D(u_texture, v_texCoords);
        }
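    Worth noting: a fragment shader can only write fragments inside the primitive being drawn, so no change to the blur alone can paint outside the sprite's quad. The usual fix is geometric: give the blur room by first drawing the sprite centered in a padded offscreen target (or by enlarging the quad). A raw GL ES 2.0 sketch of the padded-FBO route, sizes taken from the 40x40 to 60x60 example above (in LibGDX the FrameBuffer class wraps the same steps):

        /* Render the 40x40 sprite centered in a 60x60 FBO so the blur has
           10 px of transparent padding on each side to bleed into. */
        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 60, 60, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glViewport(0, 0, 60, 60);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw the sprite quad centered here, then unbind and run the
           blur shader over `tex`, which now has room for the glow ... */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);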
  8. Psychopathetica

    Raycast From Camera To Mouse Pointer

    Hello. For the last two weeks I've been struggling with one thing: 3D object picking. And I'm near getting it right! It works great when facing front: with a first-person style camera I can go up, down, forward, backward, strafe left, strafe right, and it works. The problem is that when I rotate the camera, the end of the ray that is not the mouse end goes off somewhere other than the camera, completely throwing it off! So I'm going to go step by step, and see if you can spot what went wrong.

    The first step was to normalize the mouse device coordinates, or in my case, touch coordinates:

        public static float[] getNormalizedDeviceCoords(float touchX, float touchY){
            float[] result = new float[2];
            result[0] = (2f * touchX) / Render.camera.screenWidth - 1f;
            result[1] = 1f - (2f * touchY) / Render.camera.screenHeight;
            return result;
        }

    which in turn is converted into homogeneous clip coordinates:

        float[] homogeneousClipCoords = new float[]{normalizedDeviceCoords[0], normalizedDeviceCoords[1], -1f, 1f};

    The next step was to convert these homogeneous clip coordinates into eye coordinates:

        public static float[] getEyeCoords(float[] clipCoords){
            float[] invertedProjection = new float[16];
            Matrix.invertM(invertedProjection, 0, Render.camera.projMatrix, 0);
            float[] eyeCoords = new float[4];
            Matrix.multiplyMV(eyeCoords, 0, invertedProjection, 0, clipCoords, 0);
            float[] result = new float[]{eyeCoords[0], eyeCoords[1], -1f, 0f};
            return result;
        }

    Next was to convert the eye coordinates into world coordinates and normalize:

        public static float[] getWorldCoords(float[] eyeCoords){
            float[] invertedViewMatrix = new float[16];
            Matrix.invertM(invertedViewMatrix, 0, Render.camera.viewM, 0);
            float[] rayWorld = new float[4];
            Matrix.multiplyMV(rayWorld, 0, invertedViewMatrix, 0, eyeCoords, 0);
            float length = (float)Math.sqrt(rayWorld[0] * rayWorld[0] + rayWorld[1] * rayWorld[1] + rayWorld[2] * rayWorld[2]);
            if(length != 0){
                rayWorld[0] /= length;
                rayWorld[1] /= length;
                rayWorld[2] /= length;
            }
            return rayWorld;
        }

    Putting this all together gives me a method to get the ray direction I need:

        public static float[] calculateMouseRay(){
            float touchX = MainActivity.touch.x;
            float touchY = MainActivity.touch.y;
            float[] normalizedDeviceCoords = getNormalizedDeviceCoords(touchX, touchY);
            float[] homogeneousClipCoords = new float[]{normalizedDeviceCoords[0], normalizedDeviceCoords[1], -1f, 1f};
            float[] eyeCoords = getEyeCoords(homogeneousClipCoords);
            float[] worldCoords = getWorldCoords(eyeCoords);
            return worldCoords;
        }

    I then test for the ray/sphere intersection, using double precision:

        public static boolean getRaySphereIntersection(float[] rayOrigin, float[] spherePosition, float[] rayDirection, float radius){
            double[] v = new double[4];
            double[] dir = new double[4];

            // Calculate the a, b, c and d coefficients:
            // a = (XB-XA)^2 + (YB-YA)^2 + (ZB-ZA)^2
            // b = 2 * ((XB-XA)(XA-XC) + (YB-YA)(YA-YC) + (ZB-ZA)(ZA-ZC))
            // c = (XA-XC)^2 + (YA-YC)^2 + (ZA-ZC)^2 - r^2
            // d = b^2 - 4*a*c
            v[0] = (double)rayOrigin[0] - (double)spherePosition[0];
            v[1] = (double)rayOrigin[1] - (double)spherePosition[1];
            v[2] = (double)rayOrigin[2] - (double)spherePosition[2];
            dir[0] = (double)rayDirection[0];
            dir[1] = (double)rayDirection[1];
            dir[2] = (double)rayDirection[2];
            double a = (dir[0] * dir[0]) + (dir[1] * dir[1]) + (dir[2] * dir[2]);
            double b = (dir[0] * v[0] + dir[1] * v[1] + dir[2] * v[2]) * 2.0;
            double c = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]) - ((double)radius * (double)radius);

            // Find the discriminant.
            //double d = (b * b) - c;
            double d = (b * b) - (4.0 * a * c);
            Log.d("d", String.valueOf(d));

            if (d == 0f) {
                // one root
            } else if (d > 0f) {
                // two roots
                // NOTE: operator precedence bug here. As written, only
                // Math.sqrt(d) is divided by 2a; the quadratic formula needs
                // (-b - Math.sqrt(d)) / (2.0 * a) and (-b + Math.sqrt(d)) / (2.0 * a).
                double x1 = -b - Math.sqrt(d) / (2.0 * a);
                double x2 = -b + Math.sqrt(d) / (2.0 * a);
                Log.d("X1 X2", String.valueOf(x1) + ", " + String.valueOf(x2));
                if ((x1 >= 0.0) || (x2 >= 0.0)){
                    return true;
                }
                if ((x1 < 0.0) || (x2 >= 0.0)){
                    return true;
                }
            }
            return false;
        }

    After a week and a half of playing around with this chunk of code, and researching everything I could on Google, I found out by sheer accident that the sphere position used in this method must be the transformed sphere position extracted from the model matrix, not the raw position. Which not one damn tutorial or forum article mentioned! It works great using this. (I haven't tested the object's modelView yet, though.)

    To visually check the ray, I made a class to draw the 3D line, and noticed it has no trouble at all with the end at my mouse cursor. The other end, the origin, is only sort of working: it messes up when I rotate left or right as I move around with an FPS-style camera. Which brings me to my next point: I have no idea what the ray origin should be for the camera. I have four choices; three of them "worked", but gave me the same results.

    Ray origin choices:

    1. Using just camera.position.x, camera.position.y and camera.position.z for the ray origin worked flawlessly facing straight ahead, because the ray origin remained at the center of the screen, but it messed up when I rotated the camera, drifting off screen as I rotated. Theoretically, even when facing at an angle you are still fixed at that point; the ray origin shouldn't fly away from the center of the screen at all. A point is a point, after all.
    2. Using the camera's model matrix (used for translating and rotating the camera, later multiplied into the camera's view matrix), specifically -modelMatrix[12], -modelMatrix[13] and -modelMatrix[14] (note the negation), gave nearly the same results. The only difference is that camera rotation plays a role in the camera position: great facing straight, but the ray origin is no longer centered at other angles.
    3. Using the camera's view matrix didn't work at all, positive or negative, using elements 12, 13 and 14.
    4. Using the camera's inverted view matrix (positive invertedViewMatrix[12], [13] and [14]) did work, but gave what seemed like the same results as #2.

    So basically, I'm having difficulty getting the other end of the ray, the ray origin. Shooting the ray to the mouse pointer was no problem, like I said, but with the camera end of the ray being off, accuracy suffers a lot at camera angles other than straight ahead. If anyone has any ideas, please let me know; I'm fairly sure my math is correct. If you need any more information, such as the camera code or how I render the ray (which I don't think is needed), I can show that too. Thanks in advance!
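    For what it's worth, the ray origin that matches this unprojection pipeline is the camera's world-space position, which is exactly the translation column of the inverse view matrix (choice 4). A minimal C sketch of that extraction (column-major 4x4 layout, as OpenGL and android.opengl.Matrix both use):

        /* The camera's world position is where the inverse view matrix maps
           the eye-space origin, i.e. its translation column. */
        void camera_world_position(const float inv_view[16], float out[3])
        {
            out[0] = inv_view[12];
            out[1] = inv_view[13];
            out[2] = inv_view[14];
        }
        /* ray(t) = origin + t * direction, with direction from getWorldCoords() */

    If choices 1, 2 and 4 disagree with each other, the view matrix is not the exact inverse of the camera's transform (for example, rotation and translation composed in the wrong order), which would also explain the origin drifting as the camera rotates.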
  9. Hi, I'm trying to solve the problem of getting all the colors used in an image. I can see only one way: walk through the raster data in a loop and collect all the bytes, but I'm sure there is a better way to get the colors from an image. I'm thinking about somehow collecting the colors into a result texture... Is this a common situation? Could you help me? I didn't find anything on the internet... Thanks.
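    On the CPU, the straightforward loop is actually quite cheap if paired with a 2^24-entry bit set (one bit per possible RGB triple, 2 MB total). A sketch, assuming 8-bit RGBA pixel data (function name mine):

        #include <stdint.h>
        #include <stdlib.h>

        /* Count distinct RGB colors in w*h RGBA8 pixels. */
        size_t distinct_colors(const uint8_t *rgba, int w, int h)
        {
            uint8_t *seen = calloc((1u << 24) / 8, 1);  /* 2 MB bit set */
            size_t count = 0;
            for (int i = 0; i < w * h; i++) {
                uint32_t c = (uint32_t)rgba[i * 4 + 0] << 16 |
                             (uint32_t)rgba[i * 4 + 1] << 8  |
                             (uint32_t)rgba[i * 4 + 2];
                if (!(seen[c >> 3] & (1u << (c & 7)))) {
                    seen[c >> 3] |= (uint8_t)(1u << (c & 7));
                    count++;
                }
            }
            free(seen);
            return count;
        }

    A GPU-side histogram is possible but much more involved on ES 2.0; a single linear pass like this is usually fast enough even for large images.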
  10. I have a fullscreen-sized quad; I draw it on screen and in the fragment shader I choose whether each fragment lies on a grid line or not, and color it accordingly (2D grid rendering). However, when I zoom out the lines disappear, and I can't achieve a constant thickness regardless of zoom. (Left: zoomed in; right and below: zoomed out.) Here's the shader:

        precision highp float;

        uniform vec3 translation;
        uniform float grid_size;
        uniform vec3 bg_color;
        uniform vec3 grid_color;
        uniform float lthick;
        uniform float sw;
        uniform float sh;
        uniform float scale;
        varying vec3 vc;

        int modulo(float x, float y) {
            return int(x - y * float(int(x / y)));
        }

        float modulof(float x, float y) {
            return x - y * float(int(x / y));
        }

        void main() {
            bool found = false;
            vec2 fragcoord = vc.xy * 0.5 + 0.5;
            // find how many world-space units we have in a half-screen
            vec2 sc = (vec2(sw, sh) / 2.0) / scale;
            sc = sc * vc.xy + translation.xy;   // world position
            int px = modulo(float(sc.x), grid_size);
            int py = modulo(float(sc.y), grid_size);
            if ((px == 0) || (py == 0))
                found = true;
            if (found)
                gl_FragColor = vec4(grid_color, 1.0);
            else
                gl_FragColor = vec4(bg_color, 1.0);
        }

    I know that when zooming out, a fragment may no longer land exactly on a grid line.
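    A common fix for both symptoms (vanishing lines and zoom-dependent thickness) is to compute the distance to the nearest grid line in cell units and convert it to pixels with screen-space derivatives, so coverage stays roughly one pixel at any zoom. A hedged sketch of such a fragment shader, kept here as a C string (requires the GL_OES_standard_derivatives extension on ES 2.0; uniform names follow the post):

        static const char *grid_fs =
            "#extension GL_OES_standard_derivatives : enable\n"
            "precision highp float;\n"
            "uniform vec3 translation;\n"
            "uniform float grid_size;\n"
            "uniform vec3 bg_color;\n"
            "uniform vec3 grid_color;\n"
            "uniform float sw, sh, scale;\n"
            "varying vec3 vc;\n"
            "void main() {\n"
            "    vec2 sc = (vec2(sw, sh) / 2.0) / scale * vc.xy + translation.xy;\n"
            "    vec2 cell = sc / grid_size;\n"
            "    /* distance to the nearest grid line, in pixels */\n"
            "    vec2 g = abs(fract(cell - 0.5) - 0.5) / fwidth(cell);\n"
            "    float line = 1.0 - min(min(g.x, g.y), 1.0);\n"
            "    gl_FragColor = vec4(mix(bg_color, grid_color, line), 1.0);\n"
            "}\n";

    Lines then keep about one-pixel coverage at any zoom, and fade smoothly once cells become sub-pixel instead of flickering in and out.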
  11. I wonder how one could achieve this. Personally, I could pass additional vertex data: the first attribute would be the actual geometric position, the second would be the next vertex in the array. But that's way too overcomplicated; I'd have to build two sets of arrays, so I just don't. I can't think of anything better: something that wouldn't force me to pass another attribute to the shaders, and wouldn't force me to change my internal model structure at all. By the way, I'm drawing the lines with GL_LINE_LOOP. Any thoughts?
  12. Hello everyone, I'm trying to display a 2D texture on screen, but the rendering isn't working correctly. First of all, I followed this tutorial to render text to the screen (I adapted it to render with OpenGL ES): https://learnopengl.com/code_viewer.php?code=in-practice/text_rendering

    So here are the shaders I'm using:

        const char gVertexShader[] =
            "#version 320 es\n"
            "layout (location = 0) in vec4 vertex;\n"
            "out vec2 TexCoords;\n"
            "uniform mat4 projection;\n"
            "void main() {\n"
            "    gl_Position = projection * vec4(vertex.xy, 0.0, 1.0);\n"
            "    TexCoords = vertex.zw;\n"
            "}\n";

        const char gFragmentShader[] =
            "#version 320 es\n"
            "precision mediump float;\n"
            "in vec2 TexCoords;\n"
            "out vec4 color;\n"
            "uniform sampler2D text;\n"
            "uniform vec3 textColor;\n"
            "void main() {\n"
            "    vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);\n"
            "    color = vec4(textColor, 1.0) * sampled;\n"
            "}\n";

    The text rendering works very well, so I would like to keep these shader programs and also render a texture loaded from a PNG. I'm using libpng to load the PNG into a texture; here is my code:

        GLuint Cluster::loadPngFromPath(const char *file_name, int *width, int *height)
        {
            png_byte header[8];
            FILE *fp = fopen(file_name, "rb");
            if (fp == 0) { return 0; }
            fread(header, 1, 8, fp);
            if (png_sig_cmp(header, 0, 8)) { fclose(fp); return 0; }
            png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
            if (!png_ptr) { fclose(fp); return 0; }
            png_infop info_ptr = png_create_info_struct(png_ptr);
            if (!info_ptr) {
                png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
                fclose(fp);
                return 0;
            }
            png_infop end_info = png_create_info_struct(png_ptr);
            if (!end_info) {
                png_destroy_read_struct(&png_ptr, &info_ptr, (png_infopp)NULL);
                fclose(fp);
                return 0;
            }
            if (setjmp(png_jmpbuf(png_ptr))) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                fclose(fp);
                return 0;
            }
            png_init_io(png_ptr, fp);
            png_set_sig_bytes(png_ptr, 8);
            png_read_info(png_ptr, info_ptr);
            int bit_depth, color_type;
            png_uint_32 temp_width, temp_height;
            png_get_IHDR(png_ptr, info_ptr, &temp_width, &temp_height, &bit_depth,
                         &color_type, NULL, NULL, NULL);
            if (width) { *width = temp_width; }
            if (height) { *height = temp_height; }
            png_read_update_info(png_ptr, info_ptr);
            int rowbytes = png_get_rowbytes(png_ptr, info_ptr);
            rowbytes += 3 - ((rowbytes - 1) % 4);
            png_byte *image_data = (png_byte *)malloc(rowbytes * temp_height * sizeof(png_byte) + 15);
            if (image_data == NULL) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                fclose(fp);
                return 0;
            }
            png_bytep *row_pointers = (png_bytep *)malloc(temp_height * sizeof(png_bytep));
            if (row_pointers == NULL) {
                png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
                free(image_data);
                fclose(fp);
                return 0;
            }
            int i;
            for (i = 0; i < temp_height; i++) {
                row_pointers[temp_height - 1 - i] = image_data + i * rowbytes;
            }
            png_read_image(png_ptr, row_pointers);

            GLuint texture;
            glGenTextures(1, &texture);
            glBindTexture(GL_TEXTURE_2D, texture);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexImage2D(GL_TEXTURE_2D, GL_ZERO, GL_RGB, temp_width, temp_height,
                         GL_ZERO, GL_RGB, GL_UNSIGNED_BYTE, image_data);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
            free(image_data);
            free(row_pointers);
            fclose(fp);
            return texture;
        }

    This code just generates the texture, and I store the id in memory. Then I want to display my texture at any position (X, Y) of my screen, so I did the following (this works, at least the positioning):

        // MY TEXTURE IS 32x32 pixels!
        void Cluster::printTexture(GLuint idTexture, GLfloat x, GLfloat y)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindVertexArray(VAO);
            GLfloat vertices[6][4] = {
                { x,      y + 32, 0.0, 0.0 },
                { x,      y,      0.0, 1.0 },
                { x + 32, y,      1.0, 1.0 },
                { x,      y + 32, 0.0, 0.0 },
                { x + 32, y,      1.0, 1.0 },
                { x + 32, y + 32, 1.0, 0.0 }
            };
            glBindTexture(GL_TEXTURE_2D, idTexture);
            glBindBuffer(GL_ARRAY_BUFFER, VBO);
            glBufferSubData(GL_ARRAY_BUFFER, GL_ZERO, sizeof(vertices), vertices);
            glBindBuffer(GL_ARRAY_BUFFER, GL_ZERO);
            glUniform1i(this->mTextShaderHandle, GL_ZERO);
            glDrawArrays(GL_TRIANGLE_STRIP, GL_ZERO, 6);
        }

    My .png is a blue square. The result is that the texture doesn't load correctly: it isn't complete, and there are many small black spots. I don't know what's going on. Could it be the vertices or the load? Or maybe I need to add something in the shader? I don't know; I really need help. Thanks!
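    One likely culprit, for what it's worth: the upload always uses GL_RGB, while the PNG may decode to RGBA (or palette/gray), so the row layout won't match what glTexImage2D expects. libpng can be asked to normalize everything to 8-bit RGBA before reading; a sketch of the transform calls (all standard libpng APIs; they belong between png_read_info and png_read_update_info):

        /* Force 8-bit RGBA output regardless of the PNG's native format. */
        if (color_type == PNG_COLOR_TYPE_PALETTE)
            png_set_palette_to_rgb(png_ptr);
        if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8)
            png_set_expand_gray_1_2_4_to_8(png_ptr);
        if (png_get_valid(png_ptr, info_ptr, PNG_INFO_tRNS))
            png_set_tRNS_to_alpha(png_ptr);
        if (bit_depth == 16)
            png_set_strip_16(png_ptr);
        if (!(color_type & PNG_COLOR_MASK_ALPHA))
            png_set_add_alpha(png_ptr, 0xFF, PNG_FILLER_AFTER);
        png_read_update_info(png_ptr, info_ptr);
        /* ...then upload with:
           glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, temp_width, temp_height, 0,
                        GL_RGBA, GL_UNSIGNED_BYTE, image_data); */

    Three more details worth checking: GL_TEXTURE_WRAP_R only applies to 3D textures, so that call is a GL error on a 2D texture; the six vertices are laid out as two triangles, which matches GL_TRIANGLES rather than GL_TRIANGLE_STRIP; and the text fragment shader turns the texture's red channel into alpha and ignores its colors, so a color image drawn through it will never look right. A plain textured quad needs something like color = texture(text, TexCoords) instead.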
  13. DelicateTreeFrog

    OpenGL GLSL: 9-slicing

    I have a 9-slice shader working mostly nicely. Here, both sprites are separate images, so the shader code works well:

        varying vec4 color;
        varying vec2 texCoord;

        uniform sampler2D tex;
        uniform vec2 u_dimensions;
        uniform vec2 u_border;

        float map(float value, float originalMin, float originalMax, float newMin, float newMax) {
            return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
        }

        // Helper function, because WET code is bad code.
        // Takes in the coordinate on the current axis and the borders.
        float processAxis(float coord, float textureBorder, float windowBorder) {
            if (coord < windowBorder)
                return map(coord, 0, windowBorder, 0, textureBorder);
            if (coord < 1 - windowBorder)
                return map(coord, windowBorder, 1 - windowBorder, textureBorder, 1 - textureBorder);
            return map(coord, 1 - windowBorder, 1, 1 - textureBorder, 1);
        }

        void main(void) {
            vec2 newUV = vec2(
                processAxis(texCoord.x, u_border.x, u_dimensions.x),
                processAxis(texCoord.y, u_border.y, u_dimensions.y)
            );
            // Output the color
            gl_FragColor = texture2D(tex, newUV);
        }

    Outside the shader, I upload vec2(slice/box.w, slice/box.h) into the u_dimensions variable, and vec2(slice/clip.w, slice/clip.h) into u_border. In this scenario, box is the box dimensions, clip is the dimensions of the 24x24 image to be 9-sliced, and slice is 8 (the size of each slice in pixels). This is great and all, but it's very disagreeable if I decide to organize the various 9-slice images into a single sprite-sheet image. Because OpenGL works between 0.0 and 1.0 instead of true pixel coordinates, and processes the full image rather than just the contents of the clipping rectangle, I'm kind of stumped about how to tell the shader to do what I need it to do. Does anyone have pro advice on how to make it more sprite-sheet-friendly? Thank you!
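    Since the 9-slice math works in the 0..1 space of a standalone image, one way to make it atlas-friendly is to run it unchanged in the sub-image's local space and only remap into the atlas at the very end: uv_atlas = cellOffset + newUV * cellSize, with the cell's origin and size (in 0..1 atlas units) passed as uniforms. A hedged sketch of the adjusted main(), kept as a C string; u_cellOffset/u_cellSize are hypothetical uniform names, and it assumes the incoming texCoord spans the cell's rectangle within the atlas:

        static const char *nine_slice_atlas_main =
            "uniform vec2 u_cellOffset;  /* cell origin in atlas UV space */\n"
            "uniform vec2 u_cellSize;    /* cell size in atlas UV space   */\n"
            "void main(void) {\n"
            "    /* bring the atlas coordinate into the cell's local 0..1 space */\n"
            "    vec2 local = (texCoord - u_cellOffset) / u_cellSize;\n"
            "    vec2 newUV = vec2(\n"
            "        processAxis(local.x, u_border.x, u_dimensions.x),\n"
            "        processAxis(local.y, u_border.y, u_dimensions.y));\n"
            "    /* map the sliced local UV back into the atlas cell */\n"
            "    gl_FragColor = texture2D(tex, u_cellOffset + newUV * u_cellSize);\n"
            "}\n";

    One caveat with atlases and bilinear filtering: a half-texel inset on the cell bounds may be needed to avoid bleeding from neighboring sprites.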
  14. Not long ago, I created a nice OBJ loader that loads files exported from 3D Studio Max. The only problem is that although it worked, and worked great, I wasn't using vertex buffers. Now that I've applied vertex buffers, it seems to use only the first color of the texture and spread it across the whole polygon. I've examined my code over and over again, and the vertex buffer code looks correct. But when I comment out all of my vertex buffer code, it works as intended. I've practically given up on fixing it on my own, so hopefully you'll be able to figure out what is wrong.

        public static final int BYTES_PER_FLOAT = 4;
        public static final int POSITION_COMPONENT_COUNT_3D = 4;
        public static final int COLOR_COMPONENT_COUNT = 4;
        public static final int TEXTURE_COORDINATES_COMPONENT_COUNT = 2;
        public static final int NORMAL_COMPONENT_COUNT = 3;
        public static final int POSITION_COMPONENT_STRIDE_2D = POSITION_COMPONENT_COUNT_2D * BYTES_PER_FLOAT;
        public static final int POSITION_COMPONENT_STRIDE_3D = POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT;
        public static final int COLOR_COMPONENT_STRIDE = COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT;
        public static final int TEXTURE_COORDINATE_COMPONENT_STRIDE = TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT;
        public static final int NORMAL_COMPONENT_STRIDE = NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT;

        int loadFile() {
            ArrayList<Vertex3D> tempVertexArrayList = new ArrayList<Vertex3D>();
            ArrayList<TextureCoord2D> tempTextureCoordArrayList = new ArrayList<TextureCoord2D>();
            ArrayList<Vector3D> tempNormalArrayList = new ArrayList<Vector3D>();
            ArrayList<Face3D> tempFaceArrayList = new ArrayList<Face3D>();
            StringBuilder body = new StringBuilder();
            try {
                InputStream inputStream = context.getResources().openRawResource(resourceID);
                InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
                BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
                String nextLine;
                String subString;
                String[] stringArray;
                String[] stringArray2;
                int[] indexNumberList = new int[3];
                int[] textureCoordNumberList = new int[3];
                int[] normalNumberList = new int[3];
                int i = 0;
                int j = 0;
                int k = 0;
                try {
                    while ((nextLine = bufferedReader.readLine()) != null) {
                        if (nextLine.startsWith("v ")) {
                            subString = nextLine.substring(1).trim();
                            stringArray = subString.split(" ");
                            try {
                                tempVertexArrayList.add(new Vertex3D(Float.parseFloat(stringArray[0]),
                                        Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2]), 1f));
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading vertex list");
                                return 0;
                            }
                            String x = String.valueOf(tempVertexArrayList.get(i).x);
                            String y = String.valueOf(tempVertexArrayList.get(i).y);
                            String z = String.valueOf(tempVertexArrayList.get(i).z);
                            //Log.d(TAG, "vertex " + String.valueOf(i) + ": " + x + ", " + y + ", " + z);
                            i++;
                        }
                        if (nextLine.startsWith("vn ")) {
                            subString = nextLine.substring(2).trim();
                            stringArray = subString.split(" ");
                            try {
                                if (reverseNormals) {
                                    tempNormalArrayList.add(new Vector3D(-Float.parseFloat(stringArray[0]),
                                            -Float.parseFloat(stringArray[1]), -Float.parseFloat(stringArray[2])));
                                } else {
                                    tempNormalArrayList.add(new Vector3D(Float.parseFloat(stringArray[0]),
                                            Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2])));
                                }
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading normal list");
                                return 0;
                            }
                            String nx = String.valueOf(tempNormalArrayList.get(j).x);
                            String ny = String.valueOf(tempNormalArrayList.get(j).y);
                            String nz = String.valueOf(tempNormalArrayList.get(j).z);
                            //Log.d(TAG, "normal " + String.valueOf(j) + ": " + nx + ", " + ny + ", " + nz);
                            j++;
                        }
                        if (nextLine.startsWith("vt ")) {
                            subString = nextLine.substring(2).trim();
                            stringArray = subString.split(" ");
                            try {
                                tempTextureCoordArrayList.add(new TextureCoord2D(Float.parseFloat(stringArray[0]),
                                        Float.parseFloat(stringArray[1])));
                            } catch (NumberFormatException e) {
                                Log.d(TAG, "Error: Invalid number format in loading texture coordinate list");
                                return 0;
                            }
                            String tu = String.valueOf(tempTextureCoordArrayList.get(k).tu);
                            String tv = String.valueOf(tempTextureCoordArrayList.get(k).tv);
                            //Log.d(TAG, "texture coord " + String.valueOf(k) + ": " + tu + ", " + tv);
                            k++;
                        }
                        if (nextLine.startsWith("f ")) {
                            subString = nextLine.substring(1).trim();
                            stringArray = subString.split(" ");
                            for (int index = 0; index <= 2; index++) {
                                stringArray2 = stringArray[index].split("/");
                                try {
                                    indexNumberList[index] = Integer.parseInt(stringArray2[0]) - 1;
                                    if (indexNumberList[index] < 0) {
                                        Log.d(TAG, "Error: indexNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading indexNumberList[]");
                                    return 0;
                                }
                                try {
                                    textureCoordNumberList[index] = Integer.parseInt(stringArray2[1]) - 1;
                                    if (textureCoordNumberList[index] < 0) {
                                        Log.d(TAG, "Error: textureCoordNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading textureCoordNumberList[]");
                                    return 0;
                                }
                                try {
                                    normalNumberList[index] = Integer.parseInt(stringArray2[2]) - 1;
                                    if (normalNumberList[index] < 0) {
                                        Log.d(TAG, "Error: normalNumberList[] is less than zero");
                                        return 0;
                                    }
                                } catch (NumberFormatException e) {
                                    Log.d(TAG, "Error: Invalid number format in loading normalNumberList[]");
                                    return 0;
                                }
                            }
                            tempFaceArrayList.add(new Face3D(
                                    indexNumberList[0], textureCoordNumberList[0], normalNumberList[0],
                                    indexNumberList[1], textureCoordNumberList[1], normalNumberList[1],
                                    indexNumberList[2], textureCoordNumberList[2], normalNumberList[2]));
                        }
                        body.append(nextLine);
                        body.append('\n');
                    }

                    // Now that everything has successfully loaded, populate the public variables.
                    if (tempVertexArrayList != null && tempVertexArrayList.size() != 0)
                        vertexArrayList.addAll(tempVertexArrayList);
                    if (tempTextureCoordArrayList != null && tempTextureCoordArrayList.size() != 0)
                        textureCoordArrayList.addAll(tempTextureCoordArrayList);
                    if (tempNormalArrayList != null && tempNormalArrayList.size() != 0)
                        normalArrayList.addAll(tempNormalArrayList);
                    if (tempFaceArrayList != null && tempFaceArrayList.size() != 0)
                        faceArrayList.addAll(tempFaceArrayList);

                    vertexList = new float[faceArrayList.size() * POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE];
                    indexList = new short[faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE];
                    colorList = new float[faceArrayList.size() * COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];
                    textureCoordList = new float[faceArrayList.size() * TEXTURE_COORDINATES_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];
                    normalList = new float[faceArrayList.size() * NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE];

                    int nextFace = 0;
                    int step = POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < vertexList.length; currentVertex += step) {
                        vertexList[currentVertex + 0] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).x;
                        vertexList[currentVertex + 1] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).y;
                        vertexList[currentVertex + 2] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).z;
                        vertexList[currentVertex + 3] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).w;
                        vertexList[currentVertex + 4] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).x;
                        vertexList[currentVertex + 5] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).y;
                        vertexList[currentVertex + 6] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).z;
                        vertexList[currentVertex + 7] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).w;
                        vertexList[currentVertex + 8] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).x;
                        vertexList[currentVertex + 9] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).y;
                        vertexList[currentVertex + 10] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).z;
                        vertexList[currentVertex + 11] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).w;
                        nextFace++;
                    }
                    numberOfVertices = vertexList.length / POSITION_COMPONENT_COUNT_3D;

                    nextFace = 0;
                    for (int currentIndex = 0; currentIndex < indexList.length; currentIndex += NUMBER_OF_SIDES_PER_FACE) {
                        // NOTE: nextFace is never incremented in this loop, so every
                        // entry repeats face 0's indices. (Harmless here only because
                        // draw() uses glDrawArrays, not the index list. Also note
                        // indexBuffer below is sized with BYTES_PER_FLOAT although
                        // shorts are 2 bytes, which over-allocates.)
                        indexList[currentIndex + 0] = faceArrayList.get(nextFace).indexNumberList.get(0).shortValue();
                        indexList[currentIndex + 1] = faceArrayList.get(nextFace).indexNumberList.get(1).shortValue();
                        indexList[currentIndex + 2] = faceArrayList.get(nextFace).indexNumberList.get(2).shortValue();
                    }

                    step = COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < colorList.length; currentVertex += step) {
                        colorList[currentVertex + 0] = red;
                        colorList[currentVertex + 1] = green;
                        colorList[currentVertex + 2] = blue;
                        colorList[currentVertex + 3] = alpha;
                        colorList[currentVertex + 4] = red;
                        colorList[currentVertex + 5] = green;
                        colorList[currentVertex + 6] = blue;
                        colorList[currentVertex + 7] = alpha;
                        colorList[currentVertex + 8] = red;
                        colorList[currentVertex + 9] = green;
                        colorList[currentVertex + 10] = blue;
                        colorList[currentVertex + 11] = alpha;
                    }

                    nextFace = 0;
                    step = TEXTURE_COORDINATES_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < textureCoordList.length; currentVertex += step) {
                        textureCoordList[currentVertex + 0] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tu * mult;
                        textureCoordList[currentVertex + 1] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tv * mult;
                        textureCoordList[currentVertex + 2] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tu * mult;
                        textureCoordList[currentVertex + 3] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tv * mult;
                        textureCoordList[currentVertex + 4] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tu * mult;
                        textureCoordList[currentVertex + 5] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tv * mult;
                        nextFace++;
                    }

                    nextFace = 0;
                    step = NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE;
                    for (int currentVertex = 0; currentVertex < normalList.length; currentVertex += step) {
                        normalList[currentVertex + 0] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).x;
                        normalList[currentVertex + 1] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).y;
                        normalList[currentVertex + 2] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).z;
                        normalList[currentVertex + 3] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).x;
                        normalList[currentVertex + 4] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).y;
                        normalList[currentVertex + 5] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).z;
                        normalList[currentVertex + 6] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).x;
                        normalList[currentVertex + 7] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).y;
                        normalList[currentVertex + 8] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).z;
                        nextFace++;
                    }

                    vertexBuffer = ByteBuffer.allocateDirect(vertexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    indexBuffer = ByteBuffer.allocateDirect(indexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asShortBuffer();
                    colorBuffer = ByteBuffer.allocateDirect(colorList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    textureCoordBuffer = ByteBuffer.allocateDirect(textureCoordList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    normalBuffer = ByteBuffer.allocateDirect(normalList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer();
                    vertexBuffer.put(vertexList).position(0);
                    indexBuffer.put(indexList).position(0);
                    colorBuffer.put(colorList).position(0);
                    textureCoordBuffer.put(textureCoordList).position(0);
                    normalBuffer.put(normalList).position(0);

                    createVertexBuffer();

                    uMVPMatrixHandle = glGetUniformLocation(program, U_MVPMATRIX);
                    uMVMatrixHandle = glGetUniformLocation(program, U_MVMATRIX);
                    uTextureUnitHandle = glGetUniformLocation(program, U_TEXTURE_UNIT);
                    aPositionHandle = glGetAttribLocation(program, A_POSITION);
                    aColorHandle = glGetAttribLocation(program, A_COLOR);
                    aTextureCoordinateHandle = glGetAttribLocation(program, A_TEXTURE_COORDINATES);
                    aNormalHandle = glGetAttribLocation(program, A_NORMAL);
                    glEnableVertexAttribArray(aPositionHandle);
                    glEnableVertexAttribArray(aColorHandle);
                    glEnableVertexAttribArray(aTextureCoordinateHandle);
                    glEnableVertexAttribArray(aNormalHandle);
                    glActiveTexture(GL_TEXTURE0);
                    glUniform1i(uTextureUnitHandle, 0);
                } catch (IOException e) {
                }
            } catch (Resources.NotFoundException nfe) {
                throw new RuntimeException("Resource not found: " + resourceID, nfe);
            }
            return 1;
        }

        public void draw() {
            glEnable(GL_DEPTH_TEST);
            bindData();
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glDrawArrays(GL_TRIANGLES, 0, faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        public void bindData() {
            int offset = 0;
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false,
                    POSITION_COMPONENT_STRIDE_3D, offset);
            offset += POSITION_COMPONENT_COUNT_3D;
            glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false,
                    COLOR_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            offset += COLOR_COMPONENT_COUNT;
            glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false,
                    TEXTURE_COORDINATE_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            offset += TEXTURE_COORDINATES_COMPONENT_COUNT;
            glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false,
                    NORMAL_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT);
            glBindBuffer(GL_ARRAY_BUFFER, 0);

            /////////////////////////////////////////////////////
            /*
            vertexBuffer.position(0);
            glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false,
                    POSITION_COMPONENT_STRIDE_3D, vertexBuffer);
            colorBuffer.position(0);
            glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false,
                    COLOR_COMPONENT_STRIDE, colorBuffer);
            textureCoordBuffer.position(0);
            glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false,
                    TEXTURE_COORDINATE_COMPONENT_STRIDE, textureCoordBuffer);
            normalBuffer.position(0);
            glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false,
                    NORMAL_COMPONENT_STRIDE, normalBuffer);
            */
        }

        public void createVertexBuffer() {
            glGenBuffers(1, vertexBufferObject, 0);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]);
            glBufferData(GL_ARRAY_BUFFER,
                    numberOfVertices * (POSITION_COMPONENT_COUNT_3D + COLOR_COMPONENT_COUNT
                            + TEXTURE_COORDINATES_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT) * BYTES_PER_FLOAT,
                    null, GL_STATIC_DRAW);
            int offset = 0;
            glBufferSubData(GL_ARRAY_BUFFER, offset,
                    numberOfVertices * POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT, vertexBuffer);
            offset += POSITION_COMPONENT_COUNT_3D;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT,
                    numberOfVertices * COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT, colorBuffer);
            offset += COLOR_COMPONENT_COUNT;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT,
                    numberOfVertices * TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT, textureCoordBuffer);
            offset += TEXTURE_COORDINATES_COMPONENT_COUNT;
            glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT,
                    numberOfVertices * NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT, normalBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }
  15. lawnjelly

    Android Build and Performance

    In the last few weeks I've been focusing on getting the Android build of my jungle game working and tested. Last time I did this I was working from Windows, but now that I've totally migrated to Linux, I wasn't sure how easily everything would go. In the end, it turns out that support for Linux is great; in fact it was easier than getting things up and running on Windows, with no special drivers needed. Android Studio, and particularly the emulators, definitely seem better than last time, with x86 emulators running at near-native speed, and much quicker APK uploads to the emulators (although uploads to the devices are still slow; I gather I can improve this by updating them to a higher Android version, but then they are less good for testing). The devices I have at home are an old Cat B15 phone (800x480, with a GPU that seems to date from 2006!), a Nexus 7 2012 tablet, and finally an Amlogic S905X TV media player (2017). Funnily enough, the TV box has been the most involved to get working.

    CPU issues

    My first issue to contend with was a 'SIGBUS illegal alignment' error when running on the phone. After tracking it down, it turns out this particular ARM CPU is very picky about the alignment of data. It is usually good practice to keep structures well aligned, but x86 is very forgiving, and I use quite a few structs #pragma-packed to 1 byte, particularly in serialization. Some padding in the structures sorted this.

    Next, I had spent many hours trying to figure out a strange bug whereby the object lighting worked fine in the emulators but looked wrong on the device. I suspected a signed/unsigned issue in the values for diffuse light in a shader input, but I couldn't see anything wrong with the code. Almost unbelievably, when I tracked it down, it turned out there wasn't anything wrong with the code. The problem was that with the x86 compiler a 'char' defaults to signed, but with the ARM compiler 'char' defaults to unsigned! This is an interesting choice (apparently unsigned may be faster on the ARM chip), but it goes against the usual convention for short, int etc. It was easy enough to fix by flipping a compiler switch. I guess I should really be using explicit signed/unsigned types. It has always struck me as somewhat weird that C is so vague about the built-in types, in both the number of bits and the sign, given that changing either usually produces bugs.

    GPU issues

    The biggest problem I tend to have with OpenGL ES devices is the 'precision' specifiers in shaders. You can fill them in however you want on the desktop; it just ignores them and uses high precision. However, different devices have different capabilities for lowp, mediump and highp, in both vertex and fragment shaders. It would be really helpful if the people making the emulators / desktop OpenGL ES could let them emulate the lower precisions, allowing us to debug precision on the desktop. Alas, I couldn't figure out a way to get this to work. It may be impossible using hardware OpenGL ES, but the emulator can also use SwiftShader, so maybe they could implement it there?

    My biggest problem was that my worst-performing device for precision was actually my newest: the TV box. It is built for super-fast decoding of high-resolution video, but the fragment shaders are a minimal 10-bit-precision affair, and the fill rate is poor for a 1080p device. This was coupled with the problem that I couldn't connect it over USB to the desktop for debugging; I was literally compiling an APK, putting it on a USB stick (or Dropbox), taking it to the bedroom, installing, and running. This is not ideal, and I will look into either seeing whether ADB will run over my LAN, or getting another low-precision device for testing.

    I won't go into detail on the precision issues; I wrote more about them in a post here: https://www.gamedev.net/forums/topic/694188-debugging-precision-issues-in-opengl-es-2

    As a quick summary, 10 bits of precision in the fragment shader can lead to sampling error in any math done there, especially in texture-coordinate math. I was able to fix some of my problems by moving the texture-coordinate calculations to the vertex shader, which has more precision. Then, it turns out that my TV box (and presumably many such chipsets) supports an extra high-precision path in the fragment shader, as long as you don't touch the input data. This allows them to do accurate UV coordinates on large texture maps, because the 10-bit precision is bypassed.

    Menus

    I've written a rudimentary menu system for the game, with tickboxes, sliders and listboxes. This has enabled me to put in a bunch of debugging features I can turn on and off on devices, to try to find out what affects performance, without recompiling. Another trick from my console days is some simple graphical performance bars: I record the last 60 frames into a circular buffer and store things like the frame duration and when certain game tasks took place. In my case the big issue is when a 'scroll' event takes place, as I render horizontal and vertical tiles of the landscape as you move about it. In the diagram, the blue bar is where a scroll happens, a green bar is where the ground scroll happens, and red is the frame duration. It doesn't show much on the desktop, as the GPU is fast, but on the slow devices I often get a dropped frame on the scrolls, so I am trying to reduce this. I can turn various aspects of the scrolling/rendering on and off to track down what causes performance issues. Certainly PCF shadows are a big ask on mobiles, as is the ground (terrain) shader.

    In my first incarnation of the game I pre-rendered everything (graphics + shadows) out to a massive texture at load-up and just scrolled through it as you moved. This is great for performance, but unfortunately uses a shedload of memory if you want big maps, and phones don't have lots of memory. So a lot of technical effort has gone into writing the scrolling system, which redraws the background in horizontal and vertical tiles as you move about. This is much trickier with an angled landscape than with a top-down 90-degree view, and even trickier when you have to render shadow maps as you move.

    Having identified the shadow-map pass as a bottleneck, I did some quick calculations for my maximum map size (approx. 16384x16384) and decided that I could probably get away with pre-rendering the shadow map to a 2048x2048 texture. Alright, it isn't very high resolution, but it beats turning shadows off completely. This is working fine and avoids a lot of ugly issues from scrolling the shadow map. To render out the shadow map I render a bunch of 256x256 tiles and copy them to the final shadow map. This fixed some of the slowness; then I realised I could go a step further. Much of the PCF shadow slowdown was from rendering the landscape shadows. The buildings and objects are much rarer, so I figured I could pre-render a low-resolution landscape shadow texture and use that when scrolling, and then only do the expensive PCF / simple shadows on the static objects and dynamic objects. This worked a treat, and incidentally solved at a stroke the precision issues I was having with the shadow shader on the 10-bit hardware.

    Joysticks

    As well as supporting touchscreens and keyboards, I want to support gamepads, so I bought a Bluetooth/wireless gamepad for Christmas. It works great with the TV box via its wireless dongle; unfortunately, Bluetooth doesn't seem to work with my old phone and tablet, or my desktop, so it has been very difficult or impossible to debug the analog joystick. And, in an oversight(?) in the emulator, there doesn't seem to be an option for emulating a gamepad; I can get a D-pad, but I don't think it is analog. So after some stabs in the dark with the docs, I am still facing gamepad focus issues and will have to wait until I have a suitable device to debug this. That's all for now, folks!
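    The char-signedness surprise mentioned above is easy to reproduce; a tiny C example (the compiler switch referred to would be something like GCC/Clang's -fsigned-char, though explicit widths are the robust fix):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            char c = (char)0xFF;     /* signed char targets: -1; unsigned: 255 */
            printf("%d\n", (int)c);  /* differs between x86 and ARM defaults  */
            int8_t s = -1;           /* explicit width and sign: no ambiguity */
            printf("%d\n", (int)s);  /* always -1 */
            return 0;
        }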
  16. This video gives an overview of the differences an OpenGL ES developer will encounter when starting to develop with the Vulkan API.
  17. Falken42

    Crystal Clash Tutorial UI

    From the album: Crystal Clash

  18. Hi, I'm having a problem where I'm drawing 4000 squares on screen using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I don't think it's incapable of drawing 4000 textured squares at 60 FPS. I checked DDMS and found that most of the time was spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside of the view frustum, the FPS increases, and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
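    Since put() dominates the profile, the vertex data is probably being rebuilt and re-uploaded every frame. If the squares rarely change, uploading once and reusing the buffer removes that cost entirely. A sketch of the idea in raw GL ES calls (C here; the Java GLES20 equivalents have the same names, and the helper name is mine):

        /* One-time setup: upload all quad vertices once. */
        GLuint make_static_vbo(const float *verts, long num_bytes)
        {
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)num_bytes, verts, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            return vbo;
        }
        /* Per frame: no put()/glBufferData, just bind, set the attribute
           pointers, and glDrawElements over the whole batch. */

    The FPS rising off screen despite no culling also hints the visible case is partly fill-rate-bound: clipped triangles generate no fragments, so overdraw from 4000 overlapping textured quads is worth checking too.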
  19. radek spam

    Province Map

    Hi, I would like to create a province map, something like the attached example from Age of Conquest, using LibGDX. After some research I learnt that it can be done using two images: one with the graphics, and a second, invisible one with distinct colors to handle clicks. I have some doubts about this method:

      • How to deal with memory: I created a sample map of 960x540 and it weighs 600 KB, and I would need a map ten times bigger. I could cut it into smaller pieces and render those, but I'm afraid that could cause lag when scrolling the map.
      • How to deal with highlighting the provinces: I managed to implement a simple highlight, limited to one province, with a filter in an OpenGL fragment shader. But what if I want to highlight multiple provinces (e.g., all provinces of some country)? I guess it can be done with a shader too, but it may be much more complicated.
      • I would also like to implement fog of war over the undiscovered provinces. How could one do that?

    I would really appreciate your guidance. Or perhaps, to create the above map, I need some other method entirely?
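    For the click-map part, the lookup itself is cheap; a sketch of picking via the hidden color map, assuming it is rendered to an offscreen framebuffer that is bound at the time of the read (plain C; LibGDX exposes the same call through Gdx.gl.glReadPixels):

        /* Read the pixel under the cursor from the bound click-map
           framebuffer; the RGB triple identifies the province. */
        unsigned char px[4];
        glReadPixels(x, fb_height - 1 - y,   /* GL origin is bottom-left */
                     1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
        int province_id = (px[0] << 16) | (px[1] << 8) | px[2];

    Highlighting many provinces at once can reuse the same idea in reverse: the fragment shader samples the province-color map and looks the color up in a small "selected provinces" lookup texture, so one draw call highlights any subset. Fog of war fits the same pattern, with the lookup returning discovered/undiscovered instead of highlighted/not.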
  20. I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices. One of the biggest problems I'm finding is how the different precisions in GLSL (lowp, mediump, highp) map to actual precision in the hardware. To that end I've been using glGetShaderPrecisionFormat to get the log2 precision of each qualifier for vertex and fragment shaders, and outputting this in-game to the game screen. On my PC the precision comes back as 23, 23, 23 for all three (low, medium, high), running natively under Linux or in the Android Studio emulator. On my tablet, it is 23, 23, 23 also. On my phone it comes back as 8, 10, 23. If I get a precision issue on the phone I can always bump it up to the next level to cure it. However, the fun comes on my Android TV box (Amlogic S905X), which seems to support only 10, 10, 0 for fragment shaders. That is, it doesn't even support high precision in fragment shaders. Being the only device with this problem, it is incredibly difficult to debug the shaders on, as I can't attach it via USB (unless I can get it connected via the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, install and run. Which is ridiculous. My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Other than that, it seems like I need to buy some old phones/tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...
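    For reference, the query described above looks like this (standard ES 2.0 API; range comes back as log2 of the min/max representable magnitudes, precision as log2 of the significand precision):

        /* Query fragment-shader highp float capability. */
        GLint range[2] = {0, 0}, precision = 0;
        glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT,
                                   range, &precision);
        printf("highp float: range 2^%d..2^%d, precision %d bits\n",
               range[0], range[1], precision);
        /* precision == 0 with range == {0,0} means highp is unsupported in
           the fragment stage, as on the TV box described above. */

    On the LAN question: adb does run over TCP/IP on many boxes. With the device reachable once (or via its settings), "adb tcpip 5555" followed by "adb connect <box-ip>:5555" gives full logcat/install access without USB, which would remove the USB-stick round trip.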
  21. I'm using Xcode on Mac OS X, and I've added a file called 'peacock.tga' into my project. I can't seem to open that file (using fopen) however. Is there anything special that I need to do in order for the file to be readable?
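    The usual cause is that fopen with a bare filename resolves against the process's working directory, not the app bundle where Xcode copies resources. A sketch of resolving the bundled file's absolute path first, using CoreFoundation from plain C (the Objective-C alternative is NSBundle's pathForResource:ofType:):

        #include <CoreFoundation/CoreFoundation.h>
        #include <stdio.h>

        FILE *open_bundled_tga(void)
        {
            char path[1024];
            CFURLRef url = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                                   CFSTR("peacock"), CFSTR("tga"), NULL);
            if (!url) return NULL;  /* not in the bundle: check the target's
                                       "Copy Bundle Resources" build phase */
            FILE *fp = NULL;
            if (CFURLGetFileSystemRepresentation(url, true, (UInt8 *)path, sizeof path))
                fp = fopen(path, "rb");
            CFRelease(url);
            return fp;
        }

    If CFBundleCopyResourceURL returns NULL, the file was added to the project but not to the target's Copy Bundle Resources phase, which is worth checking first.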
  22. Hello everyone! This is my first project for Android. I was very interested in making games, found the LibGDX framework, and there I started. After 6-7 months I finished the first mode for my game. So here we go ^^

    Undercore: a hardcore runner for Android. You have to use skills like jumping and staying on the line, and your goal is to set a high score while dodging obstacles.

      ○ Improve your skills: the way will be rough; will you become a master?
      ○ Contest: your friend hit 40 points? Double the score and make him jealous!
      ○ Collect: buy new color themes to make the gameplay brighter!
      ○ Achieve: beat records, die, earn, collect achievements. No pain, no gain.

    Play Market: https://play.google.com/store/apps/details?id=com.sneakycrago.undercore&hl=en
    YouTube: Gameplay

    I hope you enjoy it, and I'm waiting for feedback. I can't make a clickable button; sorry, just the link: https://goo.gl/dG1dLj
  23. I just found some code which uses libavcodec to decode video and display it on screen:

        Canvas canvas = surfaceHolder.lockCanvas();
        canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
        surfaceHolder.unlockCanvasAndPost(canvas);

    Anyway, it looks like a ton of useless garbage: it first decodes, then draws a bitmap. I would like to somehow transfer the video data to the GPU directly, so I can just draw a video frame on a simple poly (made of 4 verts). However, it may be undoable; does anyone have any more information about it?
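    It is doable: decoders typically output YUV planes, which can be uploaded as single-channel textures and converted to RGB in the fragment shader, skipping the CPU-side bitmap. A sketch of the upload half (C, GL ES 2.0; the Y plane shown, U/V follow the same pattern at half resolution, and the helper name is mine):

        /* Upload one w*h 8-bit plane as a luminance texture. */
        GLuint upload_plane(const unsigned char *plane, int w, int h)
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                         GL_LUMINANCE, GL_UNSIGNED_BYTE, plane);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }

    On Android the true zero-copy route is a SurfaceTexture bound as GL_TEXTURE_EXTERNAL_OES and fed by MediaCodec rather than libavcodec, which keeps decoded frames on the GPU entirely.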
  24. My light is positioned at vec3(0, 0, 2), which is in front of an object at vec3(0, 0, 0). If I don't rotate the object, everything looks fine. The problem occurs when I rotate the object: the object's lit area seems to rotate with the object, instead of the light just shining on the faces looking at it. In fact, here's another strange example, with specular added: the effect is correct in the first rotation, dark in the second, and back to good again in the third. I can't figure out what the problem is with my shader. I even tried calculating the normal matrix in GLSL, in case my implementation was wrong, but I get the same results:

        nrms = mat3(transpose(inverse(model))) * normals;
        // and
        nrms = normalMat * normals;
        // both give the same results

    I really don't think it has to do with the normals; the light calculations visually seem okay, as long as I don't rotate the object. In fact, I can translate and rotate the camera and the lighting is still good; again, as long as I don't rotate the object. By the way, camera rotation is not considered in the calculations, since I pass camera.transform.position as a vec3 to calculate the toCam vector used in my lighting calculations. There's clearly something I'm doing wrong; I'm guessing it has to do with which space I'm calculating in. It's almost as if I'm calculating in the model's local space instead of world space. Hopefully somebody can identify what it is; I'll share the vertex and fragment shaders below (without the specular portion). Thanks a lot!

    Vertex:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        layout (location = 0) in vec3 vertex;
        layout (location = 1) in vec3 normals;
        layout (location = 2) in vec2 uv;
        layout (location = 3) in vec3 colors;

        out vec3 fragPos;
        out vec3 baseColors;
        out vec3 nrms;
        out vec3 camPosition;

        uniform vec3 camera;
        uniform mat3 normalMat;
        uniform mat4 model;
        uniform mat4 projection;
        uniform mat4 view;
        uniform mat4 mvp;

        void main()
        {
            // nrms = mat3(transpose(inverse(model))) * normals;
            baseColors = colors;
            nrms = normalMat * normals;
            fragPos = vec3(model * vec4(vertex, 1.0));
            camPosition = camera;
            gl_Position = mvp * vec4(fragPos, 1.0);
        }

    Fragment:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        #define PI 3.14159265359
        #define TWO_PI 6.28318530718
        #define NUM_LIGHTS 2

        in vec3 fragPos;
        in vec3 baseColors;
        in vec3 nrms;
        in vec3 camPosition;
        out vec4 color;

        struct Light {
            vec3 position;
            vec3 intensities;
            float attenuation;
            float ambient;
        };

        Light light;

        void main()
        {
            light.position.x = 0.0;
            light.position.y = 0.0;
            light.position.z = 2.0;
            light.intensities.r = 1.0;
            light.intensities.g = 1.0;
            light.intensities.b = 1.0;
            light.ambient = 0.005;

            vec4 base = vec4(baseColors, 1.0);
            vec3 normals = normalize(nrms);
            vec3 toLight = normalize(light.position - fragPos);
            vec3 toCamera = normalize(camPosition - fragPos);

            // Ambient
            vec3 ambient = light.ambient * base.rgb * light.intensities;

            // Diffuse
            float diffuseBrightness = max(0.0, dot(normals, toLight));
            vec3 diffuse = diffuseBrightness * base.rgb * light.intensities;

            // Composition
            vec3 linearColor = ambient + (diffuse);
            vec3 gamma = vec3(1.0 / 2.2);
            color = vec4(pow(linearColor, gamma), base.a);
        }
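    Two things stand out, hedged as guesses. First, since the lit area follows the object, the normalMat uniform may be stale or built from the wrong matrix: it has to be recomputed from the current model matrix every time the model rotates, exactly as the commented-out per-vertex line does. A sketch of the CPU side for a rigid transform (C; column-major 4x4 as in OpenGL):

        /* Normal matrix from a column-major 4x4 model matrix. Valid for
           rotation + translation + uniform scale (the shader's normalize()
           absorbs uniform scale); non-uniform scale needs the full
           inverse-transpose. */
        void normal_matrix(const float m[16], float out[9])
        {
            out[0] = m[0]; out[1] = m[1]; out[2] = m[2];
            out[3] = m[4]; out[4] = m[5]; out[5] = m[6];
            out[6] = m[8]; out[7] = m[9]; out[8] = m[10];
        }
        /* upload with glUniformMatrix3fv(normalMatLoc, 1, GL_FALSE, out)
           after every change to the model matrix */

    Second, gl_Position = mvp * vec4(fragPos, 1.0) applies the model matrix twice if mvp already contains it (fragPos is already model-transformed); either multiply mvp by the raw vertex, or build that uniform as projection * view only.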
  25. I'm interested in rendering a grayscale output from a shader and saving it into a texture for later use. I only want a 1-channel, 8-bit texture rather than RGBA, to save memory etc. I can think of a number of possible ways of doing this in OpenGL off the top of my head; I'm just wondering what you think is the best / easiest / most compatible way, before I dive into coding. This has to work on old Android OpenGL ES 2 phones/tablets etc., so nothing too funky:

      • Render to a normal RGBA framebuffer, then use glCopyTexSubImage2D or similar to copy and translate the RGBA into a grayscale texture. This would seem the most obvious, and the docs kind of suggest it might work.
      • Create an 8-bit framebuffer, if this is possible / a good option.
      • Render out RGBA, use glReadPixels, translate to grayscale on the CPU, then re-upload as a fresh texture. Slow and horrible, but this is a preprocess, and it would be a good option if it is more guaranteed to work than the other methods.
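    Of the options above, the first is the one I'd try first on paper: in ES 2.0, glCopyTexImage2D (the non-Sub variant, so the call itself defines the texture's format) accepts GL_LUMINANCE as the internal format when copying from an RGBA framebuffer, taking the red channel, with no CPU round trip. A sketch, assuming the grayscale value was written to the red channel (or to all channels):

        /* After rendering grayscale into the currently bound FBO, copy it
           into a single-channel luminance texture. */
        GLuint gray = 0;
        glGenTextures(1, &gray);
        glBindTexture(GL_TEXTURE_2D, gray);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, width, height, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    One caveat: whether the driver actually stores GL_LUMINANCE as 8 bits per pixel internally is implementation-defined, so the memory saving should be verified on the target devices. A luminance-only framebuffer (option 2) is not color-renderable in core ES 2.0, which rules it out for old hardware.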