Showing results for tags 'OpenGL ES' in content posted in Graphics and GPU Programming.



Found 30 results

  1. Hi, I am trying to measure render time (per render pass, or ideally per draw call) on Android 9 using OpenGL ES 3 together with the EXT_disjoint_timer_query extension. I use queries for the absolute GPU/GL timestamp via glQueryCounter(qid, GL_TIMESTAMP_EXT). According to the spec this returns nanoseconds in GL/GPU "time". To correlate the GPU/GL timeline with the CPU timeline and determine the latency from draw call dispatch until completion, I proceed as follows:

     // initially sync the GL and CPU timelines
     uint64_t cpu_time_base_ns = getCpuTimeInNs();
     uint64_t gl_time_base_ns = 0;
     glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns);

     // for every frame (queries are pooled etc. to avoid stalls)
     glQueryCounter(query_start, GL_TIMESTAMP_EXT);
     bind(fbo);
     glClear(..);
     glDrawBla(...);
     glQueryCounter(query_end, GL_TIMESTAMP_EXT);
     eglSwapBuffers();

     // check if query results are available
     if (queries_available) {
         uint64_t query_start_result_timestamp_ns = result from query_start;
         // project GL times onto the CPU timeline
         uint64_t query_start_in_cpu_time = cpu_time_base_ns + (query_start_result_timestamp_ns - gl_time_base_ns);
     }

     The problem is that the projected GL times, expressed in CPU time, are BEFORE the CPU timestamp at which the draw call was actually issued. Do you have any idea what is wrong with my approach? I fear that glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns) returns a time base which is not suitable for what I am trying to do, but I don't understand why. I need absolute times to get command latency, not just the duration of GL work on the GPU. Thanks a lot for any hints!
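     A small clock-correlation sketch (not the poster's method, just a common technique): bracketing the GL timestamp read with two CPU reads and using their midpoint reduces the skew that comes from sampling the two clocks at different moments. getCpuTimeInNs() is assumed to be the poster's own helper; OpenGL ES 3 headers plus the EXT_disjoint_timer_query enums are assumed to be included.

     #include <cstdint>
     extern uint64_t getCpuTimeInNs();   // poster's helper (assumed)

     // Sample the GL time base while bracketing it with CPU timestamps.
     // The midpoint of the two CPU reads is a better estimate of "when"
     // the GL timestamp was taken than a single CPU read before or after.
     void syncTimelines(uint64_t& cpuBaseNs, uint64_t& glBaseNs) {
         const uint64_t cpuBefore = getCpuTimeInNs();
         GLint64 glNow = 0;
         glGetInteger64v(GL_TIMESTAMP_EXT, &glNow);   // current GL server time
         const uint64_t cpuAfter = getCpuTimeInNs();
         cpuBaseNs = cpuBefore + (cpuAfter - cpuBefore) / 2;
         glBaseNs  = static_cast<uint64_t>(glNow);
         // Note: the two clocks can still drift apart; re-sync periodically.
     }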
  2. I just wrote a shader that samples a texture, but only when a bool flag is set to true, so I can either use texture mapping in the shader or not. Since I use a texture coordinate attribute in the vertex buffer and pass it from the vertex shader to the fragment shader, where I choose whether to use it via a uniform bool, I wonder: will drawing crash? After some time of development I ran into problems with various phones that only seem to accept "correct" shaders, which would mean making two different shaders, one that uses a texture and one that does not. But that is overkill and a waste of time; I could put #ifdefs in the shader source and load two variants, but I could also just switch the uniform bool and be done with it. So are phones/tablets still that strict?
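     A minimal sketch of the pattern being described (assumed names, not the poster's code): a single fragment shader with a uniform bool deciding whether the texture is sampled. Whether every mobile driver handles the unused sampler path gracefully is exactly the compatibility question raised above.

     // Fragment shader with a runtime toggle, embedded as a C string
     // (GLSL ES 1.00 for OpenGL ES 2.0). The bool is set with glUniform1i(loc, 0 or 1).
     const char kToggleFragmentShader[] =
         "precision mediump float;\n"
         "uniform bool u_useTexture;\n"
         "uniform sampler2D u_texture;\n"
         "varying vec2 v_texCoord;\n"
         "varying vec4 v_color;\n"
         "void main() {\n"
         "    if (u_useTexture) {\n"
         "        gl_FragColor = v_color * texture2D(u_texture, v_texCoord);\n"
         "    } else {\n"
         "        gl_FragColor = v_color;\n"
         "    }\n"
         "}\n";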
  3. Hi all, I am trying to implement a textbox with a scroll bar, where you can display as much text as you want and use the scroll bar to see the rest of the content, similar to how a listbox works. I implemented this with 2D SDL by rendering the messages into an extra framebuffer/texture and blitting a portion of it to the main screen depending on the scroll offset. I am now porting my 2D SDL code to straight OpenGL ES 2.0 by creating an extra framebuffer (FBO) and rendering to a texture. My question is: how do I render only a selected portion of that texture in OpenGL ES 2.0 (in other words, how can a bit-blit be implemented in OpenGL ES 2.0)? I was thinking of using the scissor test, but I am not sure that is the right solution. Also, since I am using OpenGL ES 2.0 on mobile, not all desktop OpenGL functionality is available. In summary: how do I do a bit-blit in OpenGL ES 2.0 for textures rendered with an orthographic projection (2D)?
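     A minimal sketch of one common approach (an assumption, not taken from the post): draw a screen-space quad and select the source region purely through its texture coordinates, which play the role of the blit source rectangle.

     // Compute texture coordinates for a source sub-rectangle of an FBO texture.
     // srcX/srcY/srcW/srcH are in pixels, texW/texH is the texture size.
     struct SubRectUV { float u0, v0, u1, v1; };

     SubRectUV subRectUV(int srcX, int srcY, int srcW, int srcH, int texW, int texH) {
         SubRectUV uv;
         uv.u0 = static_cast<float>(srcX) / texW;
         uv.v0 = static_cast<float>(srcY) / texH;
         uv.u1 = static_cast<float>(srcX + srcW) / texW;
         uv.v1 = static_cast<float>(srcY + srcH) / texH;
         return uv;
     }
     // The quad's vertex positions define where on screen the region lands,
     // and these UVs define which part of the texture is shown, so scrolling
     // is just a change of srcY each frame.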
  4. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one has better performance on desktop and/or mobile devices? I have read somewhere that OpenGL performance mostly depends on the code, but some games let you compare performance across OpenGL versions, so I am not sure. Which of the two uses less CPU and GPU, i.e. has better performance? Thanks
  5. Originally I just used GL_LINE_STRIP to render lines, but the line width clearly differs between devices, and I cannot texture them. So I decided to render my lines with triangles instead, so that I can control the width and add textures. I can already convert a line segment from two given points (or two segments from three points) into textured quads. Next I want to handle the joints between these quads: since the app draws via the mobile touchscreen, circular caps/joints are more fitting than pointy ones. I saw some lessons and tutorials that suggest it is as simple as adding a circle at the joint; is it really done that simply? Let me know if you have any tips or further suggestions, or a link to a source/tutorial (OpenGL / OpenGL ES). Much appreciated!
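     A minimal sketch of the "add a circle at the joint" idea mentioned above (assumed names and layout): a fan of triangles centred on the shared endpoint, with a radius of half the line width, visually rounds off the seam between two quads.

     #include <cmath>
     #include <vector>

     // Append a filled circle (as GL_TRIANGLES vertices, x/y pairs) centred on a
     // joint. radius = half the line width; segments controls smoothness.
     void appendJointCircle(std::vector<float>& verts,
                            float cx, float cy, float radius, int segments = 16) {
         const float step = 2.0f * 3.14159265f / segments;
         for (int i = 0; i < segments; ++i) {
             const float a0 = i * step;
             const float a1 = (i + 1) * step;
             verts.push_back(cx);                            // centre
             verts.push_back(cy);
             verts.push_back(cx + radius * std::cos(a0));    // edge point i
             verts.push_back(cy + radius * std::sin(a0));
             verts.push_back(cx + radius * std::cos(a1));    // edge point i+1
             verts.push_back(cy + radius * std::sin(a1));
         }
     }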
  6. Hello, I'm trying to make a bloom effect in my 2D game made with LibGDX. I have a problem on the shader side: in short, the glow effect should extend beyond the bounds of the texture; if I have a 40x40 texture, the shader should cover 60x60, for example. The attached image explains the problem better: as you can see, the bloom effect is cut off where the texture ends. I suppose I should modify the default vertex shader to get what I need, but I'm not very familiar with GLSL programming, so I am asking if someone could help me.

     Vertex:

     #ifdef GL_ES
     precision highp float;
     precision highp int;
     #endif

     attribute vec4 a_position;
     attribute vec4 a_color;
     attribute vec2 a_texCoord0;

     uniform mat4 u_projTrans;

     varying vec4 v_color;
     varying vec2 v_texCoords;

     void main() {
         v_color = a_color;
         v_texCoords = a_texCoord0;
         gl_Position = u_projTrans * a_position;
     }

     Fragment:

     #ifdef GL_ES
     precision highp float;
     precision highp int;
     #endif

     varying vec4 v_color;
     varying vec2 v_texCoords;

     uniform sampler2D u_texture;
     uniform mat4 u_projTrans;
     uniform float u_blurSize;
     uniform float u_intensity;

     void main() {
         vec4 sum = vec4(0);

         // blur in x (horizontal)
         // take nine samples, with the distance blurSize between them
         sum += texture2D(u_texture, vec2(v_texCoords.x - 4.0*u_blurSize, v_texCoords.y)) * 0.05;
         sum += texture2D(u_texture, vec2(v_texCoords.x - 3.0*u_blurSize, v_texCoords.y)) * 0.09;
         sum += texture2D(u_texture, vec2(v_texCoords.x - 2.0*u_blurSize, v_texCoords.y)) * 0.12;
         sum += texture2D(u_texture, vec2(v_texCoords.x - u_blurSize, v_texCoords.y)) * 0.15;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
         sum += texture2D(u_texture, vec2(v_texCoords.x + u_blurSize, v_texCoords.y)) * 0.15;
         sum += texture2D(u_texture, vec2(v_texCoords.x + 2.0*u_blurSize, v_texCoords.y)) * 0.12;
         sum += texture2D(u_texture, vec2(v_texCoords.x + 3.0*u_blurSize, v_texCoords.y)) * 0.09;
         sum += texture2D(u_texture, vec2(v_texCoords.x + 4.0*u_blurSize, v_texCoords.y)) * 0.05;

         // blur in y (vertical)
         // take nine samples, with the distance blurSize between them
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 4.0*u_blurSize)) * 0.05;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 3.0*u_blurSize)) * 0.09;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - 2.0*u_blurSize)) * 0.12;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y - u_blurSize)) * 0.15;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y)) * 0.16;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + u_blurSize)) * 0.15;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 2.0*u_blurSize)) * 0.12;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 3.0*u_blurSize)) * 0.09;
         sum += texture2D(u_texture, vec2(v_texCoords.x, v_texCoords.y + 4.0*u_blurSize)) * 0.05;

         gl_FragColor = sum * u_intensity + texture2D(u_texture, v_texCoords);
     }
  7. Hi, I'm trying to solve a problem where I need to get all the colors used in an image. I can only see one way: loop over the raster data and collect all the bytes, but I'm sure there is a better way to get the colors from an image. I'm thinking about somehow collecting the colors into a result texture... Is this a common situation? Could you help me? I didn't find anything on the internet... Thanks.
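     For reference, a minimal sketch of the straightforward CPU-side loop the post mentions (assuming 8-bit RGBA raster data); anything cleverer, such as accumulating into a texture on the GPU, would replace this.

     #include <cstdint>
     #include <set>
     #include <vector>

     // Collect the set of distinct colors in an 8-bit RGBA image.
     std::set<uint32_t> collectColors(const std::vector<uint8_t>& rgba,
                                      int width, int height) {
         std::set<uint32_t> colors;
         for (int i = 0; i < width * height; ++i) {
             const uint8_t* p = &rgba[i * 4];
             const uint32_t packed = (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
                                     (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
             colors.insert(packed);   // std::set keeps only unique values
         }
         return colors;
     }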
  8. I have a fullscreen quad; I draw it on screen and then, in the fragment shader, decide whether a grid line is visible at each fragment and color it accordingly (2D grid rendering). However, when I zoom out, lines disappear, not to mention that I can't achieve a constant thickness regardless of zoom. (Left: zoomed in; right and below: zoomed out.) Here is the shader:

     precision highp float;

     uniform vec3 translation;
     uniform float grid_size;
     uniform vec3 bg_color;
     uniform vec3 grid_color;
     uniform float lthick;
     uniform float sw;
     uniform float sh;
     uniform float scale;

     varying vec3 vc;

     int modulo(float x, float y) { return int(x - y * float(int(x / y))); }
     float modulof(float x, float y) { return x - y * float(int(x / y)); }

     void main() {
         bool found = false;
         vec2 fragcoord = vc.xy * 0.5 + 0.5;

         // find how many world-space units cover a half-screen
         vec2 sc = (vec2(sw, sh) / 2.0) / scale;
         sc = sc * vc.xy + translation.xy; // world position

         int px = modulo(float(sc.x), grid_size);
         int py = modulo(float(sc.y), grid_size);

         if ((px == 0) || (py == 0)) found = true;

         if (found)
             gl_FragColor = vec4(grid_color, 1.0);
         else
             gl_FragColor = vec4(bg_color, 1.0);
     }

     I know that, when zoomed out, a single fragment may no longer land exactly on a grid line.
  9. I wonder how one could achieve that. Personally, I could pass additional vertex data: the first attribute would be the actual geometric position and the second would be the next vertex in the array. But that is way too overcomplicated; I would have to build two sets of arrays, so I just don't. I can't think of anything else: something that would not force me to pass another attribute to the shaders, and would not force me to change my internal model structure at all. By the way, I am drawing the lines with GL_LINE_LOOP. Any thoughts?
  10. Hello everyone, I'm trying to display a 2D texture to screen but the rendering isn't working correctly. First of all I did follow this tutorial to be able to render a Text to screen (I adapted it to render with OpenGL ES 2.0) : https://learnopengl.com/code_viewer.php?code=in-practice/text_rendering So here is the shader I'm using : const char gVertexShader[] = "#version 320 es\n" "layout (location = 0) in vec4 vertex;\n" "out vec2 TexCoords;\n" "uniform mat4 projection;\n" "void main() {\n" " gl_Position = projection * vec4(vertex.xy, 0.0, 1.0);\n" " TexCoords = vertex.zw;\n" "}\n"; const char gFragmentShader[] = "#version 320 es\n" "precision mediump float;\n" "in vec2 TexCoords;\n" "out vec4 color;\n" "uniform sampler2D text;\n" "uniform vec3 textColor;\n" "void main() {\n" " vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);\n" " color = vec4(textColor, 1.0) * sampled;\n" "}\n"; The render text works very well so I would like to keep those Shaders program to render a texture loaded from PNG. For that I'm using libPNG to load the PNG to a texture, here is my code : GLuint Cluster::loadPngFromPath(const char *file_name, int *width, int *height) { png_byte header[8]; FILE *fp = fopen(file_name, "rb"); if (fp == 0) { return 0; } fread(header, 1, 8, fp); if (png_sig_cmp(header, 0, 8)) { fclose(fp); return 0; } png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL); if (!png_ptr) { fclose(fp); return 0; } png_infop info_ptr = png_create_info_struct(png_ptr); if (!info_ptr) { png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL); fclose(fp); return 0; } png_infop end_info = png_create_info_struct(png_ptr); if (!end_info) { png_destroy_read_struct(&png_ptr, &info_ptr, (png_infopp) NULL); fclose(fp); return 0; } if (setjmp(png_jmpbuf(png_ptr))) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); fclose(fp); return 0; } png_init_io(png_ptr, fp); png_set_sig_bytes(png_ptr, 8); png_read_info(png_ptr, info_ptr); int bit_depth, color_type; png_uint_32 temp_width, temp_height; png_get_IHDR(png_ptr, info_ptr, &temp_width, &temp_height, &bit_depth, &color_type, NULL, NULL, NULL); if (width) { *width = temp_width; } if (height) { *height = temp_height; } png_read_update_info(png_ptr, info_ptr); int rowbytes = png_get_rowbytes(png_ptr, info_ptr); rowbytes += 3 - ((rowbytes-1) % 4); png_byte * image_data; image_data = (png_byte *) malloc(rowbytes * temp_height * sizeof(png_byte)+15); if (image_data == NULL) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); fclose(fp); return 0; } png_bytep * row_pointers = (png_bytep *) malloc(temp_height * sizeof(png_bytep)); if (row_pointers == NULL) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); free(image_data); fclose(fp); return 0; } int i; for (i = 0; i < temp_height; i++) { row_pointers[temp_height - 1 - i] = image_data + i * rowbytes; } png_read_image(png_ptr, row_pointers); GLuint texture; glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexImage2D(GL_TEXTURE_2D, GL_ZERO, GL_RGB, temp_width, temp_height, GL_ZERO, GL_RGB, GL_UNSIGNED_BYTE, image_data); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE); 
png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); free(image_data); free(row_pointers); fclose(fp); return texture; } This code just generates the texture and I store the id on memory And then I want to display my texture on any position (X, Y) of my screen so I did the following (That's works, at least the positioning). //MY TEXTURE IS 32x32 pixels ! void Cluster::printTexture(GLuint idTexture, GLfloat x, GLfloat y) { glActiveTexture(GL_TEXTURE0); glBindVertexArray(VAO); GLfloat vertices[6][4] = { { x, y + 32, 0.0, 0.0 }, { x, y, 0.0, 1.0 }, { x + 32, y, 1.0, 1.0 }, { x, y + 32, 0.0, 0.0 }, { x + 32, y, 1.0, 1.0 }, { x + 32, y + 32, 1.0, 0.0 } }; glBindTexture(GL_TEXTURE_2D, idTexture); glBindBuffer(GL_ARRAY_BUFFER, VBO); glBufferSubData(GL_ARRAY_BUFFER, GL_ZERO, sizeof(vertices), vertices); glBindBuffer(GL_ARRAY_BUFFER, GL_ZERO); glUniform1i(this->mTextShaderHandle, GL_ZERO); glDrawArrays(GL_TRIANGLE_STRIP, GL_ZERO, 6); } My .png is a blue square. The result is that my texture is not loaded correctly. It is not complete and there are many small black spots. I don't know what's going on ? It could be the vertices or the load ? Or maybe I need to add something on the shader. I don't know, I really need help. Thanks !
  11. DelicateTreeFrog

    OpenGL GLSL: 9-slicing

    I have a 9-slice shader working mostly nicely. Here, both sprites are separate images, so the shader code works well:

    varying vec4 color;
    varying vec2 texCoord;

    uniform sampler2D tex;
    uniform vec2 u_dimensions;
    uniform vec2 u_border;

    float map(float value, float originalMin, float originalMax, float newMin, float newMax) {
        return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
    }

    // Helper function, because WET code is bad code
    // Takes in the coordinate on the current axis and the borders
    float processAxis(float coord, float textureBorder, float windowBorder) {
        if (coord < windowBorder)
            return map(coord, 0, windowBorder, 0, textureBorder);
        if (coord < 1 - windowBorder)
            return map(coord, windowBorder, 1 - windowBorder, textureBorder, 1 - textureBorder);
        return map(coord, 1 - windowBorder, 1, 1 - textureBorder, 1);
    }

    void main(void) {
        vec2 newUV = vec2(
            processAxis(texCoord.x, u_border.x, u_dimensions.x),
            processAxis(texCoord.y, u_border.y, u_dimensions.y)
        );
        // Output the color
        gl_FragColor = texture2D(tex, newUV);
    }

    Outside the shader, I upload vec2(slice/box.w, slice/box.h) into the u_dimensions variable, and vec2(slice/clip.w, slice/clip.h) into u_border. In this scenario, box represents the box dimensions, clip represents the dimensions of the 24x24 image to be 9-sliced, and slice is 8 (the size of each slice in pixels). This is great and all, but it becomes awkward if I decide to organize the various 9-slice images into a single sprite sheet. Because OpenGL works with coordinates between 0.0 and 1.0 instead of true pixel coordinates, and processes the full image rather than just the contents of the clipping rectangle, I'm kind of stumped about how to tell the shader to do what I need it to do. Does anyone have advice on how to make it more sprite-sheet-friendly? Thank you!
  12. Not long ago, I create a nice OBJ loader that loads 3D Studio Max files. Only problem is, is that although it works and works great, I wasn't using Vertex Buffers. Now that I applied Vertex Buffers, it seems to only use the first color of the texture and spread it all across the poiygon. I examined my code over and over again, and the Vertex Buffer code is correct. But when I comment out all of my vertex buffer code, it works as intended. I practically given up on fixing it on my own, so hopefully you guys will be able to figure out what is wrong. public static final int BYTES_PER_FLOAT = 4; public static final int POSITION_COMPONENT_COUNT_3D = 4; public static final int COLOR_COMPONENT_COUNT = 4; public static final int TEXTURE_COORDINATES_COMPONENT_COUNT = 2; public static final int NORMAL_COMPONENT_COUNT = 3; public static final int POSITION_COMPONENT_STRIDE_2D = POSITION_COMPONENT_COUNT_2D * BYTES_PER_FLOAT; public static final int POSITION_COMPONENT_STRIDE_3D = POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT; public static final int COLOR_COMPONENT_STRIDE = COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT; public static final int TEXTURE_COORDINATE_COMPONENT_STRIDE = TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT; public static final int NORMAL_COMPONENT_STRIDE = NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT; int loadFile() { ArrayList<Vertex3D> tempVertexArrayList = new ArrayList<Vertex3D>(); ArrayList<TextureCoord2D> tempTextureCoordArrayList = new ArrayList<TextureCoord2D>(); ArrayList<Vector3D> tempNormalArrayList = new ArrayList<Vector3D>(); ArrayList<Face3D> tempFaceArrayList = new ArrayList<Face3D>(); StringBuilder body = new StringBuilder(); try { InputStream inputStream = context.getResources().openRawResource(resourceID); InputStreamReader inputStreamReader = new InputStreamReader(inputStream); BufferedReader bufferedReader = new BufferedReader(inputStreamReader); String nextLine; String subString; String[] stringArray; String[] stringArray2; int[] indexNumberList = new int[3]; int[] textureCoordNumberList = new int[3]; int[] normalNumberList = new int[3]; int i = 0; int j = 0; int k = 0; try { while ((nextLine = bufferedReader.readLine()) != null) { if (nextLine.startsWith("v ")) { subString = nextLine.substring(1).trim(); stringArray = subString.split(" "); try { tempVertexArrayList.add(new Vertex3D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2]), 1f)); } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading vertex list"); return 0; } String x = String.valueOf(tempVertexArrayList.get(i).x); String y = String.valueOf(tempVertexArrayList.get(i).y); String z = String.valueOf(tempVertexArrayList.get(i).z); //Log.d(TAG, "vertex " + String.valueOf(i) + ": " + x + ", " + y + ", " + z); i++; } if (nextLine.startsWith("vn ")) { subString = nextLine.substring(2).trim(); stringArray = subString.split(" "); try { if(reverseNormals){ tempNormalArrayList.add(new Vector3D(-Float.parseFloat(stringArray[0]), -Float.parseFloat(stringArray[1]), -Float.parseFloat(stringArray[2]))); } else{ tempNormalArrayList.add(new Vector3D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1]), Float.parseFloat(stringArray[2]))); } } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading normal list"); return 0; } String nx = String.valueOf(tempNormalArrayList.get(j).x); String ny = String.valueOf(tempNormalArrayList.get(j).y); String nz = String.valueOf(tempNormalArrayList.get(j).z); 
//Log.d(TAG, "normal " + String.valueOf(j) + ": " + nx + ", " + ny + ", " + nz); j++; } if (nextLine.startsWith("vt ")) { subString = nextLine.substring(2).trim(); stringArray = subString.split(" "); try { tempTextureCoordArrayList.add(new TextureCoord2D(Float.parseFloat(stringArray[0]), Float.parseFloat(stringArray[1]))); } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading texture coordinate list"); return 0; } String tu = String.valueOf(tempTextureCoordArrayList.get(k).tu); String tv = String.valueOf(tempTextureCoordArrayList.get(k).tv); //Log.d(TAG, "texture coord " + String.valueOf(k) + ": " + tu + ", " + tv); k++; } if (nextLine.startsWith("f ")) { subString = nextLine.substring(1).trim(); stringArray = subString.split(" "); for (int index = 0; index <= 2; index++) { stringArray2 = stringArray[index].split("/"); try { indexNumberList[index] = Integer.parseInt(stringArray2[0]) - 1; if(indexNumberList[index] < 0){ Log.d(TAG, "Error: indexNumberList[] is less than zero"); return 0; } } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading indexNumberList[]"); return 0; } try{ textureCoordNumberList[index] = Integer.parseInt(stringArray2[1]) - 1; if(textureCoordNumberList[index] < 0){ Log.d(TAG, "Error: textureCoordNumberList[] is less than zero"); return 0; } } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading textureCoordNumberList[]"); return 0; } try{ normalNumberList[index] = Integer.parseInt(stringArray2[2]) - 1; if(normalNumberList[index] < 0){ Log.d(TAG, "Error: normalNumberList[] is less than zero"); return 0; } } catch(NumberFormatException e){ Log.d(TAG, "Error: Invalid number format in loading normalNumberList[]"); return 0; } } tempFaceArrayList.add(new Face3D(indexNumberList[0], textureCoordNumberList[0], normalNumberList[0], indexNumberList[1], textureCoordNumberList[1], normalNumberList[1], indexNumberList[2], textureCoordNumberList[2], normalNumberList[2])); } body.append(nextLine); body.append('\n'); } //Now that everything has successfully loaded, you can now populate the public variables. 
if(tempVertexArrayList != null && tempVertexArrayList.size() != 0) vertexArrayList.addAll(tempVertexArrayList); if(tempTextureCoordArrayList != null && tempTextureCoordArrayList.size() != 0) textureCoordArrayList.addAll(tempTextureCoordArrayList); if(tempNormalArrayList != null && tempNormalArrayList.size() != 0) normalArrayList.addAll(tempNormalArrayList); if(tempFaceArrayList != null && tempFaceArrayList.size() != 0) faceArrayList.addAll(tempFaceArrayList); vertexList = new float[faceArrayList.size() * POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE]; indexList = new short[faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE]; colorList = new float[faceArrayList.size() * COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE]; textureCoordList = new float[faceArrayList.size() * TEXTURE_COORDINATES_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE]; normalList = new float[faceArrayList.size() * NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE]; int nextFace = 0; int step = POSITION_COMPONENT_COUNT_3D * NUMBER_OF_SIDES_PER_FACE; for (int currentVertex = 0; currentVertex < vertexList.length; currentVertex += step){ vertexList[currentVertex + 0] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).x; vertexList[currentVertex + 1] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).y; vertexList[currentVertex + 2] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).z; vertexList[currentVertex + 3] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(0)).w; vertexList[currentVertex + 4] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).x; vertexList[currentVertex + 5] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).y; vertexList[currentVertex + 6] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).z; vertexList[currentVertex + 7] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(1)).w; vertexList[currentVertex + 8] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).x; vertexList[currentVertex + 9] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).y; vertexList[currentVertex + 10] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).z; vertexList[currentVertex + 11] = vertexArrayList.get(faceArrayList.get(nextFace).indexNumberList.get(2)).w; nextFace++; } numberOfVertices = vertexList.length / POSITION_COMPONENT_COUNT_3D; nextFace = 0; for (int currentIndex = 0; currentIndex < indexList.length; currentIndex += NUMBER_OF_SIDES_PER_FACE){ indexList[currentIndex + 0] = faceArrayList.get(nextFace).indexNumberList.get(0).shortValue(); indexList[currentIndex + 1] = faceArrayList.get(nextFace).indexNumberList.get(1).shortValue(); indexList[currentIndex + 2] = faceArrayList.get(nextFace).indexNumberList.get(2).shortValue(); } step = COLOR_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE; for (int currentVertex = 0; currentVertex < colorList.length; currentVertex += step){ colorList[currentVertex + 0] = red; colorList[currentVertex + 1] = green; colorList[currentVertex + 2] = blue; colorList[currentVertex + 3] = alpha; colorList[currentVertex + 4] = red; colorList[currentVertex + 5] = green; colorList[currentVertex + 6] = blue; colorList[currentVertex + 7] = alpha; colorList[currentVertex + 8] = red; colorList[currentVertex + 9] = green; colorList[currentVertex + 10] = blue; colorList[currentVertex + 11] = alpha; } nextFace = 0; step = TEXTURE_COORDINATES_COMPONENT_COUNT * 
NUMBER_OF_SIDES_PER_FACE; for (int currentVertex = 0; currentVertex < textureCoordList.length; currentVertex += step){ textureCoordList[currentVertex + 0] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tu * mult; textureCoordList[currentVertex + 1] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(0)).tv * mult; textureCoordList[currentVertex + 2] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tu * mult; textureCoordList[currentVertex + 3] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(1)).tv * mult; textureCoordList[currentVertex + 4] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tu * mult; textureCoordList[currentVertex + 5] = textureCoordArrayList.get(faceArrayList.get(nextFace).textureCoordNumberList.get(2)).tv * mult; nextFace++; } nextFace = 0; step = NORMAL_COMPONENT_COUNT * NUMBER_OF_SIDES_PER_FACE; for (int currentVertex = 0; currentVertex < normalList.length; currentVertex += step){ normalList[currentVertex + 0] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).x; normalList[currentVertex + 1] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).y; normalList[currentVertex + 2] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(0)).z; normalList[currentVertex + 3] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).x; normalList[currentVertex + 4] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).y; normalList[currentVertex + 5] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(1)).z; normalList[currentVertex + 6] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).x; normalList[currentVertex + 7] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).y; normalList[currentVertex + 8] = normalArrayList.get(faceArrayList.get(nextFace).normalNumberList.get(2)).z; nextFace++; } vertexBuffer = ByteBuffer.allocateDirect(vertexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer(); indexBuffer = ByteBuffer.allocateDirect(indexList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asShortBuffer(); colorBuffer = ByteBuffer.allocateDirect(colorList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer(); textureCoordBuffer = ByteBuffer.allocateDirect(textureCoordList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer(); normalBuffer = ByteBuffer.allocateDirect(normalList.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder()).asFloatBuffer(); vertexBuffer.put(vertexList).position(0); indexBuffer.put(indexList).position(0); colorBuffer.put(colorList).position(0); textureCoordBuffer.put(textureCoordList).position(0); normalBuffer.put(normalList).position(0); createVertexBuffer(); uMVPMatrixHandle = glGetUniformLocation(program, U_MVPMATRIX); uMVMatrixHandle = glGetUniformLocation(program, U_MVMATRIX); uTextureUnitHandle = glGetUniformLocation(program, U_TEXTURE_UNIT); aPositionHandle = glGetAttribLocation(program, A_POSITION); aColorHandle = glGetAttribLocation(program, A_COLOR); aTextureCoordinateHandle = glGetAttribLocation(program, A_TEXTURE_COORDINATES); aNormalHandle = glGetAttribLocation(program, A_NORMAL); glEnableVertexAttribArray(aPositionHandle); glEnableVertexAttribArray(aColorHandle); glEnableVertexAttribArray(aTextureCoordinateHandle); 
glEnableVertexAttribArray(aNormalHandle); glActiveTexture(GL_TEXTURE0); glUniform1i(uTextureUnitHandle, 0); } catch(IOException e){ } } catch (Resources.NotFoundException nfe){ throw new RuntimeException("Resource not found: " + resourceID, nfe); } return 1; } public void draw(){ glEnable(GL_DEPTH_TEST); bindData(); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]); glDrawArrays(GL_TRIANGLES, 0, faceArrayList.size() * NUMBER_OF_SIDES_PER_FACE); glBindBuffer(GL_ARRAY_BUFFER, 0); } public void bindData(){ int offset = 0; glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]); glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false, POSITION_COMPONENT_STRIDE_3D, offset); offset += POSITION_COMPONENT_COUNT_3D; glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false, COLOR_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT); offset += COLOR_COMPONENT_COUNT; glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false, TEXTURE_COORDINATE_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT); offset += TEXTURE_COORDINATES_COMPONENT_COUNT; glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false, NORMAL_COMPONENT_STRIDE, numberOfVertices * offset * BYTES_PER_FLOAT); glBindBuffer(GL_ARRAY_BUFFER, 0); ///////////////////////////////////////////////////// /* vertexBuffer.position(0); glVertexAttribPointer(aPositionHandle, POSITION_COMPONENT_COUNT_3D, GL_FLOAT, false, POSITION_COMPONENT_STRIDE_3D, vertexBuffer); colorBuffer.position(0); glVertexAttribPointer(aColorHandle, COLOR_COMPONENT_COUNT, GL_FLOAT, false, COLOR_COMPONENT_STRIDE, colorBuffer); textureCoordBuffer.position(0); glVertexAttribPointer(aTextureCoordinateHandle, TEXTURE_COORDINATES_COMPONENT_COUNT, GL_FLOAT, false, TEXTURE_COORDINATE_COMPONENT_STRIDE, textureCoordBuffer); normalBuffer.position(0); glVertexAttribPointer(aNormalHandle, NORMAL_COMPONENT_COUNT, GL_FLOAT, false, NORMAL_COMPONENT_STRIDE, normalBuffer); */ } public void createVertexBuffer(){ glGenBuffers(1, vertexBufferObject, 0); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject[0]); glBufferData(GL_ARRAY_BUFFER,numberOfVertices * (POSITION_COMPONENT_COUNT_3D + COLOR_COMPONENT_COUNT + TEXTURE_COORDINATES_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT) * BYTES_PER_FLOAT, null, GL_STATIC_DRAW); int offset = 0; glBufferSubData(GL_ARRAY_BUFFER, offset, numberOfVertices * POSITION_COMPONENT_COUNT_3D * BYTES_PER_FLOAT, vertexBuffer); offset += POSITION_COMPONENT_COUNT_3D; glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * COLOR_COMPONENT_COUNT * BYTES_PER_FLOAT, colorBuffer); offset += COLOR_COMPONENT_COUNT; glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * TEXTURE_COORDINATES_COMPONENT_COUNT * BYTES_PER_FLOAT, textureCoordBuffer); offset += TEXTURE_COORDINATES_COMPONENT_COUNT; glBufferSubData(GL_ARRAY_BUFFER, numberOfVertices * offset * BYTES_PER_FLOAT, numberOfVertices * NORMAL_COMPONENT_COUNT * BYTES_PER_FLOAT, normalBuffer); glBindBuffer(GL_ARRAY_BUFFER, 0); }
  13. Hi all, how do I implement this type of effect, and what is it called? Is this considered volumetric lighting? What are the options for doing it? (a) A billboard? But I want it to keep a 3D feel when the camera rotates. (b) A transparent 3D mesh, which we could also animate? I need your expert advice. Additionally: how do I implement things like a fireball projectile (shot from a monster), with a billboarded texture or a 3D mesh? Note: I'm using OpenGL ES 2.0 on mobile. Thanks!
  14. Hey guys. Wow, it's been a long time since I've been here. Anyway, I'm having trouble with my 2D orthoM matrix setup for phones/tablets. Basically I want my coordinates to start at the top left of the screen. I also want my polygons to remain square regardless of whether the device is in portrait or landscape orientation. At the same time, if I translate a polygon to the middle of the screen, I want it to end up in the middle in both portrait and landscape mode. So far I'm pretty close with this setup:

     private float aspectRatio;

     @Override
     public void onSurfaceChanged(GL10 glUnused, int width, int height) {
         Log.d("Result", "onSurfacedChanged()");
         glViewport(0, 0, width, height);

         if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
             Log.d("Result", "onSurfacedChanged(PORTRAIT)");
             aspectRatio = ((float) height / (float) width);
             orthoM(projectionMatrix, 0, 0f, 1f, aspectRatio, 0f, -1f, 1f);
         } else {
             Log.d("Result", "onSurfacedChanged(LANDSCAPE)");
             aspectRatio = ((float) width / (float) height);
             orthoM(projectionMatrix, 0, 0f, aspectRatio, 1f, 0f, -1f, 1f);
         }
     }

     When I translate the polygon using translateM(), however, it goes to the middle in portrait mode, but in landscape it only moves partially to the right, as though the portrait-mode extent covered only part of the left of the screen. The only way I can get the translation to match is if, in landscape, I move the aspectRatio variable in orthoM() from the right argument to the bottom argument and make right 1f. That works, but then the polygon is stretched. Do I simply multiply the translation values by aspectRatio only when in landscape mode, as below, or is there a better way?

     if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
         Matrix.translateM(modelMatrix, 0, 0.5f, 0.5f * aspectRatio, 0f);
     } else {
         Matrix.translateM(modelMatrix, 0, 0.5f * aspectRatio, 0.5f, 0f);
     }

     Thanks in advance.
  15. Hi, I'm trying to pack 4 color values into a single 32-bit float, but I'm having some issues: the resulting color values are not correct. What could be wrong here? This is the part of the code where I pack the 4 bytes into a single float in Java:

     byte[] colorBytes = new byte[4];
     colorBytes[0] = (byte)(color.x*256);
     colorBytes[1] = (byte)(color.y*256);
     colorBytes[2] = (byte)(color.z*256);
     colorBytes[3] = (byte)(color.w*256);
     vertexManager.appendVertexColorData(ByteBuffer.wrap(colorBytes).order(ByteOrder.LITTLE_ENDIAN).getFloat());

     I also tried this:

     bitSh.x = 1.0f/(256.0f*256.0f*256.0f);
     bitSh.y = 1.0f/(256.0f*256.0f);
     bitSh.z = 1.0f/(256.0f);
     bitSh.w = 1.0f;
     color.x = object.vertexColorData[i*4+0]*r;
     color.y = object.vertexColorData[i*4+1]*g;
     color.z = object.vertexColorData[i*4+2]*b;
     color.w = object.vertexColorData[i*4+3]*a;
     vertexManager.appendVertexColorData(color.dot(bitSh));

     But that didn't work either; it gave me different results, but both are incorrect. This is the vertex shader:

     uniform mat4 MVPMatrix; // model-view-projection matrix

     attribute vec4 position;
     attribute vec2 textureCoords;
     attribute float color;

     varying vec4 outColor;
     varying vec2 outTexCoords;

     const vec4 bitSh = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
     const vec4 bitMsk = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);

     vec4 unpack_float(const float value) {
         vec4 res = fract(value * bitSh);
         res -= res.xxyz * bitMsk;
         return res;
     }

     void main() {
         outTexCoords = textureCoords;
         outColor = unpack_float(color);
         gl_Position = MVPMatrix * position;
     }

     And this is the fragment shader:

     precision lowp float;

     uniform sampler2D texture;

     varying vec4 outColor;
     varying vec2 outTexCoords;
     varying vec3 outNormal;

     void main() {
         gl_FragColor = texture2D(texture, outTexCoords) * outColor;
     }

     Thanks in advance.
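     Not from the post, but a commonly used alternative as a hedged sketch: instead of bit-packing four bytes into a float (where some 32-bit patterns do not survive being treated as floating point), the bytes can be handed to GL as a normalized GL_UNSIGNED_BYTE attribute, so the shader receives a ready-made vec4 in 0..1 and no unpack function is needed. Names, locations and the interleaved layout here are assumptions.

     // Assumed interleaved vertex layout: 3 floats position + 4 bytes RGBA.
     // GLSL side would declare "attribute vec4 color;".
     const GLsizei stride = 3 * sizeof(GLfloat) + 4 * sizeof(GLubyte);
     glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, stride,
                           reinterpret_cast<const void*>(0));
     glVertexAttribPointer(colorLoc, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride,
                           reinterpret_cast<const void*>(3 * sizeof(GLfloat)));
     glEnableVertexAttribArray(positionLoc);
     glEnableVertexAttribArray(colorLoc);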
  16. Hi, I am having a problem where I am drawing 4000 squares on screen, using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I don't believe it is incapable of drawing 4000 textured squares at 60 FPS. I checked DDMS and found that most of the time is spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside of the view frustum, the FPS increases, and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
  17. I need to pass 24 vec3 values to a shader. However, glUniform3fv requires an array of GLfloat. My vec3 structure looks like:

     struct vec3 { float x; float y; float z; };

     Can I just copy that safely into float parray[24 * 3] using memcpy? Don't worry about GLfloat vs float sizes, this is just pseudocode; I only need to know whether the resulting array will have the first vertex at position 0, the second at position 3, the third at 6, and so on, because I'm not sure: even if float and GLfloat match in size, I could get some extra padding bytes somewhere. And another question: how do I then declare the uniform in the shader? uniform vec3 box[24];? Cheers
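     A minimal sketch of the layout being asked about (an illustration under the assumption that a 3-float struct is what the poster has, names are made up): copying the structs into an explicitly packed float array and uploading with glUniform3fv, with the uniform declared as uniform vec3 box[24]; on the GLSL side.

     struct vec3 { float x; float y; float z; };   // 3 floats, typically no padding

     void uploadBox(GLint boxLocation, const vec3 (&corners)[24]) {
         // Copy into an explicitly packed float array so the upload does not
         // depend on any struct-padding assumption.
         GLfloat packed[24 * 3];
         for (int i = 0; i < 24; ++i) {
             packed[i * 3 + 0] = corners[i].x;   // first vec3 at indices 0..2
             packed[i * 3 + 1] = corners[i].y;   // second starts at index 3, etc.
             packed[i * 3 + 2] = corners[i].z;
         }
         glUniform3fv(boxLocation, 24, packed);   // count = 24 vec3s
     }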
  18. I am currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices. One of the biggest problems I'm finding is how the different precision qualifiers in GLSL (lowp, mediump, highp) map to actual precisions in the hardware. To that end I've been using glGetShaderPrecisionFormat to get the log2 precision of each qualifier for vertex and fragment shaders, and outputting this in-game to the screen. On my PC the precision comes back as 23, 23, 23 for all three (low, medium, high), running natively under Linux or in the Android Studio emulator. On my tablet it is also 23, 23, 23. On my phone it comes back as 8, 10, 23. If I get a precision issue on the phone, I can always bump the qualifier up to the next level to cure it. However, the fun comes with my Android TV box (Amlogic S905X), which seems to support only 10, 10, 0 for fragment shaders; that is, it doesn't even support high precision in fragment shaders. Being the only device with this problem, it is incredibly difficult to debug the shaders on it, as I can't attach it via USB (unless I can get it connected via the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, install and run, which is ridiculous. My question is: what method do other people use to debug these precision issues? Is there a way to make the emulator emulate poor precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Other than that, it seems like I need to buy some old phones/tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...
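     For reference, a sketch of the query described above (assumed wrapper code, but using only the standard ES 2.0 entry point): glGetShaderPrecisionFormat reports the log2 range and log2 precision for a given shader stage and precision qualifier, which is where the 8/10/23 numbers in the post come from.

     #include <cstdio>

     // Print the float precision reported for one shader stage.
     // shaderType is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER.
     void printPrecision(GLenum shaderType, const char* label) {
         const GLenum precisions[] = { GL_LOW_FLOAT, GL_MEDIUM_FLOAT, GL_HIGH_FLOAT };
         for (GLenum p : precisions) {
             GLint range[2] = { 0, 0 };   // log2 of min/max representable magnitude
             GLint precision = 0;         // log2 of the relative precision
             glGetShaderPrecisionFormat(shaderType, p, range, &precision);
             std::printf("%s: range [%d, %d], precision %d\n",
                         label, range[0], range[1], precision);
         }
     }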
  19. I'm using Xcode on Mac OS X, and I've added a file called 'peacock.tga' to my project. However, I can't seem to open that file using fopen. Is there anything special I need to do for the file to be readable?
  20. I just found some code which uses libavcodec to decode videos and display them on screen:

     Canvas canvas = surfaceHolder.lockCanvas();
     canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
     surfaceHolder.unlockCanvasAndPost(canvas);

     Anyway, it looks like a lot of wasted work: it first decodes a frame and then draws a bitmap. I would like to somehow transfer the video data to the GPU directly, so I can just draw a video frame on a simple quad (made of 4 vertices). It may not be doable, but does anyone have more information about it?
  21. Hi all, I am trying to build an OpenGL 2D GUI system (yeah, I know I should not be reinventing the wheel, but this is for educational and some other purposes). I have built GUI systems before using 2D APIs such as the HTML/JS canvas, and in a 2D system I can directly match mouse coordinates to the actual graphic coordinates, with additional computation for screen size/ratio/scale of course. Now I want to port it to OpenGL. I know that to render a 2D object in OpenGL we specify coordinates in clip space or use an orthographic projection. Here is what I need help with: 1. What is the right way of rendering the GUI, drawing in clip space or switching to an orthographic projection? 2. Starting from screen coordinates (top left is 0,0 and bottom right is width,height), how can I map the mouse coordinates to OpenGL 2D so that mouse events such as button clicks work, taking the current screen dimensions into account? 3. How do I handle different screen sizes/dimensions? In my previous JavaScript 2D engine using canvas, I just kept my working coordinates, blitted my working canvas to the screen canvas, and scaled the mouse coordinates from there; in OpenGL, how do I work with multiple screen sizes (more of an OpenGL ES question)? Lastly, if you know any books, resources, links or tutorials that discuss this, let me know; I found one on the marekknows OpenGL game engine website but it is not free, and I did not have any luck finding resources on Google for writing your own OpenGL GUI framework. If there aren't any available online, just let me know what I need to look into for OpenGL and I will study it step by step to make it work. Thank you, and looking forward to positive replies.
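     A small sketch of the mapping asked about in point 2 (assumed conventions, not a complete answer): with window coordinates having (0,0) at the top left, converting a mouse position to OpenGL normalized device coordinates is just a scale and a Y flip; with a top-left-origin orthographic projection the raw mouse coordinates can even be compared against widget rectangles directly.

     struct Ndc { float x; float y; };

     // Convert a mouse position in window pixels (origin top-left, +Y down) to
     // normalized device coordinates (origin centre, +Y up), the space clip-space
     // geometry ends up in after the perspective divide.
     Ndc mouseToNdc(float mouseX, float mouseY, float screenWidth, float screenHeight) {
         Ndc ndc;
         ndc.x = 2.0f * mouseX / screenWidth - 1.0f;
         ndc.y = 1.0f - 2.0f * mouseY / screenHeight;   // flip Y
         return ndc;
     }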
  22. I'm interested in rendering a grayscale output from a shader and saving it into a texture for later use. I only want a 1-channel, 8-bit texture rather than RGBA, to save memory etc. I can think of a number of possible ways of doing this in OpenGL off the top of my head; I'm just wondering what you think is the best / easiest / most compatible way before I dive into coding. This has to work on old Android OpenGL ES 2 phones/tablets, so nothing too funky. 1. Is there some way of rendering to a normal RGBA framebuffer, then using glCopyTexSubImage2D or similar to copy and translate the RGBA into a grayscale texture? This would seem the most obvious, and the docs kind of suggest it might work. 2. Creating an 8-bit framebuffer, if this is possible / a good option? 3. Rendering out RGBA, using glReadPixels, translating on the CPU to grayscale, then re-uploading as a fresh texture. Slow and horrible, but this is a preprocess, and it would be a good option if it is more guaranteed to work than the other methods.
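     A minimal sketch of the third option listed above (the CPU route), since it relies only on calls plain ES 2.0 guarantees: read back RGBA, convert to one byte per pixel, and upload the result as a GL_LUMINANCE texture. The luma weights are an assumption; any preferred weighting works.

     #include <cstdint>
     #include <vector>

     // Read the current framebuffer, convert to 8-bit grayscale on the CPU,
     // and upload the result as a single-channel GL_LUMINANCE texture.
     GLuint grayscaleFromFramebuffer(int width, int height) {
         std::vector<uint8_t> rgba(width * height * 4);
         glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

         std::vector<uint8_t> gray(width * height);
         for (int i = 0; i < width * height; ++i) {
             gray[i] = static_cast<uint8_t>(0.299f * rgba[i * 4 + 0] +
                                            0.587f * rgba[i * 4 + 1] +
                                            0.114f * rgba[i * 4 + 2]);
         }

         GLuint tex = 0;
         glGenTextures(1, &tex);
         glBindTexture(GL_TEXTURE_2D, tex);
         glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are not 4-byte aligned
         glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, gray.data());
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
         return tex;
     }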
  23. Hi, I am trying to implement packed (interleaved) VBOs with indexing in OpenGL, but I have run into problems. It worked fine when I had separate buffers for vertex positions, colors and texture coordinates, but when I tried to put everything into a single packed buffer, the output is completely glitched. Here's the code I am using:

     this.vertexData.position(0);
     this.indexData.position(0);

     int stride = (3 + 4 + 2) * 4;

     GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
     GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.capacity()*4, vertexData, GLES20.GL_STATIC_DRAW);

     ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();

     GLES20.glEnableVertexAttribArray(positionAttrID);
     GLES20.glVertexAttribPointer(positionAttrID, dimensions, GLES20.GL_FLOAT, false, stride, 0);

     GLES20.glEnableVertexAttribArray(colorAttrID);
     GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, stride, dimensions * 4);

     GLES20.glEnableVertexAttribArray(texCoordAttrID);
     GLES20.glVertexAttribPointer(texCoordAttrID, 2, GLES20.GL_FLOAT, false, stride, (dimensions + 4) * 4);

     GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
     GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);

     GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

     The data in the vertex buffer is ordered like this: vertex X, vertex Y, vertex Z, color r, color g, color b, color a, tex coord x, tex coord y, and so on (and I am pretty certain that the buffer I'm using really is in this order). This is the version of the code which worked fine:

     this.vertexData.position(0);
     this.vertexColorData.position(0);
     this.vertexTexCoordData.position(0);
     this.indexData.position(0);

     GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
     GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexPositionData.capacity()*4, vertexPositionData, GLES20.GL_STATIC_DRAW);

     ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();

     GLES20.glEnableVertexAttribArray(positionAttrID);
     GLES20.glVertexAttribPointer(positionAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

     GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
     GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexColorData.capacity()*4, vertexColorData, GLES20.GL_STATIC_DRAW);

     GLES20.glEnableVertexAttribArray(colorAttrID);
     GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

     GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[2]);
     GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexTexCoordData.capacity()*4, vertexTexCoordData, GLES20.GL_STATIC_DRAW);

     GLES20.glEnableVertexAttribArray(textCoordAttrID);
     GLES20.glVertexAttribPointer(textCoordAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

     GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
     GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);

     GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

     This is the output of the non-working code (screenshot attached). From the picture I can see that some of the vertex positions are good, but for some reason every renderable object in the game has at least one vertex position with a value of 0. Thanks in advance, Ed
  24. Hi guys, I have been struggling for a number of hours trying to make a directional (per-fragment) lighting shader. I have been following the 'per fragment' section of this tutorial, http://www.learnopengles.com/tag/per-vertex-lighting/, along with tutorials from other sites. This is what I have at this point.

     // Vertex shader
     varying vec3 v_Normal;
     varying vec4 v_Colour;
     varying vec3 v_LightPos;

     uniform vec3 u_LightPos;
     uniform mat4 worldMatrix;
     uniform mat4 viewMatrix;
     uniform mat4 projectionMatrix;

     void main() {
         vec4 object_space_pos = vec4(in_Position, 1.0);

         gl_Position = worldMatrix * vec4(in_Position, 1.0);
         gl_Position = viewMatrix * gl_Position; // WV
         gl_Position = projectionMatrix * gl_Position;

         mat4 WV = worldMatrix * viewMatrix;

         v_Position = vec3(WV * object_space_pos);
         v_Normal = vec3(WV * vec4(in_Normal, 0.0));
         v_Colour = in_Colour;
         v_LightPos = u_LightPos;
     }

     And:

     // Fragment shader
     varying vec3 v_Position;
     varying vec3 v_Normal;
     varying vec4 v_Colour;
     varying vec3 v_LightPos;

     void main() {
         float dist = length(v_LightPos - v_Position);
         vec3 lightVector = normalize(v_LightPos - v_Position);

         float diffuse_light = max(dot(v_Normal, lightVector), 0.1);
         diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));

         gl_FragColor = v_Colour * diffuse_light;
     }

     If I change the last line of the fragment shader to 'gl_FragColor = v_Colour;', the model (a white sphere) renders to the screen in solid white, as expected. But if I leave the shader as it is above, the object is invisible. I suspect it is something to do with this line in the vertex shader, but I am at a loss as to what is wrong:

     v_Position = vec3(WV * object_space_pos);

     If I comment that line out, I get some sort of shading going on which looks like it is trying to light the subject (with the normals calculating etc.). Any help would be hugely appreciated. Thanks in advance.
  25. I have an application that allows drawing via touch, just like Paint, using OpenGL ES (mobile). Currently the line is drawn with the simple/default line style of OpenGL using GL_LINE_STRIP. I want to support different pen styles, as in the attachment, so my question is: is it possible to texture an OpenGL line (GL_LINE_STRIP) so I can achieve the desired effect (see attached)? I know it is possible to texture an OpenGL point via point sprites, but I have not found anything related to texturing an OpenGL line. Is this possible?