Showing results for tags 'OpenGL ES' in content posted in Graphics and GPU Programming.



Found 18 results

  1. Hi all, how do I implement this type of effect, and what is it called? Is it considered volumetric lighting? What are my options for doing it? a. A billboard? But I want the 3D effect preserved, so it still feels 3D when the camera rotates. b. A transparent 3D mesh, which could also be animated? I need your expert advice. Additionally: how do I implement something like a fireball projectile shot from a monster — a billboarded texture or a 3D mesh? Note: I'm using OpenGL ES 2.0 on mobile. Thanks!
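For option (a), a common way to keep the 3D feel is to rebuild the billboard quad each frame so it faces the camera, using the camera's right/up axes taken from the view matrix. A minimal CPU-side sketch — the `Billboard` class and its layout are mine, not from any Android API, and it assumes a column-major view matrix as produced by `android.opengl.Matrix`:

```java
// Sketch: building a camera-facing billboard quad on the CPU.
// Assumes a column-major 4x4 view matrix (m[col*4 + row], OpenGL layout).
// The camera's right/up axes in world space are the first two ROWS of the
// view matrix's upper 3x3.
public class Billboard {
    /** Returns 4 corners (x,y,z each, 12 floats) of a quad of the given
     *  half-size, centered at (cx,cy,cz), oriented to face the camera. */
    public static float[] corners(float[] view, float cx, float cy, float cz,
                                  float halfW, float halfH) {
        // right = first row of the upper 3x3, up = second row
        float rx = view[0], ry = view[4], rz = view[8];
        float ux = view[1], uy = view[5], uz = view[9];
        return new float[] {
            cx - rx*halfW - ux*halfH, cy - ry*halfW - uy*halfH, cz - rz*halfW - uz*halfH,
            cx + rx*halfW - ux*halfH, cy + ry*halfW - uy*halfH, cz + rz*halfW - uz*halfH,
            cx + rx*halfW + ux*halfH, cy + ry*halfW + uy*halfH, cz + rz*halfW + uz*halfH,
            cx - rx*halfW + ux*halfH, cy - ry*halfW + uy*halfH, cz - rz*halfW + uz*halfH,
        };
    }
}
```

For the fireball itself, a billboard like this (possibly several layered ones with animated UVs) is the cheap ES 2.0-friendly route; a textured transparent mesh only pays off when the effect needs real silhouette changes.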
  2. Hey guys. Wow, it's been super long since I've been here. Anyway, I'm having trouble with my 2D orthoM matrix setup for phones/tablets. Basically, I want my coordinates to start at the top left of the screen, I want my polygons to remain square in both portrait and landscape orientation, and if I translate a polygon to the middle of the screen, I want it to land in the middle in either orientation. So far I'm pretty close with this setup:

         private float aspectRatio;

         @Override
         public void onSurfaceChanged(GL10 glUnused, int width, int height) {
             Log.d("Result", "onSurfacedChanged()");
             glViewport(0, 0, width, height);
             if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
                 Log.d("Result", "onSurfacedChanged(PORTRAIT)");
                 aspectRatio = ((float) height / (float) width);
                 orthoM(projectionMatrix, 0, 0f, 1f, aspectRatio, 0f, -1f, 1f);
             } else {
                 Log.d("Result", "onSurfacedChanged(LANDSCAPE)");
                 aspectRatio = ((float) width / (float) height);
                 orthoM(projectionMatrix, 0, 0f, aspectRatio, 1f, 0f, -1f, 1f);
             }
         }

     When I translate the polygon using translateM(), however, it goes to the middle in portrait mode, but in landscape it only moves partially to the right, as though portrait mode occupied only some of the left of the screen. The only way I can get the translation to match is if, in landscape, I move the aspectRatio variable in orthoM() from the right argument to the bottom argument and make right 1f. That works, but then the polygon is stretched. Do I simply multiply the translation values by aspectRatio only in landscape mode, like below, or is there a better way?

         if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
             Matrix.translateM(modelMatrix, 0, 0.5f, 0.5f * aspectRatio, 0f);
         } else {
             Matrix.translateM(modelMatrix, 0, 0.5f * aspectRatio, 0.5f, 0f);
         }

     Thanks in advance.
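A way to sanity-check this: with the orthoM bounds above, the visible region is [0,1] x [0,aspect] in portrait and [0,aspect] x [0,1] in landscape, so "the middle of the screen" really is half of each axis extent — which is exactly what the aspect-scaled translateM does. A small sketch of that mapping (the `OrthoCenter` helper is mine, not from the Android API):

```java
// Sketch: the screen-center coordinate in the ortho space set up by
// orthoM(..., 0, 1, aspect, 0, ...) in portrait and
// orthoM(..., 0, aspect, 1, 0, ...) in landscape.
public class OrthoCenter {
    /** Returns {x, y} of the screen center in ortho-space units. */
    public static float[] center(int width, int height) {
        if (height >= width) {                       // portrait
            float aspect = (float) height / (float) width;
            return new float[] { 0.5f, 0.5f * aspect };
        } else {                                     // landscape
            float aspect = (float) width / (float) height;
            return new float[] { 0.5f * aspect, 0.5f };
        }
    }
}
```

The alternative to per-orientation special cases is to keep one fixed virtual coordinate system and scale all positions (not just translations) by the aspect ratio in one place.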
  3. Hi, I'm trying to pack 4 color values into a single 32-bit float, but the resulting color values are not correct. What could be wrong here? This is the part where I pack the 4 bytes into a single float in Java:

         byte[] colorBytes = new byte[4];
         colorBytes[0] = (byte)(color.x*256);
         colorBytes[1] = (byte)(color.y*256);
         colorBytes[2] = (byte)(color.z*256);
         colorBytes[3] = (byte)(color.w*256);
         vertexManager.appendVertexColorData(
             ByteBuffer.wrap(colorBytes).order(ByteOrder.LITTLE_ENDIAN).getFloat());

     I also tried this:

         bitSh.x = 1.0f/(256.0f*256.0f*256.0f);
         bitSh.y = 1.0f/(256.0f*256.0f);
         bitSh.z = 1.0f/(256.0f);
         bitSh.w = 1.0f;
         color.x = object.vertexColorData[i*4+0]*r;
         color.y = object.vertexColorData[i*4+1]*g;
         color.z = object.vertexColorData[i*4+2]*b;
         color.w = object.vertexColorData[i*4+3]*a;
         vertexManager.appendVertexColorData(color.dot(bitSh));

     But that didn't work either; it gave me different, but still incorrect, results. This is the vertex shader:

         uniform mat4 MVPMatrix; // model-view-projection matrix
         attribute vec4 position;
         attribute vec2 textureCoords;
         attribute float color;
         varying vec4 outColor;
         varying vec2 outTexCoords;

         const vec4 bitSh = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
         const vec4 bitMsk = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);

         vec4 unpack_float(const float value) {
             vec4 res = fract(value * bitSh);
             res -= res.xxyz * bitMsk;
             return res;
         }

         void main() {
             outTexCoords = textureCoords;
             outColor = unpack_float(color);
             gl_Position = MVPMatrix * position;
         }

     And this is the fragment shader:

         precision lowp float;
         uniform sampler2D texture;
         varying vec4 outColor;
         varying vec2 outTexCoords;
         varying vec3 outNormal;

         void main() {
             gl_FragColor = texture2D(texture, outTexCoords) * outColor;
         }

     Thanks in advance.
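One common failure mode when smuggling raw bytes through a float — a likely cause here, though it can't be confirmed from the post alone — is that some bit patterns are NaNs, and NaN payloads are not preserved bit-for-bit through Java, the driver, or the GPU. A small demonstration (class and method names are mine):

```java
// Sketch: why packing raw bytes into a float is fragile. Any pattern whose
// exponent bits are all 1s is an Inf/NaN; NaN payloads may be canonicalized
// anywhere along the pipeline, silently changing the color.
public class PackDemo {
    /** Pack 4 channel values (0..255) into a float's bit pattern,
     *  little-endian style: r is the lowest byte. */
    public static float pack(int r, int g, int b, int a) {
        int bits = (a << 24) | (b << 16) | (g << 8) | r;
        return Float.intBitsToFloat(bits);
    }

    /** True if a round trip through a float preserves the exact bits. */
    public static boolean survivesRoundTrip(int r, int g, int b, int a) {
        int bits = (a << 24) | (b << 16) | (g << 8) | r;
        return Float.floatToIntBits(Float.intBitsToFloat(bits)) == bits;
    }
}
```

On ES 2.0 the usual fix is to skip the float entirely: upload the color as 4 unsigned bytes with glVertexAttribPointer(..., GL_UNSIGNED_BYTE, true, ...) so the GPU normalizes them to a vec4 for free.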
  4. I need to pass 24 vec3s to a shader, but glUniform3fv requires an array of GLfloat. My vec3 structure looks like:

         struct vec3 { float x; float y; float z; };

     Can I just memcpy that safely into float parray[24 * 3]? (Don't worry about GLfloat vs. float sizes — this is pseudocode.) I just need to know whether the sent array will have the first vector at position 0, the second at position 3, the third at 6, and so on; even when float and GLfloat match in size, I'm not sure whether I could get some extra padding bytes anywhere. And the other question is how I then define the uniform in the shader — uniform vec3 box[24];? Cheers
  5. Hi, I have a problem where I am drawing 4000 squares on screen using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I don't believe it's incapable of drawing 4000 textured squares at 60 FPS. I profiled with DDMS and found that most of the time was spent in the put() method of FloatBuffer, but the strange thing is that if I draw the squares outside the view frustum, the FPS increases — and I'm not using frustum culling. If you have any ideas what could be causing this, please share them. Thank you in advance.
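A put() hotspot often comes from allocating a fresh buffer, or calling put() once per float, every frame. A common mitigation — a sketch, not necessarily this poster's fix — is to allocate one direct FloatBuffer up front, fill a plain float[] on the CPU, and copy it over with a single bulk put() per frame (the `VertexScratch` class is mine):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: reuse one direct FloatBuffer and refill it with a single bulk
// put() per frame, instead of reallocating or doing per-float put() calls.
public class VertexScratch {
    private final float[] scratch;      // CPU-side staging array
    private final FloatBuffer buffer;   // reused native-order direct buffer

    public VertexScratch(int floatCount) {
        scratch = new float[floatCount];
        buffer = ByteBuffer.allocateDirect(floatCount * 4)
                           .order(ByteOrder.nativeOrder())
                           .asFloatBuffer();
    }

    public float[] scratch() { return scratch; }

    /** One bulk copy; returns the buffer rewound, ready for glBufferData. */
    public FloatBuffer refill() {
        buffer.clear();
        buffer.put(scratch);
        buffer.position(0);
        return buffer;
    }
}
```

The off-screen speedup also hints that fill rate (overdraw from 4000 alpha-blended quads) may be a second bottleneck, separate from the buffer copies.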
  6. Hi, I am trying to implement packed (interleaved) VBOs with indexing in OpenGL ES, but I have run into problems. It worked fine when I had separate buffers for vertex positions, colors, and texture coordinates, but when I put everything into a single packed buffer, it glitched out completely. Here's the code I am using:

         this.vertexData.position(0);
         this.indexData.position(0);

         int stride = (3 + 4 + 2) * 4;
         GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
         GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.capacity()*4, vertexData, GLES20.GL_STATIC_DRAW);

         ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();
         GLES20.glEnableVertexAttribArray(positionAttrID);
         GLES20.glVertexAttribPointer(positionAttrID, dimensions, GLES20.GL_FLOAT, false, stride, 0);
         GLES20.glEnableVertexAttribArray(colorAttrID);
         GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, stride, dimensions * 4);
         GLES20.glEnableVertexAttribArray(texCoordAttrID);
         GLES20.glVertexAttribPointer(texCoordAttrID, 2, GLES20.GL_FLOAT, false, stride, (dimensions + 4) * 4);

         GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
         GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);
         GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

     The data in the vertex buffer is ordered like this (and I am pretty certain the buffer I'm using is in this order):

         vertex X, vertex Y, vertex Z,
         color r, color g, color b, color a,
         tex coord x, tex coord z, ...

     This is the version of the code which worked fine:

         this.vertexData.position(0);
         this.vertexColorData.position(0);
         this.vertexTexCoordData.position(0);
         this.indexData.position(0);

         GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
         GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexPositionData.capacity()*4, vertexPositionData, GLES20.GL_STATIC_DRAW);
         ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();
         GLES20.glEnableVertexAttribArray(positionAttrID);
         GLES20.glVertexAttribPointer(positionAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

         GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
         GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexColorData.capacity()*4, vertexColorData, GLES20.GL_STATIC_DRAW);
         GLES20.glEnableVertexAttribArray(colorAttrID);
         GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

         GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[2]);
         GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexTexCoordData.capacity()*4, vertexTexCoordData, GLES20.GL_STATIC_DRAW);
         GLES20.glEnableVertexAttribArray(textCoordAttrID);
         GLES20.glVertexAttribPointer(textCoordAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

         GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
         GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);
         GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

     From the attached screenshot of the non-working code I can see that some of the vertex positions are good, but for some reason every renderable object in the game has at least one vertex position of value 0. Thanks in advance, Ed
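The classic cause of this kind of glitch is a mismatch between how the interleaved buffer is written and the stride/offsets the attribute pointers assume (note the broken version reads `dimensions` components per position while the working version read 4). A sketch of building and addressing the packed layout the post describes — class and constant names are mine:

```java
// Sketch: interleaving separate position/color/texcoord arrays into the
// packed layout described above: [x y z | r g b a | u v] per vertex,
// stride = (3 + 4 + 2) * 4 = 36 bytes.
public class Interleave {
    public static final int POS = 3, COL = 4, TEX = 2;
    public static final int STRIDE_FLOATS = POS + COL + TEX;
    public static final int STRIDE_BYTES = STRIDE_FLOATS * 4;   // 36
    public static final int COL_OFFSET_BYTES = POS * 4;         // 12
    public static final int TEX_OFFSET_BYTES = (POS + COL) * 4; // 28

    /** Zip per-attribute arrays into one packed vertex array. */
    public static float[] pack(float[] pos, float[] col, float[] tex) {
        int n = pos.length / POS;
        float[] out = new float[n * STRIDE_FLOATS];
        for (int i = 0; i < n; i++) {
            int o = i * STRIDE_FLOATS;
            System.arraycopy(pos, i * POS, out, o, POS);
            System.arraycopy(col, i * COL, out, o + POS, COL);
            System.arraycopy(tex, i * TEX, out, o + POS + COL, TEX);
        }
        return out;
    }
}
```

If `dimensions` is not 3, the stride constant `(3 + 4 + 2) * 4` and the offsets `dimensions * 4` / `(dimensions + 4) * 4` disagree with each other, which would produce exactly this kind of partial corruption.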
  7. I'm trying to write a leather material shader. I have a normal map, a grayscale bump map, a specular map, a diffuse map, and cube maps. I have done the following:

         #version 100
         precision highp int;
         precision highp float;

         uniform sampler2D diffuseColorMap;
         uniform sampler2D ambientOcclusionMap;
         uniform sampler2D normalMap;
         uniform sampler2D specularMap;
         uniform sampler2D bumpMap;
         uniform samplerCube envMap;

         uniform float reflectionFactor;
         uniform float diffuseFactor;
         uniform float opacity;

         varying vec2 texCoord[2];
         varying vec3 viewWorld;
         varying vec3 eyeVector;
         varying mat3 world2Tangent;
         varying vec3 lightVec;
         varying vec3 halfVec;
         varying vec3 eyeVec;

         void main() {
             vec3 normalTangent = 2.0 * texture2D(normalMap, texCoord[0]).rgb - 1.0;

             vec4 x_forw = texture2D(bumpMap, texCoord[0] + vec2(1.0/2048.0, 0.0));
             vec4 x_back = texture2D(bumpMap, texCoord[0] - vec2(1.0/2048.0, 0.0));
             vec4 y_forw = texture2D(bumpMap, texCoord[0] + vec2(0.0, 1.0/2048.0));
             vec4 y_back = texture2D(bumpMap, texCoord[0] - vec2(0.0, 1.0/2048.0));
             vec3 tangX = vec3(1.0, 0.0, 3.0*(x_forw.x - x_back.x));
             vec3 tangY = vec3(0.0, 1.0, 3.0*(y_forw.x - y_back.x));
             vec3 heightNormal = normalize(cross(tangX, tangY));
             heightNormal = heightNormal*0.5 + 0.5;

             float bumpAngle = max(0.0, dot(vec3(0.0, 0.0, 1.0), heightNormal));
             vec3 normalWorld = normalize(world2Tangent * heightNormal);
             vec3 refDir = viewWorld - 2.0 * dot(viewWorld, normalWorld) * normalWorld;

             // compute diffuse lighting
             vec4 diffuseMaterial = texture2D(diffuseColorMap, texCoord[0]);
             vec4 diffuseLight = vec4(1.0, 1.0, 1.0, 1.0);

             // In Doom 3, the specular value comes from a texture
             vec4 specularMaterial = texture2D(specularMap, texCoord[0]);
             vec4 specularLight = vec4(1.0, 1.0, 1.0, 1.0);
             float shininess = pow(max(dot(halfVec, heightNormal), 0.0), 2.0);

             vec4 reflection = textureCube(envMap, refDir);

             //gl_FragColor = diffuseMaterial * diffuseLight * lamberFactor;
             //gl_FragColor += specularMaterial * specularLight * shininess;
             //gl_FragColor += reflection*0.3;
             gl_FragColor = diffuseMaterial * bumpAngle;
         }

     My question is: how would I apply the grayscale bump map to the reflection result, and what is wrong in my shader?
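One thing worth checking (an observation about the code above, though I can't verify it against the actual assets): the shader remaps the normal to 0..1 with `heightNormal*0.5 + 0.5` before doing the dot products. That encode is only for storing normals in a texture; using the remapped vector in lighting and reflection math skews every result. The height-map normal math itself, kept in -1..1, looks like this on the CPU (the `BumpNormal` class is mine, for illustration):

```java
// Sketch: the central-difference bump normal from the shader, on the CPU,
// WITHOUT the 0..1 remap. 'scale' plays the role of the shader's 3.0.
public class BumpNormal {
    /** hxPlus/hxMinus/hyPlus/hyMinus are height samples one texel away. */
    public static float[] fromHeights(float hxPlus, float hxMinus,
                                      float hyPlus, float hyMinus, float scale) {
        // tangX = (1, 0, s*(h(x+1)-h(x-1))), tangY = (0, 1, s*(h(y+1)-h(y-1)))
        float dx = scale * (hxPlus - hxMinus);
        float dy = scale * (hyPlus - hyMinus);
        // cross(tangX, tangY) = (-dx, -dy, 1), then normalize
        float len = (float) Math.sqrt(dx*dx + dy*dy + 1.0);
        return new float[] { -dx/len, -dy/len, 1.0f/len };
    }
}
```

For modulating the reflection with the bump, a common approach is simply `reflection * bumpAngle` (or a Fresnel-style weight), computed with the un-remapped normal.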
  8. Hi guys, I have been struggling for a number of hours trying to make a directional (per-fragment) lighting shader. I have been following the 'per fragment' section of this tutorial — http://www.learnopengles.com/tag/per-vertex-lighting/ — along with tutorials from other sites. This is what I have at this point:

         // Vertex shader
         varying vec3 v_Normal;
         varying vec4 v_Colour;
         varying vec3 v_LightPos;

         uniform vec3 u_LightPos;
         uniform mat4 worldMatrix;
         uniform mat4 viewMatrix;
         uniform mat4 projectionMatrix;

         void main() {
             vec4 object_space_pos = vec4(in_Position, 1.0);
             gl_Position = worldMatrix * vec4(in_Position, 1.0);
             gl_Position = viewMatrix * gl_Position; // WV
             gl_Position = projectionMatrix * gl_Position;

             mat4 WV = worldMatrix * viewMatrix;
             v_Position = vec3(WV * object_space_pos);
             v_Normal = vec3(WV * vec4(in_Normal, 0.0));
             v_Colour = in_Colour;
             v_LightPos = u_LightPos;
         }

     And:

         // Fragment shader
         varying vec3 v_Position;
         varying vec3 v_Normal;
         varying vec4 v_Colour;
         varying vec3 v_LightPos;

         void main() {
             float dist = length(v_LightPos - v_Position);
             vec3 lightVector = normalize(v_LightPos - v_Position);
             float diffuse_light = max(dot(v_Normal, lightVector), 0.1);
             diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));
             gl_FragColor = v_Colour * diffuse_light;
         }

     If I change the last line of the fragment shader to gl_FragColor = v_Colour; the model (a white sphere) renders solid white, as expected. But if I leave the shader as-is above, the object is invisible. I suspect it is something to do with this line in the vertex shader, but I am at a loss as to what is wrong:

         v_Position = vec3(WV * object_space_pos);

     If I comment that line out, I get some sort of shading that looks like it is trying to light the subject (with the normals calculating etc.). Any help would be hugely appreciated. Thanks in advance.
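Two things stand out in the vertex shader above, though neither can be confirmed without the full source: `v_Position` is written without being declared as a varying in the vertex shader (strict GLSL ES compilers reject that, which would leave the object invisible), and `worldMatrix * viewMatrix` is the reverse of the usual `viewMatrix * worldMatrix` order. The fragment math itself is fine, and can be sanity-checked on the CPU (the `Diffuse` class is mine, mirroring the shader):

```java
// Sketch: the fragment shader's diffuse term as plain math, mirroring
//   diffuse = max(dot(N, L), 0.1) * 1.0 / (1.0 + 0.25 * d * d)
public class Diffuse {
    /** n = surface normal (unit), lightPos and p in the same space. */
    public static float term(float[] n, float[] lightPos, float[] p) {
        float lx = lightPos[0]-p[0], ly = lightPos[1]-p[1], lz = lightPos[2]-p[2];
        float dist = (float) Math.sqrt(lx*lx + ly*ly + lz*lz);
        lx /= dist; ly /= dist; lz /= dist;             // normalize L
        float d = Math.max(n[0]*lx + n[1]*ly + n[2]*lz, 0.1f);
        return d * (1.0f / (1.0f + 0.25f * dist * dist));
    }
}
```

Because of the 0.1 floor, the result is never exactly zero, so a fully black/invisible object points at a compile/link failure rather than at the lighting math.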
  9. I'm interested in rendering grayscale output from a shader and saving it into a texture for later use. I only want a 1-channel, 8-bit texture rather than RGBA, to save memory. I can think of a number of possible ways of doing this in OpenGL off the top of my head; I'm just wondering what you think is the best / easiest / most compatible way before I dive into coding. This has to work on old Android OpenGL ES 2 phones/tablets, so nothing too funky. (1) Is there some way of rendering to a normal RGBA framebuffer, then using glCopyTexSubImage2D or similar to copy and convert the RGBA to a grayscale texture? This seems the most obvious, and the docs kind of suggest it might work. (2) Creating an 8-bit framebuffer, if that is possible / a good option? (3) Rendering out RGBA, using glReadPixels, converting to grayscale on the CPU, then re-uploading as a fresh texture. Slow and horrible, but this is a preprocess, so it would be a good option if it is more guaranteed to work than the other methods.
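For option (3), the CPU conversion is just a weighted sum per pixel. A sketch of the glReadPixels post-processing step (the `ToGray` class is mine; it assumes tightly packed RGBA8 input):

```java
// Sketch of option (3): convert RGBA pixels read back with glReadPixels
// into a single-channel array using Rec. 601 luma weights, ready to
// upload with glTexImage2D(GL_LUMINANCE, ...).
public class ToGray {
    public static byte[] rgbaToGray(byte[] rgba) {
        byte[] gray = new byte[rgba.length / 4];
        for (int i = 0; i < gray.length; i++) {
            int r = rgba[i*4]     & 0xFF;
            int g = rgba[i*4 + 1] & 0xFF;
            int b = rgba[i*4 + 2] & 0xFF;
            // integer form of 0.299*r + 0.587*g + 0.114*b
            gray[i] = (byte) ((299 * r + 587 * g + 114 * b) / 1000);
        }
        return gray;
    }
}
```

On option (1): ES 2.0's glCopyTexImage2D does accept GL_LUMINANCE as the internal format when copying from an RGBA framebuffer, but it takes luminance from the red channel only — so if the shader writes the gray value to all channels (or just red), no CPU round trip is needed.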
  10. I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices. One of the biggest problems I'm finding is how the different precision qualifiers in GLSL (lowp, mediump, highp) map to actual precision in the hardware. To that end I've been using glGetShaderPrecisionFormat to get the log2 precision of each qualifier for vertex and fragment shaders, and displaying it in-game on the screen. On my PC the precision comes back as 23, 23, 23 for all three (low, medium, high), whether running natively under Linux or in the Android Studio emulator. On my tablet it is 23, 23, 23 as well. On my phone it comes back as 8, 10, 23; if I hit a precision issue on the phone I can always bump up to the next qualifier to cure it. However, the fun comes with my Android TV box (Amlogic S905X), which seems to support only 10, 10, 0 for fragment shaders — that is, it doesn't support high precision in fragment shaders at all. Being the only device with this problem, it is incredibly difficult to debug: I can't attach it via USB (unless I can get it connected over the LAN, which I haven't tried yet), so I'm having to compile the APK, put it on a USB stick, take it into the other room, install, and run. Which is ridiculous. My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Other than that, it seems like I need to buy some old phones/tablets off eBay, or 'downgrade' the shaders to mediump and debug them on my phone.
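One partial substitute for a low-precision emulator is to quantize values to an n-bit mantissa in a debug build, which reproduces many banding/precision bugs on a 23-bit device. This is my own workaround sketch, not a feature of the emulator, and it is only an approximation — it ignores the reduced exponent range that real lowp/mediump hardware also has:

```java
// Sketch: emulate an n-bit-mantissa float (e.g. n = 10 for a typical
// mediump) by rounding to the nearest representable value. Intended for
// debug builds on high-precision devices; not exact (exponent range and
// rounding mode of real hardware are not modeled).
public class Quantize {
    public static float toBits(float x, int mantissaBits) {
        if (x == 0f || Float.isNaN(x) || Float.isInfinite(x)) return x;
        int exp = Math.getExponent(x);                          // floor(log2|x|)
        float ulp = (float) Math.scalb(1.0, exp - mantissaBits); // spacing at x
        return Math.round(x / ulp) * ulp;                        // snap to grid
    }
}
```

The same idea can be applied inside a shader (floor(x * k) / k on intermediate values) to see where a computation falls apart as k shrinks.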
  11. Hey, I was wondering whether mobile development (Android mainly, but iOS as well if you know it) has a GPUView equivalent for whole-system debugging, so we can figure out whether the CPU/GPU are being pipelined efficiently, whether there are bubbles, etc. Also, a slightly tangential question: do mobile GPUs have a DMA engine exposed as a dedicated transfer queue for Vulkan?
  12. I have an application that allows drawing through touch, just like a paint app, using OpenGL ES on mobile. Currently the line is drawn with the simple/default line style of OpenGL using GL_LINE_STRIP. I want to have different pen styles, just like in the attached image. So my question: is it possible to texture an OpenGL line (GL_LINE_STRIP) so I can achieve my desired effect (see attached)? I know it's possible to texture an OpenGL point via point sprites, but I have not found anything related to texturing an OpenGL line. Is this possible?
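Lines drawn with GL_LINE_STRIP can't carry a brush texture in ES 2.0, so the usual answer is to extrude each segment of the stroke into a textured quad (two triangles) on the CPU. A minimal sketch of one segment's geometry — the `StrokeQuad` class is mine, and joins/caps between segments are not handled:

```java
// Sketch: extrude a 2D stroke segment into a textured quad. The u texture
// coordinate advances with stroke length so a brush texture repeats along
// the stroke; v runs across its width.
public class StrokeQuad {
    /** Returns {x,y,u,v} for 4 corners of the segment (x0,y0)-(x1,y1),
     *  laid out for a triangle strip. uStart is the accumulated length. */
    public static float[] segment(float x0, float y0, float x1, float y1,
                                  float halfWidth, float uStart) {
        float dx = x1 - x0, dy = y1 - y0;
        float len = (float) Math.sqrt(dx*dx + dy*dy);
        float nx = -dy / len, ny = dx / len;   // unit normal to the segment
        float uEnd = uStart + len;             // repeat texture by distance
        return new float[] {
            x0 + nx*halfWidth, y0 + ny*halfWidth, uStart, 0f,
            x0 - nx*halfWidth, y0 - ny*halfWidth, uStart, 1f,
            x1 + nx*halfWidth, y1 + ny*halfWidth, uEnd,   0f,
            x1 - nx*halfWidth, y1 - ny*halfWidth, uEnd,   1f,
        };
    }
}
```

For soft round brushes, an alternative is to stamp overlapping point sprites along the stroke, which you already know how to texture.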
  13. Hi! I have a drawing problem — the display seems to be corrupted. The program is OpenGL ES 3.0 running on iPhone. I have a file loader, and it loaded data correctly before, but if you look at both attached images, the red, blue, and green cubes aren't displayed correctly (and only them). I haven't changed the source code of my loader, so I don't think the problem comes from it. I do know one thing: the vertices, indices, texture coordinates, and normals are raw pointers, and I think my problem is related to memory/heap corruption, because I didn't free them. The scenes were displayed correctly before, and this problem appeared suddenly, without any change to the source code of the loader or the OpenGL ES class. What do you think about this problem? Can you help me solve it?
  14. Hi all, I am trying to build an OpenGL 2D GUI system (yeah, I know I shouldn't be reinventing the wheel, but this is for educational and some other purposes). I have built GUI systems before using 2D APIs such as the HTML/JS canvas, where I can directly match mouse coordinates to graphic coordinates, with additional computation for screen size/ratio/scale of course. Now I want to port it to OpenGL. I know that to render a 2D object in OpenGL we specify coordinates in clip space or use an orthographic projection. Here's what I need help with: (1) What is the right way of rendering the GUI — drawing in clip space, or switching to an ortho projection? (2) Starting from screen coordinates (top left is 0,0 and bottom right is width,height), how can I map the mouse coordinates into OpenGL 2D space so that mouse events such as button clicks work, taking the current screen dimensions into account? (3) When the screen size/dimensions differ, how should I handle that? In my previous JavaScript 2D engine using canvas, I just had my working coordinates, blitted my working canvas onto the screen canvas, and scaled the mouse coordinates from there; how do I work with multiple screen sizes in OpenGL (more of an OpenGL ES question)? Lastly, if you know any books, resources, links, or tutorials that discuss this, let me know — I found one on the marekknows OpenGL game engine website, but it's not free, and I haven't had any luck on Google finding resources for writing your own OpenGL GUI framework. If there aren't any available online, just tell me what I need to look into for OpenGL and I will study it piece by piece to make it work. Thank you, and looking forward to positive replies.
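For questions (2) and (3) together, the canvas trick carries over directly: pick a fixed "virtual" GUI resolution, build the ortho projection from it, and scale mouse/touch positions from physical pixels into that space. A sketch — the `GuiMap` helper and the 800x480 virtual size in the test are my assumptions, not from the post:

```java
// Sketch: map a mouse/touch position (pixels, top-left origin) into a
// fixed virtual GUI coordinate space (also top-left origin), matching an
// ortho projection of orthoM(m, 0, 0, virtualW, virtualH, 0, -1, 1).
// Hit-testing buttons then works entirely in virtual units, independent
// of the physical screen size.
public class GuiMap {
    public static float[] toVirtual(float mouseX, float mouseY,
                                    int screenW, int screenH,
                                    float virtualW, float virtualH) {
        return new float[] {
            mouseX * virtualW / screenW,
            mouseY * virtualH / screenH
        };
    }
}
```

For question (1), an ortho projection with a top-left origin is the more convenient of the two options, since it makes GUI coordinates match this mapping with no per-widget conversion.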
  15. I've been making pretty good use of texture atlases in my current game, but as I find myself forced to use another texture (to enable generating tex coords for point sprites), I'm wondering whether there may be a better way than my current approach of batching by texture and calling glBindTexture before each batch. Would it instead be more efficient to pre-bind a whole set of textures to all the available texture units, then simply read only from the ones I'm interested in in the fragment shaders (or call glUniform to change which sampler the fragment shader uses)? Is there any performance penalty to having a load of textures bound but not being used? Is this a practical way to work on GLES 2.0, and are there typical figures for how many texture units are available on mobiles? I'm getting the impression the minimum is 8. This may make a difference, as I'm having to do several render passes on some frames.
  16. I'm using Xcode on Mac OS X, and I've added a file called 'peacock.tga' into my project. I can't seem to open that file (using fopen) however. Is there anything special that I need to do in order for the file to be readable?
  17. I just found some code which uses libavcodec to decode videos and display them on screen:

         Canvas canvas = surfaceHolder.lockCanvas();
         canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
         surfaceHolder.unlockCanvasAndPost(canvas);

     This looks like a lot of wasted work: it first decodes on the CPU, then draws a Bitmap. I would like to somehow transfer the video data to the GPU directly, so I can draw each video frame on a simple poly (made of 4 vertices). It may be undoable — does anyone have any more information about it?
  18. Hi all, I have encountered a problem: I draw a large number of triangles using OpenGL ES 2.0, and triangles drawn at the back appear in some frames, disappear in the next, and then show up again. A friend told me it's because ParamBufferSize in powervr.ini is not large enough. I googled "parameter buffer" but still have no idea why this happens. Is this caused by a too-small ParamBufferSize? Should I cut down the size of my VBOs, or the number of draw calls? Thanks.