Showing results for tags 'OpenGL ES'.



Found 60 results

  1. Any sage advice for me? I am trying to get a Phong shader up and working, but I am running into some road blocks in OpenGL ES 2.0. I have used this article as a starting point: https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/lighting.php Do I just need MORE polygons, or am I fundamentally misunderstanding something? Please help.

     Vertex Shader:

         uniform mat4 ProjectionMatrix;
         uniform mat4 ModelViewMatrix;
         uniform mat4 NormalMatrix;

         attribute vec3 Positions;
         attribute vec3 Normals;
         attribute vec3 TextureCoords;

         varying vec2 TextureCoordsOut;
         varying vec3 N;
         varying vec3 v;

         void main(void)
         {
             v = vec3(NormalMatrix * vec4(Positions[0], Positions[1], Positions[2], 1.0));
             N = normalize(vec3(NormalMatrix * vec4(Normals[0], Normals[1], Normals[2], 1.0)));
             gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(Positions[0], Positions[1], Positions[2], 1.0);
             TextureCoordsOut = vec2(TextureCoords[0], TextureCoords[1]);
         }

     Fragment Shader:

         varying mediump vec3 N;
         varying mediump vec3 v;
         uniform lowp vec4 ModulateColor;
         varying lowp vec2 TextureCoordsOut;
         uniform sampler2D Texture;
         //r, g, b, [ambient intensity]
         uniform lowp vec4 Ambient;
         //dirX, dirY, dirZ, [diffuse intensity]
         uniform lowp vec4 Diffuse;
         //Shininess, Specular Intensity
         uniform lowp vec2 Specular;

         void main(void)
         {
             mediump vec3 Direction = vec3(-Diffuse[0], -Diffuse[1], -Diffuse[2]);
             //highp vec3 SpotlightPosition = vec3(Specular[0], Specular[1], Specular[2]);
             //highp vec3 L = normalize(SpotlightPosition - v);
             mediump vec3 L = Direction; //normalize(SpotlightPosition - v);
             mediump vec3 E = normalize(v); // we are in eye coordinates, so EyePos is (0,0,0)
             mediump vec3 R = normalize(-reflect(L, N));

             // calculate diffuse term:
             lowp float Idiff = max(dot(N, L), 0.0) * Diffuse[3];
             Idiff = clamp(Idiff, 0.0, 1.0);

             // calculate specular term:
             lowp float Ispec = pow(max(dot(R, E), 0.0), Specular[0]) * Specular[1];
             Ispec = clamp(Ispec, 0.0, 100.0);

             lowp float Ilight = Ambient[3] + Idiff + Ispec;
             gl_FragColor = vec4(Ilight, Ilight, Ilight, 1.0);
         }
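The diffuse and specular terms in a fragment shader like the one above can be sanity-checked on the CPU. Below is a minimal C++ sketch (not the poster's code; the vector helpers and `phongIntensity` name are made up for illustration) computing the same Lambert and Phong terms:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector helpers mirroring the GLSL built-ins used in the shader.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 normalize(Vec3 a) { return scale(a, 1.0f / std::sqrt(dot(a, a))); }
// GLSL reflect(I, N) = I - 2 * dot(N, I) * N
static Vec3 reflect(Vec3 i, Vec3 n) {
    float d = dot(n, i);
    return {i.x - 2 * d * n.x, i.y - 2 * d * n.y, i.z - 2 * d * n.z};
}

// The same lighting terms the fragment shader computes, for a surface
// normal n, light direction l, and eye vector e (all unit length).
float phongIntensity(Vec3 n, Vec3 l, Vec3 e,
                     float ambient, float diffuseIntensity,
                     float shininess, float specularIntensity) {
    Vec3 r = normalize(scale(reflect(l, n), -1.0f));           // -reflect(L, N)
    float idiff = std::fmax(dot(n, l), 0.0f) * diffuseIntensity;
    float ispec = std::pow(std::fmax(dot(r, e), 0.0f), shininess) * specularIntensity;
    return ambient + idiff + ispec;
}
```

For a light hitting the surface head-on (n = l = e = +Z), the result is ambient + diffuse + specular intensities, which makes it easy to spot sign mistakes in the shader version.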
  2. I'm getting some strangely unexpected results with my new sprite renderer that uses OpenGL ES 2.0. It performs much worse than my old sprite renderer from 5 years ago, which uses OpenGL ES 1.1 (no shaders). All I'm doing is displaying a 16x16 grid of quads and moving and zooming it around a little bit. You can see the difference in the video below: Video to demonstrate the issue. Clearly, the fixed pipeline runs smoothly, but my supposedly fast one-draw-call shader program chugs (when I tried one draw call per quad it was naturally even slower). This is not what I expected. How can I speed up my new sprite renderer? Is the fixed-function pipeline naturally better suited to vertex data that changes often (like a new VBO every frame)? I could just rewrite the new renderer in OpenGL ES 1.1 again, but then I will lose compatibility with desktop OpenGL. This is a bad idea, right? Can I emulate the fixed-function pipeline with shaders? Is there code out there that does this? What tricks does it use to render sprites so fast?
     Old Fixed-Function Code:

         for (int z = 0; z <= mTileEdit.mCurLevel; z++) {
             for (int y = 0; y < tm.mSizeY; y++) {
                 for (int x = 0; x < tm.mSizeX; x++) {
                     int t = tm.get(x, y, z);
                     if (t != 0 && t > 0 && t < 256) {
                         // Set alpha
                         float alpha = 1.0f;
                         if (Lozoware.getMP().get("name").equals("pixeledit")
                                 || Lozoware.getMP().get("name").equals("edit3d")) {
                             alpha = 1.0f - ((float)z / (float)tm.mSizeZ);
                         }

                         // Set color
                         gl.glColor4f(tm.mPalette.mRed[t], tm.mPalette.mGreen[t], tm.mPalette.mBlue[t], alpha);

                         // Vertex buffer
                         bb = ByteBuffer.allocateDirect((6 * 3) * 3 * 4);
                         bb.order(ByteOrder.nativeOrder());
                         FloatBuffer buf = bb.asFloatBuffer();

                         float bottomLeftX = x * mGLTileSizeX;
                         float bottomLeftY = y * mGLTileSizeY;
                         float topLeftX = x * mGLTileSizeX;
                         float topLeftY = y * mGLTileSizeY + mGLTileSizeY;
                         float bottomRightX = x * mGLTileSizeX + mGLTileSizeX;
                         float bottomRightY = y * mGLTileSizeY;
                         float topRightX = x * mGLTileSizeX + mGLTileSizeX;
                         float topRightY = y * mGLTileSizeY + mGLTileSizeY;

                         buf.position(0);
                         buf.put(topLeftX); buf.put(topLeftY); buf.put(0);
                         buf.put(bottomRightX); buf.put(bottomRightY); buf.put(0);
                         buf.put(bottomLeftX); buf.put(bottomLeftY); buf.put(0);
                         buf.put(topLeftX); buf.put(topLeftY); buf.put(0);
                         buf.put(topRightX); buf.put(topRightY); buf.put(0);
                         buf.put(bottomRightX); buf.put(bottomRightY); buf.put(0);
                         buf.position(0);

                         // Draw
                         gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                         gl.glVertexPointer(3, GL10.GL_FLOAT, 0, buf);
                         gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 6 * 3);
                         gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                     }
                 }
             }
         }
         gl.glFlush();

     New OpenGL ES 2.0 Code:

         int numVerts = 0;
         int numQuads = 0;

         // Alloc enough data for all sprites
         for (const auto &pair : objects) {
             Object *obj = pair.second;
             if (obj != nullptr && obj->visible && obj->type == OBJTYPE_SPRITE) {
                 numVerts += 6;
                 numQuads += 1;
             }
         }

         int floatsPerVert = 26;
         float *data = new float[numVerts * floatsPerVert];
         int cursor = 0;

         // Quad/sprite index
         int q = 0;

         // Fill data for all sprites
         for (const auto &pair : objects) {
             Object *obj = pair.second;
             if (obj != nullptr && obj->visible && obj->type == OBJTYPE_SPRITE) {
                 // Add sprite
                 texAtlas.add(obj->textureName);
                 if (texAtlas.getNeedsRefresh())
                     texAtlas.refresh();

                 // Set modelview matrix
                 glm::mat4 mvMatrix;
                 glm::mat4 scaleToNDC;
                 glm::mat4 cameraRotate;
                 glm::mat4 cameraTranslate;
                 glm::mat4 rotate;

         #ifdef PLATFORM_OPENVR
                 scaleToNDC = glm::scale(glm::mat4(), glm::vec3(VRSCALE, VRSCALE, VRSCALE));
         #else
                 scaleToNDC = glm::scale(glm::mat4(), glm::vec3(NDC_SCALE, NDC_SCALE, NDC_SCALE));
         #endif

                 if (obj->alwaysFacePlayer)
                     rotate = glm::rotate(glm::mat4(), glm::radians(-camera->yaw), glm::vec3(0, 1, 0))    // Model yaw
                            * glm::rotate(glm::mat4(), glm::radians(camera->pitch), glm::vec3(1, 0, 0));  // Model pitch
                 else
                     rotate = glm::rotate(glm::mat4(), glm::radians(-obj->yaw), glm::vec3(0, 1, 0))       // Model yaw
                            * glm::rotate(glm::mat4(), glm::radians(-obj->pitch), glm::vec3(1, 0, 0));    // Model pitch

                 cameraRotate = glm::rotate(glm::mat4(), glm::radians(camera->roll), glm::vec3(0, 0, 1))  // Camera roll
                              * glm::rotate(glm::mat4(), -glm::radians(camera->pitch), glm::vec3(1, 0, 0)) // Camera pitch
                              * glm::rotate(glm::mat4(), glm::radians(camera->yaw), glm::vec3(0, 1, 0));  // Camera yaw

                 cameraTranslate = glm::translate(glm::mat4(),
                     glm::vec3(-camera->position.x, -camera->position.y, -camera->position.z));           // Camera translate

         #ifdef PLATFORM_OPENVR
                 mvMatrix = glm::make_mat4((const GLfloat *)g_poseEyeMatrix.get())
                          * scaleToNDC * cameraRotate * cameraTranslate
                          * glm::translate(glm::mat4(), glm::vec3(obj->position.x, obj->position.y, obj->position.z)) // World translate
                          * rotate
                          * glm::scale(glm::mat4(), obj->scale / glm::vec3(2.0, 2.0, 2.0));               // Scale
         #else
                 mvMatrix = scaleToNDC * cameraRotate * cameraTranslate
                          * glm::translate(glm::mat4(), glm::vec3(obj->position.x, obj->position.y, obj->position.z)) // World translate
                          * rotate
                          * glm::scale(glm::mat4(), obj->scale / glm::vec3(2.0, 2.0, 2.0));               // Scale
         #endif

                 //  ______
                 // |\5   4|
                 // |0\    |
                 // |  \   |
                 // |   \  |
                 // |    \3|
                 // |1__2_\|

                 // Triangle 1
                 // Vertex 0
                 data[cursor + 0] = -1.0f; data[cursor + 1] = 1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 UV input; input.u = 0.0f; input.v = 1.0f;
                 UV output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 // Vertex 1
                 data[cursor + 0] = -1.0f; data[cursor + 1] = -1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 input.u = 0.0f; input.v = 0.0f;
                 output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 // Vertex 2
                 data[cursor + 0] = 1.0f;  data[cursor + 1] = -1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 input.u = 1.0f; input.v = 0.0f;
                 output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 // Triangle 2
                 // Vertex 3
                 data[cursor + 0] = 1.0f;  data[cursor + 1] = -1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 input.u = 1.0f; input.v = 0.0f;
                 output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 // Vertex 4
                 data[cursor + 0] = 1.0f;  data[cursor + 1] = 1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 input.u = 1.0f; input.v = 1.0f;
                 output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 // Vertex 5
                 data[cursor + 0] = -1.0f; data[cursor + 1] = 1.0f;
                 data[cursor + 2] = 0.0f;  data[cursor + 3] = 1.0f;
                 input.u = 0.0f; input.v = 1.0f;
                 output = texAtlas.getUV(obj->textureName, input);
                 data[cursor + 4] = output.u; data[cursor + 5] = output.v;
                 data[cursor + 6]  = mvMatrix[0][0]; data[cursor + 7]  = mvMatrix[0][1]; data[cursor + 8]  = mvMatrix[0][2]; data[cursor + 9]  = mvMatrix[0][3];
                 data[cursor + 10] = mvMatrix[1][0]; data[cursor + 11] = mvMatrix[1][1]; data[cursor + 12] = mvMatrix[1][2]; data[cursor + 13] = mvMatrix[1][3];
                 data[cursor + 14] = mvMatrix[2][0]; data[cursor + 15] = mvMatrix[2][1]; data[cursor + 16] = mvMatrix[2][2]; data[cursor + 17] = mvMatrix[2][3];
                 data[cursor + 18] = mvMatrix[3][0]; data[cursor + 19] = mvMatrix[3][1]; data[cursor + 20] = mvMatrix[3][2]; data[cursor + 21] = mvMatrix[3][3];
                 data[cursor + 22] = obj->color.r; data[cursor + 23] = obj->color.g;
                 data[cursor + 24] = obj->color.b; data[cursor + 25] = obj->color.a;
                 cursor += floatsPerVert;

                 q++;
             }
         }

         #if defined PLATFORM_WINDOWS || defined PLATFORM_OSX
         // Generate VAO
         glGenVertexArrays(1, (GLuint *)&vao);
         checkGLError("glGenVertexArrays");
         glBindVertexArray(vao);
         checkGLError("glBindVertexArray");
         #endif

         // Generate VBO
         glGenBuffers(1, (GLuint *)&vbo);
         checkGLError("glGenBuffers");
         glBindBuffer(GL_ARRAY_BUFFER, vbo);
         checkGLError("glBindBuffer");

         // Load data into VBO
         glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 6 * floatsPerVert * q, data, GL_STATIC_DRAW);
         checkGLError("glBufferData");

         // Delete data (array new requires delete[])
         delete[] data;

         // Get aspect
         float width = PLAT_GetWindowWidth();
         float height = PLAT_GetWindowHeight();
         #ifdef PLATFORM_OPENVR
         float aspect = 1.0;
         #else
         float aspect = width / height;
         #endif

         // DRAW
         glEnable(GL_CULL_FACE); checkGLError("glEnable");
         glFrontFace(GL_CCW); checkGLError("glFrontFace");
         glCullFace(GL_BACK); checkGLError("glCullFace");
         glEnable(GL_BLEND); checkGLError("ShapeRenderer glEnable");
         #ifndef PLATFORM_ANDROID
         glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); checkGLError("ShapeRenderer glBlendFunc");
         #endif

         // Add program to OpenGL environment
         int curProgram = -1;
         curProgram = programMain;
         glUseProgram(curProgram);
         checkGLError("SpriteRenderer glUseProgram");

         #if defined PLATFORM_WINDOWS || defined PLATFORM_OSX
         // Bind the VAO
         glBindVertexArray(vao);
         checkGLError("glBindVertexArray");
         #endif

         // Bind the VBO
         glBindBuffer(GL_ARRAY_BUFFER, vbo);
         checkGLError("glBindBuffer");

         // Set the projection matrix
         glm::mat4 projMatrix;
         #if defined PLATFORM_OPENVR
         projMatrix = glm::make_mat4((const GLfloat *)g_projectionMatrix.get());
         #else
         projMatrix = glm::perspective(VIEW_FOV, aspect, 0.001f, 1000.0f);
         #endif
         setMatrix(curProgram, "projectionMatrix", projMatrix);

         setUniform4f(curProgram, "globalColor", globalColor.x, globalColor.y, globalColor.z, globalColor.w);

         int t = texAtlas.getGlTexId();
         glActiveTexture(GL_TEXTURE0);
         checkGLError("glActiveTexture");
         glBindTexture(GL_TEXTURE_2D, t);

         setUniform2f(curProgram, "vTexSpan", 1.0, 1.0);
         setUniform1f(curProgram, "useTexture", 1.0);
         setUniform1f(curProgram, "fadeNear", 600.0 * NDC_SCALE);
         setUniform1f(curProgram, "fadeFar", 900.0 * NDC_SCALE);

         // Set attributes
         setVertexAttrib(curProgram, "vPosition",   4, GL_FLOAT, false, floatsPerVert * sizeof(float), 0);
         setVertexAttrib(curProgram, "vTexCoords",  2, GL_FLOAT, false, floatsPerVert * sizeof(float), 4);
         setVertexAttrib(curProgram, "mvMatrixPt1", 4, GL_FLOAT, false, floatsPerVert * sizeof(float), 6);
         setVertexAttrib(curProgram, "mvMatrixPt2", 4, GL_FLOAT, false, floatsPerVert * sizeof(float), 10);
         setVertexAttrib(curProgram, "mvMatrixPt3", 4, GL_FLOAT, false, floatsPerVert * sizeof(float), 14);
         setVertexAttrib(curProgram, "mvMatrixPt4", 4, GL_FLOAT, false, floatsPerVert * sizeof(float), 18);
         setVertexAttrib(curProgram, "vColor",      4, GL_FLOAT, false, floatsPerVert * sizeof(float), 22);

         // Draw
         glDrawArrays(GL_TRIANGLES, 0, q * 6);
         checkGLError("glDrawArrays");

         #if defined PLATFORM_WINDOWS || defined PLATFORM_OSX
         // Reset
         glBindVertexArray(0);
         glBindTexture(GL_TEXTURE_2D, 0);
         glUseProgram(0);
         #endif

         // Delete VAO and VBO
         glDeleteBuffers(1, (GLuint *)&vbo);
         #if defined PLATFORM_WINDOWS || defined PLATFORM_OSX
         glDeleteVertexArrays(1, (GLuint *)&vao);
         #endif

     Shader Code:

         //
         // VERTEX SHADER ES 2.0
         //
         const char *vertexShaderCodeES20 =
             "attribute vec4 vPosition;"
             "varying lowp vec4 posOut; "
             "attribute vec2 vTexCoords;"
             "varying lowp vec2 vTexCoordsOut; "
             "uniform vec2 vTexSpan;"
             "attribute vec4 vNormal;"
             "varying vec4 vNormalOut;"
             "attribute vec4 vVertexLight; "
             "varying vec4 vVertexLightOut; "
             "uniform mat4 projectionMatrix; "
             "varying lowp float distToCamera; "
             "attribute vec4 mvMatrixPt1; "
             "attribute vec4 mvMatrixPt2; "
             "attribute vec4 mvMatrixPt3; "
             "attribute vec4 mvMatrixPt4; "
             "attribute vec4 vColor; "
             "varying vec4 vColorOut;"
             "attribute mat4 oldmvMatrix; "
             "void main() {"
             "  mat4 mvMatrix; "
             "  mvMatrix[0] = mvMatrixPt1; "
             "  mvMatrix[1] = mvMatrixPt2; "
             "  mvMatrix[2] = mvMatrixPt3; "
             "  mvMatrix[3] = mvMatrixPt4; "
             "  gl_Position = projectionMatrix * mvMatrix * vPosition; "
             "  vTexCoordsOut = vTexCoords * vTexSpan; "
             "  posOut = gl_Position; "
             "  vec4 posBeforeProj = mvMatrix * vPosition;"
             "  distToCamera = -posBeforeProj.z; "
             "  vColorOut = vColor; "
             "}\n";

         //
         // FRAGMENT SHADER ES 2.0
         //
         const char *fragmentShaderCodeES20 =
             "uniform sampler2D uTexture; "
             "uniform lowp vec4 vColor; "
             "uniform lowp vec4 globalColor; "
             "varying lowp vec2 vTexCoordsOut; "
             "varying lowp vec4 posOut; "
             "uniform lowp float useTexture; "
             "uniform lowp float fadeNear; "
             "uniform lowp float fadeFar; "
             "varying lowp float distToCamera; "
             "varying lowp vec4 vColorOut; "
             "void main() {"
             "  lowp vec4 f = texture2D(uTexture, vTexCoordsOut.st); "
             "  if (f.a == 0.0) "
             "    discard; "
             "  lowp float visibility = 1.0; "
             "  lowp float alpha = 1.0; "
             "  if (distToCamera >= fadeFar) discard; "
             "  if (distToCamera >= fadeNear) "
             "    alpha = 1.0 - (distToCamera - fadeNear) * 3.0; "
             "  if (useTexture == 1.0)"
             "  {"
             "    gl_FragColor = texture2D(uTexture, vTexCoordsOut.st) * vColorOut * vec4(visibility, visibility, visibility, alpha) * globalColor; "
             "  }"
             "  else"
             "  {"
             "    gl_FragColor = vColorOut * vec4(visibility, visibility, visibility, alpha) * globalColor; "
             "  }"
             "}\n";

     The rest of the new code is here: TextureAtlas.cpp Renderer.cpp https://github.com/dimitrilozovoy/Voxyc/
  3. https://glslfan.com/?channel=-L_NAXqFFl9RAZ2eBIaH This is a web GLSL shader effect. It may not load in some older browsers, but if your browser supports it, it will compile and display. Full code here:

         #version 300 es
         // - glslfan.com --------------------------------------------------------------
         // Ctrl + s or Command + s: compile shader
         // Ctrl + m or Command + m: toggle visibility for codepane
         // ----------------------------------------------------------------------------
         precision mediump float;
         uniform vec2 resolution;      // resolution (width, height)
         uniform vec2 mouse;           // mouse (0.0 ~ 1.0)
         uniform float time;           // time (1second == 1.0)
         uniform sampler2D backbuffer; // previous scene
         out vec4 fragColor;

         void main() {
             int t = int(time*12.0);
             vec2 uv = (gl_FragCoord.xy*2.0-resolution.xy)/resolution.y;
             ivec2 p = ivec2(pow(abs(uv*32.0),vec2(0.4))*16.0);
             int ptn = int(time*0.1)%3;
             int q = int[](p.x|p.y, p.x&p.y, p.x^p.y)[ptn];
             t = (t&63^t<<q*(t%9)) |(t&63|t<<q*(t&7)) ^ (t*2*(t&63|t<<q*(t&15)));
             float x = float((t>>0&15)+(t>>4&15)+(t>>8&15))*0.1;
             vec3 col = vec3(0.7,0.2,0.1)*fwidth(x);
             fragColor = vec4(col,1);
         }

     My question is about this line:

         int q = int[](p.x|p.y, p.x&p.y, p.x^p.y)[ptn];

     It looks like a dynamic array, but what does the content inside the parentheses mean? Is it an initializer list or something else? The code compiles successfully, so the grammar must be correct, but I have never seen or learned anything like this. Could someone explain why this is correct and what it means? Is there a document or example that teaches this usage? I need more examples.
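For context on the question above: in GLSL ES 3.00, `int[](a, b, c)` is an array constructor. It builds an anonymous temporary array from the listed values, and that temporary can be indexed immediately in the same expression. The closest C++ analogue (a sketch, not GLSL) indexes a temporary `std::array` the same way:

```cpp
#include <array>
#include <cassert>

// GLSL ES 3.00:  int q = int[](p.x|p.y, p.x&p.y, p.x^p.y)[ptn];
// constructs a temporary 3-element array and indexes it with ptn.
// C++ equivalent, indexing a temporary std::array:
int pick(int x, int y, int ptn) {
    return std::array<int, 3>{x | y, x & y, x ^ y}[ptn];
}
```

So the shader line is just a compact way of selecting one of three bitwise combinations based on `ptn`, with no named variable for the array.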
  4. I just found that using stencil in a framebuffer isn't supported by the hardware and is emulated in software mode. Now I wonder if I could somehow do a stencil pass in the fragment shader. I need to combine results whenever a z-test fails or passes, and I don't even know if this can be done: from my understanding, only fragments that pass the depth test reach the fragment shader, right? So maybe using a different glDepthFunc() would do the trick. TRUE or FALSE? Maybe there's a simpler solution?
  5. Hello folks, in my company we use Unity on both Windows desktop machines and Android tablets/phones. We have a requirement to stream raw imagery from these devices over the network during their use (so that we can run algorithms on the image data). On a Windows machine running DirectX 9 this was really easy: I just wrote a C++ native plugin that created an offscreen surface and copied the contents of the backbuffer into it. However, I'm having difficulty working out how to do this for OpenGL ES (phones and tablets). I have two main questions: Does the native C++ plugin API work on Android devices? (Does Unity build the .so files into the .apk?) And how do I scrape the backbuffer in OpenGL ES? I am also curious whether there is a quick and efficient way to do this via script in Unity itself. I managed to get the buffer pixels, but had trouble running a separate network thread and keeping the two in sync, given that everything has to happen on the rendering thread. Any help and advice would be much appreciated. Thanks!
  6. I am successfully drawing single objects with glDrawElements and am now trying to render at least one object with glDrawElementsInstanced instead. However, the object just flashes for a frame (or more, I can't tell) and then stays invisible. The main problem is that I can't see what is happening on the shader side, so I can't debug it. I was hoping some experienced members could take a look before I keep searching by trial and error. All I did was change the vertex shader to this:

         #version 310 es

         uniform mat4 projectionMatrix;
         //uniform mat4 modelMatrix;
         in mat4 modelMatrix;
         uniform mat4 worldMatrix;

         in vec4 in_Position;
         in vec2 in_TextureCoord;
         out vec2 pass_TextureCoord;

         void main(void) {
             gl_Position = projectionMatrix * worldMatrix * modelMatrix * in_Position;
             pass_TextureCoord = in_TextureCoord;
         }

     and instead of using this every frame:

         GLES31.glUniformMatrix4fv(currentShader.getUniLocations()[1], 1, false, matrix.getArray(), 0);

     I am now doing this every frame:

         ByteBuffer buffer = createBuffer(matrix.getArray());
         GLES31.glBindVertexArray(vaoID);
         int id = attriLocations[index];
         if (id != -1) {
             for (int i = 0; i < 4; i++) {
                 GLES31.glEnableVertexAttribArray(id + i);
             }
             GLES31.glBindBuffer(GLES31.GL_ARRAY_BUFFER, vboIDs[index]);
             GLES31.glBufferData(GLES31.GL_ARRAY_BUFFER, buffer.size(), buffer, GLES31.GL_DYNAMIC_DRAW);
             for (int i = 0; i < 4; i++) {
                 GLES31.glVertexAttribPointer(id + i, 4, GLES31.GL_FLOAT, false, 4 * 4 * 4, i * 4 * 4);
             }
             for (int i = 0; i < 4; i++) {
                 GLES31.glVertexAttribDivisor(id + i, 1);
             }
         }
         GLES31.glBindVertexArray(0);
         GLES31.glBindBuffer(GLES31.GL_ARRAY_BUFFER, 0);

     It draws for a very short period (probably one frame) and then nothing at all anymore. The other, still non-instanced, objects keep being drawn correctly, and OpenGL throws no error.
  7. Hi, I am trying to measure render time (per render pass, or ideally per draw call) on Android 9 using OpenGL ES 3 together with the EXT_disjoint_timer_query extension. I use queries for the absolute GPU/GL timestamp via glQueryCounter(qid, GL_TIMESTAMP_EXT). According to the spec this returns nanoseconds in GL/GPU "time". To correlate the GPU/GL timeline with the CPU timeline and determine latency from draw-call dispatch until realisation, I proceed as follows:

         // initially sync gl/cpu timelines
         uint64_t cpu_time_base_ns = getCpuTimeInNs();
         uint64_t gl_time_base_ns = 0;
         glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns);

         // for every frame (queries are pooled etc. to avoid stalls)
         glQueryCounter(query_start, GL_TIMESTAMP_EXT);
         bind(fbo);
         glClear(..);
         glDrawBla(...);
         glQueryCounter(query_end, GL_TIMESTAMP_EXT);
         eglSwapBuffers();

         // check if query results are available
         if (queries_available) {
             uint64_t query_start_result_timestamp_ns = /* result from query_start */;
             // project gl times onto the cpu timeline
             uint64_t query_start_in_cpu_time =
                 cpu_time_base_ns + (query_start_result_timestamp_ns - gl_time_base_ns);
         }

     The problem is that the projected GL times, expressed in CPU time, are BEFORE the CPU timestamp at which the draw call was actually issued. Do you have any idea what's wrong with my approach? I fear glGetInteger64v(GL_TIMESTAMP_EXT, &gl_time_base_ns) returns a time base which is not suitable for what I am trying to do, but I don't understand why. I need absolute times to get command latency, not just the duration of GL work on the GPU. Thanks a lot for any hints!
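The projection step in the post above is plain arithmetic, so it can be isolated and checked on its own. A minimal C++ sketch (not the poster's code; `gpuToCpuNs` is a made-up name) of the same mapping, which also makes the failure mode visible: if the two base timestamps are not sampled at the same instant, every projected time is shifted by exactly that sampling error, which can easily push results before the dispatch time.

```cpp
#include <cassert>
#include <cstdint>

// Projects a GPU timestamp onto the CPU timeline given a pair of base
// timestamps that were (ideally) sampled at the same moment.
// Any skew between the two base samples shifts every projected
// time by the same constant amount.
uint64_t gpuToCpuNs(uint64_t gpuTsNs, uint64_t cpuBaseNs, uint64_t gpuBaseNs) {
    return cpuBaseNs + (gpuTsNs - gpuBaseNs);
}
```

In other words, the formula itself is sound; the accuracy depends entirely on how close together the CPU base and the GL base were captured.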
  8. I just wrote a shader that samples a texture only when a bool flag is set to true, so I can either use texture mapping in the shader or not. I pass a texture-coordinate attribute in the vertex buffer and send it from the vertex shader to the fragment shader, where I choose whether to use it via a uniform bool. So I wonder: will drawing crash? After some time of development I encountered problems with various phones that seem to accept only "correct" shaders, which would mean I need to make two different shaders (one that uses a texture and one that does not). But that's overkill and a waste of time. I could put #ifdefs in the shader and load two different ones, but I would rather just switch the uniform bool and be done with it. So are phones/tablets still that strict?
  9. Hi all, I am trying to implement a textbox with a scroll bar, where you can display as much text as you want and use the scroll bar to see the rest of the image, similar to how a listbox works. I implemented this with 2D SDL by rendering the messages into an extra framebuffer/texture and blitting a portion of it to the main screen depending on the offset. I am porting my 2D SDL code to straight OpenGL ES 2.0 by creating an extra framebuffer (FBO) and rendering to texture. My question is: how do I select a portion of that texture to be rendered in OpenGL ES 2.0 (i.e., how can a bitblt be implemented in OpenGL ES 2.0)? I was thinking of using scissoring, but I'm not sure that's the right solution. Also, since I am using OpenGL ES 2.0 (mobile), not all desktop OpenGL functions are available. In summary: 1. How do I do a bitblt in OpenGL ES 2.0 for textures rendered in an orthographic projection (2D)?
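A bitblt-style copy in GL ES 2.0 usually comes down to drawing a textured quad whose UVs select the source sub-rectangle, rather than scissoring. A C++ sketch (hypothetical helper, not from the post) mapping a pixel rectangle inside a texture to the [0,1] UV range the quad's vertices would use:

```cpp
#include <cassert>

// Maps a pixel rectangle (x, y, w, h) inside a texW x texH texture to
// normalized UV coordinates. Drawing a quad with these UVs displays
// exactly that sub-region, which is the GL ES 2.0 analogue of a bitblt.
struct UVRect { float u0, v0, u1, v1; };

UVRect pixelRectToUV(int x, int y, int w, int h, int texW, int texH) {
    return { (float)x / texW,       (float)y / texH,
             (float)(x + w) / texW, (float)(y + h) / texH };
}
```

Scrolling then becomes a matter of shifting `y` by the scroll offset each frame; the FBO contents never need to be copied.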
  10. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one gives better performance on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends on the code, but for some games we can compare performance across OpenGL versions, so I don't know. Which of the two uses less CPU & GPU / gets better performance? Thanks
  11. Originally I just used GL_LINE_STRIP to render lines, but the line width visibly differs between devices, and I cannot texture the lines! So I decided to use triangles to render my lines, so I can control their width and add some textures. I can already convert a line segment from two given points (or two lines from 3 points) into textured quads. Next I want to do the joints between these quads. Since the app draws using the mobile touchscreen, it is fitting to have circular caps/joints instead of pointy ones. Some lessons and tutorials I saw suggest simply adding a circle at the joint end; is it really that simple? Let me know if you have any tips and further suggestions, or a link to a source/tutorial (OpenGL/OpenGL ES). Much appreciated!
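For what it's worth, a round cap or join really is "just a circle at the joint": a triangle fan centered on the joint point with radius equal to half the line width. A C++ sketch (names made up, not from any specific tutorial) generating the fan vertices:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Builds a triangle-fan disc for a round line cap/join: the fan center is
// the joint point, the perimeter has `segments` + 1 vertices (first and
// last coincide to close the circle). Draw with GL_TRIANGLE_FAN.
struct P2 { float x, y; };

std::vector<P2> roundCapFan(P2 center, float radius, int segments) {
    std::vector<P2> verts;
    verts.push_back(center); // fan center
    for (int i = 0; i <= segments; ++i) {
        float a = 2.0f * 3.14159265358979f * (float)i / (float)segments;
        verts.push_back({center.x + radius * std::cos(a),
                         center.y + radius * std::sin(a)});
    }
    return verts;
}
```

Because the disc overlaps the quad ends, the seam is invisible as long as the line is drawn in a single color or with premultiplied alpha; 12 to 24 segments is usually plenty at touchscreen line widths.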
  12. I'm using Xcode on Mac OS X, and I've added a file called 'peacock.tga' into my project. I can't seem to open that file (using fopen) however. Is there anything special that I need to do in order for the file to be readable?
  13. I just found some code which uses libavcodec to decode videos and display them on screen:

         Canvas canvas = surfaceHolder.lockCanvas();
         canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
         surfaceHolder.unlockCanvasAndPost(canvas);

     Anyway, it looks like a lot of wasted work: it first decodes, then draws a bitmap. I would like to somehow transfer the video data to the GPU directly, so I can just draw a video frame on a simple poly (made of 4 verts). However, it may be undoable. Does anyone have more information about this?
  14. Hello everyone! This is my first project for Android. I was very interested in making games, found the LibGDX framework, and there I started. After 6-7 months I finished the first mode for my game. So here we go ^^ Undercore - a hardcore runner for Android. You have to use skills like jumping and staying on the line, and your goal is to set a high score while dodging obstacles.
     ○ Improve your skills - the way will be rough; will you become a master?
     ○ Contest - your friend hit 40 points? Double his score and make him jealous!
     ○ Collect - buy new color themes that make the gameplay brighter!
     ○ Achieve - beat records, die, earn. Collect achievements. No pain, no gain.
     PlayMarket: https://play.google.com/store/apps/details?id=com.sneakycrago.undercore&hl=en Youtube - Gameplay. I hope you enjoy it, and I look forward to feedback. I can't make a clickable button; sorry, just a link: https://goo.gl/dG1dLj
  15. I wonder how one could achieve that. Personally, I could pass additional vertex data: the first attribute would be the actual geometric position, the second would be the next vertex in the array. But that's way too overcomplicated; I'd have to build two sets of arrays, so I'd rather not. I can't actually think of anything else. I'm looking for something that would not force me to pass another attribute to the shaders, and would not force me to change my internal model structure at all. By the way, I'm drawing the lines with GL_LINE_LOOP. Any thoughts?
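If no attribute-free trick pans out (in ES 3.0 one could, e.g., use gl_VertexID to texelFetch the neighbour from a texture holding the positions), the duplicated-attribute fallback the post dismisses is at least mechanical. A sketch of the only extra code it needs, for a closed 2D loop, with names of my own choosing:

```c
#include <assert.h>

/* For a closed loop of n 2D points, fill `next` so that next[i] is the
   position of the vertex that follows vertex i (wrapping at the end).
   Both arrays hold n*2 floats; `next` is then bound as a second vertex
   attribute alongside the positions. */
static void build_next_attribute(const float *pos, float *next, int n)
{
    for (int i = 0; i < n; ++i) {
        int j = (i + 1) % n;                /* wrap: the loop is closed */
        next[2 * i]     = pos[2 * j];
        next[2 * i + 1] = pos[2 * j + 1];
    }
}
```

The model data itself stays untouched; the second array is derived from it whenever the positions change, so the cost is one extra buffer rather than a restructuring.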
  16. Hello everyone, I'm trying to display a 2D texture on screen, but the rendering isn't working correctly. I followed this tutorial to render text to the screen (I adapted it to render with OpenGL ES 2.0): https://learnopengl.com/code_viewer.php?code=in-practice/text_rendering So here are the shaders I'm using:

const char gVertexShader[] =
    "#version 320 es\n"
    "layout (location = 0) in vec4 vertex;\n"
    "out vec2 TexCoords;\n"
    "uniform mat4 projection;\n"
    "void main() {\n"
    "    gl_Position = projection * vec4(vertex.xy, 0.0, 1.0);\n"
    "    TexCoords = vertex.zw;\n"
    "}\n";

const char gFragmentShader[] =
    "#version 320 es\n"
    "precision mediump float;\n"
    "in vec2 TexCoords;\n"
    "out vec4 color;\n"
    "uniform sampler2D text;\n"
    "uniform vec3 textColor;\n"
    "void main() {\n"
    "    vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);\n"
    "    color = vec4(textColor, 1.0) * sampled;\n"
    "}\n";

Text rendering works very well, so I would like to keep this shader program to render a texture loaded from a PNG.
For that I'm using libPNG to load the PNG into a texture, here is my code:

GLuint Cluster::loadPngFromPath(const char *file_name, int *width, int *height)
{
    png_byte header[8];
    FILE *fp = fopen(file_name, "rb");
    if (fp == 0) {
        return 0;
    }
    fread(header, 1, 8, fp);
    if (png_sig_cmp(header, 0, 8)) {
        fclose(fp);
        return 0;
    }
    png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (!png_ptr) {
        fclose(fp);
        return 0;
    }
    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (!info_ptr) {
        png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
        fclose(fp);
        return 0;
    }
    png_infop end_info = png_create_info_struct(png_ptr);
    if (!end_info) {
        png_destroy_read_struct(&png_ptr, &info_ptr, (png_infopp)NULL);
        fclose(fp);
        return 0;
    }
    if (setjmp(png_jmpbuf(png_ptr))) {
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        fclose(fp);
        return 0;
    }
    png_init_io(png_ptr, fp);
    png_set_sig_bytes(png_ptr, 8);
    png_read_info(png_ptr, info_ptr);
    int bit_depth, color_type;
    png_uint_32 temp_width, temp_height;
    png_get_IHDR(png_ptr, info_ptr, &temp_width, &temp_height, &bit_depth, &color_type, NULL, NULL, NULL);
    if (width) { *width = temp_width; }
    if (height) { *height = temp_height; }
    png_read_update_info(png_ptr, info_ptr);
    int rowbytes = png_get_rowbytes(png_ptr, info_ptr);
    rowbytes += 3 - ((rowbytes - 1) % 4);
    png_byte *image_data;
    image_data = (png_byte *) malloc(rowbytes * temp_height * sizeof(png_byte) + 15);
    if (image_data == NULL) {
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        fclose(fp);
        return 0;
    }
    png_bytep *row_pointers = (png_bytep *) malloc(temp_height * sizeof(png_bytep));
    if (row_pointers == NULL) {
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        free(image_data);
        fclose(fp);
        return 0;
    }
    int i;
    for (i = 0; i < temp_height; i++) {
        row_pointers[temp_height - 1 - i] = image_data + i * rowbytes;
    }
    png_read_image(png_ptr, row_pointers);
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, GL_ZERO, GL_RGB, temp_width, temp_height, GL_ZERO, GL_RGB, GL_UNSIGNED_BYTE, image_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
    free(image_data);
    free(row_pointers);
    fclose(fp);
    return texture;
}

This code just generates the texture, and I store the id in memory. Then I want to display my texture at any position (X, Y) on my screen, so I did the following (that works, at least the positioning):

// MY TEXTURE IS 32x32 pixels!
void Cluster::printTexture(GLuint idTexture, GLfloat x, GLfloat y)
{
    glActiveTexture(GL_TEXTURE0);
    glBindVertexArray(VAO);
    GLfloat vertices[6][4] = {
        { x,      y + 32, 0.0, 0.0 },
        { x,      y,      0.0, 1.0 },
        { x + 32, y,      1.0, 1.0 },
        { x,      y + 32, 0.0, 0.0 },
        { x + 32, y,      1.0, 1.0 },
        { x + 32, y + 32, 1.0, 0.0 }
    };
    glBindTexture(GL_TEXTURE_2D, idTexture);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferSubData(GL_ARRAY_BUFFER, GL_ZERO, sizeof(vertices), vertices);
    glBindBuffer(GL_ARRAY_BUFFER, GL_ZERO);
    glUniform1i(this->mTextShaderHandle, GL_ZERO);
    glDrawArrays(GL_TRIANGLE_STRIP, GL_ZERO, 6);
}

My .png is a blue square. The result is that my texture is not loaded correctly: it is incomplete and there are many small black spots. I don't know what's going on. Is it the vertices or the load? Or maybe I need to add something in the shader? I don't know, I really need help. Thanks!
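One thing worth double-checking in the loader above, offered as a guess rather than a diagnosis: glTexImage2D is called with GL_RGB, but many PNGs (especially ones saved with transparency) decode to 4 channels, and the loader also pads each row to a 4-byte boundary while telling GL (via GL_UNPACK_ALIGNMENT set to 1) that rows are tightly packed. Either mismatch makes GL walk the pixel data at the wrong stride, which produces exactly the kind of skewed, speckled texture described. A tiny sketch of the arithmetic to compare against what png_get_rowbytes reports:

```c
#include <assert.h>

/* Bytes glTexImage2D will read for an 8-bit-per-channel image: each row is
   width*channels bytes, rounded up to the unpack alignment. If this does
   not match the buffer libpng filled, the texture comes out garbled. */
static int gl_upload_size(int width, int height, int channels, int alignment)
{
    int row = width * channels;
    row = (row + alignment - 1) & ~(alignment - 1);
    return row * height;
}
```

So if png_get_channels reports 4, upload with GL_RGBA; and if the rows are padded to 4 bytes, set GL_UNPACK_ALIGNMENT to 4 (or drop the padding). Also, the vertex data is laid out as two independent triangles, as in the tutorial's GL_TRIANGLES draw, but submitted as GL_TRIANGLE_STRIP; it happens to still cover the quad, but aligning the two removes one variable.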
  17. Hi, I am having a problem where I am drawing 4000 squares on screen using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a fairly powerful GPU, I don't think it's incapable of drawing 4000 textured squares at 60 FPS. I checked DDMS and found that most of the time is spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside the view frustum, the FPS increases, and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
  18. radek

    Province Map

    Hi, I would like to create a province map, something like the attached example from Age of Conquest, using LibGDX. After some research I learned it can be done with two images: one with the graphics and a second, invisible one with distinct colors to handle clicks. I have some doubts about this method:

    - Memory: my sample 960x540 map weighs 600 KB, and I would need one ten times bigger. I could cut it into smaller pieces and render those, but I'm afraid that would cause lag when scrolling the map.
    - Highlighting provinces: I managed to implement a simple highlight, limited to one province, with a filter in an OpenGL fragment shader. But what if I want to highlight multiple provinces (e.g. all provinces of some country)? I guess that can also be done in a shader, but it may be much more complicated.
    - I would also like to implement fog of war over the undiscovered provinces. How could one do that?

    I would really appreciate your guidance. Or perhaps I need some other method entirely to create the above map?
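The invisible click-map idea boils down to reading the pixel under the touch from the color-coded image and mapping that color back to a province id. Assuming provinces are simply numbered and the id is packed into the 24-bit colour (my own encoding choice for the sketch), the round trip is:

```c
#include <assert.h>

/* Encode a province id into a 24-bit colour for the invisible click map. */
static void province_to_rgb(int id, unsigned char *r, unsigned char *g,
                            unsigned char *b)
{
    *r = (unsigned char)(id & 0xFF);
    *g = (unsigned char)((id >> 8) & 0xFF);
    *b = (unsigned char)((id >> 16) & 0xFF);
}

/* Decode the colour sampled under a click back into the province id. */
static int rgb_to_province(unsigned char r, unsigned char g, unsigned char b)
{
    return (int)r | ((int)g << 8) | ((int)b << 16);
}
```

The same id scheme helps with the multi-province highlight and fog of war: instead of hard-coding one colour in the fragment shader, sample the click map, decode the id, and look up that id in a small per-province lookup texture holding flags (highlighted, discovered). Updating one texel then restyles a whole province without touching the big map image.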
  19. I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices. One of the biggest problems I'm finding is how the different precision qualifiers in GLSL (lowp, mediump, highp) map to actual precision in the hardware. To that end I've been using glGetShaderPrecisionFormat to get the log2 precision of each qualifier for vertex and fragment shaders, and outputting this in-game to the screen. On my PC, running natively under Linux or in the Android Studio emulator, the precision comes back as 23, 23, 23 for all three (low, medium, high). On my tablet it is 23, 23, 23 as well. On my phone it comes back as 8, 10, 23. If I hit a precision issue on the phone I can always bump the qualifier up to the next level to cure it. The fun, however, comes on my Android TV box (Amlogic S905X), which seems to support only 10, 10, 0 for fragment shaders; that is, it doesn't support high precision in fragment shaders at all. Being the only device with this problem, it is incredibly difficult to debug shaders on, as I can't attach it via USB (unless I can get it connected via the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, install, and run. Which is ridiculous. My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why haven't they implemented this?). Other than that, it seems like I need to buy some old phones/tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...
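On the emulator question: one desktop-side workaround (a sketch of a debugging trick, not a full solution) is to emulate reduced precision yourself by truncating floats to roughly the mantissa width that glGetShaderPrecisionFormat reported, and running CPU-side reference versions of the suspect shader math through it. Precision losses then reproduce on the PC before the APK ever reaches the TV box:

```c
#include <assert.h>
#include <string.h>

/* Emulate a mediump float (>= 10 mantissa bits per the GLSL ES spec) by
   zeroing the low 13 of an IEEE-754 float's 23 mantissa bits. This models
   truncation rather than the hardware's actual rounding, so it is a
   pessimistic approximation, which is what you want for debugging. */
static float to_mediump(float f)
{
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);   /* type-pun via memcpy, no UB */
    bits &= ~0x1FFFu;                 /* drop mantissa bits 0..12 */
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Wrapping every intermediate value of a reference implementation in to_mediump() usually makes the same artifacts appear that the low-precision GPU produces, narrowing down which expression needs restructuring or a highp escape hatch.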
  20. This video gives an overview of the differences an OpenGL ES developer will encounter when starting to develop with the Vulkan API.
  21. Falken42

    Crystal Clash Tutorial UI

    From the album: Crystal Clash

  22. My light is positioned at vec3(0, 0, 2), which is in front of an object at vec3(0, 0, 0). If I don't rotate the object, everything looks fine. The problem occurs when I rotate the object: the object's lit area rotates with the object, instead of the light simply shining on the faces that face it. Here's another strange example, with specular added: the effect is correct for the first rotation, dark for the second, and back to good again for the third. I can't figure out what the problem is with my shader. I even tried calculating the normal matrix in GLSL, in case my implementation was wrong, but I get the same results: nrms = mat3(transpose(inverse(model))) * normals; // and nrms = normalMat * normals; // both give the same results. I really don't think it's the normals; the lighting calculations visually look okay, as long as I don't rotate the object. In fact, I can translate and rotate the camera and the lighting is still good, again, as long as I don't rotate the object. By the way, camera rotation is not considered in the calculations, since I'm passing camera.transform.position as a vec3 to calculate the toCam vector I use in my lighting calculations. There's clearly something I'm doing wrong; I'm guessing it has to do with which space I'm calculating in. It's almost as if I'm calculating in the model's local space instead of world space. Hopefully somebody can identify what it is. I'll share the vertex and fragment shaders below (I didn't include the specular portion). Thanks a lot!
Vertex

#version 300 es
#ifdef GL_ES
precision mediump float;
#endif

layout (location = 0) in vec3 vertex;
layout (location = 1) in vec3 normals;
layout (location = 2) in vec2 uv;
layout (location = 3) in vec3 colors;

out vec3 fragPos;
out vec3 baseColors;
out vec3 nrms;
out vec3 camPosition;

uniform vec3 camera;
uniform mat3 normalMat;
uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 mvp;

void main()
{
    // nrms = mat3(transpose(inverse(model))) * normals;
    baseColors = colors;
    nrms = normalMat * normals;
    fragPos = vec3(model * vec4(vertex, 1.0));
    camPosition = camera;
    gl_Position = mvp * vec4(fragPos, 1.0);
}

Frag

#version 300 es
#ifdef GL_ES
precision mediump float;
#endif

#define PI 3.14159265359
#define TWO_PI 6.28318530718
#define NUM_LIGHTS 2

in vec3 fragPos;
in vec3 baseColors;
in vec3 nrms;
in vec3 camPosition;
out vec4 color;

struct Light {
    vec3 position;
    vec3 intensities;
    float attenuation;
    float ambient;
};

Light light;

void main()
{
    light.position.x = 0.0;
    light.position.y = 0.0;
    light.position.z = 2.0;
    light.intensities.r = 1.0;
    light.intensities.g = 1.0;
    light.intensities.b = 1.0;
    light.ambient = 0.005;

    vec4 base = vec4(baseColors, 1.0);
    vec3 normals = normalize(nrms);
    vec3 toLight = normalize(light.position - fragPos);
    vec3 toCamera = normalize(camPosition - fragPos);

    // Ambient
    vec3 ambient = light.ambient * base.rgb * light.intensities;

    // Diffuse
    float diffuseBrightness = max(0.0, dot(normals, toLight));
    vec3 diffuse = diffuseBrightness * base.rgb * light.intensities;

    // Composition
    vec3 linearColor = ambient + (diffuse);
    vec3 gamma = vec3(1.0 / 2.2);
    color = vec4(pow(linearColor, gamma), base.a);
}
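Given the symptom that the lit region follows the object, one thing worth verifying (a guess, not a confirmed diagnosis) is that normalMat is recomputed from the model matrix every frame, after the rotation is applied, rather than once at startup. A stale normal matrix leaves the normals in the object's original orientation, so the lighting appears glued to the model exactly as described. A CPU-side sketch of the general inverse-transpose, for checking the uploaded uniform against:

```c
#include <assert.h>
#include <math.h>

/* Normal matrix = transpose(inverse(upper-left 3x3 of the model matrix)).
   m and out are row-major 3x3 arrays. Returns 0 if m is singular.
   transpose(inverse(m)) equals the cofactor matrix divided by det(m). */
static int normal_matrix(const float m[9], float out[9])
{
    float c[9] = {
        m[4]*m[8] - m[5]*m[7], m[5]*m[6] - m[3]*m[8], m[3]*m[7] - m[4]*m[6],
        m[2]*m[7] - m[1]*m[8], m[0]*m[8] - m[2]*m[6], m[1]*m[6] - m[0]*m[7],
        m[1]*m[5] - m[2]*m[4], m[2]*m[3] - m[0]*m[5], m[0]*m[4] - m[1]*m[3],
    };
    float det = m[0]*c[0] + m[1]*c[1] + m[2]*c[2];
    if (fabsf(det) < 1e-8f) return 0;
    for (int i = 0; i < 9; ++i) out[i] = c[i] / det;
    return 1;
}
```

For a pure rotation the result equals the rotation itself, so comparing normal_matrix(model) against the normalMat actually sent to the shader for a rotated frame quickly shows whether the uniform is stale.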
  23. I'm interested in rendering a grayscale output from a shader, to save into a texture for later use. I only want a 1-channel, 8-bit texture rather than RGBA, to save memory etc. I can think of a number of possible ways of doing this in OpenGL off the top of my head; I'm just wondering what you think is the best / easiest / most compatible way before I dive into coding. This has to work on old Android OpenGL ES 2 phones/tablets etc., so nothing too funky.

1. Is there some way of rendering to a normal RGBA framebuffer, then using glCopyTexSubImage2D or similar to copy and translate the RGBA to a grayscale texture? This would seem the most obvious, and the docs kind of suggest it might work.
2. Creating an 8-bit framebuffer, if that is possible / a good option?
3. Rendering out RGBA, reading it back with glReadPixels, translating to grayscale on the CPU, then re-uploading as a fresh texture. Slow and horrible, but this is a preprocess, and it would be a good option if it is more guaranteed to work than the other methods.
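Whichever route wins, the CPU fallback in option 3 needs the actual luminance math; a sketch of the readback conversion using Rec. 601 weights (one common choice; a different weighting, or taking a single channel, may suit the use case just as well), assuming a tightly packed RGBA buffer from glReadPixels:

```c
#include <assert.h>

/* Convert a tightly packed RGBA8 buffer (as glReadPixels returns with
   GL_PACK_ALIGNMENT 1) to single-channel 8-bit grayscale. Rec. 601
   weights 0.299 R + 0.587 G + 0.114 B in 8.8 fixed point (77+150+29=256). */
static void rgba_to_gray(const unsigned char *rgba, unsigned char *gray,
                         int pixel_count)
{
    for (int i = 0; i < pixel_count; ++i) {
        const unsigned char *p = rgba + 4 * i;
        gray[i] = (unsigned char)((77 * p[0] + 150 * p[1] + 29 * p[2]) >> 8);
    }
}
```

The resulting buffer can then be uploaded on ES 2 as a GL_LUMINANCE texture, which is the closest thing core ES 2 has to a single-channel 8-bit format.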
  24. I get Shader error in 'Volund/Standard Character (Specular)': invalid subscript 'worldPos' at Assets/Features/Shared/Volund_UnityStandardCore.cginc(252) (on d3d11) Compiling Vertex program with DIRECTIONAL Platform defines: UNITY_NO_DXT5nm UNITY_ENABLE_REFLECTION_BUFFERS UNITY_USE_DITHER_MASK_FOR_ALPHABLENDED_SHADOWS UNITY_PBS_USE_BRDF1 UNITY_SPECCUBE_BOX_PROJECTION UNITY_SPECCUBE_BLENDING UNITY_ENABLE_DETAIL_NORMALMAP SHADER_API_DESKTOP UNITY_COLORSPACE_GAMMA UNITY_LIGHT_PROBE_PROXY_VOLUME Here is my shader code on Volund_UnityStandardCore.cginc // Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld' // Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)' #ifndef UNITY_STANDARD_CORE_INCLUDED #define UNITY_STANDARD_CORE_INCLUDED #include "Volund_UnityStandardInput.cginc" #include "UnityCG.cginc" #include "UnityShaderVariables.cginc" #include "UnityStandardConfig.cginc" #include "UnityPBSLighting.cginc" #include "UnityStandardUtils.cginc" #include "UnityStandardBRDF.cginc" #include "AutoLight.cginc" #if defined(ORTHONORMALIZE_TANGENT_BASE) #undef UNITY_TANGENT_ORTHONORMALIZE #define UNITY_TANGENT_ORTHONORMALIZE 1 #endif //------------------------------------------------------------------------------------- // counterpart for NormalizePerPixelNormal // skips normalization per-vertex and expects normalization to happen per-pixel half3 NormalizePerVertexNormal (half3 n) { #if (SHADER_TARGET < 30) return normalize(n); #else return n; // will normalize per-pixel instead #endif } half3 NormalizePerPixelNormal (half3 n) { #if (SHADER_TARGET < 30) return n; #else return normalize(n); #endif } //------------------------------------------------------------------------------------- UnityLight MainLight (half3 normalWorld) { UnityLight l; #ifdef LIGHTMAP_OFF l.color = _LightColor0.rgb; l.dir = _WorldSpaceLightPos0.xyz; l.ndotl = LambertTerm (normalWorld, l.dir); #else // no light specified by the engine // analytical light might be 
extracted from Lightmap data later on in the shader depending on the Lightmap type l.color = half3(0.f, 0.f, 0.f); l.ndotl = 0.f; l.dir = half3(0.f, 0.f, 0.f); #endif return l; } UnityLight AdditiveLight (half3 normalWorld, half3 lightDir, half atten) { UnityLight l; l.color = _LightColor0.rgb; l.dir = lightDir; #ifndef USING_DIRECTIONAL_LIGHT l.dir = NormalizePerPixelNormal(l.dir); #endif l.ndotl = LambertTerm (normalWorld, l.dir); // shadow the light l.color *= atten; return l; } UnityLight DummyLight (half3 normalWorld) { UnityLight l; l.color = 0; l.dir = half3 (0,1,0); l.ndotl = LambertTerm (normalWorld, l.dir); return l; } UnityIndirect ZeroIndirect () { UnityIndirect ind; ind.diffuse = 0; ind.specular = 0; return ind; } //------------------------------------------------------------------------------------- // Common fragment setup half3 WorldNormal(half4 tan2world[3]) { return normalize(tan2world[2].xyz); } #ifdef _TANGENT_TO_WORLD half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3]) { half3 t = tan2world[0].xyz; half3 b = tan2world[1].xyz; half3 n = tan2world[2].xyz; #if UNITY_TANGENT_ORTHONORMALIZE n = NormalizePerPixelNormal(n); // ortho-normalize Tangent t = normalize (t - n * dot(t, n)); // recalculate Binormal half3 newB = cross(n, t); b = newB * sign (dot (newB, b)); #endif return half3x3(t, b, n); } #else half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3]) { return half3x3(0,0,0,0,0,0,0,0,0); } #endif #ifdef _PARALLAXMAP #define IN_VIEWDIR4PARALLAX(i) NormalizePerPixelNormal(half3(i.tangentToWorldAndParallax[0].w,i.tangentToWorldAndParallax[1].w,i.tangentToWorldAndParallax[2].w)) #define IN_VIEWDIR4PARALLAX_FWDADD(i) NormalizePerPixelNormal(i.viewDirForParallax.xyz) #else #define IN_VIEWDIR4PARALLAX(i) half3(0,0,0) #define IN_VIEWDIR4PARALLAX_FWDADD(i) half3(0,0,0) #endif #if UNITY_SPECCUBE_BOX_PROJECTION #define IN_WORLDPOS(i) i.posWorld #else #define IN_WORLDPOS(i) half3(0,0,0) #endif #define IN_LIGHTDIR_FWDADD(i) 
half3(i.tangentToWorldAndLightDir[0].w, i.tangentToWorldAndLightDir[1].w, i.tangentToWorldAndLightDir[2].w) #define FRAGMENT_SETUP(x) FragmentCommonData x = \ FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndParallax), IN_VIEWDIR4PARALLAX(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndParallax), IN_WORLDPOS(i), i.pos.xy); #define FRAGMENT_SETUP_FWDADD(x) FragmentCommonData x = \ FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndLightDir), IN_VIEWDIR4PARALLAX_FWDADD(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndLightDir), half3(0,0,0), i.pos.xy); struct FragmentCommonData { half3 diffColor, specColor; // Note: oneMinusRoughness & oneMinusReflectivity for optimization purposes, mostly for DX9 SM2.0 level. // Most of the math is being done on these (1-x) values, and that saves a few precious ALU slots. half oneMinusReflectivity, oneMinusRoughness; half3 normalWorld, eyeVec, posWorld; half alpha; }; #ifndef UNITY_SETUP_BRDF_INPUT #define UNITY_SETUP_BRDF_INPUT SpecularSetup #endif inline FragmentCommonData SpecularSetup (float4 i_tex) { half4 specGloss = SpecularGloss(i_tex.xy); half3 specColor = specGloss.rgb; half oneMinusRoughness = specGloss.a; #ifdef SMOOTHNESS_IN_ALBEDO half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness); #else half3 albedo = Albedo(i_tex); #endif half oneMinusReflectivity; half3 diffColor = EnergyConservationBetweenDiffuseAndSpecular (albedo, specColor, /*out*/ oneMinusReflectivity); FragmentCommonData o = (FragmentCommonData)0; o.diffColor = diffColor; o.specColor = specColor; o.oneMinusReflectivity = oneMinusReflectivity; o.oneMinusRoughness = oneMinusRoughness; return o; } inline FragmentCommonData MetallicSetup (float4 i_tex) { half2 metallicGloss = MetallicGloss(i_tex.xy); half metallic = metallicGloss.x; half oneMinusRoughness = metallicGloss.y; #ifdef SMOOTHNESS_IN_ALBEDO half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness); #else half3 albedo = Albedo(i_tex); #endif half oneMinusReflectivity; 
half3 specColor; half3 diffColor = DiffuseAndSpecularFromMetallic (albedo, metallic, /*out*/ specColor, /*out*/ oneMinusReflectivity); FragmentCommonData o = (FragmentCommonData)0; o.diffColor = diffColor; o.specColor = specColor; o.oneMinusReflectivity = oneMinusReflectivity; o.oneMinusRoughness = oneMinusRoughness; return o; } inline FragmentCommonData FragmentSetup (float4 i_tex, half3 i_eyeVec, half3 i_normalWorld, half3 i_viewDirForParallax, half3x3 i_tanToWorld, half3 i_posWorld, float2 iPos) { i_tex = Parallax(i_tex, i_viewDirForParallax); half alpha = Alpha(i_tex.xy); #if defined(_ALPHATEST_ON) clip (alpha - _Cutoff); #endif #ifdef _NORMALMAP half3 normalWorld = NormalizePerPixelNormal(mul(NormalInTangentSpace(i_tex), i_tanToWorld)); // @TODO: see if we can squeeze this normalize on SM2.0 as well #else // Should get compiled out, isn't being used in the end. half3 normalWorld = i_normalWorld; #endif half3 eyeVec = i_eyeVec; eyeVec = NormalizePerPixelNormal(eyeVec); FragmentCommonData o = UNITY_SETUP_BRDF_INPUT (i_tex); o.normalWorld = normalWorld; o.eyeVec = eyeVec; o.posWorld = i_posWorld; // NOTE: shader relies on pre-multiply alpha-blend (_SrcBlend = One, _DstBlend = OneMinusSrcAlpha) o.diffColor = PreMultiplyAlpha (o.diffColor, alpha, o.oneMinusReflectivity, /*out*/ o.alpha); return o; } inline UnityGI FragmentGI ( float3 posWorld, half occlusion, half4 i_ambientOrLightmapUV, half atten, half oneMinusRoughness, half3 normalWorld, half3 eyeVec, UnityLight light ) { UnityGI d; ResetUnityGI(d); d.light = light; d.worldPos = posWorld; d.worldViewDir = -eyeVec; d.atten = atten; #if defined(LIGHTMAP_ON) || defined(DYNAMICLIGHTMAP_ON) d.ambient = 0; d.lightmapUV = i_ambientOrLightmapUV; #else d.ambient = i_ambientOrLightmapUV.rgb; d.lightmapUV = 0; #endif //change the above code with this #if UNITY_SPECCUBE_BLENDING || UNITY_SPECCUBE_BOX_PROJECTION d.boxMin[0] = unity_SpecCube0_BoxMin; d.boxMin[1] = unity_SpecCube1_BoxMin; #endif #if 
UNITY_SPECCUBE_BOX_PROJECTION d.boxMax[0] = unity_SpecCube0_BoxMax; d.boxMax[1] = unity_SpecCube1_BoxMax; d.probePosition[0] = unity_SpecCube0_ProbePosition; d.probePosition[1] = unity_SpecCube1_ProbePosition; #endif //lets change the code //d.boxMax[0] = unity_SpecCube0_BoxMax; //d.boxMin[0] = unity_SpecCube0_BoxMin; //d.probePosition[0] = unity_SpecCube0_ProbePosition; //d.probeHDR[0] = unity_SpecCube0_HDR; //d.boxMax[1] = unity_SpecCube1_BoxMax; //d.boxMin[1] = unity_SpecCube1_BoxMin; //d.probePosition[1] = unity_SpecCube1_ProbePosition; //d.probeHDR[1] = unity_SpecCube1_HDR; return UnityGlobalIllumination( d, occlusion, oneMinusRoughness, normalWorld); } //------------------------------------------------------------------------------------- half4 OutputForward (half4 output, half alphaFromSurface) { #if defined(_ALPHABLEND_ON) || defined(_ALPHAPREMULTIPLY_ON) output.a = alphaFromSurface; #else UNITY_OPAQUE_ALPHA(output.a); #endif return output; } // ------------------------------------------------------------------ // Base forward pass (directional light, emission, lightmaps, ...) 
struct VertexOutputForwardBase { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax] half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UV SHADOW_COORDS(6) UNITY_FOG_COORDS(7) // next ones would not fit into SM2.0 limits, but they are always for SM3.0+ #if UNITY_SPECCUBE_BOX_PROJECTION float3 posWorld : TEXCOORD8; #endif }; VertexOutputForwardBase vertForwardBase (VertexInput v) { VertexOutputForwardBase o; UNITY_INITIALIZE_OUTPUT(VertexOutputForwardBase, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); #if UNITY_SPECCUBE_BOX_PROJECTION o.posWorld = posWorld.xyz; #endif o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0]; o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1]; o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndParallax[0].xyz = 0; o.tangentToWorldAndParallax[1].xyz = 0; o.tangentToWorldAndParallax[2].xyz = normalWorld; #endif //We need this for shadow receving TRANSFER_SHADOW(o); // Static lightmaps #ifndef LIGHTMAP_OFF o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw; o.ambientOrLightmapUV.zw = 0; // Sample light probe for Dynamic objects only (no static or dynamic lightmaps) #elif UNITY_SHOULD_SAMPLE_SH #if UNITY_SAMPLE_FULL_SH_PER_PIXEL o.ambientOrLightmapUV.rgb = 0; #elif (SHADER_TARGET < 30) o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0)); #else // Optimization: L2 per-vertex, L0..L1 per-pixel o.ambientOrLightmapUV.rgb = 
ShadeSH3Order(half4(normalWorld, 1.0)); #endif // Add approximated illumination from non-important point lights #ifdef VERTEXLIGHT_ON o.ambientOrLightmapUV.rgb += Shade4PointLights ( unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0, unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb, unity_4LightAtten0, posWorld, normalWorld); #endif #endif #ifdef DYNAMICLIGHTMAP_ON o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw; #endif #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); o.tangentToWorldAndParallax[0].w = viewDirForParallax.x; o.tangentToWorldAndParallax[1].w = viewDirForParallax.y; o.tangentToWorldAndParallax[2].w = viewDirForParallax.z; #endif UNITY_TRANSFER_FOG(o,o.pos); return o; } half4 fragForwardBase (VertexOutputForwardBase i, float face : VFACE) : SV_Target { // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndParallax[2].xyz *= face; FRAGMENT_SETUP(s) UnityLight mainLight = MainLight (s.normalWorld); half atten = SHADOW_ATTENUATION(i); half occlusion = Occlusion(i.tex.xy); UnityGI gi = FragmentGI ( s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, mainLight); half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect); c.rgb += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, occlusion, gi); c.rgb += Emission(i.tex.xy); UNITY_APPLY_FOG(i.fogCoord, c.rgb); return OutputForward (c, s.alpha); } // ------------------------------------------------------------------ // Additive forward pass (one light per pass) struct VertexOutputForwardAdd { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndLightDir[3] : TEXCOORD2; // [3x3:tangentToWorld 
| 1x3:lightDir] LIGHTING_COORDS(5,6) UNITY_FOG_COORDS(7) // next ones would not fit into SM2.0 limits, but they are always for SM3.0+ #if defined(_PARALLAXMAP) half3 viewDirForParallax : TEXCOORD8; #endif }; VertexOutputForwardAdd vertForwardAdd (VertexInput v) { VertexOutputForwardAdd o; UNITY_INITIALIZE_OUTPUT(VertexOutputForwardAdd, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndLightDir[0].xyz = tangentToWorld[0]; o.tangentToWorldAndLightDir[1].xyz = tangentToWorld[1]; o.tangentToWorldAndLightDir[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndLightDir[0].xyz = 0; o.tangentToWorldAndLightDir[1].xyz = 0; o.tangentToWorldAndLightDir[2].xyz = normalWorld; #endif //We need this for shadow receving TRANSFER_VERTEX_TO_FRAGMENT(o); float3 lightDir = _WorldSpaceLightPos0.xyz - posWorld.xyz * _WorldSpaceLightPos0.w; #ifndef USING_DIRECTIONAL_LIGHT lightDir = NormalizePerVertexNormal(lightDir); #endif o.tangentToWorldAndLightDir[0].w = lightDir.x; o.tangentToWorldAndLightDir[1].w = lightDir.y; o.tangentToWorldAndLightDir[2].w = lightDir.z; #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; o.viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); #endif UNITY_TRANSFER_FOG(o,o.pos); return o; } half4 fragForwardAdd (VertexOutputForwardAdd i, float face : VFACE) : SV_Target { // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndLightDir[2].xyz *= face; FRAGMENT_SETUP_FWDADD(s) UnityLight light = AdditiveLight (s.normalWorld, IN_LIGHTDIR_FWDADD(i), LIGHT_ATTENUATION(i)); UnityIndirect noIndirect = ZeroIndirect (); half4 c = 
UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, light, noIndirect); UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass return OutputForward (c, s.alpha); } // ------------------------------------------------------------------ // Deferred pass struct VertexOutputDeferred { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax] half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UVs #if UNITY_SPECCUBE_BOX_PROJECTION float3 posWorld : TEXCOORD6; #endif }; VertexOutputDeferred vertDeferred (VertexInput v) { VertexOutputDeferred o; UNITY_INITIALIZE_OUTPUT(VertexOutputDeferred, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); #if UNITY_SPECCUBE_BOX_PROJECTION o.posWorld = posWorld.xyz; #endif o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0]; o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1]; o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndParallax[0].xyz = 0; o.tangentToWorldAndParallax[1].xyz = 0; o.tangentToWorldAndParallax[2].xyz = normalWorld; #endif #ifndef LIGHTMAP_OFF o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw; o.ambientOrLightmapUV.zw = 0; #elif UNITY_SHOULD_SAMPLE_SH #if (SHADER_TARGET < 30) o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0)); #else // Optimization: L2 per-vertex, L0..L1 per-pixel o.ambientOrLightmapUV.rgb = 
ShadeSH3Order(half4(normalWorld, 1.0)); #endif #endif #ifdef DYNAMICLIGHTMAP_ON o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw; #endif #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); o.tangentToWorldAndParallax[0].w = viewDirForParallax.x; o.tangentToWorldAndParallax[1].w = viewDirForParallax.y; o.tangentToWorldAndParallax[2].w = viewDirForParallax.z; #endif return o; } void fragDeferred ( VertexOutputDeferred i, out half4 outDiffuse : SV_Target0, // RT0: diffuse color (rgb), occlusion (a) out half4 outSpecSmoothness : SV_Target1, // RT1: spec color (rgb), smoothness (a) out half4 outNormal : SV_Target2, // RT2: normal (rgb), --unused, very low precision-- (a) out half4 outEmission : SV_Target3, // RT3: emission (rgb), --unused-- (a) float face : VFACE ) { #if (SHADER_TARGET < 30) outDiffuse = 1; outSpecSmoothness = 1; outNormal = 0; outEmission = 0; return; #endif // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndParallax[2].xyz *= face; FRAGMENT_SETUP(s) // no analytic lights in this pass UnityLight dummyLight = DummyLight (s.normalWorld); half atten = 1; half occlusion = Occlusion(i.tex.xy); // only GI UnityGI gi = FragmentGI ( s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, dummyLight); half3 color = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect).rgb; color += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, occlusion, gi); #ifdef _EMISSION color += Emission (i.tex.xy); #endif #ifndef UNITY_HDR_ON color.rgb = exp2(-color.rgb); #endif outDiffuse = half4(s.diffColor, occlusion); outSpecSmoothness = half4(s.specColor, s.oneMinusRoughness); outNormal = half4(s.normalWorld*0.5+0.5,1); outEmission = half4(color, 1); } #endif // 
UNITY_STANDARD_CORE_INCLUDED I really don't know what is happening there; I've been stuck on it for two days.
  25. Iñigo Quilez presented techniques for raytracing and distance fields in 4096 bytes on the GPU at NVSCENE 2008. The presentation PDF is linked below.