
Search the Community

Showing results for tags 'OpenGL ES'.



Found 41 results

  1. Hi all, how would I implement this type of effect, and what is it called? Is this considered volumetric lighting? What are the options for doing it? a. A billboard? But I want it to keep a 3D feel when the camera rotates. b. A transparent 3D mesh? Could that be animated as well? I need your expert advice. Additionally, how would I implement something like a fireball projectile shot from a monster (a billboarded texture or a 3D mesh)? Note: I'm using OpenGL ES 2.0 on mobile. Thanks!
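
    A camera-facing billboard is the usual starting point for both the glow and the fireball; keeping the quad oriented along the camera's world-space right/up axes preserves the 3D feel as the view rotates. A minimal GLES 2.0 sketch, with the shader as a Java string; every uniform and attribute name here is illustrative, not from the post:

        // A sketch of spherical billboarding for GLES 2.0. Each quad corner is
        // expanded in the vertex shader along the camera's world-space right/up
        // axes (the rows of the view matrix's rotation part), so the quad
        // always faces the camera.
        static final String BILLBOARD_VERTEX_SHADER =
                "uniform mat4 u_viewProj;\n" +
                "uniform vec3 u_camRight;   // world-space camera right\n" +
                "uniform vec3 u_camUp;      // world-space camera up\n" +
                "attribute vec3 a_center;   // billboard centre in world space\n" +
                "attribute vec2 a_corner;   // quad corner offset in [-0.5, 0.5]\n" +
                "attribute vec2 a_uv;\n" +
                "varying vec2 v_uv;\n" +
                "void main() {\n" +
                "    vec3 world = a_center\n" +
                "               + u_camRight * a_corner.x\n" +
                "               + u_camUp    * a_corner.y;\n" +
                "    v_uv = a_uv;\n" +
                "    gl_Position = u_viewProj * vec4(world, 1.0);\n" +
                "}\n";

    Rendered with additive blending and an animated texture, this covers a fireball; for beam-like effects, a few crossed alpha-blended quads or a transparent cone mesh avoids the flatness of a single billboard.
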
  2. Hey guys. Wow, it's been super long since I've been here. Anyway, I'm having trouble with my 2D orthoM matrix setup for phones and tablets. Basically I want my coordinates to start at the top left of the screen, and I want my polygons to stay square regardless of portrait or landscape orientation. At the same time, if I translate a polygon to the middle of the screen, I want it to end up in the middle in both orientations. So far I'm pretty close with this setup:

        private float aspectRatio;

        @Override
        public void onSurfaceChanged(GL10 glUnused, int width, int height) {
            Log.d("Result", "onSurfacedChanged()");
            glViewport(0, 0, width, height);
            if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
                Log.d("Result", "onSurfacedChanged(PORTRAIT)");
                aspectRatio = (float) height / (float) width;
                orthoM(projectionMatrix, 0, 0f, 1f, aspectRatio, 0f, -1f, 1f);
            } else {
                Log.d("Result", "onSurfacedChanged(LANDSCAPE)");
                aspectRatio = (float) width / (float) height;
                orthoM(projectionMatrix, 0, 0f, aspectRatio, 1f, 0f, -1f, 1f);
            }
        }

    When I translate the polygon using translateM(), however, it goes to the middle in portrait mode, but in landscape it only moves partially to the right, as though portrait mode occupied some of the left of the screen. The only way I can get the translation to match is if, in landscape, I move the aspectRatio variable in orthoM() from the right argument to the bottom argument and make right 1f. That works, but then the polygon is stretched. Do I simply multiply the translation values by aspectRatio only in landscape mode to fix this, or is there a better way?

        if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
            Matrix.translateM(modelMatrix, 0, 0.5f, 0.5f * aspectRatio, 0f);
        } else {
            Matrix.translateM(modelMatrix, 0, 0.5f * aspectRatio, 0.5f, 0f);
        }

    Thanks in advance.
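
    For what it's worth, the scaling in the second snippet is the standard fix: in landscape the visible x range is [0, aspectRatio], so the centre really is at aspectRatio / 2 rather than 0.5. A small helper keeps that knowledge in one place (a sketch; the field and orientation check mirror the code above):

        // A sketch: centre of the visible region under the orthoM setups above.
        private float[] viewportCenter() {
            if (MainActivity.orientation == Configuration.ORIENTATION_PORTRAIT) {
                // x spans [0, 1], y spans [0, aspectRatio] (top-left origin)
                return new float[] { 0.5f, 0.5f * aspectRatio };
            } else {
                // x spans [0, aspectRatio], y spans [0, 1]
                return new float[] { 0.5f * aspectRatio, 0.5f };
            }
        }
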
  3. I'm trying to capture a frame with gl.readPixels and send the data to my server. For testing purposes, I tried rendering a texture with the same Uint8Array I used with gl.readPixels, but unfortunately I can't get the texture to show an image. Let me share the steps I'm taking. I made sure to allocate memory outside of the game loop:

        const width = Game.Renderer.width;
        const height = Game.Renderer.height;
        let pixels = new Uint8Array(4 * width * height);

    And before I unbind the framebuffer in the drawing function, I pick up the pixels (this is also where I send them to the server):

        gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
        if (stream) {
            if (stream.ready) stream.socket.send(pixels);
        }

    In my render function I have a function updating the texture I use for displaying video, or in this case a different image every frame:

        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this._video)

    This works perfectly with a video or an image element, but if I pass in my Uint8Array no image is rendered. The plan is to have the server send that same array to the other clients so they can use it to update their textures. Hopefully this makes sense. Thanks! BTW: not sure why my thread appeared two times; my connection timed out and I guess I pressed submit twice. My apologies, mods, I hid the duplicate thread.
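
    One thing worth checking: the six-argument texImage2D overload used above only accepts image, video, or canvas sources; raw typed-array data needs the overload with explicit width, height, and border arguments. A sketch using the GLES20 binding for consistency with the other examples here (the WebGL overload takes its parameters in the same order); note also that glReadPixels returns rows bottom-up, so the captured image will appear vertically flipped:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import android.opengl.GLES20;

        // A sketch: upload raw RGBA bytes (e.g. a glReadPixels capture) with
        // the texImage2D overload that takes an explicit width/height/border.
        static void uploadRgba(byte[] pixels, int width, int height) {
            ByteBuffer buf = ByteBuffer.allocateDirect(pixels.length)
                    .order(ByteOrder.nativeOrder());
            buf.put(pixels).position(0);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                    width, height, 0,          // explicit size and border
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        }
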
  4. Crystal Clash Build UI

    From the album Crystal Clash

  5. Crystal Clash Main Menu UI

    From the album Crystal Clash

  6. Crystal Clash In-game Screenshot #2

    From the album Crystal Clash

  7. Crystal Clash In-game Screenshot #1

    From the album Crystal Clash

  8. Crystal Clash Tutorial UI

    From the album Crystal Clash

  9. My light is positioned at vec3(0, 0, 2), which is in front of an object at vec3(0, 0, 0). If I don't rotate the object, everything looks fine. The problem occurs when I rotate the object: the lit area seems to rotate with the object, instead of simply lighting the faces that face the light. Here's another strange example, with specular added: the effect is correct in the first rotation, dark in the second, and back to good in the third. I can't figure out what the problem is with my shader. I even tried calculating the normal matrix in GLSL, in case my implementation was wrong, but I get the same results:

        nrms = mat3(transpose(inverse(model))) * normals;
        // and
        nrms = normalMat * normals;
        // both give the same results

    I really don't think it has to do with the normals; the light calculations visually seem okay, as long as I don't rotate the object. In fact, I can translate and rotate the camera and the lighting is still good, again as long as I don't rotate the object. By the way, camera rotation is not considered in the calculations, since I'm passing camera.transform.position as a vec3 to calculate the toCam vector I use in my lighting calculations. There's clearly something I'm doing wrong. I'm guessing it has to do with which space I'm calculating in; it's almost as if I'm calculating in the model's local space instead of world space. Hopefully somebody can identify what it is. I'll share the vertex and fragment shaders below (without the specular portion). Thanks a lot!

    Vertex:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        layout (location = 0) in vec3 vertex;
        layout (location = 1) in vec3 normals;
        layout (location = 2) in vec2 uv;
        layout (location = 3) in vec3 colors;

        out vec3 fragPos;
        out vec3 baseColors;
        out vec3 nrms;
        out vec3 camPosition;

        uniform vec3 camera;
        uniform mat3 normalMat;
        uniform mat4 model;
        uniform mat4 projection;
        uniform mat4 view;
        uniform mat4 mvp;

        void main() {
            // nrms = mat3(transpose(inverse(model))) * normals;
            baseColors = colors;
            nrms = normalMat * normals;
            fragPos = vec3(model * vec4(vertex, 1.0));
            camPosition = camera;
            // note: fragPos already includes the model transform, so
            // "mvp * vec4(fragPos, 1.0)" applies 'model' twice; the intended
            // line is probably "gl_Position = mvp * vec4(vertex, 1.0);"
            gl_Position = mvp * vec4(fragPos, 1.0);
        }

    Fragment:

        #version 300 es
        #ifdef GL_ES
        precision mediump float;
        #endif

        #define PI 3.14159265359
        #define TWO_PI 6.28318530718
        #define NUM_LIGHTS 2

        in vec3 fragPos;
        in vec3 baseColors;
        in vec3 nrms;
        in vec3 camPosition;

        out vec4 color;

        struct Light {
            vec3 position;
            vec3 intensities;
            float attenuation;
            float ambient;
        };

        Light light;

        void main() {
            light.position.x = 0.0;
            light.position.y = 0.0;
            light.position.z = 2.0;
            light.intensities.r = 1.0;
            light.intensities.g = 1.0;
            light.intensities.b = 1.0;
            light.ambient = 0.005;

            vec4 base = vec4(baseColors, 1.0);
            vec3 normals = normalize(nrms);
            vec3 toLight = normalize(light.position - fragPos);
            vec3 toCamera = normalize(camPosition - fragPos);

            // Ambient
            vec3 ambient = light.ambient * base.rgb * light.intensities;

            // Diffuse
            float diffuseBrightness = max(0.0, dot(normals, toLight));
            vec3 diffuse = diffuseBrightness * base.rgb * light.intensities;

            // Composition
            vec3 linearColor = ambient + (diffuse);
            vec3 gamma = vec3(1.0 / 2.2);
            color = vec4(pow(linearColor, gamma), base.a);
        }
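
    The post doesn't show how normalMat is computed on the CPU, and a stale normal matrix (built once instead of rebuilt from the current model matrix whenever the model rotates) produces exactly this "lighting rotates with the mesh" symptom, so it may be worth ruling out. A sketch using android.opengl.Matrix, assuming a column-major 4x4 model matrix:

        import android.opengl.Matrix;

        // A sketch: rebuild the normal matrix from the *current* model matrix
        // whenever the model rotates; upload it with glUniformMatrix3fv.
        static float[] normalMatrix3x3(float[] model) {
            float[] inv = new float[16];
            float[] invT = new float[16];
            Matrix.invertM(inv, 0, model, 0);
            Matrix.transposeM(invT, 0, inv, 0);
            return new float[] {                 // upper-left 3x3, column-major
                invT[0], invT[1], invT[2],
                invT[4], invT[5], invT[6],
                invT[8], invT[9], invT[10],
            };
        }
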
  10. I'm trying to learn how to make my own model, view, projection setup. I've managed to translate, rotate, and scale my models, but I have an issue with my perspective projection matrix. Even though I'm multiplying halfFOV by my aspect ratio, the image looks squished unless my window is a perfect square. If it's not a perfect square, the wider my window the more stretched my object looks in the Z axis, like an egg. So it definitely has to do with my projection matrix, specifically with my aspect ratio. The way I'm multiplying my matrices is as follows; I'll show my translation and projection to keep it simple.

    Translation:

        [1, 0, 0, 0,
         0, 1, 0, 0,
         0, 0, 1, 0,
         x, y, z, 1]

    Projection:

        let halfFOV = Math.tan(toRad(FOV / 2.0));
        let zRange = NEAR - FAR;
        let x = 1.0 / (halfFOV * aspect);
        let y = 1.0 / halfFOV;
        let z = (NEAR + FAR) / zRange;
        let w = 2 * FAR * NEAR / zRange;

        [x, 0, 0,  0,
         0, y, 0,  0,
         0, 0, z, -1,
         0, 0, w,  0]

    I normally see the -1 where the w is, but for some reason I need to set it up the way you see in my matrix; if not, it won't work. I'll also quickly share how I'm multiplying matrices:

        function (r, a, b) {
            r.mat[0] = (a.mat[0] * b.mat[0]) + (a.mat[1] * b.mat[4]) + (a.mat[2] * b.mat[8])  + (a.mat[3] * b.mat[12]);
            r.mat[1] = (a.mat[0] * b.mat[1]) + (a.mat[1] * b.mat[5]) + (a.mat[2] * b.mat[9])  + (a.mat[3] * b.mat[13]);
            r.mat[2] = (a.mat[0] * b.mat[2]) + (a.mat[1] * b.mat[6]) + (a.mat[2] * b.mat[10]) + (a.mat[3] * b.mat[14]);
            r.mat[3] = (a.mat[0] * b.mat[3]) + (a.mat[1] * b.mat[7]) + (a.mat[2] * b.mat[11]) + (a.mat[3] * b.mat[15]);
            // don't need to add the rest of it...
        }

        // How I use it
        Mathf.mul(resultMat, position, projection);

    So I take the left row and multiply it against the right column, and I get rows as a result, as you can see. If you want to check out the full implementation, it's here. I've also checked that I'm getting the correct window size, and I divide width/height to get the aspect ratio. Not sure what I'm doing wrong. I also tried multiplying my matrices on the GPU (GLSL) and I get the same results, so it's definitely my projection matrix. Hope this all makes sense. Edit: I probably should have posted this thread in the Math categories, my apologies.
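
    One way to narrow this down is to diff the hand-rolled matrix against a known-good implementation that uses the same column-major linear layout; android.opengl.Matrix.perspectiveM is one such reference (a sketch, with an arbitrary 60-degree field of view):

        import android.opengl.Matrix;

        // A sketch: a reference projection to compare element-by-element with
        // a hand-rolled one. perspectiveM fills a column-major 4x4, so m[11]
        // holds the -1 seen in the post's linear layout.
        static float[] referenceProjection(int width, int height) {
            float[] proj = new float[16];
            Matrix.perspectiveM(proj, 0, 60f,
                    (float) width / (float) height, 0.1f, 100f);
            return proj;
        }
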
  11. Hi, I'm trying to pack 4 color values into a single 32-bit float, but I'm having some issues: the resulting color values are not correct. What could be wrong here? This is the part of the code where I pack the 4 bytes into a single float in Java:

        byte[] colorBytes = new byte[4];
        colorBytes[0] = (byte) (color.x * 256);   // note: for color.x == 1.0 this
        colorBytes[1] = (byte) (color.y * 256);   // overflows a byte; * 255 is the
        colorBytes[2] = (byte) (color.z * 256);   // usual scale factor
        colorBytes[3] = (byte) (color.w * 256);
        vertexManager.appendVertexColorData(
                ByteBuffer.wrap(colorBytes).order(ByteOrder.LITTLE_ENDIAN).getFloat());

    I also tried this:

        bitSh.x = 1.0f / (256.0f * 256.0f * 256.0f);
        bitSh.y = 1.0f / (256.0f * 256.0f);
        bitSh.z = 1.0f / (256.0f);
        bitSh.w = 1.0f;
        color.x = object.vertexColorData[i * 4 + 0] * r;
        color.y = object.vertexColorData[i * 4 + 1] * g;
        color.z = object.vertexColorData[i * 4 + 2] * b;
        color.w = object.vertexColorData[i * 4 + 3] * a;
        vertexManager.appendVertexColorData(color.dot(bitSh));

    But it didn't work either; it gave me different, also incorrect, results. This is the vertex shader:

        uniform mat4 MVPMatrix; // model-view-projection matrix
        attribute vec4 position;
        attribute vec2 textureCoords;
        attribute float color;

        varying vec4 outColor;
        varying vec2 outTexCoords;

        const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
        const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);

        vec4 unpack_float(const float value) {
            vec4 res = fract(value * bitSh);
            res -= res.xxyz * bitMsk;
            return res;
        }

        void main() {
            outTexCoords = textureCoords;
            outColor = unpack_float(color);
            gl_Position = MVPMatrix * position;
        }

    And this is the fragment shader:

        precision lowp float;

        uniform sampler2D texture;

        varying vec4 outColor;
        varying vec2 outTexCoords;
        varying vec3 outNormal;

        void main() {
            gl_FragColor = texture2D(texture, outTexCoords) * outColor;
        }

    Thanks in advance.
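
    Worth noting: reinterpreting four arbitrary bytes as an IEEE-754 float is inherently fragile, since some bit patterns are NaNs and can be silently altered in transit. On GLES 2.0 the packing can usually be skipped altogether by declaring attribute vec4 color and feeding it four normalized unsigned bytes (a sketch; the handle and buffer names are illustrative):

        // A sketch: pass RGBA as 4 normalized unsigned bytes; the shader then
        // receives a vec4 with each channel already mapped to [0, 1].
        vertexBuffer.position(colorByteOffset);
        GLES20.glVertexAttribPointer(colorHandle, 4, GLES20.GL_UNSIGNED_BYTE,
                true /* normalized */, strideBytes, vertexBuffer);
        GLES20.glEnableVertexAttribArray(colorHandle);
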
  12. I need to pass 24 vec3s to a shader, but glUniform3fv requires an array of GLfloat. Since my vec3 structure looks like struct vec3 { float x; float y; float z; };, can I just copy it safely into float parray[24 * 3] with memcpy? (Don't worry about GLfloat vs. float sizes; this is pseudocode.) I just need to know whether the resulting array will have the first vertex at position 0, the second at position 3, the third at 6, and so on, because I'm not sure: even when float and GLfloat sizes match, could I get extra padding bytes anywhere? And the other question is how I then declare the uniform in the shader: uniform vec3 box[24];? Cheers
  13. Hi, I am having a problem where I am drawing 4000 squares on screen, using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I find it hard to believe it isn't capable of drawing 4000 textured squares at 60 FPS. I checked with DDMS and found that most of the time was spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside of the view frustum, the FPS increases, and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
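
    A put()-dominated profile usually means the vertex buffer is being refilled one float at a time every frame. If the data really must be rebuilt per frame, one common mitigation is a single bulk copy from a reused float[] into a direct buffer allocated once (a sketch; the names are illustrative):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        // A sketch: one direct FloatBuffer allocated up front, refilled with a
        // single bulk put() per frame instead of thousands of small ones.
        FloatBuffer vertexBuf = ByteBuffer.allocateDirect(maxFloats * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        void refill(float[] scratch, int count) {
            vertexBuf.clear();
            vertexBuf.put(scratch, 0, count);  // single bulk copy
            vertexBuf.flip();
        }

    For geometry that never changes, the per-frame refill can be dropped entirely after the initial GL_STATIC_DRAW glBufferData upload.
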
  14. Hi, I am trying to implement packed (interleaved) VBOs with indexing in OpenGL, but I have run into problems. It worked fine when I had separate buffers for vertex positions, colors, and texture coordinates, but when I tried to put everything into a single packed buffer, it completely glitched out. Here's the code I am using:

        this.vertexData.position(0);
        this.indexData.position(0);

        int stride = (3 + 4 + 2) * 4;

        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.capacity() * 4,
                vertexData, GLES20.GL_STATIC_DRAW);

        ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();

        GLES20.glEnableVertexAttribArray(positionAttrID);
        GLES20.glVertexAttribPointer(positionAttrID, dimensions,
                GLES20.GL_FLOAT, false, stride, 0);

        GLES20.glEnableVertexAttribArray(colorAttrID);
        GLES20.glVertexAttribPointer(colorAttrID, 4,
                GLES20.GL_FLOAT, false, stride, dimensions * 4);

        GLES20.glEnableVertexAttribArray(texCoordAttrID);
        GLES20.glVertexAttribPointer(texCoordAttrID, 2,
                GLES20.GL_FLOAT, false, stride, (dimensions + 4) * 4);

        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
        GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity() * 2,
                indexData, GLES20.GL_STATIC_DRAW);
        GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

    The data in the vertex buffer is ordered like this: vertex X, vertex Y, vertex Z, color r, color g, color b, color a, tex coord x, tex coord y, and so on (and I am pretty certain the buffer I'm using really is in this order). This is the version of the code which worked fine:

        this.vertexData.position(0);
        this.vertexColorData.position(0);
        this.vertexTexCoordData.position(0);
        this.indexData.position(0);

        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexPositionData.capacity() * 4,
                vertexPositionData, GLES20.GL_STATIC_DRAW);

        ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();

        GLES20.glEnableVertexAttribArray(positionAttrID);
        GLES20.glVertexAttribPointer(positionAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexColorData.capacity() * 4,
                vertexColorData, GLES20.GL_STATIC_DRAW);

        GLES20.glEnableVertexAttribArray(colorAttrID);
        GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[2]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexTexCoordData.capacity() * 4,
                vertexTexCoordData, GLES20.GL_STATIC_DRAW);

        GLES20.glEnableVertexAttribArray(textCoordAttrID);
        GLES20.glVertexAttribPointer(textCoordAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);

        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
        GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity() * 2,
                indexData, GLES20.GL_STATIC_DRAW);
        GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);

    This is the output of the non-working code; from the picture I can see that some of the vertex positions are good, but for some reason every renderable object in the game has at least one vertex position of value 0. Thanks in advance, Ed
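
    For reference, with 3 position floats, 4 color floats, and 2 UV floats per vertex, the byte offsets work out as below; asserting them against the order in which the buffer is filled can catch off-by-one interleaving bugs (a sketch mirroring the stride math above):

        // A sketch: the interleaved layout the glVertexAttribPointer calls
        // above describe (a float is 4 bytes).
        static final int POS_FLOATS = 3, COLOR_FLOATS = 4, UV_FLOATS = 2;
        static final int STRIDE = (POS_FLOATS + COLOR_FLOATS + UV_FLOATS) * 4; // 36 bytes
        static final int POS_OFFSET   = 0;                               // bytes 0-11
        static final int COLOR_OFFSET = POS_FLOATS * 4;                  // bytes 12-27
        static final int UV_OFFSET    = (POS_FLOATS + COLOR_FLOATS) * 4; // bytes 28-35
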
  15. I'm trying to write a leather material shader. I have a normal map, a bump map (grayscale), a specular map, a diffuse map, and cube maps. I have done the following:

        #version 100
        precision highp int;
        precision highp float;

        uniform sampler2D diffuseColorMap;
        uniform sampler2D ambientOcclusionMap;
        uniform sampler2D normalMap;
        uniform sampler2D specularMap;
        uniform sampler2D bumpMap;
        uniform samplerCube envMap;

        varying vec2 texCoord[2];
        varying vec3 viewWorld;

        uniform float reflectionFactor;
        uniform float diffuseFactor;
        uniform float opacity;

        varying vec3 eyeVector;
        varying mat3 world2Tangent;
        varying vec3 lightVec;
        varying vec3 halfVec;
        varying vec3 eyeVec;

        void main() {
            vec3 normalTangent = 2.0 * texture2D(normalMap, texCoord[0]).rgb - 1.0;

            // Central differences over the grayscale bump map
            vec4 x_forw = texture2D(bumpMap, texCoord[0] + vec2(1.0 / 2048.0, 0.0));
            vec4 x_back = texture2D(bumpMap, texCoord[0] - vec2(1.0 / 2048.0, 0.0));
            vec4 y_forw = texture2D(bumpMap, texCoord[0] + vec2(0.0, 1.0 / 2048.0));
            vec4 y_back = texture2D(bumpMap, texCoord[0] - vec2(0.0, 1.0 / 2048.0));

            vec3 tangX = vec3(1.0, 0.0, 3.0 * (x_forw.x - x_back.x));
            vec3 tangY = vec3(0.0, 1.0, 3.0 * (y_forw.x - y_back.x));
            vec3 heightNormal = normalize(cross(tangX, tangY));
            // note: this remaps the normal into the [0, 1] colour range *before*
            // it is used for lighting; that encoding step is normally only done
            // when writing a normal out to a texture
            heightNormal = heightNormal * 0.5 + 0.5;

            float bumpAngle = max(0.0, dot(vec3(0.0, 0.0, 1.0), heightNormal));
            vec3 normalWorld = normalize(world2Tangent * heightNormal);
            vec3 refDir = viewWorld - 2.0 * dot(viewWorld, normalWorld) * normalWorld;

            // compute diffuse lighting
            vec4 diffuseMaterial = texture2D(diffuseColorMap, texCoord[0]);
            vec4 diffuseLight = vec4(1.0, 1.0, 1.0, 1.0);

            // In Doom 3, the specular value comes from a texture
            vec4 specularMaterial = texture2D(specularMap, texCoord[0]);
            vec4 specularLight = vec4(1.0, 1.0, 1.0, 1.0);
            float shininess = pow(max(dot(halfVec, heightNormal), 0.0), 2.0);

            vec4 reflection = textureCube(envMap, refDir);

            // gl_FragColor = diffuseMaterial * diffuseLight * lamberFactor;
            // gl_FragColor += specularMaterial * specularLight * shininess;
            // gl_FragColor += reflection * 0.3;
            gl_FragColor = diffuseMaterial * bumpAngle;
        }

    My question is: how would I apply the bump map (grayscale) to the reflection result, and what is wrong in my shader?
  16. Hello everyone, this is going to be my first blog here, and I would like to start by introducing you to my new project, which I have already been working on for over two weeks. In those two weeks I implemented a joystick system, created a custom map-creation tool, added collision detection and pathfinding, programmed a simple quest system, and much more. The game is written in Java and OpenGL. Here are some screenshots and a video: https://www.youtube.com/watch?time_continue=1&v=LqYiXptAIRI
  17. Hi guys, I have been struggling for a number of hours trying to make a directional (per-fragment) lighting shader. I have been following the 'per fragment' section of this tutorial - http://www.learnopengles.com/tag/per-vertex-lighting/ - along with tutorials from other sites. This is what I have at this point:

        // Vertex shader
        varying vec3 v_Position;   // note: this declaration was missing from the
                                   // original paste but is written to below
        varying vec3 v_Normal;
        varying vec4 v_Colour;
        varying vec3 v_LightPos;

        uniform vec3 u_LightPos;
        uniform mat4 worldMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;

        void main() {
            vec4 object_space_pos = vec4(in_Position, 1.0);

            gl_Position = worldMatrix * vec4(in_Position, 1.0);
            gl_Position = viewMatrix * gl_Position;  // WV
            gl_Position = projectionMatrix * gl_Position;

            // note: with column vectors the usual composition is
            // view * world (matching the order applied to gl_Position above);
            // worldMatrix * viewMatrix composes the transforms the wrong
            // way round, which is a likely culprit here
            mat4 WV = worldMatrix * viewMatrix;

            v_Position = vec3(WV * object_space_pos);
            v_Normal = vec3(WV * vec4(in_Normal, 0.0));
            v_Colour = in_Colour;
            v_LightPos = u_LightPos;
        }

    And:

        // Fragment shader
        varying vec3 v_Position;
        varying vec3 v_Normal;
        varying vec4 v_Colour;
        varying vec3 v_LightPos;

        void main() {
            float dist = length(v_LightPos - v_Position);
            vec3 lightVector = normalize(v_LightPos - v_Position);

            float diffuse_light = max(dot(v_Normal, lightVector), 0.1);
            diffuse_light = diffuse_light * (1.0 / (1.0 + (0.25 * dist * dist)));

            gl_FragColor = v_Colour * diffuse_light;
        }

    If I change the last line of the fragment shader to gl_FragColor = v_Colour; the model (a white sphere) renders to the screen in solid white, as expected. But if I leave the shader as it is above, the object is invisible. I suspect it is something to do with this line in the vertex shader, but I am at a loss as to what is wrong: v_Position = vec3(WV * object_space_pos); If I comment that line out, I get some sort of shading going on which looks like it is trying to light the subject (with the normals calculating etc.). Any help would be hugely appreciated. Thanks in advance.
  18. Province Map

    Hi, I would like to create a province map, something like the attached example from Age of Conquest, using libGDX. After some research I learned that it can be done with two images: one with the graphics, and a second, invisible one with distinct colors to handle clicks. I have some doubts about this method. (1) How do I deal with memory? I created a sample map of 960x540 and it weighs 600 KB, and I would need a map ten times bigger. I could cut it into smaller pieces and render them, but I'm afraid that could cause lag when scrolling the map. (2) How do I handle highlighting the provinces? I managed to implement a simple highlight limited to one province by creating a filter in an OpenGL fragment shader, but what if I want to highlight multiple provinces (e.g. all provinces of some country)? I guess it can be done in a shader too, but it may be much more complicated. (3) I would also like to implement fog of war over the undiscovered provinces; how could one do that? I would really appreciate your guidance. Or perhaps, to create the above map, I need some other method? (A sketch of the click-lookup side of the idea follows below.)
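
    A sketch of the click-mask idea, assuming the colour-coded map is kept CPU-side as a libGDX Pixmap (it never needs to be uploaded to the GPU); the method name is illustrative:

        import com.badlogic.gdx.graphics.Pixmap;

        // A sketch: map a click at map coordinates (x, y) to a province key
        // by reading the invisible colour-coded mask image.
        static int provinceAt(Pixmap clickMask, int x, int y) {
            int rgba8888 = clickMask.getPixel(x, y); // packed 0xRRGGBBAA
            return rgba8888 >>> 8;                   // drop alpha: 24-bit key
        }

    Multi-province highlighting and fog of war can reuse the same idea in reverse: upload a small lookup texture indexed by province key whose texels encode "highlighted / visible / hidden", then have the fragment shader sample the mask colour, map it to the key, and fetch the state, rather than hard-coding one province's colour into the shader.
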
  19. Android Build and Performance

    In the last few weeks I've been focusing on getting the Android build of my jungle game working and tested. Last time I did this I was working from Windows, but now that I've totally migrated to Linux, I wasn't sure how easily everything would go. In the end, it turns out that support for Linux is great; in fact, it was easier than getting things up and running on Windows, with no special drivers needed. Android Studio, and particularly the emulators, seem better than last time, with x86 emulators running at near-native speed and much quicker APK uploads to the emulators (although uploads are still slow to real devices; I gather I can improve this by updating them to a higher Android version, but then they are less good for testing). The devices I have at home are an old Cat B15 phone (800x480, with a GPU that seems to date from 2006!), a Nexus 7 2012 tablet, and finally an Amlogic S905X TV media player (2017). Funnily enough, the TV box has been the most involved to get working.

    CPU issues

    My first issue to contend with was a 'SIGBUS illegal alignment' error when running on the phone. After tracking it down, it turns out this particular ARM CPU is very picky about the alignment of data. It is usually good practice to keep structures well aligned, but x86 is very forgiving, and I use quite a few structs #pragma-packed to 1 byte, particularly in serialization. Some padding in the structures sorted this out. Next, I spent many hours trying to figure out a strange bug whereby the object lighting worked fine on the emulators but looked wrong on the device. I had a suspicion it was a signed/unsigned issue in the values for diffuse light in a shader input, but I couldn't see anything wrong with the code. Almost unbelievably, when I tracked it down, it turned out there wasn't anything wrong with the code: the problem was that on the x86 compiler a 'char' defaults to signed char, but on the ARM compiler 'char' defaults to unsigned! This is an interesting choice (apparently unsigned may be faster on the ARM chip), but it goes against the usual convention for short, int, etc. It was easy enough to fix by flipping a compiler switch (e.g. -fsigned-char). I guess I should really be using explicit signed/unsigned types. It has always struck me as somewhat weird that C is so vague with the built-in types, in both the number of bits and the sign, given that changing these usually produces bugs.

    GPU issues

    The biggest problem I tend to have with OpenGL ES devices is the 'precision' specifiers in shaders. You can fill them in however you want on the desktop, but it just ignores them and uses high precision; different devices, however, have different capabilities for lowp, mediump and highp in both vertex and fragment shaders. What would be really helpful is if the people making the emulators / desktop OpenGL ES could let them emulate the lower precisions, allowing us to debug precision on the desktop. Alas, I couldn't figure out a way to get this to work. It may be impossible using hardware OpenGL ES, but the emulator can also use SwiftShader, so maybe they could implement it there? My biggest problem was that my worst-performing device for precision was actually my newest, the TV box. It is built for super-fast decoding of high-resolution video, but the fragment shaders are a minimal 10-bit-precision affair, and the fill rate is poor for a 1080p device. This was coupled with the problem that I couldn't connect it to the desktop over USB for debugging; I was literally compiling an APK, putting it on a USB stick (or Dropbox), taking it to the bedroom, installing, and running. This is not ideal, and I will look into either seeing if ADB will run over my LAN or getting another low-precision device for testing. I won't go into detail on the precision issues; I wrote more in a post here: https://www.gamedev.net/forums/topic/694188-debugging-precision-issues-in-opengl-es-2 As a quick summary, 10 bits of precision in the fragment shader can lead to sampling error in any maths done there, especially in texture-coordinate maths. I was able to fix some of my problems by moving the tex-coordinate calculations to the vertex shader, which has more precision. Then, it turns out that my TV box (and presumably many such chipsets) supports an extra-high-precision path in the fragment shader, *as long as you don't touch the input data*. This allows them to do accurate UV coordinates on large texture maps, because they don't go through the 10-bit precision path.

    Menus

    I've written a rudimentary menu system for the game, with tickboxes, sliders and listboxes. This has enabled me to put in a bunch of debugging features I can turn on and off on devices, to try and find out what affects performance, without recompiling. Another trick from my console days is that I have put in some simple graphical performance bars. I record the last 60 frames into a circular buffer and store things like the frame duration and when certain game tasks took place. In my case the big issue is when a 'scroll' event takes place, as I render horizontal and vertical tiles of the landscape as you move about it. In the diagram, the blue bar is where a scroll happens, a green bar is where the ground scroll happens, and the red is the frame duration. It doesn't show much on the desktop, as the GPU is fast, but on the slow devices I often get a dropped frame on the scrolls, so I am trying to reduce this. I can turn various aspects of the scrolling / rendering on and off to track down what causes performance issues. Certainly PCF shadows are a big ask on mobiles, as is the ground (terrain) shader. In my first incarnation of the game I pre-rendered everything (graphics + shadows) out to a massive texture at load-up and just scrolled through it as you moved. This is great for performance, but unfortunately uses a shedload of memory if you want big maps, and phones don't have lots of memory. So a lot of technical effort has gone into writing the scrolling system, which redraws the background in horizontal and vertical tiles as you move about. This is much more tricky with an angled landscape than with a top-down 90-degree view, and even more tricky when you have to render shadow maps as you move. Having identified the shadow-map pass as a bottleneck, I did some quick calculations for my max map size (approx 16384x16384) and decided that I could probably get away with pre-rendering the shadow map to a 2048x2048 texture. Alright, it isn't very high resolution, but it beats turning shadows off completely. This is working fine, and it avoids a lot of ugly issues from scrolling the shadow map. To render out the shadow map I render a bunch of 256x256 tiles and copy them to the final shadow map. This fixed some of the slowness; then I realised I could go a step further. Much of the PCF-shadow slowdown was from rendering the landscape shadows. The buildings and objects are much rarer, so I figured I could pre-render a low-res landscape shadow texture and use that when scrolling, then only do the expensive PCF / simple shadows on the static and dynamic objects. This worked a treat, and it incidentally solves at a stroke the precision issues I was having with the shadow shader on the 10-bit hardware.

    Joysticks

    As well as supporting touchscreens and keyboards, I want to support gamepads, so I bought a bluetooth / wireless gamepad for Christmas. It works great with the TV box via the wireless dongle; unfortunately, the bluetooth doesn't seem to work with my old phone and tablet, or with my desktop, so it has been very difficult / impossible to debug the analog joystick support. And, in an oversight(?) in the emulator, there doesn't seem to be an option for emulating a gamepad; I can get a D-pad, but I don't think it is analog. So after some stabs in the dark with the docs, I am still facing gamepad focus issues and will have to wait till I have a suitable device to debug this. That's all for now, folks!
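
    The performance-bar idea above is easy to replicate; a minimal sketch of the described 60-frame history, with field names that are mine rather than from the post:

        // A sketch: a fixed ring of per-frame records (frame duration plus
        // which scroll events fired) for drawing graphical performance bars.
        static final int HISTORY = 60;

        static final class FrameStat {
            float durationMs;        // red bar
            boolean scrolled;        // blue bar
            boolean groundScrolled;  // green bar
        }

        final FrameStat[] ring = new FrameStat[HISTORY];
        int head = 0;

        void record(float durationMs, boolean scrolled, boolean groundScrolled) {
            if (ring[head] == null) ring[head] = new FrameStat();
            ring[head].durationMs = durationMs;
            ring[head].scrolled = scrolled;
            ring[head].groundScrolled = groundScrolled;
            head = (head + 1) % HISTORY; // overwrite the oldest record
        }
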
  20. I'm interested in rendering a grayscale output from a shader, to save into a texture for later use. I only want a 1-channel, 8-bit texture rather than RGBA, to save memory etc. I can think of a number of possible ways of doing this in OpenGL off the top of my head; I'm just wondering what you guys think is the best / easiest / most compatible way before I dive into coding. This has to work on old Android OpenGL ES 2 phones / tablets etc., so nothing too funky. (1) Is there some way of rendering to a normal RGBA framebuffer, then using glCopyTexSubImage2D or similar to copy and translate the RGBA to a grayscale texture? This would seem the most obvious, and the docs kind of suggest it might work. (2) Creating an 8-bit framebuffer, if this is possible / a good option? (3) Rendering out RGBA, using glReadPixels, translating on the CPU to grayscale, then re-uploading as a fresh texture. Slow and horrible, but this is a preprocess, and it would be a good option if it is more guaranteed to work than the other methods.
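
    Whichever route produces the bytes, the upload side on ES 2.0 is a single-channel GL_LUMINANCE texture, which is core there (unlike single-channel color-renderable framebuffer formats, which are not); a sketch of option 3's final step:

        // A sketch: upload CPU-converted grayscale bytes as a 1-channel, 8-bit
        // GL_LUMINANCE texture (it samples as (L, L, L, 1) in the shader).
        static void uploadGrayscale(java.nio.ByteBuffer gray, int width, int height) {
            GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // rows tightly packed
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                    width, height, 0, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, gray);
        }
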
  21. I'm currently debugging compatibility issues with my OpenGL ES 2.0 shaders across several different Android devices. One of the biggest problems I'm finding is how the different precisions in GLSL (lowp, mediump, highp) equate to actual precisions in the hardware. To that end, I've been using glGetShaderPrecisionFormat to get the log2 of each precision for vertex and fragment shaders, and outputting this in-game to the game screen. On my PC the precision comes back as 23, 23, 23 for all three (low, medium, high), running natively under Linux or in the Android Studio emulator. On my tablet it is 23, 23, 23 as well. On my phone it comes back as 8, 10, 23. If I get a precision issue on the phone, I can always bump it up to the next level to cure it. However, the fun comes on my Android TV box (Amlogic S905X), which seems to only support 10, 10, 0 for fragment shaders; that is, it doesn't even support high precision in fragment shaders. Being the only device with this problem, it is incredibly difficult to debug the shaders on, as I can't attach it via USB (unless I can get it connected via the LAN, which I haven't tried yet). I'm having to compile the APK, put it on a USB stick, take it into the other room, install, and run, which is ridiculous. My question is: what method do other people use to debug these precision issues? Is there a way to get the emulator to emulate having rubbish precision? That would seem the most convenient solution (and if not, why hasn't it been implemented?). Other than that, it seems like I need to buy some old phones / tablets off eBay, or 'downgrade' the precision in the shader (to mediump) and debug it on my phone...
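
    For anyone wanting to reproduce the numbers above, the query looks like this with the GLES20 binding (a sketch; the same call exists in native GLES code):

        // A sketch: query fragment-shader highp support. precision[0] is the
        // log2 precision (23 for IEEE single-precision); 0 together with a
        // zero range means highp is unsupported in fragment shaders here.
        int[] range = new int[2];      // log2 of min/max representable magnitude
        int[] precision = new int[1];
        GLES20.glGetShaderPrecisionFormat(GLES20.GL_FRAGMENT_SHADER,
                GLES20.GL_HIGH_FLOAT, range, 0, precision, 0);
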
  22. I am working on a multiplayer Android game using OpenGL ES in Android Studio. The game is planned as a 2D top-down shooter in which the players survive while defeating waves of enemies. Coordination between the players is required to defeat the enemies, due to the enemy design; for example, some enemies can be seen only by the player they target and must be killed by others. I have already implemented the basic menus and mechanics for the game, and am currently searching for a 2D artist to create graphics for it. I can be contacted at ron_solan@walla.com
  23. (Posted this in the graphics forum too, which was perhaps the wrong forum for it.) Hey, I was wondering, for mobile development (Android mainly, but iOS as well if you know of it): is there a GPUView equivalent for whole-system debugging, so we can figure out whether the CPU/GPU are being pipelined efficiently, whether there are bubbles, etc.? Also, a slightly tangential question: do mobile GPUs have a DMA engine exposed as a dedicated transfer queue for Vulkan? Thanks!
  24. Hey, I was wondering, for mobile development (Android mainly, but iOS as well if you know of it): is there a GPUView equivalent for whole-system debugging, so we can figure out whether the CPU/GPU are being pipelined efficiently, whether there are bubbles, etc.? Also, a slightly tangential question: do mobile GPUs have a DMA engine exposed as a dedicated transfer queue for Vulkan?
  25. I get Shader error in 'Volund/Standard Character (Specular)': invalid subscript 'worldPos' at Assets/Features/Shared/Volund_UnityStandardCore.cginc(252) (on d3d11) Compiling Vertex program with DIRECTIONAL Platform defines: UNITY_NO_DXT5nm UNITY_ENABLE_REFLECTION_BUFFERS UNITY_USE_DITHER_MASK_FOR_ALPHABLENDED_SHADOWS UNITY_PBS_USE_BRDF1 UNITY_SPECCUBE_BOX_PROJECTION UNITY_SPECCUBE_BLENDING UNITY_ENABLE_DETAIL_NORMALMAP SHADER_API_DESKTOP UNITY_COLORSPACE_GAMMA UNITY_LIGHT_PROBE_PROXY_VOLUME Here is my shader code on Volund_UnityStandardCore.cginc // Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld' // Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)' #ifndef UNITY_STANDARD_CORE_INCLUDED #define UNITY_STANDARD_CORE_INCLUDED #include "Volund_UnityStandardInput.cginc" #include "UnityCG.cginc" #include "UnityShaderVariables.cginc" #include "UnityStandardConfig.cginc" #include "UnityPBSLighting.cginc" #include "UnityStandardUtils.cginc" #include "UnityStandardBRDF.cginc" #include "AutoLight.cginc" #if defined(ORTHONORMALIZE_TANGENT_BASE) #undef UNITY_TANGENT_ORTHONORMALIZE #define UNITY_TANGENT_ORTHONORMALIZE 1 #endif //------------------------------------------------------------------------------------- // counterpart for NormalizePerPixelNormal // skips normalization per-vertex and expects normalization to happen per-pixel half3 NormalizePerVertexNormal (half3 n) { #if (SHADER_TARGET < 30) return normalize(n); #else return n; // will normalize per-pixel instead #endif } half3 NormalizePerPixelNormal (half3 n) { #if (SHADER_TARGET < 30) return n; #else return normalize(n); #endif } //------------------------------------------------------------------------------------- UnityLight MainLight (half3 normalWorld) { UnityLight l; #ifdef LIGHTMAP_OFF l.color = _LightColor0.rgb; l.dir = _WorldSpaceLightPos0.xyz; l.ndotl = LambertTerm (normalWorld, l.dir); #else // no light specified by the engine // analytical light might be extracted from Lightmap data later on in the shader depending on the Lightmap type l.color = half3(0.f, 0.f, 0.f); l.ndotl = 0.f; l.dir = half3(0.f, 0.f, 0.f); #endif return l; } UnityLight AdditiveLight (half3 normalWorld, half3 lightDir, half atten) { UnityLight l; l.color = _LightColor0.rgb; l.dir = lightDir; #ifndef USING_DIRECTIONAL_LIGHT l.dir = NormalizePerPixelNormal(l.dir); #endif l.ndotl = LambertTerm (normalWorld, l.dir); // shadow the light l.color *= atten; return l; } UnityLight DummyLight (half3 normalWorld) { UnityLight l; l.color = 0; l.dir = half3 (0,1,0); l.ndotl = LambertTerm (normalWorld, l.dir); return l; } UnityIndirect ZeroIndirect () { UnityIndirect ind; ind.diffuse = 0; ind.specular = 0; return ind; } //------------------------------------------------------------------------------------- // Common fragment setup half3 WorldNormal(half4 tan2world[3]) { return normalize(tan2world[2].xyz); } #ifdef _TANGENT_TO_WORLD half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3]) { half3 t = tan2world[0].xyz; half3 b = tan2world[1].xyz; half3 n = tan2world[2].xyz; #if UNITY_TANGENT_ORTHONORMALIZE n = NormalizePerPixelNormal(n); // ortho-normalize Tangent t = normalize (t - n * dot(t, n)); // recalculate Binormal half3 newB = cross(n, t); b = newB * sign (dot (newB, b)); #endif return half3x3(t, b, n); } #else half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3]) { return half3x3(0,0,0,0,0,0,0,0,0); } #endif #ifdef _PARALLAXMAP #define IN_VIEWDIR4PARALLAX(i) 
NormalizePerPixelNormal(half3(i.tangentToWorldAndParallax[0].w,i.tangentToWorldAndParallax[1].w,i.tangentToWorldAndParallax[2].w)) #define IN_VIEWDIR4PARALLAX_FWDADD(i) NormalizePerPixelNormal(i.viewDirForParallax.xyz) #else #define IN_VIEWDIR4PARALLAX(i) half3(0,0,0) #define IN_VIEWDIR4PARALLAX_FWDADD(i) half3(0,0,0) #endif #if UNITY_SPECCUBE_BOX_PROJECTION #define IN_WORLDPOS(i) i.posWorld #else #define IN_WORLDPOS(i) half3(0,0,0) #endif #define IN_LIGHTDIR_FWDADD(i) half3(i.tangentToWorldAndLightDir[0].w, i.tangentToWorldAndLightDir[1].w, i.tangentToWorldAndLightDir[2].w) #define FRAGMENT_SETUP(x) FragmentCommonData x = \ FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndParallax), IN_VIEWDIR4PARALLAX(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndParallax), IN_WORLDPOS(i), i.pos.xy); #define FRAGMENT_SETUP_FWDADD(x) FragmentCommonData x = \ FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndLightDir), IN_VIEWDIR4PARALLAX_FWDADD(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndLightDir), half3(0,0,0), i.pos.xy); struct FragmentCommonData { half3 diffColor, specColor; // Note: oneMinusRoughness & oneMinusReflectivity for optimization purposes, mostly for DX9 SM2.0 level. // Most of the math is being done on these (1-x) values, and that saves a few precious ALU slots. half oneMinusReflectivity, oneMinusRoughness; half3 normalWorld, eyeVec, posWorld; half alpha; }; #ifndef UNITY_SETUP_BRDF_INPUT #define UNITY_SETUP_BRDF_INPUT SpecularSetup #endif inline FragmentCommonData SpecularSetup (float4 i_tex) { half4 specGloss = SpecularGloss(i_tex.xy); half3 specColor = specGloss.rgb; half oneMinusRoughness = specGloss.a; #ifdef SMOOTHNESS_IN_ALBEDO half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness); #else half3 albedo = Albedo(i_tex); #endif half oneMinusReflectivity; half3 diffColor = EnergyConservationBetweenDiffuseAndSpecular (albedo, specColor, /*out*/ oneMinusReflectivity); FragmentCommonData o = (FragmentCommonData)0; o.diffColor = diffColor; o.specColor = specColor; o.oneMinusReflectivity = oneMinusReflectivity; o.oneMinusRoughness = oneMinusRoughness; return o; } inline FragmentCommonData MetallicSetup (float4 i_tex) { half2 metallicGloss = MetallicGloss(i_tex.xy); half metallic = metallicGloss.x; half oneMinusRoughness = metallicGloss.y; #ifdef SMOOTHNESS_IN_ALBEDO half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness); #else half3 albedo = Albedo(i_tex); #endif half oneMinusReflectivity; half3 specColor; half3 diffColor = DiffuseAndSpecularFromMetallic (albedo, metallic, /*out*/ specColor, /*out*/ oneMinusReflectivity); FragmentCommonData o = (FragmentCommonData)0; o.diffColor = diffColor; o.specColor = specColor; o.oneMinusReflectivity = oneMinusReflectivity; o.oneMinusRoughness = oneMinusRoughness; return o; } inline FragmentCommonData FragmentSetup (float4 i_tex, half3 i_eyeVec, half3 i_normalWorld, half3 i_viewDirForParallax, half3x3 i_tanToWorld, half3 i_posWorld, float2 iPos) { i_tex = Parallax(i_tex, i_viewDirForParallax); half alpha = Alpha(i_tex.xy); #if defined(_ALPHATEST_ON) clip (alpha - _Cutoff); #endif #ifdef _NORMALMAP half3 normalWorld = NormalizePerPixelNormal(mul(NormalInTangentSpace(i_tex), i_tanToWorld)); // @TODO: see if we can squeeze this normalize on SM2.0 as well #else // Should get compiled out, isn't being used in the end. 
half3 normalWorld = i_normalWorld; #endif half3 eyeVec = i_eyeVec; eyeVec = NormalizePerPixelNormal(eyeVec); FragmentCommonData o = UNITY_SETUP_BRDF_INPUT (i_tex); o.normalWorld = normalWorld; o.eyeVec = eyeVec; o.posWorld = i_posWorld; // NOTE: shader relies on pre-multiply alpha-blend (_SrcBlend = One, _DstBlend = OneMinusSrcAlpha) o.diffColor = PreMultiplyAlpha (o.diffColor, alpha, o.oneMinusReflectivity, /*out*/ o.alpha); return o; } inline UnityGI FragmentGI ( float3 posWorld, half occlusion, half4 i_ambientOrLightmapUV, half atten, half oneMinusRoughness, half3 normalWorld, half3 eyeVec, UnityLight light ) { UnityGI d; ResetUnityGI(d); d.light = light; d.worldPos = posWorld; d.worldViewDir = -eyeVec; d.atten = atten; #if defined(LIGHTMAP_ON) || defined(DYNAMICLIGHTMAP_ON) d.ambient = 0; d.lightmapUV = i_ambientOrLightmapUV; #else d.ambient = i_ambientOrLightmapUV.rgb; d.lightmapUV = 0; #endif //change the above code with this #if UNITY_SPECCUBE_BLENDING || UNITY_SPECCUBE_BOX_PROJECTION d.boxMin[0] = unity_SpecCube0_BoxMin; d.boxMin[1] = unity_SpecCube1_BoxMin; #endif #if UNITY_SPECCUBE_BOX_PROJECTION d.boxMax[0] = unity_SpecCube0_BoxMax; d.boxMax[1] = unity_SpecCube1_BoxMax; d.probePosition[0] = unity_SpecCube0_ProbePosition; d.probePosition[1] = unity_SpecCube1_ProbePosition; #endif //lets change the code //d.boxMax[0] = unity_SpecCube0_BoxMax; //d.boxMin[0] = unity_SpecCube0_BoxMin; //d.probePosition[0] = unity_SpecCube0_ProbePosition; //d.probeHDR[0] = unity_SpecCube0_HDR; //d.boxMax[1] = unity_SpecCube1_BoxMax; //d.boxMin[1] = unity_SpecCube1_BoxMin; //d.probePosition[1] = unity_SpecCube1_ProbePosition; //d.probeHDR[1] = unity_SpecCube1_HDR; return UnityGlobalIllumination( d, occlusion, oneMinusRoughness, normalWorld); } //------------------------------------------------------------------------------------- half4 OutputForward (half4 output, half alphaFromSurface) { #if defined(_ALPHABLEND_ON) || defined(_ALPHAPREMULTIPLY_ON) output.a = alphaFromSurface; #else UNITY_OPAQUE_ALPHA(output.a); #endif return output; } // ------------------------------------------------------------------ // Base forward pass (directional light, emission, lightmaps, ...) 
struct VertexOutputForwardBase { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax] half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UV SHADOW_COORDS(6) UNITY_FOG_COORDS(7) // next ones would not fit into SM2.0 limits, but they are always for SM3.0+ #if UNITY_SPECCUBE_BOX_PROJECTION float3 posWorld : TEXCOORD8; #endif }; VertexOutputForwardBase vertForwardBase (VertexInput v) { VertexOutputForwardBase o; UNITY_INITIALIZE_OUTPUT(VertexOutputForwardBase, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); #if UNITY_SPECCUBE_BOX_PROJECTION o.posWorld = posWorld.xyz; #endif o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0]; o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1]; o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndParallax[0].xyz = 0; o.tangentToWorldAndParallax[1].xyz = 0; o.tangentToWorldAndParallax[2].xyz = normalWorld; #endif //We need this for shadow receving TRANSFER_SHADOW(o); // Static lightmaps #ifndef LIGHTMAP_OFF o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw; o.ambientOrLightmapUV.zw = 0; // Sample light probe for Dynamic objects only (no static or dynamic lightmaps) #elif UNITY_SHOULD_SAMPLE_SH #if UNITY_SAMPLE_FULL_SH_PER_PIXEL o.ambientOrLightmapUV.rgb = 0; #elif (SHADER_TARGET < 30) o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0)); #else // Optimization: L2 per-vertex, L0..L1 per-pixel o.ambientOrLightmapUV.rgb = ShadeSH3Order(half4(normalWorld, 1.0)); #endif // Add approximated illumination from non-important point lights #ifdef VERTEXLIGHT_ON o.ambientOrLightmapUV.rgb += Shade4PointLights ( unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0, unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb, unity_4LightAtten0, posWorld, normalWorld); #endif #endif #ifdef DYNAMICLIGHTMAP_ON o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw; #endif #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); o.tangentToWorldAndParallax[0].w = viewDirForParallax.x; o.tangentToWorldAndParallax[1].w = viewDirForParallax.y; o.tangentToWorldAndParallax[2].w = viewDirForParallax.z; #endif UNITY_TRANSFER_FOG(o,o.pos); return o; } half4 fragForwardBase (VertexOutputForwardBase i, float face : VFACE) : SV_Target { // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndParallax[2].xyz *= face; FRAGMENT_SETUP(s) UnityLight mainLight = MainLight (s.normalWorld); half atten = SHADOW_ATTENUATION(i); half occlusion = Occlusion(i.tex.xy); UnityGI gi = FragmentGI ( s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, mainLight); half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect); c.rgb += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, 
s.normalWorld, -s.eyeVec, occlusion, gi); c.rgb += Emission(i.tex.xy); UNITY_APPLY_FOG(i.fogCoord, c.rgb); return OutputForward (c, s.alpha); } // ------------------------------------------------------------------ // Additive forward pass (one light per pass) struct VertexOutputForwardAdd { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndLightDir[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:lightDir] LIGHTING_COORDS(5,6) UNITY_FOG_COORDS(7) // next ones would not fit into SM2.0 limits, but they are always for SM3.0+ #if defined(_PARALLAXMAP) half3 viewDirForParallax : TEXCOORD8; #endif }; VertexOutputForwardAdd vertForwardAdd (VertexInput v) { VertexOutputForwardAdd o; UNITY_INITIALIZE_OUTPUT(VertexOutputForwardAdd, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndLightDir[0].xyz = tangentToWorld[0]; o.tangentToWorldAndLightDir[1].xyz = tangentToWorld[1]; o.tangentToWorldAndLightDir[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndLightDir[0].xyz = 0; o.tangentToWorldAndLightDir[1].xyz = 0; o.tangentToWorldAndLightDir[2].xyz = normalWorld; #endif //We need this for shadow receving TRANSFER_VERTEX_TO_FRAGMENT(o); float3 lightDir = _WorldSpaceLightPos0.xyz - posWorld.xyz * _WorldSpaceLightPos0.w; #ifndef USING_DIRECTIONAL_LIGHT lightDir = NormalizePerVertexNormal(lightDir); #endif o.tangentToWorldAndLightDir[0].w = lightDir.x; o.tangentToWorldAndLightDir[1].w = lightDir.y; o.tangentToWorldAndLightDir[2].w = lightDir.z; #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; o.viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); #endif UNITY_TRANSFER_FOG(o,o.pos); return o; } half4 fragForwardAdd (VertexOutputForwardAdd i, float face : VFACE) : SV_Target { // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndLightDir[2].xyz *= face; FRAGMENT_SETUP_FWDADD(s) UnityLight light = AdditiveLight (s.normalWorld, IN_LIGHTDIR_FWDADD(i), LIGHT_ATTENUATION(i)); UnityIndirect noIndirect = ZeroIndirect (); half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, light, noIndirect); UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass return OutputForward (c, s.alpha); } // ------------------------------------------------------------------ // Deferred pass struct VertexOutputDeferred { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; half3 eyeVec : TEXCOORD1; half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax] half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UVs #if UNITY_SPECCUBE_BOX_PROJECTION float3 posWorld : TEXCOORD6; #endif }; VertexOutputDeferred vertDeferred (VertexInput v) { VertexOutputDeferred o; UNITY_INITIALIZE_OUTPUT(VertexOutputDeferred, o); float4 posWorld = mul(unity_ObjectToWorld, v.vertex); #if UNITY_SPECCUBE_BOX_PROJECTION o.posWorld = posWorld.xyz; #endif o.pos = UnityObjectToClipPos(v.vertex); o.tex = TexCoords(v); o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos); float3 normalWorld = 
UnityObjectToWorldNormal(v.normal); #ifdef _TANGENT_TO_WORLD float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w); float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w); o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0]; o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1]; o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2]; #else o.tangentToWorldAndParallax[0].xyz = 0; o.tangentToWorldAndParallax[1].xyz = 0; o.tangentToWorldAndParallax[2].xyz = normalWorld; #endif #ifndef LIGHTMAP_OFF o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw; o.ambientOrLightmapUV.zw = 0; #elif UNITY_SHOULD_SAMPLE_SH #if (SHADER_TARGET < 30) o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0)); #else // Optimization: L2 per-vertex, L0..L1 per-pixel o.ambientOrLightmapUV.rgb = ShadeSH3Order(half4(normalWorld, 1.0)); #endif #endif #ifdef DYNAMICLIGHTMAP_ON o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw; #endif #ifdef _PARALLAXMAP TANGENT_SPACE_ROTATION; half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex)); o.tangentToWorldAndParallax[0].w = viewDirForParallax.x; o.tangentToWorldAndParallax[1].w = viewDirForParallax.y; o.tangentToWorldAndParallax[2].w = viewDirForParallax.z; #endif return o; } void fragDeferred ( VertexOutputDeferred i, out half4 outDiffuse : SV_Target0, // RT0: diffuse color (rgb), occlusion (a) out half4 outSpecSmoothness : SV_Target1, // RT1: spec color (rgb), smoothness (a) out half4 outNormal : SV_Target2, // RT2: normal (rgb), --unused, very low precision-- (a) out half4 outEmission : SV_Target3, // RT3: emission (rgb), --unused-- (a) float face : VFACE ) { #if (SHADER_TARGET < 30) outDiffuse = 1; outSpecSmoothness = 1; outNormal = 0; outEmission = 0; return; #endif // Experimental normal flipping if(_CullMode < 0.5f) i.tangentToWorldAndParallax[2].xyz *= face; FRAGMENT_SETUP(s) // no analytic lights in this pass UnityLight dummyLight = DummyLight (s.normalWorld); half atten = 1; half occlusion = Occlusion(i.tex.xy); // only GI UnityGI gi = FragmentGI ( s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, dummyLight); half3 color = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect).rgb; color += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, occlusion, gi); #ifdef _EMISSION color += Emission (i.tex.xy); #endif #ifndef UNITY_HDR_ON color.rgb = exp2(-color.rgb); #endif outDiffuse = half4(s.diffColor, occlusion); outSpecSmoothness = half4(s.specColor, s.oneMinusRoughness); outNormal = half4(s.normalWorld*0.5+0.5,1); outEmission = half4(color, 1); } #endif // UNITY_STANDARD_CORE_INCLUDED I really don't know what is happening there i've been stuck there for 2 days.