zer0force

Member
  • Content count

    19
Community Reputation

144 Neutral

1 Follower

About zer0force

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Programming
  1. Second, you use the Unity command line to execute the build: C:\program files\Unity\Editor\Unity.exe -quit -batchmode -executeMethod MyEditorScript.PerformBuild. I think if you include reading a JSON text file in your "MyEditorScript" method, you can customize the build further, e.g. with a "build.info" file like:
    {"platform":"Windows","scenes":"Scene1|Scene2|Scene3"}
    {"platform":"Mac","scenes":"Scene4|Scene5|Scene6"}
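A minimal sketch of what such a "MyEditorScript" build method could look like, assuming the one-JSON-object-per-line build.info format above; the class layout, file path, and output folder are illustrative, not part of the original post:

```csharp
// Hypothetical editor script -- a sketch only. Requires the Unity editor;
// BuildInfo and the "Assets/build.info" path are assumptions for illustration.
using System.IO;
using UnityEditor;
using UnityEngine;

[System.Serializable]
public class BuildInfo
{
    public string platform; // e.g. "Windows" or "Mac"
    public string scenes;   // pipe-separated scene paths, e.g. "Scene1|Scene2"
}

public class MyEditorScript
{
    public static void PerformBuild()
    {
        // One JSON object per line, as in the build.info example above.
        foreach (string line in File.ReadAllLines("Assets/build.info"))
        {
            BuildInfo info = JsonUtility.FromJson<BuildInfo>(line);
            string[] scenes = info.scenes.Split('|');

            BuildTarget target = info.platform == "Mac"
                ? BuildTarget.StandaloneOSX
                : BuildTarget.StandaloneWindows64;

            // Build one player per entry into a per-platform folder.
            BuildPipeline.BuildPlayer(scenes, "Builds/" + info.platform,
                                      target, BuildOptions.None);
        }
    }
}
```

Invoked via `-executeMethod MyEditorScript.PerformBuild` as shown above, this loops over every platform entry in one batch-mode run.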
  2. I've never done it myself, but this should work: https://docs.unity3d.com/Manual/BuildPlayerPipeline.html For example, it would allow you to create one method for each platform and pass the related scenes to it.
  3. Why A.I is impossible

    Gravitation seems to be a force between two bodies, but that is a fallacy. Gravitation is an effect of the topology of space: it is determined by the topology of four-dimensional (or higher-dimensional) space. Masses bend the four-dimensional space around them and thereby indirectly create the effect of gravity; gravity is thus only an indirect side effect of that topology. The even more interesting question is what mass, or matter, actually is, because matter (or moving matter) is what bends the space. A partial answer is that matter is energy, per E=mc2, but that does not explain the structure of matter. Even today's subatomic particle model explains the structure of matter only insufficiently (see quantum physics).
  4. Why A.I is impossible

    I wanted to add an interesting point. What is intelligence in relation to the brain? In my experience (including with neural networks), it is the ability to represent the outside world in miniature form inside the brain. That sounds strange, but once you think about it, many conclusions follow. The better the brain can "image" the outside world, the better it can predict future events, and this is what brought forth the evolutionary advantage of human intelligence: you could respond to a danger in the outside world before it took place. Of course, a brain can never map the outside world perfectly, but it does not have to. It only has to map the outside world well enough that reliable predictions can be made. If one accepts that our brain holds an image of the outside world, all that remains to clarify is how the brain produces reliable predictions from this image and how the image is created in the first place.
  5. Why A.I is impossible

    That's not true; experiments with animals can now show whether they have a self-image or not. There are animals (some species of birds, monkeys, and dolphins) that recognize themselves in a mirror, and this clearly indicates that they have a sense of self-awareness. It is therefore possible to identify by observation whether an animal recognizes itself in the mirror or not. Self-awareness, in my opinion, only marks a more advanced intelligence. But once again, an AI does not need self-awareness or a "soul".
  6. Unity Cloud shadows on a terrain

    Thank you for your hint, I'll give it a try. I've already tried a few; some dazzle with too much advertising, others are too slow or simply crash.
  7. Why A.I is impossible

    On the other hand, where is it written that an AI needs consciousness? That alone makes this point of the discussion partly obsolete. The question should have been: can an AI have consciousness?
  8. Why A.I is impossible

    I think you are right in the case of human-like AI, because appropriate processors are still lacking: current processors are digital, but you would need analog processors in conjunction with analog nano-memories. Not only are digital processors slowly reaching their performance limits, they are also far too slow and consume too much energy. In addition, true neural networks are, in my opinion, an interconnection of millions if not billions of such processors. This is an old article, but still right in my opinion: https://www.wired.com/2012/08/upside/
  9. Hi there, here is a little experiment I wanted to show. I developed this controller as help for a guy from the German Unity forum. He wanted a first-person view controller (with a third-person character visible in the background) built on the Unity CC (CharacterController) component as a base. It's a completely different style of controller, shown only to demonstrate what's possible; in the end, I liked its dynamics. It was also an exercise in IK and in using animations in Unity again. For this I replaced the original first-person controller of Unity with the "Ethan" character model. There were some problems, among other things because this controller uses the CC as a component: if you simply put the axe in Ethan's hand, the weapon swings through the walls when he strikes. To solve this, we added a backspin to the character on collision. Additionally, I mixed a slay animation, an axe animation, and IK. We had started completely without a slay animation, because the movement of the axe was controlled by a self-created axe animation, and Ethan's hand was held in position via IK. I just did not really like the mix of idle animation and axe animation, so I added a slay animation to Ethan. For a hit, the slay animation and the axe animation are now blended: the hand follows the axe animation and Ethan plays a suitable slay animation. This allowed us to better control the position of the axe in front of the player. I also had problems with clipping of Ethan's body; for this I had to slightly modify the original controller so that the camera always sits nicely in front of his face. In addition, the camera is now partly driven by a bone of the player animation, which in my view creates a very nice dynamic effect: Video: https://streamable.com/4xvx5
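The backspin-on-collision idea above could be sketched roughly like this, assuming the Unity CharacterController component; the component name, field names, and push strength are illustrative, not the original implementation:

```csharp
// Sketch only -- pushes the character back along the wall normal on contact,
// so an axe swing can no longer reach through the geometry. Requires Unity.
using UnityEngine;

public class AxeClipGuard : MonoBehaviour
{
    public float pushBack = 2f;          // push strength, tuned by hand (assumed value)
    private CharacterController cc;
    private Vector3 pushVelocity;

    void Start()
    {
        cc = GetComponent<CharacterController>();
    }

    // Called by Unity whenever the CharacterController hits a collider.
    void OnControllerColliderHit(ControllerColliderHit hit)
    {
        // Remember a push-back impulse along the surface normal.
        pushVelocity = hit.normal * pushBack;
    }

    void Update()
    {
        // Apply and decay the push-back over a few frames.
        if (pushVelocity.sqrMagnitude > 0.01f)
        {
            cc.Move(pushVelocity * Time.deltaTime);
            pushVelocity = Vector3.Lerp(pushVelocity, Vector3.zero, 5f * Time.deltaTime);
        }
    }
}
```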
  10. Hi all, just a little demo of cloud shadows on my current terrain. The weather system I'm using had not implemented this feature yet, so I had to do it myself. My terrain still has no grass details or trees, but that comes next. I also added a little music composition from our composer Daniel Larsen. Additionally, I've incorporated head (and body) tracking of the character toward the camera target. Here's the result, have fun: Video: https://streamable.com/68h6h
  11. I meant the Unity engine, and therefore my statement above was correct as far as I know.
  12. As far as I know, Unity doesn't use Flash anymore; Flash support was removed in Unity 5. https://www.gamasutra.com/view/news/191112/Unity_drops_Flash_support_says_Adobe_is_not_firmly_committed.php As far as I know, it was never used for UI; it was used for deploying a game to a browser platform.
  13. I would add this: the three laws are immutable, and no new laws can be derived from them. Besides, I would extend the first law with: "A robot may not, through indirect actions, cause harm to a human being."
  14. Hello everybody, today someone in the German Unity forum asked how best to pick up an item in Unity: which animations have to be played, and how does one make the character actually pick up the item? The following approach was discussed: I would play an animation that is similar to a pickup animation; if necessary, you can also just take an idle animation. In addition, I would use IK to move the left or right hand to the target and move the body (also via IK) toward the target. If you do it completely via IK, the character bends almost entirely from the hip and folds the upper body over. I suppose for a better animation you should find one in which the character goes down on his knees and then put the IK on top of it. I ended up using a pickup animation. The animation starts by pressing a key, creating a transition from the idle animation to the pickup animation; after that, IK is activated. The IK bends the entire body in the direction of the pickup target, and additionally the arm bone is moved to the pickup target. I've fixed a few minor errors in grip accuracy and animations. Here's the newest version: Video: https://streamable.com/m10p2 Image:
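The pickup-IK step described above could look roughly like this, assuming a humanoid Animator with "IK Pass" enabled on the layer; the component name and the pickupTarget/ikWeight fields are illustrative, not the original code:

```csharp
// Sketch only -- once the pickup animation is playing, ramp ikWeight up to 1
// so the hand reaches the item and the upper body bends toward it. Requires Unity.
using UnityEngine;

public class PickupIK : MonoBehaviour
{
    public Transform pickupTarget;            // the item to grab (assumed field)
    [Range(0f, 1f)] public float ikWeight;    // ramped up when the pickup animation starts
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Called by Unity during the Animator's IK pass.
    void OnAnimatorIK(int layerIndex)
    {
        if (pickupTarget == null) return;

        // Move the right hand onto the pickup target...
        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, ikWeight);
        animator.SetIKPosition(AvatarIKGoal.RightHand, pickupTarget.position);

        // ...and bend the body toward it as well (body weight 0.5, head weight 0.8).
        animator.SetLookAtWeight(ikWeight, 0.5f, 0.8f);
        animator.SetLookAtPosition(pickupTarget.position);
    }
}
```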
  15. Here's my full shader code; it's from the Blacksmith Hair shader example from Unity, perhaps you should download the latest version:

// Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld'
// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'

#ifndef UNITY_STANDARD_CORE_INCLUDED
#define UNITY_STANDARD_CORE_INCLUDED

#include "Volund_UnityStandardInput.cginc"
#include "UnityCG.cginc"
#include "UnityShaderVariables.cginc"
#include "UnityStandardConfig.cginc"
#include "UnityPBSLighting.cginc"
#include "UnityStandardUtils.cginc"
#include "UnityStandardBRDF.cginc"
#include "AutoLight.cginc"

#if defined(ORTHONORMALIZE_TANGENT_BASE)
    #undef UNITY_TANGENT_ORTHONORMALIZE
    #define UNITY_TANGENT_ORTHONORMALIZE 1
#endif

//-------------------------------------------------------------------------------------
// counterpart for NormalizePerPixelNormal
// skips normalization per-vertex and expects normalization to happen per-pixel
half3 NormalizePerVertexNormal (half3 n)
{
    #if (SHADER_TARGET < 30)
        return normalize(n);
    #else
        return n; // will normalize per-pixel instead
    #endif
}

half3 NormalizePerPixelNormal (half3 n)
{
    #if (SHADER_TARGET < 30)
        return n;
    #else
        return normalize(n);
    #endif
}

//-------------------------------------------------------------------------------------
UnityLight MainLight (half3 normalWorld)
{
    UnityLight l;
    #ifdef LIGHTMAP_OFF
        l.color = _LightColor0.rgb;
        l.dir = _WorldSpaceLightPos0.xyz;
        l.ndotl = LambertTerm (normalWorld, l.dir);
    #else
        // no light specified by the engine
        // analytical light might be extracted from Lightmap data later on in the shader depending on the Lightmap type
        l.color = half3(0.f, 0.f, 0.f);
        l.ndotl = 0.f;
        l.dir = half3(0.f, 0.f, 0.f);
    #endif

    return l;
}

UnityLight AdditiveLight (half3 normalWorld, half3 lightDir, half atten)
{
    UnityLight l;

    l.color = _LightColor0.rgb;
    l.dir = lightDir;
    #ifndef USING_DIRECTIONAL_LIGHT
        l.dir = NormalizePerPixelNormal(l.dir);
    #endif
    l.ndotl = LambertTerm (normalWorld, l.dir);

    // shadow the light
    l.color *= atten;

    return l;
}

UnityLight DummyLight (half3 normalWorld)
{
    UnityLight l;
    l.color = 0;
    l.dir = half3 (0,1,0);
    l.ndotl = LambertTerm (normalWorld, l.dir);
    return l;
}

UnityIndirect ZeroIndirect ()
{
    UnityIndirect ind;
    ind.diffuse = 0;
    ind.specular = 0;
    return ind;
}

//-------------------------------------------------------------------------------------
// Common fragment setup
half3 WorldNormal(half4 tan2world[3])
{
    return normalize(tan2world[2].xyz);
}

#ifdef _TANGENT_TO_WORLD
    half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3])
    {
        half3 t = tan2world[0].xyz;
        half3 b = tan2world[1].xyz;
        half3 n = tan2world[2].xyz;

    #if UNITY_TANGENT_ORTHONORMALIZE
        n = NormalizePerPixelNormal(n);

        // ortho-normalize Tangent
        t = normalize (t - n * dot(t, n));

        // recalculate Binormal
        half3 newB = cross(n, t);
        b = newB * sign (dot (newB, b));
    #endif

        return half3x3(t, b, n);
    }
#else
    half3x3 ExtractTangentToWorldPerPixel(half4 tan2world[3])
    {
        return half3x3(0,0,0,0,0,0,0,0,0);
    }
#endif

#ifdef _PARALLAXMAP
    #define IN_VIEWDIR4PARALLAX(i) NormalizePerPixelNormal(half3(i.tangentToWorldAndParallax[0].w,i.tangentToWorldAndParallax[1].w,i.tangentToWorldAndParallax[2].w))
    #define IN_VIEWDIR4PARALLAX_FWDADD(i) NormalizePerPixelNormal(i.viewDirForParallax.xyz)
#else
    #define IN_VIEWDIR4PARALLAX(i) half3(0,0,0)
    #define IN_VIEWDIR4PARALLAX_FWDADD(i) half3(0,0,0)
#endif

#if UNITY_SPECCUBE_BOX_PROJECTION
    #define IN_WORLDPOS(i) i.posWorld
#else
    #define IN_WORLDPOS(i) half3(0,0,0)
#endif

#define IN_LIGHTDIR_FWDADD(i) half3(i.tangentToWorldAndLightDir[0].w, i.tangentToWorldAndLightDir[1].w, i.tangentToWorldAndLightDir[2].w)

#define FRAGMENT_SETUP(x) FragmentCommonData x = \
    FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndParallax), IN_VIEWDIR4PARALLAX(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndParallax), IN_WORLDPOS(i), i.pos.xy);

#define FRAGMENT_SETUP_FWDADD(x) FragmentCommonData x = \
    FragmentSetup(i.tex, i.eyeVec, WorldNormal(i.tangentToWorldAndLightDir), IN_VIEWDIR4PARALLAX_FWDADD(i), ExtractTangentToWorldPerPixel(i.tangentToWorldAndLightDir), half3(0,0,0), i.pos.xy);

struct FragmentCommonData
{
    half3 diffColor, specColor;
    // Note: oneMinusRoughness & oneMinusReflectivity for optimization purposes, mostly for DX9 SM2.0 level.
    // Most of the math is being done on these (1-x) values, and that saves a few precious ALU slots.
    half oneMinusReflectivity, oneMinusRoughness;
    half3 normalWorld, eyeVec, posWorld;
    half alpha;
};

#ifndef UNITY_SETUP_BRDF_INPUT
    #define UNITY_SETUP_BRDF_INPUT SpecularSetup
#endif

inline FragmentCommonData SpecularSetup (float4 i_tex)
{
    half4 specGloss = SpecularGloss(i_tex.xy);
    half3 specColor = specGloss.rgb;
    half oneMinusRoughness = specGloss.a;

#ifdef SMOOTHNESS_IN_ALBEDO
    half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness);
#else
    half3 albedo = Albedo(i_tex);
#endif

    half oneMinusReflectivity;
    half3 diffColor = EnergyConservationBetweenDiffuseAndSpecular (albedo, specColor, /*out*/ oneMinusReflectivity);

    FragmentCommonData o = (FragmentCommonData)0;
    o.diffColor = diffColor;
    o.specColor = specColor;
    o.oneMinusReflectivity = oneMinusReflectivity;
    o.oneMinusRoughness = oneMinusRoughness;
    return o;
}

inline FragmentCommonData MetallicSetup (float4 i_tex)
{
    half2 metallicGloss = MetallicGloss(i_tex.xy);
    half metallic = metallicGloss.x;
    half oneMinusRoughness = metallicGloss.y;

#ifdef SMOOTHNESS_IN_ALBEDO
    half3 albedo = Albedo(i_tex, /*out*/ oneMinusRoughness);
#else
    half3 albedo = Albedo(i_tex);
#endif

    half oneMinusReflectivity;
    half3 specColor;
    half3 diffColor = DiffuseAndSpecularFromMetallic (albedo, metallic, /*out*/ specColor, /*out*/ oneMinusReflectivity);

    FragmentCommonData o = (FragmentCommonData)0;
    o.diffColor = diffColor;
    o.specColor = specColor;
    o.oneMinusReflectivity = oneMinusReflectivity;
    o.oneMinusRoughness = oneMinusRoughness;
    return o;
}

inline FragmentCommonData FragmentSetup (float4 i_tex, half3 i_eyeVec, half3 i_normalWorld, half3 i_viewDirForParallax, half3x3 i_tanToWorld, half3 i_posWorld, float2 iPos)
{
    i_tex = Parallax(i_tex, i_viewDirForParallax);

    half alpha = Alpha(i_tex.xy);
    #if defined(_ALPHATEST_ON)
        clip (alpha - _Cutoff);
    #endif

    #ifdef _NORMALMAP
        half3 normalWorld = NormalizePerPixelNormal(mul(NormalInTangentSpace(i_tex), i_tanToWorld)); // @TODO: see if we can squeeze this normalize on SM2.0 as well
    #else
        // Should get compiled out, isn't being used in the end.
        half3 normalWorld = i_normalWorld;
    #endif

    half3 eyeVec = i_eyeVec;
    eyeVec = NormalizePerPixelNormal(eyeVec);

    FragmentCommonData o = UNITY_SETUP_BRDF_INPUT (i_tex);
    o.normalWorld = normalWorld;
    o.eyeVec = eyeVec;
    o.posWorld = i_posWorld;

    // NOTE: shader relies on pre-multiply alpha-blend (_SrcBlend = One, _DstBlend = OneMinusSrcAlpha)
    o.diffColor = PreMultiplyAlpha (o.diffColor, alpha, o.oneMinusReflectivity, /*out*/ o.alpha);
    return o;
}

inline UnityGI FragmentGI (
    float3 posWorld,
    half occlusion, half4 i_ambientOrLightmapUV, half atten, half oneMinusRoughness, half3 normalWorld, half3 eyeVec,
    UnityLight light)
{
    UnityGIInput d;
    d.light = light;
    d.worldPos = posWorld;
    d.worldViewDir = -eyeVec;
    d.atten = atten;
    #if defined(LIGHTMAP_ON) || defined(DYNAMICLIGHTMAP_ON)
        d.ambient = 0;
        d.lightmapUV = i_ambientOrLightmapUV;
    #else
        d.ambient = i_ambientOrLightmapUV.rgb;
        d.lightmapUV = 0;
    #endif
    d.boxMax[0] = unity_SpecCube0_BoxMax;
    d.boxMin[0] = unity_SpecCube0_BoxMin;
    d.probePosition[0] = unity_SpecCube0_ProbePosition;
    d.probeHDR[0] = unity_SpecCube0_HDR;

    d.boxMax[1] = unity_SpecCube1_BoxMax;
    d.boxMin[1] = unity_SpecCube1_BoxMin;
    d.probePosition[1] = unity_SpecCube1_ProbePosition;
    d.probeHDR[1] = unity_SpecCube1_HDR;

    return UnityGlobalIllumination (
        d, occlusion, oneMinusRoughness, normalWorld);
}

//-------------------------------------------------------------------------------------
half4 OutputForward (half4 output, half alphaFromSurface)
{
    #if defined(_ALPHABLEND_ON) || defined(_ALPHAPREMULTIPLY_ON)
        output.a = alphaFromSurface;
    #else
        UNITY_OPAQUE_ALPHA(output.a);
    #endif
    return output;
}

// ------------------------------------------------------------------
//  Base forward pass (directional light, emission, lightmaps, ...)

struct VertexOutputForwardBase
{
    float4 pos : SV_POSITION;
    float4 tex : TEXCOORD0;
    half3 eyeVec : TEXCOORD1;
    half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax]
    half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UV
    SHADOW_COORDS(6)
    UNITY_FOG_COORDS(7)

    // next ones would not fit into SM2.0 limits, but they are always for SM3.0+
    #if UNITY_SPECCUBE_BOX_PROJECTION
        float3 posWorld : TEXCOORD8;
    #endif
};

VertexOutputForwardBase vertForwardBase (VertexInput v)
{
    VertexOutputForwardBase o;
    UNITY_INITIALIZE_OUTPUT(VertexOutputForwardBase, o);

    float4 posWorld = mul(unity_ObjectToWorld, v.vertex);
    #if UNITY_SPECCUBE_BOX_PROJECTION
        o.posWorld = posWorld.xyz;
    #endif
    o.pos = UnityObjectToClipPos(v.vertex);
    o.tex = TexCoords(v);
    o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos);
    float3 normalWorld = UnityObjectToWorldNormal(v.normal);
    #ifdef _TANGENT_TO_WORLD
        float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w);

        float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w);
        o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0];
        o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1];
        o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2];
    #else
        o.tangentToWorldAndParallax[0].xyz = 0;
        o.tangentToWorldAndParallax[1].xyz = 0;
        o.tangentToWorldAndParallax[2].xyz = normalWorld;
    #endif
    // We need this for shadow receiving
    TRANSFER_SHADOW(o);

    // Static lightmaps
    #ifndef LIGHTMAP_OFF
        o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw;
        o.ambientOrLightmapUV.zw = 0;
    // Sample light probe for Dynamic objects only (no static or dynamic lightmaps)
    #elif UNITY_SHOULD_SAMPLE_SH
        #if UNITY_SAMPLE_FULL_SH_PER_PIXEL
            o.ambientOrLightmapUV.rgb = 0;
        #elif (SHADER_TARGET < 30)
            o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0));
        #else
            // Optimization: L2 per-vertex, L0..L1 per-pixel
            o.ambientOrLightmapUV.rgb = ShadeSH3Order(half4(normalWorld, 1.0));
        #endif
        // Add approximated illumination from non-important point lights
        #ifdef VERTEXLIGHT_ON
            o.ambientOrLightmapUV.rgb += Shade4PointLights (
                unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0,
                unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb,
                unity_4LightAtten0, posWorld, normalWorld);
        #endif
    #endif

    #ifdef DYNAMICLIGHTMAP_ON
        o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw;
    #endif

    #ifdef _PARALLAXMAP
        TANGENT_SPACE_ROTATION;
        half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex));
        o.tangentToWorldAndParallax[0].w = viewDirForParallax.x;
        o.tangentToWorldAndParallax[1].w = viewDirForParallax.y;
        o.tangentToWorldAndParallax[2].w = viewDirForParallax.z;
    #endif

    UNITY_TRANSFER_FOG(o,o.pos);
    return o;
}

half4 fragForwardBase (VertexOutputForwardBase i, float face : VFACE) : SV_Target
{
    // Experimental normal flipping
    if(_CullMode < 0.5f)
        i.tangentToWorldAndParallax[2].xyz *= face;

    FRAGMENT_SETUP(s)

    UnityLight mainLight = MainLight (s.normalWorld);
    half atten = SHADOW_ATTENUATION(i);

    half occlusion = Occlusion(i.tex.xy);
    UnityGI gi = FragmentGI (
        s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, mainLight);

    half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect);
    c.rgb += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, occlusion, gi);
    c.rgb += Emission(i.tex.xy);

    UNITY_APPLY_FOG(i.fogCoord, c.rgb);
    return OutputForward (c, s.alpha);
}

// ------------------------------------------------------------------
//  Additive forward pass (one light per pass)

struct VertexOutputForwardAdd
{
    float4 pos : SV_POSITION;
    float4 tex : TEXCOORD0;
    half3 eyeVec : TEXCOORD1;
    half4 tangentToWorldAndLightDir[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:lightDir]
    LIGHTING_COORDS(5,6)
    UNITY_FOG_COORDS(7)

    // next ones would not fit into SM2.0 limits, but they are always for SM3.0+
    #if defined(_PARALLAXMAP)
        half3 viewDirForParallax : TEXCOORD8;
    #endif
};

VertexOutputForwardAdd vertForwardAdd (VertexInput v)
{
    VertexOutputForwardAdd o;
    UNITY_INITIALIZE_OUTPUT(VertexOutputForwardAdd, o);

    float4 posWorld = mul(unity_ObjectToWorld, v.vertex);
    o.pos = UnityObjectToClipPos(v.vertex);
    o.tex = TexCoords(v);
    o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos);
    float3 normalWorld = UnityObjectToWorldNormal(v.normal);
    #ifdef _TANGENT_TO_WORLD
        float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w);

        float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w);
        o.tangentToWorldAndLightDir[0].xyz = tangentToWorld[0];
        o.tangentToWorldAndLightDir[1].xyz = tangentToWorld[1];
        o.tangentToWorldAndLightDir[2].xyz = tangentToWorld[2];
    #else
        o.tangentToWorldAndLightDir[0].xyz = 0;
        o.tangentToWorldAndLightDir[1].xyz = 0;
        o.tangentToWorldAndLightDir[2].xyz = normalWorld;
    #endif
    // We need this for shadow receiving
    TRANSFER_VERTEX_TO_FRAGMENT(o);

    float3 lightDir = _WorldSpaceLightPos0.xyz - posWorld.xyz * _WorldSpaceLightPos0.w;
    #ifndef USING_DIRECTIONAL_LIGHT
        lightDir = NormalizePerVertexNormal(lightDir);
    #endif
    o.tangentToWorldAndLightDir[0].w = lightDir.x;
    o.tangentToWorldAndLightDir[1].w = lightDir.y;
    o.tangentToWorldAndLightDir[2].w = lightDir.z;

    #ifdef _PARALLAXMAP
        TANGENT_SPACE_ROTATION;
        o.viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex));
    #endif

    UNITY_TRANSFER_FOG(o,o.pos);
    return o;
}

half4 fragForwardAdd (VertexOutputForwardAdd i, float face : VFACE) : SV_Target
{
    // Experimental normal flipping
    if(_CullMode < 0.5f)
        i.tangentToWorldAndLightDir[2].xyz *= face;

    FRAGMENT_SETUP_FWDADD(s)

    UnityLight light = AdditiveLight (s.normalWorld, IN_LIGHTDIR_FWDADD(i), LIGHT_ATTENUATION(i));
    UnityIndirect noIndirect = ZeroIndirect ();

    half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, light, noIndirect);

    UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass
    return OutputForward (c, s.alpha);
}

// ------------------------------------------------------------------
//  Deferred pass

struct VertexOutputDeferred
{
    float4 pos : SV_POSITION;
    float4 tex : TEXCOORD0;
    half3 eyeVec : TEXCOORD1;
    half4 tangentToWorldAndParallax[3] : TEXCOORD2; // [3x3:tangentToWorld | 1x3:viewDirForParallax]
    half4 ambientOrLightmapUV : TEXCOORD5; // SH or Lightmap UVs

    #if UNITY_SPECCUBE_BOX_PROJECTION
        float3 posWorld : TEXCOORD6;
    #endif
};

VertexOutputDeferred vertDeferred (VertexInput v)
{
    VertexOutputDeferred o;
    UNITY_INITIALIZE_OUTPUT(VertexOutputDeferred, o);

    float4 posWorld = mul(unity_ObjectToWorld, v.vertex);
    #if UNITY_SPECCUBE_BOX_PROJECTION
        o.posWorld = posWorld.xyz;
    #endif
    o.pos = UnityObjectToClipPos(v.vertex);
    o.tex = TexCoords(v);
    o.eyeVec = NormalizePerVertexNormal(posWorld.xyz - _WorldSpaceCameraPos);
    float3 normalWorld = UnityObjectToWorldNormal(v.normal);
    #ifdef _TANGENT_TO_WORLD
        float4 tangentWorld = float4(UnityObjectToWorldDir(v.tangent.xyz), v.tangent.w);

        float3x3 tangentToWorld = CreateTangentToWorldPerVertex(normalWorld, tangentWorld.xyz, tangentWorld.w);
        o.tangentToWorldAndParallax[0].xyz = tangentToWorld[0];
        o.tangentToWorldAndParallax[1].xyz = tangentToWorld[1];
        o.tangentToWorldAndParallax[2].xyz = tangentToWorld[2];
    #else
        o.tangentToWorldAndParallax[0].xyz = 0;
        o.tangentToWorldAndParallax[1].xyz = 0;
        o.tangentToWorldAndParallax[2].xyz = normalWorld;
    #endif

    #ifndef LIGHTMAP_OFF
        o.ambientOrLightmapUV.xy = v.uv1.xy * unity_LightmapST.xy + unity_LightmapST.zw;
        o.ambientOrLightmapUV.zw = 0;
    #elif UNITY_SHOULD_SAMPLE_SH
        #if (SHADER_TARGET < 30)
            o.ambientOrLightmapUV.rgb = ShadeSH9(half4(normalWorld, 1.0));
        #else
            // Optimization: L2 per-vertex, L0..L1 per-pixel
            o.ambientOrLightmapUV.rgb = ShadeSH3Order(half4(normalWorld, 1.0));
        #endif
    #endif

    #ifdef DYNAMICLIGHTMAP_ON
        o.ambientOrLightmapUV.zw = v.uv2.xy * unity_DynamicLightmapST.xy + unity_DynamicLightmapST.zw;
    #endif

    #ifdef _PARALLAXMAP
        TANGENT_SPACE_ROTATION;
        half3 viewDirForParallax = mul (rotation, ObjSpaceViewDir(v.vertex));
        o.tangentToWorldAndParallax[0].w = viewDirForParallax.x;
        o.tangentToWorldAndParallax[1].w = viewDirForParallax.y;
        o.tangentToWorldAndParallax[2].w = viewDirForParallax.z;
    #endif

    return o;
}

void fragDeferred (
    VertexOutputDeferred i,
    out half4 outDiffuse : SV_Target0,        // RT0: diffuse color (rgb), occlusion (a)
    out half4 outSpecSmoothness : SV_Target1, // RT1: spec color (rgb), smoothness (a)
    out half4 outNormal : SV_Target2,         // RT2: normal (rgb), --unused, very low precision-- (a)
    out half4 outEmission : SV_Target3,       // RT3: emission (rgb), --unused-- (a)
    float face : VFACE
)
{
    #if (SHADER_TARGET < 30)
        outDiffuse = 1;
        outSpecSmoothness = 1;
        outNormal = 0;
        outEmission = 0;
        return;
    #endif

    // Experimental normal flipping
    if(_CullMode < 0.5f)
        i.tangentToWorldAndParallax[2].xyz *= face;

    FRAGMENT_SETUP(s)

    // no analytic lights in this pass
    UnityLight dummyLight = DummyLight (s.normalWorld);
    half atten = 1;

    half occlusion = Occlusion(i.tex.xy);

    // only GI
    UnityGI gi = FragmentGI (
        s.posWorld, occlusion, i.ambientOrLightmapUV, atten, s.oneMinusRoughness, s.normalWorld, s.eyeVec, dummyLight);

    half3 color = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect).rgb;
    color += UNITY_BRDF_GI (s.diffColor, s.specColor, s.oneMinusReflectivity, s.oneMinusRoughness, s.normalWorld, -s.eyeVec, occlusion, gi);

    #ifdef _EMISSION
        color += Emission (i.tex.xy);
    #endif

    #ifndef UNITY_HDR_ON
        color.rgb = exp2(-color.rgb);
    #endif

    outDiffuse = half4(s.diffColor, occlusion);
    outSpecSmoothness = half4(s.specColor, s.oneMinusRoughness);
    outNormal = half4(s.normalWorld*0.5+0.5,1);
    outEmission = half4(color, 1);
}

#endif // UNITY_STANDARD_CORE_INCLUDED