jb-dev

Member
  • Content count

    66
  • Joined

  • Last visited

Community Reputation

45 Neutral

1 Follower

About jb-dev

  • Rank
    Member

Personal Information

  • Role
    3D Artist
    Game Designer
    Programmer
  • Interests
    Art
    Audio
    Design
    Programming

Social

  • Twitter
    MiiMii1205
  • Github
    MiiMii1205
  • Steam
    MiiMii1205


  1. jb-dev

    Vaporwave Roguelite

  2. During the past few days, lots of shaders and other visual elements were updated. Firstly, I've added light effects when the crystals get shattered, along with a burst of particles emanating from the broken crystal on impact. Enemies now also leave a ragdoll corpse behind when they die; I love some of the poses those ragdolls end up in. On another note, I've toyed around with corpse removal and got captivated by the shrinking effect it created. It can sometimes be off-putting, but I'm still captivated.

     I've also added a nice VHS-like effect by layering two VHS shaders together, namely "more AVdistortion" and "VHS pause effect". The former was already ported and active; the latter was just a matter of porting GLSL shaders to HLSL. No biggie. I did change the code a bit to make the white noise move through time, and there's nothing like trigonometry to help us with that:

     fixed4 frag (v2f i) : SV_Target
     {
         fixed4 col = fixed4(0, 0, 0, 0);

         // Get the position to sample
         fixed2 samplePosition = i.vertex.xy / _ScreenParams.xy;
         float whiteNoise = 9999.0;

         // Jitter each line left and right
         samplePosition.x += ((rand(float2(_UnscaledTime, i.vertex.y)) - 0.5) / 64.0) * _EffectStrength;
         // Jitter the whole picture up and down
         samplePosition.y += ((rand(float2(_UnscaledTime, _UnscaledTime)) - 0.5) / 32.0) * _EffectStrength;

         // Slightly add color noise to each line
         col += (fixed4(-0.5, -0.5, -0.5, -0.5)
                 + fixed4(rand(float2(i.vertex.y, _UnscaledTime)),
                          rand(float2(i.vertex.y, _UnscaledTime + 1.0)),
                          rand(float2(i.vertex.y, _UnscaledTime + 2.0)),
                          0)) * 0.1;

         // Either sample the texture, or just make the pixel white (to get the static-y bit at the bottom)
         whiteNoise = rand(float2(floor(samplePosition.y * 80.0), floor(samplePosition.x * 50.0)) + float2(_UnscaledTime, 0));
         float t = sin(_UnscaledTime / 2);

         if (whiteNoise > 11.5 - 30.0 * (samplePosition.y + t) || whiteNoise < 1.5 - 5.0 * (samplePosition.y + t))
         {
             // Sample the texture.
             col = lerp(tex2D(_MainTex, samplePosition), col + tex2D(_MainTex, samplePosition), _EffectStrength);
         }
         else
         {
             // Use white. (I'm adding here so the color noise still applies)
             col = lerp(tex2D(_MainTex, samplePosition), fixed4(1, 1, 1, 1), _EffectStrength);
         }

         return col;
     }

     It's nice to have HLSL code, but a video is better:
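     The shrinking corpse removal mentioned above isn't part of the shader, so here's a minimal sketch of how such an effect could be driven from a script. This is purely illustrative: the CorpseShrinker class, the m_shrinkDuration field and the idea of scaling the ragdoll root down before destroying it are my assumptions, not the project's actual code.

     using System.Collections;
     using UnityEngine;

     // Hypothetical helper: shrinks a ragdoll corpse over time, then removes it.
     public class CorpseShrinker : MonoBehaviour
     {
         [SerializeField] private float m_shrinkDuration = 1.5f;

         private IEnumerator Start()
         {
             Vector3 initialScale = transform.localScale;
             float elapsed = 0f;

             while (elapsed < m_shrinkDuration)
             {
                 elapsed += Time.deltaTime;
                 // Lerp the whole ragdoll's scale toward zero, producing the shrinking effect.
                 transform.localScale = Vector3.Lerp(initialScale, Vector3.zero, elapsed / m_shrinkDuration);
                 yield return null;
             }

             Destroy(gameObject);
         }
     }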
  3. Today was kind of a slow day too. I haven't gotten a lot of sleep lately (thanks, little hamster wheel in my head), but I was still able to add (and fix) some graphical components here and there. In short, I've made the first and last rooms of the level more distinct from every other room. For example, I've added a room flow to these rooms to properly align props and, in the case of the starting room, the spawning rotation. I've also added a little decal-like plane that tells the player what to do (take it as a little tutorial, if you may). The important thing is that this decal is, not unlike my palette shader, dynamic in terms of colours. What I've done is quite simple: I've mapped each channel of a texture to a specific colour. Here's the original texture:

     After feeding this texture to my shader, it was just a matter of interpolating values and saturating them:

     Shader "Custom/TriColorMaps"
     {
         Properties
         {
             _MainTex ("Albedo (RGB)", 2D) = "white" {}
             _Glossiness ("Smoothness", Range(0,1)) = 0.5
             _Metallic ("Metallic", Range(0,1)) = 0.0
             _RedMappedColor ("Mapped color (Red channel)", Color) = (1, 0, 0, 1)
             _GreenMappedColor ("Mapped color (Green channel)", Color) = (0, 1, 0, 1)
             _BlueMappedColor ("Mapped color (Blue channel)", Color) = (0, 0, 1, 1)
         }
         SubShader
         {
             Tags { "RenderType"="Transparent" }
             LOD 200

             CGPROGRAM
             // Physically based Standard lighting model, and enable shadows on all light types
             #pragma surface surf Standard fullforwardshadows vertex:vert decal:blend

             // Use shader model 3.0 target, to get nicer looking lighting
             #pragma target 3.0

             sampler2D _MainTex;

             struct Input
             {
                 float2 uv_MainTex;
             };

             half _Glossiness;
             half _Metallic;
             fixed4 _RedMappedColor;
             fixed4 _GreenMappedColor;
             fixed4 _BlueMappedColor;

             void vert (inout appdata_full v)
             {
                 v.vertex.y += v.normal.y * 0.0125;
             }

             // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
             // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
             // #pragma instancing_options assumeuniformscaling
             UNITY_INSTANCING_BUFFER_START(Props)
                 // put more per-instance properties here
             UNITY_INSTANCING_BUFFER_END(Props)

             void surf (Input IN, inout SurfaceOutputStandard o)
             {
                 // Albedo comes from a texture tinted by color
                 fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
                 c.rgb = saturate(lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r)
                                + lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g)
                                + lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b)).rgb;
                 o.Albedo = c.rgb;
                 // Metallic and smoothness come from slider variables
                 o.Metallic = _Metallic;
                 o.Smoothness = _Glossiness;
                 o.Alpha = c.a;
             }
             ENDCG
         }
         FallBack "Diffuse"
     }

     Also, note that I've changed the vertices of the model: I needed a way to eliminate the Z-fighting and thought of offsetting the vertices along their normals. In conclusion, it's nothing really special, but I'm still working hard on this.

     EDIT: After a little bit of searching, I've found that you can give a Z-buffer offset in those Unity shaders by using the Offset state. So I've then tried to change my previous shader a bit to use that functionality rather than just offsetting the vertices:

     SubShader
     {
         Tags { "RenderType"="Opaque" "Queue"="Geometry+1" "ForceNoShadowCasting"="True" }
         LOD 200
         Offset -1, -1

         CGPROGRAM
         // Physically based Standard lighting model, and enable shadows on all light types
         #pragma surface surf Lambert decal:blend

         // Use shader model 3.0 target, to get nicer looking lighting
         #pragma target 3.0

         sampler2D _MainTex;

         struct Input
         {
             float2 uv_MainTex;
         };

         fixed4 _RedMappedColor;
         fixed4 _GreenMappedColor;
         fixed4 _BlueMappedColor;

         // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
         // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
         // #pragma instancing_options assumeuniformscaling
         UNITY_INSTANCING_BUFFER_START(Props)
             // put more per-instance properties here
         UNITY_INSTANCING_BUFFER_END(Props)

         void surf (Input IN, inout SurfaceOutput o)
         {
             // Albedo comes from a texture tinted by color
             fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
             c.rgb = saturate(lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r)
                            + lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g)
                            + lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b)).rgb;
             o.Albedo = c.rgb;
             // We keep the alpha: it's supposed to be a decal
             o.Alpha = c.a;
         }
         ENDCG
     }
  4. Today, I've worked on level exits. Previously, when the player arrived at the last room, nothing awaited him: he was stuck for eternity on the same level. Kinda boring, actually... But today this is no more! Now a big Ethernet port awaits the player at the end of the level; he just needs to jump into it to clear the level. I've had to create a new shader that fades the screen to black according to the player's y coordinate, and I've also modified my main shader with a new parameter that creates a gradient from the usual colours to pitch black. This way, I can simulate a bottomless pit.
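     The post doesn't include the script that drives the fade, so here's a minimal sketch of how a fade based on the player's y coordinate could be fed into such a shader. Everything here is an assumption for illustration: the ExitPitFade class, the _FadeAmount property and the start/end heights are hypothetical names, not the project's actual ones.

     using UnityEngine;

     // Hypothetical driver: maps the player's fall depth to a 0..1 fade value
     // and pushes it to a full-screen fade material.
     public class ExitPitFade : MonoBehaviour
     {
         [SerializeField] private Transform m_player;
         [SerializeField] private Material m_fadeMaterial;   // material using the fade-to-black shader
         [SerializeField] private float m_fadeStartY = 0f;   // y where the fade begins
         [SerializeField] private float m_fadeEndY = -10f;   // y where the screen is fully black

         private void LateUpdate()
         {
             // 0 at m_fadeStartY, 1 at m_fadeEndY, clamped in between.
             float fade = Mathf.InverseLerp(m_fadeStartY, m_fadeEndY, m_player.position.y);
             m_fadeMaterial.SetFloat("_FadeAmount", fade);
         }
     }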
  5. Today, I've fixed some bugs with the crystal-throwing algorithm. Basically, crystals will be used by the player to get to those alternative paths I've mentioned in my BSP tree entry. There'll be at least 3 types of crystals: Fire, Water and Life. Each will be thrown to eliminate obstacles at the entry point of the alternative paths. I've also planned for them to be persistent throughout runs. This means that the amount of crystals is linked to a save file rather than a run: if the player didn't use their crystals during a run, they'll have the opportunity to use them in another one (a sketch of that idea is shown below). As you can see, each type of crystal has a particular particle effect attached to it. I've tried to simulate how each type of element would react, so the fire crystal particles behave like flames, the life ones like fireflies and the water ones like drops of water. I still have to give them a distinctive shape... For now, I've used the default particle sprite provided with Unity. Also, after a crystal hits something, it'll generate another type of particle; these particles move according to the impact force. So, to recap, I've been working on particles for the past days... Kinda slow, but I think it's worth it...
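     Here's a minimal sketch of what persisting the crystal counts across runs could look like. It only illustrates the idea described above: the CrystalInventory class, the PlayerPrefs keys and the CrystalType enum are hypothetical names, not the game's actual save system.

     using UnityEngine;

     public enum CrystalType { Fire, Water, Life }

     // Hypothetical persistence helper: crystal counts live in the save data
     // (here, PlayerPrefs for brevity) instead of in per-run state.
     public static class CrystalInventory
     {
         private static string Key(CrystalType type) => "crystal_" + type;

         public static int GetCount(CrystalType type)
         {
             return PlayerPrefs.GetInt(Key(type), 0);
         }

         public static void Add(CrystalType type, int amount = 1)
         {
             PlayerPrefs.SetInt(Key(type), GetCount(type) + amount);
             PlayerPrefs.Save();
         }

         // Returns true if a crystal was available and consumed (e.g. thrown at an obstacle).
         public static bool TryConsume(CrystalType type)
         {
             int count = GetCount(type);
             if (count <= 0)
                 return false;

             PlayerPrefs.SetInt(Key(type), count - 1);
             PlayerPrefs.Save();
             return true;
         }
     }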
  6. In the previous iteration of our game, we decided to use an actual cone as a way to make an AI "see". This implementation was hazardous, and it quickly became one of the hardest things to implement. We were eventually able to code it all, but the results were really static and not very realistic. Because of the reboot, I took the time to actually identify the constraints one's vision has.

     The visual field

     First of all, a cone isn't really the best in terms of collision checking. It required a special collider and could potentially have become a bottleneck later on, once multiple AIs roam the level. In actuality, the visual field can be represented as a 3D piece of a sphere (or, more precisely, a sector of a sphere), so we're going to use a sphere in the new version. It's cleaner and more efficient that way. Here's how I've done it:

     foreach (Collider otherCollider in Physics.OverlapSphere(m_head.transform.position,
                                                              m_visionDistance / 2,
                                                              ~LayerMask.GetMask("Entity", "Ignore Raycast"),
                                                              QueryTriggerInteraction.Ignore))
     {
         // Do your things here
     }

     Pretty simple, really... Afterwards (not unlike our previous endeavour), we can just do a simple raycast to see if the AI's vision is obstructed:

     // Do a raycast
     RaycastHit hit;
     if (Physics.Raycast(m_head.position,
                         (otherPosition - m_head.position).normalized,
                         out hit,
                         m_visionDistance,
                         ~LayerMask.GetMask("Entity", "Ignore Raycast"),
                         QueryTriggerInteraction.Ignore)
         && hit.collider == otherCollider)
     {
         // We can see the other without any obstacles
     }

     But with that came another problem: if we use a sphere as a visual field, then the AI can see behind its back. Enter the cross product.

     The cross product

     The cross product is a vector operation that is quite useful. Here's the actual operation that takes place:

     \(\mathbf{c} = \mathbf{a} \times \mathbf{b} = ( \mathbf{a}_{y}\mathbf{b}_{z} - \mathbf{a}_{z}\mathbf{b}_{y},\; \mathbf{a}_{z}\mathbf{b}_{x} - \mathbf{a}_{x}\mathbf{b}_{z},\; \mathbf{a}_{x}\mathbf{b}_{y} - \mathbf{a}_{y}\mathbf{b}_{x} )\)

     This produces a third vector, which is "orthogonal" to the two others. Here's a visual representation of that vector. As you can see, this is pretty cool: it looks like the translation gizmo of many 3D editors. But this operation is more useful than creating 3D gizmos. It can actually help us in our objective.

     Interesting properties

     One of the most interesting properties of the cross product is its magnitude. Depending on the angle between our two vectors a and b, the magnitude of the resulting vector changes; in fact, \(\lVert \mathbf{a} \times \mathbf{b} \rVert = \lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert \sin \theta\), where θ is the angle between them. Here's a nice visualization of it. As you can see, this property can be useful for many things, including determining the position of a third vector relative to two other vectors. However, there's a catch: the order of our a and b vectors matters. We need to make sure we don't make a mistake, as this can easily introduce bugs in our code.

     The funnel algorithm

     In one of my articles, I explained roughly how pathfinding works. I said that the navigation mesh algorithm is actually an amalgamation of different algorithms. One of these is the funnel algorithm, with which we do the string pulling. When the funnel algorithm runs, we basically do a variation of the cross product operation in order to find out whether a certain point lies inside a given triangle described by a left and a right apex. This is particularly useful, as we can then apply a nice string pulling on the identified path. Here's the actual code:

     public static float FunnelCross2D(Vector3 tip, Vector3 vertexA, Vector3 vertexB)
     {
         return (vertexB.x - tip.x) * (vertexA.z - tip.z) - (vertexA.x - tip.x) * (vertexB.z - tip.z);
     }

     This function returns a float whose sign indicates whether the tip is to the left or to the right of the line described by vertexA and vertexB (as long as the order of those vertices is counterclockwise; otherwise, the sign is inverted).

     Application

     Now, with that FunnelCross2D function, we can attack our problem head-on. With it, we can essentially tell whether a given point is behind or in front of the AI. Here's how I've managed to do it:

     if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > 0 )
     {
         // otherTransform is in front of us
     }

     Because this is Unity, we have access to directional vectors for each Transform object. This is useful because we can plug these vectors into our FunnelCross2D function, and voilà: we now have a way to tell whether another entity is behind or in front of our AI. But wait, there's more!

     Limit the visual angle

     Most people are aware that our visual field has a limited viewing angle. It happens that, for humans, the viewing angle is about 114°. The problem is that, right now, our AI's viewing angle is actually 180°. Not really realistic, if you ask me. Thankfully, we have our trusty FunnelCross2D function to help with that. Let's take another look at the cross product animation from before. If you noticed, the magnitude is cyclic: when the angle between a and b is 90°, the magnitude of the resulting vector is exactly 1 (for unit vectors). The closer the angle gets to 180° or 0°, the closer the magnitude gets to 0. This means that for a given magnitude (except 1), there are actually two possible configurations of a and b. So we can compute the cross product value for a given angle ahead of time and store the result in memory:

     m_visionCrossLimit = FunnelCross2D(new Vector3(Mathf.Cos((Mathf.PI / 2) - (m_visionAngle / 2)),
                                                    0,
                                                    Mathf.Sin((Mathf.PI / 2) - (m_visionAngle / 2))).normalized,
                                        m_head.right,
                                        -m_head.right);

     Now we can just go back to our if statement and change a few things:

     if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > m_visionCrossLimit )
     {
         // otherTransform is in our visual field
     }

     And there we have it: the AI only reacts to enemies in its visual field.

     Conclusion

     In conclusion, you can see how I've managed to simulate a 3D visual field using the trustworthy cross product. But the fun doesn't end there! We can apply this to many different situations. For example, I've implemented the same thing to limit neck rotations; it's just like before, but with another variable and some other fancy code. The cross product is indeed a valuable tool in the game developer's toolset. No doubt about it.
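     To tie the pieces above together, here's a minimal sketch of what the full visibility check could look like as one method. It simply combines the distance check, the FunnelCross2D angle test and the obstruction raycast from the post; the VisionSensor class and the CanSee method name are my own assumptions, not the project's actual code.

     using UnityEngine;

     public class VisionSensor : MonoBehaviour
     {
         [SerializeField] private Transform m_head;
         [SerializeField] private float m_visionDistance = 20f;
         [SerializeField] private float m_visionAngle = 114f * Mathf.Deg2Rad;

         private float m_visionCrossLimit;

         private void Start()
         {
             // Precompute the cross product value that corresponds to the edge of the visual field.
             m_visionCrossLimit = FunnelCross2D(new Vector3(Mathf.Cos((Mathf.PI / 2) - (m_visionAngle / 2)), 0,
                                                            Mathf.Sin((Mathf.PI / 2) - (m_visionAngle / 2))).normalized,
                                                m_head.right, -m_head.right);
         }

         public bool CanSee(Collider otherCollider)
         {
             Vector3 toOther = otherCollider.transform.position - m_head.position;

             // 1. Distance check (the overlap sphere already guarantees this for gathered colliders).
             if (toOther.magnitude > m_visionDistance)
                 return false;

             // 2. Angle check: is the other entity inside the visual field?
             if (FunnelCross2D(toOther.normalized, m_head.right, -m_head.right) <= m_visionCrossLimit)
                 return false;

             // 3. Obstruction check: is there a clear line of sight?
             RaycastHit hit;
             return Physics.Raycast(m_head.position, toOther.normalized, out hit, m_visionDistance,
                                    ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore)
                    && hit.collider == otherCollider;
         }

         private static float FunnelCross2D(Vector3 tip, Vector3 vertexA, Vector3 vertexB)
         {
             return (vertexB.x - tip.x) * (vertexA.z - tip.z) - (vertexA.x - tip.x) * (vertexB.z - tip.z);
         }
     }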
  7. @ethancodes does the original ball get its velocity changed? If not, maybe you could try debugging it with breakpoints. If you don't know how, Unity has a tutorial on this matter. For example, you could just put breakpoints before and after the velocity change and see if your balls' velocity changes at all.
  8. Today was kind of a slow day: I had many things to do, so development was kind of light... Nevertheless, I still managed to do something: I've added a way to highlight items through emission (not unlike how we did it previously) and to make enemies blink when they get hurt. It wasn't really hard: because this is Unity, the surface shader has us covered. It was just one simple line of code:

     #ifdef IS_EMISSIVE
         o.Emission = lerp(fixed3(0, 0, 0), _EmissionColor.rgb, _EmissionRatio);
     #endif
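     The shader is only half of the blink; here's a minimal sketch of how the hurt blink could be driven from a script by animating that _EmissionRatio property. The HurtBlink class, the MaterialPropertyBlock approach and the timing values are illustrative assumptions, not the game's actual code.

     using System.Collections;
     using UnityEngine;

     // Hypothetical hurt-blink driver: briefly ramps the shader's _EmissionRatio up and back down.
     public class HurtBlink : MonoBehaviour
     {
         [SerializeField] private Renderer m_renderer;
         [SerializeField] private float m_blinkDuration = 0.2f;

         private MaterialPropertyBlock m_block;

         private void Awake()
         {
             m_block = new MaterialPropertyBlock();
         }

         public void Blink()
         {
             StopAllCoroutines();
             StartCoroutine(BlinkRoutine());
         }

         private IEnumerator BlinkRoutine()
         {
             float elapsed = 0f;
             while (elapsed < m_blinkDuration)
             {
                 elapsed += Time.deltaTime;
                 // 1 at the start of the blink, fading back to 0 at the end.
                 float ratio = 1f - (elapsed / m_blinkDuration);
                 m_renderer.GetPropertyBlock(m_block);
                 m_block.SetFloat("_EmissionRatio", ratio);
                 m_renderer.SetPropertyBlock(m_block);
                 yield return null;
             }
         }
     }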
  9. Have you tried passing a ForceMode2D parameter too (right after the force vector)? If so, then maybe something else during the frame is removing your previously set velocity... I do kind of the same thing (except in 3D): I instantiate rigidbodies and add a force to them...
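     For example, the call would look something like the line below (the newBall and mainBallVector names are borrowed from the thread, and ForceMode2D.Impulse is just one possible choice):

     // Apply the force as an instantaneous impulse instead of a continuous force.
     newBall.GetComponent<Rigidbody2D>().AddForce(mainBallVector, ForceMode2D.Impulse);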
  10. Today I've worked on adding bombs to the game. They are really useful for dispersing enemies and such. The model was made a while back; it was just a matter of importing it into Unity and coding the script. Here's a nice video I've made just for that:

     There's nothing really special here, just polymorphism, Unity components and C# delegates...

     Collider[] cols = Physics.OverlapSphere(explosionCenter,
                                             m_explosionRadius,
                                             LayerMask.GetMask("Obstacles", "Entity", "Player Body", "Pickable"),
                                             QueryTriggerInteraction.Ignore);

     for (int i = 0, length = cols.Length; i < length; ++i)
     {
         GameObject collidedObject = cols[i].gameObject;
         HealthController healthController;
         Rigidbody rigidbody;
         AbstractExplosionActionable explosionActionable;

         if (collidedObject.layer == LayerMask.NameToLayer("Entity") || collidedObject.CompareTag("Player"))
         {
             healthController = collidedObject.GetComponent<HealthController>();
             healthController.RawHurt(m_explosionDamage, transform);
         }
         else if (collidedObject.layer == LayerMask.NameToLayer("Pickable"))
         {
             rigidbody = collidedObject.GetComponent<Rigidbody>();
             if (rigidbody != null && !rigidbody.isKinematic)
             {
                 rigidbody.AddExplosionForce(m_explosionDamage, transform.position, m_explosionRadius);
             }
         }
         else if (collidedObject.layer == LayerMask.NameToLayer("Obstacles"))
         {
             explosionActionable = collidedObject.GetComponent<AbstractExplosionActionable>();
             if (explosionActionable != null)
             {
                 explosionActionable.action.Invoke(m_explosionDamage, m_explosionRadius, explosionCenter);
             }
         }
     }
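     Since the post mentions polymorphism and delegates without showing that side, here's a minimal sketch of what a base class like AbstractExplosionActionable could look like. Only the class name and the action's argument list are taken from the snippet above; the delegate type and the ExplodingBarrel example are assumptions for illustration.

     using UnityEngine;

     // Hypothetical base class: obstacles expose a delegate invoked when an explosion reaches them.
     public abstract class AbstractExplosionActionable : MonoBehaviour
     {
         public delegate void ExplosionAction(float damage, float radius, Vector3 explosionCenter);

         // Assigned by subclasses; invoked by the bomb script shown above.
         public ExplosionAction action;
     }

     // Example subclass: a barrel that simply destroys itself when caught in a blast.
     public class ExplodingBarrel : AbstractExplosionActionable
     {
         private void Awake()
         {
             action = OnExplosion;
         }

         private void OnExplosion(float damage, float radius, Vector3 explosionCenter)
         {
             Destroy(gameObject);
         }
     }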
  11. Try calling AddForce rather than setting the velocity by hand... something like this:

     newBall.GetComponent<Rigidbody2D>().AddForce(mainBallVector);
  12. Roguelites are synonymous with loot and treasure chests. However, my game is vaporwave, so traditional treasure chests are out of the question. This is what I came up with: You may get the references if you know your Windows well enough... 😉
  13. I've decided to make a daily update blog to have a development metric. So here goes... Today, I've modified the shader I previously made so that it can take a "wetness" parameter. Basically, in my palette, the first 4 columns represent pairs of colours, where each pair holds a dry colour followed by its wet counterpart. I just do a clever lerp between those two consecutive colours according to the given wetness parameter. Here's an example: here, you can see that the leaves of that palm tree are significantly more yellow than usual, and here they're way more green. The way I did it was quite simple: I've just added a _Wetness parameter to my Unity shader, and I then do a simple lerp like so:

     /* In the shader's property block */
     _Wetness ("Wetness", Range(0,1)) = 0
     [Toggle(IS_VEGETATION)] _isVegetation ("Is Vegetation?", Float) = 0

     /* In the SubShader body */
     void surf (Input IN, inout SurfaceOutputStandard o)
     {
         #ifdef IS_VEGETATION
         fixed4 wetColor;
         float uv_x = IN.uv_MainTex.x;

         // A pixel is deemed to represent something alive if its U coordinate is included in specific ranges (128 is the width of my texture)
         bool isVegetationUV = ((uv_x >= 0.0 && uv_x <= (1.0 / float(128))) || (uv_x >= (2.0 / float(128)) && uv_x <= (3.0 / float(128))));

         if (isVegetationUV)
         {
             fixed2 wetUV = IN.uv_MainTex;
             // To get the other color, we just shift by one pixel. That's all.
             // The _PaletteIndex parameter represents the current level's palette index. (In other words, it changes level by level)
             // There are 8 colors per palette. That's where that "8" comes from...
             wetUV.x = ((wetUV.x + (1.0 / float(128))) + 8.0 * float(_PaletteIndex) / float(128));
             wetColor = tex2D(_MainTex, wetUV);
         }
         #endif

         // This is part of my original palette shader
         IN.uv_MainTex.x = IN.uv_MainTex.x + 8.0 * float(_PaletteIndex) / float(128);
         fixed4 c = tex2D(_MainTex, IN.uv_MainTex);

         #ifdef IS_VEGETATION
         if (isVegetationUV)
         {
             c = lerp(c, wetColor, _Wetness);
         }
         #endif

         o.Albedo = c.rgb;
         o.Metallic = _Metallic;
         o.Smoothness = _Glossiness;
         o.Alpha = _Alpha;
     }

     Then it's just a matter of changing that parameter from a script, like so:

     GameObject palm = PropFactory.instance.CreatePalmTree(Vector3.zero, Quaternion.Euler(0, Random.Range(0, 360), 0), transform).gameObject;
     MeshRenderer renderer = palm.GetComponentInChildren<MeshRenderer>();

     for (int i = 0, length = renderer.materials.Length; i < length; ++i)
     {
         renderer.materials[i].SetFloat("_Wetness", m_wetness);
     }

     I've also changed the lianas' colours, but because I was too lazy to generate UV maps for those geometries, I just gave them a standard material and change the main colour directly... And I made a Unity script that takes care of that for me:

     // AtlasPaletteTints is an enum, and atlasPaletteTint is a value of that enum
     // m_wetness is a float that represents the _Wetness parameter
     // m_isVegetation is a bool that indicates whether or not the mesh is considered vegetation (this script is generic, after all)
     Color col = AtlasPaletteController.instance.GetColorByPalette(atlasPaletteTint);

     if (m_isVegetation)
     {
         // Basically, we check whether or not a colour qualifies for a lerp
         if ((atlasPaletteTint >= AtlasPaletteTints.DRY_DETAIL_PLUS_TWO && atlasPaletteTint <= AtlasPaletteTints.DRY_DETAIL_MINUS_TWO)
             || (atlasPaletteTint >= AtlasPaletteTints.DRY_VEGETATION_PLUS_TWO && atlasPaletteTint <= AtlasPaletteTints.DRY_VEGETATION_MINUS_TWO))
         {
             // Each colour has 5 tints; that's where the "+ 5" comes from
             col = Color.Lerp(col, AtlasPaletteController.instance.GetColorByPalette(atlasPaletteTint + 5), m_wetness);
         }
     }

     GetComponent<Renderer>().material.color = col;
  14. Maybe "Mall" is a bit of a stretch, but essentially the part where there's different music is supposed to be a shop. Again, it's still in progress and all, but the idea is to have only 3 items on sale in each shop.
  15. After really thinking about it, the method I was using to play MP3s wasn't the most flexible, so after reworking it, I've decided to stream the music directly into a Unity AudioClip. This way, the audio goes through an audio mixer to which I can easily apply filters... I was also able to cross-fade between two music tracks. I've also put the sound loading onto another thread, making the audio load in the background rather than clogging up the main thread. Also, it's easier to use the 【VaporMaker】 with the Unity audio system... I can even add more wacky filters on top to make those songs more VAPORWAVE than ever.
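     The post doesn't show the cross-fade itself, so here's a minimal sketch of one way to cross-fade between two music tracks using two AudioSources. The MusicCrossFader class, the fade duration and the dual-AudioSource layout are assumptions for illustration, not the game's actual audio code.

     using System.Collections;
     using UnityEngine;

     // Hypothetical cross-fader: fades the current AudioSource out while the next one fades in.
     public class MusicCrossFader : MonoBehaviour
     {
         [SerializeField] private AudioSource m_sourceA;
         [SerializeField] private AudioSource m_sourceB;
         [SerializeField] private float m_fadeDuration = 2f;

         private AudioSource m_current;

         private void Awake()
         {
             m_current = m_sourceA;
         }

         public void CrossFadeTo(AudioClip nextClip)
         {
             AudioSource next = (m_current == m_sourceA) ? m_sourceB : m_sourceA;
             next.clip = nextClip;
             next.volume = 0f;
             next.Play();

             StopAllCoroutines();
             StartCoroutine(Fade(m_current, next));
             m_current = next;
         }

         private IEnumerator Fade(AudioSource from, AudioSource to)
         {
             float elapsed = 0f;
             while (elapsed < m_fadeDuration)
             {
                 elapsed += Time.deltaTime;
                 float t = elapsed / m_fadeDuration;
                 from.volume = 1f - t;   // old track fades out
                 to.volume = t;          // new track fades in
                 yield return null;
             }
             from.Stop();
         }
     }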