
About this blog

An update blog for ongoing projects that may be of interest

Entries in this blog

Unity Weekly Update #8 - Locked down

I've decided to change the frequency of these updates: most of the time, I just do some minor updates and graphical tweaks here and there. If I write these updates weekly instead, I'll have a lot more content to write about each time. So, yeah...

Last week, I've been working on adding many different types of rooms to the level. You may or may not know that I use BSP trees to generate a level, and previously only 5 types of rooms spawned in a level: starting rooms, ending rooms, normal rooms, tunnel rooms and Malls. It was very static and not flexible, so I've changed it to make it more dynamic.

Malls Variations

First, I've added two different variations of Malls: Blood Malls and Clothes Malls. These were originally planned and already built.

Big Malls

These are your typical type of Malls. You can find everything here. This is where, for example, you'll find hearts, keys and/or bombs. They were already in the game, but now they're more specialized (or generalized, in this case).

Blood Malls

The Blood Malls specialize in bloody activities, meaning that you'll mostly find a selection of weapons here.

Clothes Malls

The Clothes Malls specialize in clothes, which in our case are actually pieces of equipment the player can wear.

New Rooms

Aside from these new types of malls, I've also added 3 new types of rooms. These rooms, however, are guarded by a locked door: the player must use a key to enter. To unlock a locked door, the player just needs to touch it. If the player has enough keys, then a key is consumed and the locked door disappears. Unlocking a door also triggers an event that can do things like revealing hidden models and whatnot.

The Gym

The gym is a room resembling the outdoor gyms you can see in some places. The player can use up to three pieces of gym equipment to get a permanent bonus to a randomly selected stat. The price of using the gym equipment doubles after each use (i.e. if using one piece costs 10$, then after buying it the others will cost 20$, and so on). I've planned for NPCs to use non-interactive gym workstations as decoration, but it's not really important as of right now...

The Bank

The bank is not fully functional at the moment, but it still spawns. The idea is to give the player a way to store money persistently throughout runs. The player can then withdraw money (with a certain transaction fee) from that bank. That means you can effectively get previously deposited money back when you'll need it the most.

The Landfill

The landfill gives you the opportunity to get back previously thrown-away pieces of equipment. Basically, the game stores the last three pieces of equipment that were thrown away. When the room spawns, it simply displays those pieces of equipment, and you can pick them up for free. This, however, comes with a caveat: a piece of equipment you swap out for a landfill item won't reappear in another landfill. Also, once the landfill despawns, the items in it are discarded. (Think of it as a last-chance opportunity.) There aren't any props at the moment, but it's fully functional.
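As a rough illustration of the locked-door mechanic described under New Rooms, here's a minimal C# sketch. It's a guess at one possible implementation, not the actual project code: the PlayerInventory component and the event wiring are assumptions of mine.

using UnityEngine;
using UnityEngine.Events;

// Minimal stand-in for however the project tracks keys (an assumption).
public class PlayerInventory : MonoBehaviour
{
    public int keyCount;
}

// Hypothetical sketch: consumes a key when the player touches the door,
// fires an event (e.g. to reveal hidden models), then removes the door.
public class LockedDoor : MonoBehaviour
{
    // Listeners can reveal hidden props, play a sound, etc.
    public UnityEvent onUnlocked;

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        PlayerInventory inventory = other.GetComponent<PlayerInventory>();
        if (inventory != null && inventory.keyCount > 0)
        {
            inventory.keyCount--;     // a key is used up
            onUnlocked.Invoke();      // trigger the unlock event
            Destroy(gameObject);      // the locked door disappears
        }
    }
}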
Minor Tweaks

Aside from that, there are also some minor tweaks:

- Bombs now damage the player if they are within the blast radius.
- Player jumps are now more precise: letting go of the jump button early makes a smaller jump than holding it down longer.
- Ground detection (previously done with raycasting) now uses Unity's CharacterController.isGrounded property.
- When the player is hurt, a knockback is applied to them; the strength of that knockback is the total amount of damage the player took.
- Coins and money items now emit particles to help the player locate those important items.
- There are now key (left) and bomb (right) counters in the HUD. The key counter still needs a specific icon, but it's fully functional.
- There were many shader optimizations and adjustments: many shaders were merged together and now share most of their code. I've also changed the shaders so that we can use GPU instancing for most props, and I now use MaterialPropertyBlock for things like wetness (see the sketch after this list). Also, the palette texture and its palette index are now global variables, which effectively means that I only need to set these values once and everything else follows.
- A small "Sales" sign is placed in front of most types of malls. This sign has a random orientation and position each time it's spawned.
- Props that would obstruct a passage are removed from the room, so no prop can block the player from exiting it.
- Some rooms now spawn ferns instead of palm trees.
- Lianas also have different configurations based on which prop spawns.
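Here's a minimal sketch of the MaterialPropertyBlock and global-variable approach mentioned in the shader item above. The property names _Wetness and _PaletteIndex appear elsewhere in this blog, but _PaletteTex and all the wiring are my own illustration:

using UnityEngine;

// Sketch: per-prop wetness via MaterialPropertyBlock (no per-prop material copies,
// so GPU instancing can still batch the props), plus palette data set once globally.
public class PropShaderSetup : MonoBehaviour
{
    [Range(0f, 1f)] public float wetness;

    private void Start()
    {
        var block = new MaterialPropertyBlock();
        var rend = GetComponent<Renderer>();
        rend.GetPropertyBlock(block);
        block.SetFloat("_Wetness", wetness); // per-instance override
        rend.SetPropertyBlock(block);
    }

    // Called once (e.g. on level load): every material that reads these
    // globals picks them up without being touched individually.
    public static void ApplyPalette(Texture2D paletteTexture, int paletteIndex)
    {
        Shader.SetGlobalTexture("_PaletteTex", paletteTexture);
        Shader.SetGlobalInt("_PaletteIndex", paletteIndex);
    }
}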
Next Week

Over the next week, I've planned to integrate the first relic. Relics are items that give the player abilities and stat boosts; it's common to have something similar in most roguelite and roguelike games. That kind of system needs a good abstraction in order to work: there are many different types of abilities that affect the player in radically different ways. There's a lot of work ahead, but I'm confident it'll be easy. Just need to get in the groove.
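To make that abstraction point concrete, here's one possible shape for it: a minimal C# sketch (entirely my own guess at the design, not the project's actual code) where each relic implements a common interface and hooks into player events.

using System.Collections.Generic;

// Sketch: relics share an interface with hooks the player calls at defined moments.
public interface IRelic
{
    void OnPickup(PlayerStats stats);                  // one-time stat boosts
    void OnPlayerHurt(PlayerStats stats, int damage);  // reactive abilities
}

// Assumed stand-in for the game's stat container.
public class PlayerStats
{
    public int maxHealth = 100;
    public float moveSpeed = 5f;
    public readonly List<IRelic> relics = new List<IRelic>();

    public void AddRelic(IRelic relic)
    {
        relics.Add(relic);
        relic.OnPickup(this);
    }
}

// Example relic: a flat stat boost on pickup, no reactive behavior.
public class HeartyRelic : IRelic
{
    public void OnPickup(PlayerStats stats) { stats.maxHealth += 20; }
    public void OnPlayerHurt(PlayerStats stats, int damage) { }
}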

jb-dev

Unity Daily Update #7 - Another plane of being

During the past few days, lots of shaders were updated, along with other visual things. Firstly, I've added light effects for when the crystals get shattered. There's also a burst of particles emanating from the broken crystal on impact. Also, enemies now leave a ragdoll corpse behind when they die. I love some of the poses those ragdolls make. On another note, I've toyed around with corpse removal and got captivated by the shrinking effect it created. It can sometimes be off-putting, but I still like it.

I've also added a nice VHS-like effect by layering two VHS shaders together, namely "more AVdistortion" and "VHS pause effect". The former was already ported and active; the latter was just a matter of porting GLSL to HLSL. No biggie. I did change the code a bit to make the white noise move through time. And there's nothing like trigonometry to help us with that:

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = fixed4(0, 0, 0, 0);

    // Get position to sample
    fixed2 samplePosition = i.vertex.xy / _ScreenParams.xy;
    float whiteNoise = 9999.0;

    // Jitter each line left and right
    samplePosition.x = samplePosition.x + (((rand(float2(_UnscaledTime, i.vertex.y)) - 0.5) / 64.0) * _EffectStrength);
    // Jitter the whole picture up and down
    samplePosition.y = samplePosition.y + (((rand(float2(_UnscaledTime, _UnscaledTime)) - 0.5) / 32.0) * _EffectStrength);

    // Slightly add color noise to each line
    col += (fixed4(-0.5, -0.5, -0.5, -0.5) + fixed4(rand(float2(i.vertex.y, _UnscaledTime)),
                                                    rand(float2(i.vertex.y, _UnscaledTime + 1.0)),
                                                    rand(float2(i.vertex.y, _UnscaledTime + 2.0)), 0)) * 0.1;

    // Either sample the texture, or just make the pixel white (to get the static-y bit at the bottom)
    whiteNoise = rand(float2(floor(samplePosition.y * 80.0), floor(samplePosition.x * 50.0)) + float2(_UnscaledTime, 0));
    float t = sin(_UnscaledTime / 2);
    if (whiteNoise > 11.5 - 30.0 * (samplePosition.y + t) || whiteNoise < 1.5 - 5.0 * (samplePosition.y + t))
    {
        // Sample the texture.
        col = lerp(tex2D(_MainTex, samplePosition), col + tex2D(_MainTex, samplePosition), _EffectStrength);
    }
    else
    {
        // Use white. (I'm adding here so the color noise still applies)
        col = lerp(tex2D(_MainTex, samplePosition), fixed4(1, 1, 1, 1), _EffectStrength);
    }
    return col;
}

It's nice to have HLSL code, but a video is better:

jb-dev

Unity Daily Update #6 - Dynamically colored decals

Today was kind of a slow day too. I haven't gotten a lot of sleep lately (thanks, little hamster wheel in my head), but at last I was still able to add (and also fix) some graphical components here and there. In short, I've made the first and last rooms of the level more distinct from every other room. For example, I've added a room flow on these rooms to properly align props and, in the case of the starting room, the spawning rotation. I've also added a little decal-like plane that tells the player what to do (take it as a little tutorial, if you will). The important thing is that this decal is, not unlike my palette shader, dynamic in terms of colours. What I've done is quite simple: I've mapped each channel of a texture to a specific colour. Here's the original texture:

After inputting this texture into my shader, it was just a matter of interpolating values and saturating them:

Shader "Custom/TriColorMaps"
{
    Properties
    {
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
        _RedMappedColor ("Mapped color (Red channel)", Color) = (1, 0, 0, 1)
        _GreenMappedColor ("Mapped color (Green channel)", Color) = (0, 1, 0, 1)
        _BlueMappedColor ("Mapped color (Blue channel)", Color) = (0, 0, 1, 1)
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" }
        LOD 200

        CGPROGRAM
        // Physically based Standard lighting model, and enable shadows on all light types
        #pragma surface surf Standard fullforwardshadows vertex:vert decal:blend

        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _RedMappedColor;
        fixed4 _GreenMappedColor;
        fixed4 _BlueMappedColor;

        void vert (inout appdata_full v)
        {
            v.vertex.y += v.normal.y * 0.0125;
        }

        // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
        // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
        // #pragma instancing_options assumeuniformscaling
        UNITY_INSTANCING_BUFFER_START(Props)
            // put more per-instance properties here
        UNITY_INSTANCING_BUFFER_END(Props)

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
            c.rgb = saturate((lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r) +
                              lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g) +
                              lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b))).rgb;
            o.Albedo = c.rgb;
            // Metallic and smoothness come from slider variables
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}

Also, note that I've changed the vertices of the model: I needed a way to eliminate the Z-fighting and thought of offsetting the vertices along their normals. In conclusion, it's nothing really special, but I'm still working hard on this. EDIT: After a little bit of searching, I've seen that you can give a Z-buffer offset in Unity shaders by using the Offset state.
So I've then tried to change my previous shader a bit to use that functionality rather than just offsetting the vertices:

SubShader
{
    Tags { "RenderType"="Opaque" "Queue"="Geometry+1" "ForceNoShadowCasting"="True" }
    LOD 200
    Offset -1, -1

    CGPROGRAM
    // Physically based Standard lighting model, and enable shadows on all light types
    #pragma surface surf Lambert decal:blend

    // Use shader model 3.0 target, to get nicer looking lighting
    #pragma target 3.0

    sampler2D _MainTex;

    struct Input
    {
        float2 uv_MainTex;
    };

    fixed4 _RedMappedColor;
    fixed4 _GreenMappedColor;
    fixed4 _BlueMappedColor;

    // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
    // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
    // #pragma instancing_options assumeuniformscaling
    UNITY_INSTANCING_BUFFER_START(Props)
        // put more per-instance properties here
    UNITY_INSTANCING_BUFFER_END(Props)

    void surf (Input IN, inout SurfaceOutput o)
    {
        // Albedo comes from a texture tinted by color
        fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
        c.rgb = saturate((lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r) +
                          lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g) +
                          lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b))).rgb;
        o.Albedo = c.rgb;
        // We keep the alpha: it's supposed to be a decal
        o.Alpha = c.a;
    }
    ENDCG
}

jb-dev

Unity Daily Update #5 - Eternal Ethernet

Today, I've worked on level exits. Previously, when the player arrived at the last room, nothing awaited them: they were stuck for eternity on the same level. Kinda boring, actually... But today, this is no more! Now a big Ethernet port awaits the player at the end of the level. They just need to jump into it to clear the level. This required two shader changes: a new shader that fades the screen to black according to the player's y coordinate, and a new parameter in my main shader that creates a gradient from the usual colours to pitch black. This way, I can simulate a bottomless pit.
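Here's a minimal sketch of how such a y-based fade could be driven from C#. The shader property name _FadeAmount and the threshold values are assumptions for illustration, not the actual project values:

using UnityEngine;

// Sketch: drives a fullscreen fade-to-black from the player's height,
// e.g. while falling into the level-exit "pit".
public class FallFade : MonoBehaviour
{
    public Transform player;
    public Material fadeMaterial;   // fullscreen material exposing _FadeAmount (assumed name)
    public float fadeStartY = 0f;   // fade begins below this height
    public float fadeEndY = -10f;   // fully black at this height

    private void Update()
    {
        // 0 at fadeStartY, 1 at fadeEndY, clamped in between.
        float fade = Mathf.InverseLerp(fadeStartY, fadeEndY, player.position.y);
        fadeMaterial.SetFloat("_FadeAmount", fade);
    }
}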

jb-dev

Unity Daily Update #4 - Crystal updates

Today, I've fixed some bugs with the crystal-throwing algorithm. Basically, crystals will be used by the player to get to those alternative paths I've mentioned in my BSP tree entry. There'll be at least three types of crystals: Fire, Water and Life. Each will be thrown to eliminate obstacles at the entry point of each alternative path. I've also planned for them to be persistent throughout runs. This means that the number of crystals is linked to a save file rather than a run. So if the player didn't use their crystals during a run, then they'll have the opportunity to use them in another one. As you can see, each type of crystal has a particular particle effect attached to it. I've tried to simulate how each element would react: the fire crystal's particles behave like flames, the life crystal's like fireflies, and the water crystal's like drops of water. I still have to give them a distinctive shape... For now, I've used the default particle sprite provided with Unity. Also, after a crystal hits something, it generates another type of particles; these particles move according to the impact force. So, to recap, I've been working on particles for the past days... Kinda slow, but I think it's worth it...
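Regarding the "linked to a save file rather than a run" persistence mentioned above, here's a minimal sketch using Unity's PlayerPrefs for brevity (the actual game presumably has its own save system; the key names are made up):

using UnityEngine;

// Sketch: crystal counts survive across runs by living in persistent storage
// instead of in the run's state.
public static class CrystalBank
{
    public static int Get(string crystalType)  // e.g. "Fire", "Water", "Life"
    {
        return PlayerPrefs.GetInt("crystals_" + crystalType, 0);
    }

    public static void Add(string crystalType, int amount)
    {
        PlayerPrefs.SetInt("crystals_" + crystalType, Get(crystalType) + amount);
        PlayerPrefs.Save();  // flush to disk so a crash doesn't lose crystals
    }

    public static bool TrySpend(string crystalType)
    {
        int count = Get(crystalType);
        if (count <= 0) return false;
        PlayerPrefs.SetInt("crystals_" + crystalType, count - 1);
        PlayerPrefs.Save();
        return true;
    }
}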

jb-dev

Algorithm The power of the vector cross product, or how to make a realistic vision field

In the previous iteration of our game, we decided to use an actual cone as a way to make an AI "see". This implementation was hazardous, and it quickly became one of the hardest things to implement. We eventually were able to code it all, but the results were really static and not very realistic. Because of the reboot, I took the time to actually identify what constraints one's vision has.

The visual field

First of all, a cone isn't really the best in terms of collision checking. It required a special collider and could potentially have become a bottleneck later on, when multiple AIs roam the level. In actuality, the visual field can be represented as a 3D piece of a sphere (or, more precisely, a sector of a sphere). So we're gonna use a sphere in the new version. It's cleaner and more efficient that way. Here's how I've done it:

foreach (Collider otherCollider in Physics.OverlapSphere(m_head.transform.position, m_visionDistance / 2, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore))
{
    // Do your things here
}

Pretty simple, really... Afterwards (not unlike our previous endeavour), we can just do a simple raycast to see if the AI's vision is obstructed:

// Do a raycast
RaycastHit hit;
if (Physics.Raycast(m_head.position, (otherPosition - m_head.position).normalized, out hit, m_visionDistance, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore)
    && hit.collider == otherCollider)
{
    // We can see the other without any obstacles
}

But with that came another problem: if we use a sphere as a visual field, then the AI can surely see behind its back. Enter the cross product.

The vector cross product

The cross product is a vector operation that is quite useful. Here's the actual operation that takes place:

\(\mathbf{c} = \mathbf{a} \times \mathbf{b} = (a_{y}b_{z} - a_{z}b_{y},\ a_{z}b_{x} - a_{x}b_{z},\ a_{x}b_{y} - a_{y}b_{x})\)

This produces a third vector, which is said to be "orthogonal" to the other two. This is a visual representation of that vector:

As you can see, this is pretty cool. It looks like the translation gizmo of many 3D editors. But this operation is more useful than creating 3D gizmos: it can actually help us with our objective.

Interesting properties

One of the most interesting properties of the cross product is its magnitude, \(\|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta\): depending on the angle between our two vectors a and b, the magnitude of the resulting vector changes. Here's a nice visualization of it:

As you can see, this property can be useful for many things... including determining the position of a third vector relative to two other vectors. However, there's a catch: the order of our a and b vectors matters. We need to make sure we don't make a mistake, as this can easily introduce many bugs in our code.
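To make the magnitude property concrete (my own numbers): for unit vectors a and b, \(\|\mathbf{a} \times \mathbf{b}\| = \sin\theta\), so at θ = 90° the magnitude is 1, at θ = 30° it is 0.5, and at θ = 0° or 180° it vanishes. The sign of the result's components then tells you on which side of a the vector b lies, which is exactly what we exploit below.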
The funnel algorithm

In one of my articles, I explained how pathfinding kinda works. I said that the navigation mesh algorithm is actually an amalgamation of different algorithms. One of these is the funnel algorithm, with which we do the string pulling. When the funnel algorithm runs, we basically do a variation of the cross product operation to find out whether a certain point lies inside a given triangle described by a left and a right apex. This is particularly useful, as we can then apply a nice string pulling on the identified path. Here's the actual code:

public static float FunnelCross2D(Vector3 tip, Vector3 vertexA, Vector3 vertexB)
{
    return (vertexB.x - tip.x) * (vertexA.z - tip.z) - (vertexA.x - tip.x) * (vertexB.z - tip.z);
}

With this function, we get a float. That float (or, more particularly, its sign) indicates whether the tip is to the left or to the right of the line described by vertexA and vertexB. (As long as the order of those vertices is counterclockwise; otherwise, the sign is inverted.)

Application

Now, with that FunnelCross2D function, we can attack our problem head-on. With it, we can essentially tell whether a given point is behind or in front of an AI. Here's how I've managed to do it:

if (FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > 0)
{
    // otherTransform is in front of us
}

Because this is Unity, we have access to directional vectors for each Transform object. This is useful because we can then plug these vectors into our FunnelCross2D function and voilà: we now have a way to tell whether another entity is behind or in front of our AI. But wait, there's more!

Limit the visual angle

Most people are aware that our visual field has a limited viewing angle. It happens that, for humans, the viewing angle is about 114°. The problem is that, right now, our AI's viewing angle is actually 180°. Not really realistic, if you ask me. Thankfully, we have our trusty FunnelCross2D function to help with that. Let's take another look at the nice cross product animation from before: if you noticed, the magnitude is cyclic. When the angle between unit vectors a and b is 90°, the magnitude of the resulting cross product is exactly 1; the closer the angle gets to 180° or 0°, the closer the magnitude gets to 0. This means that for a given magnitude (except 1), there are actually two possible a and b configurations. So we can compute the value the cross product gives for our chosen vision angle and store the result in memory:

m_visionCrossLimit = FunnelCross2D(new Vector3(Mathf.Cos((Mathf.PI / 2) - (m_visionAngle / 2)), 0, Mathf.Sin((Mathf.PI / 2) - (m_visionAngle / 2))).normalized, m_head.right, -m_head.right);

Now we can just go back to our if and change some things:

if (FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > m_visionCrossLimit)
{
    // otherTransform is in our visual field
}

And there we have it! The AI only reacts to enemies in its visual field.

Conclusion

In conclusion, you can see how I've managed to simulate a 3D visual field using the trusty cross product. But the fun doesn't end there! We can apply this to many different situations. For example, I've implemented the same thing to limit neck rotations: it's just like before, but with another variable and some other fancy code and whatnot... The cross product is indeed a valuable tool in the game developer's toolset. No doubt about it.

jb-dev

Unity Daily Update #3 - AESTHETIC++

Today was kind of a slow day: I had many things to do, so development was kind of light... Nevertheless, I've still managed to do something. I've added a way to highlight items through emission (not unlike how we did it previously) and to make enemies blink when they get hurt. It wasn't really hard: because this is Unity, the surface shader has us covered. It was just one simple line of code:

#ifdef IS_EMISSIVE
    o.Emission = lerp(fixed3(0, 0, 0), _EmissionColor.rgb, _EmissionRatio);
#endif
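For completeness, here's a rough sketch of how the hurt-blink could be driven from C# by animating that _EmissionRatio property over time. The coroutine, durations and blink count are my own assumptions, not the project's actual code:

using System.Collections;
using UnityEngine;

// Sketch: briefly pulses the shader's _EmissionRatio when the enemy is hurt.
public class HurtBlink : MonoBehaviour
{
    public int blinkCount = 3;        // assumed values for illustration
    public float blinkDuration = 0.1f;

    private Renderer m_renderer;

    private void Awake()
    {
        m_renderer = GetComponent<Renderer>();
    }

    public void OnHurt()
    {
        StopAllCoroutines();
        StartCoroutine(Blink());
    }

    private IEnumerator Blink()
    {
        for (int i = 0; i < blinkCount; ++i)
        {
            m_renderer.material.SetFloat("_EmissionRatio", 1f);
            yield return new WaitForSeconds(blinkDuration);
            m_renderer.material.SetFloat("_EmissionRatio", 0f);
            yield return new WaitForSeconds(blinkDuration);
        }
    }
}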

jb-dev

Unity Daily Update #2 - Bombs

Today I've worked on adding bombs to the game. They are really useful for dispersing enemies and such. The model was made a while back; it was just a matter of importing it into Unity and coding the script. Here's a nice video I've made just for that: There's nothing really special here, just polymorphism, Unity components and C# delegates...

Collider[] cols = Physics.OverlapSphere(explosionCenter, m_explosionRadius, LayerMask.GetMask("Obstacles", "Entity", "Player Body", "Pickable"), QueryTriggerInteraction.Ignore);
for (int i = 0, length = cols.Length; i < length; ++i)
{
    GameObject collidedObject = cols[i].gameObject;
    HealthController healthController;
    Rigidbody rigidbody;
    AbstractExplosionActionable explosionActionable;

    if (collidedObject.layer == LayerMask.NameToLayer("Entity") || collidedObject.CompareTag("Player"))
    {
        healthController = collidedObject.GetComponent<HealthController>();
        healthController.RawHurt(m_explosionDamage, transform);
    }
    else if (collidedObject.layer == LayerMask.NameToLayer("Pickable"))
    {
        rigidbody = collidedObject.GetComponent<Rigidbody>();
        if (rigidbody != null && !rigidbody.isKinematic)
        {
            rigidbody.AddExplosionForce(m_explosionDamage, transform.position, m_explosionRadius);
        }
    }
    else if (collidedObject.layer == LayerMask.NameToLayer("Obstacles"))
    {
        explosionActionable = collidedObject.GetComponent<AbstractExplosionActionable>();
        if (explosionActionable != null)
        {
            explosionActionable.action.Invoke(m_explosionDamage, m_explosionRadius, explosionCenter);
        }
    }
}

jb-dev

3D Vaporwave treasure models

Roguelites are synonymous with loot and treasure chests. However, my game is vaporwave, so traditional treasure chests are out of the question. This is what I came up with: You may get the references if you know your Windows well enough... 😉

jb-dev

Unity Daily Update #1 - Wet-dry shader variations

I've decided to make a daily update blog to have a development metric. So here goes... Today, I've modified the shader I previously made so that it can take a "wetness" parameter. Basically, in my palette, the first 4 columns represent pairs of colors, where the first ones are dry colors and the other ones are wet colors. I just do a clever lerp between those two consecutive colors according to the given wetness parameter. Here's an example: Here, you can see that the leaves of that palm tree are significantly more yellow than usual. Here it's way more green. The way I did it was quite simple: I've just added a _Wetness parameter to my Unity shader and I do a simple lerp, like so:

/* In the shader's property block */
_Wetness ("Wetness", Range(0,1)) = 0
[Toggle(IS_VEGETATION)] _isVegetation ("Is Vegetation?", Float) = 0

/* In the SubShader body */
void surf (Input IN, inout SurfaceOutputStandard o)
{
    #ifdef IS_VEGETATION
    fixed4 wetColor;
    float uv_x = IN.uv_MainTex.x;

    // A pixel is deemed to represent something alive if its U coordinate
    // is included in specific ranges (128 is the width of my texture)
    bool isVegetationUV = ((uv_x >= 0.0 && uv_x <= (1.0 / float(128))) ||
                           (uv_x >= (2.0 / float(128)) && uv_x <= (3.0 / float(128))));
    if (isVegetationUV)
    {
        fixed2 wetUV = IN.uv_MainTex;
        // To get the other color, we just shift by one pixel. That's all.
        // The _PaletteIndex parameter represents the current level's palette index
        // (in other words, it changes level by level).
        // There are 8 colors per palette; that's where that "8" comes from...
        wetUV.x = ((wetUV.x + (1.0 / float(128))) + 8.0 * float(_PaletteIndex) / float(128));
        wetColor = tex2D(_MainTex, wetUV);
    }
    #endif

    // This is part of my original palette shader
    IN.uv_MainTex.x = IN.uv_MainTex.x + 8.0 * float(_PaletteIndex) / float(128);
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex);

    #ifdef IS_VEGETATION
    if (isVegetationUV)
    {
        c = lerp(c, wetColor, _Wetness);
    }
    #endif

    o.Albedo = c.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = _Alpha;
}

Then it's just a matter of changing that parameter in a script, like so:

GameObject palm = PropFactory.instance.CreatePalmTree(Vector3.zero, Quaternion.Euler(0, Random.Range(0, 360), 0), transform).gameObject;
MeshRenderer renderer = palm.GetComponentInChildren<MeshRenderer>();
for (int i = 0, length = renderer.materials.Length; i < length; ++i)
{
    renderer.materials[i].SetFloat("_Wetness", m_wetness);
}

I've also changed the lianas' colors, but because I was too lazy to generate the UV maps of those geometries, I've just given them a standard material and changed the main color... And I made a Unity script that takes care of that for me:
// AtlasPaletteTints is an enum, and atlasPaletteTint is a value of that enum
// m_wetness is a float that represents the _Wetness parameter
// m_isVegetation is a bool that indicates whether or not the mesh is considered
// vegetation (this script is generic, after all)
Color col = AtlasPaletteController.instance.GetColorByPalette(atlasPaletteTint);
if (m_isVegetation)
{
    // Basically, we check whether or not a color qualifies for a lerp
    if ((atlasPaletteTint >= AtlasPaletteTints.DRY_DETAIL_PLUS_TWO && atlasPaletteTint <= AtlasPaletteTints.DRY_DETAIL_MINUS_TWO) ||
        (atlasPaletteTint >= AtlasPaletteTints.DRY_VEGETATION_PLUS_TWO && atlasPaletteTint <= AtlasPaletteTints.DRY_VEGETATION_MINUS_TWO))
    {
        // Each color has 5 tints; that's where the "+ 5" comes from
        col = Color.Lerp(col, AtlasPaletteController.instance.GetColorByPalette(atlasPaletteTint + 5), m_wetness);
    }
}
GetComponent<Renderer>().material.color = col;

jb-dev

Unity Let's go to the mall (Video)

After really thinking about it, the method I used to play MP3s wasn't the most flexible, so after reworking it, I've decided to stream the music directly into a Unity AudioClip. This way, the audio goes through an audio mixer, to which I can easily apply filters... I was also able to cross-fade between two music tracks. I've also put the sound loading onto another thread, making the audio loading run in the background rather than clogging up the main thread. Also, it's easier to use the 【 VaporMaker】 with the Unity audio system... I can even add more wacky filters over it to make those songs more VAPORWAVE than ever.
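Here's a minimal sketch of one way to cross-fade two tracks with plain AudioSource volumes (my own illustration; the actual implementation presumably routes through an AudioMixer):

using System.Collections;
using UnityEngine;

// Sketch: fades one AudioSource out while fading another in.
public class MusicCrossFader : MonoBehaviour
{
    public AudioSource current;
    public AudioSource next;
    public float fadeSeconds = 2f;

    public IEnumerator CrossFade(AudioClip newClip)
    {
        next.clip = newClip;
        next.volume = 0f;
        next.Play();

        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            float k = t / fadeSeconds;
            current.volume = 1f - k;
            next.volume = k;
            yield return null;
        }

        current.Stop();
        // Swap references so the next fade starts from the right source.
        var tmp = current;
        current = next;
        next = tmp;
    }
}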

jb-dev

Music The 【 VaporMaker】, and letting users play their own music while playing

Vaporwave is synonymous with music. It's crucial that the music be spot on... But with the recent takedowns of Vaporwave classics, let's just say that I'm less eager to include sampled Vaporwave: I don't want that to spoil an otherwise perfectly good game. So I'm probably gonna ship the game with sample-free Vaporwave, made by yours truly. However, that's not the end of it. I'm going to let users play their own Vaporwave if they like. That way, I can protect myself from any possible takedown. I'm also going a step further and created the 【 VaporMaker】.

Unity and audio

As you're probably aware, Unity can play audio assets that are imported into the UnityEditor. This approach only works with packaged audio, however, which means we need another way to play external files. There's a utility class that can be used to play any sound file from anywhere: WWW. This class is used to do simple HTTP requests and catch their results. If we use the "file://" protocol, we can actually load a file from the player's local machine. What's more, there's a nice method for getting audio clips: WWW.GetAudioClip. Cool, let's use that. WAIT, WHAT!?! MP3 IS ONLY SUPPORTED ON PHONES!?!? That's no good...

The workaround

So, you're telling me that mp3, the most universally available file format, is not compatible with Unity's audio system? Yes, it appears so... Due to licensing issues, Unity cannot be shipped with an MP3 decoder. Which is really weird, but we can't really do anything about it. Thankfully, Unity has C# and .NET, which are among the most used tools nowadays. I was pretty sure there existed a way to fix this. Enter NAudio.

NAudio

NAudio is a .NET library that can load and play most audio files. This is really useful because we can use that library rather than Unity's audio system. NAudio is compatible with Unity, which is a big plus for us. Of course, we'll need to do a bit of fixing around, but it's nothing really hard. NAudio is just a .dll: it's just a matter of dropping it in our Asset repository and voilà, we can now use NAudio in our scripts. Here's the little blog post I've followed if you're interested.

Reading metadata

Listening to mp3s is fun and all, but I also want the player to know which song is playing. Things like the song title, artist, album title and even, if I can, the album cover... Most of the time, these pieces of information already exist within the .mp3 file itself as metadata. This is how, for example, most media players are able to display the album cover of a song. Similarly, this is also how some applications are able to group songs of the same album together. Depending on the metadata convention used, they are either at the very beginning of the file or at the end, and they can take many forms. We won't need to open a byte stream and manually seek this metadata ourselves: there are already plenty of libraries that can do that for us. Funny enough, the solution I've chosen came from the Linux world. Let me introduce you to Banshee. Banshee is an open source media player, not unlike iTunes. It can manage one's music collection and play it. This program is written in C#, which coincidentally is the same language as our favourite engine... The library responsible for reading such metadata in Banshee is called TagLib#. With a little bit of tinkering, we can include TagLib# in our Asset repository and make good use of it.
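As a small illustration of reading that metadata with TagLib#, here's a sketch (the TagLib property names are real, but the surrounding code is my own, not the project's):

using UnityEngine;

// Sketch: pulls basic tag metadata out of an .mp3 with TagLib#.
public class SongMetadataReader
{
    public static void LogMetadata(string filepath)
    {
        TagLib.File file = TagLib.File.Create(filepath);

        Debug.Log("Title: " + file.Tag.Title);
        Debug.Log("Artist: " + file.Tag.FirstPerformer);
        Debug.Log("Album: " + file.Tag.Album);

        // Album art, if present, comes as raw bytes we can load into a texture.
        if (file.Tag.Pictures.Length > 0)
        {
            Texture2D cover = new Texture2D(2, 2);
            cover.LoadImage(file.Tag.Pictures[0].Data.Data);
        }
    }
}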
After creating a UI element containing the metadata, here's the result: (The blurriness is part of the art style, trust me.)

The 【 VaporMaker】

Now we have both the data and the playback. But I'm not satisfied.

Managing the music

As you may (or may not) know, Vaporwave is basically slowed-down music. Some artists choose to keep the editing to a minimum, while some add a lot of butter on top, but it all boils down to slowed-down music. The idea is to let the player create their own Vaporwave to be used in the game by putting some mp3 files in a special folder. These files will be played at a slower speed than usual, thus creating some rudimentary Vaporwave. Although the idea is simple, NAudio itself doesn't come with such functions... However, they DO have a post on the subject. Basically, we'll add SoundTouch, an open source sound manipulation library. Although it's written in C++, we can still call its native functions with the wrapper given by the post. One drawback is that we'll need to supply native .dll libraries for all platforms, if doable. A few extra resources (like Mac and Linux installations) are needed for this if you want multi-platform support, but nothing really hard. (If all fails, you can just copy/paste the source code and fix things here and there.) So, by following along with the source code, we can add the required files to our Asset repository. Easy as pie. Once everything is set up, we just need to plug that VarispeedSampleProvider class into our mWaveOutDevice instead of the mVolumeStream, like so (if you're following along with that blog post I referenced earlier):

private void LoadAudioFromData(byte[] data, bool isVaporMaker)
{
    MemoryStream tmpStr = new MemoryStream(data);
    mMainOutputStream = new Mp3FileReader(tmpStr);
    mWaveOutDevice = new WaveOut();

    if (!isVaporMaker)
    {
        mVolumeStream = new WaveChannel32(mMainOutputStream);
        mWaveOutDevice.Init(mVolumeStream);
    }
    else
    {
        mSpeedControl = new VarispeedSampleProvider(WaveExtensionMethods.ToSampleProvider(mMainOutputStream), 100, new SoundTouchProfile(false, false));
        mSpeedControl.PlaybackRate = 0.75f;
        mWaveOutDevice.Init(mSpeedControl);
    }
}

When we play the song, it'll play at whatever speed we specified via the PlaybackRate property of the VarispeedSampleProvider instance we constructed.

Managing the art

I could have stopped there, but I STILL wasn't satisfied. I wanted a clear distinction between normal custom music and any piece that went through the 【 VaporMaker】. To do so, I've decided to change the album cover for something more vaporwave. When I fetch the album cover for vaporized songs, I map each pixel, by its lightness, onto a gradient of two colours. I've also made those colours members of the MonoBehaviour so that they are available in the UnityEditor.

TagLib.File file = TagLib.File.Create(filepath);
TagLib.IPicture pic = file.Tag.Pictures[0];

Texture2D text = new Texture2D(2, 2);
text.LoadImage(pic.Data.Data);

for (int x = 0; x < text.width; ++x)
{
    for (int y = 0; y < text.height; ++y)
    {
        float h, s, l;
        ColorExt.RGBToHSL(text.GetPixel(x, y), out h, out s, out l);
        text.SetPixel(x, y, Color.Lerp(m_vaporMakerDarkCoverColor, m_vaporMakerLightCoverColor, l));
    }
}
text.Apply();

Afterwards, when the UI displays the album cover of a vaporized song, it will use the funky artwork rather than the original one. Here, take a look:

With all of this, I'm sure this feature will be popular, if only for the ability to play your own custom music.

jb-dev

Algorithm BSP trees, or creating a randomly generated level

So the game I'm working on is going to use rooms that are connected to each other by little holes, creating something somewhat similar to "Binding of Isaac", but with organic room disposition rather than rooms placed on a grid with specific dimensions. To do this, I needed to search for a good level generation algorithm. I've found that the best fit is the BSP tree algorithm, which was traditionally used in old-school roguelike games.

Binary Partition Trees

The algorithm works by taking an area and splitting it in half, thus creating two new areas to be recursively processed by the algorithm. It can be run until we stumble upon an area of a specific dimension. (A minimal sketch of this splitting step appears at the end of this section.) This is what the algorithm can do:

Traditionally, each area becomes a leaf, in which a random rectangular room is drawn; afterward, each of these rooms is connected to the others using long corridors. This approach works in 2D, as the player always has a clear view of their surroundings. Here is an excellent reference on the subject: Basic BSP Dungeon generation

Adaptations

However, because the game is in a first-person perspective, the corridors won't work, as the player can't really know their surroundings very well. We need to adapt the algorithm so that the player can feel comfortable while navigating our level. I've chosen to eliminate corridors and rooms altogether and instead use the leaves themselves, with each leaf connected to the others through small holes. Also, the BSP tree algorithm creates a web-like dungeon with no clear end or start, which is fine if you're making a traditional dungeon, but we want our levels to have a certain flow so that the player can quickly find their way out if they want to. The way I planned to do that is by transforming the BSP leaves into a kind of navmesh grid: we pick some positions and select the specific leaves that make up the path.

Creating the navmesh graph

First, before we can use a graph search algorithm, we need to build our graph. BSP trees are still binary trees, so using them to deal with connections is out of the question. We need to somehow take all the leaves created by the BSP algorithm and put them in a more flexible structure: enter the quadtree. Quadtrees are a kind of tree whose nodes can have at most 4 children. This characteristic is quite useful when it comes to 2D rectangles. Here's a visual representation of a quadtree:

With these kinds of trees, it's possible to get every leaf overlapping a specific rectangle. If, for a given room, we query a slightly bigger search zone, then we'll find all of that room's neighbours. We can then connect everything up and finally launch our graph search using randomly picked rooms that are far enough from each other.

Do the pathfinding

I've already made a blog entry on pathfinding, so the procedure is almost the same... However, there are some differences here. One of the most important is that we add the concept of "hot" cells: when a specific cell is deemed "hot" and the graph search algorithm stumbles upon it, its cost becomes infinite. That way, we tell the algorithm: "Do not pick this cell unless it's your last option." This makes for a somewhat imperfect path... but in our case, imperfect is perfect. Afterwards, we add all of the chosen rooms to a final list. All rooms in this list will be part of our level and will be rendered out later on.
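As promised above, here's a minimal C# sketch of the recursive splitting step, assuming axis-aligned rectangles and a minimum leaf size (all names and numbers are my own illustration, not the project's code):

using System.Collections.Generic;
using UnityEngine;

// Sketch: recursively split an area in half until leaves are small enough.
public class BspNode
{
    public RectInt area;
    public BspNode left, right;   // null for leaves

    private const int MinLeafSize = 8;  // assumed minimum leaf dimension

    public BspNode(RectInt area)
    {
        this.area = area;
    }

    public void Split()
    {
        // Stop when a further split would produce leaves that are too small.
        if (area.width < MinLeafSize * 2 && area.height < MinLeafSize * 2)
            return;

        // Split across the longer axis to avoid long, thin leaves.
        bool splitVertically = area.width >= area.height;
        if (splitVertically)
        {
            int cut = Random.Range(MinLeafSize, area.width - MinLeafSize);
            left = new BspNode(new RectInt(area.x, area.y, cut, area.height));
            right = new BspNode(new RectInt(area.x + cut, area.y, area.width - cut, area.height));
        }
        else
        {
            int cut = Random.Range(MinLeafSize, area.height - MinLeafSize);
            left = new BspNode(new RectInt(area.x, area.y, area.width, cut));
            right = new BspNode(new RectInt(area.x, area.y + cut, area.width, area.height - cut));
        }

        left.Split();
        right.Split();
    }

    // Collect the leaves; these become the candidate rooms.
    public void GetLeaves(List<BspNode> result)
    {
        if (left == null) { result.Add(this); return; }
        left.GetLeaves(result);
        right.GetLeaves(result);
    }
}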
Add more rooms

After we've picked the main rooms, we can then append bonus rooms to some of them if the player is lucky, not unlike hidden rooms in "Binding of Isaac"... Also, the game will sometimes have an "alternative path". These paths try to be shorter than the main path and overall have more bonus rooms and loot. I've planned for the player to need to fulfil some requirement to enter such a path. Because we've already created a graph of every possible room, it's just a matter of luck whether a room has a special room connected to it.

Rendering it out

Once the room configurations are made, we need to create the geometry and collisions of the level. Before going with the BSP approach, I tried to use cellular automata to generate cave-like structures... It wasn't controllable enough, but I've kept some of its code (mainly the geometry generation). Here's the cellular automata tutorial. Basically, we render each room pixel by pixel. We then cut those "holes" I talked about earlier, and voilà. Here, I've coloured each room to give a better idea of which kind of room each one is: blue rooms are part of the alternate path I mentioned before; the green and red rooms are the starting and ending rooms, respectively; yellow rooms are bonus rooms. Afterward, it's only a matter of placing props and enemies. This method of creating levels is cool: it makes the game more flexible and interesting. It also all depends on luck, which is a stat that can change in-game.

jb-dev


Updates

Hi guys, I have bad news and good news. The bad news is that the game I was working on is kinda canceled. However, this is not the end of this endeavor: I've chosen to continue development on my own, with Unity. It turns out that the previous engine we tried was missing a lot of key components that our project needed. Pair this with other interpersonal problems, and our team dissolved. I, however, still believe in the idea. The feedback was somewhat positive, and I'm sure that if I give my 100% on this, I might get a chance to make you believe in it too. So hang on tight, because the ride is far from over.

jb-dev

Design Windows-like pause menu and color computations

So the game we're developing is growing steadily. We've already added a pause menu. Its function is to be a... pause menu. (I don't know what you expected...)

The design

Because our game is going to be really Vaporwave, we knew that the visuals had to be on point. We ended up trying a classic Windows-like design with header bars and everything, but with a twist... Here's the result: (Note that not all buttons are fully functioning. Also, the seed isn't actually used in the generation.) The idea is that this window appears when the player pauses. It takes the form of a popup with fully functional tabs. We also plan to let the player easily navigate through the tabs with keyboard shortcuts (or buttons, in the case of a controller). In addition, our game uses palettes, so the GUI elements are colored according to the active palette. Here's an example with a different palette:

LESS-like color computations

You may have noticed that there is a difference between each palette (for example, the title of the window has changed color). This is done by a beautiful library that I built for our project. Because I was a web developer for about 2 years, I already knew (and had worked with) CSS compilers like SASS and LESS. My library is strongly inspired by these compilers. Especially LESS.

The luma value

For this reason, I knew there was a way to know whether text of a given color would be readable when displayed on a given background. This feature is present in vanilla LESS: it's called "contrast". That function uses the luma values (sometimes called "relative lightness" or "perceived luminance") of colors. This small practical value describes the perceived luminance of a color, which means that particularly brightly perceived colors (such as lime green or yellow) get higher luma values than other colors (red or brown) despite their actual lightness values. Here's how I compute a given color's luma value:

Color color = Color.GREEN; // Fictional class, but it stores each component as a floating-point value from 0 to 1
float redComponent, blueComponent, greenComponent;

if (color.r < 0.03928f) {
    redComponent = color.r / 12.92f;
} else {
    redComponent = Math.pow((color.r + 0.055f) / 1.055f, 2.4f);
}

if (color.g < 0.03928f) {
    greenComponent = color.g / 12.92f;
} else {
    greenComponent = Math.pow((color.g + 0.055f) / 1.055f, 2.4f);
}

if (color.b < 0.03928f) {
    blueComponent = color.b / 12.92f;
} else {
    blueComponent = Math.pow((color.b + 0.055f) / 1.055f, 2.4f);
}

float luma = (0.2126f * redComponent) + (0.7152f * greenComponent) + (0.0722f * blueComponent);

The actual luma computation is fully described here. With that luma value, we can then check and compare the contrast between 2 colors:

float backgroundLuma = getLuma(backgroundColor) + 0.05f;
float textLuma = getLuma(textColor) + 0.05f;
float contrast = Math.max(backgroundLuma, textLuma) / Math.min(backgroundLuma, textLuma);

With that, we can choose between two colors by picking the one with the highest contrast against the background:

Color chosenTextColor = getContrast(backgroundColor, lightTextColor) > getContrast(backgroundColor, darkTextColor) ? lightTextColor : darkTextColor;

This gives us a lot of flexibility, especially since we use many different color palettes in our game, each with different luma values. This, along with more of LESS's nice color functions, can make coloring components a breeze. Just as an example, I've inverted our color palette texture, and these are the results: Yes, it looks weird, but notice how each component is still fully readable.
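Since the snippet above uses a fictional Color class, here's the same luma/contrast logic as it might look in Unity C# (my own straightforward translation, not the actual library code):

using UnityEngine;

public static class ColorContrast
{
    // Relative luminance ("luma") of a color with 0..1 components.
    public static float GetLuma(Color c)
    {
        return 0.2126f * Linearize(c.r) + 0.7152f * Linearize(c.g) + 0.0722f * Linearize(c.b);
    }

    private static float Linearize(float channel)
    {
        return channel < 0.03928f
            ? channel / 12.92f
            : Mathf.Pow((channel + 0.055f) / 1.055f, 2.4f);
    }

    public static float GetContrast(Color a, Color b)
    {
        float lumaA = GetLuma(a) + 0.05f;
        float lumaB = GetLuma(b) + 0.05f;
        return Mathf.Max(lumaA, lumaB) / Mathf.Min(lumaA, lumaB);
    }

    // Picks whichever text color is more readable on the given background.
    public static Color PickTextColor(Color background, Color light, Color dark)
    {
        return GetContrast(background, light) > GetContrast(background, dark) ? light : dark;
    }
}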
Paired with our dynamic palettes, color computation is now really easy and flexible.

jb-dev

OpenGL Analog video post-process filter

In order to improve the aesthetics, we looked for tips on post-processing filters for our engine and came up with the idea of using a VHS/analog post-processing filter. Because my teammate had already built OpenGL shaders in the past (it's kind of his hobby), he gave me the link to Shadertoy. This site is amazing! There are a lot of shaders to use as a base we can build on, and it's also 100% web-based thanks to WebGL. This shader in particular caught my eye: it's really cool, and yet there are no VHS artifacts that can really obstruct the player's view. So I did a little tinkering with jMonkeyEngine and got this result: I'm really happy with the results. I could, however, reduce the blur amount: it can be annoying if it's too high...

jb-dev

Music New Vaporwave music track

As part of our vaporwave roguelite game, it's actually mandatory to have some kind of music...

Here's a first look:
EDIT: There's another one:

jb-dev

Design Relics, or permanent passive items

The game we're currently making is a rogue-lite. This means we're gonna have some kind of upgrades during gameplay. We chose to take the "Binding of Isaac" way and went along with passive items. We call them "Relics". Anyway, here are some renderings:

jb-dev

Concept Low poly meshes

We made some design concepts for trees and foods. It's pretty low poly, but that's the point! We plan to use foods as temporary stat boosts. Some might even be in trees...

jb-dev

Behavior Steering behaviors: Seeking and Arriving

Steering behaviors are used to maneuver AI agents in a 3D environment. With these behaviors, agents are able to better react to changes in their environment. While the navigation mesh algorithm is ideal for planning a path from one point to another, it can't really deal with dynamic objects such as other agents. This is where steering behaviors can help.

What are steering behaviors?

Steering behaviors are an amalgam of different behaviors that are used to organically manage the movement of an AI agent. For example, behaviors such as obstacle avoidance, pursuit and group cohesion are all steering behaviors... Steering behaviors are usually applied in a 2D plane: it is sufficient, and easier to implement and understand. (However, I can think of some use cases that require the behaviors to be in 3D, like games where the agents fly to move.) One of the most important of all steering behaviors is the seeking behavior. We also added the arriving behavior to make the agent's movement a whole lot more organic. Steering behaviors are described in this paper.

What is the seeking behavior?

The seeking behavior is the idea that an AI agent "seeks" to have a certain velocity (vector). To begin, we'll need 2 things: an initial velocity (a vector) and a desired velocity (also a vector). First, we need to find the velocity needed for our agent to reach a desired point. This is usually the desired position minus the current position of the agent:

\(\overrightarrow{d} = (x_{t}, y_{t}, z_{t}) - (x_{a}, y_{a}, z_{a})\)

Here, a represents our agent, t our target, and d is the desired velocity.

Secondly, we must also find the agent's current velocity, which is usually already available in most game engines. Next, we need the vector difference between the desired velocity and the agent's current velocity: it literally gives us the vector that, when added to the agent's current velocity, yields the desired velocity. We will call it the "steering velocity":

\(\overrightarrow{s} = \overrightarrow{d} - \overrightarrow{c}\)

Here, s is our steering velocity, c is the agent's current velocity and d is the desired velocity.

After that, we truncate our steering velocity to a length called the "steering force". Finally, we simply add the steering velocity to the agent's current velocity:

// truncateVectorLocal truncates a vector to a given length
Vector3f currentDirection = aiAgentMovementControl.getWalkDirection();
Vector3f wantedDirection = targetPosition.subtract(aiAgent.getWorldTranslation()).normalizeLocal().setY(0).multLocal(maxSpeed);

// We steer toward our wanted direction
Vector3f steeringVector = truncateVectorLocal(wantedDirection.subtract(currentDirection), steeringForce);
Vector3f newCurrentDirection = MathExt.truncateVectorLocal(
        currentDirection.addLocal(
                MathExt.truncateVectorLocal(wantedDirection.subtract(currentDirection), m_steeringForce)
                       .divideLocal(m_mass)),
        maxSpeed);

This computation is done frame by frame: the steering velocity becomes weaker and weaker as the agent's current velocity approaches the desired one, creating a kind of interpolation curve.

What is the arriving behavior?

The arriving behavior is the idea that an AI agent that "arrives" near its destination gradually slows down until it gets there. We already have a list of waypoints, returned by the navigation mesh algorithm, that the agent must cross to reach its destination. When it has passed the second-to-last point, we activate the arriving behavior.
When the behavior is active, we check the distance between the destination and the current position of the agent and change its maximum speed accordingly:

// This is the initial maxSpeed
float maxSpeed = unitMovementControl.getMoveSpeed();

// It's the last waypoint
float distance = aiAgent.getWorldTranslation().distance(nextWaypoint.getCenter());
float rampedSpeed = aiAgentMovementControl.getMoveSpeed() * (distance / slowingDistanceThreshold);
float clippedSpeed = Math.min(rampedSpeed, aiAgentMovementControl.getMoveSpeed());

// This is our new maxSpeed
maxSpeed = clippedSpeed;

Essentially, we slow down the agent until it gets to its destination. (For a rough C# transcription of both behaviors, see the sketch after the links below.)

The future?

As I'm writing this, we've chosen to split up the implementation of the steering behaviors and implement only the bare necessities, as we have no empirical evidence that we'll need all of them. Therefore, we only implemented the seeking and arriving behaviors, delaying the rest to an indeterminate time in the future. So, when (or if) we need them, we'll already have a solid and stable foundation to build upon.

More links
Understanding Steering Behaviors: Seek
Steering Behaviors · libgdx/gdx-ai Wiki
Understanding Steering Behaviors: Collision Avoidance
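For readers more at home in Unity than jMonkeyEngine, here's a rough C# transcription of the seek-plus-arrive logic above (the names and frame-rate handling are my own; this is a sketch under those assumptions, not the project's code):

using UnityEngine;

// Sketch: seek steering with an arriving slowdown, updated every frame.
public class SeekArriveAgent : MonoBehaviour
{
    public Transform target;
    public float maxSpeed = 5f;
    public float steeringForce = 10f;
    public float mass = 1f;
    public float slowingDistance = 3f;

    private Vector3 m_velocity;

    private void Update()
    {
        Vector3 toTarget = target.position - transform.position;
        toTarget.y = 0f;  // steer in the 2D plane, as discussed above
        float distance = toTarget.magnitude;

        // Arriving: ramp the allowed speed down inside the slowing radius.
        float rampedSpeed = maxSpeed * (distance / slowingDistance);
        float clippedSpeed = Mathf.Min(rampedSpeed, maxSpeed);

        Vector3 desiredVelocity = toTarget.normalized * clippedSpeed;

        // Seeking: steer from the current velocity toward the desired one,
        // truncating the correction to the steering force.
        Vector3 steering = Vector3.ClampMagnitude(desiredVelocity - m_velocity, steeringForce);
        m_velocity = Vector3.ClampMagnitude(m_velocity + (steering / mass) * Time.deltaTime, maxSpeed);

        transform.position += m_velocity * Time.deltaTime;
    }
}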

jb-dev


Concept GUI Mockup 2

After a play-test event at our workplace, we decided to re-evaluate our priorities and begin to gradually modify our backlog accordingly. So I decided to create a new HUD sketch that is an evolution of our previous design. There is now a queue that shows context-based notifications (for example, when the player goes to a store or loots money, the queue shows the player's current amount of money). There is also a time limit / boss life bar that is big enough to be noticed by the player.

jb-dev

Design Idea: Iridescent shader

In our brainstorming, we had the idea of a type of item dropped by enemies that would have a very particular look: it would reproduce bismuth and, in particular, its iridescence.

What is iridescence?

Remember CDs? The underside of a CD has some trippy colors that change based on the angle you're looking at it from. That type of effect is called iridescence. Many other things have that kind of effect: bubbles, some metals and even some bugs (especially beetles). Wikipedia defines iridescence as the phenomenon of certain surfaces that appear to gradually change color as the angle of view or angle of illumination changes.

In order to reproduce the visual qualities of bismuth, we must find out how to recreate this effect in a shader. One of my hypotheses is that we could do it with the normals and the viewing angle. I'm not an expert in shader writing, but I'm sure that's possible...
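As a very rough, CPU-side way to prototype that hypothesis (a real implementation would live in a fragment shader, per-pixel), here's a C# sketch that shifts an object's hue with the view angle. Everything here is my own illustration of the idea, not a working iridescence shader:

using UnityEngine;

// Sketch: fakes iridescence per-object by mapping the view angle to a hue.
// A proper version would do this per-pixel in a shader using the surface normal.
public class IridescentTint : MonoBehaviour
{
    public Renderer targetRenderer;

    private void Update()
    {
        Vector3 toCamera = (Camera.main.transform.position - transform.position).normalized;

        // Use the object's up axis as a crude stand-in for the surface normal.
        float facing = Vector3.Dot(transform.up, toCamera);   // -1..1

        // Map the viewing angle to a hue, so the color drifts as you move around.
        float hue = Mathf.Repeat(0.5f + 0.5f * facing, 1f);
        targetRenderer.material.color = Color.HSVToRGB(hue, 0.6f, 1f);
    }
}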

jb-dev

Design Dynamic color palettes

When we started our game, we already knew it was going to be really abstract. Therefore, we also knew that, in terms of shaders, it would be a real challenge. However, because we use jMonkey Engine (which is a shader-oriented engine), we also knew that doing a custom shader with it was going to be a piece of cake.

The Idea

I used to be an avid TF2 player some time ago, and I also started watching TF2 YouTubers just for fun. The fact is that, surprisingly, some of these creators were trying to diversify their channels. Enter FUNKe, one of my favorite TF2 YouTubers. You see, he started as a normal TF2 content creator but later tried to diversify the topics of his videos. One of these videos was on the anime Jojo's Bizarre Adventure. (I'm not a fan of anime, but if I had to watch one, it would probably be that one.) He said that the anime has a really abstract idea of color palettes: in some scenes, a character could be colored green, while in the next, it is colored purple. That gave me an idea...

A Color Palette That Can Change

The idea is to use a default color palette for each model and then, with a globally defined setting, dynamically change the specific palette used. In this way, we can change our palette according to the current level or even when events occur in game. For example, this means that a sword can actively change its colors each time the player changes level. This can be really flexible. With some cleverness, we can make a truly abstract and unique game.

In Practice

All color palettes are stored on a single 512x512 texture where each pixel is a color. The coordinates are chosen according to whether the mesh is static or generated. Here's an example of a single palette: Keep in mind that it's zoomed in: each and every one of these squares is supposed to be a single pixel. In our code, we load the material of the palette only once and apply it to almost all our meshes:

paletteMaterial = assetManager.loadMaterial("path/to/palette/material/definition.j3md"); // The palette material

TextureKey atlasTextureKey = new TextureKey("path/to/palette/texture.png", false);
m_atlasPalette = assetManager.loadTexture(atlasTextureKey); // The palette texture

For Static Meshes

When we model our static meshes, we make sure that their UV mapping is properly aligned with the appropriate pixel in the palette's texture. Here is a simple sword model in Blender. We can see that even though the UV map is really distorted, it fits well and is well aligned in the texture. Because our game is low poly and doesn't use detailed textures, there is no reason to clean up these UV maps. In Blender, there is a filter that changes the way our texture is displayed. Because we're going to change that in our code, it doesn't really matter. We can actually fix it in our code, and it's very easy:

// The palette texture is loaded manually to override jMonkey's default filters
paletteTexture.setMinFilter(Texture.MinFilter.NearestNoMipMaps);
paletteTexture.setMagFilter(Texture.MagFilter.Nearest);

We do that kind of UV mapping for (almost) all of our static meshes.

For Generated Meshes

Our game generates the meshes used in our organic dungeon, but their UV mapping is quite basic... To make them compatible with our shader, we must explicitly modify the UV mapping of these meshes. Because our texture is actually an amalgam of all palettes, we take the first palette (which is our default palette) and use its UV coordinates to map our generated meshes.
To help with that, we made some enums that store the UV coordinates of these colors. Technically speaking, in this case we use the middle of each pixel as the UV coordinate. After having generated our mesh, we use a float buffer to store the UV coordinates of each of the mesh's triangles. That's why we need those enums. Here's the code we use to find those UVs:

public static FloatBuffer createDungeonTextureBuffer(FloatBuffer normalBuffer) {
    FloatBuffer textureBuffer = (FloatBuffer) VertexBuffer.createBuffer(VertexBuffer.Format.Float,
            TEXTURE_BUFFER_COMPONENT_COUNT, normalBuffer.capacity() / NORMAL_BUFFER_COMPONENT_COUNT);

    float skyUCoordinate = AtlasPaletteColor.SKY.getBaseTint().getUCoordinate();
    float skyVCoordinate = AtlasPaletteColor.SKY.getBaseTint().getVCoordinate();
    float soilUCoordinate = AtlasPaletteColor.SOIL.getBaseTint().getUCoordinate();
    float soilVCoordinate = AtlasPaletteColor.SOIL.getBaseTint().getVCoordinate();
    float wallUCoordinate = AtlasPaletteColor.WET_DETAIL.getBaseTint().getUCoordinate();
    float wallVCoordinate = AtlasPaletteColor.WET_DETAIL.getBaseTint().getVCoordinate();

    Vector3f normal = new Vector3f();
    while (textureBuffer.position() < textureBuffer.capacity()) {
        normal.set(normalBuffer.get(), normalBuffer.get(), normalBuffer.get());

        // Ground
        if (Direction3D.TOP.getDirection().equals(normal)) {
            textureBuffer.put(soilUCoordinate);
            textureBuffer.put(soilVCoordinate);
        }
        // Ceiling
        else if (Direction3D.BOTTOM.getDirection().equals(normal)) {
            textureBuffer.put(skyUCoordinate);
            textureBuffer.put(skyVCoordinate);
        }
        // Walls
        else {
            textureBuffer.put(wallUCoordinate);
            textureBuffer.put(wallVCoordinate);
        }
    }

    return textureBuffer;
}

With this, we can choose the UV based on the triangle's normal.

The Shader

The shader itself is quite simple. We took the generic shader provided with jMonkey Engine and added some uniforms and constants here and there. We take the dimensions of a palette (which is 8 x 5) and shift the texture coordinates with this piece of code in our fragment shader:

/* In our fragment shader */
const int m_PaletteWorldWidth = 8;
const int m_PaletteWorldHeight = 5;
uniform int m_PaletteWorldIndex;

// Later in the shader...
ivec2 textureDimensions = textureSize(m_DiffuseMap, 0);
newTexCoord.x += float(m_PaletteWorldWidth) * float(m_PaletteWorldIndex) / float(textureDimensions.x);

We can then change the used palette by setting the PaletteWorldIndex uniform in the code:

// The palette material is loaded manually to be able to change its PaletteWorldIndex value
paletteMaterial.setInt("PaletteWorldIndex", paletteIndexUsed); // paletteIndexUsed is usually an enum value

Changing the paletteIndexUsed value to a different one does that: And then we change the value of PaletteWorldIndex to 2: Although the colors are similar, they are also technically different. This can give us a lot of flexibility, but we still have to be careful: we still need to evoke the right emotions by using the right color at the right place at the right time. Otherwise, it can harm our artistic style. We also need to maintain some visual consistency for the most crucial meshes. For example, our white crystal there could possibly be colored, and this color could be very important for the gameplay.

jb-dev


Design GUI mockups

I've come up with some GUI ideas for a Vaporwave roguelite I'm making with @thecheeselover
I'm trying to find a really useful and clever way to display information while keeping the AESTHETICS up... This is a kind of main view displaying health, mana and an enemy's health:

jb-dev
