Entries in this blog

Major graphics overhaul

Dev. update: Major graphics overhaul

Over the past few weeks, Close Quarters received a cosmetic makeover. See the “before” version live (networking disabled), and see the “after” version live. I think it’s a major improvement! In this post, I discuss some of the details, technical and otherwise, of this change.

The need for better graphics

My original graphics philosophy for Close Quarters was minimalism: simple, smooth graphics that give the player only the information that affects gameplay. However, as time went by, I began to see the simple graphics as a detriment rather than a positive. Some people to whom I showed the game were immediately dismissive because it “has no graphics”, so I began to fear that the vector graphics would give the impression that the game is a low-effort product and repel casual players. I also became more sensitive to the lack of visual variation: environments that consist only of line segments have no visual theme and all look alike. Hence, I decided to revamp the rendering engine to support textured polygons and other embellishments.

Graphic design principles

The paramount principle I wanted to maintain from the original graphics was clarity. I found inspiration in architectural and urban designs, for example:

The desire for clarity influenced several choices. I decided that all traversable space should feature very light and subtle textures so that players, projectiles, and effects (which were originally all designed for a white background) still stand out. See, for example, the following two textures:

The original choice (left) has too many lines and details and is too dark, so it obscures bullets and other gameplay elements passing over it. Hence, I replaced it with the less complex and lighter texture on the right. Similarly, I decided that solid polygons should generally have much darker colours so that they are easily distinguishable from traversable space. I also increased the rendering size of projectiles so that they are more apparent against a textured background.

Finding resources

Finding textures subtle enough not to interfere with clarity isn’t easy.
The two most useful resources I found were Subtle Patterns by Toptal and SketchUp Texture Club, both of which offer textures for free.

Rendering system

The internal workings of the new rendering system can be summarised as follows:

1. Upon the loading of the map, all polygons are triangulated individually.
2. Memory is then allocated for a number of arrays, one for each layer/texture. Enough memory is allocated that each array can hold the integer indices of every triangle bearing the corresponding texture.
3. The triangles are then added to the spatial partitioning system. This system is a grid, and each cell ultimately contains a list of the indices of the triangles that intersect it.
4. At run-time, the spatial partitioning system is queried with the axis-aligned bounding box of the player’s view, whereupon it adds the indices of all visible triangles to the aforementioned arrays, taking care that triangles spanning multiple grid cells do not have their indices added to the relevant array multiple times.
5. These arrays (or, rather, the triangles whose indices are contained therein) are then rendered sequentially via a batching system that copies 5,000 vertices to the GPU at a time.

Rendering shadows

The most interesting aspect of the new graphics system is probably the soft shadows, which give everything a sense of volume. The shadows were also the most time-consuming aspect to produce, primarily because Close Quarters supports three graphics APIs, namely OpenGL, WebGL 1.0, and WebGL 2.0, and each one has its own idiosyncrasies.

I decided that the map designer should manually plot shadow polygons. Though tedious, this allows for some depth effects that would be difficult to achieve if the shadows were generated programmatically, because the game stores only very basic height information for each polygon.

To render the shadows, I first render the shadow polygons in view in solid black to an offscreen, transparent framebuffer. I later render the content of that buffer to the screen using a one-pass blur fragment shader and a low alpha value:

```glsl
precision mediump float;

varying vec4 vcolor;
varying vec2 vtexcoords;

uniform sampler2D texture;

const float blurinc = 0.002;

void main()
{
    // 3x3 Gaussian kernel: corners weighted 1/16, edges 1/8, centre 1/4.
    vec4 c0 = texture2D( texture, vtexcoords + vec2( -blurinc, -blurinc ) ) * 0.0625;
    vec4 c1 = texture2D( texture, vtexcoords + vec2( 0.0, -blurinc ) ) * 0.125;
    vec4 c2 = texture2D( texture, vtexcoords + vec2( blurinc, -blurinc ) ) * 0.0625;
    vec4 c3 = texture2D( texture, vtexcoords + vec2( -blurinc, 0.0 ) ) * 0.125;
    vec4 c4 = texture2D( texture, vtexcoords + vec2( 0.0, 0.0 ) ) * 0.25;
    vec4 c5 = texture2D( texture, vtexcoords + vec2( blurinc, 0.0 ) ) * 0.125;
    vec4 c6 = texture2D( texture, vtexcoords + vec2( -blurinc, blurinc ) ) * 0.0625;
    vec4 c7 = texture2D( texture, vtexcoords + vec2( 0.0, blurinc ) ) * 0.125;
    vec4 c8 = texture2D( texture, vtexcoords + vec2( blurinc, blurinc ) ) * 0.0625;

    gl_FragColor = vcolor * ( c0 + c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 );
}
```

Usually, a Gaussian blur would
be done in two passes, because that allows each pixel to be rendered via N+N texture samples instead of N*N samples. In my case, the blur kernel is only 3x3 in size, so the difference between a one-pass and a two-pass blur is only the difference between nine and six samples, and the gain from the two-pass approach might be negated by the need for an additional framebuffer and the fact that I would be rendering twice.

One thing anyone attempting to replicate this approach to rendering shadows should realise is that the offscreen buffer should support multisampling. Otherwise, the shadow polygons’ edges will “snap” to the nearest pixel and therefore appear to “dance” very slightly as the view scrolls around. As WebGL 1.0 does not support multisampled framebuffers, this undesirable effect can currently be seen in Close Quarters when played in Microsoft Edge, which does not support WebGL 2.0.

Also note that a similar soft-shadow effect can be achieved without relying on a blur shader. Once the shadows are rendered in solid black into the offscreen buffer, that buffer can be rendered to the backbuffer a dozen or so times with a very low alpha value, slightly offset each time in a circular pattern, resulting in blurred edges. The advantage of this approach is code simplicity, but I found its performance erratic in WebGL: sometimes it had no discernible performance impact, and other times it caused the frame rate to drop from 60 to about 45.

New gameplay possibilities

While this “makeover” is mainly cosmetic, it does open new possibilities regarding gameplay mechanics. Firstly, the addition of shadows makes it easier to represent depth, so by carefully crafting the environment, it is now possible to create spaces that appear higher than others:
Longer shadows on the right building suggest greater height.

Moving into such spaces could cause the player’s view to zoom out slightly, giving an advantage in terms of seeing distance. Secondly, top-level overlays can provide concealment, a mechanic that exists in the game surviv.io. Currently, such concealment is possible under treetops in Close Quarters:

Going forward

Of course, there is still much room for graphical improvement. Some elements, such as the explosions, looked good when paired with the old, simple graphics but now look too basic, so I will have to make an effort to restore congruence across all the different visual elements. Also, the additional rendering seems to be having side effects on Web Audio, such that a crackling noise can now be heard in some clips – an issue that could be related to JavaScript’s lack of multithreading and will need to be explored.
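As a footnote to the rendering-system steps above, the per-view triangle collection (step 4, with its guard against duplicate indices) can be sketched as follows. This is a minimal sketch with assumed names, not the actual engine code; a per-query stamp stops a triangle that spans several grid cells from being collected twice:

```python
class TriangleGrid:
    """Grid-based spatial partition over triangle indices (a sketch)."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}      # (cx, cy) -> list of triangle indices
        self.stamps = {}     # triangle index -> id of the last query that saw it
        self.query_id = 0

    def _cells_overlapping(self, min_x, min_y, max_x, max_y):
        # Yield every grid cell intersected by an axis-aligned bounding box.
        cs = self.cell_size
        for cx in range(int(min_x // cs), int(max_x // cs) + 1):
            for cy in range(int(min_y // cs), int(max_y // cs) + 1):
                yield (cx, cy)

    def insert(self, tri_index, bbox):
        # Register the triangle in every cell its bounding box touches.
        for key in self._cells_overlapping(*bbox):
            self.cells.setdefault(key, []).append(tri_index)

    def query(self, view_bbox):
        """Collect each visible triangle index at most once for this view."""
        self.query_id += 1
        visible = []
        for key in self._cells_overlapping(*view_bbox):
            for tri in self.cells.get(key, ()):
                if self.stamps.get(tri) != self.query_id:
                    self.stamps[tri] = self.query_id
                    visible.append(tri)
        return visible
```

The stamp trick avoids clearing a “seen” set between frames; only the counter is bumped per query.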

Antipersonnel mines, sonar, and a burst-fire rifle

Mini dev. update: Antipersonnel mines, sonar, and a burst-fire rifle

This week sees the addition of equipment that players can select and deploy for a tactical advantage. Proximity mines can be tossed onto walls to await unsuspecting enemies:

Sonar devices emit pulses that reveal concealed enemies and allies:

Also new is a burst-fire rifle. This rifle delivers three shots in quick succession and can therefore damage enemies faster than any other weapon. However, the interval between bursts and the weapon’s poor “spray and pray” performance mean that shooters must aim accurately and make especially effective use of cover. Here’s a clip of the weapon in action:

This clip also serves as a spoiler for what to expect in the next major update.

Other major, albeit less obvious, changes in this update are that bullet speed has been increased so that it’s easier to hit targets, and that the AI’s reflexes and accuracy have been curbed so that enemies are a little less formidable and more fun to play against.
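The burst-fire behaviour described above can be sketched as a small timing state machine. All names and timing values here are assumptions for illustration, not the game’s actual numbers: a pulled trigger queues three shots a short interval apart, then a longer cooldown separates bursts:

```python
BURST_SIZE = 3
SHOT_INTERVAL = 0.08    # assumed seconds between shots within a burst
BURST_COOLDOWN = 0.5    # assumed seconds between bursts

class BurstRifle:
    def __init__(self):
        self.shots_left = 0    # shots remaining in the current burst
        self.timer = 0.0       # time until the next allowed action

    def trigger(self):
        """Start a burst if the previous one has finished cooling down."""
        if self.shots_left == 0 and self.timer <= 0.0:
            self.shots_left = BURST_SIZE

    def update(self, dt):
        """Advance time by dt; returns True on each step that fires a shot."""
        self.timer -= dt
        if self.shots_left > 0 and self.timer <= 0.0:
            self.shots_left -= 1
            # After the last shot, switch to the longer inter-burst cooldown.
            self.timer = SHOT_INTERVAL if self.shots_left else BURST_COOLDOWN
            return True
        return False
```

The point of the design is that holding the trigger cannot shorten the inter-burst gap, which is exactly what forces the accurate, deliberate aiming described above.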

Try the in-development version of Close Quarters here right from your browser.

Guns, bullets, and explosions

Mini dev. update: Guns, bullets, and explosions

Play Close Quarters here.

New weapons: A shotgun and a suppressed assault rifle

In Close Quarters, because of the top-down perspective and the delayed feedback due to bullet travel time, weapons have less of a “feel” than they do in first-person shooters, where recoil is tied to the camera orientation. Hence, it’s important that each weapon has a clear defining characteristic. The new shotgun fires five bullets at once and is deadly at close range but a mere annoyance at long range. The new suppressed assault rifle, on the other hand, is quieter than the other weapons. But because the advantage of muffled gunfire is small in a game wherein the bullets themselves are visible and therefore give away the shooter’s position, the suppressed assault rifle has a secondary benefit: it also muffles the sound of the player’s footsteps.

New damage model and communicating it to the player

The introduction of the shotgun required a new damage model whereby bullets do less damage at greater distances (depending on the weapon). The question is: how can we communicate this damage model to the player? I combined three solutions. The first, and most subtle, is to make bullets more transparent as the damage they do declines. The second is to fade the player’s aiming indicator as the damage his or her bullets would do at a given distance declines. The effect is most evident for the shotgun:
The third, and most overt, is to directly show players the damage model with a small graph when they select a weapon:
New explosions and muzzle flashes

Explosions and muzzle flashes have been embellished with a sprite-based smoke effect to give guns and grenades a more powerful feel (see video above).
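The distance-based damage model and the matching bullet transparency can be sketched like this. The curve shape and every number here are assumptions for illustration, not the game’s actual values: full damage up to a near range, then a linear falloff to a damage floor, with the bullet’s alpha tracking the remaining damage fraction:

```python
def damage_at(distance, full_damage, near_range, far_range, floor=0.2):
    """Damage dealt at `distance`: full up to near_range, then a linear
    falloff down to `floor` (a fraction of full damage) at far_range."""
    if distance <= near_range:
        return full_damage
    if distance >= far_range:
        return full_damage * floor
    t = (distance - near_range) / (far_range - near_range)
    return full_damage * (1.0 - t * (1.0 - floor))

def bullet_alpha(distance, near_range, far_range, floor=0.2):
    """Bullet opacity proportional to the remaining damage fraction,
    so a weakening bullet visibly fades out."""
    return damage_at(distance, 1.0, near_range, far_range, floor)
```

Driving both the damage and the alpha from the same function is what keeps the visual cue honest: the bullet looks exactly as dangerous as it is.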

Close Quarters Development: Realistic Combat AI Part I

Close Quarters Development: Realistic Combat AI Part I

Although Close Quarters is primarily a multiplayer game, it must feature challenging AI bots so that players can still have fun when their internet connection is poor or no other players are online. Bots will also play an important subsidiary role in some game modes. Hence, the bots must behave believably and manifest a range of complex behaviours, such as using cover, using items at opportune times, flanking, and throwing and escaping grenades.

The environment and the constraints

The game environments consist of polygons. Most polygons block movement, sight, and gunfire, though there are some “low” polygons that only block movement. The environments are densely packed with obstacles and cover options.

The AI is also constrained by several technical factors. Most importantly, the server – which runs the bots when few players are online – must perform well on an inexpensive VPS with at least ten bots. Moreover, CPU usage should be low enough to run multiple server instances on the one VPS without maxing out the CPU allotment and thereby irritating the VPS provider.
Figure 1: Environments
Figure 2: A player’s view, with obstacles blocking sight (grey areas are obscured)

Navigation and visibility mesh and tactical pathfinding

The bots rely on a dense navigation mesh consisting of discrete points. The mesh is generated by first expanding and combining the polygons constituting the environment. Extra nodes are added near corners, as these locations are the most likely to offer sensible cover positions. The resulting walkable space is then triangulated to generate the mesh.
Figure 3: The navigation mesh

As the bots need to perform tactical pathfinding, the navigation mesh must hold visibility data that allows us to quickly test whether a node has cover from a given enemy. Each node therefore contains an array of 24 bytes, each representing the precomputed approximate distance between the node and the nearest sight-blocking obstacle in a certain direction.
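To make the layout concrete, here is a minimal sketch of how such per-node visibility data can be queried. The names, the even 15-degree spacing of the 24 directions, and the exposure rule are my assumptions for illustration, not the game’s actual code:

```python
import math

NUM_DIRECTIONS = 24   # one precomputed obstacle distance per 15 degrees

def direction_index(dx, dy):
    """Map a direction vector to the nearest of the 24 precomputed slots."""
    angle = math.atan2(dy, dx) % (2.0 * math.pi)
    return int(round(angle / (2.0 * math.pi) * NUM_DIRECTIONS)) % NUM_DIRECTIONS

def is_exposed(node_pos, sight_distances, enemy_pos):
    """True if the enemy is closer to the node than the nearest
    sight-blocking obstacle stored for that direction."""
    dx = enemy_pos[0] - node_pos[0]
    dy = enemy_pos[1] - node_pos[1]
    distance = math.hypot(dx, dy)
    return distance < sight_distances[direction_index(dx, dy)]
```

In a tactical A* search, nodes for which such a test returns true would have their traversal cost multiplied by a penalty, steering paths through cover without per-node line-of-sight checks.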
Figure 4: Visibility data (blue lines) stored in a node. Each line represents a one-byte value indicating visibility in a given direction.

With this data, it is possible to perform A* graph searches on the navigation mesh wherein nodes that are probably exposed to known enemies are penalized, without performing many costly line-of-sight checks. To test whether a given node is exposed to an enemy, we determine the direction and distance from the node to that enemy and then check whether that distance is lower than the one stored in the array element corresponding most closely to that direction. Additionally, we may check whether the enemy is facing the node. When we apply a penalty multiplier to the traversal cost of exposed nodes, the resulting path tends to avoid such nodes.

The same visibility data can be used for other “tactical” actions besides pathfinding between two predetermined points. For example, a bot seeks cover by executing a breadth-first search and stopping once it finds a covered node. Actual line-of-sight checks are used to verify that a node that seems to provide cover actually does. Similarly, we can generate flanking attack paths by executing an A* search towards the target while heavily penalizing exposed nodes within the target’s firing cone and stopping once we reach an exposed node outside of that firing cone. (One problem with this approach is that bots left out of sight tend to constantly advance towards their target and therefore seem too aggressive, though this could perhaps be solved by adjusting the A* heuristic to guide the bot not directly towards the target but towards any node a preferred distance away from it.)

Bot senses and memory

For the bots to behave believably, they must not be seen to “cheat”. In other words, the information a bot works with should be similar to the information a player would have in its place. For example, an enemy behind an obstacle should be hidden from the bot just as it would be hidden from a player.
There are two ways a player might discover an enemy’s position: they may see the enemy, or they may hear the enemy moving, shooting, or performing some other action. Each bot maintains a list of known “facts” about enemies’ positions and orientations. These facts expire after ten seconds without updates. A fact pertaining to a given enemy is updated whenever the bot can see or hear that enemy. To simulate uncertainty, when a bot hears an enemy, the resulting position of the relevant fact is offset from the enemy’s actual position in a random direction and by a distance based on how close the sound was to the bot (see video at 1:28).
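The fact store described above can be sketched as follows. The names, structure, and the error scaling are my assumptions, not the game’s actual code; only the ten-second expiry comes from the text:

```python
import math
import random

FACT_LIFETIME = 10.0   # seconds a fact survives without an update

class BotMemory:
    def __init__(self):
        self.facts = {}    # enemy id -> (position, orientation, timestamp)

    def on_seen(self, now, enemy_id, position, orientation):
        # Seeing an enemy records its exact position and orientation.
        self.facts[enemy_id] = (position, orientation, now)

    def on_heard(self, now, enemy_id, position, distance_to_sound):
        # Offset the recorded position in a random direction; the error
        # grows with the distance at which the sound was heard
        # (the 0.2 scale factor is an assumed value).
        angle = random.uniform(0.0, 2.0 * math.pi)
        error = random.uniform(0.0, 0.2 * distance_to_sound)
        noisy = (position[0] + math.cos(angle) * error,
                 position[1] + math.sin(angle) * error)
        self.facts[enemy_id] = (noisy, None, now)   # orientation unheard

    def expire(self, now):
        # Drop any fact that has gone ten seconds without an update.
        self.facts = {k: f for k, f in self.facts.items()
                      if now - f[2] < FACT_LIFETIME}
```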
Figure 5: Facts (pink circles) in a bot’s memory

Behaviour tree

In an earlier version of Close Quarters, the AI used STRIPS, an approach popularized by F.E.A.R. in 2005. In STRIPS, the relationships between different AI behaviours are not predetermined by the programmer. Rather, each behaviour contains a list of binary preconditions and outcomes. Each bot has a goal world state and uses an A* graph search to find a sequence of behaviours that achieves it. This approach worked well, but I felt that it was overkill for this application and better suited to AI that needs to develop elaborate plans involving many different behaviours. In most cases, I already knew the circumstances under which I wanted a bot to execute one behaviour or another, so using A* to determine that was an unnecessary waste of CPU resources. Hence, this time around, the bots use a simple decision and behaviour tree.

Each step, a bot traverses the tree from the root until it reaches a behaviour. If that behaviour is the one already being executed, the bot continues it. If not, the bot initiates the behaviour and begins running it. Some behaviours can “block”, meaning that they prevent the bot from re-traversing the tree until some condition is met. This is useful, for example, for ensuring that bots properly make it behind cover before deciding to attack. Behaviours can also “cancel” themselves, forcing a re-traversal of the tree and re-initiation of a behaviour. This is useful, for example, when a bot is fleeing a grenade and another grenade appears, compromising the chosen flight position.
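The stepping logic just described, including blocking behaviours, can be sketched like this. Class and method names are assumptions for illustration, not the game’s actual code:

```python
class Behaviour:
    blocking = False               # if True, suppress re-traversal of the tree
    def start(self, bot): pass     # called once when the behaviour is chosen
    def run(self, bot): pass       # called every AI step while active
    def done_blocking(self, bot):  # condition that releases the block
        return True

class BotAI:
    def __init__(self, decide):
        self.decide = decide       # decide(bot) -> Behaviour (the tree traversal)
        self.current = None

    def step(self, bot):
        # A blocking behaviour holds the bot until its condition is met.
        locked = (self.current is not None and self.current.blocking
                  and not self.current.done_blocking(bot))
        if not locked:
            chosen = self.decide(bot)
            if chosen is not self.current:
                self.current = chosen
                self.current.start(bot)
        self.current.run(bot)
```

A “cancel” would simply clear `self.current` (and any block) so that the next step re-traverses the tree from the root.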
Figure 6: The current decision and behaviour tree

Some secondary behaviours are coded within other, more general behaviours. For example, if a bot tries to attack an enemy and finds that the enemy is not in its suspected position, the bot estimates where the enemy might now be and calculates a new attack path without ever exiting the attack behaviour.

Distributing the load

It is not necessary for every bot to be updated every physics frame, i.e. 40 times per second. To reduce CPU usage, each bot only “thinks” 20 times per second (a figure that could be reduced further if necessary). Hence, on each physics step, the AI of only half the bots is updated.

Grenade handling

One major challenge is having the bots use grenades intelligently. Grenade handling is much more complicated than shooting because grenades can be bounced off walls, have an area effect, and take time to throw. Luckily, the bots don’t need to use grenades perfectly, just sensibly. Traditional approaches to this problem include precomputing grenade trajectories at navigation nodes, but doing so added several seconds to each map’s load time, which conflicts with my goal of having Close Quarters place players in a battle within a few seconds of opening the game. Hence, the bots look for opportunities to use grenades by calculating grenade trajectories on the fly. Each step, an attacking bot tests several viable trajectories in a given direction up to 60 degrees away from the direction to the intended target. If a grenade thrown along a tested trajectory might kill the target, and the target is out of sight, the bot throws a grenade. The test direction is cycled each AI step.
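The direction cycling just described can be sketched as follows. The number of candidate directions and all names are assumptions for illustration; only the 60-degree limit and the one-direction-per-step cycling come from the text:

```python
NUM_TEST_DIRECTIONS = 9    # assumed count of candidate directions
MAX_OFFSET_DEG = 60.0      # grenades are tested up to 60 degrees off-target

class GrenadePlanner:
    def __init__(self):
        self.step_index = 0

    def next_test_direction(self, bearing_to_target_deg):
        """Direction (degrees) to test this AI step; cycles each call."""
        i = self.step_index % NUM_TEST_DIRECTIONS
        self.step_index += 1
        # Spread the candidates evenly across [-60, +60] degrees
        # around the bearing to the target.
        step = 2.0 * MAX_OFFSET_DEG / (NUM_TEST_DIRECTIONS - 1)
        return bearing_to_target_deg - MAX_OFFSET_DEG + i * step
```

Testing only one direction per step spreads the trajectory-simulation cost over time, at the price (noted below) of occasionally missing narrow bounce opportunities such as doorframes.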
Figure 7: The directions a bot will test over the course of one second (pale pink lines) and the trajectories tested (blue circles) along the direction selected in the current step (solid pink line).

The bots’ grenade use is therefore opportunistic, as a bot will not move to a position specifically to throw a grenade. However, the path a bot chooses to attack an enemy will often also be a sensible path from which to throw one. A significant drawback is that the limited number of directions tested means that bots will miss opportunities to bounce grenades off small obstacles. This is most noticeable around doorframes, where bots usually won’t recognize an opportunity to use a grenade. The issue could be addressed by testing multiple directions each frame, thereby reducing the angle between one test direction and the next.

Towards more human behaviour

One issue that quickly became evident was that the bots were too quick on the trigger, making them very difficult to beat in a one-on-one engagement. The average human reaction time to visual stimuli is around 250 milliseconds, but at 20 steps per second, the maximum bot reaction time would be only 50 milliseconds! To address this issue, I added an intentional delay between when a bot acquires a shot and when it shoots, bringing its reaction time, insofar as shooting is concerned, more in line with a human player’s.

Further development

The above systems provide a strong foundational AI, but there is room for major improvements. For example, currently a bot’s spatial reasoning is limited to its immediate environment, so while a bot usually tries to flank its enemy, it will often miss flanking paths around larger obstacles. Similarly, the bots are only roughly aware of their teammates’ presence, so they will sometimes clump up when it makes more sense to split up and flank.
Hence, future devlogs in this series will cover:

  • Squad-level AI, including coordinated flanking and suppressing fire, and higher-level spatial reasoning
  • Deliberately assisting human teammates
  • Following a human player’s orders

You can play the in-development version of Close Quarters here and follow its development on Reddit or Twitter. This post was discussed here.