# A Practical Approach to Managing Resource States in Vulkan and Direct3D12


# Introduction

Explicit resource state management and synchronization is at once one of the main advantages and one of the main challenges that modern graphics APIs such as Direct3D12 and Vulkan present to application developers. It makes recording rendering commands very efficient, but getting state management right is a hard problem. This article explains why explicit state management is important and introduces the solution implemented in Diligent Engine, a modern cross-platform low-level graphics library. Diligent Engine has Direct3D11, Direct3D12, OpenGL/GLES and Vulkan backends and supports Windows Desktop, Universal Windows, Linux, Android, Mac and iOS platforms. Its full source code is available on GitHub and is free to use.

# Synchronization in Next-Gen APIs

Modern graphics applications can best be described as client-server systems, where the CPU is a client that records rendering commands and puts them into queue(s), and the GPU is a server that asynchronously pulls commands from the queue(s) and processes them. As a result, commands are not executed immediately when the CPU issues them, but rather some time later (typically one to two frames) when the GPU gets to the corresponding point in the queue. Besides that, GPU architecture is very different from CPU architecture because of the kinds of problems GPUs are designed to handle. While CPUs are great at running algorithms with lots of flow-control constructs (branches, loops, etc.), such as handling events in an application input loop, GPUs are more efficient at crunching numbers by executing the same computation thousands or even millions of times. Of course, that statement is a slight oversimplification, as modern CPUs also have wide SIMD (single instruction, multiple data) units that let them perform computations efficiently as well. Still, GPUs are at least an order of magnitude faster at these kinds of problems.

The main challenge that both CPUs and GPUs need to solve is memory latency. CPUs are out-of-order machines with beefy cores and large caches that use fancy prefetching and branch-prediction circuitry to make sure that data is available when a core actually needs it. GPUs, in contrast, are in-order beasts with small caches, thousands of tiny cores and very deep pipelines. They don't use any branch prediction or prefetching, but instead maintain tens of thousands of threads in flight and are capable of switching between threads instantaneously. When one group of threads waits for a memory request, GPU can simply switch to another group provided it has enough work.

When programming a CPU (when talking about CPUs I will mean x86; things may be a little more involved for ARM ones), the hardware does a lot of things that we usually take for granted. For instance, after one core has written something to a memory address, we know that another core can immediately read the same memory. The cache line containing the data will need to do a little bit of traveling through the CPU, but eventually another core will get the correct piece of information with no extra effort from the application. GPUs, in contrast, give very few explicit guarantees. In many cases, you cannot expect that a write is visible to subsequent reads unless the application takes special care. Besides that, the data may need to be converted from one form to another before it can be consumed by the next step. Here are a few examples where explicit synchronization may be required:

• After data has been written to a texture or buffer through an unordered access view (UAV, in Direct3D terminology) or a storage image (in Vulkan/OpenGL terminology), the GPU may need to wait until all writes are complete and flush the caches to memory before the same texture or buffer can be read by another shader.
• After a shadow-map rendering command is executed, the GPU may need to wait until rasterization and all writes are complete, flush the caches, and change the texture layout to a form optimized for sampling before that shadow map can be used in a lighting shader.
• If the CPU needs to read data previously written by the GPU, it may need to invalidate the corresponding cache lines to make sure it sees the updated bytes.

These are just a few examples of the synchronization dependencies that a GPU needs to resolve. Traditionally, all these problems were handled by the API/driver and hidden from the developer. Old-school implicit APIs such as Direct3D11 and OpenGL/GLES work that way. This approach, while convenient from a developer's point of view, has major limitations that result in suboptimal performance. First, the driver does not know what the developer's intent is and has to assume the worst-case scenario to guarantee correctness. For instance, if one shader writes to one region of a UAV but the next shader reads from another region, the driver must still insert a barrier to guarantee that all writes are complete and visible, because it simply cannot know that the regions do not overlap and the barrier is not actually necessary. Second, when commands are recorded in parallel, the driver cannot know a resource's state at record time, because that state depends on commands recorded into other command buffers that may be submitted earlier.

A solution to the aforementioned problems is offered by the next-generation APIs (Direct3D12 and Vulkan), which make all resource transitions explicit. It is now up to the application to track the states of all resources and ensure that all required barriers/transitions are executed. In the shadow-map example above, the application knows that when the shadow map is used in a forward pass it will be in the depth-stencil writable state, so the barrier can be inserted right away, without waiting for the first command buffer to be recorded or submitted. The downside is that the application is now responsible for tracking all resource states, which can be a significant burden.

Let's now take a closer look at how synchronization is implemented in Vulkan and Direct3D12.

## Synchronization in Vulkan

Vulkan enables very fine-grained control over synchronization operations and provides tools to individually tweak the following aspects:

• Execution dependencies, i.e. which set of operations must be completed before another set of operations can begin.
• Memory dependencies, i.e. which memory writes must be made available to subsequent reads.
• Layout transitions, i.e. what texture memory layout transformations must be performed, if any.

Execution dependencies are expressed as dependencies between pipeline stages, which naturally map to the traditional GPU pipeline. The type of memory access is defined by the VkAccessFlagBits enum. Certain access types are only valid for specific pipeline stages. All valid combinations are listed in Section 6.1.3 of the Vulkan spec; some of them are given in the following table:

| Access flag (VK_ACCESS_)           | Pipeline stages (VK_PIPELINE_STAGE_)                | Access type description |
|------------------------------------|-----------------------------------------------------|-------------------------|
| COLOR_ATTACHMENT_WRITE_BIT         | COLOR_ATTACHMENT_OUTPUT_BIT                         | Write access to a color attachment (render target) during a render pass or via certain operations such as blending |
| DEPTH_STENCIL_ATTACHMENT_READ_BIT  | EARLY_FRAGMENT_TESTS_BIT or LATE_FRAGMENT_TESTS_BIT | Read access to the depth/stencil buffer via depth/stencil operations |
| DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | EARLY_FRAGMENT_TESTS_BIT or LATE_FRAGMENT_TESTS_BIT | Write access to the depth/stencil buffer via depth/stencil operations |
| TRANSFER_WRITE_BIT                 | TRANSFER_BIT                                        | Write access to an image (texture) or buffer in a clear or copy operation |
| HOST_WRITE_BIT                     | HOST_BIT                                            | Write access by the host |

Table 1. Access flags and allowed pipeline stages.
As you can see, most access flags correspond 1:1 to a pipeline stage. For example, quite naturally, vertex indices can only be read at the vertex input stage, while the final color can only be written at the color attachment output stage (render target in Direct3D12 terminology). For certain access types, you can precisely specify which stage will use that access type. Most importantly, for shader reads (such as texture sampling), writes (UAV/image stores) and uniform buffer accesses, it is possible to tell the system precisely which shader stages will use that access type. For depth-stencil read/write access, it is possible to distinguish whether the access happens at the early or the late fragment test stage. Quite honestly, I can't come up with an example where this flexibility would be useful and result in a measurable performance improvement. Note that it is against the spec to specify an access flag for a stage that does not support that type of access (such as depth-stencil write access for the vertex shader stage).
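The validity rule above can be sketched as a simple table lookup. The sketch below encodes only the rows from Table 1; the constants mirror the corresponding Vulkan flag values from vulkan_core.h, and the helper functions are hypothetical, not part of any real API:

```cpp
#include <cstdint>

// Mirrors of the relevant Vulkan flag values (vulkan_core.h); only the
// combinations listed in Table 1 are encoded here.
constexpr uint32_t ACCESS_COLOR_ATTACHMENT_WRITE_BIT         = 0x00000100;
constexpr uint32_t ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT  = 0x00000200;
constexpr uint32_t ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400;

constexpr uint32_t STAGE_EARLY_FRAGMENT_TESTS_BIT    = 0x00000100;
constexpr uint32_t STAGE_LATE_FRAGMENT_TESTS_BIT     = 0x00000200;
constexpr uint32_t STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400;

// Returns the mask of pipeline stages at which the given access flag is valid.
inline uint32_t SupportedStages(uint32_t AccessFlag)
{
    switch (AccessFlag)
    {
        case ACCESS_COLOR_ATTACHMENT_WRITE_BIT:
            return STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        case ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT:
        case ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT:
            return STAGE_EARLY_FRAGMENT_TESTS_BIT | STAGE_LATE_FRAGMENT_TESTS_BIT;
        default:
            return 0; // not covered by this sketch
    }
}

// It is against the spec to pair an access flag with an unsupported stage.
inline bool IsAccessValidForStage(uint32_t AccessFlag, uint32_t Stage)
{
    return (SupportedStages(AccessFlag) & Stage) != 0;
}
```

Validation layers perform essentially this kind of check when you submit a barrier with mismatched access and stage masks.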

An application may use these tools to specify dependencies between stages very precisely. For example, it may request that writes to a uniform buffer from the vertex shader stage be made available to reads from the fragment shader in a subsequent draw call. The advantage here is that since the dependency starts at the fragment shader stage, the driver does not need to synchronize the execution of the vertex shader stage, potentially saving some GPU cycles.
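Such a dependency maps to a pipeline barrier. The sketch below captures just the four masks such a barrier would carry; the constants mirror the Vulkan flag values from vulkan_core.h, and the struct is a simplified stand-in rather than the real VkBufferMemoryBarrier:

```cpp
#include <cstdint>

// Mirrors of the relevant Vulkan flag values (vulkan_core.h).
constexpr uint32_t ACCESS_UNIFORM_READ_BIT   = 0x00000008; // VK_ACCESS_UNIFORM_READ_BIT
constexpr uint32_t ACCESS_SHADER_WRITE_BIT   = 0x00000040; // VK_ACCESS_SHADER_WRITE_BIT
constexpr uint32_t STAGE_VERTEX_SHADER_BIT   = 0x00000008; // VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
constexpr uint32_t STAGE_FRAGMENT_SHADER_BIT = 0x00000080; // VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT

// Simplified stand-in for the parameters of a buffer memory barrier.
struct BufferBarrierParams
{
    uint32_t srcStageMask;  // stages that must complete before the barrier
    uint32_t dstStageMask;  // stages that must wait at the barrier
    uint32_t srcAccessMask; // writes to make available
    uint32_t dstAccessMask; // reads to make visible
};

// Vertex-shader writes to a uniform buffer -> fragment-shader uniform reads.
inline BufferBarrierParams UniformWriteToReadBarrier()
{
    return {STAGE_VERTEX_SHADER_BIT, STAGE_FRAGMENT_SHADER_BIT,
            ACCESS_SHADER_WRITE_BIT, ACCESS_UNIFORM_READ_BIT};
}
```

In a real application, the stage masks would go into the srcStageMask/dstStageMask arguments of vkCmdPipelineBarrier(), and the access masks into a VkMemoryBarrier or VkBufferMemoryBarrier passed to the same call.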

For image (texture) resources, a synchronization barrier also defines layout transitions, i.e. the potential data reorganization that the GPU may need to perform to support the requested access type. Section 11.4 of the Vulkan spec describes the available layouts and how they must be used. Since every layout can only be used at certain pipeline stages (for example, the color-attachment-optimal layout can only be used by the color attachment read/write stage), and every pipeline stage allows only a few access types, we can list all allowed access flags for every layout, as presented in the table below:

| Image layout (VK_IMAGE_LAYOUT_)  | Access (VK_ACCESS_)                                                      | Description |
|----------------------------------|--------------------------------------------------------------------------|-------------|
| UNDEFINED                        | n/a                                                                      | This layout can only be used as the initial layout when creating an image or as the old layout in an image transition. When transitioning out of this layout, the contents of the image are not preserved. |
| GENERAL                          | Any                                                                      | All types of device access. |
| COLOR_ATTACHMENT_OPTIMAL         | COLOR_ATTACHMENT_READ_BIT or COLOR_ATTACHMENT_WRITE_BIT                  | Must only be used as a color attachment. |
| DEPTH_STENCIL_ATTACHMENT_OPTIMAL | DEPTH_STENCIL_ATTACHMENT_READ_BIT or DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | Must only be used as a depth/stencil attachment. |
| TRANSFER_SRC_OPTIMAL             | TRANSFER_READ_BIT                                                        | Must only be used as the source of transfer (copy) commands. |
| TRANSFER_DST_OPTIMAL             | TRANSFER_WRITE_BIT                                                       | Must only be used as the destination of transfer (copy and clear) commands. |
| PREINITIALIZED                   | n/a                                                                      | This layout can only be used as the initial layout when creating an image or as the old layout in an image transition. When transitioning out of this layout, the contents of the image are preserved, as opposed to the UNDEFINED layout. |

Table 2. Image layouts and allowed access flags.

As with access flags and pipeline stages, there is very little freedom in combining image layouts and access flags. As a result, image layouts, access flags and pipeline stages in many cases form uniquely defined triplets.

Note that Vulkan also exposes another form of synchronization called render passes and subpasses. The main purpose of render passes is to provide implicit synchronization guarantees such that an application does not need to insert a barrier after every single rendering command (such as draw or clear). Render passes also allow expressing the same dependencies in a form that may be leveraged by the driver (especially on GPUs that use tiled deferred rendering architectures) for more efficient rendering. Full discussion of render passes is out of scope of this post.

## Synchronization in Direct3D12

Synchronization tools in Direct3D12 are not as expressive as Vulkan's, but they are also not as intricate. With the exception of the UAV barrier described below, Direct3D12 does not distinguish between execution barriers and memory barriers; instead, it operates on resource states (see Table 3).

| Resource state (D3D12_RESOURCE_STATE_) | Description |
|-----------------------------------------|-------------|
| VERTEX_AND_CONSTANT_BUFFER | The resource is used as a vertex or constant buffer. |
| INDEX_BUFFER               | The resource is used as an index buffer. |
| RENDER_TARGET              | The resource is used as a render target. |
| UNORDERED_ACCESS           | The resource is used for unordered access via an unordered access view (UAV). |
| DEPTH_WRITE                | The resource is used in a writable depth-stencil view or in a clear command. |
| DEPTH_READ                 | The resource is used in a read-only depth-stencil view. |
| INDIRECT_ARGUMENT          | The resource is used as the source of indirect arguments for an indirect draw or dispatch command. |
| COPY_DEST                  | The resource is used as the copy destination in a copy command. |
| COPY_SOURCE                | The resource is used as the copy source in a copy command. |

Table 3. Most commonly used resource states in Direct3D12.

Direct3D12 defines three resource barrier types:

• A state transition barrier defines a transition from one resource state listed in Table 3 to another. This type of barrier maps to a Vulkan barrier where the old and new access flags and/or image layouts are not the same.
• A UAV barrier is an execution-plus-memory barrier, in Vulkan terminology. It does not change the state (layout), but indicates that all UAV accesses (reads or writes) to a particular resource must complete before any future UAV accesses (reads or writes) can begin.
• An aliasing barrier indicates a usage transition between two resources backed by the same memory, and is out of the scope of this article.
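To make the transition barrier concrete, here is a minimal standalone sketch of what it carries. The enum values mirror the corresponding constants in d3d12.h, but the types are simplified stand-ins (resource pointers and subresource indices omitted), not the real API:

```cpp
#include <cstdint>

// The three barrier types, mirroring D3D12_RESOURCE_BARRIER_TYPE_*.
enum class BarrierType { Transition, Aliasing, UAV };

// A few resource state values, mirroring D3D12_RESOURCE_STATE_*.
enum ResourceState : uint32_t
{
    STATE_RENDER_TARGET = 0x4,   // D3D12_RESOURCE_STATE_RENDER_TARGET
    STATE_DEPTH_WRITE   = 0x10,  // D3D12_RESOURCE_STATE_DEPTH_WRITE
    STATE_COPY_DEST     = 0x400, // D3D12_RESOURCE_STATE_COPY_DEST
    STATE_COPY_SOURCE   = 0x800  // D3D12_RESOURCE_STATE_COPY_SOURCE
};

// Simplified stand-in for a resource barrier description.
struct Barrier
{
    BarrierType   Type;
    ResourceState StateBefore; // only meaningful for Transition barriers
    ResourceState StateAfter;  // only meaningful for Transition barriers
};

// A transition barrier makes sense only when the state actually changes;
// same-state UAV-to-UAV hazards use a UAV barrier instead.
inline Barrier MakeTransition(ResourceState Before, ResourceState After)
{
    return {BarrierType::Transition, Before, After};
}
```

In the real API this corresponds to filling a D3D12_RESOURCE_BARRIER structure and passing it to ID3D12GraphicsCommandList::ResourceBarrier().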

# Resource state management in Diligent Engine

The purpose of Diligent Engine is to provide an efficient cross-platform low-level graphics API that is convenient to use, yet flexible enough not to limit applications in expressing their intent. Before version 2.4, an application's ability to control resource state transitions was very limited. Version 2.4 made resource state transitions explicit and introduced two ways to manage states. The first is fully automatic: the engine internally keeps track of states and performs the necessary transitions. The second is manual and completely driven by the application.

## Automatic State Management

Every command that may potentially perform state transitions takes one of the following state transition modes:

• RESOURCE_STATE_TRANSITION_MODE_NONE  - Perform no state transitions and no state validation.
• RESOURCE_STATE_TRANSITION_MODE_TRANSITION  - Transition resources to the states required by the command.
• RESOURCE_STATE_TRANSITION_MODE_VERIFY  - Do not transition, but verify that states are correct.

The code snippet below gives an example of a sequence of typical rendering commands in Diligent Engine 2.4:

```cpp
// Clear the back buffer
const float ClearColor[] = {0.350f, 0.350f, 0.350f, 1.0f};
m_pImmediateContext->ClearRenderTarget(nullptr, ClearColor, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);
m_pImmediateContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f, 0, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);

// Bind vertex buffer
Uint32   offset   = 0;
IBuffer* pBuffs[] = {m_CubeVertexBuffer};
m_pImmediateContext->SetVertexBuffers(0, 1, pBuffs, &offset, RESOURCE_STATE_TRANSITION_MODE_TRANSITION,
                                      SET_VERTEX_BUFFERS_FLAG_RESET);
m_pImmediateContext->SetIndexBuffer(m_CubeIndexBuffer, 0, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);

// Set pipeline state
m_pImmediateContext->SetPipelineState(m_pPSO);

DrawAttribs DrawAttrs;
DrawAttrs.IsIndexed  = true;
DrawAttrs.IndexType  = VT_UINT32; // Index type
DrawAttrs.NumIndices = 36;
// Verify the state of vertex and index buffers
DrawAttrs.Flags = DRAW_FLAG_VERIFY_STATES;
m_pImmediateContext->Draw(DrawAttrs);
```

Automatic state management is useful in many scenarios, especially when porting old applications to Diligent API. It has the following limitations though:

• The state is tracked for the whole resource only; individual mip levels and/or texture array slices cannot be transitioned.
• The state is a global resource property: every device context that uses a resource sees the same state.
• Automatic state transitions are not thread-safe. Any operation that uses RESOURCE_STATE_TRANSITION_MODE_TRANSITION requires that no other thread access the states of the same resources simultaneously.
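Conceptually, automatic state management reduces to comparing a resource's tracked global state against the state a command requires and emitting a barrier on mismatch. Below is a minimal sketch of the idea with hypothetical simplified types; it is not the actual Diligent implementation:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical simplified resource states (the real engine uses RESOURCE_STATE).
enum class State : uint32_t { Undefined, VertexBuffer, IndexBuffer, RenderTarget, ShaderResource };

// A resource with its globally tracked state.
struct Resource
{
    State CurrentState = State::Undefined;
};

// A pending barrier recorded for later submission.
struct PendingBarrier
{
    Resource* pResource;
    State     OldState;
    State     NewState;
};

// Transition a resource to the state a command requires, recording a barrier
// only when the tracked state differs. Note that this reads and writes the
// shared CurrentState, which is exactly why it is not thread-safe.
inline void TransitionIfNeeded(Resource& Res, State RequiredState, std::vector<PendingBarrier>& Barriers)
{
    if (Res.CurrentState != RequiredState)
    {
        Barriers.push_back({&Res, Res.CurrentState, RequiredState});
        Res.CurrentState = RequiredState; // update the global tracked state
    }
}
```

The sketch also makes the third limitation visible: two threads calling TransitionIfNeeded on the same resource would race on CurrentState.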

## Explicit State Management

As we discussed above, there is no way to solve the resource state management problem efficiently in a fully automated manner, so Diligent Engine does not try to outsmart the industry and instead makes state transitions part of the API. It introduces a set of states that mostly map to Direct3D12 resource states, as we believe this method is expressive enough and much clearer than Vulkan's approach. If an application needs very fine-grained control, it can use native API interoperability to insert Vulkan barriers directly into a command buffer. The list of states defined by Diligent Engine, as well as their mapping to Direct3D12 and Vulkan, is given in Table 4 below.

| Diligent state (RESOURCE_STATE_) | Direct3D12 state (D3D12_RESOURCE_STATE_) | Vulkan image layout (VK_IMAGE_LAYOUT_) | Vulkan access type (VK_ACCESS_) |
|----------------------------------|-------------------------------------------|-----------------------------------------|----------------------------------|
| UNKNOWN           | n/a                        | n/a                              | n/a |
| UNDEFINED         | COMMON                     | UNDEFINED                        | 0 |
| VERTEX_BUFFER     | VERTEX_AND_CONSTANT_BUFFER | n/a                              | VERTEX_ATTRIBUTE_READ_BIT |
| CONSTANT_BUFFER   | VERTEX_AND_CONSTANT_BUFFER | n/a                              | UNIFORM_READ_BIT |
| INDEX_BUFFER      | INDEX_BUFFER               | n/a                              | INDEX_READ_BIT |
| RENDER_TARGET     | RENDER_TARGET              | COLOR_ATTACHMENT_OPTIMAL         | COLOR_ATTACHMENT_READ_BIT, COLOR_ATTACHMENT_WRITE_BIT |
| DEPTH_WRITE       | DEPTH_WRITE                | DEPTH_STENCIL_ATTACHMENT_OPTIMAL | DEPTH_STENCIL_ATTACHMENT_READ_BIT, DEPTH_STENCIL_ATTACHMENT_WRITE_BIT |
| INDIRECT_ARGUMENT | INDIRECT_ARGUMENT          | n/a                              | INDIRECT_COMMAND_READ_BIT |
| COPY_DEST         | COPY_DEST                  | TRANSFER_DST_OPTIMAL             | TRANSFER_WRITE_BIT |
| COPY_SOURCE       | COPY_SOURCE                | TRANSFER_SRC_OPTIMAL             | TRANSFER_READ_BIT |
| PRESENT           | PRESENT                    | PRESENT_SRC_KHR                  | MEMORY_READ_BIT |

Table 4. Mapping between Diligent resource state, Direct3D12 state, Vulkan image layouts and access flags.

Diligent resource states map almost exactly 1:1 to Direct3D12 resource states. The only real difference is that in Diligent, the SHADER_RESOURCE state maps to the union of the NON_PIXEL_SHADER_RESOURCE and PIXEL_SHADER_RESOURCE states, which does not seem to be a real issue in practice.

Compared to Vulkan, resource states in Diligent are a little bit more general, specifically:

• The RENDER_TARGET state always defines a writable render target (it sets both the COLOR_ATTACHMENT_READ_BIT and COLOR_ATTACHMENT_WRITE_BIT access flags).
• Transitions into and out of the CONSTANT_BUFFER, UNORDERED_ACCESS, and SHADER_RESOURCE states always set all applicable pipeline stage flags as given by Table 1.

Neither of the limitations above seems to cause any measurable performance degradation. Again, if an application needs to specify a more precise barrier, it can rely on native API interoperability.

Note that Diligent defines both UNKNOWN and UNDEFINED states, which have very different meanings. UNKNOWN means the state is not known to the engine and the application manages the state of the resource manually. UNDEFINED means the state is known to the engine but is undefined from the point of view of the underlying API; this state has well-defined counterparts in Direct3D12 and Vulkan (see Table 4).
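The Table 4 mapping is mechanical enough to write down directly. Below is a sketch of its texture-state half; the enums are hypothetical mirrors of the relevant Vulkan image layout values from vulkan_core.h, not the engine's actual types:

```cpp
#include <cstdint>

// Hypothetical mirrors of the relevant VK_IMAGE_LAYOUT_* values (vulkan_core.h).
enum class VkLayout : uint32_t
{
    Undefined              = 0,         // VK_IMAGE_LAYOUT_UNDEFINED
    ColorAttachmentOptimal = 2,         // VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL
    DepthStencilOptimal    = 3,         // VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL
    TransferSrcOptimal     = 6,         // VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL
    TransferDstOptimal     = 7,         // VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL
    PresentSrc             = 1000001002 // VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
};

// Simplified Diligent-style texture states from Table 4.
enum class TexState { Undefined, RenderTarget, DepthWrite, CopySrc, CopyDest, Present };

// Texture-state half of the Table 4 mapping.
inline VkLayout StateToVkLayout(TexState State)
{
    switch (State)
    {
        case TexState::Undefined:    return VkLayout::Undefined;
        case TexState::RenderTarget: return VkLayout::ColorAttachmentOptimal;
        case TexState::DepthWrite:   return VkLayout::DepthStencilOptimal;
        case TexState::CopySrc:      return VkLayout::TransferSrcOptimal;
        case TexState::CopyDest:     return VkLayout::TransferDstOptimal;
        case TexState::Present:      return VkLayout::PresentSrc;
    }
    return VkLayout::Undefined;
}
```

Because layouts, access flags and pipeline stages form near-unique triplets, a backend only needs the state to deduce the full Vulkan barrier.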

Explicit resource state transitions in Diligent Engine are performed with the IDeviceContext::TransitionResourceStates() method, which takes an array of StateTransitionDesc structures:

```cpp
void IDeviceContext::TransitionResourceStates(Uint32 BarrierCount, StateTransitionDesc* pResourceBarriers);
```

Every element in the array specifies the resource to transition (a texture or a buffer), the old state, the new state, and, for a texture resource, the range of mip levels and array slices:

```cpp
struct StateTransitionDesc
{
    ITexture* pTexture = nullptr;
    IBuffer*  pBuffer  = nullptr;

    Uint32 FirstMipLevel   = 0;
    Uint32 MipLevelsCount  = 0;
    Uint32 FirstArraySlice = 0;
    Uint32 ArraySliceCount = 0;

    RESOURCE_STATE OldState = RESOURCE_STATE_UNKNOWN;
    RESOURCE_STATE NewState = RESOURCE_STATE_UNKNOWN;

    bool UpdateResourceState = false;
};
```

If the state of the resource is known to the engine, the OldState member can be set to UNKNOWN, in which case the engine will use the state stored in the resource. If the state is not known to the engine, OldState must not be UNKNOWN. NewState can never be UNKNOWN.

Another important member is the UpdateResourceState flag. If it is set to true, the engine will set the resource's state to the value given by NewState; otherwise, the tracked state will remain unchanged.
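Putting this together, transitioning a shadow map from the depth-writable state to the shader-resource state might be set up as sketched below. The StateTransitionDesc mirrors the struct shown above; the stub types at the top are stand-ins so the snippet is self-contained, and the helper function is hypothetical:

```cpp
#include <cstdint>

using Uint32 = uint32_t;

// Minimal stand-ins so the snippet is self-contained; in a real application
// these come from the Diligent Engine headers.
struct ITexture {};
struct IBuffer {};
enum RESOURCE_STATE { RESOURCE_STATE_UNKNOWN, RESOURCE_STATE_DEPTH_WRITE, RESOURCE_STATE_SHADER_RESOURCE };

struct StateTransitionDesc
{
    ITexture* pTexture = nullptr;
    IBuffer*  pBuffer  = nullptr;

    Uint32 FirstMipLevel   = 0;
    Uint32 MipLevelsCount  = 0;
    Uint32 FirstArraySlice = 0;
    Uint32 ArraySliceCount = 0;

    RESOURCE_STATE OldState = RESOURCE_STATE_UNKNOWN;
    RESOURCE_STATE NewState = RESOURCE_STATE_UNKNOWN;

    bool UpdateResourceState = false;
};

// Transition the shadow map from depth-writable to shader-readable, letting
// the engine record the new state so automatic tracking stays in sync.
inline StateTransitionDesc ShadowMapToShaderResource(ITexture* pShadowMap)
{
    StateTransitionDesc Barrier;
    Barrier.pTexture            = pShadowMap;
    Barrier.OldState            = RESOURCE_STATE_DEPTH_WRITE;
    Barrier.NewState            = RESOURCE_STATE_SHADER_RESOURCE;
    Barrier.UpdateResourceState = true;
    return Barrier;
}
```

The descriptor would then be passed to IDeviceContext::TransitionResourceStates(1, &Barrier).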

## Switching between explicit and automatic state management

Diligent Engine provides tools for switching between and mixing automatic and manual state management. Both the ITexture and IBuffer interfaces expose SetState() and GetState() methods that allow an application to get and set the resource state. When the state of a resource is set to UNKNOWN, that resource is ignored by all methods that use RESOURCE_STATE_TRANSITION_MODE_TRANSITION; state transitions are still performed for all resources whose state is known. An application can thus mix automatic and manual state management by setting the state of manually managed resources to UNKNOWN. If an application wants to hand state management back to the engine, it can use the SetState() method to set the resource state. Alternatively, it can set the UpdateResourceState flag to true in the last transition, which has the same effect.
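The skip rule can be sketched as follows. This is a hypothetical simplified model (an unknown state is represented by std::nullopt here, standing in for RESOURCE_STATE_UNKNOWN), not the engine's actual code:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical simplified model: a resource whose state is unknown
// (std::nullopt) is manually managed and skipped by commands that use
// RESOURCE_STATE_TRANSITION_MODE_TRANSITION.
struct Resource
{
    std::optional<uint32_t> State; // nullopt == manually managed
};

// Returns true if the engine handled the transition, false if the resource
// is manually managed and was left untouched.
inline bool AutoTransition(Resource& Res, uint32_t RequiredState)
{
    if (!Res.State)
        return false; // manually managed: leave it to the application
    if (*Res.State != RequiredState)
        Res.State = RequiredState; // engine would record a barrier here
    return true;
}
```

Handing control back to the engine then amounts to giving the resource a known state again, which is what SetState() (or UpdateResourceState = true) does.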

As we discussed above, the main advantage of manual resource state management is the ability to record rendering commands in parallel. As resource states are tracked globally in Diligent Engine, the following precautions must be taken:

• Recording state transitions of the same resource in multiple threads simultaneously with IDeviceContext::TransitionResourceStates() is safe as long as the UpdateResourceState flag is set to false.
• Any thread that uses RESOURCE_STATE_TRANSITION_MODE_TRANSITION with any method must be the only thread accessing the resources that may be transitioned. This also applies to the IDeviceContext::TransitionShaderResources() method.
• If a thread uses RESOURCE_STATE_TRANSITION_MODE_VERIFY with any method (which is recommended whenever possible), no other thread may alter the states of the same resources.

## Discussion

Diligent Engine adopts a D3D11-style API with immediate and deferred contexts for recording rendering commands. Since it is well known that deferred contexts did not work well in Direct3D11, a natural question is why they work in Diligent. The answer is explicit control over state transitions: while in Direct3D11 resource state management was always automatic, Diligent gives the application direct control over how resource states are handled by every operation. At the same time, device contexts encapsulate dynamic memory, descriptor management and other tasks that need to be handled by the thread that records rendering commands.

# Conclusion

The explicit resource state management system introduced in Diligent Engine v2.4 combines flexibility, efficiency and convenience. An application may rely on automatic resource state management in typical rendering scenarios and switch to manual mode when the engine does not have enough knowledge to manage the states optimally, or when automatic management is not possible, as in the case of multithreaded rendering command recording.

At the moment, Diligent Engine supports only one command queue, exposed as a single immediate context. One of the next steps is to expose multiple command queues through multiple immediate contexts, as well as primitives to synchronize execution between queues, enabling async compute and other advanced rendering techniques.


## User Feedback

A very interesting introduction. Thank you for that.

I was very excited right until the end, where you revealed that you aren't trying to fully solve what the driver does for us in D3D11, especially in the context of multi-threaded command recording. Good for you! Good overall! :)

I've got my hands on a multi-platform engine built around the logic of D3D11 and, unfortunately, high-level users sometimes do all kinds of stuff to resources on all the threads in parallel. This brought me a lot of headache because trying to support that is very difficult, error-prone and certainly won't ever be more performant than the battle-hardened drivers.

Just not doing that, that is no transitions inside of parallel recording jobs, but only on their boundaries, is the only way to go, obviously.


There is really no efficient solution to resolving state dependencies in a multithreaded environment, which is why D3D12 and Vulkan make that the application's problem. I believe that giving an option to choose between manual and automatic state management is a convenient way to make the API easy to use yet expressive when necessary.


I agree. Also, congratz on good work on your engine in general!


@pcmaster Thanks

## Create an account

Register a new account

• ### What is your GameDev Story?

In 2019 we are celebrating 20 years of GameDev.net! Share your GameDev Story with us.

• 0
• 0
• 4
• 3
• 0

• 12
• 13
• 9
• 25
• 18
• ### Similar Content

• By jb-dev
This is a short .gif showing off visuals effects for the parrying mechanic

Can programmers art? How far can creativity and programming take you?
I have summarized what I learned in several months into 7 key techniques to improve the visual quality of your game.

"Programmer art" is something of a running joke. For those unfamiliar with the term, it refers to the "placeholder" or "throw-together" art that programmers tend to use while developing games.
Some of us don't have the necessary artistic skills, however, sometimes we just can't be bothered to put in the effort. We're concerned about the technical side of things working - art can come later.
Here's what this usually means -

I worked on a game jam with some new people a few months ago. I just wanted to make sure that my gameplay and AI code was doing what it was supposed to do. This would have to interface with code from other teammates as well, so it was important to test and check for bugs. This was the result.
That's not what I'm going to talk about today though.

I'm going to take a different angle on "programmer art" - not the joke art that programmers often use, but the fact that there's a LOT that a programmer can do to improve the visual appeal of a game. I believe some of this falls under "technical art" as well.

My current job kind of forced me to think in this capacity.
I was tasked with visualizing some scientific data. Though this data was the result of years of hard work on the part of scientists, the result was unimpressive to the untrained eye - a heap of excel files with some words and numbers.
There are very few people in the world who can get excited by seeing a few excel files.
My job? To make this data exciting to everyone else.
My task was to visualize connectome data for a famous worm known as C. Elegans, made available by the wonderful people working on the OpenWorm project.
Part of the data parsing to read and display the data as a worm's body with neurons on it was done by my teammate. My main task was to improve the visuals and the overall graphical quality.

The first thing that comes to mind is using HD textures, PBR materials and high-poly models. Add in a 3D terrain using a height map, some post-processing and HDR lighting, and BOOM! Gorgeous 3D scene. I'm sure you've all seen loads of those by now.
Except, almost none of that would really help me.
The idea was very abstract - neurons and connections visible in a zoomed-in, x-ray-like view of a worm. I don't think rolling hills would have helped me much.
I had no 3D modelling skills or access to an artist - even if I did, I'm not sure what kind of 3D models would have helped.

As a result, what I've made isn't a gorgeous 3D environment with foliage and god-rays and lens flares. So it's not applicable in every case or the perfect example of how a programmer can make a gorgeous game.
But, it does provide a distinct viewpoint and result. The special sets of constraints in the problem I had to solve led to this.
So here's what I actually did:

The 7 things I did to improve the visuals of my Unity game
1. Conceptualizing the look
This could be considered a pre-production step for art or any visual project. Ideally, what should it look like? What's the goal? What are your references?
In this case, the viewer had a hologram-like feel to it (also there were plans to port it to a HoloLens eventually). I liked the idea of a futuristic hologram. And the metaphor of "AI bringing us towards a better future".
So what were my references? Sci-fi of course!
My first pick was one of my favourite franchises - Star Wars. I love how the holo-comms look in the movies.

Holograms became a key component of my design.
This is a HUD design from Prometheus that I found on Google -

In this case, the colours appealed to me more than the design itself. I ended up basing the UI design on this concept.

Key takeaway - Your imagination is the very first tool that helps you create impressive art. Use references! It's not cheating - it's inspiration. Your references will guide you as you create the look that you want.

I had some shader programming experience from University - D3D11 and HLSL. But that work had been about building a basic graphics engine with features like lighting, shadows, and some light post-processing. I had done some light Shader programming in Unity before as well.
What I really needed now was impressive visual effects, not basic lighting and shadows.
I was really lucky that this was about the time Unity made Shader Graph available, which made everything much easier. I can write Shader code, but being able to see in real time what each node (Which can be considered a line of code) does makes it so much easier to produce the effects you want.
I familiarized myself with all the samples Unity had included with this new tool. That wouldn't have been enough though. Thankfully due to my previous experience with Shaders, I was able to make some adjustments and improvements to make them suit my needs.
Some tweaking with speed, scaling, colours, and textures led to a nice hologram effect for the UI panels.

I wanted the viewer to feel good to interact with as well, and some work implementing a glow effect (alongside the dissolve effects) led to this -

Key takeaway - Shaders are an extremely powerful tool in a Game Programmer's repertoire. Tools like Unity's Shader Graph, the old Shader Forge asset, and Unreal's material editor make Shaders more accessible and easier to tune to get the exact look you want.
PS - Step 5 below is also really important for getting a nice glow effect.

3. Visual Effects and Animations using Shaders
I was able to extend the dissolve and hologram shaders to fake some animation-like visual effects.
And a combination of some timed Sine curves let me create an animation using the dissolve effect -

The work here was to move the animation smoothly across individual neuron objects. The animation makes it look like they're a single connected object, but they're actually individual Sphere meshes with the Shader applied to them. This is made possible by applying the dissolve texture in World Space instead of Object Space.
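The trick of sampling in world space can be shown with a tiny Python sketch: every sphere evaluates the same global function of its world position, so the dissolve front lines up across object boundaries. (In Shader Graph this corresponds to a Position node set to World space; the function below is illustrative, not the actual graph.)

```python
import math

def dissolve_threshold(world_y, time, speed=1.0, scale=0.5):
    """A travelling sine wave in world space: the cutoff that each
    pixel compares its noise sample against, in 0..1."""
    return 0.5 + 0.5 * math.sin(scale * world_y - speed * time)

# Two touching spheres stacked along y. Where they meet (y = 1.0),
# both meshes compute the identical threshold, so the seam is invisible
# and the animation reads as one connected object.
t = 2.0
lower_sphere_top = dissolve_threshold(1.0, t)
upper_sphere_bottom = dissolve_threshold(1.0, t)
assert lower_sphere_top == upper_sphere_bottom
```

With object-space sampling each sphere would evaluate the wave relative to its own origin, and the front would jump at every mesh boundary.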
A single shader graph for the neurons handled colour blending, glow, and the dissolve animation.
All of this made the graph really large and difficult to work with, though. Unity was constantly updating Shader Graph, and later updates added sub-graphs, which make large graphs much easier to manage.
Key takeaway - There is more to shaders than meets the eye. As you gain familiarity with them, there are very few limits to the effects you can create. You can create animations and visual effects using Shaders too.

4. Particle systems - more than just trails and sparks
I have no idea why I put off working with the particle systems for so long!
The "neurons" in the viewer were just spheres, which was pretty boring.
Once I started to understand the basics of the particle system, I could see how powerful it was. I worked on some samples from great YouTube tutorials - I'm sharing a great one by Gabriel Aguiar in the comments below.
After that, I opened up Photoshop and experimented with different brushes to create Particle textures.
Once again, I went back to my references for what neurons should look like. I wanted a similar look of "hair-like" connections coming out of the neurons, with a bright, dense core.
This is what it looked like finished, and the particle system even let me create a nice pulsating effect.

Part of my work was also parsing a ton of "playback data" of neurons firing. I wanted this to look like bright beams of light, travelling from neuron to neuron. This involved some pathfinding and multi-threading work as well.
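The article doesn't spell out which pathfinding algorithm the signal playback used, so here is a hedged sketch using plain breadth-first search over an adjacency list - the simplest reasonable choice for routing a light beam through an unweighted neuron connection graph. The toy `connections` data is invented for illustration.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search; returns the list of neurons a pulse
    travels through, or None if the neurons aren't connected."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbour in graph.get(node, ()):
            if neighbour not in prev:
                prev[neighbour] = node
                queue.append(neighbour)
    return None

# Toy connectome: the beam hops A -> B -> D.
connections = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(connections, "A", "D"))  # ['A', 'B', 'D']
```

Each hop of the returned path becomes one leg of the travelling beam; the multi-threading part was about parsing the playback data off the main thread.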

Lastly, I decided to add a sort of feedback effect of neurons firing. This way, you can see where a signal is originating and where it's ending.

Key takeaway - Particle systems can be used in many ways, not just for sparks and trails. Here, I used them to represent a rather abstract object, a neuron. They can be applied wherever a visual effect or a form of visual "feedback" seems relevant.

5. Post-processing to tie the graphics and art together
Post-processing makes a HUGE difference in the look of a game scene. It's not just about colours and tone: you can easily adjust brightness and contrast, and add effects such as bloom, motion blur, vignette, and screen-space reflections.
First of all, switching to Linear colour space with HDR enabled makes a big difference - make sure you try this out.
Next, Unity's new post-processing stack makes a lot of options available without impacting performance much.
The glow around the edges of the sphere only appears with an HDR colour selected for the shader, HDR enabled, and Linear colour space. Post-processing helps bump this up too - bloom is one of the most important settings for this.
Colour grading can be used to provide a warm or cool look to your entire scene. It's like applying a filter on top of the scene, as you would to an image in Photoshop. You can completely override the colours, desaturate to black and white, bump up the contrast, or apply a single colour to the whole scene.

There is a great tutorial from Unity for getting that HD look in your scenes - if you want a visible glow you normally associate with beautiful games, you need to check this out.

Key takeaway - Post-processing ties everything together and helps certain effects, like glows, stand out.

6. Timing and animation curves for better "feel"
This is a core concept of animation. I have some training in graphic design and animation, which is where I picked this up. I'm not sure about the proper term for it - timing, animation curves, tween, etc.
Basically, if you're animating something, it's rarely best to do it with linear timing. Instead, you want curves like this -

Or more crazy ones for more "bouncy" or cartoon-ish effects.
I applied this to the glow effects on the neurons, as I showed earlier.
And you can use this sparingly when working with particle systems as well - for speed, size, and similar effects. I used this for the effect of neurons firing, which is like a green "explosion" outwards. The particles move outwards fast and then slow down.
Unity has an AnimationCurve type you can expose as a field on your components. You can shape the curve in a GUI and then evaluate it from your C# scripts. Definitely worth learning about.
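If you want the idea without the engine, an ease-in-out curve takes one line of math. This is a minimal, engine-independent Python sketch using the classic "smoothstep" polynomial - one common equivalent of what an authored curve gives you, not Unity's implementation.

```python
def ease_in_out(t):
    """Smoothstep: slow start, fast middle, slow end. t in [0, 1]."""
    t = max(0.0, min(1.0, t))  # clamp so out-of-range time is safe
    return t * t * (3.0 - 2.0 * t)

def animate(start, end, t):
    """Drive any animated value with the eased time instead of raw time."""
    return start + (end - start) * ease_in_out(t)

print(ease_in_out(0.0))  # 0.0  - starts gently
print(ease_in_out(0.5))  # 0.5  - passes the midpoint at full speed
print(ease_in_out(1.0))  # 1.0  - settles gently
```

Feeding `animate()` a glow intensity or particle size gives exactly the "fast then slow down" feel described above.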
Key takeaway - Curves or tweens are an animation concept that is easy to pick up and apply. It can be a key differentiator for whether your animations and overall game look polished or not.

7. Colour Palettes and Colour Theory - Often overlooked
Colour is something I tend to experiment with based on instinct. I like being creative; even so, I really underestimated the benefits of applying colour theory and using palettes.
Here's the before -

Here are some of the afters -

I implemented multiple themes because they all looked so good.
I basically messed around with different types of "Colour harmony" - Monochrome, triad, complementary, and more. I also borrowed some colours from my references and built around that.
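The harmonies above can also be generated programmatically: rotate the hue of a base colour around the colour wheel (complementary = +180°, triad = +120°/+240°). In practice I picked palettes by hand with a colour wheel tool; this Python sketch just shows the underlying math, using the standard library's `colorsys`.

```python
import colorsys

def harmony(rgb, angles):
    """Return companions of an RGB colour (0..1 floats) at the given
    hue offsets in degrees, preserving saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return [colorsys.hsv_to_rgb((h + a / 360.0) % 1.0, s, v) for a in angles]

base = (1.0, 0.0, 0.0)                # pure red
complementary = harmony(base, [180])  # cyan
triad = harmony(base, [120, 240])     # green and blue
print(complementary)  # [(0.0, 1.0, 1.0)]
```

A monochrome scheme works the same way, except you vary saturation/value at a fixed hue instead of rotating it.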
Key takeaway - Don't underestimate the importance of colour and colour theory. Keep your initial concept and references in mind when choosing colours. This adds to that final, polished look you want.

Bonus - consider procedural art
Procedural generation is just an amazing technique. I didn't apply it in this project, but I learned the basics, such as generating value and Perlin noise, generating and using heightmaps for terrain, and generating mazes.
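To make "value noise" concrete, here is a minimal 1-D Python sketch of the idea: deterministic random values at integer lattice points, smoothly interpolated in between. (Perlin noise interpolates gradients instead of values, and heightmaps extend the same scheme to 2-D; the hashing-by-seed scheme below is just one illustrative choice.)

```python
import math
import random

def lattice_value(i, seed=0):
    """Deterministic pseudo-random value in [0, 1) for lattice point i."""
    return random.Random(i * 1000003 + seed).random()

def smoothstep(t):
    """Ease the blend so the noise has no visible kinks at lattice points."""
    return t * t * (3.0 - 2.0 * t)

def value_noise(x, seed=0):
    """1-D value noise: blend between the two neighbouring lattice values."""
    i = math.floor(x)
    t = smoothstep(x - i)
    a, b = lattice_value(i, seed), lattice_value(i + 1, seed)
    return a + (b - a) * t

# At integer coordinates the noise hits the lattice values exactly,
# and nearby samples vary smoothly - the basis of a terrain heightmap.
samples = [round(value_noise(x / 4.0), 3) for x in range(9)]
print(samples)
```

Summing several octaves of this at different frequencies gives the familiar fractal terrain look.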

Procedural art is definitely something I want to explore more.
A couple of interesting things (Links in the "extra resources" section below) -
Google DeepDream has been used to generate art. There's an open-source AI project that can colour lineart. Kate Compton has a lot of interesting projects and resources about PCG and generative art. I hope all of this leads to tools that can be applied directly to game development, to support the creation of art for games - and that I get the opportunity to build something like that myself.
Conclusion
These 7 techniques were at the core of what I did to improve the visual quality of my project.
This was mostly the result of the unique set of constraints that I had. But I'm pretty sure some famous person said: "true creativity is born of constraints". Or something along those lines. It basically means that constraints and problems help channel your creativity.
I'm sure there is more that I could have done, but I was happy with the stark difference between the "before" and "after" states of my project.
I've also realized that this project has made me more of an artist. If you work on visual quality even as a programmer, you practice and sharpen your artistic abilities, and end up becoming something of an artist yourself.

Did I miss something obvious? Let me know in the comments!

Extra Resources
OpenWorm project
Great tutorial by Gabriel Aguiar
Unity breaks down how to improve the look of a game using Post processing
Another resource on post-processing by Dilmer Valecillos
Brackey's tutorial on post-processing
Adobe Colour wheel, great for colour theory and palettes
An open-source AI project that can colour lineart
A demo of generative art by Kate Compton