Seabolt

What are your opinions on DX12/Vulkan/Mantle?


I'm pretty torn on what to think about it. On one hand, being able to write your own implementations of resource management and command processing allows for a lot of gains, and for much better control of rendering across a lot of hardware. All the threading support is going to be fantastic, too.

But on the other hand, accepting that burden will vastly increase the cost of maintaining a platform and the amount of time it takes to fully port. I understand that a lot of it can be ported piece by piece, but it seems like the amount of time necessary to even meet the performance of, say, DX12 is on the order of man-weeks.

I feel like to fully support these APIs I'd need to almost abandon support for the previous APIs in my engine, since the veil is so much thinner; otherwise I'll just end up adding the same amount of abstraction that DX11 already does, which kind of defeats the point.

 

What are your opinions?


I feel like to fully support these APIs I'd need to almost abandon support for the previous APIs in my engine, since the veil is so much thinner; otherwise I'll just end up adding the same amount of abstraction that DX11 already does, which kind of defeats the point.

 

 

That's highly unlikely. One of the reasons the classical APIs are so slow is that they have to do a whole lot of rule validation. When you google an OpenGL function and see that whole list of things an argument is allowed to be, what happens when something goes wrong, and so on, the driver actually has to validate all of that at runtime (to fulfill the language-spec requirements correctly, but also to prevent you from crashing the GPU). That's because drivers can't make many assumptions about the context in which you execute those API calls. When you write a game engine with DX12 or Vulkan, that's no longer true: as the programmer you typically have complete knowledge of the relevant context and can make many assumptions, so you can skip a whole lot of the work that a classical OpenGL driver would have to do.

 

In addition to that, multithreading in DX11 and OpenGL 4.x is still very poor (although I'm not sure why), and with the new APIs you'll actually be able to use multiple cores for rendering-API work for more than just ~5% gains.

 

Thinking about it, it's kind of like C++ versus a more managed language like Java or C#: similar concepts and, of course, an extremely similar execution context, but one gives you more control and is a more precise abstraction of the hardware. It lets you shoot yourself in the foot more easily, and in return it's faster.

Edited by agleed


 

 

In addition to that, multithreading in DX11 and OpenGL 4.x is still very poor (although I'm not sure why)

 

OpenGL doesn't really provide any means for multithreading, even in 4.4. An OpenGL context belongs to a single thread, and only that thread can issue rendering commands. That's what Vulkan/DX12 tackle with the "command buffer" object, which can be created on any thread (although command buffers have to be committed to the command queue, which seems to belong to a single thread only).

 

Actually, there are ways to do a kind of multithreading in OpenGL 4: you can share a context between threads, which is used to load textures asynchronously for instance, but I've heard that this is really inefficient. There is also glBufferStorage + IndirectDraw, which lets you write a buffer of per-instance data concurrently, like any other buffer.

But it's not as powerful as Vulkan or DX12, which let you issue any command from any thread, not just instanced ones.


 

 

 


 

OpenGL doesn't really provide any means for multithreading, even in 4.4. An OpenGL context belongs to a single thread, and only that thread can issue rendering commands. That's what Vulkan/DX12 tackle with the "command buffer" object, which can be created on any thread (although command buffers have to be committed to the command queue, which seems to belong to a single thread only).

Actually, there are ways to do a kind of multithreading in OpenGL 4: you can share a context between threads, which is used to load textures asynchronously for instance, but I've heard that this is really inefficient. There is also glBufferStorage + IndirectDraw, which lets you write a buffer of per-instance data concurrently, like any other buffer.

But it's not as powerful as Vulkan or DX12, which let you issue any command from any thread, not just instanced ones.

 

 

Yes, but I'm more interested in what prevented driver implementers from getting proper multithreading support into the APIs in the first place. DX11 has the concept of command lists too, and it kind of works, but the practical gains from it are pretty small. I don't know what it is about the APIs (or the driver implementations) that prevents proper multithreading from working in DX11 and GL 4.x.


I feel like to fully support these APIs I'd need to almost abandon support for the previous APIs in my engine, since the veil is so much thinner; otherwise I'll just end up adding the same amount of abstraction that DX11 already does, which kind of defeats the point.

Yes.
But it depends. For example, if you were doing AZDO OpenGL, many of the concepts will already be familiar to you.
However, AZDO never dealt with textures as thinly as Vulkan or D3D12 do, so you'll need to refactor those parts.
If you weren't following AZDO, then it's highly likely that the way you were using the old APIs is incompatible with the new ones.

Actually, there are ways to do a kind of multithreading in OpenGL 4: (...). There is also glBufferStorage + IndirectDraw, which lets you write a buffer of per-instance data concurrently, like any other buffer.
But it's not as powerful as Vulkan or DX12, which let you issue any command from any thread, not just instanced ones.

Actually, DX12 & Vulkan follow exactly the same path that glBufferStorage + IndirectDraw did. It just got easier, was made thinner, and can now handle other miscellaneous work from multiple cores (texture binding, shader compilation, barrier preparation, etc.).

The rest was covered by Promit's excellent post.


Actually, there are ways to do a kind of multithreading in OpenGL 4: (...). There is also glBufferStorage + IndirectDraw, which lets you write a buffer of per-instance data concurrently, like any other buffer.
But it's not as powerful as Vulkan or DX12, which let you issue any command from any thread, not just instanced ones.

Actually, DX12 & Vulkan follow exactly the same path that glBufferStorage + IndirectDraw did. It just got easier, was made thinner, and can now handle other miscellaneous work from multiple cores (texture binding, shader compilation, barrier preparation, etc.).

 

 

There is something I don't really understand in Vulkan/DX12: the "descriptor" object. Apparently it acts as a GPU-readable chunk of data that holds texture pointer/size/layout and sampler info, but I don't understand how the descriptor set/pool concept works; it sounds a lot like an array of bindless texture handles to me.


There is something I don't really understand in Vulkan/DX12: the "descriptor" object. Apparently it acts as a GPU-readable chunk of data that holds texture pointer/size/layout and sampler info, but I don't understand how the descriptor set/pool concept works; it sounds a lot like an array of bindless texture handles to me.

Without going into detail: it's because only AMD & NVIDIA cards support bindless textures in their hardware; there's one major desktop vendor that doesn't support it even though it ships DX11 hardware. Also keep in mind that both Vulkan & DX12 want to support mobile hardware as well.
You have to give the API a table of textures grouped by frequency of update: one blob of textures that change per material, one blob that rarely changes (e.g. environment maps), and another blob that doesn't change (e.g. shadow maps).
It's very analogous to how we have been handling constant buffers in shaders (provide different buffers based on frequency of update).
Then you put those blobs into a bigger blob and tell the API "I want to render with this big blob, which is a collection of blobs of textures", so the API can translate it very well to all sorts of hardware (mobile, Intel on desktop, and bindless hardware like AMD's and NVIDIA's).

If all hardware were bindless, this set/pool wouldn't be needed, because you could change any one texture anywhere with minimal GPU overhead, like you do in OpenGL 4 with the bindless texture extensions.
Nonetheless, the descriptor pool/set is also useful for non-texture stuff (e.g. anything that requires binding, like constant buffers). It is quite generic. Edited by Matias Goldberg


Apparently the Mantle spec documents will be made public very soon, which will serve as a draft/preview of the Vulkan docs that will come later.

I'm extremely happy with what we've heard about Vulkan so far. Supporting it in my engine is going to be extremely easy.

However, supporting it in other engines may be a royal pain.
E.g. if you've got an engine that's based around the D3D9 API, then your D3D11 port is going to be very complex.
However, if your engine is based around the D3D11 API, then your D3D9 port is going to be very simple.

Likewise for this new generation of APIs -- if you're focusing too heavily on current generation thinking, then forward-porting will be painful.

In general, implementing new philosophies using old APIs is easy, but implementing old philosophies on new APIs is hard.

 

In my engine, I'm already largely using the Vulkan/D3D12 philosophy, so porting to them will be easy.
I also support D3D9-11 / GL2-4, and the code that implements these "new" ideas on those "old" APIs is actually fairly simple, so I'd be brave enough to say that it is possible to have a very efficient engine design that works equally well on every API; the key is to base it around these modern philosophies, though!
Personally, my engine's cross-platform rendering layer is based on a mixture of Mantle and D3D11 ideas.

I've made my API stateless: every "DrawItem" must contain a complete pipeline state (blend/depth/raster/shader programs/etc.) and all resource bindings required by those programs. However, the way these states/bindings are described (in client/user code) is very similar to the D3D11 model.
DrawItems can/should be prepared ahead of time and reused, though you can create them every frame if you want. When creating a DrawItem, you need to specify which "RenderPass" it will be used for, which specifies the render-target format(s), etc.

On older APIs, this lets you create your own compact data structures containing all the data required to make the D3D/GL API calls for that draw-call.
On newer APIs, this lets you actually pre-compile the native GPU commands!

 

You'll notice that in the Vulkan slides released so far, when you create a command buffer, you're forced to specify which queue you promise to use when submitting it later. Different queues may exist on different GPUs, e.g. if you've got an NVIDIA and an Intel GPU present. The requirement to specify a queue ahead of time means that you're actually specifying a particular GPU ahead of time, which means the Vulkan drivers can convert your commands to that GPU's actual native instruction set ahead of time!

In either case, submitting a pre-prepared DrawItem to a context/command buffer is very simple and efficient.
As a bonus, you sidestep all the bugs involved in state-machine graphics APIs.

 

That sounds extremely interesting. Could you give a concrete example of what the descriptions in a DrawItem look like? What is the granularity of a DrawItem? Is it a per-mesh kind of thing, or more like "one DrawItem for every material type", where you then draw every mesh that uses that material with a single DrawItem?


Can I say something I do not like (DX related)? The "new" feature levels, especially 12.1.

 

Starting from 10.1, Microsoft introduced the concept of "feature levels", a nice and smart way to collect hundreds of caps bits and thousands of related permutations into a single, unique decree. With feature levels you can target older hardware with the latest runtime available. Microsoft did not completely remove caps bits for optional features, but their number dropped dramatically, by something like two orders of magnitude. Even with Direct3D 11.2 the number of caps bits remained relatively small, although they could have added a new feature level (call it feature level 11.2) with all the new optional features and tier 1 of tiled resources; never mind, that's not a big deal after all, and complaints should be focused on the OS support situation since D3D 11.1.

Since the new API is focused mostly on the programming model, new caps bits and tier collections were expected with Direct3D 12, and Microsoft did a good job of dramatically reducing the complexity of the different hardware-capability permutations. The new caps bits and tiers of DX12 are not a big issue. At GDC15 they also announced two "new" feature levels (~14:00): feature level 12.0 and feature level 12.1. While feature level 12.0 looks reasonable (all GCN 1.1/1.2 and Maxwell 2.0 should support it; I don't know about the first generation of Maxwell), feature level 12.1 adds only ROVs (OK) and mandatory support for tier 1 of conservative rasterization (the most useless tier!).

I will not go into explicit details (detailed information should still be under NDA), but the second feature level looks tailor-made for one particular piece of hardware (guess whose!). Moreover, FL 12.1 does not require some really interesting features (a greater conservative-rasterization tier, volume tiled resources, or even resource binding tier 3) that you would expect to be mandatory in future hardware. In substance, FL 12.1 really breaks the concept of a feature level in my view, which was a sort of "barrier" that defined new hardware capabilities for upcoming hardware.

Edited by Alessio1989


 

That sounds extremely interesting. Could you give a concrete example of what the descriptions in a DrawItem look like? What is the granularity of a DrawItem? Is it a per-mesh kind of thing, or more like "one DrawItem for every material type", where you then draw every mesh that uses that material with a single DrawItem?

My DrawItem corresponds to one glDraw* / Draw* call, plus all the state that needs to be set immediately prior to the draw.
One model will usually have one DrawItem per sub-mesh (where a sub-mesh is a portion of that model that uses a particular material), per pass (where a pass is e.g. drawing to the g-buffer, drawing to a shadow map, forward rendering, etc.). When drawing a model, it finds all the DrawItems for the current pass and pushes them into a render list, which can then be sorted.

A DrawItem, which contains the full pipeline state, the resource bindings, and the draw-call parameters, could look like this in a naive D3D11 implementation:

#include <d3d11.h>
#include <tuple>
#include <utility>
#include <vector>
using std::pair; using std::tuple; using std::vector;

struct DrawItem
{
  //pipeline state:
  ID3D11PixelShader* ps;
  ID3D11VertexShader* vs;
  ID3D11BlendState* blend;
  ID3D11DepthStencilState* depth;
  ID3D11RasterizerState* raster;
  D3D11_RECT* scissor;
  //input assembler state
  D3D11_PRIMITIVE_TOPOLOGY primitive;
  ID3D11InputLayout* inputLayout;
  ID3D11Buffer* indexBuffer;
  vector<tuple<int/*slot*/,ID3D11Buffer*,uint/*stride*/,uint/*offset*/>> vertexBuffers;
  //resource bindings:
  vector<pair<int/*slot*/, ID3D11Buffer*>> cbuffers;
  vector<pair<int/*slot*/, ID3D11SamplerState*>> samplers;
  vector<pair<int/*slot*/, ID3D11ShaderResourceView*>> textures;
  //draw call parameters:
  int numVerts, numInstances, indexBufferOffset, vertexBufferOffset;
};

That structure is extremely unoptimized, though. It has a base size of ~116 bytes, plus the memory used by the vectors, which could add up to ~1 KiB!

I'd aim to compress them down to 28-100 bytes in a single contiguous allocation, e.g. by using IDs instead of pointers, by grouping objects together (e.g. referencing a PS+VS program pair instead of referencing each individually), and by using variable-length arrays built into the structure instead of vectors.

When porting to Mantle/Vulkan/D3D12, the "pipeline state" section all gets replaced with a single object, and the "input assembler" / "resource bindings" sections get replaced by a descriptor. Alternatively, these new APIs also allow a DrawItem to be replaced entirely by a very small native command buffer!

 

There's a million ways to structure a renderer, but this is the design I ended up with, which I personally find very simple to implement on / port to every platform.

 

 

Thanks a lot for that description. I must say it sounds very elegant. It's almost like a functional-programming approach to draw-call submission, with its advantages and disadvantages.


 

Without going into detail: it's because only AMD & NVIDIA cards support bindless textures in their hardware (...). Nonetheless, the descriptor pool/set is also useful for non-texture stuff (e.g. anything that requires binding, like constant buffers). It is quite generic.

 

 

Thanks.
I think this also makes sparse textures available? At least the tier level required by ARB_sparse_texture (i.e. without the shader function that returns residency state).


 

 


 

 

Thanks.
I think this also makes sparse textures available? At least the tier level required by ARB_sparse_texture (i.e. without the shader function that returns residency state).

 

 

On DirectX 12, for feature level 11/11.1 GPUs the support of tier 1 of tiled resources (sparse textures) is still optional. In that GPU range, even where the architecture should support tier 1 of tiled resources, there are some GPUs (low/low-mid end, desktop and mobile) that do not support it (e.g. driver support of tiled resources is still disabled for AMD HD 7700 mobile GPUs). The same should apply to OpenGL/Vulkan.

Edited by Alessio1989


Many years ago, I briefly worked at NVIDIA on the DirectX driver team (as an intern). This was the Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver-model transition. My job was to take games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken.

...

 
That was very interesting, thanks for that!
 

A descriptor is a texture view, buffer view, sampler, or pointer.
A descriptor set is an array/table/struct of descriptors.
A descriptor pool is basically a large block of memory that acts as a memory allocator for descriptor sets.

So yes, it's very much like bindless handles, but instead of being handles, they're the actual guts of a texture view, or an actual sampler structure, etc.
 
Say you've got a HLSL shader with:

...


Also very informative, I'm starting to understand how to think in the "new way".


I'm looking forward to the new APIs (specifically Vulkan). Not only will we get better game performance, but given what Promit said it seems like it will be less of a headache: the less black-box under-the-hood state management there is, the easier it will be to write and debug.


I feel like I need to defend myself a little bit. I am a professional graphics programmer; I've written renderers on Xbox 360 and Wii U, along with my own side project that can render in DirectX 9/11 and OpenGL 3.x. I've written a multi-threaded renderer before and already have an idea of how I plan to tackle the new APIs. My initial worry was that getting to a minimally viable renderer may be extremely painful, since my multi-threaded renderer will need to be restructured to create its own threads. Luckily it already does something along the lines of PSOs, since the game logic sends its own render commands, so I should be able to encapsulate them well enough.

Big concerns for me, (and these are initial thoughts from a GDC presentation on DX12) are:

- Memory residency management. The presenters talked about developers being responsible for moving graphics resources between VRAM and system memory whenever loads get too high. This should be an edge case, but it's still an entirely new engine feature.

- Secondary threads for resource loading/shader compilation. This is actually a really good thing that I'm excited about, but it does mean I need to change my render thread to start issuing and maintaining new jobs. It's necessary, and for the greater good, but another task nonetheless.

 

- Root signatures / shader constant management. Again, really exciting stuff, but it seems like a huge potential source of issues, not to mention that the engine now has to be acutely aware of how frequently constants change and map them appropriately.

 

@Promit: Thanks for the insight. That makes me feel better about the potential gains to be made, and helps assuage my fear that adding my own abstractions between the game thread and the render thread would defeat the purpose of the API.

@Hodgman: I'm actually writing a new engine for the express purpose of supporting the new APIs, and my previous engine also had stateless rendering. (Kinda: the game thread would just append unique commands for whatever state it wanted, and those commands would be filtered by a cache before being dispatched to the rendering thread. With the new threading changes I'll likely abandon this approach so that I can add rendering commands from any thread.) I do like your idea of having specific render passes; that would allow me to reuse render commands for shadowing vs. shading passes, and I'll be able to better generate my command lists/bundles.

 

I'll also be adding in architecture for Compute Shaders for the first time, so I'm worried that I might be biting off too much at once.

