Hi,
I am a professional graphics programmer and I create scripting/programming languages as a hobby. I had an idea that I wanted to share with you.
I have been reading and learning about low-level graphics APIs and the reasons they exist. In DirectX 11/OpenGL, a lot of the GPU work, resource barriers for example, is hidden from you and handled by the driver.
Now, because the driver doesn't know what your frame looks like, it has to assume the worst case and issue more barriers than may actually be required.
(I believe modern DX11 drivers are quite clever and use heuristics to reduce this problem, but you get my point.)
DX12/Vulkan somewhat solve this issue by letting the programmer decide where to place the barriers, exposing them as an API concept.
That is a major plus, but it is very error-prone, and if not done correctly it can lead to major performance issues.
Now this got me thinking... What if we created a programming language that let you define explicitly what a full frame looks like: the steps, and the resources involved in each step?
We could then analyze these steps and figure out exactly where to put the barriers, and re-order the steps for optimal performance. We could also look at the dependencies between steps and probably figure out a way to automatically dispatch the work onto different queues (copy/DMA, compute & graphics).
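To make the idea concrete, here is a minimal sketch of what such an analysis could look like. Everything here is invented for illustration (the `Pass` type, `build_barriers`, the pass names); a real system would also track image layouts, pipeline stages, and so on. The key point is just that once each step declares what it reads and writes, barrier placement falls out of a simple dependency walk:

```python
from dataclasses import dataclass, field

@dataclass
class Pass:
    """One step of the frame, declaring the resources it touches."""
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def build_barriers(passes):
    """Walk the frame in submission order and emit one barrier the first
    time a resource written by an earlier pass is read by a later one."""
    last_writer = {}  # resource -> name of the pass that last wrote it
    barriers = []
    for p in passes:
        for res in sorted(p.reads):
            if res in last_writer:
                barriers.append((last_writer.pop(res), p.name, res))
        for res in p.writes:
            last_writer[res] = p.name
    return barriers

# A toy frame: shadow and gbuffer passes feed a lighting pass, then tonemap.
frame = [
    Pass("shadow",   writes={"shadow_map"}),
    Pass("gbuffer",  writes={"gbuffer"}),
    Pass("lighting", reads={"gbuffer", "shadow_map"}, writes={"hdr"}),
    Pass("tonemap",  reads={"hdr"}, writes={"backbuffer"}),
]

for src, dst, res in build_barriers(frame):
    print(f"barrier on {res}: {src} -> {dst}")
```

The same read/write sets form a dependency graph, so the "shadow" and "gbuffer" passes above, which share no resources, could in principle be scheduled on separate queues automatically.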
I have the feeling that with the new low-level APIs this door is now open. Static analysis and optimization of full frames... something every compiler does for CPU code. Why not GPU code?
Any thoughts on that?
Gab.