jhenriques

Members
  • Content count

    9
  • Joined

  • Last visited

Community Reputation

456 Neutral

About jhenriques

  • Rank
    Newbie
  1. Problems with Screen Space Reflections

    Hi again,   Well, if I were you, I would double-check that my rays are correct. Remember that the origin must be in camera space (no projection) and that the ray direction is normalized and pointing in the direction you actually expect. Something like this:
    // create ray:
    vec3 rayOrigin = fragment_cs_position.xyz; // Camera space
    vec3 incidentVec = fragment_cs_position.xyz; // Camera space
    vec3 rayDir = normalize( reflect( incidentVec, normal ) ); // Camera space
    //vec3 rayDir = -2.0 * dot( incidentVec, normal ) * normal + incidentVec;
    Good luck!
  2. Problems with Screen Space Reflections

    Hi!   I have dealt with implementing this recently. The code for P0 and P1 is correct in Morgan's version. They are in screen space.   Make sure your pixel matrix is correct. I had to stop and think it through for a while. One test you should make is something like this:
    // test code for the projection pixel matrix.
    // Basically, after the cs position is projected back to pixel values by the
    // pixel matrix, the output must be the same as the values in g_uvs
    vec2 g_uvs = vec2(gl_FragCoord) / bufferSize;
    // Using the pixel matrix from the engine:
    // vec4 cpos = projection_pixel_matrix * fragment_cs_position;
    // cpos = cpos / cpos.w;
    // vec2 projected = vec2( cpos ) / bufferSize;
    // this is the viewport transform that I was missing:
    // float Xw = (cpos.x + 1) * (800 * 0.5) + 0;
    // float Yw = (cpos.y + 1) * (600 * 0.5) + 0;
    // vec2 projected = vec2( Xw, Yw ) / bufferSize;
    // These two outputs must be the same:
    // out_color = vec4( projected, 0, 1 );
    // out_color = vec4( g_uvs, 0, 1 );
    Good luck!
  3. Vulkan 101 Tutorial

    This article was originally posted on my blog at http://av.dfki.de/~jhenriques/development.html. [subheading]Vulkan 101 Tutorial[/subheading] Welcome. In this tutorial we will be learning about Vulkan, with the steps and code to render a triangle to the screen. First, let me warn you that this is not a beginner's tutorial to rendering APIs. I assume you have some experience with OpenGL or DirectX, and you are here to get to know the particulars of Vulkan. My main goal with this "tutorial" is to get to a complete but minimal C program running a graphical pipeline using Vulkan on Windows (Linux maybe in the future). If you are interested in the in-progress port of this code to Linux/XCB you can check commit 15914e3. Let's start. [subheading]Housekeeping[/subheading] I will be posting all the code on this page. The code is posted progressively, but you will be able to see all of it through the tutorial. If you want to follow along and compile the code on your own you can clone the following git repo: [source=auto:1] git clone https://bitbucket.org/jose_henriques/vulkan_tutorial.git [/source] I have successfully compiled and run every commit on Windows 7 and 10 running Visual Studio 2013. My repo includes a build.bat that you should be able to use to compile the code. You do need to have the cl compiler on your path before you can call the build.bat from your console. You need to find and run the right vcvars*.bat for your setup. For the setup I'm using you can find it at "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64\vcvarsx86_amd64.bat". For each step I will point out the commit that you can check out to compile yourself. For example, to get the initial commit with the platform skeleton code, you can do the following: [source=auto:1] git checkout 39534dc3819998cbfd55012cfe76a5952254ee78 [/source] [Commit: 39534dc] There are some things this tutorial will not be trying to accomplish. First, I will not be creating a "framework" that you can take and start coding your next engine. I will indeed not even try to create functions for code that repeats itself. I see some value in having all the code available and explicit in a tutorial, instead of having to navigate a couple of indirections to get the full picture. This tutorial concludes with a triangle on the screen rendered through a vertex and fragment shader. You can use this code free of charge if that will bring you any value. I think this code is only useful to learn the API, but if you end up using it, credits are welcome. [subheading]Windows Platform Code[/subheading] [Commit: 39534dc] This is your typical Windows platform code to register and open a new window. If you are familiar with this feel free to skip it. We will be starting with this minimal setup and adding/completing it until we have our rendering going. I am including the code but will skip the explanation. [source=auto:1] #include <windows.h> LRESULT CALLBACK WindowProc( HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam ) { switch( uMsg ) { case WM_CLOSE: { PostQuitMessage( 0 ); break; } default: { break; } } // a pass-through for now; we will return to this callback later.
return DefWindowProc( hwnd, uMsg, wParam, lParam ); } int CALLBACK WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow ) { WNDCLASSEX windowClass = {}; windowClass.cbSize = sizeof(WNDCLASSEX); windowClass.style = CS_OWNDC | CS_VREDRAW | CS_HREDRAW; windowClass.lpfnWndProc = WindowProc; windowClass.hInstance = hInstance; windowClass.lpszClassName = "VulkanWindowClass"; RegisterClassEx( &windowClass ); HWND windowHandle = CreateWindowEx( NULL, "VulkanWindowClass", "Core", WS_OVERLAPPEDWINDOW | WS_VISIBLE, 100, 100, 800, // some random values for now. 600, // we will come back to these soon. NULL, NULL, hInstance, NULL ); MSG msg; bool done = false; while( !done ) { PeekMessage( &msg, NULL, NULL, NULL, PM_REMOVE ); if( msg.message == WM_QUIT ) { done = true; } else { TranslateMessage( &msg ); DispatchMessage( &msg ); } RedrawWindow( windowHandle, NULL, NULL, RDW_INTERNALPAINT ); } return msg.wParam; } [/source] If you get the repo and check out this commit you can use build.bat to compile the code. This is the contents of the batch file if you just want to copy/paste and compile on your own: [source=auto:1] @echo off mkdir build pushd build cl /Od /Zi ..\main.cpp user32.lib popd [/source] This will compile our test application and create a binary called main.exe in your project/build folder. If you run this application you will get a white window at position (100,100) of size (800,600) that you can quit. [subheading]Dynamically Loading Vulkan[/subheading] [Commit: bccc3df] Now we need to learn how we get Vulkan on our system. It is not made very clear by Khronos or by LunarG whether or not you need their SDK. The short answer is no, you do not need their SDK to start programming a Vulkan application. In a later chapter I will show you that even for the validation layers you can skip the SDK. We need two things: the library and the headers. The library should already be on your system, as it is provided by your GPU driver. On Windows it is called vulkan-1.dll (libvulkan.so.1 on Linux) and should be in your system folder. Khronos says that the headers provided with a loader and/or driver should be sufficient. I did not find them on my machine, so I got them from the Khronos Registry Vulkan-Docs repo: [source=auto:1] git clone https://github.com/KhronosGroup/Vulkan-Docs.git [/source] I also needed the Loader and Validation Layers repo: [source=auto:1] git clone https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers.git [/source] We will need the Loader and Validation Layers later, but for now copy vulkan.h and vk_platform.h to your application folder. If you are following along with the git repo, I added these headers to the commit. We include vulkan.h and start loading the API functions we need. We will be dynamically loading the Vulkan functions, and we want to make sure we are using the Windows platform specific defines. So we will add the following code: [source=auto:1] #define VK_USE_PLATFORM_WIN32_KHR #define VK_NO_PROTOTYPES #include "vulkan.h" [/source] For every Vulkan function we want to use we first declare it and load it from the dynamic library. This process is platform-dependent. For now we'll create a win32_LoadVulkan() function. Note that we have to add code similar to the vkCreateInstance() loading code below for every Vulkan function we call.
[source=auto:1] PFN_vkCreateInstance vkCreateInstance = NULL; void win32_LoadVulkan( ) { HMODULE vulkan_module = LoadLibrary( "vulkan-1.dll" ); assert( vulkan_module, "Failed to load vulkan module." ); vkCreateInstance = (PFN_vkCreateInstance) GetProcAddress( vulkan_module, "vkCreateInstance" ); assert( vkCreateInstance, "Failed to load vkCreateInstance function pointer." ); } [/source] I have also created a helper function assert() that does what you would expect. This will be our "debugging" facility! Feel free to use your preferred version of this function. [source=auto:1] void assert( bool flag, char *msg = "" ) { if( !flag ) { OutputDebugStringA( "ASSERT: " ); OutputDebugStringA( msg ); OutputDebugStringA( "\n" ); int *base = 0; *base = 1; } } [/source] That should cover all of our Windows specific code. Next we will start talking about Vulkan and its specific quirks. [subheading]Creating a Vulkan Instance[/subheading] [Commit: 52259bb] Vulkan data structures are used as function parameters. We fill them as follows: [source=auto:1] VkApplicationInfo applicationInfo; applicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO; // sType is a member of all structs applicationInfo.pNext = NULL; // as are pNext and flags applicationInfo.pApplicationName = "First Test"; // The name of our application applicationInfo.pEngineName = NULL; // The name of the engine applicationInfo.engineVersion = 1; // The version of the engine applicationInfo.apiVersion = VK_MAKE_VERSION(1, 0, 0); // The version of Vulkan we're using [/source] Now, if we take a look at what the specification has to say about VkApplicationInfo we find out that most of these fields can be zero. In all cases .sType is known (always VK_STRUCTURE_TYPE_ followed by the structure name in uppercase). While for this tutorial I will try to be explicit about most of the values we use to fill up this data structure, I might be leaving something at 0 because I will always be doing this: [source=auto:1] VkApplicationInfo applicationInfo = { }; // notice me senpai! applicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO; applicationInfo.pApplicationName = "First Test"; applicationInfo.engineVersion = 1; applicationInfo.apiVersion = VK_MAKE_VERSION(1, 0, 0); [/source] Next, almost all functions will return a VkResult enum. So, let's write a simple helper leveraging our awesome debug facilities: [source=auto:1] void checkVulkanResult( VkResult &result, char *msg ) { assert( result == VK_SUCCESS, msg ); } [/source] During the creation of the graphics pipeline we will be setting up a whole lot of state and creating a whole lot of "context". To help us keep track of all this Vulkan state, we will create the following: [source=auto:1] struct vulkan_context { uint32_t width; uint32_t height; VkInstance instance; }; vulkan_context context; [/source] This context will grow, but for now let's keep marching. You probably noticed that I have sneaked a thing called an instance into our context. Vulkan keeps no global state at all. Every time Vulkan requires some application state you will need to pass your VkInstance. And this is true for many constructs, including our graphics pipeline. It's just one of the things we need to create, init and keep around. So let's do it. Because this process will repeat itself for almost all function calls I will be a bit more detailed for this first instance (pun intended!).
So, checking the spec, to create a VkInstance we need to call: [source=auto:1] VkResult vkCreateInstance( const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance); [/source] Quick note about allocators: as a rule of thumb, whenever a function asks for a pAllocator you can pass NULL and Vulkan will use the default allocator. Using a custom allocator is not a topic I will be covering in this tutorial. Suffice it to notice them and know that Vulkan does allow your application to control its memory allocation. Now, the process I was talking about is that the function requires you to fill some data structure, generally some Vk*CreateInfo, and pass it to the Vulkan function, in this case vkCreateInstance, which will return the result in its last parameter: [source=auto:1] VkInstanceCreateInfo instanceInfo = { }; instanceInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO; instanceInfo.pApplicationInfo = &applicationInfo; instanceInfo.enabledLayerCount = 0; instanceInfo.ppEnabledLayerNames = NULL; instanceInfo.enabledExtensionCount = 0; instanceInfo.ppEnabledExtensionNames = NULL; result = vkCreateInstance( &instanceInfo, NULL, &context.instance ); checkVulkanResult( result, "Failed to create vulkan instance." ); [/source] You can compile and run this code but nothing new will happen. We need to fill the instance info with the validation layers we might want to use and with the extensions we require, so that we can do something more interesting than a white window... [subheading]Validation Layers[/subheading] [Commit: eb1cf65] One of the core principles of Vulkan is efficiency. The counterpart to this is that all validation and error checking is basically non-existent! Vulkan will indeed crash and/or result in undefined behavior if you make a mistake. This is all fine, but while developing our application we might want to know why it is not showing what we expect or, when it crashes, exactly why it crashed. Enter validation layers. Vulkan is a layered API. There is a core layer that we are calling into, but in between the API calls and the loader other "layers" can intercept the API calls. The ones we are interested in here are the validation layers that will help us debug and track problems with our usage of the API. You want to develop your application with these layers on, but when shipping you should disable them. To find out the layers our loader knows about we need to call: [source=auto:1] uint32_t layerCount = 0; vkEnumerateInstanceLayerProperties( &layerCount, NULL ); assert( layerCount != 0, "Failed to find any layer in your system." ); VkLayerProperties *layersAvailable = new VkLayerProperties[layerCount]; vkEnumerateInstanceLayerProperties( &layerCount, layersAvailable ); [/source] (Don't forget to add the declaration at the top and the loading of vkEnumerateInstanceLayerProperties to the win32_LoadVulkan() function.) This is another repeating mechanism. We call the function twice. The first time we pass NULL as the VkLayerProperties parameter to query the layer count. Next we allocate the necessary space to hold that many elements and we call the function a second time to fill our data structures. If you run this piece of code you will notice that you might have found no layer... This is because, at least on my system, the loader could not find any layer. To get some validation layers we need the SDK and/or to compile the code in Vulkan-LoaderAndValidationLayers.git.
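If you want to see which layers the loader actually found on your machine, you can dump their names with the same OutputDebugStringA facility we have been using. This is just a small debugging sketch, not part of the tutorial code; it only assumes the layerCount/layersAvailable variables filled above:

[source=auto:1]
// Optional: print every layer the loader reports, plus its description.
for( uint32_t i = 0; i < layerCount; ++i ) {
    OutputDebugStringA( layersAvailable[i].layerName );
    OutputDebugStringA( " - " );
    OutputDebugStringA( layersAvailable[i].description );
    OutputDebugStringA( "\n" );
}
[/source]

This makes it obvious whether the VK_LAYER_PATH setup described next actually made the layers visible to the loader.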
What I found out during the process of trying to figure out if you need the SDK or not is that you only need the *.json and the *.dll of the layer you want somewhere in your project folder, and then you can set the VK_LAYER_PATH environment variable to the path of the folder with those files. I kind of prefer this solution over the more obscure way where the SDK sets up layer information in the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan\ExplicitLayers, because this way you can better control which ones are loaded by your application. (I do wonder about security problems this might raise?) The layer we will be using is called VK_LAYER_LUNARG_standard_validation. This layer works as a kind of superset of a bunch of other layers. [this one comes from the SDK]. So, I will assume you have either installed the SDK or that you have copied the VkLayer_*.dll and VkLayer_*.json files of the layers you want to use into a layers folder and set VK_LAYER_PATH=/path/to/layers/folder. We can now complete this validation layer section by making sure we found the VK_LAYER_LUNARG_standard_validation layer and configuring the instance with this info: [source=auto:1] bool foundValidation = false; for( int i = 0; i < layerCount; ++i ) { if( strcmp( layersAvailable[i].layerName, "VK_LAYER_LUNARG_standard_validation" ) == 0 ) { foundValidation = true; } } assert( foundValidation, "Could not find validation layer." ); const char *layers[] = { "VK_LAYER_LUNARG_standard_validation" }; // update the VkInstanceCreateInfo with: instanceInfo.enabledLayerCount = 1; instanceInfo.ppEnabledLayerNames = layers; [/source] The sad thing is, this commit will still produce the same result as before. We need to handle the extensions to start producing some debug info. [subheading]Extensions[/subheading] [Commit: 9c416b3] Much like in OpenGL and other APIs, extensions add new functionality to Vulkan that is not part of the core API. To start debugging our application we need the VK_EXT_debug_report extension. The following code is similar to the layer loading code, the notable difference being that we are looking for 3 specific extensions. I will sneak in two other extensions that we will need later, so don't worry about them for now. [source=auto:1] uint32_t extensionCount = 0; vkEnumerateInstanceExtensionProperties( NULL, &extensionCount, NULL ); VkExtensionProperties *extensionsAvailable = new VkExtensionProperties[extensionCount]; vkEnumerateInstanceExtensionProperties( NULL, &extensionCount, extensionsAvailable ); const char *extensions[] = { "VK_KHR_surface", "VK_KHR_win32_surface", "VK_EXT_debug_report" }; uint32_t numberRequiredExtensions = sizeof(extensions) / sizeof(char*); uint32_t foundExtensions = 0; for( uint32_t i = 0; i < extensionCount; ++i ) { for( int j = 0; j < numberRequiredExtensions; ++j ) { if( strcmp( extensionsAvailable[i].extensionName, extensions[j] ) == 0 ) { foundExtensions++; } } } assert( foundExtensions == numberRequiredExtensions, "Could not find debug extension" ); [/source] This extension adds three new functions: vkCreateDebugReportCallbackEXT(), vkDestroyDebugReportCallbackEXT(), and vkDebugReportMessageEXT(). Because these functions are not part of the core Vulkan API, we cannot load them the same way we have been loading the other functions. We need to use vkGetInstanceProcAddr().
Once we add that function to our win32_LoadVulkan() we can define another helper function that should look familiar: [source=auto:1] PFN_vkCreateDebugReportCallbackEXT vkCreateDebugReportCallbackEXT = NULL; PFN_vkDestroyDebugReportCallbackEXT vkDestroyDebugReportCallbackEXT = NULL; PFN_vkDebugReportMessageEXT vkDebugReportMessageEXT = NULL; void win32_LoadVulkanExtensions( vulkan_context &context ) { *(void **)&vkCreateDebugReportCallbackEXT = vkGetInstanceProcAddr( context.instance, "vkCreateDebugReportCallbackEXT" ); *(void **)&vkDestroyDebugReportCallbackEXT = vkGetInstanceProcAddr( context.instance, "vkDestroyDebugReportCallbackEXT" ); *(void **)&vkDebugReportMessageEXT = vkGetInstanceProcAddr( context.instance, "vkDebugReportMessageEXT" ); } [/source] The extension expects us to provide a callback where all debugging info will be delivered. Here is our callback: [source=auto:1] VKAPI_ATTR VkBool32 VKAPI_CALL MyDebugReportCallback( VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage, void* pUserData ) { OutputDebugStringA( pLayerPrefix ); OutputDebugStringA( " " ); OutputDebugStringA( pMessage ); OutputDebugStringA( "\n" ); return VK_FALSE; } [/source] Nothing fancy, as we only need to know the layer the message is coming from and the message itself. I have not yet talked about this, but I normally debug with Visual Studio. I told you I don't use the IDE, but for debugging there really is no alternative. What I do is just start a debugging session with devenv .\build\main.exe. You might need to load main.cpp and then you are set to start setting breakpoints, watches, etc... The only thing missing is to add the call to load our Vulkan extension functions, register our callback, and destroy it at the end of the app. (Notice that we can control the kind of reporting we want with callbackCreateInfo.flags and that we added a VkDebugReportCallbackEXT member to our vulkan_context structure.) [source=auto:1] win32_LoadVulkanExtensions( context ); VkDebugReportCallbackCreateInfoEXT callbackCreateInfo = { }; callbackCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; callbackCreateInfo.flags = VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT; callbackCreateInfo.pfnCallback = &MyDebugReportCallback; callbackCreateInfo.pUserData = NULL; result = vkCreateDebugReportCallbackEXT( context.instance, &callbackCreateInfo, NULL, &context.callback ); checkVulkanResult( result, "Failed to create debug report callback." ); [/source] When finished we can clean up with: [source=auto:1] vkDestroyDebugReportCallbackEXT( context.instance, context.callback, NULL ); [/source] So, we are now ready to start creating our rendering surfaces, but for that I need to explain those two extra extensions. [subheading]Devices[/subheading] [Commit: b5d2444] We have everything in place to start setting up our Windows rendering backend. Now we need to create a rendering surface and to find out which physical devices of our machine support this rendering surface. This is where we use those two extra extensions we sneaked in on our instance creation: VK_KHR_surface and VK_KHR_win32_surface. The VK_KHR_surface extension should be present on all systems, as it abstracts each platform's way of showing a native window/surface. Then we have another extension that is responsible for creating the VkSurface on a particular system.
For Windows this is VK_KHR_win32_surface. Before that though, a word about physical and logical devices, and queues. A physical device represents one single GPU on your system. You can have several on your system. A logical device is how the application keeps track of its use of the physical device. Each physical device defines the number and type of queues it supports. (Think compute and graphics queues.) What we need to do is enumerate the physical devices in our system and pick the one we want to use. In this tutorial we will just pick the first one we find that has a graphics queue and that can present our renderings... if we cannot find any, we fail miserably! We start by creating a surface for our rendering that is connected to the window we created: (Notice that vkCreateWin32SurfaceKHR() is an instance function provided by the VK_KHR_win32_surface extension. You must add it to win32_LoadVulkanExtensions().) [source=auto:1] VkWin32SurfaceCreateInfoKHR surfaceCreateInfo = {}; surfaceCreateInfo.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR; surfaceCreateInfo.hinstance = hInstance; surfaceCreateInfo.hwnd = windowHandle; result = vkCreateWin32SurfaceKHR( context.instance, &surfaceCreateInfo, NULL, &context.surface ); checkVulkanResult( result, "Could not create surface." ); [/source] Next, we need to iterate over all physical devices and find the one that supports rendering to this surface and has a graphics queue: [source=auto:1] uint32_t physicalDeviceCount = 0; vkEnumeratePhysicalDevices( context.instance, &physicalDeviceCount, NULL ); VkPhysicalDevice *physicalDevices = new VkPhysicalDevice[physicalDeviceCount]; vkEnumeratePhysicalDevices( context.instance, &physicalDeviceCount, physicalDevices ); for( uint32_t i = 0; i < physicalDeviceCount; ++i ) { VkPhysicalDeviceProperties deviceProperties = {}; vkGetPhysicalDeviceProperties( physicalDevices[i], &deviceProperties ); uint32_t queueFamilyCount = 0; vkGetPhysicalDeviceQueueFamilyProperties( physicalDevices[i], &queueFamilyCount, NULL ); VkQueueFamilyProperties *queueFamilyProperties = new VkQueueFamilyProperties[queueFamilyCount]; vkGetPhysicalDeviceQueueFamilyProperties( physicalDevices[i], &queueFamilyCount, queueFamilyProperties ); for( uint32_t j = 0; j < queueFamilyCount; ++j ) { VkBool32 supportsPresent; vkGetPhysicalDeviceSurfaceSupportKHR( physicalDevices[i], j, context.surface, &supportsPresent ); if( supportsPresent && ( queueFamilyProperties[j].queueFlags & VK_QUEUE_GRAPHICS_BIT ) ) { context.physicalDevice = physicalDevices[i]; context.physicalDeviceProperties = deviceProperties; context.presentQueueIdx = j; break; } } delete[] queueFamilyProperties; if( context.physicalDevice ) { break; } } delete[] physicalDevices; assert( context.physicalDevice, "No physical device detected that can render and present!" ); [/source] That is a lot of code, but for most of it we have seen something similar already. First, there are a lot of new functions that you need to load dynamically (check the repo code) and our vulkan_context gained some new members. Of note is that we now know the queue index on the physical device where we can submit some rendering work. What is missing is to create the logical device, i.e., our connection to the physical device.
I will again sneak in something we will be using for the next step: the VK_KHR_swapchain device extension: [source=auto:1] // info for accessing one of the devices rendering queues: VkDeviceQueueCreateInfo queueCreateInfo = {}; queueCreateInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO; queueCreateInfo.queueFamilyIndex = context.presentQueueIdx; queueCreateInfo.queueCount = 1; float queuePriorities[] = { 1.0f }; // ask for highest priority for our queue. (range [0,1]) queueCreateInfo.pQueuePriorities = queuePriorities; VkDeviceCreateInfo deviceInfo = {}; deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO; deviceInfo.queueCreateInfoCount = 1; deviceInfo.pQueueCreateInfos = &queueCreateInfo; deviceInfo.enabledLayerCount = 1; deviceInfo.ppEnabledLayerNames = layers; const char *deviceExtensions[] = { "VK_KHR_swapchain" }; deviceInfo.enabledExtensionCount = 1; deviceInfo.ppEnabledExtensionNames = deviceExtensions; VkPhysicalDeviceFeatures features = {}; features.shaderClipDistance = VK_TRUE; deviceInfo.pEnabledFeatures = &features; result = vkCreateDevice( context.physicalDevice, &deviceInfo, NULL, &context.device ); checkVulkanResult( result, "Failed to create logical device!" ); [/source] Don't forget to remove the layers information when you stop debugging your application. VkPhysicalDeviceFeatures gives us access to fine-grained optional specification features that our implementation may support. They are enabled per-feature. You can check the spec for a list of members. Our shader will require this one particular feature to be enabled. Without it our pipeline does not work properly. (By the way, I got this information out of the validation layers. So they are useful!) Next we will create our swap chain which will finally enable us to put something on the screen. [subheading]Swap Chain[/subheading] [Commit: 3f07df7] Now that we have the surface we need to get a handle of the image buffers we will be writing to. We use the swap chain extension to do this. On creation we pass the number of buffers we want (think single/double/n buffered), the resolution, color formats and color space, and the presentation mode. There is a significant amount of setup until we can create a swap chain, but there is nothing hard to understand. We start by figuring out what color format and color space we will be using: [source=auto:1] uint32_t formatCount = 0; vkGetPhysicalDeviceSurfaceFormatsKHR( context.physicalDevice, context.surface, &formatCount, NULL ); VkSurfaceFormatKHR *surfaceFormats = new VkSurfaceFormatKHR[formatCount]; vkGetPhysicalDeviceSurfaceFormatsKHR( context.physicalDevice, context.surface, &formatCount, surfaceFormats ); // If the format list includes just one entry of VK_FORMAT_UNDEFINED, the surface has // no preferred format. Otherwise, at least one supported format will be returned. VkFormat colorFormat; if( formatCount == 1 && surfaceFormats[0].format == VK_FORMAT_UNDEFINED ) { colorFormat = VK_FORMAT_B8G8R8_UNORM; } else { colorFormat = surfaceFormats[0].format; } VkColorSpaceKHR colorSpace; colorSpace = surfaceFormats[0].colorSpace; delete[] surfaceFormats; [/source] Next we need to check the surface capabilities to figure out the number of buffers we can ask for, the resolution we will be using. Also we need to decide if we will be applying some surface transformation (like rotating 90 degrees... we are not). We must make sure that the resolution we ask for the swap chain matches the surfaceCapabilities.currentExtent. 
In the case where both width and height are -1 (and they are either both -1 or neither is!) it means the surface size is undefined and can effectively be set to any value. However, if the size is set, the swap chain size MUST match! [source=auto:1] VkSurfaceCapabilitiesKHR surfaceCapabilities = {}; vkGetPhysicalDeviceSurfaceCapabilitiesKHR( context.physicalDevice, context.surface, &surfaceCapabilities ); // we are effectively looking for double-buffering: // if surfaceCapabilities.maxImageCount == 0 there is actually no limit on the number of images! uint32_t desiredImageCount = 2; if( desiredImageCount < surfaceCapabilities.minImageCount ) { desiredImageCount = surfaceCapabilities.minImageCount; } else if( surfaceCapabilities.maxImageCount != 0 && desiredImageCount > surfaceCapabilities.maxImageCount ) { desiredImageCount = surfaceCapabilities.maxImageCount; } VkExtent2D surfaceResolution = surfaceCapabilities.currentExtent; if( surfaceResolution.width == -1 ) { surfaceResolution.width = context.width; surfaceResolution.height = context.height; } else { context.width = surfaceResolution.width; context.height = surfaceResolution.height; } VkSurfaceTransformFlagBitsKHR preTransform = surfaceCapabilities.currentTransform; if( surfaceCapabilities.supportedTransforms & VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR ) { preTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR; } [/source] For the presentation mode we have some options. VK_PRESENT_MODE_MAILBOX_KHR maintains a single-entry queue for presentation, from which it removes an entry at every vertical sync if the queue is not empty. But when a frame is committed it obviously replaces the previous one. So, in a sense, it does not vertically synchronise, because a frame might not be displayed at all if a newer one was generated in between syncs, yet it does not screen-tear either. This is our preferred presentation mode if supported, as it is the lowest-latency non-tearing presentation mode. VK_PRESENT_MODE_IMMEDIATE_KHR does not vertically synchronise and will screen-tear if a frame is late. VK_PRESENT_MODE_FIFO_RELAXED_KHR keeps a queue and will v-sync, but will screen-tear if a frame is late. VK_PRESENT_MODE_FIFO_KHR is similar to the previous one but it won't screen-tear. This is the only present mode that the spec requires to be supported, and as such it is our default value: [source=auto:1] uint32_t presentModeCount = 0; vkGetPhysicalDeviceSurfacePresentModesKHR( context.physicalDevice, context.surface, &presentModeCount, NULL ); VkPresentModeKHR *presentModes = new VkPresentModeKHR[presentModeCount]; vkGetPhysicalDeviceSurfacePresentModesKHR( context.physicalDevice, context.surface, &presentModeCount, presentModes ); VkPresentModeKHR presentationMode = VK_PRESENT_MODE_FIFO_KHR; // always supported.
for( uint32_t i = 0; i < presentModeCount; ++i ) { if( presentModes[i] == VK_PRESENT_MODE_MAILBOX_KHR ) { presentationMode = VK_PRESENT_MODE_MAILBOX_KHR; break; } } delete[] presentModes; [/source] And the only thing missing is putting this all together and creating our swap chain: [source=auto:1] VkSwapchainCreateInfoKHR swapChainCreateInfo = {}; swapChainCreateInfo.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR; swapChainCreateInfo.surface = context.surface; swapChainCreateInfo.minImageCount = desiredImageCount; swapChainCreateInfo.imageFormat = colorFormat; swapChainCreateInfo.imageColorSpace = colorSpace; swapChainCreateInfo.imageExtent = surfaceResolution; swapChainCreateInfo.imageArrayLayers = 1; swapChainCreateInfo.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT; swapChainCreateInfo.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE; VkDeviceMemory vertexBufferMemory; result = vkAllocateMemory( context.device, &bufferAllocateInfo, NULL, &vertexBufferMemory ); checkVulkanResult( result, "Failed to allocate buffer memory." ); [/source] Even if we ask for host-accessible memory, this memory is not directly accessible by the host. What it does is create a mappable memory. To be able to write to this memory we must first retrieve a host virtual address pointer to a mappable memory object by calling vkMapMemory(). So let us map this memory so we can write to it and bind it: [source=auto:1] void *mapped; result = vkMapMemory( context.device, vertexBufferMemory, 0, VK_WHOLE_SIZE, 0, &mapped ); checkVulkanResult( result, "Failed to map buffer memory." ); vertex *triangle = (vertex *) mapped; vertex v1 = { -1.0f, -1.0f, 0, 1.0f }; vertex v2 = { 1.0f, -1.0f, 0, 1.0f }; vertex v3 = { 0.0f, 1.0f, 0, 1.0f }; triangle[0] = v1; triangle[1] = v2; triangle[2] = v3; vkUnmapMemory( context.device, vertexBufferMemory ); result = vkBindBufferMemory( context.device, context.vertexInputBuffer, vertexBufferMemory, 0 ); checkVulkanResult( result, "Failed to bind buffer memory." ); [/source] There you go. One triangle set to go through our pipeline. Thing is, we don't have a pipeline, do we mate? We are almost there... we just need to talk about shaders! [subheading]Shaders[/subheading] [Commit: d2cf6be] Our goal is to set up a simple vertex and fragment shader. Vulkan expects the shader code to be in SPIR-V format, but that is not such a big problem because we can use a freely available tool to convert our GLSL shaders to SPIR-V: glslangValidator. You can get access to the git repo here: [source=auto:1] git clone https://github.com/KhronosGroup/glslang [/source] So, for example, if for our simple.vert vertex shader we have the following code: [source=auto:1] #version 400 #extension GL_ARB_separate_shader_objects : enable #extension GL_ARB_shading_language_420pack : enable layout (location = 0) in vec4 pos; void main() { gl_Position = pos; } [/source] we can call: [source=auto:1] glslangValidator -V simple.vert [/source] and this will create a vert.spv in the same folder. Neat, right? And the same for our simple.frag fragment shader: [source=auto:1] #version 400 #extension GL_ARB_separate_shader_objects : enable #extension GL_ARB_shading_language_420pack : enable layout (location = 0) out vec4 uFragColor; void main() { uFragColor = vec4( 0.0, 0.5, 1.0, 1.0 ); } [/source] [source=auto:1] glslangValidator -V simple.frag [/source] And we end up with our frag.spv.
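Quick aside before we load these: the definition of the vertex struct used above when filling the vertex buffer does not appear in this excerpt. Given the vec4 position input declared in simple.vert and the way the triangle vertices are initialized, it is presumably just four floats, something along these lines (my assumption; check the repo for the actual definition):

[source=auto:1]
// Assumed layout of the vertex struct: a single vec4 position, matching
// "layout (location = 0) in vec4 pos;" in simple.vert and the
// VK_FORMAT_R32G32B32A32_SFLOAT attribute used later in the pipeline setup.
struct vertex {
    float x, y, z, w;
};
[/source]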
Keeping to our principle of showing all the code in place, to load it up to Vulkan we can go and do the following: [source=auto:1] uint32_t codeSize; char *code = new char[10000]; HANDLE fileHandle = 0; // load our vertex shader: fileHandle = CreateFile( "..\\vert.spv", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL ); if( fileHandle == INVALID_HANDLE_VALUE ) { OutputDebugStringA( "Failed to open shader file." ); exit(1); } ReadFile( (HANDLE)fileHandle, code, 10000, (LPDWORD)&codeSize, 0 ); CloseHandle( fileHandle ); VkShaderModuleCreateInfo vertexShaderCreationInfo = {}; vertexShaderCreationInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO; vertexShaderCreationInfo.codeSize = codeSize; vertexShaderCreationInfo.pCode = (uint32_t *)code; VkShaderModule vertexShaderModule; result = vkCreateShaderModule( context.device, &vertexShaderCreationInfo, NULL, &vertexShaderModule ); checkVulkanResult( result, "Failed to create vertex shader module." ); // load our fragment shader: fileHandle = CreateFile( "..\\frag.spv", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL ); if( fileHandle == INVALID_HANDLE_VALUE ) { OutputDebugStringA( "Failed to open shader file." ); exit(1); } ReadFile( (HANDLE)fileHandle, code, 10000, (LPDWORD)&codeSize, 0 ); CloseHandle( fileHandle ); VkShaderModuleCreateInfo fragmentShaderCreationInfo = {}; fragmentShaderCreationInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO; fragmentShaderCreationInfo.codeSize = codeSize; fragmentShaderCreationInfo.pCode = (uint32_t *)code; VkShaderModule fragmentShaderModule; result = vkCreateShaderModule( context.device, &fragmentShaderCreationInfo, NULL, &fragmentShaderModule ); checkVulkanResult( result, "Failed to create fragment shader module." ); [/source] Notice that we fail miserably if we cannot find the shader code, and that we expect to find it in the parent folder of where we run. This is fine if you run it from the Visual Studio devenv, but it will simply crash and not report anything if you run from the command line. I suggest you change this to whatever fits you better. A cursory glance at this code and you should be calling me all kinds of names... I will endure it. I know what you are complaining about but, for the purpose of this tutorial, I don't care. Believe me, this is not the code I use in my own internal engines. ;) Hopefully, after you stop calling me names, you should by now know what you need to do to load your own shaders. Ok. I think we are finally ready to start setting up our rendering pipeline. [subheading]Graphics Pipeline[/subheading] [Commit: 0baeb96] A graphics pipeline keeps track of all the state required to render. It is a collection of multiple shader stages, multiple fixed-function pipeline stages and a pipeline layout. Everything that we have been creating up to this point is so that we can configure the pipeline in one way or another. We need to set everything up front. Remember that Vulkan keeps no state, and as such we need to configure and store all the state we want/need, and we do it by creating a VkPipeline. As you know, or at least imagine, there is a whole lot of state in a graphics pipeline. From the viewport to the blend functions, from the shader stages to the bindings... As such, what follows is setting up all this state. (In this instance, we will be leaving out some big parts, like the descriptor sets, bindings, etc...)
So, let's start by creating an empty layout: [source=auto:1] VkPipelineLayoutCreateInfo layoutCreateInfo = {}; layoutCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO; layoutCreateInfo.setLayoutCount = 0; layoutCreateInfo.pSetLayouts = NULL; // Not setting any bindings! layoutCreateInfo.pushConstantRangeCount = 0; layoutCreateInfo.pPushConstantRanges = NULL; result = vkCreatePipelineLayout( context.device, &layoutCreateInfo, NULL, &context.pipelineLayout ); checkVulkanResult( result, "Failed to create pipeline layout." ); [/source] We might return to this stage later so that we can, for example, set a uniform buffer object to pass some uniform values to our shaders, But for this first tutorial empty is fine! Next we setup our shader stages with the shader modules we loaded: [source=auto:1] VkPipelineShaderStageCreateInfo shaderStageCreateInfo[2] = {}; shaderStageCreateInfo[0].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO; shaderStageCreateInfo[0].stage = VK_SHADER_STAGE_VERTEX_BIT; shaderStageCreateInfo[0].module = vertexShaderModule; shaderStageCreateInfo[0].pName = "main"; // shader entry point function name shaderStageCreateInfo[0].pSpecializationInfo = NULL; shaderStageCreateInfo[1].sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO; shaderStageCreateInfo[1].stage = VK_SHADER_STAGE_FRAGMENT_BIT; shaderStageCreateInfo[1].module = fragmentShaderModule; shaderStageCreateInfo[1].pName = "main"; // shader entry point function name shaderStageCreateInfo[1].pSpecializationInfo = NULL; [/source] Nothing special going on here. To configure the vertex input handling we follow with: [source=auto:1] VkVertexInputBindingDescription vertexBindingDescription = {}; vertexBindingDescription.binding = 0; vertexBindingDescription.stride = sizeof(vertex); vertexBindingDescription.inputRate = VK_VERTEX_INPUT_RATE_VERTEX; VkVertexInputAttributeDescription vertexAttributeDescritpion = {}; vertexAttributeDescritpion.location = 0; vertexAttributeDescritpion.binding = 0; vertexAttributeDescritpion.format = VK_FORMAT_R32G32B32A32_SFLOAT; vertexAttributeDescritpion.offset = 0; VkPipelineVertexInputStateCreateInfo vertexInputStateCreateInfo = {}; vertexInputStateCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO; vertexInputStateCreateInfo.vertexBindingDescriptionCount = 1; vertexInputStateCreateInfo.pVertexBindingDescriptions = &vertexBindingDescription; vertexInputStateCreateInfo.vertexAttributeDescriptionCount = 1; vertexInputStateCreateInfo.pVertexAttributeDescriptions = &vertexAttributeDescritpion; // vertex topology config: VkPipelineInputAssemblyStateCreateInfo inputAssemblyStateCreateInfo = {}; inputAssemblyStateCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO; inputAssemblyStateCreateInfo.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST; inputAssemblyStateCreateInfo.primitiveRestartEnable = VK_FALSE; [/source] Ok, some explanations required here. In the first part we bind the vertex position (our (x,y,z,w)) to location = 0, binding = 0. And then we are configuring the vertex topology to interpret our vertex buffer as a triangle list. Next, the viewport and scissors clipping is configured. We will later make this state dynamic so that we can change it per frame. 
[source=auto:1] VkViewport viewport = {}; viewport.x = 0; viewport.y = 0; viewport.width = context.width; viewport.height = context.height; viewport.minDepth = 0; viewport.maxDepth = 1; VkRect2D scissors = {}; scissors.offset = { 0, 0 }; scissors.extent = { context.width, context.height }; VkPipelineViewportStateCreateInfo viewportState = {}; viewportState.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO; viewportState.viewportCount = 1; viewportState.pViewports = &viewport; viewportState.scissorCount = 1; viewportState.pScissors = &scissors; [/source] Here we can set our rasterization configurations. Most of this are self explanatory: [source=auto:1] VkPipelineRasterizationStateCreateInfo rasterizationState = {}; rasterizationState.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO; rasterizationState.depthClampEnable = VK_FALSE; rasterizationState.rasterizerDiscardEnable = VK_FALSE; rasterizationState.polygonMode = VK_POLYGON_MODE_FILL; rasterizationState.cullMode = VK_CULL_MODE_NONE; rasterizationState.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE; rasterizationState.depthBiasEnable = VK_FALSE; rasterizationState.depthBiasConstantFactor = 0; rasterizationState.depthBiasClamp = 0; rasterizationState.depthBiasSlopeFactor = 0; rasterizationState.lineWidth = 1; [/source] Next, sampling configuration: [source=auto:1] VkPipelineMultisampleStateCreateInfo multisampleState = {}; multisampleState.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO; multisampleState.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT; multisampleState.sampleShadingEnable = VK_FALSE; multisampleState.minSampleShading = 0; multisampleState.pSampleMask = NULL; multisampleState.alphaToCoverageEnable = VK_FALSE; multisampleState.alphaToOneEnable = VK_FALSE; [/source] At this stage we enable depth testing and disable stencil: [source=auto:1] VkStencilOpState noOPStencilState = {}; noOPStencilState.failOp = VK_STENCIL_OP_KEEP; noOPStencilState.passOp = VK_STENCIL_OP_KEEP; noOPStencilState.depthFailOp = VK_STENCIL_OP_KEEP; noOPStencilState.compareOp = VK_COMPARE_OP_ALWAYS; noOPStencilState.compareMask = 0; noOPStencilState.writeMask = 0; noOPStencilState.reference = 0; VkPipelineDepthStencilStateCreateInfo depthState = {}; depthState.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO; depthState.depthTestEnable = VK_TRUE; depthState.depthWriteEnable = VK_TRUE; depthState.depthCompareOp = VK_COMPARE_OP_LESS_OR_EQUAL; depthState.depthBoundsTestEnable = VK_FALSE; depthState.stencilTestEnable = VK_FALSE; depthState.front = noOPStencilState; depthState.back = noOPStencilState; depthState.minDepthBounds = 0; depthState.maxDepthBounds = 0; [/source] Color blending, which is disabled for this tutorial, can be configured here: [source=auto:1] VkPipelineColorBlendAttachmentState colorBlendAttachmentState = {}; colorBlendAttachmentState.blendEnable = VK_FALSE; colorBlendAttachmentState.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_COLOR; colorBlendAttachmentState.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_DST_COLOR; colorBlendAttachmentState.colorBlendOp = VK_BLEND_OP_ADD; colorBlendAttachmentState.srcAlphaBlendFactor = VK_BLEND_FACTOR_ZERO; colorBlendAttachmentState.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO; colorBlendAttachmentState.alphaBlendOp = VK_BLEND_OP_ADD; colorBlendAttachmentState.colorWriteMask = 0xf; VkPipelineColorBlendStateCreateInfo colorBlendState = {}; colorBlendState.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO; colorBlendState.logicOpEnable = 
VK_FALSE; colorBlendState.logicOp = VK_LOGIC_OP_CLEAR; colorBlendState.attachmentCount = 1; colorBlendState.pAttachments = &colorBlendAttachmentState; colorBlendState.blendConstants[0] = 0.0; colorBlendState.blendConstants[1] = 0.0; colorBlendState.blendConstants[2] = 0.0; colorBlendState.blendConstants[3] = 0.0; [/source] All these configurations are now constant for the entirety of the pipeline's life. We might want to change some of this state per frame, like our viewport/scissors. To make a state dynamic we can do the following: [source=auto:1] VkDynamicState dynamicState[2] = { VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR }; VkPipelineDynamicStateCreateInfo dynamicStateCreateInfo = {}; dynamicStateCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO; dynamicStateCreateInfo.dynamicStateCount = 2; dynamicStateCreateInfo.pDynamicStates = dynamicState; [/source] And finally, we put everything together to create our graphics pipeline: [source=auto:1] VkGraphicsPipelineCreateInfo pipelineCreateInfo = {}; pipelineCreateInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO; pipelineCreateInfo.stageCount = 2; pipelineCreateInfo.pStages = shaderStageCreateInfo; pipelineCreateInfo.pVertexInputState = &vertexInputStateCreateInfo; pipelineCreateInfo.pInputAssemblyState = &inputAssemblyStateCreateInfo; pipelineCreateInfo.pTessellationState = NULL; pipelineCreateInfo.pViewportState = &viewportState; pipelineCreateInfo.pRasterizationState = &rasterizationState; pipelineCreateInfo.pMultisampleState = &multisampleState; pipelineCreateInfo.pDepthStencilState = &depthState; pipelineCreateInfo.pColorBlendState = &colorBlendState; pipelineCreateInfo.pDynamicState = &dynamicStateCreateInfo; pipelineCreateInfo.layout = context.pipelineLayout; pipelineCreateInfo.renderPass = context.renderPass; pipelineCreateInfo.subpass = 0; pipelineCreateInfo.basePipelineHandle = NULL; pipelineCreateInfo.basePipelineIndex = 0; result = vkCreateGraphicsPipelines( context.device, VK_NULL_HANDLE, 1, &pipelineCreateInfo, NULL, &context.pipeline ); checkVulkanResult( result, "Failed to create graphics pipeline." ); [/source] That was a lot of code... but it's just setting state. The good news is that we are now ready to start rendering our triangle. We will update our render method to do just that. [subheading]Final Render[/subheading] [Commit: 5613c5d] We are FINALLY ready to update our render code to put a blue-ish triangle on the screen. Can you believe it? Well, let me show you how: [source=auto:1] void render( ) { VkSemaphore presentCompleteSemaphore, renderingCompleteSemaphore; VkSemaphoreCreateInfo semaphoreCreateInfo = { VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO, 0, 0 }; vkCreateSemaphore( context.device, &semaphoreCreateInfo, NULL, &presentCompleteSemaphore ); vkCreateSemaphore( context.device, &semaphoreCreateInfo, NULL, &renderingCompleteSemaphore ); uint32_t nextImageIdx; vkAcquireNextImageKHR( context.device, context.swapChain, UINT64_MAX, presentCompleteSemaphore, VK_NULL_HANDLE, &nextImageIdx ); [/source] First we need to take care of synchronising our render calls, so we create a couple of semaphores and update our vkAcquireNextImageKHR() call. We need to change the presentation image from the VK_IMAGE_LAYOUT_PRESENT_SRC_KHR layout to the VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL layout.
We already know how to do this, so here is the code: [source=auto:1] VkCommandBufferBeginInfo beginInfo = {}; beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO; beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT; vkBeginCommandBuffer( context.drawCmdBuffer, &beginInfo ); // change image layout from VK_IMAGE_LAYOUT_PRESENT_SRC_KHR // to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL VkImageMemoryBarrier layoutTransitionBarrier = {}; layoutTransitionBarrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER; layoutTransitionBarrier.srcAccessMask = VK_ACCESS_MEMORY_READ_BIT; layoutTransitionBarrier.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; layoutTransitionBarrier.oldLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR; layoutTransitionBarrier.newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; layoutTransitionBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED; layoutTransitionBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED; layoutTransitionBarrier.image = context.presentImages[ nextImageIdx ]; VkImageSubresourceRange resourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }; layoutTransitionBarrier.subresourceRange = resourceRange; vkCmdPipelineBarrier( context.drawCmdBuffer, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0, 0, NULL, 0, NULL, 1, &layoutTransitionBarrier ); [/source] This is code you should by now be familiar with. Next we will activate our render pass: [source=auto:1] VkClearValue clearValue[] = { { 1.0f, 1.0f, 1.0f, 1.0f }, { 1.0, 0.0 } }; VkRenderPassBeginInfo renderPassBeginInfo = {}; renderPassBeginInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO; renderPassBeginInfo.renderPass = context.renderPass; renderPassBeginInfo.framebuffer = context.frameBuffers[ nextImageIdx ]; renderPassBeginInfo.renderArea = { 0, 0, context.width, context.height }; renderPassBeginInfo.clearValueCount = 2; renderPassBeginInfo.pClearValues = clearValue; vkCmdBeginRenderPass( context.drawCmdBuffer, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE ); [/source] Nothing special here. Just telling it which framebuffer to use and the clear values to set for both attachments. Next we bind all our rendering state by binding our graphics pipeline: [source=auto:1] vkCmdBindPipeline( context.drawCmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, context.pipeline ); // take care of dynamic state: VkViewport viewport = { 0, 0, context.width, context.height, 0, 1 }; vkCmdSetViewport( context.drawCmdBuffer, 0, 1, &viewport ); VkRect2D scissor = { 0, 0, context.width, context.height }; vkCmdSetScissor( context.drawCmdBuffer, 0, 1, &scissor); [/source] Notice how we set up the dynamic state at this stage. Next we render our beautiful triangle by binding our vertex buffer and asking Vulkan to draw one instance of it: [source=auto:1] VkDeviceSize offsets = { }; vkCmdBindVertexBuffers( context.drawCmdBuffer, 0, 1, &context.vertexInputBuffer, &offsets ); vkCmdDraw( context.drawCmdBuffer, 3, // vertex count 1, // instance count 0, // first vertex 0 ); // first instance vkCmdEndRenderPass( context.drawCmdBuffer ); [/source] We are almost done. Guess what is missing? Right, we need to change from VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, and we need to make sure all rendering work is done before we do that!
[source=auto:1] VkImageMemoryBarrier prePresentBarrier = {}; prePresentBarrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER; prePresentBarrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; prePresentBarrier.dstAccessMask = VK_ACCESS_MEMORY_READ_BIT; prePresentBarrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; prePresentBarrier.newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR; prePresentBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED; prePresentBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED; prePresentBarrier.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1}; prePresentBarrier.image = context.presentImages[ nextImageIdx ]; vkCmdPipelineBarrier( context.drawCmdBuffer, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, 0, 0, NULL, 0, NULL, 1, &prePresentBarrier ); vkEndCommandBuffer( context.drawCmdBuffer ); [/source] And that is it. We only need to submit and we are done: [source=auto:1] VkFence renderFence; VkFenceCreateInfo fenceCreateInfo = {}; fenceCreateInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO; vkCreateFence( context.device, &fenceCreateInfo, NULL, &renderFence ); VkPipelineStageFlags waitStageMask = { VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT }; VkSubmitInfo submitInfo = {}; submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO; submitInfo.waitSemaphoreCount = 1; submitInfo.pWaitSemaphores = &presentCompleteSemaphore; submitInfo.pWaitDstStageMask = &waitStageMask; submitInfo.commandBufferCount = 1; submitInfo.pCommandBuffers = &context.drawCmdBuffer; submitInfo.signalSemaphoreCount = 1; submitInfo.pSignalSemaphores = &renderingCompleteSemaphore; vkQueueSubmit( context.presentQueue, 1, &submitInfo, renderFence ); vkWaitForFences( context.device, 1, &renderFence, VK_TRUE, UINT64_MAX ); vkDestroyFence( context.device, renderFence, NULL ); VkPresentInfoKHR presentInfo = {}; presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR; presentInfo.waitSemaphoreCount = 1; presentInfo.pWaitSemaphores = &renderingCompleteSemaphore; presentInfo.swapchainCount = 1; presentInfo.pSwapchains = &context.swapChain; presentInfo.pImageIndices = &nextImageIdx; presentInfo.pResults = NULL; vkQueuePresentKHR( context.presentQueue, &presentInfo ); vkDestroySemaphore( context.device, presentCompleteSemaphore, NULL ); vkDestroySemaphore( context.device, renderingCompleteSemaphore, NULL ); } [/source] We made it! We now have a basic skeleton Vulkan application running. Hopefully you have learned enough about Vulkan to figure out how to proceed from here. This is, anyway, the code repo that I would have liked to have when I started... so maybe it will be helpful to someone else. I am currently writing another tutorial where I go into more detail about the topics I left open. (It includes shader uniforms, texture mapping and basic illumination.) So, do check regularly for the new content. I will also post it on my Twitter once I finish and publish it here. Feel free to contact me with suggestions and feedback at jhenriques@gmail.com. Have a nice one, JH. Find more information on my blog at http://av.dfki.de/~jhenriques/development.html.
  4. Vulkan 101 Tutorial

      You are welcome and thanks for the feedback.   Also, I just published a follow-up with the parts I left out. Once you are done with this one you can check out the next one: http://av.dfki.de/~jhenriques/vulkan_shaders.html ;)
  5. Hi, I have published a Vulkan tutorial that takes you from WinMain to a triangle on the screen. It is the tutorial I wanted when starting with Vulkan. Here it is: http://av.dfki.de/~jhenriques/development.html Hope this is helpful to someone. JH
  6. FBOs and glDrawBuffers

    Check this page: http://www.gpgpu.org/wiki/FAQ#Why_does_my_FBO_app_not_write_to_the_2nd_render_target.3F Hope it helps :)
  7. Tetris with SDL problems

    Hi again, If you really don't want to use arrays (or any other way of maintaining the board state...), you should "clean" the moving block's last position. You can do this by painting the current position of the block in the background color at the same time that you are painting the new block position. But, once again, I do think that's not the way to do it... How will you make the "contact" test? (to test if a falling block has collided with another or with the bottom limit) Will you get the next pixel color and compare pixel colors?!? By the way, how do you erase lines?
  8. Tetris with SDL problems

    You should maintain the board state in some form. Maybe like a matrix. This way the only thing you do in your game is keep the board matrix updated. (and this has nothing to do with graphics!) Every turn, you must update the falling block's position, which can be done by moving its representation in the board one line down (or to the sides, according to the user input...). Every time your falling block reaches another block, or the bottom line, you create a new falling block and you forget about the old one. ...BTW, this is the right moment to check the board matrix for full lines, the game over condition, etc... Now, to display the board, you may call board.draw() (or something like it...) each turn, whose only objective is to represent your board in a graphical way (you can even represent your board as text if you want (and you should, at least in the beginning, to test/debug)...). This way you will need to redraw every turn, but your game state will be safely maintained elsewhere. Hope this helps.
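    To make that a bit more concrete, here is a rough sketch of what I mean by keeping the board in a matrix and moving the falling block inside it. The names, the board size and the single-cell "block" are all made up just for illustration, it is not code from an actual game:

[source=auto:1]
#include <cstdio>

// Board kept as a matrix, completely independent of any graphics library.
const int BOARD_W = 10;
const int BOARD_H = 20;

struct Block { int x, y; };                   // falling block (a single cell, to keep it short)

struct Board {
    int cells[BOARD_H][BOARD_W] = {};         // 0 = empty, 1 = occupied
    Block falling = { BOARD_W / 2, 0 };

    bool canMoveDown() const {
        return falling.y + 1 < BOARD_H && cells[falling.y + 1][falling.x] == 0;
    }

    void tick() {                             // called once per game turn
        if( canMoveDown() ) {
            falling.y += 1;                   // move the block's representation one line down
        } else {
            cells[falling.y][falling.x] = 1;  // block comes to rest: store it in the board...
            // ...this is where you would check for full lines / game over...
            falling = { BOARD_W / 2, 0 };     // ...and spawn a new falling block at the top
        }
    }

    void draw() const {                       // "text rendering", handy for testing/debugging
        for( int y = 0; y < BOARD_H; ++y ) {
            for( int x = 0; x < BOARD_W; ++x ) {
                bool isFalling = ( x == falling.x && y == falling.y );
                std::putchar( isFalling ? '*' : ( cells[y][x] ? '#' : '.' ) );
            }
            std::putchar( '\n' );
        }
    }
};

int main() {
    Board board;
    for( int turn = 0; turn < 5; ++turn ) {   // simulate a few turns without any input
        board.tick();
    }
    board.draw();
    return 0;
}
[/source]

    The SDL drawing code would then just read the matrix in draw() instead of printing characters.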
  9. A reasonable project for a beginner

    I do recommend starting with something simpler than a Tetris clone. Why? Because, though it may not seem like it, Tetris has much more complex game logic than, for example, Pong. ...this is just my opinion, but you should forget about graphics, sound, networking and even user input for now... You should focus on how to create the game logic, how to maintain the game information (in Pong, this would be the ball and the blocks' positions, for example), and on finding out how you want your game to be played. Once you know all this, then you are ready to start programming. Once you start implementing stuff, you will start wanting to see what you are doing. Once that happens, it's time to start learning exactly what you need to achieve just that. This way you will have both the will and the necessary focus to really learn something new! Hope this can help. If you need any help, please, be my guest! (I'm a Computer Science graduate, and I'm still reading the beginners forums, and doing my Tetris-clone game! :) )