Everything posted by vlj

  1. A week ago AMD announced that they were working with their partners to push "HDR" monitors into the consumer market. These monitors should support 10-bit RGB color instead of the usual 8 bits per channel, and should be able to display higher contrast and brightness to make use of the 2 additional bits. My question is: what are the consequences for display algorithms and content creation? I have never seen a 10-bit monitor so far and thus have no idea how the extra colors "look". I've often been told that the human eye can't differentiate more than 256 shades of a single primary color. On the other hand, games have a tonemap pass that turns 10+ bits of color per channel into values displayable on a monitor by mimicking the eye's or a camera's behavior; with a 10-bit monitor this pass could happen "in the eye", with the viewer being dazzled by the monitor. However this sounds uncomfortable and may conflict with electronic post-processing such as dynamic contrast, not to mention incorrectly configured displays or poorly lit environments. Additionally, will traditional RGBA8 textures fit in an engine designed around 10-bit displays?
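For what it's worth, the extra precision of the tonemap output is easy to see with a toy example. This is only a sketch using Reinhard as a stand-in operator (not any particular engine's pass), assuming a non-negative HDR input:

#include <cstdint>

// Tone map an HDR value to [0,1) and quantize it to the given bit depth.
// With bits = 8 the output has 256 codes, with bits = 10 it has 1024,
// so smooth gradients get four times as many intermediate shades.
uint32_t tonemap_and_quantize(float hdr, uint32_t bits) // hdr assumed >= 0
{
    float mapped = hdr / (1.0f + hdr);                 // Reinhard tone map
    uint32_t max_code = (1u << bits) - 1u;             // 255 or 1023
    return static_cast<uint32_t>(mapped * max_code + 0.5f);
}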
  2. There have been some outstanding demos of deep neural networks developed for artistic purposes. Google wrote Deep Dream, and related style-transfer networks mix a content image and a style image and output an image combining both. Neural Doodle extends the idea by taking a simple 4-color map and outputting a "Monet-like" picture. I wonder if it is possible to create and train a DNN that could upscale a texture. I know it has been done for manga/anime pictures, and Neural Doodle can be used to "transfer" a high-resolution picture onto Minecraft's blocky textures. Unfortunately I have no experience with DNNs. I know the theory behind neural networks and I know the mathematical definition of a convolution, but I think I'm missing some knowledge to correctly understand DNN articles and the latest developments. Are there some "DNN theory for game dev" resources available, and can the technique be used to upscale textures?
  3. I found this article, which describes a simple network with 3 chained clamped convolution layers: https://arxiv.org/pdf/1501.00092.pdf On a high level the layers act as feature extraction, feature mapping (i.e. more or less fetching the closest candidate from a dictionary) and feature reconstruction. What I don't understand is how the network is applied: does it take a whole image as input and return a whole image, or does it take a subset of the picture and return a single pixel of the reconstructed picture? In the latter case, how does it deal with image borders? I'm thinking of rather generic textures (like grass, dirt, wood). Hopefully it may be useful to improve texture resolution in old titles while keeping their general look.
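On the whole-image question, a fully convolutional network can be applied to the entire image in one pass; each layer is just a convolution over the previous layer's output. Here is a minimal single-channel, single-filter sketch (the paper uses many filters per layer and trains on sub-images); replicate padding at the borders is one common choice, and is an assumption here rather than the paper's exact handling:

#include <algorithm>
#include <vector>

// Apply one convolution layer to a whole grayscale image.
// Border pixels are handled by clamping the sample coordinates.
std::vector<float> convolve(const std::vector<float>& image, int width, int height,
                            const std::vector<float>& kernel, int kernel_size)
{
    std::vector<float> output(image.size());
    int half = kernel_size / 2;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.f;
            for (int ky = -half; ky <= half; ++ky)
                for (int kx = -half; kx <= half; ++kx)
                {
                    int sx = std::clamp(x + kx, 0, width - 1);   // replicate border
                    int sy = std::clamp(y + ky, 0, height - 1);
                    sum += image[sy * width + sx] * kernel[(ky + half) * kernel_size + (kx + half)];
                }
            output[y * width + x] = std::max(sum, 0.f);          // ReLU clamp on the hidden layers
        }
    return output;
}

Chaining three such layers (extraction, mapping, reconstruction) maps a whole low-resolution input to a whole reconstructed output in one evaluation.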
  4. I'm trying to use the modules feature introduced in VS2015, which was unfortunately rejected for inclusion in C++17 but is still a TS. Basically, modules add an "import" mechanism to C++ that replaces includes and should heavily speed up build times. Unfortunately, while CL is able to produce and consume modules, Visual Studio 2015 doesn't support them. Luckily, adding some flags to the CL command line is easy and I'm able to use modules inside my project. The issue is that IntelliSense doesn't recognize the "import" directive and can't read module binary data at the moment. This means that I'm getting wrong syntax coloring, which is annoying. For compatibility with other compilers I wrapped the import directive inside an #ifdef MODULE_ENABLED, and I wonder if it is possible to make IntelliSense parse without this define while still having it defined when building? Currently it looks like IntelliSense picks up the same defines as the ones used in the vcxproj.
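The wrapper in question looks roughly like this; the module name and header are hypothetical placeholders, and MODULE_ENABLED is the project-defined switch mentioned above:

#ifdef MODULE_ENABLED
import renderer.core;        // hypothetical module, compiled only when the experimental module flags are passed to CL
#else
#include "renderer_core.h"   // header fallback for compilers (and parsers) without module support
#endif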
  5. Unfortunately the feature seems very buggy; I'm getting a lot of compiler crashes.
  6. I don't know how your shader works, but if every normal points in the same direction then the edge illumination looks wrong when lit by a point light. The amount of light depends on the normal and the depth. Since the normal is constant and the depth doesn't vary by a wide margin, you should get a halo similar to the one you get when the light is in the middle of the surface, not this penumbra. Another argument is that since depth is continuous and the diffuse term is too, you should get continuous illumination, but here there's a visible boundary.
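Restating the argument as code, under the assumption of a simple clamped Lambert diffuse term: with a fixed normal N, the term varies only through the light direction L, which changes smoothly across the surface, so it cannot produce a sharp boundary on its own.

#include <algorithm>

struct Vec3 { float x, y, z; };

// Clamped Lambert diffuse factor; N and L are assumed normalized.
// Continuous in L, hence continuous across a surface with a constant normal.
float diffuse(Vec3 N, Vec3 L)
{
    float ndotl = N.x * L.x + N.y * L.y + N.z * L.z;
    return std::max(ndotl, 0.0f);
}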
  7. I would recommend using for_each until you get the habit of looking at <algorithm> before implementing your own loops. There are some very nice functions like rotate and stable_partition that go almost unnoticed. Of course there is likely no difference in performance between a range-based for and for_each, and there is a big temptation to roll your own version of replace_if or fill since they are trivial, but you'll probably gain a lot if you practice some functional programming. In addition, parallel versions of for_each may be added to C++17, meaning that you would just have to change the function name in your calls if you think they can benefit from it.
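A few of the algorithms mentioned above, used on a plain vector, just as an illustration:

#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> v = { 3, 1, 4, 1, 5, 9, 2, 6 };

    std::for_each(v.begin(), v.end(), [](int& x) { x *= 2; });               // double every element
    std::replace_if(v.begin(), v.end(), [](int x) { return x > 10; }, 0);    // zero out large values
    std::stable_partition(v.begin(), v.end(), [](int x) { return x != 0; }); // non-zeros first, relative order kept
    std::rotate(v.begin(), v.begin() + 2, v.end());                          // shift left by two positions
    return 0;
}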
  8. I don't understand your first question. A command buffer can use several render passes. You can use a single command buffer to draw a whole frame, which involves multiple passes (because of shadow maps, post-processing on a quarter-size framebuffer, ...). AFAIK a render pass can't span several command buffers. A command buffer's state is undefined when it is "started"; bindings are not carried over. One exception though: you can execute separate secondary command buffers inside a subpass. You need to inform the command buffer begin command about this, however.
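A sketch of that exception, assuming the render pass, framebuffer and command buffers already exist:

#include <vulkan/vulkan.h>

// Record a secondary command buffer that is allowed to execute inside a subpass.
void record_secondary(VkCommandBuffer secondary, VkRenderPass renderPass,
                      VkFramebuffer framebuffer, uint32_t subpass)
{
    VkCommandBufferInheritanceInfo inheritance = {};
    inheritance.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
    inheritance.renderPass = renderPass;   // the pass this buffer will run inside
    inheritance.subpass = subpass;
    inheritance.framebuffer = framebuffer; // may be VK_NULL_HANDLE if not known yet

    VkCommandBufferBeginInfo beginInfo = {};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT; // "inform the begin command"
    beginInfo.pInheritanceInfo = &inheritance;

    vkBeginCommandBuffer(secondary, &beginInfo);
    // ... record draw calls here ...
    vkEndCommandBuffer(secondary);
}

// In the primary command buffer, the subpass must then be begun with
// VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS and the secondary buffer
// submitted with vkCmdExecuteCommands(primary, 1, &secondary);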
  9. In a C++ project I'm hosting a .NET CLR and loading an assembly with a HelloWorld function, using the following code (compiled as 64-bit):

struct _FooInterface : IUnknown
{
    virtual HRESULT __stdcall HelloWorld(LPWSTR name, LPWSTR* result, void* ptr) = 0;
};

extern "C" { void __cdecl test_func(int); }

void main()
{
    Microsoft::WRL::ComPtr<ICLRMetaHost> pMetaHost;
    Microsoft::WRL::ComPtr<ICLRRuntimeInfo> pRuntimeInfo;
    Microsoft::WRL::ComPtr<ICLRRuntimeHost> pRuntimeHost; // declaration missing from the original paste
    HRESULT hr;
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    hr = CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(pMetaHost.GetAddressOf()));
    hr = pMetaHost->GetRuntime(L"v4.0.30319", IID_PPV_ARGS(pRuntimeInfo.GetAddressOf()));
    hr = pRuntimeInfo->GetInterface(CLSID_CLRRuntimeHost, IID_PPV_ARGS(pRuntimeHost.GetAddressOf()));

    SampleHostControl* hostControl = new SampleHostControl();
    hr = pRuntimeHost->SetHostControl((IHostControl*)hostControl);
    ICLRControl* pCLRControl = nullptr;
    hr = pRuntimeHost->GetCLRControl(&pCLRControl);

    LPCWSTR assemblyName = L"mesh_managed";
    LPCWSTR appDomainManagerTypename = L"mesh_managed.CustomAppDomainManager";
    hr = pCLRControl->SetAppDomainManagerType(assemblyName, appDomainManagerTypename);
    hr = pRuntimeHost->Start();

    LPWSTR text;
    _FooInterface* appDomainManager = hostControl->GetFooInterface();
    hr = appDomainManager->HelloWorld(L"Player One", &text, &test_func);
    hr = pRuntimeHost->Stop();
}

For reference, the code of the host control class is (taken from a blog post):

class SampleHostControl : IHostControl
{
public:
    SampleHostControl()
    {
        m_refCount = 0;
        m_defaultDomainManager = NULL;
    }

    virtual ~SampleHostControl()
    {
        if (m_defaultDomainManager != NULL)
        {
            m_defaultDomainManager->Release();
        }
    }

    HRESULT __stdcall GetHostManager(REFIID id, void** ppHostManager)
    {
        *ppHostManager = NULL;
        return E_NOINTERFACE;
    }

    HRESULT __stdcall SetAppDomainManager(DWORD dwAppDomainID, IUnknown* pUnkAppDomainManager)
    {
        HRESULT hr = S_OK;
        hr = pUnkAppDomainManager->QueryInterface(__uuidof(_FooInterface), (PVOID*)&m_defaultDomainManager);
        return hr;
    }

    _FooInterface* GetFooInterface()
    {
        if (m_defaultDomainManager)
        {
            m_defaultDomainManager->AddRef();
        }
        return m_defaultDomainManager;
    }

    HRESULT __stdcall QueryInterface(const IID& iid, void** ppv)
    {
        if (!ppv) return E_POINTER;
        *ppv = this;
        AddRef();
        return S_OK;
    }

    ULONG __stdcall AddRef()
    {
        return InterlockedIncrement(&m_refCount);
    }

    ULONG __stdcall Release()
    {
        if (InterlockedDecrement(&m_refCount) == 0)
        {
            delete this;
            return 0;
        }
        return m_refCount;
    }

private:
    long m_refCount;
    _FooInterface* m_defaultDomainManager;
};

In my C# code I declare the following interface, which mirrors the _FooInterface one:

[ComImport, Guid("A15DDC0D-53EF-4776-8DA2-E87399C6654D"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface Interface1
{
    [return: MarshalAs(UnmanagedType.LPWStr)]
    string HelloWorld([MarshalAs(UnmanagedType.LPWStr)] string name, IntPtr test_function);
}

Then I implement the AppDomain manager and the HelloWorld function as:

namespace mesh_managed
{
    public sealed class CustomAppDomainManager : AppDomainManager, Interface1
    {
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        unsafe public delegate void test_function_ptr(int i);

        public CustomAppDomainManager() { }

        public override void InitializeNewDomain(AppDomainSetup appDomainInfo)
        {
            this.InitializationFlags = AppDomainManagerInitializationOptions.RegisterWithHost;
        }

        [STAThread]
        public string HelloWorld(string name, IntPtr ptr)
        {
            var test_function = Marshal.GetDelegateForFunctionPointer<test_function_ptr>(ptr);
            test_function(0); // it's just a test function
            return "Hello " + name;
        }
    }
}

Unfortunately the test_function(0) call doesn't work here in the assembly: an exception "An unhandled exception of type 'System.AccessViolationException' occurred in mesh_managed.dll" is thrown. If I remove the test_function(0) call, everything seems to work as expected and the string is correctly returned on the C++ side. What am I doing wrong? I'd like to expose an API to the CLR so that I can manipulate data with C#. There is no DLL so I can't use the DllImport feature, and I prefer to have C++ embed C# rather than the opposite. Regards, Vincent.
  10. what good are cores?

    By the way, what is a job system? Looking at the various links provided, it seems very close to the "task" construct in OpenMP 3 (unfortunately not supported by MSVC) or in the upcoming C++17 standard. On the other hand, a task can create subtasks that can be picked up by other threads depending on the scheduler, while it looks like all jobs are generated by the main thread while the other threads pick up jobs.
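As a point of reference for the comparison above, here is a minimal sketch of the producer/consumer shape being described: the main thread pushes jobs into a shared queue and worker threads pick them up. Real job systems add dependencies, work stealing and lock-free queues; this is only an illustration, not any particular engine's implementation.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobSystem
{
public:
    explicit JobSystem(unsigned worker_count)
    {
        for (unsigned i = 0; i < worker_count; ++i)
            workers.emplace_back([this] { worker_loop(); });
    }

    ~JobSystem()
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            stopping = true;
        }
        wake.notify_all();
        for (auto& w : workers) w.join();
    }

    // Called by the main thread (or any producer) to enqueue work.
    void push(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            jobs.push(std::move(job));
        }
        wake.notify_one();
    }

private:
    void worker_loop()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex);
                wake.wait(lock, [this] { return stopping || !jobs.empty(); });
                if (stopping && jobs.empty()) return; // drain remaining jobs before exiting
                job = std::move(jobs.front());
                jobs.pop();
            }
            job(); // run outside the lock
        }
    }

    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex mutex;
    std::condition_variable wake;
    bool stopping = false;
};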
  11. Vulkan and uwp?

    I went the other way: I threw out the UWP code and wrapped my code in a C# app. It works as well as a nested HWND window can in a XAML environment, i.e. with the airspace problem.
  12. I'm writing a small framework based on DX12 and Vulkan to ease some low-level details when testing rendering techniques. Since I don't want to spend a lot of time on UI, I'm embedding the DX12 code in a UWP app with a XAML UI. I use a SwapChainPanel, which allows compositing a D3D12 swapchain surface with XAML controls. Since SwapChainPanel only exposes a DXGI surface, is there a (likely hackish) way to wrap a DXGI swap chain in a way that could be used by the Vulkan present extension? I'm guessing it's rather unknown territory here, but it's possible to share D3D surfaces with OpenCL and there is the whole D3D11on12 infrastructure (which again only exposes COM objects).
  13. Atomic Add for Float Type

    It's always possible to access any data in an atomic manner using a compare-and-swap in a loop, something like: do { floatvalue = oldvalue + ... } while (!compareswap(oldvalue, floatvalue)); It will loop as long as the atomic data is different from oldvalue. If the atomic is equal to oldvalue it will atomically swap it with floatvalue and exit the loop. I'm not sure this pattern is efficient. On the other hand, from my experience AMD atomics are way faster than NVIDIA's, and that's maybe why they provide float atomics: to keep you from resorting to the slower pattern.
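Written out in C++ with std::atomic, the loop looks like the following; the same pattern maps onto atomicCompSwap in GLSL or InterlockedCompareExchange in HLSL, reinterpreting the float bits as an integer there.

#include <atomic>

// Atomic add on a float built from a compare-and-swap loop.
// Returns the value observed before the addition.
float atomic_add(std::atomic<float>& target, float value)
{
    float old_value = target.load();
    float new_value;
    do {
        new_value = old_value + value;
        // On failure, compare_exchange_weak reloads old_value with the
        // freshly observed contents of target, and the loop retries.
    } while (!target.compare_exchange_weak(old_value, new_value));
    return old_value;
}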
  14. I would rather stay away from C++/CLI since it looks like it's being deprecated by MS and it's not portable. While the CLR is currently Windows-only, there are Mono and .NET Core for Linux and other platforms.
  15. In C++ the order of the members in a class is the order in which they are initialized. It's also the reverse of the order in which they're destroyed. The issue is that a struct layout may have a semantic of its own: the order of members matters for language interop and for alignment (although that can be mitigated with alignas), and the layout has implications for how cache-friendly the structure is. I wonder if there is a way to decouple initialization order from structure layout, and whether it has been considered by the C++ committee.
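A tiny illustration of that coupling, with hypothetical names:

struct Widget
{
    int a;
    int b;

    // b is listed first in the initializer list, but a is still initialized
    // first because it is declared first: layout order and init order are tied.
    // Here a reads b before b is initialized, which is undefined behavior;
    // most compilers warn about the reordering (-Wreorder).
    Widget(int x) : b(x), a(b + 1) {}
};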
  16. I'm trying to use the CLR as a script engine in my C++ code. I'm hosting a CLR instance, which works as expected. However, I'd like to expose native functions to the CLR so that script code can have an interface to my C++ data. The issue with P/Invoke is that there is no DLL; my application is a standalone executable. As far as I know P/Invoke only works on functions exported from a DLL (Mono seems to have a DllImport("__Internal") which is able to resolve symbols in the process address space, but I didn't test it). I could split my app into an exe and a DLL, but wouldn't that mean the DLL gets loaded twice?
  17. I'm implementing the Scalable Ambient Obscurance algorithm in Vulkan; at some point the algorithm generates a mip chain from a linearized depth texture. I use a render pass to generate the linear depth texture and to do the actual SAO computation. Then I use 2 compute shaders for the bilateral filtering, since I can use shared memory to lessen the memory bandwidth pressure. I wonder what the best way to generate the mip chain in Vulkan is; this step occurs between the depth linearization step and the SAO computation. I could use a compute shader for every mipmap level; however, interleaving compute between graphics shaders is not recommended, since on some GPUs (GeForce?) it triggers a unit reconfiguration and a cache flush. Since there is no advantage to using shared memory here, the penalty can't be counterbalanced as it is for the bilateral passes. I could use a compute shader for the SAO computation too, but I rely on the dFdx function, which is only available in fragment shaders. I could use render passes as well, but I'm not sure how that would map to hardware: since mip levels have different sizes it means using as many render passes as there are mip levels, and I fear this may be suboptimal and that switching render passes may add some overhead.
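For reference, if the per-level compute path were used, the recording could look roughly like the sketch below. It is only one possible shape under assumptions not taken from the post: the pipeline, the per-level descriptor sets (level i reads level i-1 and writes level i), the image already in VK_IMAGE_LAYOUT_GENERAL, and an 8x8 workgroup size are all placeholders.

#include <algorithm>
#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// One dispatch per mip level, with a barrier so each level is visible
// before the next dispatch reads it.
void generate_depth_mips(VkCommandBuffer cmd, VkPipelineLayout layout, VkPipeline mipPipeline,
                         const std::vector<VkDescriptorSet>& mipSets, VkImage linearDepth,
                         uint32_t width, uint32_t height, uint32_t mipCount)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, mipPipeline);
    for (uint32_t level = 1; level < mipCount; ++level)
    {
        vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                                0, 1, &mipSets[level], 0, nullptr);
        uint32_t w = std::max(width >> level, 1u);
        uint32_t h = std::max(height >> level, 1u);
        vkCmdDispatch(cmd, (w + 7) / 8, (h + 7) / 8, 1); // assumes an 8x8 workgroup

        VkImageMemoryBarrier barrier = {};
        barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
        barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
        barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
        barrier.oldLayout = VK_IMAGE_LAYOUT_GENERAL;
        barrier.newLayout = VK_IMAGE_LAYOUT_GENERAL;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.image = linearDepth;
        barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, level, 1, 0, 1 };
        vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                             VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, 0,
                             0, nullptr, 0, nullptr, 1, &barrier);
    }
}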
  18. Is there a "natural" order to the areas? For instance, if you are in area 0, is it possible to access any area or only area 1? In the latter case you can keep several areas loaded: the one you're in and the ones immediately accessible from it. When the area in focus changes you can load and unload areas while still being able to render something in the meantime (since the new area in focus was already loaded in memory).
  19. You can use the same heap location for several root parameters if needed, although I think there is no big performance penalty in setting visibility to ALL. Maybe there is a threshold effect.
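A small sketch of the first point, assuming a root signature where parameters 0 and 1 are both descriptor tables (say, one with vertex-shader visibility and one with pixel-shader visibility): both can simply point at the same location in the shader-visible heap.

#include <d3d12.h>

void bind_shared_table(ID3D12GraphicsCommandList* commandList, ID3D12DescriptorHeap* heap)
{
    D3D12_GPU_DESCRIPTOR_HANDLE table = heap->GetGPUDescriptorHandleForHeapStart();
    commandList->SetDescriptorHeaps(1, &heap);
    commandList->SetGraphicsRootDescriptorTable(0, table); // e.g. the VS-visible parameter
    commandList->SetGraphicsRootDescriptorTable(1, table); // same descriptors for the PS-visible one
}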
  20. By the way, what is the actual cost of wrapping an API inside a virtual class? On the one hand, Vulkan and DX12 are designed to make you bake as much state change as possible into bigger sets like pipeline states, descriptor tables or command lists. There are thus fewer API calls, which translates to fewer virtual calls for a close-enough wrapper API. On the other hand, the point of these APIs is to lower CPU overhead as much as possible; several Vulkan functions even take arrays of parameter structures to build multiple API objects within a single call (descriptor writes, pipeline state creation, ...). If the CPU overhead reduction makes it possible to notice a perf difference when removing around 10 function calls per draw call, then I would rather avoid the cost of the vtable indirection if possible.
  21. By the way, how do you store the result of the baking calculation? An object's texture UVs can't be reused, since an object can appear at different locations with different lighting conditions, and some texels can be used by several surfaces. On the other hand, flattening polygons into a 2D plane is tough due to discontinuities at polygon edges, having to optimize texture space while keeping surface sizes proportional, and so on.
  22. I haven't seen newer SSAO algorithms so far. However, it looks like there is some work going into voxel-based solutions, for instance in the latest Tomb Raider.
  23. Vulkan and uwp?

    It's possible to use XAML with C# since it's both a .NET and a WinRT component. However, as far as I know it's not available for Win32 applications. Visual Studio 2015 provides a template for C++ apps that use XAML declarations and a template for C# and WPF. In theory it should be possible to write a C++/CLI app that consumes a XAML UI, but it seems discouraged by Microsoft given the lack of resources and IDE support.
  24. SAO doesn't use normals; it computes them from the linear depth buffer. It looks like you're sampling depth at the same UV every time. You need to sample at the pixel location and at each of the sample points.