By Yosef BenSadon
Hi, I am considering this startup, http://adshir.com/, for investment, and I would like a little bit of feedback on what the developer community thinks about the technology.
So far what they have is a demo that runs in real time on a tablet at over 60 FPS, running locally on the integrated GPU of an i7. They have a 20,000-triangle dinosaur that looks impressive, better than anything I have seen on a mobile device, with reflections and shadows that look very close to how they would look in the real world. They achieved this thanks to a new algorithm for a rendering technique called path tracing/ray tracing, which is very demanding and so far has mostly been used for static images.
From what I have checked, there is no real option for real-time ray tracing (60 FPS on consumer devices). Imagination Technologies was supposed to release a chip that supports real-time ray tracing, but I could not find a product of theirs on the market, or even confirmation that the technology is finished, as the last demo of theirs I found ran on a PC. The other one is OTOY with their Brigade engine, which is still not released and, if I understand correctly, is more of a cloud solution than a hardware solution.
Would there be sizable interest in the developer community in having such a product as a plug-in for existing game engines? How important is ray tracing to the future of high-end real-time graphics?
Notes from the session.
Rahul Prasad, Product Manager on Daydream - Daydream SDK, Platform, VR Mode, Daydream-ready.
Why is mobile VR development hard?
Need to guarantee:
- Consistent frame rates required
- High frame rates: 90fps on desktop, at least 60fps on mobile
- Low motion-to-photon latency: related to framerate, influenced by other systems on the device

If we look at desktop VR, it has plenty of power, plenty of performance, and less worry over temperature constraints.
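The frame-rate requirements above imply hard per-frame time budgets; as a quick illustration (a minimal sketch in plain Python, with the numbers derived from the targets in the notes):

```python
def frame_budget_ms(fps: float) -> float:
    """Per-frame time budget in milliseconds for a target frame rate."""
    return 1000.0 / fps

# At the 90 fps desktop target, the whole frame -- input, simulation,
# rendering, compositing -- must fit in ~11.1 ms; at the 60 fps mobile
# minimum, in ~16.7 ms.
print(round(frame_budget_ms(90), 1))  # 11.1
print(round(frame_budget_ms(60), 1))  # 16.7
```

Any system that eats into that budget (audio, physics, OS work on the same cores) directly threatens the latency guarantee.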
In mobile, you only get about 4W of power (vs. 500W), limited bandwidth, no fans (passive dissipation only), and a market of mainstream users (vs. hardcore gamers).
Mobile VR systems are somewhere in the middle between the full hardware control of consoles and the wild west of general mobile and desktop development.
- Build for the lowest common denominator
- Build for exactly one device; try to bring the console model to mobile
GPU Techniques for Mobile VR
- Assume ASTC exists on mobile VR devices: use a large block size, always use mipmaps, and avoid complex filtering
- Implement stereo-specific optimizations: multiview when it exists, render distant geometry once
- Avoid submitting multiple layers: really expensive on tiled GPUs; compose multiple layers in your eyebuffer render loop prior to ARP
- Complex pixel shaders are costly: render particles to a lower-resolution buffer, use medium precision when possible
- Avoid large monolithic meshes: use compact, efficient chunks and front-to-back rendering (rely on engines)
- Prefer forward rendering algorithms
- Spread work over multiple CPU threads: more threads running slowly consume less power than few threads running quickly
- Use MSAA: at least 2x/4x when possible, use GL_EXT_multisampled_render_to_texture, discard MSAA buffers prior to flushing
- Buffer management: avoid mid-frame flushes, invalidate or clear entire surfaces before rendering, discard/invalidate buffers not required for later rendering, single clear for double-wide eye buffers

Type of Content Affects Your Approach
Example: YouTube VR has very different memory, modem, GPU, and CPU patterns than a high-performance game. Session times vary, and latency tolerances are different.
Allocate resources appropriate for the content.
Thermal capacity is your "primary currency". Make tradeoffs based on type of app and thermal budget.
Typical session times: games, 20-45 minutes; video, 1-3 hours; text, several hours of use.
- Games: high GPU, medium CPU, high bandwidth, low resolution, low modem
- Video: low GPU, medium to high CPU, high bandwidth, medium to high resolution, high modem if streaming
- Text: low GPU, low CPU, high bandwidth, high resolution, low modem

Bandwidth is high across all use cases.
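The per-content resource profiles above can be captured as a simple lookup table; a minimal sketch in plain Python, with the ratings taken verbatim from the notes:

```python
# Resource demand profile per content type, as listed in the session notes.
PROFILES = {
    "game":  {"gpu": "high", "cpu": "medium", "bandwidth": "high",
              "resolution": "low", "modem": "low"},
    "video": {"gpu": "low", "cpu": "medium-high", "bandwidth": "high",
              "resolution": "medium-high", "modem": "high (if streaming)"},
    "text":  {"gpu": "low", "cpu": "low", "bandwidth": "high",
              "resolution": "high", "modem": "low"},
}

# Bandwidth is the one axis that is high for every content type.
print(all(p["bandwidth"] == "high" for p in PROFILES.values()))  # True
```

A real app would key its resource allocation (render target size, modem usage, clock policy) off a profile like this rather than hard-coding one set of settings.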
Thermal management is about tradeoffs:
- session time vs. graphics
- spatial audio vs. graphics
- streaming vs. graphics
- 4K decode vs. graphics

Dynamic performance scaling to manage resources:
- Render target: scale with display resolution and content type
- Trade resolution for antialiasing: 2x/4x MSAA; consider adjusting dynamically
- Use the modem judiciously: don't light up all available bandwidth; avoid streaming if possible
- Adjust framerate dynamically: don't let the framerate float; snap to full rate or half rate (engines may help)
- If CPU limited: lower spatial audio objects, simplify physics simulation
- Boost clock rates sparingly

Technical Case Study - VR profiling with Systrace
Systrace comes with Android Studio and is a tool for profiling mobile Android devices. (Editor note: the session walked through a case study of using Systrace to understand performance.)
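One item from the dynamic-scaling list above, snapping the framerate to full or half rate instead of letting it float, can be sketched as follows (plain Python; `snap_framerate` is a hypothetical helper, and real engines and compositors implement their own policies):

```python
def snap_framerate(measured_fps: float, display_hz: float = 60.0) -> float:
    """Snap a floating frame rate to full or half display rate.

    Letting the frame rate drift (e.g. 52 fps on a 60 Hz display)
    causes judder; the notes suggest running at the display rate if it
    can be sustained, otherwise dropping to exactly half rate.
    """
    return display_hz if measured_fps >= display_hz else display_hz / 2.0

print(snap_framerate(58.0))        # 30.0 -- can't sustain 60, so half rate
print(snap_framerate(60.0))        # 60.0
print(snap_framerate(80.0, 90.0))  # 45.0 -- half rate on a 90 Hz display
```

A production scheduler would add hysteresis so the rate does not flip back and forth every frame; this sketch only shows the snapping rule itself.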
Is it possible to asynchronously create a Texture2D using DirectX11?
I have a native Unity plugin that downloads 8K textures from a server and displays them to the user for a VR application. This works well, but there's a large frame drop when calling CreateTexture2D. To remedy this, I've tried creating a separate thread that creates the texture, but the frame drop is still present.
Is there anything else that I could do to prevent that frame drop from occurring?
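For reference, the separate-thread approach described above can be sketched as follows (plain Python standing in for the native plugin; `create_texture` is a hypothetical stand-in for the expensive CreateTexture2D call). Note that in D3D11 the device's Create* methods are free-threaded, while the immediate context is not, so resource creation itself can legally happen off the render thread:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def create_texture(width: int, height: int) -> dict:
    """Hypothetical stand-in for the expensive CreateTexture2D call."""
    time.sleep(0.05)  # simulate the stall observed on the render thread
    return {"width": width, "height": height}

pool = ThreadPoolExecutor(max_workers=1)

# Kick off creation from the render loop without blocking it...
future = pool.submit(create_texture, 8192, 8192)

# ...keep rendering frames, and only pick the texture up once it's ready.
while not future.done():
    pass  # render_frame() would go here

texture = future.result()
print(texture["width"])  # 8192
```

If the drop persists even with this pattern, the stall is likely not the create call itself but the upload of the 8K initial data, which competes with rendering for bandwidth regardless of which thread issues it.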
Heroes & Legend is an epic fantasy role-playing game that I believe will set new standards in role playing. It features a rich audio score of over 50 soundtracks. Currently the game has four on-board developers: designers, programmers, and music composers. I am taking on eight more people who are dedicated. You may view six of our audio scores in our first audio enticement video here:
Gamedev - Project:
Gamedev - Blog
Company Website - Under Construction
We are looking for artists familiar with iClone7, iClone Character Creator 2, Blender, PBR, and the 3D Exchange Pipeline. We also need some entry-level to proficient Unreal 4 developers who can help get World Max - I, our premier procedural world generator, onto the Unreal 4 marketplace. This procedural world generator uses a 2D vector database for SRTM and Natural Earth database importing, creating large worlds smaller or larger than Earth with latitude, longitude, bathymetry, and topographical overlays. I am a 25+ year software engineer in charge of the company and would be considered a 10+ year software engineer.
Also, I am taking on a couple of positions for Java developers to create our company's premier Content Management System, complete with a Web Hosting Module, a Project Management Module, and a Subversion Module, running on a Linux system with a LAMP + T setup and a Subversion repository; candidates should also be able to manage network administration. Assets are synced via OneDrive, and projects are maintained in private GitHub repositories.
We will train applicants, so don't be a sissy; try me, and let's see if you have what it takes to be a partner.
All potential partners must sign a company NDA (Non-Disclosure Agreement) and a Business Agreement via DocuSign. The profit share on all products is 0.025% of the 10% of quarterly net budgeted for payment. If you think you have what it takes, then I look forward to hearing from you.
Reply here and I will be in touch! Thanks