
University Dissertation Questionnaire - 3D Environment Art/3D Art for Video Games/VR


Hello all!
 
I'm currently in the third year of my 3D Animation & Games Development course, and I am doing some basic primary research for my dissertation project, which is to create a high-quality 3D environment for use in video games and potentially VR.
 
I have a questionnaire (targeting other artists in the field), and I would really appreciate it if you took some time to have a look and fill it out:
 
 
Thank you in advance!
 
Mike.




  • Similar Content

    • By _RoboCat_
      Hi,
Can anyone point me in a good direction on how to resolve this?
I have a flat mesh made from many quads (1x1 each), each split into 2 triangles (generated procedurally).
What I want to achieve is to "merge" small quads into bigger ones (shown in picture 01). English is not my mother language and my search got no results; maybe I'm just forming the question wrong.
I have an array[][] where I store the "map" information. For now I look for blobs of the same value in it, create one quad for each position, and at the end build a mesh from all of them.
Is there a good algorithm for creating a mesh between random points on the same plane? The fewer triangles the better. Or for "de-tessellating" this into bigger/fewer triangles/quads?
I would also like to find "edges" and create "faces" between the edge points (picture 02 shows what I want to achieve).
No need for the whole code; it would be nice if someone could just point me in a good direction.
      Thanks
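
One common technique for exactly this is "greedy meshing": sweep the value grid, grow each unvisited cell into the largest rectangle of the same value, and emit one quad per rectangle. A minimal C++ sketch (Quad and GreedyMesh are illustrative names, not from any library):

    #include <vector>

    struct Quad { int x, y, w, h, value; };

    std::vector<Quad> GreedyMesh(const std::vector<std::vector<int>>& grid) {
        int rows = (int)grid.size(), cols = (int)grid[0].size();
        std::vector<std::vector<bool>> used(rows, std::vector<bool>(cols, false));
        std::vector<Quad> quads;

        for (int y = 0; y < rows; ++y) {
            for (int x = 0; x < cols; ++x) {
                if (used[y][x]) continue;
                int v = grid[y][x];

                // Grow the rectangle rightward while the value matches.
                int w = 1;
                while (x + w < cols && !used[y][x + w] && grid[y][x + w] == v) ++w;

                // Grow downward while every cell in the next row matches.
                int h = 1;
                while (y + h < rows) {
                    bool rowOk = true;
                    for (int i = 0; i < w; ++i)
                        if (used[y + h][x + i] || grid[y + h][x + i] != v) {
                            rowOk = false;
                            break;
                        }
                    if (!rowOk) break;
                    ++h;
                }

                for (int dy = 0; dy < h; ++dy)       // mark cells as consumed
                    for (int dx = 0; dx < w; ++dx)
                        used[y + dy][x + dx] = true;

                quads.push_back({x, y, w, h, v});    // one quad = two triangles
            }
        }
        return quads;
    }

Each output quad still becomes two triangles, so blobs of equal values collapse into far fewer primitives than one quad per cell. For the edge/face part (picture 02), one option is marching squares over the same grid to extract the boundary loops.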


    • By khawk
      Notes from the session.
      Rahul Prasad, Product Manager on Daydream - Daydream SDK, Platform, VR Mode, Daydream-ready.
      Why is mobile VR development hard?
      Need to guarantee:
- Consistent frame rates: high frame rates are required, 90fps on desktop and at least 60fps on mobile.
- Low motion-to-photon latency: related to framerate, and influenced by other systems on the device.
Desktop VR has plenty of power, plenty of performance, and less worry about temperature constraints.
On mobile you only get about 4W of power (vs. 500W), limited bandwidth, and no fans (passive dissipation only), for a market of mainstream users (vs. hardcore gamers).
      Mobile VR systems are somewhere in the middle between the full hardware control of consoles and the wild west of general mobile and desktop development.
Simplest solutions:
- Build for the lowest common denominator.
- Build for exactly one device, trying to bring the console model to mobile.
GPU Techniques for Mobile VR
- Assume ASTC exists on mobile VR devices: use large block sizes, always use mipmaps, and avoid complex filtering.
- Implement stereo-specific optimizations: use multiview where it exists, and render distant geometry once.
- Avoid submitting multiple layers: this is really expensive on tiled GPUs; compose multiple layers in your eye-buffer render loop prior to ARP.
- Complex pixel shaders are costly: render particles to a lower-resolution buffer, and use medium precision where possible.
- Avoid large monolithic meshes: use compact, efficient chunks and front-to-back rendering (rely on engines).
- Prefer forward rendering algorithms.
- Spread work over multiple CPU threads: more threads running slowly consume less power than a few threads running quickly.
- Use MSAA: at least 2x/4x when possible, use GL_EXT_multisampled_render_to_texture, and discard MSAA buffers prior to flushing.
- Buffer management: avoid mid-frame flushes, invalidate or clear entire surfaces before rendering, discard/invalidate buffers not required for later rendering, and use a single clear for double-wide eye buffers. (A sketch of the last two points follows this list.)
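
A minimal GLES sketch of the MSAA and buffer-management advice above, assuming an ES 3.0 context where GL_EXT_multisampled_render_to_texture is present (check the extension string at runtime; the fbo/eyeTexture/depthRb setup is illustrative):

    #include <GLES3/gl3.h>
    #include <GLES2/gl2ext.h>

    void RenderEye(GLuint fbo, GLuint eyeTexture, GLuint depthRb,
                   int width, int height) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        // Resolve-on-write: tile memory holds the 4x samples; only the
        // resolved single-sample result is ever written to the texture.
        glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                             GL_TEXTURE_2D, eyeTexture, 0, 4);

        // Multisampled depth lives in a renderbuffer that never needs to
        // leave the chip.
        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 4,
                                            GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthRb);

        // Clear the entire surface up front so the tiler never loads stale
        // contents from memory (avoids a mid-frame flush).
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // ... draw the scene for this eye ...

        // Depth was only needed on-chip; invalidating it before flushing
        // means it is never written back to system memory.
        const GLenum discard[] = { GL_DEPTH_ATTACHMENT };
        glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, discard);
    }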
Type of Content Affects Your Approach
Example: YouTube VR has very different memory, modem, GPU, and CPU patterns than a high-performance game. Session times vary, and latency tolerances differ.
      Allocate resources appropriate for the content.
      Thermal capacity is your "primary currency". Make tradeoffs based on type of app and thermal budget.
Typical session times: games, 20-45 minutes; video, 1-3 hours; text, several hours of use is important.
- Games: high GPU, medium CPU, high bandwidth, low resolution, low modem.
- Video: low GPU, medium-to-high CPU, high bandwidth, medium-to-high resolution, high modem if streaming.
- Text: low GPU, low CPU, high bandwidth, high resolution, low modem.
Bandwidth is high across all use cases.
Thermal management is about tradeoffs:
- session time vs. graphics
- spatial audio vs. graphics
- streaming vs. graphics
- 4K decode vs. graphics
Dynamic performance scaling to manage resources:
- Render target: scale with display resolution and content type.
- Trade resolution for antialiasing: 2x/4x MSAA; consider adjusting it dynamically.
- Use the modem judiciously: don't light up all available bandwidth; avoid streaming if possible.
- Adjust framerate dynamically: don't let the framerate float; snap to full rate or half rate (a sketch follows this list); engines may help.
- If CPU limited: lower the number of spatial audio objects, simplify the physics simulation.
- Boost clock rates sparingly.
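
A tiny sketch of the "snap to full or half rate" idea; the smoothing constant and the 90% threshold are illustrative choices, and most engines expose an equivalent setting:

    // Snap to full or half refresh rate instead of letting framerate float.
    struct FramePacer {
        double displayHz;            // e.g. 60.0 on most mobile VR displays
        double avgFrameMs = 0.0;     // smoothed CPU+GPU frame time

        void AddSample(double frameMs) {
            avgFrameMs = 0.9 * avgFrameMs + 0.1 * frameMs;  // simple EMA
        }

        // Target rate the app should commit to for upcoming frames: a steady
        // half rate judders less than oscillating between, say, 45 and 60fps.
        double TargetRate() const {
            double fullBudgetMs = 1000.0 / displayHz;
            return (avgFrameMs <= fullBudgetMs * 0.9) ? displayHz
                                                      : displayHz / 2.0;
        }
    };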
Technical Case Study - VR profiling with Systrace
Systrace comes with Android Studio and is a tool for profiling Android devices. (Editor's note: the session walked through a case study of using Systrace to understand performance.)
       
    • By Karol Plewa
      Hi, 
       
I am working on a project where I'm trying to use Forward Plus rendering for point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with my compute shader code: I cannot get it to calculate the tiles and the lighting properly.
       
Is anyone willing to help me set up my compute shader?
      Thank you in advance for any replies and interest!
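
For reference, a minimal CPU-side sketch of the per-tile light culling a Forward+ compute shader performs (the Light/Plane types and the planes-only test are illustrative; a real shader would also use the tile's min/max depth):

    #include <vector>

    struct Light { float x, y, z, radius; };   // view-space position + range
    struct Plane { float nx, ny, nz, d; };     // plane equation n.p + d = 0

    static float Distance(const Plane& p, const Light& l) {
        return p.nx * l.x + p.ny * l.y + p.nz * l.z + p.d;
    }

    // Keep every light whose bounding sphere touches the tile's frustum:
    // a sphere is rejected only if it lies fully behind one of the four
    // side planes (inward-facing normals; depth planes omitted for brevity).
    std::vector<int> CullLightsForTile(const Plane tilePlanes[4],
                                       const std::vector<Light>& lights) {
        std::vector<int> visible;
        for (int i = 0; i < (int)lights.size(); ++i) {
            bool inside = true;
            for (int p = 0; p < 4; ++p) {
                if (Distance(tilePlanes[p], lights[i]) < -lights[i].radius) {
                    inside = false;
                    break;
                }
            }
            if (inside) visible.push_back(i);
        }
        return visible;
    }

In the compute shader, one thread group per tile runs this loop in parallel and appends the surviving light indices to a per-tile list via an atomic counter.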
    • By PhillipHamlyn
      Hi
      I have a procedurally generated tiled landscape, and want to apply 'regional' information to the tiles at runtime; so Forests, Roads - pretty much anything that could be defined as a 'region'. Up until now I've done this by creating a mesh defining the 'region' on the CPU and interrogating that mesh during the landscape tile generation; I then add regional information to the landscape tile via a series of Vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersect into the 'region' mesh and get some value from that mesh.

For example, my landscape vertex could be:

    struct Vtx
    {
        Vector3 Position;
        bool IsForest;
        bool IsRoad;
        bool IsRiver;
    }

I would then have one region mesh defining a forest, another defining rivers, etc. When generating my landscape vertices I do an intersect check against the various 'region' meshes to see what kind of landscape each vertex falls within.

My ray-mesh intersect code isn't particularly fast, and there may be many 'region' meshes to interrogate, so I want to see if I can move this work onto the GPU: when I create a set of tile vertices, I would call a compute (or other) shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer where all the landscape vertex boolean values have been filled in.

The way I see this being done is to pass two RWStructuredBuffers to a compute shader, one containing the landscape vertices and the other containing some definition of the region mesh (possibly the region might consist of two buffers, one with positions and one with indexes). The compute shader would do a ray-mesh intersect check for each landscape vertex and set the boolean flags on a corresponding output buffer.

In theory this is a parallelisable operation (no landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersect being done in a compute shader, so I'm wondering if my approach is wrong and the reason I've not seen any examples is that no one does it that way. If anyone can comment on the following, I'd appreciate it:
- Is this a really bad idea?
- If no one does it that way, does everyone use a texture to define this kind of 'region' information? If so, given I've only got a small number of possible region types, what texture format would be appropriate, as 32 bits seems really wasteful?
- Is there a common alternative approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles?
Thanks
      Phillip
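
For reference, the per-vertex test described above is embarrassingly parallel, and the standard Moller-Trumbore ray-triangle intersection ports directly to a compute shader (one thread per landscape vertex). A CPU sketch with illustrative types:

    struct Vec3 { float x, y, z; };

    static Vec3  Sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  Cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                 a.z * b.x - a.x * b.z,
                                                 a.x * b.y - a.y * b.x}; }
    static float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Moller-Trumbore: does the ray (origin, dir) hit triangle (v0, v1, v2)?
    bool RayHitsTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
        const float kEps = 1e-7f;
        Vec3 e1 = Sub(v1, v0), e2 = Sub(v2, v0);
        Vec3 h = Cross(dir, e2);
        float a = Dot(e1, h);
        if (a > -kEps && a < kEps) return false;   // ray parallel to triangle
        float f = 1.0f / a;
        Vec3 s = Sub(origin, v0);
        float u = f * Dot(s, h);
        if (u < 0.0f || u > 1.0f) return false;    // outside barycentric range
        Vec3 q = Cross(s, e1);
        float v = f * Dot(dir, q);
        if (v < 0.0f || u + v > 1.0f) return false;
        float t = f * Dot(e2, q);
        return t > kEps;                           // hit in front of origin
    }

Casting a vertical ray from each landscape vertex and counting crossings against a region mesh (odd count = inside) would fill in flags like IsForest; the same loop, written in HLSL over the two structured buffers, is the compute-shader version.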
    • By GytisDev
      Hello,
without going into any details, I am looking for articles, blogs, or advice about city-building and RTS games in general. I tried to search for these on my own, but I would also like to see your input. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms, and cut trees for resources while controlling a single worker. I have some trouble understanding how these games work in the back-end: how data about the map and objects can be stored, how grids work, how to implement a work system (e.g. a little cube (human) walks to a tree and cuts it), and so on. I am also pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes; English is not my native language.
      Thank you in advance.
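
For reference, a common back-end shape for this kind of game is a flat tile grid plus a tiny per-worker job state machine. A minimal sketch (all names are illustrative, not from any engine):

    #include <vector>

    enum class TileType { Grass, Tree, Farm, Building };

    struct Tile { TileType type = TileType::Grass; int resources = 0; };

    struct Map {
        int width, height;
        std::vector<Tile> tiles;                 // row-major flat storage
        Map(int w, int h) : width(w), height(h), tiles(w * h) {}
        Tile& At(int x, int y) { return tiles[y * width + x]; }
    };

    enum class JobState { Idle, Walking, Chopping };

    struct Worker {
        int x = 0, y = 0;                        // current grid cell
        int targetX = 0, targetY = 0;            // e.g. the chosen tree
        JobState state = JobState::Idle;

        void Update(Map& map) {
            switch (state) {
            case JobState::Walking:
                // Step one cell toward the target; real games swap in A*.
                if (x != targetX)      x += (targetX > x) ? 1 : -1;
                else if (y != targetY) y += (targetY > y) ? 1 : -1;
                else state = JobState::Chopping;
                break;
            case JobState::Chopping:
                if (--map.At(x, y).resources <= 0) {   // tree used up
                    map.At(x, y).type = TileType::Grass;
                    state = JobState::Idle;            // ready for a new job
                }
                break;
            case JobState::Idle:
                break;
            }
        }
    };

Each simulation tick calls Update on every worker; buildings, farms, and resource counts are just more data on the tiles, which keeps saving and loading simple (serialise the tile array).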