Design is the only thing holding mobile AR back

8thWallDev

Written by Erik Murphy-Chutorian, Founder and CEO of 8th Wall. This story was previously published on The Next Web.

This year will be the year that we see the first applications designed for an AR-powered, mobile camera-first world. I’ve heard many discussions about how to accelerate AR development and adoption, but is the cloud really everyone’s panacea? While I believe cloud services will be essential to AR’s success and will play a big role in its evolution, contrary to popular belief, the cloud isn’t the gating factor. In my view, the roadblocks to AR adoption are design, reach, and teaching people to rely on what AR can do.

Let’s take a step back and look at how interfaces have evolved. Computers have gained the ability to see and hear, and these virtual senses will usher in a new era of natural user interfaces. Touchscreens will soon be on their way out, in the same way that the keyboard and mouse are now. I see the writing on the wall.

We snap photos instead of texting, ask Alexa for our news and weather, and shop online by seeing how products fit in our homes. As environmental understanding becomes more sophisticated on our phones, I believe user interfaces will start to interact with that environment. However, the cell phone is an accidental user interface for AR, and as such, we have to bridge the form factor and experience with something familiar to how people already use their phones.

Not everyone will use the term ‘AR’ to describe what is going on, but I trust that 2018 will be the year consumers experience AR, and it’s going to happen on their mobile phones. The best part of mobile AR? There’s nothing to strap on your face — just hold up a phone and open an app. Before we give up our touchscreens, much more will need to happen, and, contrary to previous discussion in the industry, a lack of specialized cloud services is not what’s holding back the transition to these camera-centric AR apps.

The big hurdle to overcome is getting acclimated to designing with the perspective of AR as the first medium, as opposed to a secondary or add-on channel or feature. AR-first design will be key to creating successful everyday apps for new, natural types of user interaction.

How did we get to mobile AR?

The hype around mobile AR began last September when Apple launched ARKit, now one of a handful of new software libraries that let mobile developers add augmented reality features to standard phone apps. These libraries offer virtual sensors that provide information about the environment and about precisely how a phone is moving through it.

For mobile developers, it means an opportunity to be first to design and build new intuitive user experiences that can disrupt how we interact with our phones. In the same way that desktop websites were redesigned for a mobile-first world, we will soon see camera-enabled physical interactions become the norm for many types of apps, including ecommerce, communication, enterprise, and gaming.

Where are the killer apps?

People use their phones for email, news, communication, entertainment, shopping, navigation, gaming, and photography. Mobile AR isn’t going to change that. More likely, many of the killer AR apps will be the very same apps we already use today, after they have been redesigned for AR. Companies that are slow to embrace this technology will be ripe for disruption. It’s happening already.

Snapchat was first into the AR space and redefined how a younger generation communicates. Facebook and Google followed suit, and now Amazon, IKEA, Wayfair, and others are dipping their toes into the pool of AR. Niantic recently acquired an AR start-up too; can we hope to see the physical world merge with the Wizarding World? Which startups will innovate where the incumbents are slow to change? Will 2018 bring us a successor to maps, email, or photos?

The AR Cloud is not the missing piece

Modern apps rely on internet connectivity, big data, and location to round out their functionality. AR apps are no different in this respect, and in the same way we use Waze and Yelp for local, crowdsourced information about our environment, we will continue to do so when these apps are rebuilt with AR-first design.

In today’s tech-speak, the ‘AR Cloud’ is a set of backend services built to support AR features like persistence and multiplayer. These services consist of distributed databases, search engines, and algorithms for computer vision and machine learning. Most are well-scoped engineering projects and their success will be in their speed of delivery and quality of execution.

Technology behemoths and AR startups are competing to build these cloud services, with some of the heavily invested players going so far as to say that “Apple’s ARKit is almost useless without the AR Cloud.” The reality is quite the opposite. A single, well-designed AR mobile app can succeed immediately, but AR Cloud solutions can’t gain traction until enough top mobile apps are designed for AR. Ensuring that happens quickly is critical to their success.

AR is limited today by a lack of design principles

We need to think about how to design for mobile AR, and specifically about mobile apps for this new camera-first world. How do we break away from swipes, 2D menus, and the like, now that we can precisely track and annotate real objects in the world? AR technology has created an entirely new set of options for how we can interact with our phones, and from this we need to design AR interactions.

To better understand how we should think about AR-first design, my team and I recently conducted an AR user study to understand people’s experiences with the first crop of mobile AR apps. This resulted in the following AR interaction guidelines, which are by no means an exhaustive list:

  • Prefer curvilinear selection for pointing and grabbing. By using a gentle arc instead of a straight line, people can select distant objects without their cursor jumping as it gets nearer to the horizon (a rough sketch of this follows the list).
  • Keep AR touch interactions simple. Limit gestures to simple one-hand operations, since one hand is dedicated to holding and moving the phone.
  • Avoid dwell clicking, e.g., hovering on a selected object for a period of time; this selection mechanism is slow and generally leads to unintended actions.
  • Initialize virtual objects immediately. People expect AR apps to work seamlessly, and the surface calibration step found in many ARKit apps is an interaction that breaks the flow of the application.
  • Ensure reliability. Virtual objects should appear in consistent locations, and being able to accurately select, move and tether objects is important if these interactions are provided.
  • AR apps need to balance fixed on-screen UI with in-context AR UI. Users shouldn’t need to “hunt” for UI elements in their environment.
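To make the first guideline concrete, here is a minimal sketch of curvilinear selection in Unity C#. This is our illustration rather than 8th Wall's published code, and the class name and parameter values are assumptions: instead of casting a straight ray from the phone, we march along a gentle arc that bends toward the ground and select the first surface it meets.

using UnityEngine;

// Sketch of curvilinear selection: march along a quadratic arc that
// droops toward the ground instead of casting a straight ray, so the
// cursor settles on distant surfaces instead of skating to the horizon.
public class CurvedSelector : MonoBehaviour
{
    [SerializeField] private float arcDrop = 0.5f;    // how strongly the arc bends downward (illustrative)
    [SerializeField] private float maxDistance = 10f; // how far the selection arc reaches
    [SerializeField] private int samples = 32;        // arc resolution

    // Returns true and fills 'hit' when the arc meets a physical surface.
    public bool TrySelect(out RaycastHit hit)
    {
        Vector3 previous = transform.position;
        for (int i = 1; i <= samples; i++)
        {
            float distance = (float)i / samples * maxDistance;
            // A straight-line point, bent downward by a quadratic term.
            Vector3 point = transform.position
                            + transform.forward * distance
                            + Vector3.down * (arcDrop * distance * distance / maxDistance);
            if (Physics.Linecast(previous, point, out hit))
            {
                return true;
            }
            previous = point;
        }
        hit = default(RaycastHit);
        return false;
    }
}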

Before we can capitalize on cloud features for AR, we need to determine how to turn these and other guidelines into a uniform set of natural, fluid user interactions.

Looking forward to the year of mobile AR

I feel strongly that 2018 will be the year that we see the first applications designed for an AR-powered, camera-first world. The first developers to build them have a strong first-mover advantage on the next generation of applications, communication platforms, and games. I believe cloud services will be essential to this success and will play a big role in its evolution, but my view is that other challenges remain around design, reach, and teaching people to rely on this new technology.

In the true tradition of mobile technology, it won’t take long before new startups and tech behemoths defy what everyone once thought was or was not possible in this space. It’s not an AR Cloud or a killer new use case that will make AR successful. My take is that AR-first design, where we prioritize the AR experience over traditional 2D interfaces, will be the key to unlocking mobile AR.


Design is the only thing holding mobile AR back was originally published in 8th Wall on Medium, where people are continuing the conversation by highlighting and responding to this story.



  • Similar Content

    • By CursedPhoenix
      I'm thinking of developing my own breeding game. It should be easy enough to create the basic game mechanics, and most of the planning phase is done. I'll use NimbleBit's "Pocket Frogs" (https://en.wikipedia.org/wiki/Pocket_Frogs and https://play.google.com/store/apps/details?id=com.nimblebit.pocketfrogs&hl=de) as an example, because my idea is in some ways quite similar. Android is the first platform I want to target, with optional expansion later to iOS and perhaps Windows.

      First Problem:
      I'm not quite sure what language or engine I should use. I know this question is mainly opinion-based, but given what I plan to do, is there an engine to prefer and why, or should I build it from scratch, and which language should I use then? At the moment my best bet would be Unity or Python. Any suggestions here?

      Second (and most significant) Problem:
      I want to create a large number of - let's call them - monsters that differ in color, pattern, color of the pattern and partly in shape, but I don't get the trick behind it. So the question here is: how do I create reusable monsters that differ in the above-mentioned characteristics, with the lowest possible number of graphics?
      My thoughts and attempts on that topic: I looked at Pocket Frogs, and except for shape, they do exactly what I want to do with their frogs. But I really don't get how they created over 38,000 (!!!) individual frogs while the game still doesn't use that much space. I first tried to extract the graphics from the game's files to puzzle together what they did, but I could not find them. However, I think I figured out some parts of this secret just by looking at the frogs in-game: I think they used a basic frog model - 16, to be exact - to create 16 background frogs in all the colors. On top of them they just displayed the different patterns. But - and that's the mystery - the patterns come in different colors too, and I still don't believe they made 16*104=1664 different pattern graphics. So what trick am I missing here? Some kind of mask? Can I use the same technique to create different additional shapes for my monsters? And how did they make the feet move? If the patterns on the feet are extra graphics, that would be another 1664 graphics.
      Any idea on how I can make this work, or on how they made it work, will be much appreciated! Thx
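      One plausible answer, sketched below as an assumption about how Pocket Frogs-style games might work rather than anything extracted from the actual game: store each pattern once as a grayscale/white sprite and tint it at runtime. In Unity, that tint is just SpriteRenderer.color, which multiplies the texture's pixels, so one pattern texture yields every color variant for free.

      using UnityEngine;

      // Sketch: build a creature from a tinted grayscale body sprite plus a
      // tinted grayscale pattern overlay. One pattern texture can produce
      // any number of color variants, so 16 bodies and 104 patterns would
      // need only 16 + 104 graphics, not 16*104=1664.
      public class TintedMonster : MonoBehaviour
      {
          [SerializeField] private SpriteRenderer body;         // grayscale body sprite
          [SerializeField] private SpriteRenderer patternLayer; // grayscale pattern, drawn on top

          public void Apply(Color bodyColor, Sprite pattern, Color patternColor)
          {
              body.color = bodyColor;            // tint multiplies the grayscale pixels
              patternLayer.sprite = pattern;
              patternLayer.color = patternColor; // same pattern art, any color
          }
      }

      The same trick would extend to animated parts: as long as the feet frames are grayscale, one tint colors every frame without extra graphics.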
    • By khawk
      Lightstream, an innovator in cloud-native live streaming technology, today announced its IRL (In Real Life) plan for broadcasters who stream events from their phones or mobile live streaming setups. 
      Streaming out in the world away from a computer and without a strong internet connection can be a challenge. Lightstream’s new IRL plan provides capabilities that streamline setup time and gear required to produce a professional stream from any location, any device and any connection. 
      “Part of our mission at Lightstream has been to empower new creative possibilities and our new IRL streaming plan does just that,” says Stu Grubbs, CEO of Lightstream. “Lightstream Studio is powered by our cloud-native live video editing pipeline allowing creators to set up their production in advance, remotely control it, and have their streaming video automatically produced to their specifications from anywhere in the world while in-flight to their viewers.”
      With a Lightstream IRL plan, streamers set up their project and scenes via any browser-enabled device with their overlays, RTMP source and final channel destination. Broadcasting to Lightstream from any location will automatically layer on the media in that project on its way to their channel – whether that be Twitch, Mixer, YouTube, Facebook Live, or a custom destination. Similar to Lightstream’s unique integration with Xbox and Mixer, there is no need to have Lightstream Studio open in a browser window. 

      The new IRL Plan includes the following features:
      Customize with Overlays & Alerts
      Streamers can create multiple custom layouts, upload and position overlays, and use any integrated alerts service they prefer to easily personalize their mobile or IRL broadcast to better engage their audience.
      Auto Go Live
      Projects can be set to automatically start streaming to the broadcaster’s channel as soon as an incoming feed is detected.
      Auto BRB
      If the connection is lost, the RTMP layer will go transparent until the connection is re-established. Position an image directly behind your feed that will automatically appear to keep viewers updated.
      Disconnect Protection
      Lightstream Cloud will continue broadcasting and keep the broadcaster’s channel live until the signal is regained, so the audience doesn’t leave.
      Remote Control
      Start, stop, and switch scenes on any mobile device.
      Stay Mobile with Headless Mode
      Go live without having Lightstream Studio open. Lightstream will automatically composite on your layers in the cloud.
      Max Quality & Duration Increase
      Stream at 720p 60 fps for up to 12 hours per broadcast.

      For more information and to sign up for the IRL Plan, please visit https://golightstream.com/irl.
      For more information about Lightstream and its products, please visit https://www.golightstream.com. Be sure to follow Lightstream on Twitter for the latest updates and community happenings.

    • By Ruben Torres
      If you plan on jumping into Unity Addressables Pooling, be careful. You better make sure the object pool is not empty.
      [The original post with its formatting can be found at Unity Addressables Pooling]

      In previous posts, I showed you how to load content in your game without limits.
      Well, sure there are limits, but you are not likely to reach them if you did a good job at implementing your asset content management system. And I'm pretty sure you'll get it right enough if you followed my advice.
      So far, though, we are missing one important piece of the puzzle. I mean, we're missing many, but today I want to focus on a very specific one: latency.
      What's latency?
      Latency is the time it takes between starting something and finishing it. It is some sort of delay that we usually want to avoid.
      You suffer latency when cooking your microwave popcorn, for instance. There, you start the microwave and have to wait for 3 to 5 minutes. And we want to eat popcorn right away, so this kind of latency is bad.
      When we get into the field of games, things get worse than cooking popcorn.
      In games, milliseconds matter. Everything above 20ms makes competitive multiplayer a bit more unfair.
      But in this post, we're not talking about multiplayer games. We will be talking about the latency we suffer when we load and display an asset using Addressables for Unity.
      And actually, we will do something about it.
      We'll implement a simple Unity Addressables Pooling System.
      Will you jump in the pool?
       
      Quick Navigation
      Level 1 Developer: Simple Unity Addressables Loading
      Level 2 Developer: Unity Addressables Pooling
          1. Warm up the asynchronous pool
          2. Helping our Gameplay: take an item from the pool
          3. Saving CPU time: return the item to the pool
          4. Freeing up memory: disable the pool
      Level 3 Developer: Smart Unity Addressables Pooling
          Performance
          Networking
          Automatic Pooling

      Level 1 Developer: Simple Unity Addressables Loading
      Yes, I know. We've done this several times.
      We take a prefab, mark it as Addressable and we assign it to a script that loads the prefab whenever it makes sense.
      And this gives you big benefits over traditional asset management workflows based on direct references. In short, using Addressables gives you...

      To read more on this, visit my introductory post on Unity Addressables Benefits: 3 Ways to Save Your Game.
      In this blog post, I'll stick to showing my tremendously complex sample project setup.

      Unity Addressables Simple Setup
      Oh, never mind, it was just a prefab instantiated through the Addressables API...
      This works most of the time just fine for any game.
      However...
      This loading and instantiation process has some latency to it. Unity has to fetch the required asset bundle, load the prefab and its dependencies, and instantiate it.
      The loading process should take well below 1 ms.
      But things get messy when we add more complexity to this object. If we add animators, particle systems, rigid bodies and such, Unity can surely end up stealing 10 ms away from us. Activating these components can take a significant amount of time.
      And if the asset bundles are served over the network and are not ready, then we're talking seconds, even minutes.
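      If you want to see this latency for yourself, here is a minimal probe, assuming an addressable prefab registered under the placeholder address "BossPrefab" (the address and class name are ours, purely for illustration):

      using System.Collections;
      using UnityEngine;
      using UnityEngine.AddressableAssets;

      // Sketch: time a single Addressables instantiation. On a warm local
      // cache expect around a millisecond; the first fetch of a remote
      // bundle can take orders of magnitude longer.
      public class SpawnLatencyProbe : MonoBehaviour
      {
          IEnumerator Start()
          {
              var stopwatch = System.Diagnostics.Stopwatch.StartNew();
              var handle = Addressables.InstantiateAsync("BossPrefab");
              yield return handle; // covers bundle fetch + load + instantiate
              stopwatch.Stop();
              Debug.Log($"Instantiation took {stopwatch.ElapsedMilliseconds} ms");
          }
      }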
      How terrifying would your game be if, by the time your final boss spawned, the player had already reached the end of the dungeon?
      This is my guess: as terrifying as profitable.
      A typical solution in Unity relies on adding object pools. 
      There are many object pools you can find online for Unity. The issue is, they're not Addressables-ready.
      But now, you'll get one.

      Level 2 Developer: Unity Addressables Pooling
      Let me warn you here: the needs for a pooling system vary greatly from project to project.
      Here I'll be giving you a simple system that you can tweak to match your needs.
      This is what you'll want from this pooling system:

      In case you were wondering: yes, I re-used the icons from the previous section. Busy times here.
      Before we jump into the code, I'll show you the test I prepared.
       
      1. Warm up the asynchronous pool
      At this point, the prefab and its content are not yet loaded in memory.
      The pool is enabled and loads the prefab through Addressables.
      Then, it instantiates several objects and deactivates them all, paying the price of Awake, Start, OnEnable and OnDisable up front.
      After this step, the prefab contents are in memory.

      Addressables Pooling: Warm-up
       
      2. Helping our Gameplay: take an item from the pool
      A user takes an item from the pool and puts it somewhere in the scene through the synchronous method Take().
      The user pays the activation (OnEnable) time, which depends on the complexity of the prefab.

      Addressables Pooling: Take
       
      3. Saving CPU time: return the item to the pool
      The user gets tired of their new toy and returns it to the pool.
      The pool deactivates it and puts it under its hierarchy, paying the price of OnDisable.

      Addressables Pooling: Return
       
      4. Freeing up memory: disable the pool
      After some time, we know we will not need this item anymore.
      We disable the pool, which frees up all the used memory, even though the indirect reference is still present in the pool.

      Addressables Pooling: Disable
       
      The strength of this method lies in memory management. We pay the memory price only when we decide to.
      With traditional Unity object pools, we paid the memory overhead all the time, even if the prefab was never instantiated.
       
      Now, what does the code look like?
      01: public class GamedevGuruPoolUserTest : MonoBehaviour
      02: {
      03:     [SerializeField] private AssetReference assetReferenceToInstantiate = null;
      04:
      05:     IEnumerator Start()
      06:     {
      07:         var wait = new WaitForSeconds(8f);
      08:
      09:         // 1. Wait for pool to warm up.
      10:         yield return wait;
      11:
      12:         // 2. Take an object out of the pool.
      13:         var pool = GamedevGuruPool.GetPool(assetReferenceToInstantiate);
      14:         var newObject = pool.Take(transform);
      15:
      16:         // 3. Return it.
      17:         yield return wait;
      18:         pool.Return(newObject);
      19:
      20:         // 4. Disable the pool, freeing resources.
      21:         yield return wait;
      22:         pool.enabled = false;
      23:
      24:         // 5. Re-enable pool, put the asset back in memory.
      25:         yield return wait;
      26:         pool.enabled = true;
      27:     }
      28: }

      That's a pretty normal piece of code for testing.
      If there's anything relevant to mention, it is line 13. Why do we look for the pool by passing our asset to GetPool?
      The idea behind that is that you might need several pools, one for each asset type, so you need a way to identify the pool you want to access.
      I don't particularly like static methods that access static variables, but you should adapt the code to the needs of your game.
      By the way, you don't need to copy all the code yourself. I prepared a repository you can access for free. Visit the GitHub Repository
      And how's the code for the pool itself?
      01: public class GamedevGuruPool : MonoBehaviour
      02: {
      03:     public bool IsReady { get { return loadingCoroutine == null; } }
      04:
      05:     [SerializeField] private int elementCount = 8;
      06:     [SerializeField] private AssetReference assetReferenceToInstantiate = null;
      07:
      08:     private static Dictionary<object, GamedevGuruPool> allAvailablePools = new Dictionary<object, GamedevGuruPool>();
      09:     private Stack<GameObject> pool = null;
      10:     private Coroutine loadingCoroutine;
      11:
      12:     public static GamedevGuruPool GetPool(AssetReference assetReference)
      13:     {
      14:         var exists = allAvailablePools
      15:             .TryGetValue(assetReference.RuntimeKey, out GamedevGuruPool pool);
      16:         if (exists)
      17:         {
      18:             return pool;
      19:         }
      20:
      21:         return null;
      22:     }
      23:
      24:     public GameObject Take(Transform parent)
      25:     {
      26:         Assert.IsTrue(IsReady, $"Pool {name} is not ready yet");
      27:         if (IsReady == false) return null;
      28:         if (pool.Count > 0)
      29:         {
      30:             var newGameObject = pool.Pop();
      31:             newGameObject.transform.SetParent(parent, false);
      32:             newGameObject.SetActive(true);
      33:             return newGameObject;
      34:         }
      35:
      36:         return null;
      37:     }
      38:
      39:     public void Return(GameObject gameObjectToReturn)
      40:     {
      41:         gameObjectToReturn.SetActive(false);
      42:         gameObjectToReturn.transform.parent = transform;
      43:         pool.Push(gameObjectToReturn);
      44:     }
      45:
      46:
      47:     void OnEnable()
      48:     {
      49:         Assert.IsTrue(elementCount > 0, "Element count must be greater than 0");
      50:         Assert.IsNotNull(assetReferenceToInstantiate, "Prefab to instantiate must be non-null");
      51:         allAvailablePools[assetReferenceToInstantiate.RuntimeKey] = this;
      52:         loadingCoroutine = StartCoroutine(SetupPool());
      53:     }
      54:
      55:     void OnDisable()
      56:     {
      57:         allAvailablePools.Remove(assetReferenceToInstantiate.RuntimeKey); // remove by the same key used in OnEnable
      58:         foreach (var obj in pool)
      59:         {
      60:             Addressables.ReleaseInstance(obj);
      61:         }
      62:         pool = null;
      63:     }
      64:
      65:     private IEnumerator SetupPool()
      66:     {
      67:         pool = new Stack<GameObject>(elementCount);
      68:         for (var i = 0; i < elementCount; i++)
      69:         {
      70:             var handle = assetReferenceToInstantiate.InstantiateAsync(transform);
      71:             yield return handle;
      72:             var newGameObject = handle.Result;
      73:             pool.Push(newGameObject);
      74:             newGameObject.SetActive(false);
      75:         }
      76:
      77:         loadingCoroutine = null;
      78:     }
      79: }

      I know, it's somewhat long, but I want to post it here so I can explain what's going on.
      Like I said before, in line 14 we're getting the right pool for you, as in this article we aim to have one pool per prefab. We use the runtime key for this purpose, which is the identifier we use for our addressable assets. Other variations could include using generics and enums to route everything through a single pool object instead.
      In lines 30-33, we take one object from the pool, we parent it and then activate it. You might want to add more arguments to this function, such as position and rotation.
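      For instance, a position-and-rotation overload might look like this (a sketch, not part of the repository code; it belongs inside the GamedevGuruPool class above):

      // Sketch: an overload of Take that also places the pooled object.
      public GameObject Take(Transform parent, Vector3 position, Quaternion rotation)
      {
          var newGameObject = Take(parent); // reuse the existing Take(Transform)
          if (newGameObject != null)
          {
              newGameObject.transform.SetPositionAndRotation(position, rotation);
          }
          return newGameObject;
      }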
      We do the opposite in lines 41-43. Like the child who rebels and leaves home only to come back after just an hour, we accept it back. We deactivate it and parent it back to our pool game object.
      And then it is time to warm up the pool (line 52) and empty it (line 60). We pre-warm the pool when it is enabled by instantiating and deactivating 8 prefabs. Finally, we call Addressables.ReleaseInstance to free up memory.
      The strategy here is clear: enable the pool when we suspect we will need it and disable/destroy it when we don't.

      Level 3 Developer: Smart Unity Addressables Pooling
      There are so many variations of Unity Addressables Pooling systems.
      It all really depends on your objectives for your game.
      Performance
      You could, for instance, prioritize performance. If that is the case, you certainly don't want to activate/deactivate entire game objects on the pool's Take and Return calls.
      Activations are extremely expensive. What you want there is to enable/disable certain components instead, such as renderers, animators, canvases, etc. You'd stop paying the draw calls while not paying activation times.
      Something you could also avoid is excessive parenting, as we also pay a high price for it.
      If this is your case, you might want to go for PerformancePool.
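      As a rough illustration of that idea (the PerformancePool named above is the author's; this sketch and its names are ours):

      using UnityEngine;

      // Sketch: instead of SetActive on the whole GameObject, toggle only
      // the expensive components, skipping activation costs while still
      // saving draw calls and animation updates.
      public static class PooledVisibility
      {
          public static void SetVisible(GameObject pooledObject, bool visible)
          {
              foreach (var r in pooledObject.GetComponentsInChildren<Renderer>(true))
              {
                  r.enabled = visible;
              }
              foreach (var a in pooledObject.GetComponentsInChildren<Animator>(true))
              {
                  a.enabled = visible;
              }
          }
      }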
      Networking
      Did you ever use Photon, PlayFab, Mirror or any other networking solution to add multiplayer possibilities to your game?
      If so, you might have noticed you often have to assign a prefab somewhere so these systems instantiate it when required.
      But what if your prefab is based on Addressables?
      Well, in that case, you can still profit from a more specialized plug-and-play pool version: NetworkedPool.
      Automatic Pooling
      If performance is not required and you'd rather save time, you can also make your life easier and still get the benefits of pooling.
      You could go for the AutomaticPool component, which will take care of loading and unloading prefabs for you. More interestingly, it'll free up its entire memory after a certain time has passed without users requiring the prefab.
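      The core of such a component could be as simple as this sketch (the real AutomaticPool ships with the course; the names and the 60-second timeout here are our assumptions):

      using UnityEngine;

      // Sketch: release a pool's memory after it has sat unused for a while,
      // reusing the enable/disable behaviour of the GamedevGuruPool above.
      [RequireComponent(typeof(GamedevGuruPool))]
      public class PoolIdleReleaser : MonoBehaviour
      {
          [SerializeField] private float idleSecondsBeforeRelease = 60f;
          private GamedevGuruPool pool;
          private float lastUseTime;

          void Awake()
          {
              pool = GetComponent<GamedevGuruPool>();
              lastUseTime = Time.time;
          }

          // Gameplay code calls this on every Take/Return.
          public void NotifyUsed()
          {
              lastUseTime = Time.time;
              pool.enabled = true; // re-warm the pool if it was released
          }

          void Update()
          {
              if (pool.enabled && Time.time - lastUseTime > idleSecondsBeforeRelease)
              {
                  pool.enabled = false; // OnDisable releases the pooled instances
              }
          }
      }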
      If you are interested in these Plug And Play components, you'll be happy to know they will be included in my upcoming Addressables for the Busy Developer course.
      Highly-skilled game developers and I will start a sprint to transform and level up the way we make games in Unity.
      Join us in the quest.

      What did you think of the article? Leave a comment to share your experience with Addressables.
      Ruben
    • By srphfthnd
      Hello, I apologise in advance; I'm not really good at English. I'm new here and I don't know how to start. I'm a fourth-year IT student and want to turn my favorite old game into a mobile game. The game I want to create is based on the MMORPG Ran Online, which was shut down 3 months ago. I'm deeply in love with this game and want to bring it to mobile, but I have no idea where to start. I've heard about reverse engineering the game and recreating it using Unity. I've also tried reading other forums about making a mobile MMORPG, but they only gave me vague, general answers. Please help me; I want to know where I should start on this project. I really want this game to be back and in my pocket. I'm not that good, but I have knowledge of Java and C++. Any help will be much appreciated.