05/17/19 05:32 AM

    Unity ML Agents

    Engines and Middleware


    Learn about Unity ML-Agents in this article by Micheal Lanham, a tech innovator and an avid Unity developer, consultant, manager, and author of multiple Unity games, graphics projects, and books.

    Unity has embraced machine learning, and deep reinforcement learning in particular, with the aim of producing a working deep reinforcement learning (DRL) SDK for game and simulation developers. Fortunately, the team at Unity, led by Danny Lange, has succeeded in developing a robust, cutting-edge DRL engine capable of impressive results. Unity uses a proximal policy optimization (PPO) model as the basis for its DRL engine; this model is significantly more complex than simpler algorithms such as DQN and may differ in some ways.

    This article will introduce the Unity ML-Agents tools and SDK for building DRL agents to play games and simulations. While this tool is both powerful and cutting-edge, it is also easy to use and provides a few tools to help us learn concepts as we go. Be sure you have Unity installed before proceeding.

    Installing ML-Agents

    In this section, we cover a high-level overview of the steps you will need to take in order to successfully install the ML-Agents SDK. This material is still in beta and has already changed significantly from version to version. Now, jump on your computer and follow these steps:

    1. Be sure you have Git installed on your computer and that it works from the command line. Git is a very popular source code management system, and there are plenty of resources on how to install and use it for your platform. After you have installed Git, make sure it works by test-cloning a repository, any repository.

    2. Open a command window or a regular shell. Windows users can open an Anaconda window.

    3. Change to a working folder where you want to place the new code and enter the following command (Windows users may want to use C:\ML-Agents):

      git clone https://github.com/Unity-Technologies/ml-agents


    4. This will clone the ml-agents repository onto your computer and create a new folder with the same name. You may want to take the extra step of also adding the version to the folder name. Unity, and pretty much the whole AI space, is in continuous transition, at least at the moment. This means new and constant changes are always happening. At the time of writing, we will clone to a folder named ml-agents.6, like so:

      git clone https://github.com/Unity-Technologies/ml-agents ml-agents.6


    5. Create a new virtual environment for ml-agents and set it to Python 3.6, like so:

      conda create -n ml-agents python=3.6
      # if you prefer another environment manager, consult its documentation


    6. Activate the environment, again, using Anaconda:

      activate ml-agents


    7. Install TensorFlow. With Anaconda, we can do this using the following:

      pip install tensorflow==1.7.1


    8. Install the Python packages. On Anaconda, enter the following:

      cd ML-Agents          # the working folder from step 3
      cd ml-agents.6        # or ml-agents, whichever name you cloned into
      cd ml-agents          # the inner folder containing the package
      pip install -e .      # or pip3 install -e .
    9. This will install all the required packages for the Agents SDK and may take several minutes. Be sure to leave this window open, as we will use it shortly.
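
    Before moving on, you can quickly sanity-check the environment from Python. This is a hedged check of my own (not part of the official instructions): it only confirms the interpreter version and reports whether the two packages installed above are importable, and it runs harmlessly either way:

```python
# Sanity check for the setup above (assumes the ml-agents conda env is
# active; package names are the pip names used in the steps above).
import sys
import importlib.util

print(sys.version_info[:2])  # expect (3, 6) inside the ml-agents env

status = {}
for pkg in ("tensorflow", "mlagents"):
    # find_spec returns None if the package is not importable
    status[pkg] = importlib.util.find_spec(pkg) is not None
    print(pkg, "installed" if status[pkg] else "NOT installed")
```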


    This should complete the setup of the Unity Python SDK for ML-Agents. In the next section, we will learn how to set up and train one of the many example environments provided by Unity.

    Training an agent

    We can now jump in and look at examples where deep reinforcement learning (DRL) is put to use. Fortunately, the ML-Agents toolkit provides several examples to demonstrate the power of the engine. Open up Unity or the Unity Hub and follow these steps:

    1. Click on the Open project button at the top of the Project dialog.
    2. Locate and open the UnitySDK project folder as shown in the following screenshot:

      Opening the Unity SDK Project
    3. Wait for the project to load and then open the Project window at the bottom of the editor. If you are asked to update the project, say yes or continue. Thus far, all of the agent code has been designed to be backward compatible.
    4. Locate and open the GridWorld scene as shown in this screenshot:
      Opening the GridWorld example scene
    5. Select the GridAcademy object in the Hierarchy window. 
    6. Then direct your attention to the Inspector window, and beside the Brains, click the target icon to open the Brain selection dialog:

      Inspecting the GridWorld example environment
    7. Select the GridWorldPlayer brain. This brain is a player brain, meaning that a player, you, can control the game.
    8. Press the Play button at the top of the editor and watch the grid environment form. Since the game is currently set to player control, you can use the WASD keys to move the cube. The goal is much like that of the FrozenPond environment we built a DQN for earlier: you have to move the blue cube to the green + symbol and avoid the red X.

    Feel free to play the game as much as you like. Note how the game only runs for a certain amount of time and is not turn-based. In the next section, we will learn how to run this example with a DRL agent.
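
    To make the rules concrete, here is a toy, text-only sketch of what a GridWorld-style step function might look like. This is an assumption based purely on the description above (a goal, a hazard, and a small per-step penalty), not Unity's actual implementation:

```python
# Hypothetical GridWorld-style rules; the reward values (+1 goal,
# -1 hazard, -0.01 per step) are assumptions for illustration.
MOVES = {"w": (0, 1), "s": (0, -1), "a": (-1, 0), "d": (1, 0)}

def step(pos, action, goal, hazard, size=5):
    dx, dy = MOVES[action]
    x = min(max(pos[0] + dx, 0), size - 1)   # clamp to the grid
    y = min(max(pos[1] + dy, 0), size - 1)
    if (x, y) == goal:
        return (x, y), 1.0, True      # reached the green +
    if (x, y) == hazard:
        return (x, y), -1.0, True     # hit the red X
    return (x, y), -0.01, False      # small penalty per move

print(step((0, 0), "d", goal=(1, 0), hazard=(3, 3)))  # ((1, 0), 1.0, True)
```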

    What's in a brain?

    One of the brilliant aspects of the ML-Agents platform is the ability to switch from player control to AI/agent control very quickly and seamlessly. In order to do this, Unity uses the concept of a brain. A brain may be either player-controlled, a player brain, or agent-controlled, a learning brain. The brilliant part is that you can build a game and test it as a player, and then turn the game loose on an RL agent. This has the added benefit of making any game written in Unity controllable by an AI with very little effort.
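
    The idea can be sketched in a few lines of Python. This is a minimal illustration of the concept only, not Unity's actual API: the game only ever talks to a decide()-style interface, so swapping the brain object swaps who is in control:

```python
class PlayerBrain:
    """Player brain: asks a human for the action."""
    def decide(self, observation):
        return input("Action (w/a/s/d): ")

class LearningBrain:
    """Learning brain: delegates the action to a learned policy."""
    def __init__(self, policy):
        self.policy = policy

    def decide(self, observation):
        return self.policy(observation)

# The game loop is identical for both brains; only the object differs.
# The observation key below is made up for the example.
brain = LearningBrain(policy=lambda obs: "d" if obs["goal_is_right"] else "a")
print(brain.decide({"goal_is_right": True}))  # d
```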

    Training an RL agent with Unity is fairly straightforward to set up and run. Unity uses Python externally to build the learning brain model. Using Python makes far more sense since, as we have already seen, several DL libraries are built on top of it. Follow these steps to train an agent for the GridWorld environment:

    1. Select the GridAcademy again and switch the Brains from GridWorldPlayer to GridWorldLearning as shown:

      Switching the brain to use GridWorldLearning
    2. Click on the Control option at the end. This simple setting is what tells the brain it may be controlled externally. Be sure to double-check that the option is enabled.
    3. Select the trueAgent object in the Hierarchy window, and then, in the Inspector window, change the Brain property under the Grid Agent component to a GridWorldLearning brain:

      Setting the brain on the agent to GridWorldLearning
    4. For this sample, we want to switch our Academy and Agent to use the same brain, GridWorldLearning. Make sure you have an Anaconda or Python window open and set to the ML-Agents/ml-agents folder or your versioned ml-agents folder. 
    5. Run the following command in the Anaconda or Python window using the ml-agents virtual environment:
      mlagents-learn config/trainer_config.yaml --run-id=firstRun --train
    6. This will start the Unity PPO trainer and run the agent example as configured. At some point, the command window will prompt you to run the Unity editor with the environment loaded.
    7. Press Play in the Unity editor to run the GridWorld environment. Shortly after, you should see the agent training with the results being output in the Python script window:

      Running the GridWorld environment in training mode
    8. Note how the mlagents-learn script is the Python code that builds the RL model to run the agent. As you can see from the output of the script, there are several parameters, or what we refer to as hyper-parameters, that need to be configured.
    9. Let the agent train for several thousand iterations and note how quickly it learns. The internal model here, called PPO, has been shown to be a very effective learner across many forms of task and is very well suited to game development. Depending on your hardware, the agent may learn to perfect this task in less than an hour.

    Keep the agent training and look at more ways to inspect the agent's training progress in the next section.
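
    The hyperparameters that mlagents-learn reads come from the config/trainer_config.yaml file passed on the command line. As a rough illustration only (the exact keys and values vary by toolkit version, so treat your own file as authoritative), the GridWorldLearning entry looks along these lines:

```yaml
GridWorldLearning:        # section name matches the brain being trained
  batch_size: 32          # experiences per gradient update
  buffer_size: 256        # experiences collected before each update
  beta: 5.0e-3            # entropy regularization strength
  num_layers: 1
  hidden_units: 256
  time_horizon: 5
  max_steps: 5.0e4        # total training steps
```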

    Monitoring training with TensorBoard

    Training an agent with RL, or any DL model for that matter, is often not a simple task and requires some attention to detail. Fortunately, TensorFlow ships with a set of graphing tools called TensorBoard that we can use to monitor training progress. Follow these steps to run TensorBoard:

    1. Open an Anaconda or Python window. Activate the ml-agents virtual environment. Don't shut down the window running the trainer; we need to keep that going.
    2. Navigate to the ML-Agents/ml-agents folder and run the following command:
      tensorboard --logdir=summaries
    3. This will run TensorBoard with its own built-in web server. You can load the page using the URL that is shown after you run the previous command.
    4. Enter the URL for TensorBoard as shown in the window, or use localhost:6006 or machinename:6006 in your browser. After an hour or so, you should see something similar to the following:

      The TensorBoard graph window
    5. In the preceding screenshot, you can see each of the various graphs denoting an aspect of training. Understanding each of these graphs is important to understanding how your agent is training, so we will break down the output from each section:
    • Environment: This section shows how the agent is performing overall in the environment. A closer look at each of the graphs is shown in the following screenshot with their preferred trend:

    A closer look at the Environment section plots

    • Cumulative Reward: This is the total reward the agent is maximizing. You generally want to see this going up, but there are reasons why it may fall. It is best to keep rewards in the range of -1 to 1; if you see rewards outside this range on the graph, you will want to correct that.
    • Episode Length: It usually is a better sign if this value decreases. After all, shorter episodes mean more training. However, keep in mind that the episode length could increase out of need, so this one can go either way.
    • Lesson: This represents which lesson the agent is on and is intended for Curriculum Learning.
    • Losses: This section shows graphs that represent the calculated loss or cost of the policy and value. A screenshot of this section is shown next, again with arrows showing the optimum preferences:

      Losses and preferred training direction


    • Policy Loss: This determines how much the policy is changing over time. The policy is the piece that decides the actions, and in general, this graph should be showing a downward trend, indicating that the policy is getting better at making decisions.
    • Value Loss: This is the mean or average loss of the value function. It essentially models how well the agent is predicting the value of its next state. Initially, this value should increase, and then after the reward is stabilized, it should decrease.
    • Policy: PPO uses the concept of a policy rather than a model to determine the quality of actions. The next screenshot shows the policy graphs and their preferred trend:

      Policy graphs and preferred trends

    • Entropy: This represents how much the agent is exploring. You want this value to decrease as the agent learns more about its surroundings and needs to explore less.
    • Learning Rate: Currently, this value is set to decrease linearly over time.
    • Value Estimate: This is the mean value estimate over all the states the agent visits. This value should increase, representing the growth of the agent's knowledge, and then stabilize.
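
    Two of these quantities are easy to compute by hand, which helps build intuition for the graphs. The snippet below is a toy illustration of the concepts only, not ML-Agents' internal code:

```python
import math

def cumulative_reward(rewards):
    # The per-episode total the agent is trying to maximize.
    return sum(rewards)

def entropy(action_probs):
    # Policy entropy: highest for a uniform distribution (pure
    # exploration), zero once the policy commits to a single action.
    return -sum(p * math.log(p) for p in action_probs if p > 0)

print(round(cumulative_reward([0.1, -0.01, 1.0]), 2))  # 1.09
print(round(entropy([0.25, 0.25, 0.25, 0.25]), 3))     # 1.386, max for 4 actions
print(round(entropy([0.97, 0.01, 0.01, 0.01]), 3))     # 0.168, little exploration
```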

    6. Let the agent run to completion and keep TensorBoard running.

    7. Go back to the Anaconda/Python window that was training the brain and run this command:

    mlagents-learn config/trainer_config.yaml --run-id=secondRun --train

    8. You will again be prompted to press Play in the editor; be sure to do so. Let the agent start the training and run for a few sessions. As you do so, monitor the TensorBoard window and note how the secondRun is shown on the graphs. Feel free to let this agent run to completion as well, but you can stop it now if you want to.

    In previous versions of ML-Agents, you needed to build a Unity executable first as a game-training environment and run that. The external Python brain would still run the same. This method made it very difficult to debug any code issues or problems with your game. All of these difficulties were corrected with the current method.

    Now that we have seen how easy it is to set up and train an agent, we will go through the next section to see how that agent can be run without an external Python brain and run directly in Unity.

    Running an agent

    Using Python to train works well, but it is not something a real game would ever use. Ideally, what we want to be able to do is build a TensorFlow graph and use it in Unity. Fortunately, a library called TensorFlowSharp was constructed that allows .NET to consume TensorFlow graphs. This allows us to build offline TFModels and later inject them into our game. Unfortunately, we can only use trained models in this manner and cannot train, at least not yet.

    Let's see how this works using the graph we just trained for the GridWorld environment and use it as an internal brain in Unity. Follow the exercise in the next section to set up and use an internal brain:

    1. Download the TFSharp plugin from here
    2. From the editor menu, select Assets | Import Package | Custom Package... 
    3. Locate the asset package you just downloaded and use the import dialogs to load the plugin into the project.
    4. From the menu, select Edit | Project Settings. This will open the Settings window (new in Unity 2018.3).
    5. Under the Player options, locate Scripting Define Symbols, set the text to ENABLE_TENSORFLOW, and enable Allow Unsafe Code, as shown in this screenshot:

      Setting the ENABLE_TENSORFLOW flag
    6. Locate the GridWorldAcademy object in the Hierarchy window and make sure it is using the GridWorldLearning brain. Then turn the Control option off under the Brains section of the Grid Academy script.
    7. Locate the GridWorldLearning brain in the Assets/Examples/GridWorld/Brains folder and make sure the Model parameter is set in the Inspector window, as shown in this screenshot:

      Setting the model for the brain to use
    8. The Model should already be set to the GridWorldLearning model. In this example, we are using the TFModel that is shipped with the GridWorld example.
    9. Press Play to run the editor and watch the agent control the cube.

    Right now, we are running the environment with the pre-trained Unity brain. In the next section, we will look at how to use the brain we trained in the previous section.

    Loading a trained brain

    All of the Unity samples come with pre-trained brains you can use to explore the samples. Of course, we want to be able to load our own TF graphs into Unity and run them. Follow the next steps in order to load a trained graph:

    1. Locate the ML-Agents/ml-agents/models/firstRun-0 folder. Inside this folder, you should see a file named GridWorldLearning.bytes. Drag this file into the Unity editor into the Project/Assets/ML-Agents/Examples/GridWorld/TFModels folder, as shown:

      Dragging the bytes graph into Unity
    2. This will import the graph into the Unity project as a resource and rename it GridWorldLearning 1. It does this because the default model already has the same name.
    3. Locate the GridWorldLearning brain in the brains folder, select it in the Inspector window, and drag the new GridWorldLearning 1 model onto the Model slot under Brain Parameters:

      Loading the Graph Model slot in the brain
    4. We won't need to change any other parameters at this point, but pay special attention to how the brain is configured. The defaults will work for now.
    5. Press Play in the Unity editor and watch the agent run through the game successfully.
    6. How long you trained the agent for will largely determine how well it plays the game. If you let it complete the training, the agent should perform on a par with the already-trained Unity agent.


    If you found this article interesting, you can explore Hands-On Deep Learning for Games to understand the core concepts of deep learning and deep reinforcement learning by applying them to develop games. Hands-On Deep Learning for Games will give an in-depth view of the potential of deep learning and neural networks in game development.
