
Search the Community

Showing results for tags 'Machine Learning'.




Found 15 results

  1. The School of Information is accepting applications for its PhD program for the incoming fall 2019 cohort. We are seeking students with interests in machine learning, natural language processing, text retrieval, virtual/mixed reality, or game development/design. Preferred incoming doctoral students for the fall 2019 cohort will bring strong computational skills. Advanced object-oriented programming skills, experience with machine learning toolkits such as scikit-learn or Keras, or experience with widely used real-time development platforms such as Unity would be a plus. Instructional experience is also beneficial and should be highlighted in application materials, but is not required. Funded positions will be available for select graduate students; those students will receive tuition remission and a stipend in exchange for research-related, grant-support, or instructional work. The School of Information is an academic unit in the College of Social and Behavioral Sciences at the University of Arizona, Arizona’s only public land-grant university. The School’s mission: “As Arizona’s iSchool, we collaborate across disciplines, drive critical research and development, and educate the information intellectuals and professionals of tomorrow to further positive social change that is rooted in the places where we live and that impacts the world.” The school and the College of Social and Behavioral Sciences are dedicated to creating a serious, open, free intellectual space for inquiry, one in which faculty, students, staff members, and community partners can participate fully, regardless of race, ethnicity, gender identity, sexual orientation, age, socio-economic status, citizenship status, size, abled status, language, religion, or any other characteristic.
Strategically positioned to lead the University of Arizona’s status as a Hispanic Serving Institution, SBS foregrounds our awareness of the deep ancestral footprint of our region’s populations and cultures, including the Tohono O’odham and Pascua Yaqui people whose lands we inhabit. We honor diverse knowledge traditions and lived experiences, and we strive to foster accessibility and equity, the conditions in which our members together can produce new, rigorous, urgent, and evidence-based knowledge. For more information, please see: https://ischool.arizona.edu/phd-information. For quick questions about the application process or application materials, feel free to reach out to our administrative support team through Barbara Vandervelde, barbv@email.arizona.edu.
  2. Hello everyone, my name is Valerio and I'm an expert in business development and product innovation. At the moment I'm working for a big gambling company, and I'm also working on a side project to launch a startup aimed at the whole gaming world. I have an idea and I would like the help of this community to validate it. Thanks to machine learning it is possible to predict user behavior. For example, from the tests I have done, I can assure you that it is possible to predict, with an accuracy between 80 and 90 percent (depending on the quality of the data), which users will use a certain app next week. My idea is to create a Software as a Service, with a monthly fee, which allows developers and business owners of a game to load tables of data into the software and receive the desired predictive output. For example, thanks to this software you might know which users will play and which ones will not play next week, or analyze more specific metrics, like who will make a purchase and who will not, who will use a particular feature and who will not, and so on. With this information, the team that manages the app can set up marketing activities to increase engagement, such as sending push notifications only to those you know will not play next week, or special offers to those who will not make a purchase. Here are the questions I need you to answer: - Do you find this service useful? - If so, how much would you pay per month for such a service? Thank you all for your participation, Valerio.
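One way to evaluate a prediction service like the one described above is against a trivial baseline: a rule as simple as "predict active if the user played recently" is often surprisingly accurate, and a paid ML model should beat it. A hypothetical sketch (the data layout here is assumed, not from the post):

```python
# Naive baseline for "who will play next week": predict active if the user
# played in at least 2 of the last 4 weeks. Illustrative only.

def predict_active(play_history, window=4, threshold=2):
    """play_history: list of 0/1 flags, one per week, most recent last."""
    recent = play_history[-window:]
    return sum(recent) >= threshold

users = {
    "alice": [1, 1, 0, 1],   # regular player
    "bob":   [1, 0, 0, 0],   # lapsing
}
predictions = {u: predict_active(h) for u, h in users.items()}
print(predictions)  # → {'alice': True, 'bob': False}
```

If an ML model's 80-90% accuracy is only a few points better than this kind of rule, the monthly fee is the deciding factor.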
  3. Hi Everyone! After being in private alpha for a few months, we've now released our AI music engine on the Unity Asset Store in open Beta as a Unity plugin. Melodrive is an engine that generates an infinite music stream that can adapt to the unique and complex requirements of a video game. We use state of the art synthesisers and samplers, along with pro-level effects and mixing to render the score generated by the AI. Melodrive comes with full Unity support out of the box, and is easily extendable to any other platform or technology with the native dynamic library. Developers and end-users can create their own music, by changing style and emotion, melody and instruments, in an instant, on the fly. Please note that this version of Melodrive is geared towards developers with little to no musical skills. We'd love to improve the engine and add features in order to extend the capabilities of professional musicians, but this will come at a later time! Here's a short trailer that gives you an idea of what you can do with Melodrive: You can download Melodrive Lite Beta for free from the Unity Asset Store at https://assetstore.unity.com/packages/audio/music/melodrive-lite-beta-129271 We'd love to get feedback and suggestions to improve our music engine. For this, you can join our Discord: https://discord.gg/ZGvF9NX Thanks!
  4. Over the last 1.5 years, I've worked with my colleagues on Melodrive, an AI music engine that enables people with limited to no musical skills to easily create highly adaptive music. The music engine uses AI to generate music from scratch, in real time. The music smoothly transitions between different emotional states. We're planning on releasing the engine for free to indies. This means that indies can quickly get a soundtrack which is more dynamic than anything that exists today on the market, composers included! We've just released four Melodrive tech demos. These showcase Melodrive's AI music potential across a range of experiences, from VR to music branding. You can download the demos directly from our website here: http://melodrive.com You can also sign up to become an alpha tester. Hope this may be of interest to you!
  5. Hi folks, I am learning artificial intelligence and trying out my first real-life AI application. What I am trying to do is take various sentences as input and classify each sentence into one of X categories based on keywords and the 'action' in the sentence. The keywords are, for example, Merger, Acquisition, Award, Product Launch, etc. In essence, I am trying to detect whether the sentence in question talks about a merger between two organizations, an acquisition by an organization, a person or an organization winning an award, the launch of a new product, and so on. To do this, I have made custom models based on the basic NLTK package model for each keyword, and I am trying to improve the classification by dynamically tagging/updating the models with related keywords, synonyms, etc. to improve the detection capability. Also, given a set of sentences, I am presenting the user with the detected categorization and asking whether it is correct or wrong; if wrong, the user supplies the correct categorization and also identifies the entities. So the objective is to first classify the sentence into a category and, additionally, detect the named entities in the sentence based on the category. The idea is to be able to automatically retrain the models based on this feedback to improve performance over time, with as little manual intervention as possible. For the sake of this project, we can assume that user feedback is accurate. The problem I am facing is that NLTK only allows fixed-length entities while training, so, for example, a two-word award is detected as two separate awards. What should my approach be to solve this problem? Is there a better NLU toolkit (even a commercial one) which can address this? It seems to me that this would be a common AI problem and that I am missing something basic. Would love to have your input on this. Thanks & Regards, Camillelola
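The standard way around the fixed-length-entity problem in the post above is BIO (begin/inside/outside) tagging, which NLTK's chunking utilities also support: each token carries a B-, I-, or O tag, so a two-word award like "Academy Award" decodes to a single entity. A minimal, library-free sketch of the decoding step (the example tokens and labels are made up for illustration):

```python
# Decode BIO-tagged tokens into multi-word entity spans, so that
# ("Academy", "B-AWARD") followed by ("Award", "I-AWARD") becomes
# one AWARD entity instead of two.

def bio_to_entities(tagged_tokens):
    """Group (token, bio_tag) pairs into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for token, tag in tagged_tokens:
        if tag.startswith("B-"):
            if current:                       # close the previous entity
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)             # continue the open entity
        else:                                 # "O" tag ends any open entity
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

tokens = [("TechCorp", "B-ORG"), ("won", "O"), ("the", "O"),
          ("Academy", "B-AWARD"), ("Award", "I-AWARD"), (".", "O")]
print(bio_to_entities(tokens))
# → [('TechCorp', 'ORG'), ('Academy Award', 'AWARD')]
```

Training a tagger to emit B-/I-/O labels per token (rather than whole entities) sidesteps the fixed-length restriction entirely; NLTK, spaCy, and most commercial NLU services work this way.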
  6. Artificial Intelligence is one of the hottest technologies currently. From work colleagues to your boss, chances are that most (yourself included) wish to create the next big AI project. Machine Learning and Artificial Intelligence are revolutionizing the way game developers work. Packt wants to learn more about what Artificial Intelligence (AI) means for people who work in the tech industry and wants to help developers understand how AI will impact them today and tomorrow. Take the AI Now 2018 Survey (shouldn’t take more than 2 minutes and all responses are anonymous). There's a huge discount for any Packt eBook or Video once you complete it!
  7. Attention colony managers, you may have expected the "Answer to the Ultimate Question of Life, the Universe and Everything" from update 42, or have you ever sheltered a passing hitchhiker in one of your outposts? Anyway, what you get instead is of course just another bunch of problems which will make the colonizing business so much more interesting...

    TL;DR
      • Catastrophes let colony shares drop
      • Riots in cities and native camps
      • Security Complex
      • New item: Soma
      • New city center animation
      • Anti-aliasing
      • Biome & forest diversification
      • Fixes & improvements

    Catastrophes Let Colony Shares Drop
    For example, a persisting epidemic in another company's colony lowers that colony's stock price. Use this opportunity to buy their shares at a discount! Riots, fires, hackings, floods, tornadoes, asteroids, meteors and other incidents have similar effects. Threats like Assimilators or Xrathul will also cause a drop. Your ability to artificially trigger these natural catastrophes through temples and items makes price drops an interesting prelude to the next strategic equity buying spree and the takeover of other colonies. The free market makes it possible.

    Riots in the Colony
    Riots are now a real threat to your colony! This is what happens when your colonists are unhappy for a long time. Under-supply of resources or negative external influences leads to a poor quality of life. If these circumstances continue for too long, people's misfortune grows into anger and a riot develops. Once an uprising breaks out, it is too late to fight its causes. A riot can be pacified with the use of Soma and prevented by a Security Complex or a frontier tower. But the easiest thing to do is to provide good colony living conditions in the first place.

    Security Complex
    This building stations police units to maintain order in the colony. Riots of dissatisfied colonists in the surrounding area are prevented. This ensures peace and quiet, although it comes with additional costs. The facility monitoring upgrade scans production facilities in the surrounding area and prevents incidents. The cyber security upgrade monitors the information networks in surrounding buildings and prevents hacker attacks.

    Soma
    We could finally synthesize Soma, the famous drug from Aldous Huxley. This is what the packing slip says: "To avoid major mood swings that can lead to negative moods, people regularly take Soma, a drug that has a mood-lifting and stimulating effect and is also used as an aphrodisiac. Unlike alcohol, it has no side effects at the usual dosage and is produced synthetically." To sum it up: it raises the general quality of life and helps against riots.

    New City Center Animation
    A new age of in-game animations awaits us! Starting with the colony center base, which expands in various parts that screw, bend and turn out of the ground as the landing capsule comes down. The worker drones are now stationed on landing pads from which they lift off to build and repair.

    Anti-Aliasing
    Might sound a bit weird, but we had to disable this basic graphics feature for ages due to framework issues. What it adds to the game's look is awesome: lovely, calm and soft edges everywhere, especially when you zoom out from the planet. Have a look.

    Biome & Forest Diversification
    Ralph sent us very detailed feedback with improvement suggestions to make the environment and nature more realistic (thanks!). Based on this we decided to further diversify the biomes and forests:
      • Forests in fertile areas are often better at binding CO2; this is for example the case for rain forest and bog forest
      • Big fish swarms are more common in cold areas
      • Industrial farming and fishing reduce fertility
      • Steppe is bad ground for forest growth, but quite good for farms and especially resistant, so fertility decreases only slowly
      • Rain forest ground, in contrast, is easily damaged and loses its fertility fast

    Fixes & Improvements
      • Added achievements for Edora, Iqunox and Thera and for reaching 40 points in the campaign
      • Workshop queue is only available for workshops of level 2 now
      • Camera position is restored when loading a savegame
      • Prioritization stores at least 10% of a resource's production; this amount is now shown in the list of consumed resources
      • Fossil resources below 10% will decrease a fossil power plant's productivity (up to 20%)
      • Fixed bugs in the highscore list
      • Newly unlocked items are highlighted in the inventory
      • Switched background music compression from ADPCM to WMA streaming
      • Fixed game freeze and window disappearing problems
  8. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of this. Let me know what you think or if I'm missing something. Artificial Intelligence: Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves. Core Premise: (very dense, take a minute to soak this in) Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience can be created if the neural topology of an intelligent being is replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets. Design: Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more weighted towards rewarding actions and away from displeasurable ones. Any time an action causes a displeasure to go away or brings a pleasure, that neural pathway will be reinforced. 
If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn. A SIMPLE ANN is just a single cluster of connected neurons. Each neuron is a “node” which is connected to nearby neurons. Each neuron receives inputs and generates outputs. A neuron's outputs feed into connected neurons; when a neuron receives enough inputs, it itself fires and activates downstream neurons. So, a SIMPLE ANN receives input and generates outputs which are a reaction to the inputs. At the end of each neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome. The SIMPLE ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We’ll call this our “Artificial Brain” (AB). Motivation, motivators (rule sets): All creatures have a “desired state” they want to achieve and maintain. Think about food. When you have eaten and are full, your state is at an optimally desired state. As time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior. 
Rule 1: Every creature has a desired state they are trying to achieve and maintain. Some desired states may be unachievable (ie, infinite wealth) Rule 2: States are changed by performing actions. Actions may change one or more states at once (one to many relationship). Rule 3: “Motive” is created by a delta between current state (CS) and desired state (DS). The greater the delta between CS and DS, the more powerful the motive is. (Is this a linear graph or an exponential graph?) Rule 4: “relief” is the sum of all deltas between CS and DS provided by an action. Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief. Rule 6: Some actions are a means to an end and can be chained together (action chains). If you’re hungry and the food is 50 feet away from you, you can’t just start eating. You first must move to the food to get within interaction radius, then eat it. Q: How do we create an action chain? Q: How do we know that the action chain will result in relief? A: We generally know what desired result we want, so we work backwards. What action causes desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A; So we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards. Q: How does long term planning work? Q: What is a conceptual idea? How can it be represented? A: A conceptual idea is a set of nodes which is abstracted to become a single node? Motivators: (Why we do the things we do) Hunger Body Temperature Wealth Knowledge Power Social Validation Sex Love/Compassion Anger/Hatred Pain Relief Fear Virtues, Vices & Ethics Notice that all of these motivators are actually psychological motivators. 
That means they happen in the head of the agent rather than being physical motivators. You can be physically hungry, but psychologically you can ignore the pains of hunger. The psychological thresholds would differ per agent. Therefore, all of these motivators belong in the “brain” of the character rather than all being attributes of the agent's physical body. Hunger and body temperature would be physical attributes, but they would also be “psychological tolerances”.

Psychological Tolerances:

    {motivator} => 0 [------------|-----------o----|----] 100
                     A            B           C    D    E

A - This is the lowest possible bound for the motivator. B - This is the lower threshold point for the motivator; if the current state falls below this value, the desired state begins to affect actions. C - This is the current state of the motivator. D - This is the upper threshold point for the motivator; if the current state exceeds this value, the desired state begins to affect actions. E - This is the highest bound for the motivator. The A & E bound values are fixed and universal. The B and D threshold values vary by creature; where you place them can make huge differences in behavior. Psychological Profiles: We can assign a class of creatures a list of psychological tolerances and assign their current state to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the inputs into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state will drive the ANN, which drives actions, which changes the state of the psychological profile, which creates a feedback loop of reinforcement learning. Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions. 
Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry; feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so they'd be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-system-style AI, we create a machine learning type of AI. Challenges: I have never created a working artificial neural network type of AI. Experimental research and development: The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI. Learning by Observation: Our intelligent character doesn’t necessarily have to perform an action themselves to learn about its consequences (reward vs regret). If they watch another character perform an action and receive a reward, the intelligent character creates a connection between an action and its consequence. Exploration Learning: A very important component to getting a simple ANN to work most efficiently is to get the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we’ll want to generate a new connection at random to a nearby neuron. Exploration Scheduling: When all other paths are terrible, the new path becomes better and we “try it out” because there’s nothing better. If the new pathway happens to result in a positive outcome, suddenly it gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is that we will have a sub-optimal behavior pattern which generates some results, but they’re not the best results. We’d use the same neural pathway over and over again because it is a well travelled path. 
Exploration Rewards: In order to encourage exploring different untravelled paths, we gradually increase the “novelty” reward value for taking that pathway. If traveling this pathway results in a large reward, the pathway is highly rewarded and may become the most travelled path. Dynamic Deep Learning: On occasion, we’ll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an “island” and must die. When we follow a neural pathway, we are looking at two costs: The connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weight decrease over a long period of time. If a path weight reaches zero, we break the connection and our brain “forgets” the neural connection. Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed. We will want to speed up the development. If a child is born to two parents, those parents will rapidly increase the neural pathways of the child by sharing their own pathways. This is one way to "teach". Thus, children will think very much like their parents do. Other characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be spread. So, a character will generally share the most interesting knowledge they have. Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character. So, we have to have at least a trained base preset for a brain. This is consistent with biological brains because our brains have been pre-configured through evolutionary processes and come pre-wired with certain regions of the brain being universally responsible for processing certain input types. 
The training method will be rudimentary at first, to get something at least passable, and it can be done as a part of the development process. When we release the game to the public, the creatures are still going to be training. The creatures which had the most “success” will become a part of the next generation. These brain configurations can be stored on a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance to have a random mutation. When the game completes, if there were any particular brains which were more successful than the current strain, we select it for “breeding” with other successful strains so that the next generation is an amalgamation of the most successful previous generations. We’ll probably begin to see some divergence and brain species over time? Predisposition towards Behavior Patterns via bias: Characters will also have slight predispositions which are assigned at birth. 50% of their predisposition is innate to their creature class. 25% is genetically passed down by parents. 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This will skew the weightings of a developing ANN a bit more heavily to favor particular actions. This is what will create a variety in interests between characters, and will ultimately lead to a variety in personalities. We can create very different behavior patterns in our AB’s by tweaking the amount of pleasure and displeasure various outputs generate for our creature. The brain of a goblin could derive much more pleasure from getting gold, so it will have strong neural pathways which result in getting gold. AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with. Interactable objects can be used to interact with other interactable objects. 
Characters are considered interactable objects. The AI has a sense of ownership of various objects. When it loses an object, that is a displeasurable feeling; when it gains an object, that is a pleasurable feeling. Stealing from an AI will make it unhappy, and it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another will transfer ownership of the objects. There is no "intrinsic value" to an object; the value of an object is based on how much the AI wants it compared to how much it wants the other object in question. Learning through Socialization: AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, it will stop the conversation and leave (terminating condition). If a threat is nearby, the AI will be very interested in it and will share it with nearby AIs. If a player has hurt or killed a townsfolk, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if that player did nothing to aggravate it.
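The motive/relief rules in the design above (Rules 1-6) are concrete enough to sketch in code. This is an illustrative toy, not an implementation from the post; the state names, numbers, and actions are all made up:

```python
# Sketch of Rules 3-5: motive is the gap between current state (CS) and
# desired state (DS); relief is how much an action closes those gaps;
# the agent picks the action with the greatest total relief.

def motive(current, desired):
    return abs(desired - current)                      # Rule 3

def relief(action_effects, current, desired):
    # Rule 4: sum over affected states of how much the action closes each gap
    total = 0.0
    for state, delta in action_effects.items():
        before = motive(current[state], desired[state])
        after = motive(current[state] + delta, desired[state])
        total += before - after
    return total

def choose_action(actions, current, desired):
    # Rule 5: among competing motives, pick the action with greatest relief
    return max(actions, key=lambda a: relief(actions[a], current, desired))

current = {"hunger": 80, "warmth": 50}   # higher hunger = further from DS
desired = {"hunger": 0, "warmth": 50}
actions = {
    "eat":        {"hunger": -60},
    "light_fire": {"warmth": +10},       # warmth is already at DS
}
print(choose_action(actions, current, desired))  # → eat
```

Action chains (Rule 6) would then be a search backwards from the desired result through learned action-causes-action links, with each chain scored by the relief of its final state.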
  9. Unity has jumped into the world of machine learning solutions with their open beta release of the Unity Machine Learning Agents SDK (ML-Agents SDK). Released on GitHub, the SDK brings machine learning to games, allowing researchers and developers to create intelligent agents that can be trained using reinforcement learning, neuroevolution, or other machine learning methods through a Python API. Features include:
      • Integration with Unity Engine
      • Multiple cameras
      • Flexible multi-agent support
      • Discrete and continuous action spaces
      • Python 2 and 3 control interface
      • Visualizing network outputs in the environment
      • TensorFlowSharp agent embedding (experimental)
    Learn more about these features and more at the Unity blog announcement.
  10. Hey guys, I'm starting to get really interested in AI and especially machine learning. However, I can't find a good place to start. I have looked on the internet for tutorials, but they are all useless to beginners because they all end up assuming that you already have a lot of knowledge about machine learning and that you know what you are doing. For me (an absolute beginner), it is hard to understand what they are saying. I have tried to make my own AI by just playing around with some maths and the basics of machine learning that I already know, like neurons, weights and biases. For the most part my little mini projects work, but they are not proper AI. Could anyone please recommend some tutorials for beginners that I could watch? My main programming language is Python, but I am starting to learn Java and C#. I have already looked at bigNeuralNetwork's tutorial; at the beginning it was great and I understood everything, but then halfway through I didn't know what was going on.
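For what it's worth, the "neurons, weights and biases" mentioned above boil down to very little code: a single artificial neuron is just a weighted sum of its inputs plus a bias, pushed through an activation function. A minimal sketch (the numbers are arbitrary):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum + bias, squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 3))  # sigmoid(0.4) ≈ 0.599
```

A network is just layers of these, and "learning" means nudging the weights and biases so the outputs get closer to the targets; most beginner tutorials are ultimately explaining variations of this one line.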
  12. Hi, I'm trying to build an effective AI for the Buraco card game (2 and 4 players). I want to avoid the heuristic approach: I'm not an expert at the game, and in the last games I developed that way I obtained mediocre results. I know the Monte Carlo tree search algorithm; I've used it for a checkers game with decent results, but I'm really confused by the recent success of other machine learning options. For example, I found this answer on Stack Overflow that really puzzles me. It says: "So again: build a bot which can play against itself. One common basis is a function Q(S,a) which assigns to any game state and possible action of the player a value -- this is called Q-learning. And this function is often implemented as a neural network ... although I would think it does not need to be that sophisticated here." I'm very new to machine learning (this would be reinforcement learning, right?) and I only know a little about Q-learning, but it sounds like a great idea: I take my bot, make it play against itself, and then it learns from its results... the problem is that I have no idea how to start! (Nor whether this approach would even be a good one.) Could you help me find the right direction? Is the Q-learning strategy a good one for my domain? Is Monte Carlo still the best option for me? Would it work well in a 4-player game like Buraco (2 opponents and 1 teammate)? Is there any other method that I'm ignoring? PS: My goal is to develop an enjoyable AI for a casual application; I could even consider letting the AI cheat, for example by looking at the players' hands or the deck. Even with this, ehm, permission I would not be able to build a good heuristic, I think. Thank you guys for your help!
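The quoted Q(S,a) idea can be seen working on a game small enough for a plain table. Buraco's state space is far too large for this (that is exactly where the neural-network approximation comes in), but self-play on the take-1-2-or-3 game of Nim shows the mechanics in a few lines. Everything below is a hypothetical simplification — the alternating-sign credit pass is closer to a Monte Carlo update than textbook one-step Q-learning:

```python
# Tabular self-play learning on Nim: whoever takes the last stick wins.
# Q maps (sticks_left, action) to an estimated value for the player to move.
import random

ACTIONS = (1, 2, 3)
Q = {}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # step size, discount, exploration rate

def choose(sticks, greedy=False):
    legal = [a for a in ACTIONS if a <= sticks]
    if not greedy and random.random() < epsilon:
        return random.choice(legal)      # explore sometimes during training
    return max(legal, key=lambda a: Q.get((sticks, a), 0.0))

def train(episodes=20000):
    for _ in range(episodes):
        sticks = random.randint(4, 21)
        history = []                     # both "players" share the same table
        while sticks > 0:
            a = choose(sticks)
            history.append((sticks, a))
            sticks -= a
        reward = 1.0                     # the last mover won
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -gamma * reward     # flip perspective every ply

random.seed(0)
train()
# With enough episodes the greedy policy tends toward the known optimum:
# leave a multiple of 4 sticks for the opponent.
print(choose(5, greedy=True), choose(6, greedy=True), choose(7, greedy=True))
```

For Buraco the loop shape stays the same; the table is replaced by a network predicting Q from a hand/meld encoding, which is the "sophisticated" step the Stack Overflow answer alludes to.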
  13. sveta_itseez3D

    Web SDK for user-generated game avatars

    Hi folks! If you have ever tried to create a game character that looks exactly like you, this post is for you. We at itSeez3D have always been dreaming of a game where we can play as ourselves and look realistic, yet not uncanny. We have a pretty solid background in 3D scanning and computer vision, so last year we decided to work on our dream. Based on our previous 3D scanning experience, we've created a technology that allows you to create a lifelike 3D model of a face from one selfie. See our demo here: The process is fully automatic and takes less than one minute. We use deep learning to predict the 3D geometry of a face and head, which means each avatar is completely unique and realistic, as in Mass Effect: Andromeda :). Here are examples of 3D models you can get with our tech: https://skfb.ly/6p7w7. We've just released the beta version of the SDK and are inviting gamedev folks to participate. By signing up you can get access to our web API and then embed user-generated 3D avatars in your own games, applications, 3D worlds, etc. If you'd like to participate in the beta and integrate our tech into a game, please sign up at https://avatarsdk.com/. For any questions or feedback, feel free to contact us at support@itseez3d.com. We'd really appreciate your feedback! Thank you! itSeez3D Team
  14. Last year, the Halite competition was pretty fun! I didn't have as much time to spend on it as I wanted, but nevertheless. This year it looks even more fun, although I have even less time to spend on it. If you all want to give it a go, I can highly recommend it. The competition consists of a simple game with rules and a "starter kit" that lets you write a bot to play the game. The game servers then run the bots against each other and score your bot based on how well it does. In the beginning, getting a high score is not that hard, but as the competition goes on, the well-ranked competitors analyze how their bots are doing and improve their algorithms, whatever they are, so by the end it's quite tough! This is a great way to sharpen your programming skills in Python, C++, Java, Rust, or one of the many other supported languages. Let's show them some gamedev.net colors! https://halite.io/
  15. I am developing an RTS game involving a robber and cops, and I have designed a simple rule-based AI system for the robber. Goals of the robber AI:
  • Escape from the cops (the robber should not get caught)
  • Conduct the robbery
Current implementation: The robber AI uses a rule-based system, built from IF-THEN clauses, to escape from the cops. It uses knowledge of the cops' current positions. Future requirements: I am worried about the current architecture of this robber AI system; I have read that rule-based systems are inefficient and difficult to maintain. In the future, I want the robber AI to use some power-ups (to escape), and the AI should consider the power-ups used by the cops before making decisions. To conduct the robbery, the robber has to move to a specific location while escaping from the cops. Questions:
  • Will the current rule-based AI system be flexible enough (and easy to maintain) to meet the future requirements?
  • What are better alternatives to the existing rule-based system (decision trees, state machines, etc.)?
  • I have read people suggesting that rule-based systems are difficult to debug. Can anyone explain why?
I would appreciate any thoughts/suggestions on this topic. Thank you.
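One of the alternatives mentioned, a finite state machine, groups the IF-THEN logic by state instead of leaving it as one flat rule list — which is also why it is easier to debug: at any moment only one state's transitions can fire. A minimal sketch, where the states ("rob", "flee", "use_powerup") and the distance thresholds are hypothetical, not from the post:

```python
# A tiny FSM for the robber: each state owns its transition rules,
# so adding power-up behaviour later only touches the states involved.

class RobberFSM:
    def __init__(self):
        self.state = "rob"

    def update(self, nearest_cop_dist, has_powerup):
        if self.state == "rob":
            if nearest_cop_dist < 10:
                self.state = "flee"         # a cop got too close
        elif self.state == "flee":
            if has_powerup:
                self.state = "use_powerup"
            elif nearest_cop_dist > 25:
                self.state = "rob"          # safe again, resume the robbery
        elif self.state == "use_powerup":
            self.state = "flee"             # one-shot: spend it, keep running
        return self.state

fsm = RobberFSM()
print(fsm.update(nearest_cop_dist=30, has_powerup=False))  # rob
print(fsm.update(nearest_cop_dist=8,  has_powerup=False))  # flee
print(fsm.update(nearest_cop_dist=8,  has_powerup=True))   # use_powerup
```

Cop power-ups would then just become extra inputs to `update`, consulted only by the states that care about them, instead of new clauses threaded through a global rule list.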