Showing results for tags 'Session'.

Found 12 results

  1. Unity's keynote was last night and showcased their new features and announcements. You can watch the keynote here or view the highlight summary below.

New Features for Next Generation Unity
Unity announced two new key features to help developers make their AAA dreams a reality: the Scriptable Render Pipeline, which places total control of a powerful new rendering pipeline in the hands of artists and developers, and the C# Job System, a high-performance, multithreaded system that makes it possible to take advantage of multi-core processors without the heavy programming headache.

Rendering Updates
Using Unity's Scriptable Render Pipelines, developers with AAA aspirations can take advantage of the High Definition Render Pipeline (HDRP), while the Lightweight Render Pipeline (LWRP) offers a combination of beauty and speed that also optimizes battery life for mobile devices and similar platforms.

Universal GameDev Challenge (hey, Unity, we already have a "GameDev Challenge"!)
Universal Games and Unity Technologies announced an unprecedented partnership to launch the Universal GameDev Challenge. Unity developers are invited to submit a game design document and pitch video with the goal of being chosen to build a PC game leveraging one of five Universal-owned intellectual properties: Back to the Future, JAWS, Battlestar Galactica, DreamWorks Voltron Legendary Defender, and Turok. Winners will take home a combined $250,000 cash prize along with a consulting contract with Microsoft, Unity, and Universal.

Will Wright's Proxi Challenge
Legendary game maker Will Wright, creator of The Sims, SimCity, and Spore, announced that he is creating his first game in ~10 years in Unity. In addition to the announcement of his new game, he is looking to hire a 3D artist to help build it. After less than two months, judges will pick two grand prize winners, who will be flown out to San Francisco for a chance to interview with Will Wright and his team.

Machine Learning
Unity continues to work to democratize machine learning and lower the barriers to entry so that any developer can make machine learning an integral part of their game development. The latest release, ML-Agents 0.3, brings features including Imitation Learning, which lets your system learn from real people playing your games and can be trained to adjust to your players.

Smaller Runtime for More Devices
Unity created a portable, smaller runtime that can run natively on lightweight hardware such as entry-level devices, wearables, or the web. For web-based deployment, the compressed core runtime is 72KB; this, along with compression techniques, makes for small file sizes.

XR Development
Building on the knowledge and expertise from other platforms, Unity continues to evolve its tools, runtime optimizations, and workflow to allow developers to bring their XR content to the widest array of available hardware and ecosystems. Unity announced that developers can now expect optimized support for Magic Leap One, Oculus Go, and Daydream Standalone.
  2. A Tale of Three Data Schemas
Ludovic Chabant, Frostbite

What's a Data Schema?
Definition: the formal structure of the data that your system works with. Examples include Unity scripts, Unreal UClasses, blind data in Maya, table columns in an RDBMS, and the data definition format (DDF) in Frostbite, which defines data separately from logic. With schemas we define, e.g., type "SkinnedMeshAsset" with property "float lodScale", and the data shows up in the tools. The data can be stored (Frostbite stores it in Perforce) and accessed at runtime in code.

Designing data schemas requires a lot of consideration. If you bias too much toward runtime, you might end up with data that is flattened or nicely packed but that content creators can't use. If you make it too geared toward users, it might be too OO, take up too much memory, not be cache friendly, etc. But you don't always have to use a single data schema.

The Three Data Schemas
1. Runtime data schema: the game loads it in memory; programmers use it in code.
2. Storage data schema: used to keep things around somewhere. Not to be confused with data format - the schema is about what you save, the format is how.
3. Tools schema: normally the same as the storage schema, with possibly a few small differences. The data is going to exist in the content creator's mind, and how you visualize the schema (node graph, property grid, etc.) will determine how the user views it. Schema design is closely related to UX design.

Each data schema has a specific purpose: runtime is for reading, storage is for writing, tools are for the user.

Four tales from Frostbite games

1. Euler is your friend. Cotes is not.
In the UI, a property grid shows translation, Euler angles, and scale. In code it's a 4x4 matrix; on disk, a 4x3 matrix. The property grid is more user friendly, but on-the-fly conversion is not always enough. In the animation editor, angular values are entered in degrees but stored and used in radians, and conversions and precision errors can be annoying: the user enters 45 degrees, it converts to radians internally, and the UI shows 44.99999.
Solution: store degrees on disk, radians at runtime, and display degrees. Expose user-friendly data to content creators.

2. The best of both worlds
Cinematic tools have a lot of animation curves. The curve schema presented to content creators is a list of keyframes; the storage schema is an array of keyframes, each item a struct with floats for time, value, tangents, etc. This is efficient for the tool side: an OO way to represent curves, optimized for editing them. The runtime uses a layout that is more efficient for traversals, memory, etc. Another example is the asset pipeline: the storage and tools data schema is a straightforward relational asset tree, while the runtime is a mix of flattened structures and string pools. Use different schemas to solve different problems and optimize for each use case.

3. It's All in Your Mind
Gameplay cameras are a camera state machine with camera modes. First UI iteration: camera directors and modes open in different document tabs. Problem: too many tabs open. Second UI iteration: open camera modes inside the same document tab as the camera director. Problem: content creators now think a camera mode is "owned" by its camera director. Solution: change the storage schema to enforce ownership. You might think this was a UX problem, but the point of the tale is that if you find a UX problem, it might actually be a problem in the schema.

4. Data Finds a Way
Opening a document converts storage to tools schema; cooking converts tools schema to storage; live editing converts tools to runtime. Live editing is often implemented as "set <objectid>.<property> = <value>". This is wrong - it assumes the tools and storage schemas are the same as the runtime schema.
Storage: class Widget { string frobName; }
Runtime: uint frobName;
Combine the two:
meta string frobName; // meta expands to conditional compilation (#if !defined FINALORCONSOLE)
uint frobName;
Meta fields are used by a good amount of pipeline code. Abstract the pipeline code so it can run at runtime, use conditional compilation to include tools/storage schemas in runtime schemas, and conditionally include the appropriate pipeline in the runtime to transform data on the fly during live edits.

What to Do With All of This
At the start of design, you usually begin with the storage schema - it is the only one that is persistent, and also the most expensive to change. Change the runtime? Just update pipeline code. Change storage? There is lots of existing data to change, and you have to upgrade it.
1. Start with the storage schema; look at DDF changes during code review.
2. Build your tools around it.
3. Write your runtime code around it. If the data is not optimal:
   1. Adjust the runtime schema and code as needed.
   2. Write pipeline code to transform the data.
   3. Jump here directly if the storage schema is obviously non-runtime friendly.
4. Use conditional compilation to mix schemas.
5. Don't forget the live editing cases.
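The degrees/radians tale can be sketched in a few lines. This is a minimal illustration (not Frostbite code; the float32 rounding helper is an assumption standing in for a game engine's single-precision storage) of why round-tripping through radians surprises users, and how keeping degrees in the tools/storage schema avoids it:

```python
import math
import struct

def to_f32(x):
    # Round a Python double to single precision, as a game runtime would store it.
    return struct.unpack('f', struct.pack('f', x))[0]

# Naive approach: the tool converts the user's degrees to radians for storage,
# then converts back for display. Rounding makes the UI show ~44.99999/45.00000x
# instead of the exact 45 the user typed.
user_degrees = 45.0
stored_radians = to_f32(math.radians(user_degrees))
redisplayed = math.degrees(stored_radians)      # close to, but not exactly, 45.0

# Schema-per-purpose approach: tools/storage keep degrees (what the user typed),
# and only the runtime schema holds radians.
stored_degrees = user_degrees                   # storage schema: degrees
runtime_radians = math.radians(stored_degrees)  # runtime schema: radians
ui_display = stored_degrees                     # tools schema: degrees, exact
```

The point is not the conversion itself but that each schema stores the representation its consumer needs: the user's exact input survives in storage, while the runtime still gets radians.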
  3. This session was held at Epic's booth. The speaker was Alan Willard.

It is currently very challenging to adjust existing material attributes, and developers can break other shaders or characters that are using them. Material Layering is a new way to combine materials in a stack, which builds out the correct material graph without needing to build the node graph by hand. There are two new asset types used to do this:
  • Material Layer
  • Material Layer Blend
Functionally, these behave similarly to Material Functions, but the new asset types also enable you to create child instances, which you could not do with Material Functions. Material Layer assets have a default input node which pipes base Material Attributes in from the Material. Material Layer Blend assets have two default input nodes which give access to the Material Attributes from the layers above and below.

New material layers are built and structured like Photoshop layers and can be easily modified and moved in the stack. Assets are also easily shareable, and an artist can pick and choose any of the layers. This allows artists to make changes to one character without having to worry that it might impact other characters. Drag-and-drop is available for moving stacked layers, and all layer attributes and values are located in the side panel for easy modification. You can also turn layers on and off in the editor to try variations.

The full release of the material layering system will be in 4.20, but an experimental version is in 4.19. Watch this preview video:
  4. Speaker: Alexandra M. Lucas, Content Writer | Microsoft Cortana

The session began with a quick run-down of the green-skinned space babe trope, a quick reminder of Mass Effect's main context, and some basics about the Asari. The green-skinned space babe trope is a recognizable sight in science fiction: a scantily clad, green- or blue-skinned, hyper-feminized and hyper-sexualized woman who is depicted as sexually insatiable but almost infantile in her ignorance of human love. Mass Effect begins in the year 2183, and the player's character, Commander Shepard (either gender), is fighting the mechanical Reapers to save all organic life in the galaxy. The Asari are a matriarchal, monogender (though typically feminine) alien race in the Mass Effect franchise who live to be more than 2000 years old. They are graceful, skilled in diplomacy, and can utilize incredible biotic powers. They value individual self-actualization, pacifism, and community, and are presented as a sex-positive culture.

Alexandra's deconstruction of the Asari's empowerment depends on a comparison of the Asari to the classic triple goddess. The Asari's life stages consist of maiden, matron, and matriarch, while the classic triple goddess consists of maiden, mother, and crone. Alexandra emphasized that the Asari maiden's focus on self-actualization and exploration measures her value in terms of her intelligence, while the maiden goddess' focus on her virginity and the goal of finding a husband measures her value solely according to her future relationship with her husband. For the Asari matron, sex is optional, fun, and consensual while her purpose in life is found elsewhere, but the mother goddess is defined solely by her ability to reproduce. Lastly, the Asari matriarch subverts the crone's ugliness by valuing mature beauty, and the crone's doom-filled predictions are replaced with wisdom that helps the community.

Alexandra continued by giving examples of characters in each of the Asari life stages:
  • Maidens include Dr. Liara T'Soni and the Asari nightclub dancers. While the dancers embrace consensual sexuality and experimentation, and demonstrate financial independence, they are typically cloaked in shadow to appear naked, share the same body type, are typically nameless, are not prominently featured in quests in any meaningful way, and have minimal lines or character development. They are presented as objects of the male gaze rather than characters with their own agency.
  • Matrons include Aria T'Loak and Morinth. While both Aria and Morinth may seem to conform to some stereotypes, the context for their actions and the variety in female Asari representation prevent either character from being tokenized or stereotyped and protect the illusion of the characters' agency.
  • Matriarchs include Samara. While Samara wears revealing clothing, it is evident that her outfit was a sex-positive decision made by the character, and it combats the de-sexualization of older women.

Alexandra went on to provide five suggested focus areas for improving representation in games:
  1. Vary character metrics such as age, race, ethnicity, body type, sex, gender, gender identity, sexual/romantic orientation, and ableness.
  2. Experiment with the societies represented in games, not just the people! Consider experimenting with pronouns, extremes and absolutes, reversals, and alternative norms and customs (e.g. if homosexuality were the norm in a society).
  3. Prominently feature consent and agency correctly. This could be done through core mechanics, as a facet of character development, or as a societal feature.
  4. Seek inspiration and themes from media outside of games.
  5. Consult and pay (or hire!) experts. Avoid tokenism and stereotypes, and eliminate the excuse that inclusivity is "extra work" by incorporating it from the beginning of the design process.
Finally, Alexandra summarized the benefits of inclusive representation.
  5. Notes from Andrea Conover. Speaker: Frank Lantz | NYU Game Center

Frank began the talk by commenting that this material was speculative and a little bit weird, so there would be time for Q&A at the end. The big question: Do games have a significant impact on how we think? Frank began by addressing neurological research on the effects of gaming and noted that it is almost entirely low-level empirical exploration of perception, reflexes, peripheral vision, problem-solving, and other mechanical processes in the brain. However, the larger-scale emergent effects on the way gamers interpret and interact with the world remain unexamined. There are two main approaches to answering this question in the gaming industry, represented by Jane McGonigal and Eric Zimmerman. Jane McGonigal argues that games make us smarter and happier, as well as facilitating social interaction and helping us recover from trauma. Eric Zimmerman argues that the question is inappropriate: other forms of art like painting or novels don't need to justify themselves. Paradoxically, Eric does agree that gaming has an extremely positive effect on players.

However, negative effects have been noted recently. Zimbardo recently wrote a book arguing that games' emotionally impactful and hyper-sensitizing effects, without proper contextualization with real-world stimuli, were dangerous to players, particularly young men. The relationship between Gamergate and the alt-right demonstrated the strident anti-intellectualism, extreme hostility, delusional paranoia, sadism, and mob mentality that can be incited - a worst-case example of how games can negatively facilitate, if not outright influence, players. Frank acknowledged this connection could be an accident of history, but argued that, with the legacy of our industry on the line, we can't take that chance. So if games can implement positive change in the way people think, how?

For some time, the gaming industry has argued that one of the main positive effects of gameplay is systems literacy. If this is true, what is it about games that enables this new way of thinking? Frank used the example of a knot. Two cables in close proximity often knot, but it is rarely known why or how this occurs. In thinking about this process, Frank illustrated the possible configurations of the two cables as nodes, with connections representing the paths between arrangements. Two cables in a confined space randomly go through various configurations, and knots develop when an arrangement can be drifted into but not out of. This example demonstrates two concepts relating games and systems literacy:
  • Randomness: probability theory originated with mathematicians who played dice and cards and wanted to win more often. Modern probabilistic thinking is one example of gamer thinking.
  • State machines (as a simple example of computational thinking): digital games have long existed as the "PR of computers, inc.", as many games were developed to demonstrate computers' abilities.
Is this really a question about how computers and software affect our thinking? It seems to be more about anxiety surrounding logic, systems, and modernity at large than about video games as a distinct, problematic occurrence. To explain this leap in logic, Frank returned to neuroscience, arguing that written literacy, as old a phenomenon as it is, is so new to human history that we essentially hack the areas of our brain that process spoken language to learn to read. Perhaps we are centuries into a process of developing new literacies. Millennia-old debates surrounding the evils of written literacy (such as those posed by Plato) are surprisingly reminiscent of the modern debates about systems literacy. Either way, Frank noted, "we can't put the genie back in the bottle," so we might as well use it to further positive change.
Frank turned to Kegan's Stages of Development, which can be used to describe individual people or societies at large. This schema proposes that people go through five stages of personal development:
  1. Impulsive
  2. Self-interested
  3. Communal
  4. Systematic
  5. Fluid
The first two stages primarily apply to children, while communal mentalities are often associated with adolescence and pre-industrialized communities. Systematic individuals often adhere strictly to rules and systems, and the post-Enlightenment modern mentality is viewed as quintessentially systematic. Fluid mentalities, by contrast, subordinate systems to the process of meaning-making: rationality is treated as a powerful but not universally applicable tool for understanding life. This is not a rejection of rationality, but rather a meta-rationality that sets it within a larger context of lived experience.

Stage 4 can fail people by being too hard to reach, but also by being too brittle (in the face of the realization that no one system is sufficient), by working too well (devolving into an eternal pattern of circular self-justification), or by leading people to bail out of personal development between stage 4 and stage 5 into nihilism.

Games provide a unique opportunity to influence the development of individuals and society. No game claims to be the ultimate explanation for the world. Games embody logic but don't contain it. (Chess is an eminently logical game, but it would be illogical to spend your entire life playing chess.) Games exist outside ordinary life but also teach us about it. Therefore, Frank concluded, "This is OUR problem. We have a duty and responsibility to participate in the evolution of society" and "build a bridge to meta-rationality."
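Frank's knot example maps neatly onto a tiny state machine: cable configurations are nodes, random jostling walks the graph, and a knot is an absorbing state you can drift into but not out of. A toy sketch (the states and transition graph here are invented for illustration, not from the talk):

```python
import random

# Hypothetical configuration graph for two cables in a confined space.
# 'knotted' has no outgoing edges to other states: you can drift in,
# but never out - exactly the property Frank described.
transitions = {
    "loose":   ["loose", "crossed"],
    "crossed": ["loose", "crossed", "knotted"],
    "knotted": ["knotted"],          # absorbing state
}

def shake(state, steps, rng):
    """Randomly jostle the cables for a number of steps."""
    for _ in range(steps):
        state = rng.choice(transitions[state])
    return state

rng = random.Random(0)
# Shake long enough and nearly every run ends up knotted, even though
# no single step prefers knots - the asymmetry of the graph does it.
results = [shake("loose", 200, rng) for _ in range(100)]
knot_rate = results.count("knotted") / len(results)
```

This is the probabilistic/state-machine intuition the talk attributes to systems literacy: the knot is not caused by any one event, but by the shape of the state space.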
  6. The origins of Sonic: Sega wanted a character that could stand up to Mario and the NES in 1990. None of their current characters could compete on the consumer market. Sega hadn't valued the worth of characters on the market and had to shift its thinking toward characters being iconic and having a longer lifespan.

Why a hedgehog? They wanted a character that could deal damage by curling up like a ball and rolling. One of the original ideas was an old guy with a mustache (Eggman). They drew sample character boards and showed them to people in Central Park to survey them (hedgehog, porcupine, armadillo, dog, and Eggman). The hedgehog was the winner, and his conclusion was that it transcended race and gender. They wanted a character that kids could draw (simple lines), with a level of familiarity, an affectionate character. The key was also to be individual (the blue color and connected eyes). A challenge that needed to be overcome was linking the character to Sega - the reason Sonic is blue is that it was Sega's color. Characters have to have personality and depth. Sega wanted to be the challenger against the Nintendo juggernaut. They drew on nature and the environment in development, and wanted the character to have its own history and background. Sega was a company that imported pinball machines and jukeboxes, which fed into Sonic's background story: the author wrote the story that you see in Sonic 1, and that's why the emblem on the flight jacket goes well with the story.

Anecdotes about the game: Moving into 16-bit games brought smoother geography, the ability to create a sensation of speed, and large numbers of moving objects. Based on that, they came up with examples of what they could do with the new technology, in an iterative process to see how far they could take it. The biggest differentiator between Sonic and Mario: Mario was skill-based exploration of the world, whereas Sonic was more of an amusement park that helped you enjoy the world around you.

Rough level designs would start with rough design notes and then go back and forth with the programmer and artist to see how to accomplish them. Not every idea was possible at the time, and ideas had to be re-prioritized. Same with enemy design: he would draw rough notes of how enemies acted and discuss them with the team. They had an animation dance planned but couldn't figure out how to fit it into the story line, so no one ever got to see it.

Marketing efforts: Sonic 1 was bundled with the Genesis at $50 lower than the NES. In the Christmas sales war, Sega came out on top with 58% of 16-bit console sales.

Lessons learned: There is always going to be a way to compete with seemingly unbeatable opponents. Make technology your ally. Gather allies - get a band together. They were a team, but they all spoke their minds to make the best product; in a way they were solo acts, but they all played off of each other, and the energy was exponentially larger than what one person could do alone.
  7. TressFX is a real-time hair physics system from AMD (https://www.amd.com/en/technologies/tressfx). You can get the source code at https://github.com/GPUOpen-Effects/TressFX.

Hair is rendered in three steps:
  1. Fragment and partition the hair into areas
  2. Store fragment properties into buffers
  3. Full-screen render pass

TressFX has shipped in games; the first was Tomb Raider in 2013. When rendering realistic hair, we need to think about artist workflow, memory usage (the per-pixel linked list (PPLL) can be quite large), and performance budgeting. It is not a plug-and-play product: you need to integrate it into your workflow, engine, and tools, and you can customize and modify it to fit your needs.

Curly hair was unsupported. The solution was to separate the main hair from the dependent hair; hair now has more defined curls and can keep its style in place with movement in game. Tips: main and dependent hair should be the same length, the main hair should be straight enough, and the control points of the curve should be equidistant.

Multi-state transitions: strengthen the constraints the more the character is moving. Solve length challenges by changing the length between vertices. The hair didn't have enough layers; this was solved by adding in sections with different constraint strengths. Some of the hair does not need to be simulated, as that would be very time-consuming and expensive; multiple layers of hair and LOD can solve this. Hair closer to the camera gets full rendering; further away it is fixed.

Debug tools:
  • Showing only one section of hair
  • Showing only the main hair

Future work:
  • Reducing simulation and rendering time to support more hair
  • Better collision selection
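"Changing the length between vertices" refers to enforcing distance constraints between consecutive strand vertices, a standard position-based trick in hair simulators. A minimal sketch of the idea (not actual TressFX source, which is HLSL/C++; the function name and parameters are invented for illustration):

```python
import math

def enforce_strand_lengths(points, rest_length, iterations=4):
    """Relax each segment of a hair strand back toward its rest length.

    points: list of [x, y, z] vertex positions along one strand.
    The root (points[0]) is pinned to the scalp and never moves.
    """
    for _ in range(iterations):
        for i in range(len(points) - 1):
            a, b = points[i], points[i + 1]
            d = [b[k] - a[k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in d)) or 1e-9
            # Fractional deviation of the segment from its rest length.
            correction = (dist - rest_length) / dist
            if i == 0:
                # Root is pinned: move only the child vertex.
                for k in range(3):
                    b[k] -= d[k] * correction
            else:
                # Split the correction between the two vertices.
                for k in range(3):
                    a[k] += d[k] * 0.5 * correction
                    b[k] -= d[k] * 0.5 * correction
    return points

# A strand stretched to twice its rest length relaxes back.
strand = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [4.0, 0.0, 0.0]]
enforce_strand_lengths(strand, rest_length=1.0, iterations=20)
```

The "strengthen constraints when the character moves" tip would correspond to running more iterations (or weighting the correction more heavily) as velocity increases.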
  8. These are my notes from Simplest AI Trick in the Book during the AI Summit. This session has become an annual event where, in 30 minutes or less, five or so speakers from the AI Summit each provide a simple trick for common problems encountered in the world of game AI. This year the speakers were Steve Rabin, Sergio Ocio Barriales, Rez Graham, Emily Short, and Dave Mark (@IADaveMark).

Special note: Dave Mark is the moderator of the Artificial Intelligence forum here at GameDev.net. He's also the organizer of the AI Summit. I unfortunately haven't been able to say hello - Dave is a busy guy running the Summit - and his effort shows. In my opinion the AI Summit is repeatedly the best set of talks during the Monday/Tuesday Tutorials and Summits day, every single year. They know how to run a Summit. Dave Mark kicked off the session. On to the notes:

Steve Rabin: Hide and Shoot
Find places where you can hide and shoot. Previous talks used cover points marked in the level where the AI can get cover. But if an enemy is way off in the background without cover spots, how do we take into account obstructions - things that aren't even close to you? A simple trick to make it a little better: throw a quad around the enemy and test the corners for visibility. If two or more corners are hidden, it's a possible location to hide and shoot from. Sampling strategies, ordered worst to best:
  • Uniform sampling
  • Jitter: a randomized grid
  • Poisson disc: keep dropping samples, and as long as they aren't too close to existing ones they're good. Might do this by making concentric rings away from the player and sampling equidistantly along each ring (you sort of get Poisson this way).
This allows an enemy to hide behind things that aren't even close to you.

Sergio Ocio Barriales: Cheating
Game AI is an art: no formulas, no silver bullet. Observe, learn, and build your toolbox - little tricks you can use when facing different problems. Cheating is considered sacrilege to players, BUT cheating is actually a powerful tool, e.g. augmented perception capabilities: allow the AI to access data it wouldn't have access to through normal sensors, such as knowing exactly where the player is. When is it okay to cheat? Use cheating when it's the best solution - but don't be lazy, and don't let your AI look stupid. You're trying to build an experience, so use your tools to create that experience.

Rez Graham: Combining Normalized Values
Example: an agent wants to make a decision - drink a potion or attack the player. The attack has many decision factors or considerations, with scores that combine into a final score; drinking the potion has one score. You could multiply all the attack considerations together to get a final score, but the product of several sub-1.0 scores (e.g. five scores of 0.9) shrinks with every factor, so it isn't a fair comparison against the potion's single score.
Makeup Value Method: calculate a makeup value that adjusts the final score to account for the multiplication. But that doesn't always work either.
Solution: the geometric mean. Take the product of all the considerations and then the nth root of it. It's a multiplicative mean, so if any consideration is zero then the final score is zero.

Emily Short: Simple Trick for Procedural Storytelling
Pacing, aka the bane of interactive storytelling: how do you give someone else control over when things happen in your story? The ultimate example of this is LA Noire - Emily felt she was either ignoring the story or stuck with it, and wasn't getting more story when she wanted it. How do we talk when the player is listening and shut up when the player is doing their thing?
Trick: categorize the actions of both players and NPCs. Categories Emily uses:
  • Progressing - actions that make progress: making a major story choice, a new scene, a new location; changing world state in some way.
  • Exploring - searching a space, asking new questions, traveling, trying new puzzle combinations. The player is actively engaged and trying to get something back from the narrative experience.
  • Responding - answering questions or prompts, reacting to the other character.
  • Idling - checking the UI/inventory, repeating actions, exploring places already visited. We're used to NPCs idling, but if players are idling they're probably stuck, bored, or confused.
Several options:
  • Move the story on when the player has exhausted the content or is stuck.
  • Flag progress moves to the player explicitly - more of a UI thing than an AI thing, but it makes the player more intelligent, so it counts.
  • Add drama management into existing utility scores.
Basic idea: lead when the player is following; follow when the player is leading. Having categories of action helps your algorithms reason about which it should be.

Dave Mark: Floating
If an archer starts plunking the player with arrows while standing in place, it gets repetitive. If we were shooting the arrows, we'd move around a bit - NPCs should do it too.
Float ranged behavior: move a short random distance perpendicular to the direction of the threat (an individual enemy, or the nearby center of the enemies) - during shot cooldowns, or if line of sight is lost. Melee: move a short random distance perpendicular to the direction of the threat (an individual enemy) during cooldowns. Find the direction to the center of mass of the enemies, get the perpendicular, choose a range. (Code was shown; the float melee code is the same with different ranges. Weak attempt on my part to take a pic of the code sample.) Instead of choosing randomly between left and right, use an influence map to choose the direction.
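Rez's geometric-mean trick from the notes above can be sketched directly. This is a minimal illustration assuming normalized [0, 1] consideration scores (the example scores and the potion/attack framing follow the notes; the function name is mine):

```python
import math

def geometric_mean(scores):
    """Combine normalized [0, 1] consideration scores.

    Multiplicative: any zero consideration vetoes the whole action,
    and the nth root keeps an action with many considerations
    comparable to an action with few.
    """
    if not scores:
        return 0.0
    return math.prod(scores) ** (1.0 / len(scores))

# Attack is scored from five considerations, the potion from one.
attack_scores = [0.9, 0.9, 0.9, 0.9, 0.9]
potion_scores = [0.6]

raw_product = math.prod(attack_scores)   # ~0.59: unfairly punished for having more factors
attack = geometric_mean(attack_scores)   # back to ~0.9, a fair comparison
potion = geometric_mean(potion_scores)   # 0.6
```

With the raw product the attack would lose to the potion despite every consideration scoring 0.9; the geometric mean restores the comparison while preserving the zero-veto property.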
  9. Session notes from Beyond Bots: Making Machine Learning Accessible and Useful. Speakers: Joe Booth of Vidya Games Wolff Dobson of Google Danny Lange of Unity Joe Booth Every generation of machine learning algorithm is a 10x improvement on the last. A rpoblem that was taking 100 million steps a few years ago can now be learned in realtime on a robot in 20 minutes. Seeing these improvements every 6-12 months. If reinforcement learning with reward functions can apply to SW development then we may see a multi-magnitude improvement. Game AI is proof. Problem with reinformcenet learning math notation can be difficult to read/understand Build a mental model Get hands on as quickly as possible - recreate benchmarks Don't waste time with online ML courses Learn to skim read / re-read papers machinelearningmastery.com - high on engineering, low on math Wolff Dobson Technical Lead Developer Relations for Tensorflow Players are creating data and you're finding out what they do and don't like to do. Is this player going ot quit today? If a player buys X, what's the next thing they are likely to buy? Sentiment Analysis learning about your community and things they're saying can make game a more interesting experience for users open source models available. Can setup a server to do sentiment analysis or time series projection Getting Some Style Style transfer is the idea that you take the semantic structure of an image and a model trained on a piece of art, and you put them together. Comes out of research Google did on convolutional networks Style transfer can be done at runtime with modern GPUs. Game developers can use this to get different effects in their games. CycleGAN github/com/junyanz/CycleGAN - transformations in realtime. Can shorten art cycle times for prototyping. Generators Text generators are an example. RNN-generated tex.t Predict sequences of text. 
See Andrej Karpathy's "The Unreasonable Effectiveness of Recurrent Neural Networks".

Editors, artists, and tools are still necessary, but ML is allowing us to be more efficient and creative:
- github.com/david-gpu/srez - take a photo, downsample it, generate a new photo; new art can be generated
- Neural editing
- DCGAN - a Japanese engineer took a giant database of pop-star faces and generated more pop-star faces; this can be useful for generating content, ideas, etc.
- Inpainting - the network tries to fill in what's missing (see "Context Encoders: Feature Learning by Inpainting")

Project Magenta - a team at Google working on the intersection between art and machine learning.

Variational Autoencoders
- Take an image, encode it to a latent vector Z, then use a decoder to make the image back again, trying to reduce the loss between the two.
- Trained on the QuickDraw dataset from Google users: vector drawing of a cat -> encoder -> decoder -> reconstructed drawing of a cat.
- Latent space interpolation example: interpolate between a cat face and a pig.
- A model like this can also do image completion: start drawing something and it can create ideas. It works with fonts too.

It can also describe music: magenta.tensorflow.org/music-vae is a giant space of melodies, and you can interpolate between any two points in the dataset. Tools like MusicVAE might be an interesting way to become more efficient and creative.

ML can be a lot of work upfront for good results later, but these tools are available right now. Later, deep learning can help you create unique, compelling content. We will still need plenty of artists, programmers, and designers - machines can only know what they've been taught.

tensorflow.org
research.googleblog.com
g.co/magenta
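Latent space interpolation itself is just a linear blend of two encoded vectors. A minimal sketch in plain Python, with hypothetical names; in a real VAE the blended vector would be passed through the decoder to produce the in-between drawing:

```python
def lerp_latent(z_a, z_b, t):
    """Linearly interpolate between two latent vectors.

    z_a and z_b are equal-length lists of floats (e.g. the encoder outputs
    for a cat drawing and a pig drawing); t in [0, 1] slides from one to
    the other.
    """
    assert len(z_a) == len(z_b)
    return [(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)]
```

Sweeping t from 0 to 1 and decoding each blend is what produces the cat-to-pig sequence mentioned above.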
10. We'll update this list as we become aware of more presentations. You can also submit links to presentations.

Design
- Gameplay Setbacks As a Tool for Creating Impactful Moments - Scott Brodie (Heart Shaped Games)
- Rules of the Game: Five Further Techniques from Rather Clever Designers - Richard Rouse III
- Game Design Patterns for Designing Friendships - Daniel Cook (Spryfox, Lostgarden)
- Playtesting VR - Shawn Patton (Schell Games)
- Ultima Online at 20: Classic Game Postmortem - Raph Koster, Starr Long, Richard Garriott de Cayeux, and Rich Vogel

Production
- All the Families: The Making of Animation Throwdown - Peter Eykemans, Katrina Wolfe (Kongregate)
- Going Cross-platform: Is It Worth the Effort? - Tammy Levy (Kongregate)
- Marketing Judo: Leverage Your Time, Sell Your Game - Sam Coster (Butterscotch Shenanigans)
- Producer Bootcamp: Be the Best Producer for Your Team - Ruth Tomandl (Oculus Research)
- Remote Unity Studio in a Box - Ben Throop (Frame Interactive)
- Shipping Call of Duty at Infinity Ward - Paul Haile (Infinity Ward)

Programming
- Advanced Graphics Techniques Tutorial: The Latest Graphics Technology in Remedy's Northlight Engine - Tatu Aalto (Remedy Entertainment)
- Beyond Emitters: Shader and Surface Driven GPU Particle FX Techniques - Christina Coffin
- Circular Separable Convolution Depth of Field - Kleber Garcia (Electronic Arts)
- Cloth Self Collision with Predictive Contacts - Chris Lewin (Electronic Arts)
- Conemarching in VR: Developing a Fractal Experience at 90 FPS - Johannes Saam, Mariano Merchante (Framestore)
- Creativity of Rules and Patterns: Designing Procedural Systems - Anastasia Opara
- Democratizing Data-Oriented Design: A Data Oriented Approach to Using Component Systems - Mike Acton (Unity)
- Epic Presentations:
  - Building High End Gameplay Effects with Blueprints - Chris Murphy
  - Cinematic Lighting in Unreal Engine
  - Optimizing UE4 for Fortnite: Battle Royale: Part 1
  - Optimizing UE4 for Fortnite: Battle Royale: Part 2
  - Programmable VFX with Unreal Engine's Niagara
  - Creating Believable Characters in Unreal Engine
- GPU-Based Clay Simulation and Ray-Tracing Tech in 'Claybook' - Sebastian Aaltonen (Second Order)
- Khronos Group Presentations:
  - WebGL and Why You Should Target It
  - Standardizing all the Realities: A Look at OpenXR
- Linear Algebra Upgraded - Eric Lengyel (Terathon Software)
- Real-Time Ray-Tracing Techniques for Integration into Existing Renderers - Takahiro Harada (AMD)
- Rendering Technology in 'Agents of Mayhem' - Scott Kircher (Volition)
- Shiny Pixels and Beyond: Real-Time Raytracing at SEED - Johan Andersson, Colin Barre-Brisebois (Electronic Arts)
- Spline Based Procedural Modeling in 'Agents of Mayhem' - Chris Helvig, Chris Dubois (Volition)
- Terrain Rendering in 'Far Cry 5' - Jeremy Moore (Ubisoft)
- The Asset Build System of 'Far Cry 5' - Remi Quenin (Ubisoft Montreal)
- Tools Tutorial Day: Shipping 'Call of Duty' - Paul Haile (Infinity Ward)

Visual Arts
- Capturing Great Footage For Game Trailers - Derek Lieu

Independent Games Summit
- Building Games That Can Be Understood at a Glance - Zach Gage
- Porting Games to Consoles (YouTube) - Thomas O'Connor (PlayEveryWare)

User Experience Summit
- UI/UX of Creating Your Mobile Game in VR - Miranda Marquez (MunkyFun)
- Immersing a Creative World into a Usable UI - Steph Chow (Steph Chow Design)

Game Narrative Summit
- The Nature of Order in Game Narrative - Jesse Schell (Schell Games)

Other Content
- Houdini Foundations - SideFX
- The Transvoxel Algorithm (poster download) - Eric Lengyel
11. Speaker: Josh Scherr (Writer | Naughty Dog)

Naughty Dog's previous games followed a wide linear approach, which created the characters and story in a straightforward manner but allowed for little input or interaction from the player. In open world games, however, the player sets the pace of their gameplay, which means that quick developments in plot can lose urgency or emotional relevance to the player over time as they choose to do something else. Returning to the main story at a later point and having the characters rushing desperately toward their objective wrecks the player's immersion in the story. So, when planning Uncharted: The Lost Legacy, one question stood out: how can players be given the freedom to choose their gameplay experience without abandoning character development or completely blowing the budget?

Josh Scherr emphasized that regular communication between story and design was essential; each team needed to be flexible enough to adapt to progress and changes from the other team. Eventually, the story team developed the story macro, which functioned as a flexible, big-picture roadmap for the general story arc of the game. Josh Scherr streamlined their process for developing a main character in a non-linear space into 3 steps.

Step 1: Establish your main character. The Uncharted team had a slight advantage when they chose to base Lost Legacy around Chloe Frazer, since she was already an established character and a fan favorite in the franchise. However, since Chloe was known to prioritize her own self-preservation and provide a counterpart to Drake, the goal was to create character growth to the point that Chloe would decide to help another at her own expense.
After establishing a main character, it became evident that the plot of an open world game would not be linear enough to allow for traditional character development, since players can choose the order and pace in which they complete tasks (or, in Josh Scherr's words, "I'll rescue you! - Ooh, what's that shiny thing?"). One of the main ways characters develop that is not plot-based is through interactions with other characters, so choosing the right companion for Chloe was essential. Nadine Ross, another memorable character from Uncharted 4: A Thief's End, acted as a contrast for Chloe, which supplied a logical motivation for Chloe's development.

Step 2: Determine player goals. Open world games provide a vast number of possible tasks and objectives for players to complete, as well as the opportunity for these tasks and objectives to be completed in many different orders. So, story progress must be tied to the game's main goals, and all other dialogue must be neutral in tone so that it can be played in any order.

Step 3: Determine how this can be achieved.

Cutscenes: These were written first and designed to be independent of gameplay or the player's decisions. They were used sparingly, since important moments for the character have the most impact when the player is in control. However, cutscenes are good for displaying emotional moments because of the fixed camera angle and the level of detail that can be added to cinematics. In order to save time and money in production, the footprint of each fort was made the same so cutscenes could play in order at any location.

Scripted sequences: Players' pace, order, and approach were uncontrolled, so scripted sequences were rarely used, and only in the later part of the game, when the players' choices were more restricted.
Driving dialogue: Because there is no fixed camera (meaning no possibility for close-ups) and the dialogue had to be delivered over the sound of the jeep engine, this dialogue was by necessity less emotional and intimate than the cutscenes. Furthermore, these sets of dialogue can be interrupted by player actions, such as entering combat areas or other checkpoints. After determining the shortest possible travel time between two forts (main storyline objectives) to be 15-20 seconds, driving dialogue was split into three clips, each under that length: a primary clip containing essential information relevant to plot and character development, a secondary clip containing further useful but nonessential development, and a bonus clip of banter to advance Chloe and Nadine's relationship. Players would automatically hear the first clip, and if it were interrupted, it would resume at a natural time. Additional time passing led to the playing of the subsequent clips.

Level dialogue: This was used to provide supplementary storytelling and information, allowing for subtleties in the growing relationship between Chloe and Nadine. This also allowed Nadine's character to be given more background and a few quirks, such as her zoological interests, without cluttering the cutscenes with smaller details. Additional, optional dialogue led the character in a forced walk, which was not ideal but allowed for exposition and a little more development. While it's not guaranteed that every player will see it, most players see the option and are tempted to press the button.

All other dialogue was nonlinear, and therefore neutral in tone. However, some of the neutral dialogue was supplemented or replaced according to the player's place in the main storyline, to match Chloe and Nadine's current relationship.
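The primary/secondary/bonus driving-dialogue scheme can be pictured as a small clip queue. This is a hypothetical Python illustration, not Naughty Dog's actual system; in particular, the resume-after-interruption behavior here is a simplification of "resume at a natural time."

```python
class DrivingDialogue:
    """Plays primary -> secondary -> bonus clips over one drive.

    An interrupted clip is replayed at the next opportunity rather than
    skipped, so essential information is never lost; extra travel time
    naturally surfaces the less essential clips.
    """

    def __init__(self, primary, secondary, bonus):
        self.queue = [primary, secondary, bonus]  # ordered by importance
        self.index = 0

    def tick(self, interrupted=False):
        """Advance one clip-length of travel; return the clip played."""
        if self.index >= len(self.queue):
            return None  # out of dialogue for this leg of the drive
        clip = self.queue[self.index]
        if interrupted:
            # Combat or a checkpoint cut the clip off; don't advance,
            # so the same clip resumes after the interruption.
            return clip
        self.index += 1
        return clip
```

For example, a drive where combat interrupts the first clip would play: primary (interrupted), primary again, then secondary and bonus if the player keeps driving.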
Environmental storytelling was used occasionally, but it was dependent upon the characters commenting on the environment and providing interpretation for the player because of a general lack of expected familiarity with ruins in the Western Ghats on the part of the players. Partly due to the incredible amount of dialogue this strategy required and partly due to their own non-linear design process, hours and pages of excess dialogue were trimmed and cut from the final game. Furthermore, no specific software was used to keep track of dialogue, only the scripts themselves or the writers’ brains. So, Josh finished with a special shout-out to the audio team and the voice actors (as well as a brief lament that Claudia Black and Laura Bailey’s renditions of their lines in valley girl accents couldn’t be an unlockable language option).