
Search the Community

Showing results for tags 'R&D'.



Found 39 results

  1. Hi everybody, Xilvan Design has been building 3D games in Blitz3D since 2004, and we now present our official gaming pages (please click each link, download the games, and bookmark the pages): Lights of Dreams IV: Far Above the Clouds v10.37. Candy World II: Another Golden Bones v10.37. Candy Racing Cup: The Lillians Rallies v3.57. Candy World Adventures IV: The Mirages of Starfield v7.97. Candy to the Rescue IV: The Scepter of Thunders v8.07. Candy's Space Adventures: The Messages from the Lillians v18.37. Candy's Space Mysteries II: New Mission on the earth-likes Planets v8.75. New Xilvan Design Websites. Xilvan Design's Youtube Channel. Friendly, Alexandre L., Xilvan Design.
  2. Hi everyone! I need to transform a 32-bit PFM (HDR) file, reading pixel by pixel, into a usual LDR format and later write it into PPM and BMP files. Can someone please give me an equation or a snippet to solve this? Is tone mapping enough?
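     A common answer, hedged since requirements vary, is a tone-mapping operator followed by gamma encoding and quantization. A minimal per-pixel sketch using the Reinhard operator (the function name and the gamma default are my own choices, not from any particular library):

```python
def tonemap_pixel(r, g, b, gamma=2.2):
    """Map one linear HDR pixel to an 8-bit LDR pixel.

    The Reinhard operator c / (1 + c) compresses the dynamic range,
    then gamma encoding and rounding produce 0-255 values suitable
    for PPM or BMP output.
    """
    out = []
    for c in (r, g, b):
        c = max(c, 0.0)
        c = c / (1.0 + c)            # Reinhard tone mapping
        c = c ** (1.0 / gamma)       # gamma encode
        out.append(min(255, int(c * 255.0 + 0.5)))
    return tuple(out)
```

     For PPM you would then write the resulting 0-255 triples directly; BMP additionally needs BGR byte order and row padding.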
  3. Sean O'Connor

    R&D Evolving neural networks

    A long time ago I used to hang around this forum with the user name redtea. Red tea is an actual beverage that is delicious with milk and sugar; it is not a political persuasion. A few rednecks ran me off though, because red is a red flag to them. Kind of a funny and stupid story at the same time. Anyway, you can evolve neural networks, especially if you choose rather unusual activation functions that particularly suit evolution, such as the signed square: y = -x*x for x < 0, y = x*x for x >= 0. I don't think you would ever use those with backpropagation: https://groups.google.com/forum/#!topic/artificial-general-intelligence/4aKEE0gGGoA I also have a kind of associative memory that might be interesting for character behavior: https://github.com/S6Regen/Associative-Memory-and-Self-Organizing-Maps-Experiments Maybe AMScalar is the one to use.
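     The signed-square activation described above can be sketched as (a minimal illustration, not the poster's actual code):

```python
def signed_square(x):
    """Signed square: y = -x*x for x < 0, y = x*x for x >= 0.

    Monotonic, with a near-flat gradient around zero - one reason it
    suits evolutionary search better than backpropagation.
    """
    return x * x if x >= 0.0 else -(x * x)
```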
  4. I found a talk from GDC, now on YouTube, about procedural animation. The video. When I saw what could be done with it, I knew I had to learn it. Any tips or pointers to help a beginner build a strong grasp of procedural animation? I understand the 2D math so far and am ready for the 3D, but do you have tips or go-to resources? As I work through the 3D math, maybe I should go and make my own custom animator in DirectX, or is that a bit much? I even saw an online course on the robotics version of this. But I don't want to go off on a tangent or pick up some useless books when there are better ones.
  5. Hello. The GCN paper says that one SIMD engine can have up to 10 wavefronts in flight. Does that mean it can run 10 wavefronts simultaneously? If so, how? By pipelining them? AFAIK wavefronts are scheduled by a scheduler. How does the scheduler interact with the SIMD engine to make this possible? Do these 10 wavefronts all belong to a single instruction?
  6. Hello dear AI folk! I picked up a passion for game AI a while ago and have wanted to join a community for some time; this seems to be the right place to get cozy. For my bachelor's degree, I am supposed to write a research essay (about 20 pages). I chose the title "Comparing Approaches to Game AI under Consideration of Gameplay Mechanics". In the essay I make bold statements about the state of related work in the field, and I want to know whether they hold up to reality. What better way to find out than asking the community of said reality? Disclaimer: this is the first research paper I have ever done and it is a work in progress; I feel that I suck at this. The main question is: does the above statement hold up to reality? But please don't shy away from corrections or general advice. I would also like to share the completed work as soon as it is done, if anyone is interested. I am in dire need of feedback that I can personally grow from. Cheers
  7. thecheeselover

    Zone generation

    Subscribe to our subreddit to get all the updates from the team! I have integrated the zone separation with my implementation of the Marching Cubes algorithm, and I have now been working on zone generation. A level is separated in the following way:

      • Shrink the zone map to exactly fit an integer number of Chunk2Ds, which are 32² m².
      • For each Chunk2D, analyse all zones inside its boundaries and determine all possible heights for Chunk3Ds, which are 32³ m³. Imagine this as a three-dimensional array backed by a hash map: we are trying to figure out all the Chunk3D keys for a given Chunk2D.
      • Create and generate a Chunk3D for each height found.
      • Execute the Marching Cubes algorithm to assemble the geometry of each Chunk3D.

    In our game, we want levels to look and feel like a certain world. The first world we are creating is the savanna. Even though each Chunk3D is generated using 3D noise, I made a noise module that maps 3D noise into 2D, so that 2D perturbation can be applied to the terrain. I also tried some funkier procedural noises: an arch! The important thing with procedural generation is to have a certain level of control over it. With the new zone division system, I have achieved a minimum on that path for my game.
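     The "keys for Chunk3Ds" step might be sketched like this, assuming, hypothetically, that zone analysis yields a (min_height, max_height) range per 2D column (the function and parameter names are mine, not the author's):

```python
CHUNK_SIZE = 32  # Chunk3Ds are 32^3 m^3

def chunk3d_keys(column_heights):
    """Given (min_height, max_height) pairs for the 2D columns of a
    Chunk2D, return the set of vertical chunk indices (the Chunk3D
    'keys') needed to cover every column."""
    keys = set()
    for lo, hi in column_heights:
        first = int(lo) // CHUNK_SIZE
        last = int(hi) // CHUNK_SIZE
        keys.update(range(first, last + 1))
    return keys
```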
  8. thecheeselover

    Zone division

    Subscribe to our subreddit to get all the updates from the team! A friend and I are making a rogue-lite retro procedural game. As in many procedural rogue-lite games, it will have rooms to complete, but also the notion of zones. The difference between a zone and a room is that a zone is open air whilst a room is not. Rooms are connected mainly by corridors, while zones are mostly naturally connected or separated by rivers and mountains. Because we want levels with zones to be generated, we need to tame the beast that is procedural generation. How can we generate each zone itself and also clearly divide them? Until now, I had only been using the Java noise library called Joise, which is the Java community port of JTippetts' Accidental Noise Library. I needed the zone data to be generated with basis function modules, i.e. Perlin noise, but in contrast I needed a more structured approach for the zone division. The Joise library does have a cell noise module that is a Worley noise. It looks like this depending on its 4 parameters (1, 0, 0, 0): Using math modules, I was able to morph that noise into something that looks like a Voronoi diagram. Here's what a Voronoi diagram should look like (never mind the colors; the important parts are the cell edges and the cell centers): A more aesthetic version: The Worley noise that I had morphed into a Voronoi-like diagram did not include the cell centers, did not include metadata about the edges, and was not deterministic enough, in the sense that sometimes the edges would be around 60 pixels wide. I then searched for a Java Voronoi library and found one called Voronoi-Java. With this, I was able to generate simple Voronoi diagrams: Relaxed: 1 iteration. Relaxed: 2 iterations. The relaxation concept is actually Lloyd's algorithm, fortunately included within the library. Now how can I make that diagram respect my level generation mechanics?
    Well, if we can limit an approximate number of cells within a certain resolution, that would be a good start. The biggest problem here is that relaxation reduces the number of cells within a restricted resolution (contrary to the global resolution), so we need to keep that in mind. To do that, I define a constant for the total number of sites / cells. Here's my code:

        private Voronoi createVoronoiDiagram(int resolution) {
            Random random = new Random();
            Stream<Point> gen = Stream.generate(() ->
                new Point(random.nextDouble() * resolution, random.nextDouble() * resolution));
            return new Voronoi(gen.limit(VORONOI_SITE_COUNT).collect(Collectors.toList()))
                .relax().relax().relax();
        }

    A brief pseudo-code of the algorithm would be the following:

        Create the Voronoi diagram
        Find the centermost zone
        Select X zones while there are zones that respect the selection criteria
        Draw the border map
        Draw the smoothed border map

    The selection criteria are applied to each edge that is connected to only one selected zone. Here are the selection criteria:

      • Is connected to a closed zone, i.e. all its edges form a polygon
      • Has two vertices
      • Is inclusively within the resolution's boundaries

    Here's the result of a drawn border map! In this graph, I have a restricted number of cells that follow multiple criteria, and I know each edge and each cell's center point. To draw the smoothed border map, the following actions must be taken: emit colors from already drawn pixels, then apply a Gaussian blur. Personally, I use the JH Labs Java Image Filters library for the Gaussian blur. With color emission only: With color emission and a Gaussian blur: You may ask yourself why we created a smoothed border map. There's a simple reason: we want the borders to be gradual instead of abrupt. Let's say we want rivers or streams between zones. This gradual border will allow us to progressively increase the depth of the river, making it look more natural in contrast with the adjacent zones. All that's left is to flood each selected cell and apply that to a zone map.
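     For readers unfamiliar with the relax() calls, one Lloyd relaxation step can be sketched as follows. This is a discrete approximation on sampled points for illustration, not the Voronoi-Java library's exact implementation:

```python
def lloyd_step(sites, samples):
    """One Lloyd relaxation step, approximated on a point sample:
    assign each sample point to its nearest site, then move every
    site to the centroid of the samples assigned to it."""
    assigned = {i: [] for i in range(len(sites))}
    for p in samples:
        nearest = min(range(len(sites)),
                      key=lambda i: (sites[i][0] - p[0]) ** 2
                                    + (sites[i][1] - p[1]) ** 2)
        assigned[nearest].append(p)
    new_sites = []
    for i, site in enumerate(sites):
        pts = assigned[i]
        if pts:
            new_sites.append((sum(x for x, _ in pts) / len(pts),
                              sum(y for _, y in pts) / len(pts)))
        else:
            new_sites.append(site)  # keep empty cells where they are
    return new_sites
```

     Each iteration pulls the sites toward the centroids of their cells, which is why relaxed diagrams have more evenly sized cells.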
  9. Hello, just wanted to share the link to the latest upgrade of Conservative Morphological Anti-Aliasing, in case someone is interested. It is a post-process AA technique in the same class of approaches as FXAA & SMAA, but focused on minimizing changes to the input image - that is, applying as much anti-aliasing as possible while avoiding blurring textures or other sharp features. Details are available at https://software.intel.com/en-us/articles/conservative-morphological-anti-aliasing-20 and full DX11 source code under the MIT license is available at https://github.com/GameTechDev/CMAA2/ (compute shader implementation; DX12 & Vulkan ports are in the works too!)
  10. Hi folks, I am learning artificial intelligence and trying out my first real-life AI application. What I am trying to do is take various sentences as input and classify each into one of X categories based on keywords and the 'action' in the sentence. The keywords are, for example, merger, acquisition, award, product launch, etc., so in essence I am trying to detect whether the sentence in question talks about a merger between two organizations, an acquisition by an organization, a person or an organization winning an award, the launch of a new product, and so on. To do this, I have made custom models based on the basic NLTK package model for each keyword, and I am trying to improve the classification by dynamically tagging/updating the models with related keywords, synonyms, etc. to improve detection. Also, given a set of sentences, I present the user with the detected categorization and ask whether it is correct or wrong; if wrong, what the correct categorization is, and also to identify the entities. So the objective is to first classify the sentence into a category and, additionally, detect the named entities in the sentence based on the category. The idea is to automatically re-train the models based on this feedback, improving performance over time with as little manual intervention as possible. For the sake of this project, we can assume that user feedback is accurate. The problem I am facing is that NLTK only allows fixed-length entities during training, so, for example, a two-word award is detected as two awards. What should my approach be to solve this problem? Is there a better NLU toolkit (even a commercial one) that can address it? It seems to me that this would be a common AI problem and I am missing something basic. Would love your input on this. Thanks & Regards, Camillelola
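     Not an NLTK-specific fix, but the standard way around fixed-length entities is BIO (begin/inside/outside) tagging: the first token of an entity is tagged B-AWARD and each continuation token I-AWARD, so a decoder can group a two-word award into a single span. A minimal decoder sketch (tag names and labels are illustrative):

```python
def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, label) spans, so a
    two-word award becomes one entity instead of two."""
    entities, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                      # close the previous span
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(token)            # continue the open span
        else:                                # "O" or a stray tag
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities
```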
  11. Imagine you are Valve or id or DICE, and your team is going to create a new engine to run your company's main titles for the next decade. You want an engine that is innovative and flexible, can knock socks off next year, and will still impress gamers 5 years down the road. Would someone in this position use helper libraries like GLUT, GLFW, or GLM, or would they create their own libraries for the project and do the Win32 API work manually?
  12. Armaan Gupta

    Let's build cool things!

    Hi there, my name is Armaan, and the game studio my company started, The Creative Games, is looking for talented people to join. Art, development, code, audio, design... whatever you do, we would love to have you. As of now we're working to get more people, to really get a diverse set of inputs. We're not focused on one "type" of game; really just whatever we as a team want to make. If you want to be part of a team building cool things, email me at armaangupta01@gmail.com or message me on Discord (Guppy#7625). Can't wait to have you join!
  13. Hello everyone, lately I published my latest games, and I want people to watch more of my videos: Subscribe & watch our videos HERE! Remember that our games are free to play for the moment: Surf our website HERE! Hope you'll appreciate my work! Friendly, Alexandre Lecours, Xilvan Design.
  14. Animating characters is a pain, right? Especially those four-legged monsters! This year, we will be presenting our recent research on quadruped animation and character control at SIGGRAPH 2018 in Vancouver. The system can produce natural animations from real motion data using artificial neural networks. Our system is implemented in the Unity 3D engine and trained with TensorFlow. If you are curious about such things, have a look at this:
  15. Hello! Last year at my job we implemented HDR output (as in HDR10 / BT.2020 / ST.2084 PQ back-buffers) in one of our games on the consoles, which do support HDR10 over HDMI. HDR-compatible hardware (monitors, televisions) has been around for a year now, with varying quality. I wonder whether HDR hardware output is already exposed in the PC drivers? Windows 10? Vulkan? DX11? DX12? Which vendors? For those unfamiliar, I'm talking about outputting an HDR signal to HDR hardware (using r10g10b10a2_unorm + PQ back-buffers, or better). Thanks, .P
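     For reference, the ST.2084 PQ transfer function mentioned above maps absolute luminance to the nonlinear signal. A sketch using the published constants (the function name is mine):

```python
def pq_encode(nits):
    """ST.2084 (PQ) opto-electrical transfer function: absolute
    luminance in cd/m^2 -> nonlinear signal value in [0, 1]."""
    m1 = 2610.0 / 16384.0
    m2 = 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2 = 2413.0 / 4096.0 * 32.0
    c3 = 2392.0 / 4096.0 * 32.0
    y = max(nits, 0.0) / 10000.0      # normalize to the 10,000-nit peak
    yp = y ** m1
    return ((c1 + c2 * yp) / (1.0 + c3 * yp)) ** m2
```

     By construction the curve reaches 1.0 exactly at the 10,000-nit peak, and 100-nit SDR white lands around 0.508 of the signal range.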
  16. I am about to start a PhD that will investigate ways of replicating creativity in the AI systems of simulated people in virtual environments. I will research which psychology theories and models to use in order to achieve this, with a focus on creative problem solving. The aim of this project is to create virtual characters and NPCs that can create new solutions to challenges, even if they have never encountered these before. This would mean that not every possible action or outcome would need to be coded for, so less development resources are required. Players would encounter virtual people that are not bound by rigid patterns of pre-scripted behaviour, increasing the replay value and lifespan of games, and the accuracy of simulations. I am looking for companies or organisations that would be interested in working with me on my PhD, and I think computer games companies might be the most likely. I am trying to think of ways in which this new AI system might benefit games companies, or improvements and new types of games that might be possible. I am on this forum to ask for your thoughts and suggestions please, so I can approach games companies with some examples. Thank you for your time and interest.
  17. BRENT ERICKSON

    R&D Advanced AI in Games?

    At the company I currently work for, we have been working on a variety of AI projects related to big data, natural speech, and autonomous driving. While these are interesting uses of AI, I wonder about their application in real-time systems like games. Games can't tolerate large delays while sending data to the cloud, or complex calculations, and are also limited in the storage space that can be allocated to data. I am curious about the community's view of where complex AI could fit in gaming.
  18. Gourav Mishra

    R&D chatbot

    Please help me complete the code below; I am unable to use the defined function.

        import random
        from nltk.tokenize import word_tokenize

        GREETING_KEYWORDS = ("hello", "hi", "greetings", "sup", "what's up")
        GREETING_RESPONSES = ["'sup bro", "hey", "*nods*", "hey you get my snap?"]

        def check_for_greeting(sentence):
            """If any of the words in the user's input was a greeting,
            return a greeting response."""
            words = word_tokenize(sentence.lower())
            # The original `if words in GREETING_KEYWORDS` compared the
            # whole token list against the tuple; test each word instead.
            if any(word in GREETING_KEYWORDS for word in words):
                return random.choice(GREETING_RESPONSES)
            return None

        user_input = input("User said: ")
        response = check_for_greeting(user_input)
        if response:
            print(response)
  19. If you have CROWDFUNDED the development of your game, which of the following statements do you agree with? 1. I went out of my way to try to launch my game by the estimated delivery date 2. I made an effort to launch my game by the estimated delivery date 3. I was not at all concerned about launching my game by the estimated delivery date ------------------------------------------------------------------------------- Hi there! I am an academician doing research on both funding success and video game development success. For those who have CROWDFUNDED your game development, it would be extremely helpful if you could fill out a very short survey (click the Qualtrics link below) about your experiences. http://koc.ca1.qualtrics.com/jfe/form/SV_5cjBhJv5pHzDpEV The survey would just take 5 minutes and I’ll be happy to share my findings of what leads to crowdfunding success and how it affects game development based on an examination of 350 Kickstarter projects on game development in return. This is an anonymous survey and your personal information will not be recorded. Thank you very much in advance!
  20. Hi, currently I'm working on a project in which an AI team of NPCs must attack a squad of 4 characters controlled by the player. The problem I have is that all the info I've found about AI fighting a player relates to attacking a single character. In this particular scenario the attack rules change because the AI must be aware of four characters. I'm curious whether anyone knows of a paper about this particular scenario. Thanks.
  21. I am a new game dev and I need the help of you (the experts). While making my game I had one main problem: the player moves his mouse to control the direction of a sword that his character swings against other players, and I don't know how to program the hand to move according to the mouse. I would be grateful if someone could give me a helping hand on how to code it, or a general idea of how this could be programmed in Unity ^^.
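     Not an authoritative Unity answer, but the underlying math is usually just atan2 of the vector from the hand to the cursor: in Unity you would first convert the mouse position to world space (e.g. with Camera.main.ScreenToWorldPoint) and then drive the hand's rotation from the resulting angle. A language-agnostic sketch of the math:

```python
import math

def sword_angle(hand_pos, mouse_world_pos):
    """Angle in degrees the sword hand should point so it tracks the
    cursor: the direction of the vector from hand to mouse."""
    dx = mouse_world_pos[0] - hand_pos[0]
    dy = mouse_world_pos[1] - hand_pos[1]
    return math.degrees(math.atan2(dy, dx))
```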
  22. Hi, recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the BitSquid and the OurMachinery blogs about how they architect their renderers to support multiple platforms (which is what I am looking to do!). I have gotten so far, but I am unsure about a few things they say in the blogs. This is a simplified version of how I understand their setup:

      • Render Backend - one per API, used to execute the commands from the RendererCommandBuffer and RendererResourceCommandBuffer
      • Renderer Command Buffer - platform-agnostic command buffer for creating Draw, Compute and Resource Update commands
      • Renderer Resource Command Buffer - platform-agnostic command buffer for creation and deletion of GPU resources (textures, buffers etc.)

     The render backend has arrays of API-specific resources (e.g. VulkanTexture, D3D11Texture, ...) and each engine-side resource has a uint32 as the handle to the render-side resource. Their system is set up for multi-threaded usage (building command buffers in parallel and executing RenderCommandBuffers, not resources, in parallel). One thing I would like clarification on: in one of the blog posts they say "When the user calls a create-function we allocate a unique handle identifying the resource". Where are the handles allocated from? The RenderBackend? How do they do it in a thread-safe way that doesn't kill performance? If anyone has any ideas or any additional resources on the subject, that would be great. Thanks
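     One common pattern, sketched here as an idea rather than BitSquid's or OurMachinery's actual code, is a slot-plus-generation handle allocator guarded by a very short critical section (production engines often replace the lock with an atomic free-list pop):

```python
import threading

class HandleAllocator:
    """Thread-safe allocator for uint32-style resource handles: the
    low bits index a slot, the high bits hold a generation counter so
    a stale handle to a reused slot can be detected."""
    GEN_SHIFT = 20  # 20 bits of slot index, 12 bits of generation

    def __init__(self):
        self._lock = threading.Lock()
        self._generations = []  # current generation per slot
        self._free = []         # recycled slot indices

    def allocate(self):
        with self._lock:
            if self._free:
                slot = self._free.pop()
            else:
                slot = len(self._generations)
                self._generations.append(0)
            return (self._generations[slot] << self.GEN_SHIFT) | slot

    def release(self, handle):
        slot = handle & ((1 << self.GEN_SHIFT) - 1)
        with self._lock:
            # bump the generation so old handles to this slot go stale
            self._generations[slot] = (self._generations[slot] + 1) & 0xFFF
            self._free.append(slot)

    def is_live(self, handle):
        slot = handle & ((1 << self.GEN_SHIFT) - 1)
        with self._lock:
            return (handle >> self.GEN_SHIFT) == self._generations[slot]
```

     Because allocation only touches a counter and a free list, the lock is held for a very short time; engines that need more throughput typically shard allocators per resource type.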
  23. Hi, tile-based renderers are quite popular nowadays, like tiled deferred, Forward+, and clustered renderers. There is a presentation about GPU-based particle systems from AMD. What particularly interests me is the tile-based rendering part. The basic idea is to leave the rasterization pipeline when rendering billboards and do it in a compute shader instead, much like Forward+: determine tile frustums, cull particles, sort front to back, then render them until the accumulated alpha value reaches 1. The performance results at the end of the slides seem promising. Has anyone ever implemented this? Was it a success; is it worth doing? The front-to-back rendering is the most interesting part in my opinion, because overdraw can be eliminated for alpha blending. The demo is sadly no longer available.
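     The "render until the accumulated alpha reaches 1" loop can be sketched per pixel like this (scalar colors for brevity; the early-out is exactly what removes the overdraw):

```python
def composite_front_to_back(particles, threshold=0.999):
    """Blend (color, alpha) particles sorted front to back, stopping
    once accumulated alpha crosses the threshold: everything behind
    that point is occluded and never shaded."""
    color, alpha = 0.0, 0.0
    drawn = 0
    for c, a in particles:
        color += (1.0 - alpha) * a * c   # front-to-back "under" operator
        alpha += (1.0 - alpha) * a
        drawn += 1
        if alpha >= threshold:
            break                        # remaining particles are hidden
    return color, alpha, drawn
```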
  24. Scanmaster_k

    R&D AI Book Bundle

    Just in case you missed it, Humble Bundle is currently selling a book bundle on AI and machine learning. Link. The bundle includes one book for UE4 and a lot of general books.
  25. Hi guys, so I have an AI game in mind and I was wondering what the best ways or techniques are to sell the idea of my prototype and proof of concept. Should I make a trailer? Should I make a magazine-style book? Should I make a video talking about my game like they do in Kickstarter campaigns? Any feedback would be highly appreciated!