Currently I'm working on a project where an AI team of NPCs must attack a squad of 4 characters controlled by the player. The problem I have is that all the info I've found about AI fighting a player assumes the AI is attacking a single character. In this particular scenario the attack rules change, because the AI must be aware of four characters at once. I'm curious if anyone knows of a paper about this particular scenario.
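For context, the closest generic technique I've found so far (not from a paper) is utility-based target selection: score each of the four characters every decision tick and attack the highest-scoring one. A minimal sketch of what I mean — the struct fields and weights are placeholders I invented, not from any source:

```cpp
#include <array>
#include <cstddef>

// Hypothetical snapshot of one squad member, as seen by the AI.
struct Character {
    float health;    // remaining hit points
    float distance;  // distance from this NPC
    float threat;    // how much damage this character has dealt recently
};

// Score a candidate target: prefer dangerous, weak, nearby characters.
// The weights are arbitrary and would need tuning per game.
float TargetScore(const Character& c) {
    return 2.0f * c.threat - 0.5f * c.health - 1.0f * c.distance;
}

// Pick the index of the best target in the player's squad of four.
std::size_t PickTarget(const std::array<Character, 4>& squad) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < squad.size(); ++i) {
        if (TargetScore(squad[i]) > TargetScore(squad[best])) {
            best = i;
        }
    }
    return best;
}
```

Something like this at least gives each NPC a defensible reason to focus one of the four, but I'd still love a paper that treats the squad case properly.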
I'm a new game dev and I need help from you (the experts).
While making the game I ran into one main problem: in my game, the player moves the mouse to control the direction of a sword that their character swings against other players. The problem is that I don't know how to program the hand to move according to the mouse.
I would be grateful if someone could give me a helping hand with how to code it, or a general idea of how this could be programmed in Unity ^^.
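From what I can tell so far, the core of it is just computing the angle from the hand pivot to the mouse with atan2 and rotating the pivot by that angle each frame. Here is a language-agnostic sketch of the math (plain C++, not actual Unity code; in Unity I assume the equivalents are Camera.main.ScreenToWorldPoint(Input.mousePosition) for the mouse position, Mathf.Atan2 for the angle, and setting the pivot transform's rotation):

```cpp
#include <cmath>

// Minimal 2D vector; in Unity this would be a Vector2 in world space.
struct Vec2 { float x, y; };

// Angle in degrees to rotate the hand/arm pivot so it points at the
// mouse. 0 degrees points along +x; atan2 handles all four quadrants.
float HandAngleDegrees(Vec2 pivot, Vec2 mouse) {
    float dx = mouse.x - pivot.x;
    float dy = mouse.y - pivot.y;
    return std::atan2(dy, dx) * 180.0f / 3.14159265f;
}
```

Is that roughly the right idea, or is there a more standard way to do this in Unity?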
Recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the BitSquid and OurMachinery blogs about how they architect their renderer to support multiple platforms (which is what I am looking to do!).
I have gotten fairly far, but I am unsure about a few things they say in the blogs.
This is a simplified version of how I understand their system to be set up:
Render Backend - One per API, used to execute the commands from the RendererCommandBuffer and RendererResourceCommandBuffer
Renderer Command Buffer - Platform agnostic command buffer for creating Draw, Compute and Resource Update commands
Renderer Resource Command Buffer - Platform agnostic command buffer for creation and deletion of GPU resources (textures, buffers, etc.)
The render backend has arrays of API specific resources (e.g. VulkanTexture, D3D11Texture ..) and each engine-side resource has a uint32 as the handle to the render-side resource.
Their system is set up for multi-threaded usage: command buffers are built in parallel, and RenderCommandBuffers (but not resource command buffers) are executed in parallel.
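If I've understood the handle indirection correctly, it looks roughly like this (my own sketch; VulkanTexture is just a stand-in struct, not their actual layout):

```cpp
#include <cstdint>
#include <vector>

// Engine-side: an opaque 32-bit handle; game code never sees API types.
struct TextureHandle { std::uint32_t index; };

// Backend-side stand-in for an API-specific resource (it would hold a
// VkImage/VkImageView in a Vulkan backend, an ID3D11Texture2D* in D3D11).
struct VulkanTexture { int dummy; };

// One backend per API; the handle is just an index into its arrays.
struct VulkanBackend {
    std::vector<VulkanTexture> textures;

    VulkanTexture& Resolve(TextureHandle h) { return textures[h.index]; }
};
```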
One thing I would like clarification on: in one of the blog posts they say, "When the user calls a create-function we allocate a unique handle identifying the resource."
Where are the handles allocated from? The RenderBackend? And how do they do it in a thread-safe way that doesn't kill performance?
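My own guess — pure speculation on my part, not something the blogs say — is that an atomic counter plus a mutex-guarded free list of recycled handles would be cheap enough, since allocation only happens on create/destroy and never on the hot render path:

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

// Guessed scheme: fresh handles come from a lock-free atomic counter;
// destroyed handles go into a free list guarded by a mutex that is only
// taken on create/destroy, never while submitting draw commands.
class HandleAllocator {
public:
    std::uint32_t Allocate() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (!free_.empty()) {
                std::uint32_t h = free_.back();
                free_.pop_back();
                return h;
            }
        }
        // fetch_add is atomic, so many threads can allocate concurrently.
        return next_.fetch_add(1, std::memory_order_relaxed);
    }

    void Free(std::uint32_t h) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(h);
    }

private:
    std::atomic<std::uint32_t> next_{0};
    std::mutex mutex_;
    std::vector<std::uint32_t> free_;
};
```

Is that roughly what they do, or is there a cleverer lock-free scheme I'm missing?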
If anyone has any ideas or any additional resources on the subject, that would be great.
Disclaimer: I was tempted to put this in Game Design, but it heavily references AI so feel free to move it to wherever it needs to be if this is the wrong place.
I had this idea that a game could use machine learning to support its AI and make itself a really challenging opponent (nothing new there), but also to tailor its style of play based on feedback given by human players.
Using an RTS as a classic example, let's say you prefer to play defensively. You would probably get more enjoyment out of games where the opponent was offensive, so as to challenge your play style. At the end of each match, you give a quick bit of feedback in the form of a score ('5/10 gold stars', for example) that pertains to your AI opponent's style of play. The AI then uses this to evaluate itself, cross-referencing its score against previous scores in order to determine the optimal 'preferable' play style.
Then I got to thinking about two issues with the idea:
1) The human player might not be great at distinguishing feedback about their opponent's play style from feedback about their game experience in general.
2) In a multiplayer context, players could spam/abuse/troll the system by leaving random/erroneous feedback.
Could you get around this by evaluating the player without them knowing it? That is, could some other data recorded from the way a player acts in a game be used to approximate their enjoyment of a particular opponent's play style without being too abstract? For example, 'length of time played, somehow referenced against length of time directly engaged with the AI opponent', etc.
Do any existing games work like this? I just came up with it when I saw a stat that Call of Duty has been played for a collective 25 billion hours or something - which made me think that would be the perfect bank of experience to teach a deep learning system how players interact with a game.