Shaarigan

Member
  • Content Count

    517
  • Joined

  • Last visited

  • Days Won

    1

Shaarigan last won the day on July 11

Shaarigan had the most liked content!

Community Reputation

1113 Excellent

6 Followers

About Shaarigan

  • Rank
    Advanced Member

Personal Information

  • Role
    Artificial Intelligence
    Programmer
  • Interests
    Art
    Design
    Programming

Social

  • Github
    Shaarigan


  1. To achieve what I described above, you need to use a Semaphore, at least on Windows, because as I wrote, a Mutex on Windows is protected against self-locking. A thread that already holds the lock can't rely on being put to sleep when it acquires the same lock again on the same thread. That was my case: each thread holds its own lock in the locked state, and when it goes idle, instead of burning cycles it acquires the same lock again to deadlock itself on purpose and sleep indefinitely. Another thread then checks the lock state and fires a release, so the idle thread is notified that there is work to do again. That was my personal use case for that :)
  2. @maxset I had a concrete use case for a self-blocking lock in my ThreadPool/TaskScheduler. Idle threads should put themselves to sleep so they don't burn any CPU time, and I first implemented this with a self-locking mutex. The Debug build worked well, but I got failures in Release. It turned out that Mutexes on Windows do some kind of self-locking detection while a Semaphore doesn't, so that was my remark on this topic.
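    A minimal Win32 sketch of that parking pattern; Job, TryPopJob and PushJob are placeholders for whatever the task system actually uses:

    #include <windows.h>

    struct Job { void Run(); };   // placeholder job type
    Job* TryPopJob();             // hypothetical queue access
    void PushJob(Job* job);       // hypothetical

    // Starts at 0, i.e. "taken", so the first wait blocks until work arrives.
    HANDLE wakeSignal = CreateSemaphoreW(nullptr, 0, 1, nullptr);

    // Worker thread: drains the queue, then parks itself on the semaphore.
    // A mutex would not work here because the owning thread may re-acquire it
    // without blocking on Windows, so it would never actually go to sleep.
    DWORD WINAPI WorkerLoop(LPVOID)
    {
        for (;;)
        {
            while (Job* job = TryPopJob())
                job->Run();
            WaitForSingleObject(wakeSignal, INFINITE);   // sleep until signalled
        }
    }

    // Producer side: push new work, then wake the idle worker.
    void PushJobAndWake(Job* job)
    {
        PushJob(job);
        ReleaseSemaphore(wakeSignal, 1, nullptr);
    }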
  3. Shaarigan

    Using a game engine as a library

    It is not possible this way. To embed the game into the app you need at least two things: some kind of render canvas that can act as the render target for the game, and the game must provide a way to set that render target instead of creating its own window on launch. Unity 3D is closed source, so your only option is to use OS APIs to stick the UnityPlayer window to the app, listen to that window's events (resize, fullscreen, close) and react so the window always stays a child of the app window, but that is tricky and hackish. Unity itself has a launcher and a player that wrap everything from your game in their C++ code, embedding the Mono runtime to run the C# (or whatever you used to make your game), and don't expose anything to the outside. This is how all of the modern tools like Unity3D, Unreal Engine 4 or GameMaker work. Another way is to have the app launch your game, but it doesn't sound from your request like that is the goal.
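    If you go down that hackish OS-API route on Windows, a rough sketch of sticking the player window into the app could look like this (the window title is a placeholder, and you would still have to track resize/close events yourself):

    #include <windows.h>

    // Reparent an already running UnityPlayer window into our app window.
    void AttachGameWindow(HWND appWindow)
    {
        // The title/class depend on the actual player build; this is a placeholder.
        HWND gameWindow = FindWindowW(nullptr, L"MyUnityGame");
        if (!gameWindow)
            return;

        // Strip the standalone decorations and make it a child of the app window.
        LONG_PTR style = GetWindowLongPtrW(gameWindow, GWL_STYLE);
        style = (style & ~(WS_POPUP | WS_CAPTION | WS_THICKFRAME)) | WS_CHILD;
        SetWindowLongPtrW(gameWindow, GWL_STYLE, style);
        SetParent(gameWindow, appWindow);

        // Stretch it over the app's client area; repeat this on WM_SIZE.
        RECT client;
        GetClientRect(appWindow, &client);
        MoveWindow(gameWindow, 0, 0, client.right, client.bottom, TRUE);
    }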
  4. There are professional tools out there to maintain those decision trees and export them into any format you need to work with in your game. The simplest solution, as already mentioned, is the decision tree: a structure that is a collection of connected logic nodes, each of which may also have child nodes attached. When evaluating your tree you start with a reference to the root node and test its children. You then take the first child that returns true on validation as your next root node and repeat until you reach the end of the tree, and so the end of your game. There can also be action nodes in between, so run those nodes' functions to interact with your game.
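    A bare-bones sketch of such a node structure and the walk described above (all names are made up for illustration):

    #include <functional>
    #include <memory>
    #include <vector>

    struct DecisionNode
    {
        std::function<bool()> condition;   // "is this branch valid right now?"
        std::function<void()> action;      // optional action node behaviour
        std::vector<std::unique_ptr<DecisionNode>> children;
    };

    // Walk from the root: run the node's action, then descend into the first
    // child whose condition validates, until we run out of children.
    void RunTree(DecisionNode* node)
    {
        while (node)
        {
            if (node->action)
                node->action();

            DecisionNode* next = nullptr;
            for (auto& child : node->children)
            {
                if (!child->condition || child->condition())
                {
                    next = child.get();
                    break;
                }
            }
            node = next;
        }
    }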
  5. Shaarigan

    Supporting mods in Unity game?

    In one of my old games we stored our maps outside of Unity to edit and rapidly add new content. Those were decoupled into the raw map data used by the Unity Terrain, plus some description files stating where to place what kind of entity, including direction, size and whatever else was necessary. For code you can provide some kind of scripting language yourself, or use a known parser/lexer for an existing language, as long as there isn't any compiler magic or native assembly magic in there. This way you control where to allow code and where to keep your static game code safe from modification. You can then take the opcode-compiled script files and either run an interpreter written in managed code (which also works under IL2CPP) or compile those opcodes to IL using System.Reflection.Emit where a JIT is available.
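    A very small sketch of the interpreter route, written here as a C++-style dispatch loop for brevity; the same structure works in managed C# code, and the opcodes are made up:

    #include <cstdint>
    #include <vector>

    // Made-up opcodes; a real set would mirror your script language.
    enum class OpCode : uint8_t { PushConst, Add, CallGameFunction, Halt };

    struct Instruction { OpCode op; int32_t operand; };

    // Minimal stack machine stepping through a compiled mod script.
    void RunScript(const std::vector<Instruction>& script)
    {
        std::vector<int32_t> stack;
        for (const Instruction& ins : script)
        {
            switch (ins.op)
            {
            case OpCode::PushConst:
                stack.push_back(ins.operand);
                break;
            case OpCode::Add: {
                int32_t b = stack.back(); stack.pop_back();
                int32_t a = stack.back(); stack.pop_back();
                stack.push_back(a + b);
                break;
            }
            case OpCode::CallGameFunction:
                // Look up a whitelisted engine function by ins.operand; mods never
                // touch anything outside this table, keeping the static game code safe.
                break;
            case OpCode::Halt:
                return;
            }
        }
    }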
  6. ECS, if kept simple, can be thought of simply, so to answer your last question: it depends! If you have a rendering system that takes SpriteComponent instances, then you can attach multiple of them to the same entity. But if you have a system that relies on a strict one-to-one coupling of components, any system that takes data from component A and places processed data into component B, then you should avoid multiple instances of the same component on a single entity. I would never let people on my team nest components into each other: components are plain data, they can't have nested components, as those would break the ECS chain. Your problem/approach should target a different system, the render graph. Almost every modern engine these days has some kind of render graph built in that handles object hierarchy and scene contents, so if you want to display something on an action, add a new entity to the graph. The graph is also the first and only source for the render systems, as it contains every entity that is meant to be known in a scene and decides which entities are visible. It is a system as well as a data source. Why a render graph? Because it handles parent-child relations of entities and the rendering-relevant components, and makes it easier to switch entire entity trees visible/invisible for rendering.
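    To illustrate the strict one-to-one case, a minimal sketch (component and storage names are made up):

    #include <unordered_map>
    #include <cstdint>

    using EntityId = uint32_t;

    struct VelocityComponent { float x, y; };   // component A: input data
    struct PositionComponent { float x, y; };   // component B: processed output

    // One instance per entity; a second VelocityComponent on the same entity would
    // have no defined slot to write into, which is why duplicates are avoided here.
    std::unordered_map<EntityId, VelocityComponent> velocities;
    std::unordered_map<EntityId, PositionComponent> positions;

    void MovementSystem(float dt)
    {
        for (auto& [entity, velocity] : velocities)
        {
            auto it = positions.find(entity);
            if (it == positions.end())
                continue;                   // the system only runs on entities that have both
            it->second.x += velocity.x * dt;
            it->second.y += velocity.y * dt;
        }
    }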
  7. I think you are making things too complicated and too coupled. The real benefit of having a game system is that it is decoupled from everything else. You handle everything on the server side, correct? Have you thought about splitting the client/server responsibility? For example, Player A casts their buff on Player B, so let Player A's client handle all the update logic and only tell the server that casting started, send another message when casting finished, and a third message to apply the buff to Player B. The server only does sanity checks in this scenario: client A sends its messages, the server checks that A has enough mana and isn't being attacked, and then forwards those messages to every player in the zone so their clients can play the animation and so on. The other possibility is to hang those actions into a queue. An update happens every X frames/ms, and different updates run through different queues to be processed. When A starts casting, put A's timed effect into the timer queue; on every run its remaining time is decreased until it finishes. Then detach it from the timer queue and attach it to a one-shot action queue that is processed on the next frame. (Pseudo code below.)

    struct GameAction  { virtual void Perform(float timeDelta) = 0; };
    struct TimedEffect { virtual void Tick(float timeDelta) = 0; };

    Vector<TimedEffect*> TimingQueue;  // ticked every update
    Queue<GameAction*>   ActionQueue;  // drained once per frame

    struct SpellThatAddsBuff : GameAction, TimedEffect
    {
        PlayerId target;
        BuffId   buff;
        float    time;

        void Perform(float timeDelta) override { Players.Get(target).AttachBuff(buff); }

        void Tick(float timeDelta) override
        {
            time -= timeDelta;
            if (time <= 0)
            {
                TimingQueue.Remove(this);  // stop ticking
                ActionQueue.Add(this);     // apply the buff next frame
            }
        }
    };
  8. Shaarigan

    My brief tour through 3d engines

    These are my personal two cents, but at the end of the day it can be more effective to assemble what you need from scratch (or, if it is urgent, taking some libraries is fine too) and work out the rest by yourself. This is time consuming, and 99% of the people reading this want to make their games ASAP, so it isn't for everyone, but the remaining 1% might think, at the end of a decade of learning and coding, that it was worth the work. Even going multiplatform isn't as hard these days as it was 20 years ago. The most difficult part is coding your pipeline; as you already mentioned, there are a lot of different things out there, and CMake also differs between versions in what it takes to build correctly. That is the reason I'm spending some time right now getting my pipeline right, for building but also for setting up projects; I came across this article about two weeks ago, when I started thinking about a good design. The heart of my pipeline is a C# application that automatically analyses the code of a project to create a dependency graph for determining internal/external dependencies.
  9. Have you read the docs properly? Again, you play one sample at a time per channel, so the FLAC block size should be a value that fits fully into the audio buffer of the WASAPI API. They wrote something about 4608 samples as the block size for a 48 kHz playback rate. I don't know if you have the audio hardware picture right: you shouldn't provide a whole frame of whatever format, instead you provide a buffer of an arbitrary size so that you can mix your samples and deliver the data whenever hardware playback reaches the point where it needs them. So all you have to provide is a buffer that is big enough to last a few hundred ms. I played around with the old Wave API some time ago and provided a buffer of a certain size, around 2 seconds of audio that I grabbed from my mic. The Wave API then took that buffer into playback and I got the whole recording out of my speakers. As I understand it, those APIs aren't very different in use, so you provide a buffer for each channel that has the same size as every other channel's buffer and give it at least a few ms of headroom; the size works out to roughly sampleRate * channels * bytesPerSample * duration in seconds. Then start streaming your audio into that buffer until it is filled, and wait for the end of playback before starting with the next bytes of data. Ideally you have several buffers, so that one can play back while another is being filled with the next data, swapping them immediately when WASAPI returns from playback. This is the double/triple buffering term you know from graphics coding.
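    A quick sketch of that size arithmetic, assuming 48 kHz stereo 16-bit playback and 200 ms of headroom (the numbers are just an example):

    #include <cstdint>
    #include <cstdio>

    // bytes = sampleRate * channels * bytesPerSample * seconds
    uint32_t BufferSizeBytes(uint32_t sampleRate, uint32_t channels,
                             uint32_t bytesPerSample, float seconds)
    {
        return static_cast<uint32_t>(sampleRate * channels * bytesPerSample * seconds);
    }

    int main()
    {
        // 48 kHz, 2 channels, 2 bytes per sample, 0.2 s -> 38400 bytes.
        std::printf("%u bytes\n", BufferSizeBytes(48000, 2, 2, 0.2f));
    }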
  10. I use this behind the scenes as a platform- and architecture-independent std::atomic replacement for something like spin locking in our professional engine:

    #if defined(__GNUC__)
    #define SpinLock(__lock)   { while (__sync_lock_test_and_set(&(__lock), 1)) while (__lock) {} }
    #define SpinUnlock(__lock) { __sync_lock_release(&(__lock)); }
    #elif defined(WINDOWS)
    #define SpinLock(__lock)   { while (InterlockedExchange(&(__lock), 1)) while (__lock) {} }
    #define SpinUnlock(__lock) { InterlockedExchange(&(__lock), 0); }
    #endif

    But anybody should stay aware of exotic platforms that implement their own interlocked functions, like PSSDK or the Switch SDK do.
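    A usage sketch, assuming the lock variable is a volatile long shared between threads (the guarded counter is just an example):

    volatile long statsLock = 0;   // 0 = free, 1 = taken
    long drawCallsThisFrame = 0;   // example shared data guarded by the lock

    void AddDrawCalls(long count)
    {
        SpinLock(statsLock);           // busy-wait until the flag is free, then claim it
        drawCallsThisFrame += count;   // keep the critical section short
        SpinUnlock(statsLock);         // release the flag for other threads
    }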
  11. You're welcome, every developer learns new things every day, so don't worry! When I did an endless runner, I used a concurrent array of meshes (in Unity, for mobile platforms) where each mesh was a tile of the world, generated by certain rules on demand. Those tiles had a certain size so the ground wouldn't disappear under the player's feet when going out of scope, while enough tiles are generated at runtime that there are always three tiles more than the player can see, and the last tile stays behind the player until they reach the center of the next one. If you generate those tiles on demand, you could theoretically define the world in some kind of meta format, process those definitions and add everything to it as it is read, if you don't want completely random tiles. My game was some kind of Minecraft meets Super Mario Run, so I had predefined levels with multiple optional routes to take (depending on the power-up grabbed).
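    A rough sketch of that tile recycling idea, with a made-up tile type and a hypothetical regeneration hook:

    #include <array>

    struct Tile { float worldZ; /* mesh/entity data lives here */ };

    constexpr int   kTileCount  = 5;      // a few ahead, one under, one behind the player
    constexpr float kTileLength = 50.0f;

    std::array<Tile, kTileCount> tiles;   // fixed pool, reused instead of allocated

    // Call once per frame with the player's forward progress.
    void UpdateTiles(float playerZ)
    {
        for (Tile& tile : tiles)
        {
            // A tile that fell completely behind the player is moved to the front
            // of the ring and refilled by whatever rule/meta format you use.
            if (tile.worldZ + kTileLength < playerZ - kTileLength)
            {
                tile.worldZ += kTileCount * kTileLength;
                // RegenerateTileContent(tile);   // hypothetical hook
            }
        }
    }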
  12. volatile is a must-have keyword when working without std::atomic but with hardware atomic functions like InterlockedExchange on MSVC or __sync_lock_test_and_set in GCC/Clang, so it depends. When read/write protection is needed and the operation is just setting a variable, I use an atomic SpinLock, or a ReadWriteSpinLock if performance really matters. In the case of atomic types like unsigned int you could also consider using atomic operations directly instead of locking. If the operation runs longer, then using a Mutex is absolutely OK, but keep in mind that Mutexes on Windows are protected by the OS against self-locking, and for that case you need to use a Semaphore instead. I use this for example in my task system to let idle threads put themselves to sleep. You should always make sure to use the same lock for read and write, or the whole thing is pointless.
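    A small sketch of the "atomic operation instead of a lock" case using std::atomic (the counter is just an example):

    #include <atomic>
    #include <cstdint>

    std::atomic<uint32_t> framesRendered{0};

    // Any thread can bump the counter without taking a lock;
    // fetch_add is a single atomic read-modify-write.
    void OnFramePresented()
    {
        framesRendered.fetch_add(1, std::memory_order_relaxed);
    }

    uint32_t ReadFrameCount()
    {
        return framesRendered.load(std::memory_order_relaxed);
    }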
  13. Do you have basic knowledge of the hardware/OS-specific software side of audio processing? You are asking two different things here, and I'll try my best to answer both. Audio at the hardware level is just a wave, presented as an amplitude signal of around -5 V to 5 V, so a speaker can "pulse" and thereby generate sound waves the human ear translates into sound and voice. One layer above, those waves are just buffers of amplitude data in the form of raw bytes that are passed to the audio stream and pushed to the speakers via the hardware bus. Accessing this is the lowest level at which we can process sound. The OS does another thing too: since each speaker can only play one audio stream at a time, the OS mixes sounds and voices together, so music can keep playing while some OS notification sound plays, without stopping the music. This is done by a software audio mixer built into the driver and/or OS. Audio libraries then provide their own capability to mix different audio streams together under certain circumstances, using rules you or the author of the library have provided. When you hit the "PlaySound" function, the buffer of that audio resource is queued from its current position into the library's audio mixer queue, which provides the data for each "audio frame". Sounds are mixed together using their volume and, in the case of 3D sound, also where the audio listener is relative to the audio source. A funny fact: silence is also an audio stream on the same level, just with no amplitudes passed.

    The frame mismatch you have here probably depends on the frequency you initialized your audio source with. Every audio file carries a desired frequency at which it is intended to play "best". You can initialize your hardware device with a number of supported channels (up to 7.1 sound) and a frequency (up to 96 kHz, if I remember my experiments correctly). You now either have to resample your FLAC frames down to match the WASAPI frame, or you need to initialize the WASAPI frame with a higher frequency to fit more data into it; that is your choice. A side note: normally audio runs in its own thread, or even several threads, so that mixing the next audio frame happens in parallel with playing the current one. You should use a continuous buffer and make sure there is always enough data present before the next mixing step ends, or you'll get stuttering in your playback.
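    A tiny sketch of the mixing step described above, summing one 16-bit PCM buffer into another with clamping (buffer names are illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Mix 'voice' into 'music' in place; both are interleaved 16-bit PCM of equal length.
    void MixInto(int16_t* music, const int16_t* voice, std::size_t sampleCount, float voiceVolume)
    {
        for (std::size_t i = 0; i < sampleCount; ++i)
        {
            // Sum in a wider type, then clamp to the 16-bit range to avoid wrap-around distortion.
            int32_t mixed = static_cast<int32_t>(music[i]) +
                            static_cast<int32_t>(voice[i] * voiceVolume);
            music[i] = static_cast<int16_t>(std::clamp(mixed, -32768, 32767));
        }
    }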
  14. Just a remark: when creating an endless runner, you should never move the player, instead move the world around the player. The problem is that when you reach the "end of the world" you otherwise get undefined behaviour in your code, due to numerical overflow. A normal signed 32-bit integer can hold roughly +/- 2.1 billion, so one step past that maximum wraps around and you end up on the other side of the range at roughly -2.1 billion (in JavaScript, where numbers are doubles, you get growing precision loss instead of a hard wrap, but positions far from the origin still go wrong). This is why you would normally turn it around and let the player stay in place while everything else around them moves.
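    A minimal sketch of the "move the world, not the player" idea (types and names are made up; recycling the objects as described a few posts above keeps the coordinates bounded):

    #include <vector>

    struct WorldObject { float x, y, z; };

    std::vector<WorldObject> worldObjects;   // tiles, obstacles, pickups ...

    // The player entity never moves; the world scrolls past at the run speed.
    // Objects that leave the view get recycled to the front, so no coordinate
    // ever grows towards the integer/float limits.
    void ScrollWorld(float runSpeed, float deltaSeconds)
    {
        for (WorldObject& obj : worldObjects)
            obj.z -= runSpeed * deltaSeconds;
    }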