Laval B

  1. Serialization libraries in C

    You can use a text-based file format or a binary file format. Text files are handy to work with because they are human readable and you can modify them easily with a text editor while developing and debugging. If your files are very simple, the fprintf/fscanf method mentioned above will probably do just fine. If you need structured data (sections, types and repeating sequences of data), using a format like JSON might be a good idea. There are actually many such libraries available on the web. Kylotan's post has interesting information about serialization and frameworks. My advice would be to start with simple methods, then go for more sophisticated encoding as needed.

    A word of caution on this if I may. Don't forget that the Java virtual machine is big endian, so if you decide to go for binary files and, for some reason, some Java application needs to read or write these data files (an Android version of the game made in Java, for example), byte order will have to be taken care of.
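A minimal sketch of that byte-order point (the function names are mine, not from any particular library): writing and reading a 32-bit value in explicit big-endian order, so a big-endian consumer such as Java's DataInputStream decodes it the same way on any host.

```cpp
#include <cstdint>
#include <cstdio>

// Write a 32-bit value in big-endian byte order regardless of host
// endianness, so a big-endian reader (e.g. Java's DataInputStream)
// decodes it correctly.
void write_u32_be(FILE* f, uint32_t v) {
    unsigned char b[4] = {
        (unsigned char)(v >> 24), (unsigned char)(v >> 16),
        (unsigned char)(v >> 8),  (unsigned char)(v)
    };
    std::fwrite(b, 1, 4, f);
}

// Read it back, reassembling from explicit byte positions.
uint32_t read_u32_be(FILE* f) {
    unsigned char b[4] = {0, 0, 0, 0};
    std::fread(b, 1, 4, f);
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}
```

Doing the shifts by hand like this sidesteps host endianness entirely, which is usually simpler than sprinkling htonl/ntohl around.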
  2.   +1 I totally agree.
  3. Code quality at work

      This is also true for me. Managers have different concerns than most developers; they need to respect budgets and deadlines. Some are just a bit too shortsighted sometimes. We spend more time dealing with poorly organized code on a regular basis than it would have taken to just fix some things in the first place. At some point, you accumulate so many problems that it becomes impossible to do even a simple modification, and then you need to change big things.
  4. Object lifecycle management

    One last question if I may. Would it be too much of a restriction to require that updates, as well as resource creation/destruction, be done only by a single thread at a time? It wouldn't need to always be the same thread, and the construction of the update list could itself be done concurrently on multiple threads, but it could be submitted from only one thread (at a time). Having a single producer thread simplifies many things and allows for optimizations.
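The restriction being asked about can be sketched as a single-producer/single-consumer ring (UpdateList here is a hypothetical stand-in for one frame's worth of sources): with exactly one submitting thread, head is written only by the producer and tail only by the mixer, so no mutex is needed.

```cpp
#include <atomic>
#include <cstddef>

struct UpdateList { int frame_id; }; // stand-in for one frame's source data

// Lock-free SPSC ring: safe only because one thread pushes and one pops.
template <std::size_t N>
class SpscQueue {
    UpdateList slots[N];
    std::atomic<std::size_t> head{0}; // written by the producer only
    std::atomic<std::size_t> tail{0}; // written by the consumer only
public:
    bool push(const UpdateList& u) {   // submitting (producer) thread
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h - tail.load(std::memory_order_acquire) == N)
            return false;              // ring is full
        slots[h % N] = u;
        head.store(h + 1, std::memory_order_release); // publish the slot
        return true;
    }
    bool pop(UpdateList& out) {        // mixer (consumer) thread
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (head.load(std::memory_order_acquire) == t)
            return false;              // ring is empty
        out = slots[t % N];
        tail.store(t + 1, std::memory_order_release); // free the slot
        return true;
    }
};
```

This is exactly the kind of optimization a single-producer guarantee buys: with multiple producers, head would need a CAS loop or a lock.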
  5. Object lifecycle management

      This method is very interesting because, for one thing, it would allow me to modify/replace the content of a buffer that is currently in use. This is something I wasn't even considering at this point. The nice thing about it is that it isn't really that difficult to implement. The list of things to be deleted is a great idea. I will probably need some sort of reference counting because there may be multiple updates queued that use the same buffer (resources). Just a couple of questions:

      1. Just to make sure I follow you, when you say "(where you can give them a buffer with any old contents in it)", do you mean recycling an old buffer (memory area)?
      2. The list of buffers to be deallocated would be processed by the mixer thread when it's done with the current update?

      Thank you very much for the idea.
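A hypothetical sketch of combining the two ideas from this exchange (the names are mine): each queued update holds a reference on the buffers it uses; dropping the last reference moves the buffer onto a pending list, which the mixer thread frees between updates so nothing is deleted mid-mix.

```cpp
#include <atomic>
#include <vector>

struct Buffer {
    std::atomic<int> refs{0};
    // ... PCM data would live here ...
};

struct DeferredDeleter {
    std::vector<Buffer*> pending;

    void release(Buffer* b) {
        // fetch_sub returns the previous value: 1 means we held the last ref,
        // so the buffer is now unreachable and safe to queue for deletion.
        if (b->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            pending.push_back(b);
    }

    void drain() { // mixer thread, after finishing the current update
        for (Buffer* b : pending) delete b;
        pending.clear();
    }
};
```

Keeping drain() on the mixer thread means the count can hit zero anywhere, but the actual deallocation happens at one well-defined point in the mix cycle.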
  6. Object lifecycle management

      You are right. The only moment I can think of where there can be significant allocation is when a level is unloaded to load another one. Using pools for source and buffer objects would also make allocation/deallocation time deterministic and short. Even the buffer's data could come from a linear allocator, I guess. Thanks for the thoughts.
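A minimal sketch of the pool idea mentioned here: a fixed-size pool with an intrusive free list, so allocation and deallocation are each one pointer pop/push, with no heap calls and deterministic timing during playback.

```cpp
#include <cstddef>

// Fixed pool of N objects of type T. alloc/free are O(1): they just
// pop/push the head of a free list threaded through unused slots.
template <typename T, std::size_t N>
class Pool {
    union Slot { Slot* next; alignas(T) unsigned char storage[sizeof(T)]; };
    Slot slots[N];
    Slot* free_head;
public:
    Pool() : free_head(&slots[0]) {
        for (std::size_t i = 0; i + 1 < N; ++i)
            slots[i].next = &slots[i + 1];
        slots[N - 1].next = nullptr;
    }
    void* alloc() {
        if (!free_head) return nullptr;   // pool exhausted: deterministic fail
        Slot* s = free_head;
        free_head = s->next;
        return s->storage;
    }
    void free(void* p) {
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_head;
        free_head = s;
    }
};
```

Since sources and buffers here are small fixed-size objects, a pool like this also keeps them packed together in memory.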
  7. Object lifecycle management

      Yes, reference counting is the solution I was trying to avoid, but I guess there isn't much choice since buffers are shared. I still don't like the idea that the mixer thread has to do memory management.
  8. Hello again everyone. This post is related to this other post I made a few weeks ago. The link is just for reference.

In short, I'm working on the development of a 3D audio system. It's a personal project and I'm still at the prototype and experimentation phase. I have been mostly working on the algorithms for 3D sound rendering so far. Doppler shift and the distance model are the next things.

My problem is with the way the host application communicates with the mixer thread. Here are the basic objects involved in this part:

- AudioSource : a struct that contains the mixing parameters that can be updated by the application (position, speed, orientation, state: play, pause, stop, etc.). This is a very lightweight object (64 bytes each), a POD in the simplest sense.
- MixerSource : an internal representation of a source containing data accessed only by the mixer (the current playback position, a pointer to the audio data buffer, etc.). This information is persistent for a source across updates. An AudioSource has a pointer to this structure (it could eventually be a handle of some sort to make it more opaque).
- AudioBuffer : contains the audio PCM data to be played by the source and its parameters (sample rate, etc.). A buffer can be shared by multiple sources, and buffers are accessed read-only by the mixer. The mixer basically loads and processes about 2.5 or 3 ms worth of data every cycle.

So the application will call an update method that takes an array of AudioSources, a Listener and the number of AudioSources in the array. This represents an update of the sound configuration in the scene that can be done every frame or every few frames. A copy of this array is then made into a circular queue of AudioSource arrays (copying 64 sources takes less than a microsecond with memcpy) and the mixer thread just processes them. The goal of this method is to reduce the amount of synchronization between the two threads. When the mixer starts working on an update (a list of sources), it is working on its own copy.

For synchronization, I'm using a classic critical section/condition variable pair for now and it works great (a CRITICAL_SECTION / CONDITION_VARIABLE pair on Windows and a pthread_mutex_t / pthread_cond_t on Linux).

The problem I have is when deleting a source or a buffer. Adding is not a problem because a source will be part of an update only after it has been added. Deleting a source is another story. If the application wants to remove a source, it needs to synchronize and delete the source, but there are some copies (updates) of the source in the queue that still have a pointer to it. There are different ways I could manage this, but I'm not really fond of any of them.

I would like to know your thoughts about this.
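The hand-off described above can be sketched with the portable std::mutex / std::condition_variable pair instead of the Win32 or pthread primitives (the class and the AudioSource stand-in are mine, not the actual code): the application copies its source array into a queue of updates, and the mixer blocks until one is available.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

struct AudioSource { float position[3]; int state; }; // stand-in POD

class UpdateQueue {
    std::mutex m;
    std::condition_variable cv;
    std::deque<std::vector<AudioSource>> updates;
public:
    // Application thread: copy this frame's sources into the queue.
    void submit(const AudioSource* sources, std::size_t count) {
        {
            std::lock_guard<std::mutex> lock(m);
            updates.emplace_back(sources, sources + count);
        }
        cv.notify_one(); // wake the mixer outside the lock
    }
    // Mixer thread: block until an update is available, then take a
    // private copy it can work on without further synchronization.
    std::vector<AudioSource> take() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !updates.empty(); });
        std::vector<AudioSource> u = std::move(updates.front());
        updates.pop_front();
        return u;
    }
};
```

The lock is held only for the copy in/out, which matches the goal of keeping synchronization between the two threads minimal.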
  9. As explained in the documentation, how to interpret the return value of SendMessage depends on the message sent. With many window messages, it will return 0 if the target didn't process the message. That doesn't necessarily mean an error occurred, but simply that the target didn't process the message (often because that target doesn't process that kind of message). In general, you have to consult the documentation for the specific message to know how to interpret the return value of SendMessage.
  10. It is impossible to answer your question with so few details; we need more context. I suggest you read the Remarks section of the MSDN page about the WM_SETFONT message. Have you tried calling GetLastError to determine why it is failing, if it is really failing? To get details about the error code returned by GetLastError, you can use Error Lookup in the Visual Studio Tools menu.
  11. Audio System

      Thank you very much for the answer; I was wondering how much I should queue vs. the update rate. I've just done some searching on Google about the method you're using for sound propagation; it's fascinating and more sophisticated than the panning algorithm I'm using. Thanks again for the tips. Just out of curiosity, do you have a fixed compile-time limit on the number of sources you can use?
  12. Hello everyone. I have started the development of an audio system that would be mostly oriented toward games, be they 3D or 2D. So far I have been mostly concerned with the algorithms for 3D sound, i.e. generating a multichannel (one channel per speaker) PCM buffer from multiple single-channel sources at different locations relative to a listener, just like OpenAL, FMOD and other libraries do.

I have also been working on the real-time mixing aspect of these sounds (of course). It's going very well up to this point. I still need to implement Doppler shift as well as distance attenuation, and possibly HRTF and binaural filters. This part of the development is very interesting. I'm also learning a lot about SSE and AVX instructions.

Just as a note, the system will use XAudio2 on Windows and probably ALSA on Unix operating systems like OSX/Linux. I don't know yet about mobile, or even if I will eventually port it to mobile (which would be great). One of the design goals I have is to use the platform-specific audio API minimally, only to send pre-processed samples to the device, so porting will be easier. If it ever becomes decent enough, I might make it an open-source project, but I'm not there yet.

As of late, I started thinking about how the system would communicate with the host application, and the more I think about it, the less sure I am about it. That's what I would like to discuss. My concerns are all related to multithreading.

So far, the application would deal with 4 classes:

- AudioSystem : the class used for the initialization/shutdown of the system and the main API for resource management and update.
- AudioSource : basically represents the configuration of a sound in the scene, i.e. position, speed, orientation, area of effect, etc.
- AudioBuffer : represents the data of a sound. An AudioBuffer can be shared by multiple sources, of course.
- Listener : represents the point in the scene from which the sounds are heard. It has a position, speed and orientation so far.

The system basically has a mixing thread that cycles through the list of sources and prepares buffers for the API to consume when it needs them. A typical real-time application would likely have (or at least I have to consider that it would have) multiple threads working on preparing/updating the scene, and each of these threads would update sources; that is where I'm not sure how to do it. The operations performed by the application are the following:

- Add/remove sources from the list (or set the status to paused/stopped).
- Update the parameters of sources like speed, orientation and position.
- Update the listener's parameters (position, speed, orientation).

I'm trying to think of a way that would not impair the performance of either the application or the mixing thread. I have thought about using two lists: one is the "committed" list the mixing thread is working on, and the other is the list the application is working on; then I could "atomically" swap the two lists... or something like that. I don't know about locking; it could be good if done properly, I guess. It is clear to me that the update of the list must be done as a transaction, only once "per frame" and not as multiple updates during the frame composition, just like the graphics APIs do. With the atomic swap of lists, I'm afraid I could lose updates if one side is too fast, so I guess I would need to queue these updates...

Well, that is basically what I would like to discuss. I'm open to ideas and suggestions.
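The "atomically swap the two lists" idea can be sketched with a single atomic pointer exchange (names are illustrative): the application fills a staging list and publishes it; the mixer takes whatever was last published. As noted above, this keeps only the latest list, so if the application publishes twice before the mixer reads, the intermediate update is lost, which is exactly why queueing may still be needed.

```cpp
#include <atomic>
#include <vector>

struct Source { int id; };
using SourceList = std::vector<Source>;

// Slot holding the most recently committed list (null = nothing pending).
std::atomic<SourceList*> published{nullptr};

// Application thread: publish a freshly built list; the previous list
// (possibly never consumed) comes back so its storage can be reused.
SourceList* publish(SourceList* fresh) {
    return published.exchange(fresh, std::memory_order_acq_rel);
}

// Mixer thread: take the latest committed list, leaving the slot empty
// until the application publishes again.
SourceList* acquire() {
    return published.exchange(nullptr, std::memory_order_acq_rel);
}
```

The exchange makes each commit a single transaction, matching the "once per frame, like the graphics APIs" requirement; the lost-update concern is the trade-off against a real queue.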
  13. Vulkan Resources   I found these tutorials; they are nice to start with Vulkan. There is a PDF version with detailed explanations and sample code.
  14. Variable size structs

      You mean moving the shaders, the shader parameters and the textures to a material class that could be shared among multiple DrawItems? Yes, it could be done. But what I'm trying to do (and this is only experimentation) is to pack all the data for a draw call into contiguous memory, to see if there is something to be gained by being more "cache efficient". Packing the arrays is what is actually difficult. Elements of a draw item are of course shared across models.
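One common way to do the packing described here (a hypothetical sketch, not the poster's actual layout): a fixed header followed in the same allocation by the variable-length array, addressed through an inline count rather than a separate pointer.

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Header and texture array share one contiguous allocation, so a draw
// call touches a single cache-friendly block.
struct DrawItem {
    uint32_t shader;
    uint32_t texture_count;
    // Texture ids are stored immediately after the header.
    uint32_t* textures() { return reinterpret_cast<uint32_t*>(this + 1); }
};

DrawItem* make_draw_item(uint32_t shader, const uint32_t* tex, uint32_t n) {
    void* mem = std::malloc(sizeof(DrawItem) + n * sizeof(uint32_t));
    DrawItem* item = static_cast<DrawItem*>(mem);
    item->shader = shader;
    item->texture_count = n;
    std::memcpy(item->textures(), tex, n * sizeof(uint32_t));
    return item; // free with std::free
}
```

With several variable-length arrays, the usual extension is to store an offset per array in the header and compute each array's start from the previous one's end, which is where the difficulty mentioned above comes in.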
  15. Variable size structs

      Yes, it is very important in production code that is manipulated by many people not to allow such a data structure to be easily or accidentally misused. It must not be possible to construct a DrawItem on the stack or in a container, which would construct an incomplete object.

      Yes indeed. But then, how do you define the "material" object? This material object will need to have those fields anyway, with the arrays.
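One way to enforce the "cannot be constructed on the stack or in a container" requirement (a sketch under my own naming, not the thread's actual code) is a private constructor plus a factory that performs the single packed allocation, so a DrawItem can only exist in its complete form.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

class DrawItem {
    DrawItem() = default; // private: only create() can construct one,
                          // so stack or container instances won't compile
public:
    static DrawItem* create(std::size_t payload_bytes) {
        // One allocation covering the header and its variable payload.
        void* mem = std::malloc(sizeof(DrawItem) + payload_bytes);
        return new (mem) DrawItem(); // placement-new into the packed block
    }
    static void destroy(DrawItem* item) {
        item->~DrawItem();
        std::free(item);
    }
};
```

Attempting `DrawItem d;` or `std::vector<DrawItem>` now fails to compile, which turns the accidental-misuse problem into a build error.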