Holy Fuzz

Everything posted by Holy Fuzz

  1. I'm making a game (shameless plug) that uses a custom game engine whose audio subsystem is built on top of XAudio2. For this game, I made a new explosion sound effect .wav file in Adobe Audition (a simple 44100 Hz 32-bit float uncompressed wav), which I have attached to this post. I noticed that when playing the .wav file via my game, the sound effect is much lower quality than when playing it in Audition, lacking much of its original detail. So I did some more testing, trying it in a handful of other programs. When that exact same .wav file is played in Unity3D or Google Chrome, it sounds great! (Though, interestingly, when played in Windows Media Player or VLC, it lacks detail much as it does when played via my game.) I have reproduced this on two different computers that have totally different sound hardware.

     My first assumption was, of course, that I had screwed up something in my XAudio2 code. First, I double-checked to make sure that the sample rate of the .wav file matched that of my mastering voice, which it does (they're both 44100 Hz), and then I played around with a variety of higher and lower sample rates, both for the .wav file and for the mastering voice. Unable to find any obvious issues in my own code but still figuring I screwed up somewhere, I then decided to sanity-check using Microsoft's XAudio2 sample code (specifically the "XAudio2BasicSound" example, which does little more than load and play a few different sound effects) with my .wav file and, much to my surprise, the audio quality was just as poor as in my own game.

     So now I'm wondering: is XAudio2 just bad, or is there anything I can do to improve its audio quality? (If someone knows how to improve the audio quality of Microsoft's sample code, then I can almost certainly make that same improvement in my own engine.) Or would I be better off using a different API? Thanks for your help & advice! explosion.wav
  2. Oh I'm pretty confident this is exactly what's happening, which is why converting to int fixes it. What I now wish I understood better is how exactly different audio technologies interpret/define these out-of-range values.
  3. Okay, so I have a hypothesis! This particular sound file uses clipping to give an intentional "crunchy" texture to the sound: My hypothesis is that different APIs/libraries have different ways of handling clipping when the waveform goes out of range, and that XAudio2/FPL_Audio/mini_al (or their underlying technologies) handle clipping differently than Audition/Unity3D/Chrome. (Or perhaps it's a difference in how the wave file is loaded? Maybe some wav loaders handle clipping differently.) If I reduce the volume of the sound such that it no longer clips, then it sounds pretty much the same regardless of where I play it. As an experiment, I tried saving as 16-bit int instead of 32-bit float, and now it sounds great regardless of where I play it! So at least I have a solution that works for me! (Though I still wish I understood the underlying problem: why it happens and what the difference is.)
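The hypothesis above can be sketched in a few lines (C++ purely for illustration; `FloatToInt16` is my own name, not an XAudio2 or Audition API). Converting to 16-bit int hard-clamps out-of-range samples exactly once at export time, whereas a float pipeline can carry samples above 1.0 into a later gain or resampling stage, which changes where (or whether) the clipping happens:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical converter: 32-bit float samples are hard-clamped to
// [-1, 1] and scaled to 16-bit PCM. Exporting this way bakes the
// clipping into the file once, so every player hears the same
// "crunchy" clipped waveform.
std::int16_t FloatToInt16(float s) {
    s = std::clamp(s, -1.0f, 1.0f);
    return static_cast<std::int16_t>(s * 32767.0f);
}

// In a float pipeline, an over-range sample such as 1.7f can survive
// until some later stage applies gain: 1.7f * 0.5f == 0.85f, and the
// clip silently disappears, while the int16 file would have been
// pinned at full scale before that gain was applied.
```

This is only a sketch of the mechanism, not how any of these particular libraries are actually implemented.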
  4. Not working for me, it sounds the same as XAudio2. To be clear, it's not obviously incorrect if you don't know how it's *supposed* to sound -- it's not distorted or anything, still sounds like an explosion, just lacking its original crunchy detail. But at least on my computers, there's a clear difference between playing it in XAudio2, FPL_Audio, or mini_al and playing it in Audition, Unity3D, or Chrome. Doesn't work either. Thanks for your help trying to figure this out, it's pretty puzzling.
  5. I tried creating a mastering voice for every audio device listed for both my computers at various sample rates (44100, 48000, 96000) and channels (1 and 2), everything I've tried exhibits the same low quality. Not sure what other settings I can try? Nope, mini_al.h has the same low quality using their "Simple Playback Example".
  6. I am making a game using a custom graphics engine written in Direct3D 11. I've been working to lower the amount of system RAM the game uses, and I noticed something that, to me, was surprising and went against my expectations: Textures that are using D3D11_USAGE_DEFAULT or D3D11_USAGE_IMMUTABLE (along with a D3D11_CPU_ACCESS_FLAG of 0) are increasing my system RAM usage according to the size of the texture (i.e., a 1024x1024x32bpp texture adds about 4MB of system RAM usage). I had thought that the point of the D3D11_USAGE_DEFAULT and (especially) D3D11_USAGE_IMMUTABLE usage modes was to put the texture in VRAM instead of system RAM? I might expect this behavior on a system with integrated graphics and shared memory, but I'm seeing this on a desktop with no integrated graphics and only a GTX 1070 GPU. So am I just not understanding how this works? Is there any way I can make sure textures are allocated only in VRAM? Thanks for your help!
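For concreteness, the arithmetic behind that 4MB figure (the helper name is mine, not a D3D11 call): an uncompressed 32bpp texture occupies width * height * 4 bytes wherever the runtime keeps a copy, whether that's VRAM or a system-RAM backing copy.

```cpp
#include <cstddef>

// Sketch: size of one mip level of an uncompressed texture. A
// 1024x1024 texture at 32 bits per pixel (4 bytes) is
// 1024 * 1024 * 4 = 4194304 bytes, i.e. the ~4MB of system RAM
// I'm seeing per texture.
std::size_t TextureBytes(std::size_t width, std::size_t height,
                         std::size_t bytesPerPixel) {
    return width * height * bytesPerPixel;
}
```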
  7. It's a 2D pixel art game. Every texture compression format I've tried makes it look pretty terrible. It's also not anywhere close to being GPU-bound.
  8. Thanks for the info, that's very informative! I was looking at memory usage as reported by the memory profiler I've been using (specifically DotMemory), which appears to be reporting total committed memory because its value (791MB) is a lot closer to what Task Manager reports for its Commit Size (811MB) than its Private Working Set (359MB). My main goal has been to try to reduce the number of "out of memory" crashes for players with very low-spec (2-4 GB RAM) computers. I take it there's not much I can do to reduce committed memory usage by textures besides using texture compression or smaller textures?
  9. Quick question: Is there any danger in releasing resources on other threads, assuming one can guarantee that the resource will never be used after it is released?
  10. I am working on a game (shameless plug: Cosmoteer) that is written in a custom game engine on top of Direct3D 11. (It's written in C# using SharpDX, though I think that's immaterial to the problem at hand.) The problem I'm having is that a small but understandably-frustrated percentage of my players (about 1.5% of about 10K players/day) are getting frequent device hangs. Specifically, the call to IDXGISwapChain::Present() is failing with DXGI_ERROR_DEVICE_REMOVED, and calling GetDeviceRemovedReason() returns DXGI_ERROR_DEVICE_HUNG. I'm not ready to dismiss the errors as unsolvable driver issues because these players claim to not be having problems with any other games, and there are more complaints on my own forums about this issue than there are for games with orders of magnitude more players. My first debugging step was, of course, to turn on the Direct3D debug layer and look for any errors/warnings in the output. Locally, the game runs 100% free of any errors or warnings. (And yes, I verified that I'm actually getting debug output by deliberately causing a warning.) I've also had several players run the game with the debug layer turned on, and they are also 100% free of errors/warnings, except for the actual hung device: [MessageIdDeviceRemovalProcessAtFault] [Error] [Execution] : ID3D11Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_DEVICE_HUNG: The Device took an unreasonable amount of time to execute its commands, or the hardware crashed/hung. As a result, the TDR (Timeout Detection and Recovery) mechanism has been triggered. The current Device Context was executing commands when the hang occurred. The application may want to respawn and fallback to less aggressive use of the display hardware). So something my game is doing is causing the device to hang and the TDR to be triggered for a small percentage of players.
The latest update of my game measures the time spent in IDXGISwapChain::Present(), and indeed in every case of a hung device, it spends more than 2 seconds in Present() before returning the error. AFAIK my game isn't doing anything particularly "aggressive" with the display hardware, and logs report that average FPS for the few seconds before the hang is usually 60+. So now I'm pretty stumped! I have zero clues about what specifically could be causing the hung device for these players, and I can only debug post-mortem since I can't reproduce the issue locally. Are there any additional ways to figure out what could be causing a hung device? Are there any common causes of this? Here's my remarkably un-interesting Present() call: SwapChain.Present(_vsyncIn ? 1 : 0, PresentFlags.None); I'd be happy to share any other code that might be relevant, though I don't myself know what that might be. (And if anyone is feeling especially generous with their time and wants to look at my full code, I can give you read access to my Git repo on Bitbucket.) Some additional clues and things I've already investigated: 1. The errors happen on all OS'es my game supports (Windows 7, 8, 10, both 32-bit and 64-bit), GPU vendors (Intel, Nvidia, AMD), and driver versions. I've been unable to discern any patterns with the game hanging on specific hardware or drivers. 2. For the most part, the hang seems to happen at random. Some individual players report it crashes in somewhat consistent places (such as on startup or when doing a certain action in the game), but there is no consistency between players. 3. Many players have reported that turning on V-Sync significantly reduces (but does not eliminate) the errors. 4. I have assured that my code never makes calls to the immediate context or DXGI on multiple threads at the same time by wrapping literally every call to the immediate context and DXGI in a mutex region (C# lock statement). 
(My code *does* sometimes make calls to the immediate context off the main thread to create resources, but these calls are always synchronized with the main thread.) I also tried synchronizing all calls to the D3D device as well, even though that's supposed to be thread-safe. (Which did not solve *this* problem, but did, curiously, fix another crash a few players were having.) 5. The handful of places where my game accesses memory through pointers (it's written in C#, so it's pretty rare to use raw pointers) are done through a special SafePtr that guards against out-of-bounds access and checks to make sure the memory hasn't been deallocated/unmapped. So I'm 99% sure I'm not writing to memory I shouldn't be writing to. 6. None of my shaders use any loops. Thanks for any clues or insights you can provide. I know there's not a lot to go on here, which is part of my problem. I'm coming to you all because I'm out of ideas for what to investigate next, and I'm hoping someone else here has ideas for possible causes I can investigate. Thanks again!
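The synchronization described in point 4, translated into a C++ sketch (`GuardedContext` and `Submit` are my own placeholder names, not D3D11 types): every touch of the shared context goes through one mutex, the analogue of wrapping each call in a C# lock statement.

```cpp
#include <mutex>

// Placeholder for the real immediate context: Submit() stands in for
// ImmediateContext->Draw(...) / Map(...) / etc. A single mutex
// serializes all access, mirroring the C# `lock` regions described
// above.
struct GuardedContext {
    std::mutex contextMutex;
    int submittedCalls = 0;

    void Submit() {
        std::lock_guard<std::mutex> lock(contextMutex);
        ++submittedCalls;  // real code would issue a context call here
    }
};
```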
  11. Yeah, you're right. I think I was thinking of updating resources after creation, though looking over my code again, I don't think I actually ever do that anywhere but the main thread. That being said, I fixed another GPU crash some players were experiencing simply by wrapping calls to the device in lock statements, so I'm not convinced that all drivers are as thread-safe as they're supposed to be.
  12. That makes a lot of sense. Thanks again for the reply!
  13. Matias, that's very helpful info, thanks a lot! It gives me some good next-steps to look into. Can this only happen if I have explicit loops in my shader code, or can it also happen by passing bad values to an intrinsic function? I had a similar thought, so I added a by-default 100 FPS limit to my game. No discernible drop in device hangs though. Would this commonly cause a hang, or a different DEVICE_REMOVED error? Is this just because multithreading is hard to get right, or additionally because there could be underlying problems in D3D/drivers that could be causing problems when called from multiple threads even with proper synchronization? Thanks again for your help! Much appreciated.
  14. I have written a game that uses XAudio2 (via the SharpDX wrapper for C#). For myself and 99% of my players, everything is working fine. But occasionally I get automated error reports from players which indicate that my call to IXAudio2::CreateMasteringVoice is returning XAUDIO2_E_INVALID_CALL. Here's my C#/SharpDX code: Device = new XAudio2(XAudio2Flags.None, ProcessorSpecifier.DefaultProcessor); Logger.Log("XAudio2 Device NativePointer: " + Device.NativePointer.ToString("X")); MasteringVoice = new MasteringVoice(Device); (If you're unfamiliar with SharpDX, the important thing to point out is that SharpDX automatically checks for error codes returned from DX functions and throws an exception if an error is returned. My application then catches that exception at a higher level (not shown) and reports it to me.) I think it's pretty unlikely that SharpDX is to blame here, because it's a very thin wrapper around the native DirectX APIs and I couldn't find any obvious issues in its source code. The logging statement above has also verified that the XAudio2 device is being created successfully. (XAudio2Create is returning success and outputting a non-NULL pointer.) The docs for CreateMasteringVoice mention two reasons why it might return XAUDIO2_E_INVALID_CALL: and   I am definitely *not* doing either of these things, and so I'm at a loss as to why it would be returning XAUDIO2_E_INVALID_CALL. I did find an old thread from someone who has the same issue, but there was no resolution to that. In that thread, someone asks,   To which my own answer would be "no". My game's installer should automatically install the required DirectX components, but it's possible that has failed for some of my players, or that they are running the executable without running the installer. Is there any way to verify, within my application itself, that XAudio2 is installed on their machine? 
(Besides, if XAudio2 wasn't installed, wouldn't creating the XAudio2 device fail before it even gets to creating the mastering voice?) If anyone has any clues on why this might be happening, or how to debug it, I'd greatly appreciate it. Thanks! FWIW, I've never gotten an error report from players on Windows 10 computers; only Windows 7, 8, and 8.1. That's the only commonality I could find in the error reports.
  15.   I finally got around to testing on my GTX 970, and I can confirm that SetMaxFrameLatency(16) does indeed create the expected latency, unlike my laptop's M370X. I'm certainly curious why AMD limits it to 3.   Thanks again for your help!
  16. So I've been trying to implement triple-buffering in my application by changing the BufferCount* parameter of DXGI_SWAP_CHAIN_DESC, but regardless of what I set it to, there is no detectable change in the performance or latency of my application. Let me elaborate...   I would expect that increasing the number of swap chain buffers would lead to an increase in latency. So I started experimenting: First, I added a 50ms sleep to every frame so as to artificially limit the FPS to about 20. Then I tried setting BufferCount to 1, 2, 4, 8, and 16 (the highest it would go without crashing) and tested latency by moving my game's camera. With a BufferCount of 1 and an FPS of ~19, my game was choppy but otherwise had low latency. Now, with a BufferCount of 16 I would expect 16 frames of latency, which at ~19 FPS is almost a whole second of lag. Certainly this should be noticeable just moving the game camera, but there was no more latency than there was with a BufferCount of 1. (And none of the other values I tried had any effect either.)   Another possibly-related thing that's confusing me: I read that with vsync on (and no triple-buffering), the FPS should be locked to an integer divisor of your monitor's refresh rate (i.e., 60, 30, 20, 15, etc...) since any frame that takes longer than a vertical blank needs to wait until the next one before being presented. And indeed, when I give Present a SyncInterval of 1, my FPS is capped at 60. But my FPS does *not* drop to 30 once a frame takes longer than 1/60 of a second as I would expect; if I get about 48 FPS with vsync off then I still get about 48 FPS with vsync on. (And no, this isn't a result of averaging of frame times. I'm recording individual frame times and they're all very stable at around 1/48 second. I've also checked my GPU settings for any kind of adaptive vsync but couldn't find any.)   More details: I'm testing this in (I think exclusive) fullscreen, though I've tested in windowed mode as well. 
(I've fixed all the DXGI runtime warnings about fullscreen performance issues, so I'm pretty sure I have my swap chain configured correctly.) If it matters, I'm using DXGI_SWAP_EFFECT_DISCARD (but have tested SEQUENTIAL, FLIP_SEQUENTIAL, and FLIP_DISCARD with no apparent effect). I've tried calling Present with a SyncInterval of both 0 (no vsync) and 1 (vsync every vertical blank). Using 1 adds small but noticeable latency as one would expect, but increasing BufferCount doesn't add to it. I've tested on three computers: one with a GTX 970, one with a mobile Radeon R9 M370X, and one virtual machine running on VirtualBox. All exhibit this same behavior (or lack thereof).

     So can anyone explain why I'm not seeing any change in latency or locking to 60/30/20/... FPS with vsync on? Am I doing something wrong? Am I not understanding how swap chains work? Is the graphics driver being too clever? Thanks for your help!

     *(As an aside, does anyone know for sure what I *should* be setting BufferCount to for double- and triple-buffering? In some places I've read that it should be set to 1 and 2 respectively for double and triple buffering, but in other places they say set it to 2 and 3.)
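For reference, the "integer divisor" behavior I was expecting can be written as a one-liner (a sketch with my own function name, not a DXGI API): with classic vsync and a single queued frame, a frame taking t seconds at refresh rate R must wait for the next vblank, so the presented rate is R / ceil(t * R). For example, a 1/48 s frame at 60 Hz should present at 30 FPS, which is exactly what I'm *not* seeing.

```cpp
#include <cmath>

// Sketch of the classic vsync model: a frame that takes frameSeconds
// of work must wait for the next vertical blank before presenting, so
// the effective rate quantizes to
// refreshHz / ceil(frameSeconds * refreshHz).
double ExpectedVsyncFps(double frameSeconds, double refreshHz) {
    return refreshHz / std::ceil(frameSeconds * refreshHz);
}
```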
  17. Again, a very clear explanation that really helps me understand what's going on. Thanks!   1. I'm using DXGI_SWAP_EFFECT_DISCARD in full-screen with vsync on and have experimented with both 2 and 3 buffers. No significant difference in average latency between SetMaximumFrameLatency(1) and SetMaximumFrameLatency(16) according to PresentMon. D3D11_CREATE_DEVICE_PREVENT_INTERNAL_THREADING_OPTIMIZATIONS had no effect either. So maybe my driver is overriding this setting? There aren't many options to tweak in my AMD control panel. I can try on my other computer (a GTX 970) later.
  18. Jesse, this is by far the clearest and most informative explanation I've read on the internet or in a book on how BufferCount and MaxFrameLatency work. (And the video was useful too.) Thank you so much! If I could upvote you a thousand times, I would! I hope lots of other people find your explanation as useful as I have. There are still a few things that I'm puzzled by:

     1. As a test, I removed the trivial sleep(50) call every frame and instead looped part of my rendering code 30 times (a part that uses very little CPU but draws lots of pixels), which brings my FPS down to about 20. (90% of my frame time is now spent in Present, so I'm pretty sure I'm now GPU-limited.) Setting FrameCount to 16 had no noticeable effect, which now makes sense given your explanation (since this is GPU-limited and not vsync limited). I also tried setting MaxFrameLatency to 16, which if I understand correctly should introduce 16 frames of latency since my CPU can execute so much faster than my GPU? But again, I'm seeing no latency, which should be quite obvious at ~20 FPS, correct? Am I misunderstanding something? (I also tried PresentMon, which is reporting ~130ms of latency regardless of how I set MaxFrameLatency.)

     2. I've been using a BufferCount of 1 in full-screen with no obvious ill-effect. Will the driver automatically increase it to 2 if I specify 1 in full-screen mode? Or maybe I'm not actually running in exclusive full-screen? (Is there any way to check that? PresentMon's CSV says "Hardware: Legacy Flip" if that's at all relevant.)

     3. Now that I have my game GPU-limited, I am seeing my FPS locked to 60/30/20/15/etc when vsync is on. Why don't I see the same behavior when my game is CPU-limited? (And yeah, I've set MaxFrameLatency to 1.)

     Thanks again!
  19. Hello, Is it possible to run the SlimDX .Net 4.0 installer in quiet or passive mode, like you can with the .Net 2.0 installer? I tried the /passive and /quiet flags, but they don't seem to do anything. Thanks! - Walt
  20. Hey SlimDX users/devs... quick couple questions about SoundBuffer.Write. How exactly does the LockFlags parameter work? ... - When is the buffer unlocked? Immediately after the write? Manually via some method I don't know about? - What exactly does LockFlags.FromWriteCursor lock? My guess would be from bufferOffset to the length of the data passed to Write(), but I'd like confirmation. - What's the best locking practice when using streaming buffers? Presumably LockFlags.EntireBuffer is out of the question since the buffer is being played while it is being written to, but then should I use LockFlags.None or LockFlags.FromWriteCursor? - What does the CurrentWritePosition property correspond to? Is it just set to Write's bufferOffset parameter? Thanks!
  21. Hi all, (I'm sure there's a better forum somewhere online in which to post this question, but I'm not sure what it would be. Can anyone point me to a better forum?) I am trying to use JOrbis 0.0.17 in my game to decode Ogg Vorbis music files. Unfortunately, I am having a problem using the VorbisFile class. It may be a bug, or I may simply be misusing it. My problem is that VorbisFile.read() always returns 0. I'm using the http://www.vorbis.com/music/Hydrate-Kenny_Beltrey.ogg reference track to test my code. Here's the output I get:

     PCM Total: 11667456
     Position: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0
     Read returned: 0

     And here is my code that uses VorbisFile:

     package jorbistest;

     import com.jcraft.jorbis.*;

     public class Main {
         public static void main(String[] args) throws Exception {
             VorbisFile vf = new VorbisFile("Hydrate-Kenny_Beltrey.ogg");
             int length = (int)vf.pcm_total(-1);
             System.out.println("PCM Total: " + length);
             System.out.println("Position: " + vf.pcm_tell());
             byte[] buf = new byte[length];
             int count = 0;
             while (count < length) {
                 int[] bitstream = new int[1];
                 int readCount = vf.read(buf, 100, 0, 2, 1, bitstream);
                 System.out.println("Read returned: " + readCount);
                 count += readCount;
             }
         }
     }

     One disconcerting note is that VorbisFile.read() was a package-level method and I had to make it public to be usable from my test program. This bug seems like an indication that no one actually uses JOrbis's VorbisFile class and maybe it hasn't been tested properly. I'd really like to use VorbisFile because that would make my life much easier if I can get it to work. However, if that's not an option, then can someone point me to some tutorials for either JOrbis or regular libogg/libvorbis that demonstrate how to use the low-level classes to decode to PCM *and* demonstrate how to perform PCM seeking? Thanks for any help you can provide.
  22. Re: Demo: Normal Tanks (full version available now)

     Hey, very nice! I played through the first couple of levels and it's definitely fun! I especially like the graphics. A couple of minor notes:
     - An option to use relative controls would be nice (W, S, A, D for forward, reverse, turn left, turn right).
     - I noticed a couple of times that the landscape along the top edge of my screen would "pop" in as I was driving up. I have a widescreen monitor if that matters.
     - More weapons! Cannon and machine gun get a little old after a while.

     I myself have been working on a similar game: http://www.tanky-tank.com/tanky-tank/ Like your Normal Tanks, my Tanky-Tank is a 2D top-down tank combat game (you even use the mouse to aim & shoot). But I'm concentrating on multiplayer instead of singleplayer. And it's not as pretty as your game. ;-) A new version with tons of improvements should be released within a week.
  23. I'm in the process of porting my DirectSound audio engine to SDL. In DirectSound, I have the ability to change the frequency at which individual sound buffers are playing, thus allowing me to speed up / slow down (obviously changing pitch) sounds dynamically at run time. AFAIK, SDL doesn't have such an ability since all sounds must be mixed together at one particular frequency in the mixer callback. As with most SDL applications, I open sound with a particular frequency (in my case, 22050) and then convert each WAV on load to that frequency. What I'd like to do is (if a sound is playing at something other than 1.0x speed) dynamically convert the sound to a different speed (frequency) in my mixer callback so that it can then be mixed in with the main audio buffer. For example, if I wanted to play a sound at half the speed, I could convert it to a stream that was twice the frequency and then play that stream at the normal frequency, thus slowing the sound down by a half. Is there an algorithm I can use to do this? A link to an algorithm or tutorial would suffice. Thanks!
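To make the question concrete, here is the kind of conversion I mean, sketched as linear-interpolation resampling (the function name is mine; rate > 1.0 plays faster and higher-pitched, and rate = 0.5 gives the twice-length, half-speed stream from my example):

```cpp
#include <cstddef>
#include <vector>

// Sketch of linear-interpolation resampling: advance through the
// source at `rate` source-samples per output sample. rate = 0.5
// produces roughly twice as many samples, i.e. half speed (and half
// pitch) when the result is mixed at the original device frequency.
std::vector<float> Resample(const std::vector<float>& src, double rate) {
    std::vector<float> out;
    for (double pos = 0.0; pos + 1.0 < static_cast<double>(src.size());
         pos += rate) {
        std::size_t i = static_cast<std::size_t>(pos);
        double frac = pos - static_cast<double>(i);
        // Blend the two neighboring source samples.
        out.push_back(static_cast<float>(
            src[i] * (1.0 - frac) + src[i + 1] * frac));
    }
    return out;
}
```

A real implementation would want a better filter than linear interpolation to avoid aliasing, but this is the basic shape of the algorithm.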
  24. Re: Tanky-Tank: multiplayer action game

     Yeah, Death Zone is all Lava, so that explains that. Thanks! Doing some sort of lava-related animation is on my very long to-do list. ;-)