
Chadivision

Member
  • Content Count: 44
  • Joined
  • Last visited

Community Reputation: 130 Neutral

About Chadivision

  • Rank: Member
  1. Chadivision

    MIDI I/O in DirectX

    It sounds like the Windows multimedia DLL is going to be a better way to go. I've already written some audio code using XAudio2, so using DirectMusic along with it would be a pain. (As far as I understand it, you can't have two different DirectX SDKs installed at the same time. Am I correct about this?) I am coding in C++. I've been meaning to learn some C#, so whenever I get around to it, I'll check out the link that you gave me. Thanks. (A rough sketch of the multimedia-API approach appears after this post list.)
  2. Chadivision

    MIDI I/O in DirectX

    I've got the March 2008 DirectX SDK, and I didn't see anything in the documentation about using MIDI input and output ports. Is there currently a way in DirectX to receive MIDI input and send signals to a MIDI output port (for playback on hardware synths), or am I going to need to use an older version of DirectX to do this? (DirectMusic, maybe?) If DirectX isn't the way to go, I'm open to using other APIs. Does anyone have any suggestions? Basically all I'm looking for is an API that lets me enumerate MIDI I/O devices and send and receive MIDI signals using those devices' ports. Thanks.
  3. I hadn't really thought about delta coding. There are a couple of situations that it could work for in my application, but I'm a little hesitant to have the modules output any kind of compressed signal. What I'm going for is to create a truly modular synthesizer--one where the output of any module can be plugged into the input of any other module, and I wouldn't want to have only certain modules be able to work with compressed signals. I guess I could have a decompression function built into the base class that all of my modules will derive from, but I could run into a situation where the signal is being compressed and decompressed each time it runs through another module. That might be a little more complexity than I really need, but I am going to think about it. It might not be a bad way to go.

     What I was thinking about was to have each module generate either a double or a float (chosen by the designer of the module) for each of its outputs and send them to an object that stores all of the outputs for all modules. When a module wants to use that value as an input, it calls a function on that object to retrieve the value. In that function call, I could specify whether I'm requesting a double or a float (depending upon the level of precision needed for that particular module), and the object could return the appropriate type (doing a conversion if necessary). That would make it so that I could still patch any output into any input, regardless of whether it's a double or a float. Obviously taking a double output into a float input would sacrifice some precision, but there are a lot of modules where that wouldn't create an audible difference. I could have the user interface notify the user that a conversion has taken place, and then the user could decide whether it's important enough to worry about. (A rough sketch of this output-store idea appears after this post list.)
  4. That makes sense. Thanks for clearing that up. Also, the more I think about it, I think I'm going to use a mixture of floats and doubles. Even if I get comparable speeds, going to all doubles is still doubling the amount of memory, which would limit the number of synthesis modules that could be used in a patch. There are some types of modules that could benefit from using doubles, but most of them would only need to use doubles internally and could then convert the output to a float. Or I could write two versions of some modules--one that outputs a float and one that outputs a double. So I think I'll design a flexible system that can handle both and then let the user decide which is appropriate for the situation.
  5. I could be wrong, but I was under the impression that certain math functions can be rewritten using SSE in order to get better performance. What I was thinking was to wrap the math functions so that, after profiling and determining where the performance issues are, I can rewrite the wrapper itself using SSE (instead of having it simply forward to the standard math function). That way I could optimize the function but not have to track down and change every single function call in all of my code. But I really don't know that much about SSE at this point, so I could be way off base here. (A rough SSE sketch appears after this post list.)
  6. I guess using typedefs is probably the best way to go for right now...keep my functions generic until I have enough code written to profile it and see which way to go. That would also give me the flexibility to easily change it in the future. It probably makes sense to wrap any <math.h> function calls with my own inline functions. That way I can get the code up and running without having to create an entire math library from scratch. Then I can profile it and only optimize the functions that are causing performance problems. I know that optimizing too early can cause all kinds of problems, but so can charging ahead without a plan. I'm trying to find a balance between the two. (A small sketch of the typedef-and-wrapper idea appears after this post list.)

     I like the idea of using the GPU to do the processing. I think I'll look into that a little more, but I might end up building the user interface on DirectX, so my application may end up needing all of the graphics card's power to do graphics. I guess I could have the audio engine support a few different options and have the app choose between them at startup. So many choices! Sometimes I miss writing BASIC on my Atari 400. That was a simpler time.
  7. Thanks for the feedback. I think I probably will use doubles for the audio code. I may or may not be using 3D acceleration (right now I'm mostly just trying to figure out the design of the audio engine itself--I haven't really thought too much about the UI yet), but if I do I'll definitely be using floats for that. I probably will write a math library that uses SSE, though that's a topic that I haven't really learned too much about yet--aside from just a basic understanding of what it is.
  8. I'm in the VERY early stages of planning for a real-time audio synthesis application. It's going to be a native Windows application, written in C++, that uses DirectX for audio output, but all of the signal generation and processing will be done with my own code (rather than using DirectX to generate effects, etc.). I wrote a non-real-time synthesis program about ten years ago, and I used floats to represent the audio signals (which I later converted to 8- or 16-bit samples that I could write into a wave file; a small sketch of that conversion appears after this post list). Floats work fine for many types of synth modules, but anything that requires a lot of operations on the same data (a complex filter or a reverb, for example) could definitely benefit from the extra precision offered by a double instead of a float.

     This time around, I'm thinking of using doubles, instead of floats, for all audio signals. Right now I'm running Windows XP (a 32-bit platform), but this program is going to be a very long-term project, and I'm sure that it will mainly be used on 64-bit operating systems in the future.

     I don't know much about operating systems, so here's my question... In the not-too-distant future, 32-bit operating systems are going to be far less common than they are now. How should this affect my decision on whether to use floats or doubles? Obviously, if I use floats I can make the program use less memory and it seems to me that it should run faster (am I right about that?), but I really think I'm going to want the precision of doubles. Will a 64-bit operating system be much more efficient at doing math on a 64-bit double? It seems to make sense to me, but I really don't know much about operating systems, chips, etc. Sorry for the long post. I just wanted to make sure that I gave all the details of the project. Thanks.
  9. Chadivision

    code organization

    Generally, it is true that you want only function declarations (not implementations) in your header files, but there are sometimes exceptions. If you have a very short function (like an accessor function, for example) it is sometimes more convenient to just implement the function in the header. Most of the time I try to stay away from this approach though. One other thing to consider...I may be wrong about this, but if you implement a function inside the class declaration, doesn't the compiler treat that as an inline function even if you don't use the "inline" keyword? I may not be remembering the details of this correctly, but I think I remember reading something about that. If you are accidentally inlining a bunch of functions (especially big functions), that can cause serious code bloat. Another reason to keep your declarations separate from your implementations is that it makes it easier to look at the code and know what's going on. Sometimes you just want to look at the function's declaration (to remember its return type and argument list) without being bogged down by seeing the implementation too. (There's a short illustration of the implicit-inline point after this post list.)
  10. Chadivision

    Where to start?

    I have never worked with C#, but I've read a little bit about it and it looks like a good language. You may still want to consider C++ instead simply because, for game design, it is currently the most common language. Most of the books that you buy and the tutorials that you download will use C++. If you run into trouble and post a question on the GameDev.net forums, more people will be able to help you with C++ than with C#. Don't get me wrong. I'm not trying to bash C#. It very well may be the next big thing, but as of right now, C++ is more widely used. No matter what language you decide to use, make sure to start small. At first, don't worry about how to make a game. Just learn the language. Once you understand the programming language, start working through some game design tutorials. Next, write some game-like demos, but not full games. Usually the best thing to do here is to get code from a tutorial or from the CD-ROM of a game design book. Compile it and run the demo, then study it to figure out how it works. When you start getting a pretty good idea of what's going on, start experimenting by modifying the code. You can learn a lot by messing around with someone else's code. Finally, make sure to keep in mind that this is a slow process. There will be times when you're pulling your hair out, trying to figure out why your code is not working. Don't sweat it. It's just part of the process. Good luck.
  11. Chadivision

    How do I get back in the loop?

    Oops. I forgot to log in. That last post was mine.
  12. You could use a quaternion camera, but it's not necessary unless you want to be able to yaw, pitch, and roll. Unless you're writing a flight simulator (or something similar) you should only need to yaw and pitch, so Euler angles will work fine. (A small yaw/pitch sketch appears after this post list.)
  13. Chadivision

    Petzold style Direct 3D Book?

    LaMothe spends most of the book explaining graphics stuff, but he does show how to implement a collision detection system. He also goes into character animation, but not extremely in-depth. He shows how to do keyframe animation. He also mentions skeletal animation and gives a brief overview of it, but he doesn't actually implement it in his engine. One thing that I really liked about this book was that he actually designed the game engine while he was writing the book, so you also get to learn something about the thought process and why he made the design choices that he did. When you're done with this book, you will know a lot about the inner workings of a game engine and you'll have a solid foundation to start learning DirectX. The engine that he builds is actually fairly advanced and supports materials, textures, and several different lighting types. You could build a real game on it if you wanted to, but it would be pretty limiting because his engine doesn't use any hardware acceleration. The reason for this is that he wanted to teach how everything works instead of just having an API do it for you. This was my first 3D programming book. I think going through it made it easier to learn DirectX because I already understood the underlying principles, and all I had to do was learn the API-specific stuff.
  14. Chadivision

    what the "game engine" includes?

    I'm not quite sure what you're asking, and I'm not a DirectX expert (more of a hobbyist), but I'll give it a shot. Basically, from what I understand, DirectX is what it is and there is no way to extend it. But if you learn the whole API, you should be able to create just about any kind of effect that you want (within the limits of your CPU and graphics adapter, of course). This site has tons of articles and forum posts about how to create just about any kind of effect that you can imagine. The use of vertex shaders and pixel shaders is about the closest thing to "extending the API". Shaders allow you to write your own miniature programs that run on the graphics adapter's GPU. These programs replace parts of the fixed-function pipeline, so you get the flexibility of being able to write your own transformation and lighting code, while retaining the performance benefits of hardware acceleration. I hope that answered your question.
  15. Chadivision

    Petzold style Direct 3D Book?

    There are a couple of books that I have found to be helpful. "Special Effects Game Programming with DirectX" by Mason McCuskey (series editor Andre LaMothe). I thought that this book would be just about special effects, but it actually has some really good beginner-level DirectX information. On top of that, it teaches you how to create fire, water, particle systems, lens flares, and a lot of other effects. "Tricks of the 3D Game Programming Gurus" by Andre LaMothe. This book is not really about DirectX. It's about 3D graphics in general. It does use DirectX, but only for setting up the back/front buffers. If you already know a lot about 3D graphics and are just looking to learn DirectX, this book isn't for you, but if you need more background information on general 3D concepts it's a great resource. I've found that a lot of DirectX books gloss over some of the basics of 3D graphics (coordinate systems, matrix math, quaternions, lighting calculations, etc.), but this book gives you a lot of detail. LaMothe builds a software-based 3D engine step by step and explains all of the math and theory as he goes. He spends a lot of time "reinventing the wheel" by writing code for things that could be handled automatically by DirectX, but that's what I like about it. It really gives you a good understanding of what's going on behind the scenes. If you get "Tricks of the 3D Game Programming Gurus", you'll also want to pick up a book that's specifically about DirectX. The software-based engine that he builds in this book doesn't use any hardware acceleration. Working through the examples is a great way to learn, but the engine is so slow that you won't want to build an actual game on it.
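
As a rough, hedged sketch of the multimedia-API route mentioned in posts 1 and 2 (not code from the original threads), enumerating MIDI output devices and sending a note through winmm looks roughly like this; midiInGetNumDevs/midiInOpen are the input-side counterparts:

    // Minimal winmm MIDI output sketch: list devices, then send a Note On/Off.
    // Link against winmm.lib. Error handling is kept to a minimum.
    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    int main()
    {
        UINT numDevs = midiOutGetNumDevs();          // how many MIDI out ports?
        for (UINT i = 0; i < numDevs; ++i)
        {
            MIDIOUTCAPSA caps;
            if (midiOutGetDevCapsA(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR)
                printf("Device %u: %s\n", i, caps.szPname);
        }
        if (numDevs == 0)
            return 0;

        HMIDIOUT hOut = NULL;
        if (midiOutOpen(&hOut, 0, 0, 0, CALLBACK_NULL) == MMSYSERR_NOERROR)
        {
            // Note On: status 0x90 (channel 1), middle C (60), velocity 100.
            midiOutShortMsg(hOut, 0x90 | (60 << 8) | (100 << 16));
            Sleep(500);
            // Matching Note Off (status 0x80).
            midiOutShortMsg(hOut, 0x80 | (60 << 8));
            midiOutClose(hOut);
        }
        return 0;
    }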
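
Post 3 sketches an object that stores every module's output and converts between double and float on retrieval. A rough illustration of that idea (the class and function names are made up for the example, not taken from the original post):

    // Illustrative sketch: modules write a float or a double; readers ask for
    // whichever precision they need and the store converts, so any output can
    // still be patched into any input.
    #include <string>
    #include <unordered_map>

    class OutputStore
    {
    public:
        void setFloat(const std::string& id, float v)   { slots_[id] = Slot{ true,  v }; }
        void setDouble(const std::string& id, double v) { slots_[id] = Slot{ false, v }; }

        float  getFloat(const std::string& id) const  { return static_cast<float>(slots_.at(id).value); }
        double getDouble(const std::string& id) const { return slots_.at(id).value; }

        // Lets the UI warn the user when a double output is feeding a float input.
        bool storedAsFloat(const std::string& id) const { return slots_.at(id).isFloat; }

    private:
        struct Slot { bool isFloat; double value; };  // stored at the widest precision, tagged with its source type
        std::unordered_map<std::string, Slot> slots_;
    };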
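
Post 5 talks about wrapping math routines so they can later be rewritten with SSE. A hedged sketch of what such a rewrite can look like (the function names are placeholders):

    // A gain routine in plain C++ and an SSE version that multiplies four
    // floats per instruction; either body can sit behind the same wrapper so
    // call sites never have to change.
    #include <xmmintrin.h>   // SSE intrinsics

    void applyGainScalar(float* samples, int count, float gain)
    {
        for (int i = 0; i < count; ++i)
            samples[i] *= gain;
    }

    void applyGainSSE(float* samples, int count, float gain)
    {
        __m128 g = _mm_set1_ps(gain);
        int i = 0;
        for (; i + 4 <= count; i += 4)               // bulk of the buffer, four samples at a time
        {
            __m128 v = _mm_loadu_ps(samples + i);
            _mm_storeu_ps(samples + i, _mm_mul_ps(v, g));
        }
        for (; i < count; ++i)                       // leftover samples
            samples[i] *= gain;
    }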
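
Post 6 mentions using typedefs and thin wrappers around the <math.h> functions so the float/double decision and any later optimization stay in one place. A small sketch of that arrangement (names are placeholders):

    #include <cmath>

    typedef double Sample;   // flip this one typedef to move the whole engine to float

    // Thin inline wrappers: today they just forward to the standard library,
    // but their bodies can be replaced later without touching any callers.
    inline Sample synthSin(Sample x) { return std::sin(x); }
    inline Sample synthExp(Sample x) { return std::exp(x); }
    inline Sample synthPow(Sample b, Sample e) { return std::pow(b, e); }

    // Example caller: one sample of a sine oscillator, phase in [0, 1).
    inline Sample sineSample(Sample phase)
    {
        const Sample twoPi = 6.283185307179586;
        return synthSin(twoPi * phase);
    }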
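
Post 8 mentions converting float signals to 8- or 16-bit samples before writing a wave file. The 16-bit case is roughly this (a hypothetical helper, not from the original post):

    #include <cstdint>

    // Clamp to [-1, 1] so clipping is at least well defined, then scale to the
    // signed 16-bit range used by standard PCM wave files.
    int16_t floatToInt16(float s)
    {
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        return static_cast<int16_t>(s * 32767.0f);
    }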
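
Post 9 asks whether a function implemented inside the class declaration is treated as inline even without the keyword; it is. A short illustration (the class is invented for the example):

    class Mixer
    {
    public:
        float volume() const { return volume_; }   // defined in-class: implicitly inline

        void setVolume(float v);                    // declared only; the definition lives in the .cpp

    private:
        float volume_ = 1.0f;
    };

    // Mixer.cpp
    // void Mixer::setVolume(float v) { volume_ = v; }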
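
Post 12 says yaw and pitch alone are fine with Euler angles. A small sketch of the view direction that falls out of those two angles (the names and conventions are assumptions: right-handed coordinates, +Y up, angles in radians):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 forwardFromYawPitch(float yaw, float pitch)
    {
        Vec3 f;
        f.x = std::cos(pitch) * std::sin(yaw);
        f.y = std::sin(pitch);
        f.z = std::cos(pitch) * std::cos(yaw);
        return f;   // already unit length, since cos^2 + sin^2 = 1
    }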