DirectSound is essentially a driver that bridges the gap between third-party libraries (such as OpenAL, FMOD, etc.) and the hardware. DSound does emulate some effects and provides access to hardware acceleration where available, but at ground level it's precisely that and nothing more: a driver.
Now, I'm not too familiar with OpenAL overall, but it's likely just a library like FMOD, which builds on top of native drivers depending on what operating system you're compiling on and what is available. OpenAL and FMOD (and other libraries) also provide additional functionality, like time-to-frequency domain conversion (essentially raw FFT and IFFT calls), effects (reverb, delay, etc) and format support (easy loading of audio file formats).
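To make the "time-to-frequency domain conversion" point concrete, here is a minimal sketch of what those raw FFT/IFFT calls do, using NumPy rather than any particular audio library's API (the names and buffer sizes here are my own choices, not FMOD's or OpenAL's):

```python
import numpy as np

# A 440 Hz sine sampled at 44.1 kHz
sample_rate = 44100
n = 1024
t = np.arange(n) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)

# Time -> frequency domain: the spectrum's peak bin reveals the tone's pitch
spectrum = np.fft.rfft(signal)
peak_bin = int(np.argmax(np.abs(spectrum)))
peak_hz = peak_bin * sample_rate / n  # accurate to within one bin (~43 Hz here)

# Frequency -> time domain: the inverse transform recovers the original samples
reconstructed = np.fft.irfft(spectrum, n=n)
```

Effects like convolution reverb and spectral filtering are built on exactly this round trip: transform, manipulate the spectrum, transform back.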
In short, you're probably not thinking of writing a driver, in which case "writing DirectSound from scratch" doesn't really make much sense. More likely you're thinking of implementing various library functionality, such as effects and the like (just to be clear: if you do - for whatever reason - want to write a driver, then I can't help you). In that case, I would suggest two things:
1) Start by reading the book I linked to. I'm sorry to say it, but it's fairly apparent that you're not yet sure what you want to do, and building a knowledge base to work from is the place to start. DSP is one of the broadest and most demanding fields out there, covering everything from circuit design to programming synthesizers to implementing an enormous slew of effects in code.
2) If you don't feel comfortable just reading up on things and really want to do some coding, try an icebreaker assignment: keep reading and start writing something like a really simple additive synthesizer (say, 2 oscillators using a few wavetables and a couple of filters). You will never figure out how this stuff works from code alone (which is why the reading is so important), but conversely, implementing things like filters straight from the theory is also highly technical. My approach, which I consider pretty healthy, is that it's essential to understand what each knob (on a synthesizer or audio control panel) does and how it affects the signal, but it isn't imperative to understand the underlying mathematics. The same applies to code, unless you really want to go the extra mile.
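To give a feel for the scale of that icebreaker, here is a rough sketch of such a synth voice in Python: two wavetable oscillators mixed and run through a one-pole low-pass filter. Every name and constant here is my own illustration, not a standard API, and a real synth would add envelopes, more wavetables, and better interpolation:

```python
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 1024

# Precomputed sine wavetable (a real synth would also have saw, square, etc.)
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, num_samples, table=sine_table):
    """Read through the wavetable at a rate proportional to freq."""
    phase = 0.0
    step = freq * TABLE_SIZE / SAMPLE_RATE
    out = []
    for _ in range(num_samples):
        out.append(table[int(phase) % TABLE_SIZE])
        phase += step
    return out

def one_pole_lowpass(samples, alpha=0.1):
    """Simple smoothing filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def additive_voice(f0, num_samples):
    """Two oscillators (fundamental plus one octave up), mixed, then filtered."""
    osc1 = oscillator(f0, num_samples)
    osc2 = oscillator(2 * f0, num_samples)
    mixed = [0.6 * a + 0.4 * b for a, b in zip(osc1, osc2)]
    return one_pole_lowpass(mixed)
```

Even this toy already forces you to confront phase accumulation, mixing headroom, and filter behavior, which is exactly the kind of hands-on grounding the reading will then explain.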
As another thing, you might want to start by examining how software synthesizers work and what all the different knobs do. Let me know if you would like some suggestions.
As for code, here are two invaluable resources to get you started: KVR Audio (check out the forums for active DSP-related discussions) and musicdsp.org (check out the wide variety of user-submitted source code listings).
If you're wondering what a software synth has in common with an audio library, the answer is that a synth generally boils down to a DSP library in and of itself, with the distinction that all of its modules are specialized and arranged to manipulate sound in a specific sequence, as opposed to being standalone functions.
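That "fixed sequence of specialized modules" idea can be sketched in a few lines. This is purely illustrative (the module names are invented): each module is a function from buffer to buffer, and the synth voice is just a fixed chain of them, whereas a DSP library would expose the same pieces as standalone routines:

```python
# Each "module" maps a sample buffer to a sample buffer.

def gain(factor):
    """Amplify every sample by a constant factor."""
    return lambda buf: [factor * x for x in buf]

def hard_clip(limit=1.0):
    """Clamp samples into [-limit, limit] (a crude distortion stage)."""
    return lambda buf: [max(-limit, min(limit, x)) for x in buf]

def chain(*modules):
    """Compose modules into a fixed processing sequence, like a synth's signal path."""
    def run(buf):
        for module in modules:
            buf = module(buf)
        return buf
    return run

voice = chain(gain(2.0), hard_clip(0.8))
# voice([0.1, 0.5, -0.6]) -> [0.2, 0.8, -0.8]
```

Swap the hard-wired `chain(...)` for user-composable calls and you have, in miniature, the structural difference between a synth and a DSP library.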
Hopefully I understood you correctly and what I wrote helps!