Memory Mapped Devices

5 comments, last by mind_wipe 13 years, 3 months ago
BTW, Merry Christmas Everyone!

So... I'm having trouble understanding this memory-mapped devices/files thing. I'm using it for DMA with sound cards and hope to find good uses for it in other areas as well. But for now, what I'm having trouble understanding is how a device knows, or is informed, about my read/write operations into the address block that mmap returned. How does it know? Do I write data to the map and, presto, the device is informed via some sub-system thingy? Or is there some *device specific* way that I would have to handle this? For example, do I write some sound data to the memory-mapped buffer and then use ioctl or something to tell the sound card: "Hey, I just wrote something here, use it!"? I'm confused... Please help! I've been at this for hours/days and can't figure this technique out.
Different device interfaces use different DMA mechanisms; you can get a great overview of the general process for ISA and PCI in this article.

In a nutshell, though, you can think of the bulk of the circuitry on your computer motherboard as facilitating two basic processes: RAM access (northbridge) and device access (southbridge). The two are controlled by the CPU, but they can work independently, which is how DMA avoids heavy CPU loads. Again, more detail is available in the article linked above.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Your question is vague. Are you writing to MMIO and asking how the device is affected by those writes or are you writing into some other buffer and asking how the device fetches that data? By the way, you really can't use DMA in general in user programs.
Just a vague memory from a computer science class, but I believe the sound card has an assigned DMA channel, and when a transfer is initiated on that channel, the system generates an interrupt signal that is caught by the card, and that's how the card "knows" that it has a buffer to process.

On some video game consoles, it's customary to set up a callback function that is run every time a specific interrupt is triggered (such as the display device's vertical or horizontal blank signal).

And if I remember correctly, each device that can receive interrupts has a table of interrupt vectors -- addresses of code to run when a specific interrupt signal is received.

It's basically events at the hardware level.
Thanks for the answers. I'm sorry I was so vague. What I was trying to use the DMA for was sound. I've seen this done in some commercial applications and couldn't quite reverse engineer the process enough to understand it entirely. Like I said before, I wasn't sure how the device knew when transfers were made and how to process them.

Take, for example, a sound card. You use mmap to set up DMA for that device, and then you write or read (e.g. playback or recording) information. Is there a layer somewhere in the kernel and/or hardware that catches those reads or writes and raises interrupts, or some other form of trigger, to notify the device that the buffer has been modified so it should process that data? You know, as if some underlying kernel process were calling/notifying the device once the read/write operation completes.

This is what I want to know, because I didn't see any code whatsoever that showed this explicitly. For example,

int   snd_device;
byte* snd_buffer;

snd_device = open( "/dev/dsp", O_RDWR );

// mmap init left out,
// but snd_buffer now points to the mmap'd region

// write sound using write()
write( snd_device, random_sample, 16 );

// write sound via mmap
snd_buffer[ 0 ] = random_sample;


Get the idea? So after I write() to the device, I'm assuming the function is hiding the details of how the sound device is notified about the write so that it will process (play) the data. If I use the snd_buffer obtained via mmap instead, how would that same process happen?

I don't believe there's some weird hook in the OS that watches the code when it writes... or maybe there is, in a way...

You know, I seem to distinctly remember that certain types of peripherals do this. Actually, when you set the device to "wait to receive more", any writes or reads on that resource auto-trigger (DSP_TRIGGER, in my case) the interrupt or event for the device to process the data. I sure hope I'm getting close.

I guess it's the whole DMA thing I'm struggling with. You prolly helped in more ways than you know, ApochPiQ.

Pretty much, I wasn't happy with a lot of the SDKs or libraries I found on Linux and wanted to write my own. Plus, I've seen many examples of this in commercial applications, so I thought I should do the same. Second, this lets me become more familiar with the kernel and its little details.

I was hoping to use the same technique for joysticks and web-cams too. But thanks for all your help so far guys, I mean it!
/dev/dsp is just an OSS interface and all you're doing is writing to a shared buffer that the driver then passes to the soundcard somehow. Some drivers will point the hardware at that buffer and the hardware will start reading from that same buffer, other drivers will copy that buffer elsewhere and then tell the hardware what to do with it. If a driver allows you to mmap /dev/dsp as opposed to using read/write then it will provide you with a circular buffer. This circular buffer may be the same buffer that the hardware is reading from or the driver may be reading from it and passing it to the hardware using some other mechanism. Either way, no one will know if/when you've written to this mmapped buffer, the only thing the driver will tell you is where it or the hardware is currently reading from (via an ioctl) and you'll have to make sure you've written valid data there ahead of time.
Well, thanks for all the replies. I think I figured it out. You call ioctl with SNDCTL_DSP_SETTRIGGER using PCM_ENABLE_INPUT and PCM_ENABLE_OUTPUT. That is, if the driver supports DSP_CAP_TRIGGER and DSP_CAP_MMAP, you acquire a ring buffer (i.e. the circular buffer) using mmap and make your reads/writes to it, using SETTRIGGER to enable input and output. If the trigger bit is enabled, the driver or device is reading or writing the buffer. If the device isn't enabled, then you are the one writing to the buffer... or should be, unless you turned the device off. It's very similar to DirectSound's lock and unlock mechanism. The main difference here is that you have to implement the fancy cover code yourself.

The device will obviously know how to process the data via hardware configuration parameters. I hope Linux shares this resource so that I don't have an accident with some other application's sound. I don't need supreme ownership, just a good enough level to work with.

So that's it. I just need to write the audio conversion routines for a few formats and a simple software mixer and I'll be done.

Do any of you know if this is standard? I read about /dev/audio being a compatibility fix for Sun systems. I'm using Debian right now. What about BSD? Fedora/Red Hat? Or Novell?

