fiber optics

Started by
6 comments, last by way2lazy2care 11 years, 3 months ago

so, i was sitting here watching Modern Marvels, and it was talking about fiber optics, which use different beam intensities to represent 1/0, and it occurred to me: why don't they use a wider range of intensities to represent, say, a hex value rather than a binary one? it seems like it'd be as simple as decoding the intensity, and you'd encode 4 bits per pulse instead of 1.

i'm assuming there has to be a technical reason why fiber optics would only transmit 1 bit at a time?

Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.

I believe multi-mode fiber does carry more than binary data -- though I think it's split into different channels rather than one big channel. Regardless, channels can always be teamed together for the same effective bandwidth increase.

The reason it's not done the way you describe is that it's always easier to distinguish simple on/off binary signals than to measure and categorize signals with multiple levels. It also makes for a more robust signal.

As an analogy, imagine you're trying to communicate non-verbally with a friend across a dark room, exchanging messages in ASCII. To signal a zero you hold up a black card, and to signal a one you hold up a white card. But, you say, "Aha! This would go much quicker if we signaled two bits at a time using four cards -- black, dark grey, light grey, and white!" Mathematically this is sound, but it also increases the chance of error -- depending on the ambient light, you might sometimes confuse black with dark grey, dark grey with light grey, or light grey with white. You could slow down to be more sure of your decoding, but then you've defeated the purpose of using multiple levels. The problem gets worse the more levels you add to the signal, and all of this is to say nothing of attenuation and weakening signals.
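To put rough numbers on the card analogy, here's a minimal Python sketch (my own toy model, not any real modulation scheme): symbols are evenly spaced levels on a 0-1 scale, Gaussian noise is added in transit, and the receiver picks the nearest level. With the same noise, the 4-level (2 bits/symbol) scheme mis-decodes far more often than the 2-level one.

```python
import random

def decode(level, thresholds):
    """Map a received analog level to the nearest symbol index."""
    for i, t in enumerate(thresholds):
        if level < t:
            return i
    return len(thresholds)

def symbol_error_rate(n_levels, noise, trials=100_000):
    """Estimate how often noise pushes a symbol across a decision boundary."""
    step = 1.0 / (n_levels - 1)                      # ideal spacing between levels
    thresholds = [step * (i + 0.5) for i in range(n_levels - 1)]
    errors = 0
    for _ in range(trials):
        sym = random.randrange(n_levels)
        received = sym * step + random.gauss(0, noise)
        if decode(received, thresholds) != sym:
            errors += 1
    return errors / trials

random.seed(1)
# Same noise, more levels => far more decoding errors:
print(symbol_error_rate(2, 0.15))
print(symbol_error_rate(4, 0.15))
```

Adding more levels shrinks the gap between decision thresholds, so the same noise crosses a boundary much more often.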

At the same time, another means of increasing total bandwidth is simply to run the binary signal twice as fast. Assuming the hardware can keep up, this method is preferable because there's no need for the error-prone, slow process of decoding various signal strengths. Given the properties of electromagnetic signals (including light), I'm reasonably sure it always holds that a faster binary signal is more practical than a slower multi-level one -- at least insofar as transmitting discrete data (information that is not naturally analog) is concerned.

There's an analogy to be made to solid-state storage (like SSDs or SD cards): the fastest, most reliable storage uses Single-Level-Cell (SLC) memory, which stores just one bit per NAND cell. Because this is expensive, it's mostly limited to enterprise and enthusiast SSDs. Common high-performance drives today use Multi-Level-Cell (MLC) memory, which stores two bits per cell in a manner similar to what you describe (using different voltage levels to represent different bit patterns); however, these cells are slower to read and write, and less stable, than SLC cells. Because MLC is *still* relatively expensive, we're now starting to see SSDs with Triple-Level-Cell (TLC) memory, which is even slower and less stable than MLC but around half to two-thirds the cost per gigabyte. For flash memory chips, MLC seems to be the commercial sweet spot at the moment, but that's a different set of design parameters than fiber optics.
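As a back-of-the-envelope illustration of why more bits per cell hurts (toy numbers of my own, not real NAND specs): each extra bit doubles the number of voltage levels a cell must hold apart, which halves the margin between adjacent levels.

```python
# Toy illustration, not real NAND specs: a fixed cell voltage range
# divided among 2**bits levels. Each extra bit halves the margin,
# which is why MLC/TLC are slower to sense and less stable than SLC.
def voltage_window(bits_per_cell, v_range=3.2):
    """Voltage margin per distinguishable level for a given cell density."""
    return v_range / (2 ** bits_per_cell)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {2 ** bits} levels, {voltage_window(bits):.2f} V per window")
```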

throw table_exception("(? ???)? ? ???");

I believe they are messing with the polarization (if that's what it's called) of the light to get more channels (you know, like how polarizing sunglasses work)

o3o

In the early days, many computers embraced this idea and took it to the logical extreme: they were fully analog.

They were notoriously quirky, temperamental, difficult to get consistent results from, and they essentially died off for a reason.


I don't remember the source offhand, but I recall reading a paper once that argued (and possibly also mathematically proved) that base 2 is the most robust encoding mechanism for general purpose data transmission and computation. It hits a very special sweet spot.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Digital transmission is more error tolerant than analog. Digital equipment tends to be far less temperamental than analog.

Digital means "using digits", whole numbers, as opposed to continuous values. If you were to use, for example, a varying amplitude to represent 16 separate values on the same frequency, you would have to make sure both sender and receiver agree on exactly what amplitude represents what value (and the breakpoints between)... any attenuation through the cable would result in value shifts and loss of information and slight variances between sender and receiver might make a 12 appear as a 13 or something. It is possible to multiplex a signal using non-binary digital, depending on the carrier medium.
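The "12 appearing as 13" scenario is easy to sketch. Here's a toy model of my own (hypothetical numbers): 16 amplitude levels spread across a full scale of 1.0, with 5% attenuation in the cable. The attenuation shifts the decoded multi-level symbol, while a plain binary signal under the same attenuation survives.

```python
# Toy model (hypothetical numbers): 16 amplitude levels on a 0-1 scale,
# sent through a cable that attenuates the signal by 5%.
def encode(value, n_levels=16, full_scale=1.0):
    """Map a symbol (0..n_levels-1) to a transmitted amplitude."""
    return value * full_scale / (n_levels - 1)

def decode(amplitude, n_levels=16, full_scale=1.0):
    """Map a received amplitude back to the nearest symbol."""
    return round(amplitude * (n_levels - 1) / full_scale)

ATTENUATION = 0.95

print(decode(encode(12) * ATTENUATION))       # 16-level: a 12 arrives as 11
print(decode(encode(1, 2) * ATTENUATION, 2))  # binary: a 1 still arrives as 1
```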

Turns out fibre actually uses phase, not amplitude, to send digital binary data, because (a) it is less affected by spectral attenuation (i.e. dimming at different frequencies over the length of the fibre) and (b) switching between the 1 state and the 0 state is faster. It's also self-clocking, so sideband control signals aren't required. Using phase limits transmission to binary, although polarization could be used to multiplex on the same channel.
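The post above doesn't name the exact line code, so as a stand-in, here's Manchester encoding -- a classic self-clocking binary scheme where each bit is carried as a transition rather than a level, so the receiver can recover the clock from the data itself.

```python
def manchester_encode(bits):
    """Manchester (IEEE 802.3 convention): 0 -> high,low ; 1 -> low,high.
    Every bit has a mid-bit transition, so the clock rides along with the data."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halves):
    """A rising mid-bit transition decodes as 1, a falling one as 0."""
    return [1 if halves[i] < halves[i + 1] else 0
            for i in range(0, len(halves), 2)]

bits = [1, 1, 0, 1, 0, 0]
assert manchester_decode(manchester_encode(bits)) == bits
```

The cost is that the line rate is twice the bit rate, which is the usual trade for not needing a separate clock signal.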

Stephen M. Webb
Professional Free Software Developer

thanks for the info guys=-)

Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.
You should also keep in mind that the fibre itself is just an optical cable; what actually gets sent down it depends on what's connected at either end.

I have three optical transceivers kicking around here (Pulled from service more than a decade ago, no idea when it was originally installed) that were based on a timed filtered light pulse. It uses different shades of light (I want to say 5, but it may be 6), which are emitted by individual sources and then focused down the same line in combined pulses. At the far end they hit an array of sensors with a filter plate over them so that each sensor sees only one of the original colours of light. So if you were to point the thing at a camera you would see 2^5 or 2^6 different colours due to mixing with each clock pulse, but it is still a binary on/off signal.
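A sketch of how that kind of transceiver stays binary per channel (my own toy model of the hardware described, assuming 5 shades): each shade of light is one independent on/off channel, a combined pulse is just the set of channels that are lit, and the filtered sensors recover each channel separately.

```python
# Toy model of the described transceiver: 5 independent on/off light
# channels combined into one pulse, separated again by per-channel filters.
CHANNELS = 5

def encode_pulse(bits):
    """Pack one bit per wavelength channel into a single combined pulse."""
    assert len(bits) == CHANNELS
    return sum(b << i for i, b in enumerate(bits))

def decode_pulse(pulse):
    """Each filtered sensor sees only its own channel's on/off state."""
    return [(pulse >> i) & 1 for i in range(CHANNELS)]

bits = [1, 0, 1, 1, 0]
assert decode_pulse(encode_pulse(bits)) == bits
print(2 ** CHANNELS)  # distinct mixed "colours" an observer could see: 32
```

So a camera pointed at the output would see up to 32 mixed colours, but every individual channel is still a plain binary signal.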

I honestly have no idea what styles are currently in common use.

I also remember reading a paper years ago on a more complex filtered-light system. It played on the fact that the filters at the far end did not match the emitters 1:1 (one emitter per receiver); instead, emissions from one emitter could be picked up by multiple receivers. I really wish I could remember more of the details, as it apparently had strong error checking supporting three-state digits (0, 1, 2), and some cool mathematical property stemming from emitter A triggering receivers B and C, with one possibly in a lower state than emitted due to filtering.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

Binary plays nice with a lot of hardware/logic. More than likely you'd just convert binary -> whatever your fiber solution needs -> binary, and I doubt that would be any faster than just sending binary down the pipe outright.

This topic is closed to new replies.
