
how do computers work?

This topic is 4833 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Closer to why. I know HOW, but I don't know why it will "know" to do this or that. If it is too long to type, is there a link on why?

Wow, man.

=)

That question can be answered (a little) by taking an architecture and assembler course at college. As for the true whys, you'll want the entire Comp Sci major. In fact, you really want to go CmpSci/Electrical Engineering to understand the full-blown *why* and *how*.

Or you can spend a lot of time with Google =) Maybe someone else here will know of a site you can hit up that'll give you the skinny.

I doubt you truly know HOW they work, or you would understand why. It all comes down to electrons finding their way through millions of transistors that make up logic gates (AND, OR, etc.) to either set or clear the states of memory bits. It's an entire major figuring out how all the electrical components actually work.

You could probably get basic ideas by reading through the nested pages starting here.

Hello, from your reasoning I would guess you want to know how those "data signals" are actually read. The communication between software (intangible) and hardware (tangible) is probably something not many people think about.

Basically you can look at it like a switch: an electronic signal is either present or not present on the switch's activator. If there is a signal, current is allowed to pass through; if there is no signal, it is not. Storage is usually made with a circuit called a flip-flop, or with a capacitor. ROMs are an easy illustration to understand. They contain diodes and/or transistors that are arranged in a matrix fashion. Programming is done by severing certain connections so that part of the circuit does not conduct electricity, and logic gates (AND, NOR, OR, XOR, NAND, etc.) can be formed this way.
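As an illustrative sketch (Python standing in for wires, not real hardware), here is how two cross-coupled NOR gates form an SR latch, the simplest flip-flop, which is how a circuit can "remember" a bit:

```python
# A minimal sketch of an SR latch: two NOR gates feeding each other's inputs.
# The feedback loop is what stores the bit.

def nor(a, b):
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s, r, q, q_bar):
    """Settle the cross-coupled NOR pair by iterating until stable."""
    for _ in range(4):  # a few passes are enough for the feedback to settle
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                        # latch starts cleared
q, q_bar = sr_latch(1, 0, q, q_bar)    # pulse Set   -> stores a 1
print(q)                               # 1
q, q_bar = sr_latch(0, 0, q, q_bar)    # no inputs   -> the bit is remembered
print(q)                               # 1
q, q_bar = sr_latch(0, 1, q, q_bar)    # pulse Reset -> stores a 0
print(q)                               # 0
```

The middle call is the interesting one: with both inputs off, the gate outputs hold each other in place, which is the "storage" the post describes.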

These websites shed some light on the topic.

40-bit TTL computer

A plethora of homemade computers

If you have an understanding in digital electronics then it would be easy to proceed more in depth.

Also keep in mind that the source you compile and execute is not what talks to the hardware. You tell Windows what you want to do, and Windows talks to the drivers, which talk to the hardware, which probably talks to something else.

There is an interesting book I've been reading lately called "The Most Complex Machine" by David Eck. At least I've read the first three chapters... I'm not too sure how the rest of it is. But the first three chapters are cool, talking about how to design a simple (very simple) computer. The design is only on paper, though; even a simple CPU is very complicated to actually build.

Anyway, it's an interesting read if you wanted to learn more about computers.

I would suggest looking into Discrete Mathematics (basically, logic math) to get a basis for some of the ideas present in circuits and computers in general. For anything that sounds interesting below, remember your best friend Google and the awesome cousin wikipedia. Check out how binary numbers work.

Also, here is a very abstract example:

Computers work with 1s and 0s. Somehow, you need to use those 1s and 0s to do complex things like graphics and sound.

The idea is that the combinations of 1s and 0s mean different things in different contexts.
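A tiny illustration of that context point (Python used just to show the idea): the same eight bits can be read as a number or as a letter.

```python
# The same bit pattern means different things in different contexts.
bits = 0b01000001     # eight bits: 01000001

print(bits)           # read as an integer: 65
print(chr(bits))      # read as a text character (ASCII): 'A'
```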

Here is a simple idea:

If you want to add two numbers and store the answer somewhere, your C++ code would probably look like this:

a = b + c;

Now, at the very low level, you can see each variable being represented by some register on the CPU. Let's say the CPU has 32 registers, r1, r2, ..., r32. In the above example, imagine b is stored in r1, c in r2, and a in r3.

The CPU knows how to add. You will have an instruction called ADD. In assembly, it would probably look something like this:

ADD r1, r2, r3

The above line basically says, "Take the values of r1 and r2, add them together, and store the result in r3."

So how does that get to the CPU itself?

Another of the registers might store instructions.

Let's say the values of b (r1) and c (r2) are 7 and 8 respectively: r1 = b = 7 = 0111, r2 = c = 8 = 1000

The CPU has a few commands at its disposal: ADD, SUB (subtraction), JMP (jump, or goto), etc.

Each command will have a binary representation. For our purposes, imagine ADD is 0000, SUB is 0001, and JMP is 0010.

Each register will have a binary representation. r1 = 0000, r2 = 0001, r3 = 0010, etc.

So you could have the ADD instruction look like this:

ADD r1, r2, r3

which will look like this in binary:

0000000000010010

Separated out:
0000 0000 0001 0010

The CPU will see the ADD instruction and know it is going to take the values in the next two register fields, which it sees are r1 and r2, add the two values together (look up binary addition to see how that works), and store the result in the third register, which it sees is r3.

It will look for the value in the registers:

r1 = b = 7 = 0111
r2 = c = 8 = 1000

and add them together:

0111
+1000
------
1111 = 15

r3 will now have the value 1111, or 15.
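The decode-and-execute story above can be sketched in a few lines of Python. Note that this 4-bit-per-field encoding is the made-up format from this post, not a real instruction set:

```python
# Simulating the toy 16-bit encoding above: [opcode][src1][src2][dest],
# each field 4 bits wide. Register r1 is index 0, r2 is index 1, and so on.

registers = [0] * 16
registers[0] = 7   # r1 = b = 7
registers[1] = 8   # r2 = c = 8

def execute(instruction):
    """Pull the four 4-bit fields out of the instruction word and act on them."""
    opcode = (instruction >> 12) & 0xF
    src1   = (instruction >> 8)  & 0xF
    src2   = (instruction >> 4)  & 0xF
    dest   =  instruction        & 0xF
    if opcode == 0b0000:                                   # ADD
        registers[dest] = registers[src1] + registers[src2]

execute(0b0000_0000_0001_0010)   # ADD r1, r2, r3 from the post
print(registers[2])              # r3 now holds 15, i.e. binary 1111
```

A real CPU does the same field-slicing with wiring rather than shifts, but the logic is the same: the bit pattern selects which registers feed the adder and where the result lands.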

Now to get even more low level than that (how to add binary numbers on a computer), you will need to understand how the AND, OR, XOR, NOR, etc, gates work.

Again discrete mathematics is a good subject to learn about the logic needed. Binary mathematics is good to know. If you learn assembly, you can learn a lot about the internal workings of a CPU, but be warned: Intel processors are CISC, meaning that different instructions have different sizes. If you JMP to a point, you can just say JMP address_of_position, which may be represented by 0010 1011.

Other processors are RISC, meaning that each instruction is the same size. MIPS assembly always uses 32 bits for instructions, which simplifies fetching since the CPU knows the next instruction is always four bytes away. To go to the next instruction, simply add four bytes to the current address.

I hope this gives you a lot to think about and a lot of ideas about where to start your research. Even after understanding how it all works on a low level, it can still be amazing how such simple 1s and 0s can result in the games we play and the movies we watch.

I have read the howstuffworks stuff; it tells me what it does, but not how. Why do the switches flip? Kind of, what was going on in the minds of the guys who made computers?

to the post above me: thank you for the work you put into that post. I do know how to add in binary and do stuff like that. You pointed it out at a very low level, but why does the computer add that? Why isn't it just a circuit with electricity in it?

I think you're asking about this from an electrical engineering perspective, so I will try to help.

Each of the millions of little switches in your computer is a tiny transistor. There are three "wires" that hook into a transistor, called the base, the collector, and the emitter. When the wire to the base is powered, it "opens" the gate and allows electrons to flow on the other two wires, making a "1". When the base isn't powered, it makes a "0".

Collections of transistors are used to make logic gates, which are the basic building blocks of all computers and programmable logic devices. For instance, if you want to check an "and", you feed source 1 into the base of transistor one, connect transistor one's emitter to transistor two's collector (which is called a "series" connection), and connect source 2 to the base of transistor two. If the feeds from source 1 and source 2 are both on, current can travel through both transistors, and the result will be 1. If either or both sources are off, then the result will be 0. For an OR gate, you would connect the transistors in parallel by connecting the two transistors' emitters and collectors together, and attaching a source to each base. If one or both of the sources is on, the electricity can flow through; if neither is on, it can't, and you get a 0 result. The third type is a NOT gate, which also involves a resistor.

From AND, OR, and NOT, you can do every logic function there is.
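As a small illustration of that claim (sketched in Python, with each gate as a function), XOR, which is not one of the three, can be built from them:

```python
# Building XOR out of only AND, OR, and NOT.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # a XOR b = (a OR b) AND NOT (a AND b):
    # "at least one is on, but not both"
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))   # truth table: 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```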

-fel

Quote:
Original post by dan1088352
to the post above me: thank you for the work you put into that post. I do know how to add in binary and do stuff like that. You pointed it out at a very low level, but why does the computer add that? Why isn't it just a circuit with electricity in it?


It is many circuits with electricity flowing through them.

Technically it is not adding; the flows of current (the input) cause gates to change the other flows of current (the output), in a way that corresponds to adding in base 2.

From,
Nice coder

I have two pieces of advice: 1.) understand the 1-bit binary half-adder; 2.) read a book.
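For reference, the 1-bit half-adder is only two gates: XOR for the sum bit and AND for the carry bit. A quick Python sketch:

```python
# A 1-bit binary half-adder: adds two bits and reports a sum and a carry.

def half_adder(a, b):
    total = a ^ b     # XOR: sum bit is 1 when exactly one input is 1
    carry = a & b     # AND: carry bit is 1 only when both inputs are 1
    return total, carry

print(half_adder(0, 0))   # (0, 0)
print(half_adder(0, 1))   # (1, 0)
print(half_adder(1, 0))   # (1, 0)
print(half_adder(1, 1))   # (0, 1)   because 1 + 1 = 10 in binary
```

Chain these (via full adders) and you get the multi-bit addition every CPU performs.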

You're not going to grasp it from this forum. As has been pointed out, you're asking a Computer Engineering problem (which is a mixture of electrical engineering, itself an offshoot of chemistry, and mathematics).

It looks like this is the book you need:
Code: The Hidden Language of Computer Hardware and Software
Charles Petzold's latest book, Code: The Hidden Language of Computer Hardware and Software, crosses over into general-interest nonfiction from his usual programming genre. It's a carefully written, carefully researched gem that will appeal to anyone who wants to understand computer technology at its essence. Readers learn about number systems (decimal, octal, binary, and all that) through Petzold's patient (and frequently entertaining) prose and then discover the logical systems that are used to process them. There's loads of historical information too. From Louis Braille's development of his eponymous raised-dot code to Intel Corporation's release of its early microprocessors, Petzold presents stories of people trying to communicate with (and by means of) mechanical and electrical devices. It's a fascinating progression of technologies, and Petzold presents a clear statement of how they fit together.

The real value of Code is in its explanation of technologies that have been obscured for years behind fancy user interfaces and programming environments, which, in the name of rapid application development, insulate the programmer from the machine. In a section on machine language, Petzold dissects the instruction sets of the genre-defining Intel 8080 and Motorola 6800 processors. He walks the reader through the process of performing various operations with each chip, explaining which opcodes poke which values into which registers along the way. Petzold knows that the hidden language of computers exhibits real beauty. In Code, he helps readers appreciate it. --David Wall


Read some of the reviews and I think you'll agree.

You are all wrong!!!

Computers work because an evil wizard once wanted to punish all the kids in a nearby town for throwing eggs at his evil tower during the night. So he created this evil contraption to lure the kids into something self-destructive instead. And ever since, the kids became computer nerds and left him alone.

So in short...

HOW: Evil black magic
WHY: To enslave all kids for eternity


EDIT:
-----
PS. In case you wonder, I was very tired when I wrote this. I have now managed to rest for a while and realized that it was a tad silly. So just disregard this post. I won't remove what I wrote since I don't approve of changing posts once posted.

[Edited by - Allmight on October 23, 2004 8:17:22 AM]

Felisandria's response was probably what you were looking for. Here is a page I found searching google that explains basically how transistors function and has a few links to specific logic gate diagrams. You can look at those for each case of base voltage, and see how they do in fact produce the inverting, NAND, and NOR functions. Most designs try to stay away from using resistors in gates though, because they are very large compared to the other components.

And just so you can appreciate the scale, I believe the P4 extreme edition has ~178 million transistors.

If you want an even more detailed explanation of the physics of transistors (how EXACTLY a channel forms when a voltage is applied to the gate), you can just google 'transistor physics' and get pretty much whatever you need.

I just remembered someone I can talk to: he is a systems programmer, and he built a computer in college from scratch. If he doesn't know it, no one does, and I am serious when I say that. But I will check out the links/books I got here, thanks.

Quote:
Original post by save yourself
If you want an even more detailed explanation of the physics of transistors (how EXACTLY a channel forms when a voltage is applied to the gate), you can just google 'transistor physics' and get pretty much whatever you need.
Actually, you want to look up diodes, specifically PN junction diodes, and semiconductors. This will eventually lead you down the wonderful path of a discussion of electron site vacancies and doping, at which point you'll talk about electrovalence and all that lovely atomic chemistry stuff.

It all depends on how much detail you want to get into. It's interesting, but it's challenging to teach and learn because so much knowledge is incestuous in terms of reference. Concepts are not so much prerequisites of each other as they are co-requisites, which is why university curriculums struggle to effectively teach these things to students.

Guest Anonymous Poster
Basically you need to know how a transistor works. With this element you can build all the logic needed to build a computer. Today, hardware description languages like Verilog are commonly used to describe hardware.

Quote:

if he doesnt know it no one does


lol. Sorry but that is really funny. So this guy must be the only person on EARTH that knows anything about a computer huh? Lucky bastard. I sure wish I knew what a computer was. BTW, is his name God? ;)

The short answer is: if you want to know exactly how and why a computer works all the way down to its lowest level, then you need at least a computer science degree and perhaps a bit more. I'm almost finished with my CS degree, and the classes I have taken (man have I taken a butt load) were enough to allow me to finally understand exactly how the electronics are able to do what they do, down to the physics. Now I may not be able to make a CPU myself in my back yard, but I have a really good understanding even at the lowest level. OK... that wasn't quite as short an answer as I once thought, but, well, there ya go! :D


-SirKnight

If you understand how AND and OR gates work, (perhaps you need NOT as well) you should be able to design, on paper, a simple 1-bit processor. The 1-bit processor has three inputs. The middle one is the opcode, which obviously is either 0 or 1. It is not too difficult to design a circuit that adds the other two inputs if the opcode is zero, and multiplies them if the opcode is one. Two-bit processors are too complicated for me to keep track of, but then I'm usually just doodling idly; with a bit of discipline and patience, you can no doubt do better.
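Here is one way such a 1-bit processor could be sketched in Python, using only gate operations; the wiring is my own guess at a design, not a canonical one. Since 1 + 1 needs two bits, the add path produces a sum and a carry, and the opcode drives a little multiplexer that picks between the add and multiply results:

```python
# A 1-bit "processor" with three inputs; the middle input is the opcode.
# opcode 0 -> add the other two inputs; opcode 1 -> multiply them.

def one_bit_cpu(a, opcode, b):
    add_sum   = a ^ b        # XOR gives the low bit of a + b
    add_carry = a & b        # AND gives the carry bit of a + b
    mul       = a & b        # a 1-bit multiply is just AND
    # A multiplexer built from gates: opcode selects which result passes through.
    not_op = 1 - opcode
    result = (not_op & add_sum) | (opcode & mul)
    carry  = not_op & add_carry   # multiplying single bits never carries
    return result, carry

print(one_bit_cpu(1, 0, 1))   # add:      1 + 1 -> (0, 1), i.e. binary 10
print(one_bit_cpu(1, 1, 1))   # multiply: 1 * 1 -> (1, 0)
```

The multiplexer is the key idea: the opcode doesn't "decide" anything in a human sense, it just gates one set of wires open and the other closed.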

No, I'd just go straight to checking out transistors as opposed to diodes. The concepts are extremely similar, but transistors apply more directly to understanding the switching effect used to create logic circuits. It's PNP and NPN transistors that you want...


