dan1088352

how do computers work?


Recommended Posts

Closer to why: I know HOW, but I don't know why it will "know" to do this or that. If it is too long to type, is there a link on why?

Wow, man.

=)

That question can be answered (a little) by taking an architecture and assembler course at college. As for the true whys, you'll want the entire Comp Sci major. In fact, you really want to go Comp Sci/Electrical Engineering to understand the full-blown *why* and *how*.

Or you can spend a lot of time with Google =) Maybe someone else here will know of a site you can hit up that'll give you the skinny.

I doubt you truly know HOW they work, or you would understand why. It all comes down to electrons finding their way through millions of transistors that make up logic gates (AND, OR, etc.) to either set or clear the states of memory bits. It's an entire major figuring out how all the electrical components actually work.

You could probably get basic ideas by reading through the nested pages starting here.

Hello, from your reasoning I would guess you want to know how those "data signals" are actually read. The communication between software (intangible) and hardware (tangible) is probably something not many people think about.

Basically you can look at it like a switch: an electronic signal is either present or not present on the switch's activator. If there is a signal, current is allowed to pass through; if there is no signal, current is not allowed to pass through. Storage is usually implemented with a circuit called a flip-flop or with a capacitor. ROMs are an easy illustration to understand. They contain diodes and/or transistors arranged in a matrix. Programming is done by severing certain connections so that part of the circuit no longer conducts electricity, and logic gates (AND, NOR, OR, XOR, NAND, etc.) can be formed this way.
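If it helps, here is a rough C++ sketch of that switch idea (my own illustration, not from the links below): gates modeled as functions of on/off signals, and a one-bit memory cell modeled loosely on how an SR (set/reset) latch behaves.

#include <iostream>

// Each "wire" either carries a signal (true) or does not (false).
bool AND_gate(bool a, bool b) { return a && b; }
bool OR_gate(bool a, bool b)  { return a || b; }
bool NOR_gate(bool a, bool b) { return !(a || b); }

// A one-bit memory cell, modeled loosely on an SR (set/reset) latch:
// pulse 'set' to store a 1, pulse 'reset' to store a 0, and the bit
// keeps its value while both inputs are off.
struct Latch {
    bool q = false;
    void update(bool set, bool reset) {
        if (set)   q = true;
        if (reset) q = false;
    }
};

int main() {
    std::cout << AND_gate(true, false) << "\n"; // 0: no current passes

    Latch bit;
    bit.update(true, false);    // "write" a 1
    bit.update(false, false);   // inputs removed, the value is retained
    std::cout << bit.q << "\n"; // 1
}

In real hardware the latch is built from cross-coupled NOR gates rather than an if statement, but the observable behavior is the same: it remembers the last bit written to it.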

These websites shed some light on this topic.

40-bit TTL computer

A plethora of homemade computers

If you have an understanding of digital electronics, it is easier to go further in depth from there.

Also keep in mind that the source you compile and execute is not what talks to the hardware directly. You tell Windows what you want to do, Windows talks to the drivers, the drivers talk to the hardware, and the hardware probably talks to something else.

There is an interesting book I've been reading lately called "The Most Complex Machine" by David Eck. At least I've read the first three chapters... I'm not sure how the rest of it is. But the first three chapters are cool; they talk about how to design a simple (very simple) computer. The design is only on paper, though; even a simple CPU is very complicated to actually build.

Anyway, it's an interesting read if you wanted to learn more about computers.

I would suggest looking into discrete mathematics (basically, logic math) to get a basis for some of the ideas present in circuits and computers in general. For anything that sounds interesting below, remember your best friend Google and its awesome cousin Wikipedia. Check out how binary numbers work.

Also, here is a very abstract example:

Computers work with 1s and 0s. Somehow, you need to use those 1s and 0s to do complex things like graphics and sound.

The idea is that the combinations of 1s and 0s mean different things in different contexts.

Here is a simple idea:

If you want to add two numbers and store the answer somewhere, your C++ code would probably look like this:

a = b + c;

Now, at the very low level, each variable is represented by some register on the CPU. Let's say the CPU has 32 registers, r1, r2, ..., r32. In the above example, imagine the value of b is in r1 and the value of c is in r2; a is r3.

The CPU knows how to add. You will have an instruction called ADD. In assembly, it would probably look something like this:

ADD r1, r2, r3

The above line basically says, "Take the values of r1 and r2, add them together, and store the result in r3."

So how does that get to the CPU itself?

Another of the registers might store instructions.

Let's say the values of b (r1) and c (r2) are 7 and 8 respectively: r1 = b = 7 = 0111, r2 = c = 8 = 1000.

The CPU has a few commands at its disposal: ADD, SUB (subtraction), JMP (jump, or goto), etc.

Each command will have a binary representation. For our purposes, imagine ADD is 0000, SUB is 0001, and JMP is 0010.

Each register will have a binary representation. r1 = 0000, r2 = 0001, r3 = 0010, etc.

So you could have the ADD instruction look like this:

ADD r1, r2, r3

which will look like this in binary:

0000000000010010

Separated out:
0000 0000 0001 0010

The CPU will see the ADD opcode and know it should take the values in the next two register fields, which it sees are r1 and r2, add the two values together (look up binary addition to see how that works), and store the result in the third register field, which it sees is r3.

It will look for the value in the registers:

r1 = b = 7 = 0111
r2 = c = 8 = 1000

and add them together:

 0111
+1000
-----
 1111 = 15

r3 will now have the value 1111, or 15.
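If it helps to see that whole decode-and-execute story in one place, here is a toy C++ sketch of it. The 4-bit opcode and register codes are the made-up ones from this post, not a real instruction set; it just slices the 16-bit word into four fields and performs the ADD.

#include <cstdint>
#include <iostream>

int main() {
    uint16_t regs[16] = {0};
    regs[0] = 7; // r1 in the example above (register code 0000)
    regs[1] = 8; // r2 (register code 0001)

    uint16_t instruction = 0x0012; // 0000 0000 0001 0010 = ADD r1, r2, r3

    // Slice the 16-bit word into four 4-bit fields.
    uint16_t opcode = (instruction >> 12) & 0xF; // 0000 = ADD
    uint16_t srcA   = (instruction >> 8)  & 0xF; // 0000 = r1
    uint16_t srcB   = (instruction >> 4)  & 0xF; // 0001 = r2
    uint16_t dest   =  instruction        & 0xF; // 0010 = r3

    if (opcode == 0) // 0000 means ADD in this made-up encoding
        regs[dest] = regs[srcA] + regs[srcB];

    std::cout << regs[dest] << "\n"; // prints 15
}

A real CPU does the same slicing with wires and multiplexers instead of shift operators, but the idea of fixed fields inside the instruction word is the same.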

Now to get even more low level than that (how to add binary numbers on a computer), you will need to understand how the AND, OR, XOR, NOR, etc, gates work.
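As a sketch of that last point, here is a one-bit full adder built only from XOR, AND, and OR, chained four times (ripple-carry) to reproduce the 0111 + 1000 addition above. It is only a C++ illustration of the gate structure, not how real hardware is described.

#include <iostream>

// One-bit full adder expressed with the XOR/AND/OR structure a
// hardware adder uses.
void full_adder(bool a, bool b, bool carry_in, bool& sum, bool& carry_out) {
    sum       = (a ^ b) ^ carry_in;
    carry_out = (a && b) || (carry_in && (a ^ b));
}

int main() {
    bool a[4] = {1, 1, 1, 0}; // 0111 = 7, least significant bit first
    bool b[4] = {0, 0, 0, 1}; // 1000 = 8
    bool result[4];
    bool carry = false;

    // Ripple-carry: each bit's carry-out feeds the next bit's carry-in.
    for (int i = 0; i < 4; ++i)
        full_adder(a[i], b[i], carry, result[i], carry);

    for (int i = 3; i >= 0; --i)
        std::cout << result[i]; // prints 1111
    std::cout << "\n";
}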

Again, discrete mathematics is a good subject for learning the logic needed, and binary mathematics is good to know. If you learn assembly, you can learn a lot about the internal workings of a CPU, but be warned: Intel processors are CISC, meaning that different instructions have different sizes. If you JMP to a point, you can just say JMP address_of_position, which may be represented by 0010 1011.

Other processors are RISC, meaning that each instruction is the same size. MIPS always uses 32 bits for an instruction, which simplifies (and can speed up) instruction fetch since the CPU knows the next instruction is always four bytes away. To go to the next instruction, it simply adds four to the program counter.
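A rough sketch of why that fixed size is convenient (the instruction words below are arbitrary placeholders, not real encodings): the fetch loop can advance the program counter by a constant.

#include <cstdint>
#include <iostream>

int main() {
    // Pretend this is instruction memory; every instruction is 32 bits.
    uint32_t memory[] = {0x11111111, 0x22222222, 0x33333333};
    uint32_t pc = 0; // program counter, in bytes

    for (int step = 0; step < 3; ++step) {
        uint32_t instruction = memory[pc / 4]; // fetch
        std::cout << "pc=" << pc << " fetched 0x"
                  << std::hex << instruction << std::dec << "\n";
        pc += 4; // the next instruction is always exactly 4 bytes later
    }
}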

I hope this gives you a lot to think about and a lot of ideas about where to start your research. Even after understanding how it all works on a low level, it can still be amazing how such simple 1s and 0s can result in the games we play and the movies we watch.

I have read the HowStuffWorks stuff; it tells me what it does, but not how. Why do the switches flip? Kind of, what was going on in the minds of the guys who made computers?
