Khatharr

Posted 29 December 2012 - 11:07 PM

1) The workload is masked by the fact that the two logical processors share one core's execution units, and the main thread isn't using the resources the hyper-thread is using. It's like carrying rocks from one pile to another: if you pick up a small rock in one hand, you can pick up another small rock with your free hand, because you weren't using your full strength yet.
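
To make that concrete, here's a minimal sketch of the kind of pairing that benefits: one thread hammering the integer ALU, one thread mostly waiting on memory. Everything platform-specific here is an assumption: Linux, glibc, pthreads, and that logical CPUs 0 and 1 are hyper-thread siblings of the same physical core (on a real machine you'd verify that via /sys/devices/system/cpu/cpu0/topology/thread_siblings_list).

```c
/* Sketch only: assumes Linux + glibc, and that logical CPUs 0 and 1
 * are SMT siblings on one physical core.
 * Build: gcc -O2 -pthread smt_demo.c -o smt_demo */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>

enum { BUF_LONGS = 1 << 22 };          /* ~32 MB: far too big for cache */
static long buf[BUF_LONGS];

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

/* "Small rock" #1: a long dependency chain of integer math (ALU-bound). */
static void *alu_worker(void *arg) {
    (void)arg;
    pin_to_cpu(0);
    uint64_t x = 1;
    for (long i = 0; i < 200000000L; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return (void *)(uintptr_t)x;       /* keep x live so it isn't optimized out */
}

/* "Small rock" #2: striding through a big array (memory-bound; it spends
 * most of its time stalled on loads, leaving the ALUs idle for the other
 * thread). */
static void *mem_worker(void *arg) {
    (void)arg;
    pin_to_cpu(1);
    long sum = 0;
    for (int pass = 0; pass < 16; pass++)
        for (long i = 0; i < BUF_LONGS; i += 8)  /* one cache line per load */
            sum += buf[i];
    return (void *)(uintptr_t)sum;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, alu_worker, NULL);
    pthread_create(&b, NULL, mem_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("both workers finished");
    return 0;
}
```

Time the two workers running together versus back-to-back; on an SMT core the combined run should take noticeably less than the sum of the two, because the threads mostly occupy different execution resources.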

2) RAID stands for "Redundant Array of Independent Disks" (originally "Inexpensive Disks"), so probably not.

3) Yes, all cores have access to the system bus and cache. I'm not sure what you mean by 'they run', though. Cores don't run; they're physical units on the CPU die. The OS decides which threads execute on which cores, since it controls the thread scheduler.
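
If you want to watch the OS do that distribution, here's a small sketch (again assuming Linux/glibc, where sched_getcpu() is available): spawn one thread per logical processor and have each report where the scheduler put it.

```c
/* Sketch only: assumes Linux + glibc (sched_getcpu, sysconf).
 * Build: gcc -pthread where_am_i.c -o where_am_i */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>    /* sched_getcpu() */
#include <stdio.h>
#include <unistd.h>   /* sysconf() */

static void *report(void *arg) {
    long id = (long)arg;
    /* We never chose a core; the OS thread scheduler did. */
    printf("thread %ld landed on logical CPU %d\n", id, sched_getcpu());
    return NULL;
}

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online */
    if (n > 64) n = 64;
    pthread_t tid[64];
    for (long i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, report, (void *)i);
    for (long i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```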

4) On a context switch the scheduler interrupts the core, saves its registers (including the flags and the instruction pointer), and loads the saved register values of the next thread. The core then executes instructions normally, ignoring the other cores unless it's given a special signal (an interrupt). When it's time to schedule another thread, the same dance repeats: interrupt the core, capture its registers, restore the next thread's values, and resume.
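
You can imitate that register save/restore from user space with the old <ucontext.h> API. This is just an illustration of the idea (no privileges, no interrupts, no per-core state), not how a real kernel does it:

```c
/* Sketch only: a user-space imitation of register save/restore using
 * <ucontext.h> (obsolete in POSIX but still present on Linux, and fine
 * for illustration).
 * Build: gcc context_switch.c -o context_switch */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char fake_stack[64 * 1024];     /* the "thread" needs its own stack */

static void fake_thread(void) {
    puts("fake thread: running with its own stack and registers");
    /* Save my registers into thread_ctx, load main's: a context switch. */
    swapcontext(&thread_ctx, &main_ctx);
    puts("fake thread: resumed exactly where it left off");
}

int main(void) {
    getcontext(&thread_ctx);                   /* start from a valid context */
    thread_ctx.uc_stack.ss_sp   = fake_stack;
    thread_ctx.uc_stack.ss_size = sizeof fake_stack;
    thread_ctx.uc_link          = &main_ctx;   /* where to go when it returns */
    makecontext(&thread_ctx, fake_thread, 0);

    puts("main: switching to fake thread");
    swapcontext(&main_ctx, &thread_ctx);       /* save main, load fake thread */
    puts("main: back in main, switching again");
    swapcontext(&main_ctx, &thread_ctx);       /* resume the fake thread */
    puts("main: done");
    return 0;
}
```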

As for the cache, running code is only vaguely aware of it. At the assembly level you usually don't mess with the cache unless you have to. (There are a few instructions for the data cache, like prefetch hints and flushes, but I don't recall reading about any for directly managing the instruction cache, and I haven't gotten too far into cache control yet.) It's a separate unit that streamlines instruction and data fetching so the core doesn't have to wait on the bus as often.
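
For what it's worth, the little cache control commonly exposed to ordinary code is on the data side. Here's a sketch using __builtin_prefetch, which is a GCC/Clang extension (an assumption about your compiler), purely advisory, and often unnecessary for simple strides because the hardware prefetcher already handles those:

```c
#include <stddef.h>

/* Sum an array, hinting upcoming elements into the data cache.
 * __builtin_prefetch is advisory: the CPU may ignore it, and for a
 * plain sequential walk like this the hardware prefetcher usually
 * makes it redundant. It pays off more on irregular access patterns. */
long sum_with_prefetch(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64]);  /* fetch ahead of the loop */
        sum += a[i];
    }
    return sum;
}
```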

Typically each core would start at a different address, yes, but there's no reason more than one core can't be executing the same instruction at the same time. The instruction stream typically lives on memory pages with read-only access. The data that gets modified lives either on heap pages (dynamic memory) or on each thread's stack pages (local variables and the call stack).
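
A sketch of that sharing, assuming pthreads and C11 atomics: four threads execute the same function (the same read-only code page) at once. The local variable exists once per thread on that thread's own stack, and only the writable shared state needs synchronization.

```c
/* Sketch only: four threads run the SAME function at the same time.
 * The code lives on a read-only page shared by everyone; `local` exists
 * once per thread on that thread's own stack; `shared_total` is the one
 * piece of writable shared state, so it's an atomic.
 * Build: gcc -pthread -std=c11 same_code.c -o same_code */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long shared_total;       /* one copy, shared by all threads */

static void *worker(void *arg) {
    (void)arg;
    long local = 0;                    /* one copy per thread, on its stack */
    for (int i = 0; i < 1000000; i++)
        local++;
    atomic_fetch_add(&shared_total, local);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    /* Always 4000000: the locals never clash, and the shared sum is atomic. */
    printf("shared_total = %ld\n", atomic_load(&shared_total));
    return 0;
}
```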

If you get comfortable with assembly and threading, you'll have an easier time with all of this.

