# parallel algorithm articles with news-like punchlines


## Recommended Posts

Hi guys, I am currently working on a parallel algorithm. What I find very funny is that there are tons of algorithm articles claiming to have O(1) algorithms and then saying they use N^3 processors. Duh, how can you have N^3 processors? I have a constant number, say k processors. Sounds like sensational punchlines that mean nothing. How would one go about converting this information: what's the runtime of an O(1) algorithm using N^3 processors if I only have k processors? -CProgrammer

##### Share on other sites
Quote:
 Duh, how can you have N^3 processors I have a constant amount, say k processors.

That you have a constant amount doesn't matter. You may also have a constant amount of time, but the time taken for an algorithm to run can still grow.

I guess what the articles say is that the asymptotic number of processors required is Θ(n^3) for a problem of size n (is there any way to use the real Theta symbol on this forum?). It works just as with time or space. A simple example would be a problem requiring P(n) = 10n^3 processors. This means that to solve a problem of size 5 you'd need 1250 processors, but for a problem of size 15 you'd need 33750 processors.
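To make that concrete, here is a small sketch of the example above. The function name and the constant 10 are just illustrative; the point is only that the processor count is a function of the input size, not a fixed number:

```python
def processors_required(n: int) -> int:
    """Hypothetical processor-count function P(n) = 10 * n^3."""
    return 10 * n ** 3

print(processors_required(5))   # 1250
print(processors_required(15))  # 33750
```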

Quote:
 How would one go about converting this information whats the runtime of a O(1) algo using N^3 processors if I only have k processors.
If k < N^3 then it isn't guaranteed to work, but most implementations of parallel algorithms can just run the work sequentially in chunks. This will probably result in Θ(n^3) worst-case runtime.
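A rough sketch of that chunking idea: with only k processors, the N^3 independent tasks are executed in rounds of k at a time, so the number of rounds (and hence the wall-clock time, for constant k) grows like the task count itself. The helper below is hypothetical and only counts rounds rather than doing real work:

```python
import math

def rounds_needed(num_tasks: int, k: int) -> int:
    """Number of parallel rounds when num_tasks independent tasks
    are run on k processors, k tasks per round."""
    return math.ceil(num_tasks / k)

# n = 10, so n^3 = 1000 tasks on a fixed k = 4 processors:
print(rounds_needed(10 ** 3, 4))  # 250 rounds -> Theta(n^3) for constant k
```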

Your question is the same as asking how an algorithm requiring ω(1) memory will perform with a constant amount of memory. It might not even work because you don't have enough memory.

EDIT: Replaced Theta with Θ. Thanks for the solution.

##### Share on other sites
Quote:
 Original post by CTar: (is there any way to use the real Theta symbol on this forum?)

&Theta;

##### Share on other sites
Well, I guess it does mean that the algorithm has the capability to run in constant time given enough processors. But the O(1) is just very misleading and this could be annotated differently. If my problem grows in size I can't just have my number of processors repeatedly growing. So I guess, like you said, if k < N^3 I have to assume the algorithm is O(N^3). It would be more interesting to have an O-notation which includes k as a variable.

-CProgrammer

##### Share on other sites
Quote:
 Original post by CProgrammer: But the O(1) is just very misleading and this feature could be annotated differently.

Maybe, but technically it isn't incorrect. As long as you can allow the number of processors to grow like that then it's indeed O(1).

Quote:
 If my problem grows in size I can't just have my number of processors repeatedly growing.

You can't, but some can. Imagine a large network connecting thousands of computers, each with hundreds of cores; for reasonably sized problems it is then possible to let the number of processors grow with the input.

Quote:
 So I guess like you said if k < N^3 I have to assume the algorithm is O(N^3). It would be more interesting to have an O-Notation which includes k as a variable.

If you restrict the problem to a constant number of processors then the number of processors will be O(1) no matter what. The runtime could then be described as Θ(n^3). The asymptotic notations only describe what happens as the problem size approaches infinity, and at that point your k processors will be just as limiting as if you had 1 processor. If you need to describe what happens when n^3 is near k then you need another notation.

In a real problem you may be able to say something like:
p = the number of processors
t(n) = the number of distinct tasks required for a problem of size n (n^3 in this case)
Then we can instead describe the number of tasks each processor needs to handle (c is a constant):
w = c*ceiling(t(n)/p)
When t(n) >> p, t(n)/p is large and the ceiling function's effect isn't noticeable, so we can just remove it:
w = (c/p)*t(n)
c and p are both constants, so we could describe it as w = Θ(t(n)), but it seems you would find w = (c/p)*t(n) more useful.
Alternatively, if the number of processors is allowed to grow linearly, then we have:
p(n) = c1*n
t(n) = c2*n^3
w = c3*t(n)/p(n) = (c2*c3/c1)*n^2
So we have:
w = Θ(n^2)
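A quick numeric check of the w = c*ceiling(t(n)/p) formula above, with the constants set to 1 for simplicity. With a fixed k = 8 processors and t(n) = n^3, doubling n multiplies the per-processor work by 8, i.e. it still grows cubically:

```python
import math

def work_per_processor(t_n: int, p: int, c: int = 1) -> int:
    """w = c * ceiling(t(n) / p): tasks each processor must handle."""
    return c * math.ceil(t_n / p)

# Fixed k = 8 processors, t(n) = n^3:
for n in (4, 8, 16):
    print(n, work_per_processor(n ** 3, 8))  # 8, 64, 512 -> cubic growth

# With p(n) growing linearly (p = n), the same formula gives
# w = n^3 / n = n^2, matching the Theta(n^2) result above.
```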

##### Share on other sites
I guess that sums it up.
Didn't think of the large network such as the internet, thanks for the tip.

-CProgrammer
