# Complexity Question


## Recommended Posts

I missed a point on a question on a recent exam, and I cannot figure out why. We were asked the following:
Quote:
 "What is the complexity to take the keys from the sorted array of size N and create a perfectly balanced BST?"
The answer turned out to be O(N), but I put O(N*log(N)) on the test. If each insertion into the BST takes O(log(N)) time, then inserting the N keys will take O(N*log(N)), right? Laying the process out, it would look like O(log(N) + log(N-1) + log(N-2) + . . . + log(1)) = O(log(N*(N-1)*(N-2)*. . .*1)) = O(log(N!)) = O(N*log(N)), since log(N!) is bounded above by log(N^N) = N*log(N). The first term, log(N), is the insertion of the last key; I reversed the addends to make the sum easier to manipulate and see. Moreover, another question that I got correct asks
Quote:
 "What is the complexity to place the keys from a heap into an array in sorted order?"
Each removal of a key from the heap takes O(log(N)), removing them in sorted order; using the same argument I answered that the complexity is O(N*log(N)). And this is the answer on the solutions! The first question seems to be the same as the second question but in reverse; the answer to the first--according to the solutions--is O(N), but the answer to the second--according to the solutions--is O(N*log(N)). I emailed the professor, and he replied with "it doesn't matter that suboptimal algorithms exist which have O(N*log(N)) or even O(N^2) complexity because the single best answer is the one with O(N) complexity." He is talking about the first question. Am I missing an algorithm that takes only O(N) time? Sorry for the long question, thanks.

##### Share on other sites
Think for a moment: to create a perfectly balanced search tree, all you need to do is go to the middle of the array, make that the root, then split from there and repeat on each half. Set up correctly, you can do it in O(n). The big thing here is that the array is already sorted for you, so you know the middle value of each contiguous sub-array in constant time.

##### Share on other sites
You're thinking of building the tree top-down.
The O(n) algorithm builds the tree bottom-up. No item comparisons need be done using this method, because we already know the data is sorted.

##### Share on other sites
The answer to the first one could be O(1) or O(lg N) (depending on whether you have to deal with arrays whose size exceeds the native pointer width):

```cpp
#include <cstddef>  // for size_t

template<typename data>
class ArrayTree {
private:
    // non-const so an ArrayTree can be reassigned while walking the tree
    data* center;
    data* begin;
    data* end;
public:
    ArrayTree( data* begin_, data* end_ ):
        center( begin_ + (end_ - begin_) / 2 ), begin(begin_), end(end_)
    {}
    ArrayTree right() const {
        return ArrayTree( center + 1, end );
    }
    ArrayTree left() const {
        return ArrayTree( begin, center );  // half-open range [begin, center)
    }
    data& contents() const {
        return *center;
    }
    bool valid() const {
        return begin < end;
    }
    operator bool() const { return this->valid(); }
};

template<typename data, size_t size>
ArrayTree<data> WrapArray( data (&array)[size] ) {
    return ArrayTree<data>( array, array + size );
}
```
Given a sorted array A (or, more specifically, pointers into the begin and one-past-the-end), you can construct an ArrayTree object in O(1) time.

The ArrayTree is a balanced binary tree (with a rightward bias), with operators left(), right() and contents(). To determine if you are dealing with a 'null' ArrayTree, just cast to bool.

Ie:
```cpp
template<typename data, typename function>
void InOrder( ArrayTree<data> tree, function f ) {
    if (!tree) return;
    InOrder( tree.left(), f );
    f( tree.contents() );
    InOrder( tree.right(), f );
}
```

is an in-order traversal of the binary tree ArrayTree.

Now, ArrayTree is a 'constant' tree in that you cannot change its contents.

But given a function that accepts a duck-typed balanced binary tree and outputs a copy of it, you could copy the ArrayTree into a 'real' balanced binary tree (with pointers and stuff) in O(n) time by doing a node-by-node copy.

Note that the center pointer in the ArrayTree isn't needed.

This ArrayTree structure isn't completely useless as-is: it makes writing a binary search pretty easy:
```cpp
template<typename data>
bool binarysearch( ArrayTree<data> tree, data d ) {
    while (true) {
        if (!tree) return false;
        if (d < tree.contents()) {
            tree = tree.left();
        } else if (d > tree.contents()) {
            tree = tree.right();
        } else {
            return true;
        }
    }
}

template<typename data, size_t n>
bool binarysearch( data (&array)[n], data d ) {
    return binarysearch( WrapArray(array), d );
}
```
and there we go: an O( lg(n) ) search of a sorted array.

Fun eh?

##### Share on other sites
Thanks everyone, a friend realized the problem. You don't even have to insert the keys one at a time, which is what I was stuck on; you can just recursively build each node using the medians.

##### Share on other sites
Quote:
Original post by Satan666
I missed a point on a question on a recent exam, and I cannot figure out why. We were asked the following:
Quote:
 "What is the complexity to take the keys from the sorted array of size N and create a perfectly balanced BST?"

The answer turned out to be O(N), but I put O(N*log(N)) on the test. If each insertion into the BST takes O(log(N)) time, then inserting the N keys will take O(N*log(N)), right?

Yes. However, there is a smarter way to construct the tree than to insert every element one at a time. :) Specifically, we can do it recursively, by locating the middle element in O(1) time, recursively building the tree for the elements before and after the middle element, and attaching the sub-trees under the middle element in O(1) time. Each call to the recursive function takes O(1) time, and the function is called exactly once for each element, so the whole process is O(N).
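That recursive construction can be sketched in C++ (a minimal illustration, not from the thread; the `Node` and `Build` names are hypothetical):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical node type for illustration.
struct Node {
    int key;
    std::unique_ptr<Node> left, right;
    explicit Node(int k) : key(k) {}
};

// Build a perfectly balanced BST from sorted[lo, hi). Each call does O(1)
// work besides its two recursive calls, and there is one call per element
// (plus one per empty range), so the whole build takes O(N) time.
std::unique_ptr<Node> Build(const std::vector<int>& sorted,
                            std::size_t lo, std::size_t hi) {
    if (lo >= hi) return nullptr;              // empty range: null subtree
    std::size_t mid = lo + (hi - lo) / 2;      // middle element, found in O(1)
    auto node = std::make_unique<Node>(sorted[mid]);
    node->left  = Build(sorted, lo, mid);      // keys before the middle
    node->right = Build(sorted, mid + 1, hi);  // keys after the middle
    return node;
}
```

Note that no key comparisons happen anywhere: the sorted order alone determines the tree's shape.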

Quote:
Moreover, another question that I got correct asks
Quote:
 "What is the complexity to place the keys from a heap into an array in sorted order?"

Each removal of a key from the heap takes O(log(N)), removing them in sorted order; using the same argument I answered that the complexity is O(N*log(N)). And this is the answer on the solutions!

And it is a correct answer. However, you can build the heap in O(N) time, in an analogous way to how the BST is built.
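The O(N) heap construction is the classic bottom-up heapify (Floyd's method); a minimal sketch, with the `SiftDown`/`BuildHeap` names being my own, not from the thread:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sift the element at index i down until the max-heap property holds
// for the subtree rooted there.
void SiftDown(std::vector<int>& a, std::size_t i, std::size_t n) {
    for (;;) {
        std::size_t largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        std::swap(a[i], a[largest]);
        i = largest;
    }
}

// Bottom-up construction: sift down every internal node, deepest first.
// The work per node is bounded by its height, and the node heights sum
// to O(N) over the whole array, so the build is O(N) overall.
void BuildHeap(std::vector<int>& a) {
    if (a.size() < 2) return;
    for (std::size_t i = a.size() / 2; i-- > 0;)
        SiftDown(a, i, a.size());
}
```

Naive repeated insertion would instead cost O(N*log(N)); the savings come from most nodes sitting near the bottom, where sifting down is cheap.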

Quote:
 The first question seems to be the same as the second question but in reverse; the answer to the first--according to the solutions--is O(N), but the answer to the second--according to the solutions--is O(N*log(N)).

Also, BSTs are not heaps, anyway, so the questions aren't exactly symmetric. ;)

##### Share on other sites
Quote:
Original post by NotAYakk
The answer to the first one could be O(1) or O(lg N) (depending on if you have to deal with arrays with larger-than-native-pointer sizes):

Your class can indeed be constructed in O(1) time, and it would thereafter indeed provide the search characteristics of a BST. However, no CS professor would accept that an instance of ArrayTree "is a BST" unless perhaps they also accepted the sorted array itself as being a BST already. :) There is no point in building the BST from the sorted array unless we want to modify the tree; and in that case ArrayTree won't help us.

##### Share on other sites
Quote:
Original post by Zahlman
Quote:
Original post by NotAYakk
The answer to the first one could be O(1) or O(lg N) (depending on if you have to deal with arrays with larger-than-native-pointer sizes):

Your class can indeed be constructed in O(1) time, and it would thereafter indeed provide the search characteristics of a BST. However, no CS professor would accept that an instance of ArrayTree "is a BST" unless perhaps they also accepted the sorted array itself as being a BST already. :) There is no point in building the BST from the sorted array unless we want to modify the tree; and in that case ArrayTree won't help us.
My thoughts exactly. It might walk like a duck, but a read-only tree is like a duck that cannot quack.

Thanks Zahlman for nicely describing the process of building the tree bottom-up.
I have some code lying around that does this if you're still having any trouble understanding how it is implemented, Satan666.

##### Share on other sites
*nod*, it is sort of a joke that points out how the two are basically identical.

However, you'll note that this kind of code makes the 'create the tree' code more beautiful.

```cpp
// nonconstTree cannot be deduced from the arguments, so the recursive
// calls must pass it explicitly.
template<typename nonconstTree, typename constTree>
nonconstTree CloneTree( constTree const& src ) {
    if (!src) return nonconstTree();
    return nonconstTree( src.contents(),
                         CloneTree<nonconstTree>( src.left() ),
                         CloneTree<nonconstTree>( src.right() ) );
}
```

Where an example of a nonconstTree would be:
```cpp
template<typename data>
class BinaryTree {
    struct Impl {
        boost::shared_ptr< Impl > left;
        boost::shared_ptr< Impl > right;
        data d;
        Impl( data const& d_, boost::shared_ptr< Impl > left_,
              boost::shared_ptr< Impl > right_ ):
            left(left_), right(right_), d(d_) {}
    };
    boost::shared_ptr< Impl > pImpl;
    BinaryTree( boost::shared_ptr< Impl > src ): pImpl( src ) {}
public:
    BinaryTree() {}  // an empty (null) tree; needed by CloneTree
    BinaryTree( BinaryTree const& src ): pImpl( src.pImpl ) {}
    BinaryTree( data const& d, BinaryTree left, BinaryTree right ):
        pImpl( new Impl( d, left.pImpl, right.pImpl ) )
    {}
    bool Valid() const { return !!pImpl; }
    operator bool() const { return this->Valid(); }
    // calling these when !Valid() dereferences null (should throw and/or
    // assert in that case):
    data& contents() { return pImpl->d; }
    data contents() const { return pImpl->d; }
    BinaryTree left() const { return BinaryTree( pImpl->left ); }
    BinaryTree right() const { return BinaryTree( pImpl->right ); }
};
```

And now the creation of a binary tree from an array becomes:
```cpp
BinaryTree<int> newTree = CloneTree< BinaryTree<int> >( WrapArray( array ) );
```

which is sort of beautiful.
