Is casting O(1)?


Recommended Posts

The languages in question here are C++ and C#. Is casting an object to a new type a constant-time operation that's independent of the object's size in memory (that is, it's merely a change in how its bits are interpreted)? Or do larger objects take longer to cast?

It depends on the nature of the cast. Checked downcasting can entail a pointer comparison or worse, but that's all I know for sure. Unchecked casts won't even show up in the generated code.

Type determines the representation and interpretation of data in memory. Thus casting between types simply changes the interpretation of that data. Changing the interpretation isn't, in general, a physical operation, so it shouldn't take time (except in special cases like inheritance, where the compiler might do some internal bookkeeping, or during a few other special casts). Hopefully no compiler vendor is silly enough to implement a cast that isn't constant-time (if it takes time at all).

In C++ it depends on the kind of cast (static_cast, reinterpret_cast, dynamic_cast) as well as the types being cast.

Quote:
Original post by Zipster
Hopefully no compiler vendor is silly enough to implement a cast that isn't constant-time (if it takes time at all).

There are plenty of casts that a standards-compliant compiler will be unable to do in constant time. For example, a static_cast of a const char * to a std::string actually creates a temporary std::string object constructed from the const char *. That is decidedly a non-constant-time operation.

C++: reinterpret_cast doesn't take any time. Other casts on user-defined types will have the time complexity of the appropriate constructor, which may have any complexity you give it (for example, you can convert from an unsorted list to a sorted list using quicksort in O(n*log(n)) time, or using bubble sort in O(n²) time...).
Cast from parent to child: dynamic_cast will do some checking; I'm not sure with what time complexity, and it might depend on multiple vs. single inheritance, etc. Cast from pointer-to-child to pointer-to-parent: I'd guess that with single inheritance it produces nothing in the resulting code.

Slightly off-topic, but if you're using casts in your main program code (i.e. away from loading files or sending network data), then there is quite possibly a problem with your design.

Hmm, I don't think I would say that. For instance, you might want to perform calculations using doubles before casting the final result to a float, when precision is important.

I was talking about your traditional casts with built-in types, where just the interpretation changes. Obviously casts which involve custom types (such as std::string) won't be constant time, because the new object has to be built rather than just interpreted in a new way.

Quote:
Original post by petewood
Slightly off-topic, but if you're using casts in your main program code (i.e. away from loading files or sending network data), then there is quite possibly a problem with your design.


petewood
Are you implying this applies to dynamic_casts??

Cheers
Chris

If you guys didn't notice, he is asking about Big O here.
Unless a loop or recursion is involved, it's O(1); if it's a single loop, it's O(n). I don't know if you can cast an array to something like a vector, but something like that would be the only time I would see anything that isn't O(1).


If it depends on the size of the object (like he said), then it isn't O(1). It doesn't have to involve a loop.

In Big O, if you have a series of instructions where code is not run more than once, it's O(1). Even function calls are O(1), and a constructor would fall under that definition; unless the constructor itself contains a loop to do initialization, it would be O(1). It doesn't matter how many instructions replace the single one you made in code (the cast that is replaced with a constructor, if it's an object-to-object cast, using either a copy constructor or one written by the class author). You can write a program consisting of 100 lines of the same thing, and have another program with one loop that runs 100 times; the first is O(1) and the second is O(n).
Edit: I should expand on that. Big O is about getting an idea of how well the algorithm scales; an O(1) algorithm (no loops) may actually take longer to run than an O(n) one. It's just a way to estimate things.
Edit2: I think this might explain it better than I can:
http://www.me.berkeley.edu/~e77/lecnotes/ch20/ch20.htm

[Edited by - Xipher on January 8, 2005 10:17:36 PM]

Quote:
Original post by Xipher
In Big O, if you have a series of instructions where code is not run more than once, it's O(1). Even function calls are O(1), and a constructor would fall under that definition; unless the constructor itself contains a loop to do initialization, it would be O(1). It doesn't matter how many instructions replace the single one you made in code (the cast that is replaced with a constructor, if it's an object-to-object cast, using either a copy constructor or one written by the class author). You can write a program consisting of 100 lines of the same thing, and have another program with one loop that runs 100 times; the first is O(1) and the second is O(n). Runtime is NOT what Big-O is about.

No, you are mistaken about what Big O notation means, although the wording of the question at hand is partly at fault.

First, in your example of the 100-iteration loop and its unrolled counterpart, you claim that the two have different complexity, stating that one is O(n) while the other is O(1). The problem is that you never defined what "n" is. In any normal sense of your analysis, "n" would be defined as the number of objects over which you are iterating, which here would be 100. Just because one is handled in a loop while the other is not in no way changes the fact that the complexity with respect to the number of elements is O(n).

There is exactly the same fault with the original poster's question: what is "n"? When working with algorithms that operate on an iterator range, "n" is usually the number of elements in the range. Here, however, there is no obvious "n." Assuming pointer types, is "n" the number of bases the pointed-to type has, or perhaps the maximum number of virtual bases it has in series leading to the cast-to type, or what? Until you define what "n" is, asking to analyze the algorithm with Big O notation is completely meaningless. It would be like telling someone to take the derivative of a function -- your question should be "with respect to what?" Until you answer that, you can go no further, since your question is not fully formed.

Thanks for the correction. I haven't gotten that far into my CS curriculum yet. I have learned about Big O, although only recently, and haven't really gotten to understand it very well yet.

Suppose you have to search for an element in an unsorted array of n elements. What do you do? Compare the first, the second, ... until you find it!
How many comparisons do you need? 1, 3, 7 -- we don't know, but we must consider the 'worst case', which is n, so we say that this algorithm is O(n).
In practice this does not mean that we execute exactly n instructions; the multiplicative constant (K) is implicit in O(), so K*n instructions is still O(n).

Other algorithms are 'slower'. For example, consider an algorithm in which you have to find the nearest two elements in the same unsorted array.

Compare the first with the second, third,...,nth
Compare the second with the third,...,nth
...
Compare (n-1)th with nth

You have done (n-1) + (n-2) + ... + 1 comparisons, that is (if I remember!)

n*(n-1)/2, which is similar to n²/2, which is O(n²)

If you have an algorithm independent of the size of the object, we say that it is O(1); so a sqrt with double precision and an int sum have the same complexity O(1), although the first is slower than the second!

Quote:
Original post by blizzard999
If you have an algorithm independent of the size of the object, we say that it is O(1)

Size of the input, you mean... the input might be more than one object. ;)
As was mentioned before, n might be the number of objects, or the size of one object, or a dozen other things. The complexity depends on that.

Definition 1) Casting is performed on a single object: cast obj1 to obj2.

Definition 2) If the algorithm is independent of the size of the object, it is O(1).

Quote:
Original post by chollida1
Quote:
Original post by petewood
Slightly off-topic, but if you're using casts in your main program code (i.e. away from loading files or sending network data), then there is quite possibly a problem with your design.


petewood
Are you implying this applies to dynamic_casts??


Yes, I mean dynamic casts.

The only cast that could possibly be more than O(1) would be a contrived vector to list automatic conversion. If you're not abusing the cast, it's O(1).

Quote:
Original post by chollida1
Quote:
Original post by petewood
Slightly off-topic, but if you're using casts in your main program code (i.e. away from loading files or sending network data), then there is quite possibly a problem with your design.


petewood
Are you implying this applies to dynamic_casts??

Cheers
Chris


There's a good way and a bad way to use dynamic_cast...

Quote:
Original post by Magmai Kai Holmlor
The only cast that could possibly be more than O(1) would be a contrived vector to list automatic conversion.


What're your thoughts on boost::lexical_cast<> then? :)

Just because it calls itself a cast doesn't mean it isn't really a data-conversion facility that acts like a cast. Which raises the question: just where do we draw the line at what we call casting? I mean, the syntax makes it look like a cast, and at that level it behaves like a cast, but underneath, would the implementation be considered casting? It just makes me uneasy.

Yes, boost::lexical_cast is casting. It's converting one data type to another; it just does it in a way that preserves the lexical meaning of the data. Any user-made cast that just used the built-in casts to work would be rather pointless. The 'non-casty' work is what makes this cast work. You said yourself that it looks and behaves like a cast. If it quacks like a duck, and looks like a duck, it's a duck!

You mean if it looks like a duck, and it quacks like a duck, then it is a duck [grin] I guess it's just the C programmer inside me that's still a little uneasy.
