[.net] sbyte

Started by
8 comments, last by DrGUI 19 years, 4 months ago
I was reading this tutorial and I don't understand why, in this example, the variable bitNot is cast to sbyte when it is already an sbyte.

using System;

class Unary
{
    public static void Main()
    {
        int unary = 0;
        int preIncrement;
        int preDecrement;
        int postIncrement;
        int postDecrement;
        int positive;
        int negative;
        sbyte bitNot;
        bool logNot;

        preIncrement = ++unary;
        Console.WriteLine("Pre-Increment: {0}", preIncrement);

        preDecrement = --unary;
        Console.WriteLine("Pre-Decrement: {0}", preDecrement);

        postDecrement = unary--;
        Console.WriteLine("Post-Decrement: {0}", postDecrement);

        postIncrement = unary++;
        Console.WriteLine("Post-Increment: {0}", postIncrement);

        Console.WriteLine("Final Value of Unary: {0}", unary);

        positive = -postIncrement;
        Console.WriteLine("Positive: {0}", positive);

        negative = +postIncrement;
        Console.WriteLine("Negative: {0}", negative);

        bitNot = 0;
        bitNot = (sbyte)(~bitNot);
        Console.WriteLine("Bitwise Not: {0}", bitNot);

        logNot = false;
        logNot = !logNot;
        Console.WriteLine("Logical Not: {0}", logNot);
    }
} 


www.computertutorials.org
computertutorials.org-all your computer needs...
From the MSDN:

The ~ operator performs a bitwise complement operation on its operand. Bitwise complement operators are predefined for int, uint, long, and ulong.

So the sbyte is implicitly converted to an int by the ~ operator.

This is also called integral promotion.
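To make the promotion visible, here's a minimal sketch (variable names are my own) showing that ~ applied to an sbyte produces an int, which is why the cast back is required:

```csharp
using System;

class Promotion
{
    static void Main()
    {
        sbyte b = 0;

        // ~b promotes b to int first, so the whole expression is a 32-bit int.
        int promoted = ~b;          // fine: no cast needed
        // sbyte bad = ~b;          // compile error: cannot implicitly convert int to sbyte
        sbyte narrowed = (sbyte)~b; // explicit cast narrows the int back to sbyte

        Console.WriteLine(promoted); // -1
        Console.WriteLine(narrowed); // -1
    }
}
```

The same promotion applies to arithmetic and shift operators on sbyte, byte, short and ushort.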

Cheers
This happens because most processors out there now are 32-bit, and all arithmetic on them is natively 32-bit. So, when the compiler sees bytes or shorts that need to be operated on, they are widened to 32 bits, the 32-bit operation is performed, and the result is (you guessed it) 32 bits! Because of this, it is safe to perform operations on 8- or 16-bit integers even when the result may be outside the 8- or 16-bit range. Example:

byte maxByte = 255;
byte one = 1;

int result = maxByte + one;
//maxByte and one are both converted to 32-bit integers before
//calculating result; result is now 256, it is not truncated to
//fit into a byte
I have one more question, even though it has nothing to do with the sbyte type. I'm wondering what's the difference between multi-dimensional and jagged array types?

bool[][] myBools --Jagged
double[,] myDoubles --multi-dimension


www.computertutorials.org
computertutorials.org-all your computer needs...
A multidimensional array is internally the same as a regular array. Under the hood, the following examples do the same thing:

int[,] multiArray = new int[rank1, rank2];
multiArray[x, y] = 8;

int[] regArray = new int[rank1 * rank2];
regArray[x * rank2 + y] = 8;

Jagged arrays are just arrays of arrays. The main array and each sub-array are allocated separately. Each sub-array can have a unique length, and even just be null.
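For instance, here's a small sketch (my own example data) of a jagged array whose rows are allocated separately, have different lengths, and may even be null:

```csharp
using System;

class Jagged
{
    static void Main()
    {
        // The outer array only holds three references to inner arrays.
        int[][] rows = new int[3][];

        rows[0] = new int[] { 1, 2, 3 }; // length 3
        rows[1] = new int[] { 4 };       // length 1 - lengths may differ per row
        rows[2] = null;                  // a row may even be missing entirely

        foreach (int[] row in rows)
            Console.WriteLine(row == null ? "(null)" : string.Join(",", row));
    }
}
```

A rectangular int[3, 3] could not represent this shape; every row would have to exist and have the same length.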
Further differences:
- jagged arrays are slightly faster than multidimensional arrays, because there are special IL-opcodes to access the elements (newarr, ldelema, ldlen, stelem)
- jagged arrays do not(!) belong to the CLS (Common Language Specification), i.e. the CLS does not support an array being the element type of an array. This could lead to problems when using multiple programming languages.

(Have a look at Applied Microsoft .NET Framework Programming)

Regards,
Andre
Andre Loker | Personal blog on .NET
Quote:Original post by VizOne
Further differences:
- jagged arrays are slightly faster than multidimensional arrays, because there are special IL-opcodes to access the elements (newarr, ldelema, ldlen, stelem)
- jagged arrays do not(!) belong to the CLS (Common Language Specification), i.e. the CLS does not support an array being the element type of an array. This could lead to problems when using multiple programming languages.

(Have a look at Applied Microsoft .NET Framework Programming)

Regards,
Andre

Do you think we should use multidimensional arrays in preference to jagged arrays? They have more restrictions, so they have the potential to be faster (if more instructions were created), and the compiler could convert multidimensional arrays to jagged ones if those were faster. It's generally a good idea to express as many restrictions as possible and let the compiler exploit them.
My suggestion would be to use whichever one is most appropriate for your situation. Here's what the program has to do to access an element in two-dimensional instances of both types:

To access an element in a jagged array:
1. One bit-shift
2. One addition
3. One dereference
4. One multiplication or bit-shift (always bit-shift for reference types)
5. One addition
6. One dereference

To access an element in a multidimensional array:
1. One multiplication
2. One addition
3. One multiplication or bit-shift (always bit-shift for reference types)
4. One addition
5. One dereference

If you understand a little about processors and assembly, you'll know that of all the operations above, multiplication is generally the slowest. (Note: if the elements in your array are larger than the registers on the processor, the dereferencing may incur additional overhead.) If you access elements frequently in certain areas of your code, there are optimizations the compiler can perform on jagged arrays to avoid going through that whole list each time.

Overall, jagged arrays will be faster when accessing elements. When I ran tests populating jagged and multidimensional arrays of the same size and two dimensions, the jagged one was about three times faster. On the other hand, jagged arrays use slightly more memory than multidimensional arrays of the same size. Choose whichever fits your needs best - processors are fast enough today that it shouldn't make a huge difference unless your program spends all its time accessing array elements.
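If you want to compare the two access patterns yourself, here's a rough sketch of such a test (not a rigorous benchmark; the sizes are arbitrary and your numbers will vary):

```csharp
using System;
using System.Diagnostics;

class ArrayTiming
{
    const int Rank1 = 1000, Rank2 = 1000;

    static void Main()
    {
        // A jagged array: each row allocated separately.
        int[][] jagged = new int[Rank1][];
        for (int i = 0; i < Rank1; i++)
            jagged[i] = new int[Rank2];

        // A rectangular array: one contiguous allocation.
        int[,] multi = new int[Rank1, Rank2];

        Stopwatch sw = Stopwatch.StartNew();
        for (int x = 0; x < Rank1; x++)
            for (int y = 0; y < Rank2; y++)
                jagged[x][y] = x + y;
        Console.WriteLine("Jagged:           {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int x = 0; x < Rank1; x++)
            for (int y = 0; y < Rank2; y++)
                multi[x, y] = x + y;
        Console.WriteLine("Multidimensional: {0} ms", sw.ElapsedMilliseconds);
    }
}
```

Run it in a Release build outside the debugger, or the JIT will skip most of the optimizations that make the comparison interesting.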
Quote:Original post by DrGUI
Do you think we should use multidimensional arrays in preference to jagged arrays? They have more restrictions, so they have the potential to be faster (if more instructions were created), and the compiler could convert multidimensional arrays to jagged ones if those were faster. It's generally a good idea to express as many restrictions as possible and let the compiler exploit them.


As TheBluMage suggests, use the appropriate one. If I wanted to express some kind of rectangular matrix, a multidimensional array would feel much more "natural" to me than an array of arrays, since it actually presents rows and columns. A jagged array is just an array that happens to have an array of some kind as its element type; the individual array elements do not appear to be interrelated. See the difference? If I were building a library that is supposed to be consumed by e.g. VB.NET, I would try to avoid non-CLS constructs on the public side of the assembly, i.e. no jagged arrays as return types, parameter types etc. in public methods. Another disadvantage of jagged arrays is that they are more complicated and error-prone to initialize. Finally, the speed advantage of jagged arrays is negligible in most cases.
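On the initialization point, here's a minimal sketch of the classic pitfall: a rectangular array is usable immediately, while a jagged array needs every inner array allocated by hand first.

```csharp
using System;

class InitPitfall
{
    static void Main()
    {
        // new int[2, 3] allocates the whole rectangle up front.
        int[,] multi = new int[2, 3];
        multi[1, 2] = 42; // fine

        // new int[2][] only allocates the outer array; the rows are null.
        int[][] jagged = new int[2][];
        try
        {
            jagged[1][2] = 42; // NullReferenceException: jagged[1] is null
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("Inner arrays must be allocated first!");
        }

        for (int i = 0; i < jagged.Length; i++)
            jagged[i] = new int[3]; // one explicit allocation per row
        jagged[1][2] = 42;          // now fine
    }
}
```

Forgetting that per-row loop (or getting a row length wrong) is exactly the kind of error a rectangular array makes impossible.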

Well, actually I haven't used any of the two variations much in my projects. Most of the time, I have picked a more specialized type of container/collection.

Regards,
Andre
Andre Loker | Personal blog on .NET
Thanks for your replies, I was just curious really. What I really meant was that multi-d arrays could theoretically be faster, but it probably depends heavily on memory access patterns, processor cache efficiency and so on.

This topic is closed to new replies.
