yaroslavd

C# Convention Question



As I understand it, .NET provides common types for all .NET languages, including Object, String, and Int32. In C#, these are aliased to object, string, and int. By convention, which style should be used when programming in C#, and does this choice have any impact on the program besides readability? Thanks in advance.
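For reference, a minimal sketch showing that the alias and the full CLR name refer to the same type, so either spelling compiles to identical code:

using System;

class AliasDemo
{
    static void Main()
    {
        int a = 42;          // keyword alias
        Int32 b = a;         // same type, no conversion involved
        Console.WriteLine(b);                                 // 42
        Console.WriteLine(typeof(int) == typeof(Int32));     // True
        Console.WriteLine(typeof(string) == typeof(String)); // True
        Console.WriteLine(a.GetType().FullName);              // System.Int32
    }
}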

Use the lowercase versions, simply because Visual Studio highlights them in blue as keywords, which makes your code easier for you and others to read.

Are the type aliases part of the standard? I'd assume that int would reference the most logical integer data type for the particular architecture that your code is running on. For a 64-bit architecture this would most likely be a 64-bit value, as opposed to a 32-bit one.

If my assumption is correct, then it would make sense to use Int32 only if you were performing operations that were specific to 32-bit integers and would not scale correctly to larger types.

This is a normal alias.

NORMALLY I use the lower case form (the alias). Under certain conditions I have been known to write classes using the struct names.

Anyhow, some comments:

::Are the type aliases part of the standard?

They are part of the C# language specification.

::I'd assume that int would reference the most logical integer
::data type for the particular architecture that your code is
::running on. For a 64 bit architecture this would most likely
::be a 64 bit value, as opposed to a 32.

I would assume you are a heck of a bad programmer.

This would mean that code I wrote on an AMD 64 could suddenly break on a Pentium IV. Wow. Impressive.

This is the kind of mess we had in C: variable sizes defined by the platform.

It makes more sense that everything has DEFINED BEHAVIOR: int is System.Int32 regardless of the platform. If you want to work with 64-bit variables, define the variable as 64-bit.
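A minimal sketch of that guarantee; the sizes below are fixed by the language specification, not by the machine the code runs on:

using System;

class FixedWidths
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));   // always 4 bytes (System.Int32)
        Console.WriteLine(sizeof(long));  // always 8 bytes (System.Int64)

        long big = 5000000000L;           // pick long/Int64 when you actually need 64 bits
        Console.WriteLine(big);
    }
}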

::If my assumption is correct, then it would make sense to use

I suggest a step towards making better programs:

FEWER ASSUMPTIONS AND MORE READING OF DOCUMENTATION.


Regards

Thomas Tomiczek
THONA Consulting Ltd.
(Microsoft MVP C#/.NET)

Guest Anonymous Poster
quote:
Original post by thona
I would assume you are a heck of a bad programmer.

Thomas Tomiczek
THONA Consulting Ltd.
(Microsoft MVP C#/.NET)


Indeed you have a point, but I fail to see why you have that attitude. It's just as easy to deliver the same message politely, and given that you're an MVP, that might also be good for your reputation. Who knows, 'haro' might be looking for a contractor. I bet he wouldn't choose you. Neither would I; there are other skilled MVPs with a nicer attitude, so why would I choose you? You seem like a professional guy, so why not have a professional attitude?

quote:
Original post by thona
::I'd assume that int would reference the most logical integer
::data type for the particular architecture that your code is
::running on. For a 64 bit architecture this would most likely
::be a 64 bit value, as opposed to a 32.

I would assume you are a heck of a bad programmer.

This would mean that code I wrote on an AMD 64 could suddenly break on a Pentium IV. Wow. Impressive.



I'm not even going to mention my assumptions about you. I really love how much time you spent considering what I said. In general, whether an int is 32-bit, 64-bit, or 128-bit, you're not going to run into range issues; it's a sufficient size for most purposes. Most of the time the exact bit size of the integer is almost irrelevant, since we can assume we're going to have at least 32-bit ints. What would be more important, then, is how coherent the data type is with the underlying architecture. For example, if you were on an architecture where all arithmetic instructions dealt exclusively with 64-bit operands, it would make little sense to use 32-bit values that are going to be forced to be extended before they can be used anyhow.

Now, there are obviously specific instances where the size of your data storage is extremely relevant. An example might be writing a compiler, where assuming an "int" is the size of a general-purpose register could be disastrous if general-purpose registers were actually 32-bit and ints were defined as 64-bit. So you would want to explicitly use Int32.
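A minimal sketch of that distinction: int keeps its fixed 32-bit width everywhere, while a type such as IntPtr tracks the native word size of the running process:

using System;

class NativeWidth
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));   // 4 on every platform
        Console.WriteLine(IntPtr.Size);   // 4 in a 32-bit process, 8 in a 64-bit one
    }
}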

I do a lot of C# work and for the most part I use only the lowercase versions for built-in types.

The only really notable exception is when defining integers where precision matters, or when working with binary data.

For example, even though it is understood that "int" corresponds to System.Int32, I will specify Int32 instead of int in code meant for parsing network packets of a particular standard, or something like that.
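A minimal sketch of that habit; the packet layout here is hypothetical, purely for illustration:

using System;
using System.IO;

class PacketParser
{
    // Reads a made-up header: a 4-byte sequence number followed by a 2-byte length.
    static void ParseHeader(byte[] packet)
    {
        using (var reader = new BinaryReader(new MemoryStream(packet)))
        {
            Int32 sequenceNumber = reader.ReadInt32(); // exactly 4 bytes, by definition
            Int16 payloadLength = reader.ReadInt16();  // exactly 2 bytes
            Console.WriteLine($"seq={sequenceNumber}, len={payloadLength}");
        }
    }
}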
