Secure/Stable Programming

134 comments, last by Dmytry 19 years, 6 months ago
Dmytry:

I think you're blowing this a bit out of proportion. Null-terminating low-level strings in C/C++ is what one needs to do for that same string to be usable by many functions; it's not "hiding bugs", it's actually preventing them.
If you don't particularly enjoy the way bounds checking is done in C/C++, that's OK, but leaving an extra space for the NULL is indeed good practice with C/C++ when dealing with low-level strings.

Moderator:

Feel free to delete this post for thread maintenance purposes.
"Follow the white rabbit."
Quote:Original post by snk_kid
***** source block removed *****

I wouldn't do that in the copy constructor.

(construction/initialization) != assignment

Even though the final result is the same, they are not the same operation: with assignment the variable always had "some" previous value that is replaced.

I'd also like to add: always prefer the implicitly defined default copy/assignment operations, or, if it doesn't make sense for a type to be copied/assigned, make them protected/private. Only define what it means to copy & assign when you have data members that are raw pointers.
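As a sketch of that rule (the class names here are made up for illustration):

```cpp
#include <cassert>

// Value-like type: no raw-pointer members, so the implicitly defined
// copy constructor and assignment operator are already correct.
struct Point {
    int x, y;
};

// A type that should never be copied (it owns a unique resource):
// declare the copy operations private and leave them undefined,
// the usual pre-C++11 idiom for disabling copying.
class FileHandle {
public:
    FileHandle() : fd_(-1) {}
private:
    FileHandle(const FileHandle&);            // not implemented
    FileHandle& operator=(const FileHandle&); // not implemented
    int fd_;
};
```

Any attempt to copy a FileHandle then fails at compile (or link) time instead of silently double-owning the resource.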


Hi,
what do you mean by "some" previous value? I thought that by
that point inside the code, the class and its variables are already
constructed without initial values. Wouldn't inlining the
copy operator give out the EXACT same code?

eg.
  MyGenericClass( const MyGenericClass& rhs ) {
      m_A = rhs.m_A;
  }

  MyGenericClass& operator=( const MyGenericClass& rhs ) {
      m_A = rhs.m_A; // same as above ????
      return (*this);
  }


is it actually bad practice, or do you just not prefer it?
"I am a donut! Ask not how many tris/batch, but rather how many batches/frame!" -- Matthias Wloka & Richard Huddy, (GDC, DirectX 9 Performance)

http://www.silvermace.com/ -- My personal website
Quote:Original post by Washu
Assert everything. The less likely something will be bad, the more likely you should assert it. The more complex the assert, the more information it will give you. Don't hesitate to write your own assert macro that gives you more information than the standard one does.

I've never used asserts a lot except in complex code. I find less cluttered code worth the extra time it takes to dig out the assert information using the debugger.
Quote:Original post by silvermace
what do you mean "some" previous value


Well, I mean this:

int i = 10; // implicitly invokes constructor
i = 30;     // assignment


is different from this:

int i(30);
// or
int i = 30;
// both are equivalent


Even though they both eventually have the same result, in the first version i is initialized to 10 and then replaced by 30 through assignment, so it had a previous value. If in the first version you didn't initialize it to 10 or any number whatsoever, then it would probably hold some garbage value, so again it had a previous value.

Quote:Original post by silvermace
I thought that by that point inside the code, the class and its variables are already constructed without initial values. Wouldn't inlining the copy operator give out the EXACT same code?


From what I remember, the C++ standard does not mandate that variables be initialized to some default value; they may start out without any initial value at all.

You cannot make any assumptions; that's the whole point of using constructors & constructor initializer lists to initialize user-defined types with either default or non-default values.

Quote:Original post by silvermace
eg.
  MyGenericClass( const MyGenericClass& rhs ) {
      m_A = rhs.m_A;
  }

  MyGenericClass& operator=( const MyGenericClass& rhs ) {
      m_A = rhs.m_A; // same as above ????
      return (*this);
  }



What happens if you have pointers as data members? I'm pretty sure that copy construction and assignment will be quite different. The constructor will acquire resources (say, call new), whereas the assignment operator will typically release the current resource first (say, call delete), then acquire new resources and copy the values across.
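A hedged sketch of that difference with a raw char* member (the class name and layout are hypothetical):

```cpp
#include <cstring>

// Hypothetical string-owning class, sketched to show why copy
// construction and assignment differ once a raw pointer owns a resource.
class Name {
public:
    explicit Name(const char* s)
        : data_(new char[std::strlen(s) + 1]) {   // +1 for the terminating NUL
        std::strcpy(data_, s);
    }
    // Copy constructor: the object has no previous state, so it only acquires.
    Name(const Name& rhs)
        : data_(new char[std::strlen(rhs.data_) + 1]) {
        std::strcpy(data_, rhs.data_);
    }
    // Assignment: the object already owns a buffer, which must be released.
    // Acquiring the new buffer *before* deleting the old keeps the object
    // valid if new throws, and the self-assignment check keeps a = a safe.
    Name& operator=(const Name& rhs) {
        if (this != &rhs) {
            char* fresh = new char[std::strlen(rhs.data_) + 1];
            std::strcpy(fresh, rhs.data_);
            delete[] data_;
            data_ = fresh;
        }
        return *this;
    }
    ~Name() { delete[] data_; }
    const char* c_str() const { return data_; }
private:
    char* data_;
};
```

With the member-wise default copy, both objects would point at the same buffer and the destructor would delete it twice.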

Quote:Original post by silvermace
is it actually bad practice, or do you just not prefer it?


Bad practice. Prefer constructor initializer lists to initialize members, and pull resources in the body of the constructor.
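For illustration, a sketch of that initializer-list style, reusing the m_A member from the example above (its int type is just an assumption):

```cpp
// With an initializer list, m_A is constructed directly from its source
// value instead of being default-constructed and then assigned in the body.
class MyGenericClass {
public:
    explicit MyGenericClass(int a) : m_A(a) {}
    MyGenericClass(const MyGenericClass& rhs) : m_A(rhs.m_A) {}
    int value() const { return m_A; }
private:
    int m_A;
};
```

For an int the generated code is the same either way; for members with non-trivial constructors the initializer list avoids a wasted default construction.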
Once VS 2005 comes out (both the Express VC and the full deal), use the secure CRT functions. It requires zero effort on your part, and adds only a pretty minor overhead.

---------------------------
Hello, and Welcome to some arbitrary temporal location in the space-time continuum.

Quote:Original post by White Rabbit
Dmytry:

I think you're blowing this a bit out of proportion. Null-terminating low-level strings in C/C++ is what one needs to do for that same string to be usable by many functions; it's not "hiding bugs", it's actually preventing them.
If you don't particularly enjoy the way bounds checking is done in C/C++, that's OK, but leaving an extra space for the NULL is indeed good practice with C/C++ when dealing with low-level strings.

Moderator:

Feel free to delete this post for thread maintenance purposes.

Adding a 0 at the end of a string is OK. Adding a zero to an already zero-terminated string is not OK. Adding extra size to the end of all arrays is not OK either.
For a string, one 0 is needed for its normal operation, but that doesn't mean you should optionally add extra 0s at the end "to be sure".

edit: and I know many people have problems with counting from 1 versus from 0 (I did too), and will just allocate bigger arrays because they are +/- 1 unsure what size they need. That's not good.
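In other words, size the buffer exactly: strlen(s) characters plus one NUL, no guess padding. A sketch (the helper function is hypothetical):

```cpp
#include <cstring>
#include <cstdlib>

// Copy a C string into an exactly-sized buffer: strlen(src) characters
// plus one byte for the single terminating NUL -- not "a few extra zeros
// to be sure". Caller frees the result.
char* duplicate(const char* src) {
    std::size_t len = std::strlen(src);          // characters, excluding the NUL
    char* dst = static_cast<char*>(std::malloc(len + 1));
    if (dst) {
        std::memcpy(dst, src, len + 1);          // copies the NUL as well
    }
    return dst;
}
```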
If you do any sort of string handling and security is a concern, then for crying out loud use a string class. It doesn't have to be std::string, just something that will protect you against overruns. If speed is a concern, then consider making a StringRef class that provides class semantics atop an otherwise stack-allocated buffer.

Ditto for stream classes. No excuse for C++ programs to have buffer over/underflows nowadays. Avoid unsafe CRT functions like the plague.
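A hypothetical fixed-capacity string along the lines of that StringRef idea might look like this (a sketch, not a full implementation):

```cpp
#include <cstring>
#include <cstddef>

// Stack storage, but every write goes through a member that enforces the
// bound, so a would-be overrun becomes a truncation instead.
template <std::size_t N>
class FixedString {
public:
    FixedString() { buf_[0] = '\0'; }
    void assign(const char* s) {
        std::size_t len = std::strlen(s);
        if (len > N) len = N;          // clamp instead of overflowing
        std::memcpy(buf_, s, len);
        buf_[len] = '\0';
    }
    const char* c_str() const { return buf_; }
    std::size_t size() const { return std::strlen(buf_); }
private:
    char buf_[N + 1];                  // +1 for the terminator
};
```

Because the capacity is a template parameter, there is no heap allocation at all, yet no caller can write past the buffer through the class interface.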
--God has paid us the intolerable compliment of loving us, in the deepest, most tragic, most inexorable sense.- C.S. Lewis
Don't use signed values for sizes!

This mistake is made too often, and it is a security break waiting to happen.

int size;
char *buf;

fread(&size, sizeof(int), 1, infile);
if (size < 1024)    // WOOHOO
    buf = (char *)malloc(size);


The code reads a size from the file into an int, checks that it is smaller than 1K, and then allocates the specified size.

That check would be fine if size were an unsigned int, but it is not, so it can be smaller than 0 — and a negative int passed to malloc gets converted to a huge unsigned value.

edit: now the code seems right
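A sketch of the corrected version, using an unsigned fixed-width type for the size (the function name is made up):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstdint>

// Read the size as an unsigned 32-bit value and bound-check it before
// allocating; a hostile file can no longer smuggle in a negative size
// that sails past the "< 1024" check and blows up in malloc.
char* read_sized_block(std::FILE* infile) {
    std::uint32_t size = 0;
    if (std::fread(&size, sizeof(size), 1, infile) != 1) return NULL;
    if (size == 0 || size >= 1024) return NULL;   // reject out-of-range sizes
    return static_cast<char*>(std::malloc(size));
}
```

Note that the bit pattern a signed reader would have seen as -1 is now simply a very large unsigned number, and is rejected by the same comparison.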
Quote:Original post by doho
Quote:Original post by Washu
Assert everything. The less likely something will be bad, the more likely you should assert it. The more complex the assert, the more information it will give you. Don't hesitate to write your own assert macro that gives you more information than the standard one does.

I've never used asserts a lot except in complex code. I find less cluttered code worth the extra time it takes to dig out the assert information using the debugger.

In my professional experience I've found it to be quite the opposite. A well-placed assert can save me many hours of debugging, especially as the codebase becomes more and more complex. An assert doesn't have to be ugly, you know.
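For example, a custom assert macro along the lines Washu suggests might look like this (a sketch; the macro name is made up):

```cpp
#include <cstdio>
#include <cstdlib>

// Reports the failed expression, a caller-supplied message, and the exact
// source location before aborting; compiles away entirely in release
// builds (when NDEBUG is defined), just like the standard assert.
#ifndef NDEBUG
#define MY_ASSERT(expr, msg)                                              \
    do {                                                                  \
        if (!(expr)) {                                                    \
            std::fprintf(stderr, "Assertion failed: %s\n  %s\n  %s:%d\n", \
                         #expr, (msg), __FILE__, __LINE__);               \
            std::abort();                                                 \
        }                                                                 \
    } while (0)
#else
#define MY_ASSERT(expr, msg) ((void)0)
#endif
```

The do/while(0) wrapper makes the macro behave like a single statement, so it is safe inside an unbraced if/else.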

Quote:
Not entirely true. Sometimes, using assembly is the best way to go. Consider, for example, prefetching. The compiler doesn't know what is good code/data to prefetch, and can't actually make use of this feature too well.

Other things, such as vectorization, would also benefit from assembly, but that requires some work at the initial implementation to do well. It helps if you have an understanding of what the processor does and when, of course.

You are right, there are times when assembly can be the "right" thing to use. My point was that optimization should be looked at from a higher level than that, because chances are you will get a greater gain from optimizing something else. While assembly can be useful (I won't deny that), it tends to be more wasteful to write something in assembly than to optimize an algorithm, or change it out entirely.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Quote:Original post by Washu
Quote:Original post by Nurgle
Not entirely true. Sometimes, using assembly is the best way to go. Consider, for example, prefetching. The compiler doesn't know what is good code/data to prefetch, and can't actually make use of this feature too well.

Other things, such as vectorization, would also benefit from assembly, but that requires some work at the initial implementation to do well. It helps if you have an understanding of what the processor does and when, of course.

You are right, there are times when assembly can be the "right" thing to use. My point was that optimization should be looked at from a higher level than that, because chances are you will get a greater gain from optimizing something else. While assembly can be useful (I won't deny that), it tends to be more wasteful to write something in assembly than to optimize an algorithm, or change it out entirely.


Nurgle, I disagree with you (not about the need for assembly language - of course we still need it, since some code constructs cannot be written in high-level languages - but about the examples you gave). Today's compilers are far more efficient than the ones I used in past days. The Intel compiler and the Codeplay VectorC compiler, for example, can vectorize your code - and they perform very well. They can also unroll small loops - and most of those are good candidates for prefetching. Most assembly programming is governed by rules: "if you do this like that, then you'll have some very fast code". Those rules are also applied by compilers when they can - of course, this "when they can" is the Big Point. IMHO it is far better to give hints to the compiler and to use its intrinsics in order to let it generate better code than to write hand-written assembly routines - and it is less time consuming, of course, which is always good :)
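As an illustration of letting the compiler do the work, here is a loop shaped so a vectorizing compiler can handle it on its own (a sketch, not tuned code):

```cpp
#include <cstddef>

// Simple counted iteration, no aliasing tricks, no early exits: exactly
// the shape that auto-vectorizers recognize and turn into SIMD code,
// with no hand-written assembly required.
void scale(float* dst, const float* src, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i) {
        dst[i] = src[i] * k;
    }
}
```

Compiled with optimization (e.g. -O2 on a vectorizing compiler), a loop like this is typically unrolled and turned into packed SIMD multiplies automatically.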

Washu++ - as coders, the first thing we need to optimize is our algorithms.

And since we are still speaking about code stability and security: even this should not be done before testing. The code development process is usually:


  • design
  • write code
  • test
  • debug
  • profile
  • optimize
  • test
  • debug
  • release


(with a lot of loops everywhere :)

