
Anon Mike

Member Since 07 Feb 2001
Offline Last Active Oct 24 2012 06:07 PM

Posts I've Made

In Topic: Visual Studio can't debug performance bottleneck

19 October 2012 - 01:47 PM

This is actually a Windows internal thing, not VC specifically. If a process starts up with a debugger attached, the Windows heap manager puts itself into debug mode, which is much slower.

To work around it, start your app without debugging (Ctrl+F5) and then attach VC to the process after it starts.

In Topic: why do they have to make engineering so hard?

21 September 2012 - 10:54 AM

Everybody has their mental blocks. For my dad, like many here, it was calculus. He struggled through and went on to be an engineer anyway.

I had little trouble with calculus. I was in Computer Engineering and was one of those people who never studied and still got A's and B's. I picked up a Computer Science double major because it was essentially no effort - the coursework was trivial and the projects were fun things that I would have been doing in my spare time anyway. And then came statistics. I took the hard version (meant for math and physics majors) because "hey, why not", and it kicked my ass so hard I dropped out for the remainder of the quarter (it was too late to drop just the one class). I went back later and took the basic probability-for-dummies course instead and still struggled, but at least got through it.

It's interesting. Students always complain about how hard their classes are and how it's unfair that there's such a thing as "weeding out". I do a lot of interviewing nowadays. Virtually everybody I talk to has a degree in either engineering or CS, yet the number of people who are complete failures at basic technical skills is staggering. We'll go through ~60 people (i.e. people who at least seem qualified based on their resume) to fill a single entry-level position.

In Topic: Do interviewers allow you to use a compiler to answer questions

16 December 2011 - 02:24 PM

I've given a lot of interviews and no, you probably won't have access to a compiler. In the first place, every compiler and IDE is different - I'm neither going to find and install the one you like nor expect you to figure out the one the company uses in what little time is available. Second, at least for the companies I've worked for, I don't really care what language you use - I expect that if you can code decently then you'll be able to pick up whatever language your current project uses - and that could easily change from project to project.

As for the code itself, I expect it to be real code and not pseudocode, but I don't care if you make minor syntax errors. I do expect you to have a good algorithm, have at least a vague sense of good structure, and be able to mentally step through it to find the most glaring flaws.

As for the simplicity of the questions in the OP, those are commonly seen in the early stages. About a third of the people can't answer them well.

In Topic: Is it safe to NULL an array?

07 December 2011 - 03:31 PM

An array of zero length would be like a room with zero width, length and height. Hard to imagine and certainly not useful for anything if it could exist. :blink:

1-dimensional arrays of length 0 are actually useful. You see them a lot when you have a fixed-sized header in front of a dynamically-sized array and you want the whole lot to be in one contiguous block of memory. e.g., when overlaying a structure on top of serialized data:

struct Foo {
	int some_header_data;
	size_t length;
	ArrayType variable_sized_info[];  // "length" elements
};

Since the array is of size 0 it doesn't count toward sizeof, so you can easily get the header size. But since it's declared, the structure is padded properly and you can easily access the array elements, e.g. foo->variable_sized_info[42] is legal (assuming there are indeed that many elements, of course).

*Two*-dimensional arrays where both dimensions are zero-length don't make sense though, and as far as I know no mainstream compiler will accept them.

In Topic: Visual Studio 2008: 'W' is for WTF?

18 October 2011 - 10:20 AM

They didn't consider the consequences of their decision hard enough at the time.

At the time the dominant language was C, not C++, so function overloading wasn't an option. They could have just made everyone call the W version directly, but then people would have whined about how dumb all the API names were (like you sometimes see for the Unicode C-runtime functions, e.g. wcslen). Plus Microsoft was already fighting an uphill battle to get people to accept that Unicode was a good thing rather than a waste of memory (ASCII should be good enough for anybody!). And even the people who were willing to accept Unicode didn't want to have to go through their entire program changing function names - porting to NT (from Win16) was enough work already. Plus if they did that then it wouldn't compile for 16-bit anymore, and that was where the real market was.

The macros were a perfectly reasonable compromise at the time. It is unfortunate that they don't provide a way to use overloading instead of macros for C++ nowadays, but I imagine there's little real benefit, and it would very likely increase compile times in addition to simply making the headers bigger.