Best Way To Comment Code Without Cluttering
1) Comment every file, class, or function to explain what it does
2) Comment every member variable (since you can't see how it's being used by looking at a header)
3) Explain what conditionals are for, and potentially what each branch does, if it's non-obvious
4) Comment on any obscure language or library feature you're using that a reader might otherwise have to research
5) Write a comment about every bit of code that is doing something in a non-obvious way, explaining why (e.g. lack of time, improved performance, library quirks)
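To make the list above concrete, here is a small hedged sketch in C++ (the HealthTracker class and its members are invented for illustration), with each comment labeled by the rule it demonstrates:

```cpp
// Tracks a player's health and applies regeneration over time.
// (Rule 1: a comment on every class explaining what it does.)
class HealthTracker
{
    int health;       // Current hit points, 0..100.        (Rule 2: comment every member.)
    int regenPerTick; // Points restored each update tick.  (Rule 2.)

public:
    HealthTracker() : health(100), regenPerTick(1) {}

    // Applies damage, clamping at zero. (Rule 1: comment on every function.)
    void Damage(int amount)
    {
        // Rule 3: explain the conditional. A dead player can't take further
        // damage, so we skip the subtraction entirely rather than clamping after.
        if (health <= 0)
            return;
        health -= amount;
        if (health < 0)
            health = 0;
    }

    void Tick()
    {
        // Rule 5: non-obvious code gets a "why". We regenerate with integer math
        // instead of floats to keep the simulation deterministic across platforms.
        if (health > 0 && health < 100)
            health += regenPerTick;
    }

    int Health() const { return health; }
};
```

Rule 4 (obscure language or library features) doesn't come up in a snippet this simple, but the same principle applies: if the reader would have to look it up, save them the trip.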
I don't see comments as clutter. I use an IDE and they clearly delineate comments from code, so it's no worse than having a lot of whitespace.
Back in my BASIC days I didn't use any comments at all, and learning to understand code purely by reading it was a skill I cultivated and was proud of. Now I work on bigger projects with multiple people, and nobody's got time to read everything to understand it before editing. It's easy to claim that well-written code with good variable names is self-explanatory, but I don't want to have to read all of it to know what it's doing. Comments provide an essential abstract that saves time when coming back to edit code later.
One type of comment I find very useful is to link any code to bugs (jira, github issues, whatever), including for FIXMEs and known problems. More generally, links to a page on the company wiki or anything like that are also good; you can have huge amounts of documentation and archived conversation on the code without cluttering up the source file itself.
The bugs are generally durable (e.g., the link is valid even after the bug is closed), so it lets developers go and find any discussion that occurred relating to the code. Sometimes just seeing the title of the bug (or feature request) is enough to explain the "why" of some code, or provides important context (e.g. "this feature is stupid and complicates everything, why ffs did anyone... oh, an executive asked for it personally").
The further advantage is that if you are investigating a bug but either can't fix it immediately or need to hand it off to another developer, it lets them find relevant parts of the code you investigated very easily and quickly. I just yesterday had a case where I got assigned a months-old bug, searched the codebase for the jira ticket name, and found 6-month-old comments from myself of the form: // [ABC-1234] I think this logic is inverted but I need content to test
This is similar to my own professional approach.
Firstly, if we have a description and discussion, link to a page on Confluence, or just mention a JIRA ticket number, etc.
Secondly, within header files, class definitions, etc., describe each function above its definition in a format which can be parsed, e.g. Javadoc style or whatever floats your boat. You can then run your codebase through a documentation generator such as Doxygen, and I find that much more readable than the alternative.
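As a sketch of what I mean, here is a small function with a Doxygen-parseable header (the Clamp function and the ABC-1234 ticket number are made up for illustration):

```cpp
/// @brief Clamps a value to the inclusive range [low, high].
///
/// Used by the camera code to keep the zoom level in bounds.
/// See ABC-1234 on JIRA for the discussion of why we clamp here
/// instead of rejecting out-of-range input.
///
/// @param value The value to clamp.
/// @param low   Lower bound (inclusive).
/// @param high  Upper bound (inclusive).
/// @return value limited to [low, high].
int Clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}
```

Running Doxygen over headers written like this gives you browsable documentation for free, without the reader ever having to open the source file.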
Hope this helps!
/*************************
* I'm pretty much a minimalist in
* this department. I kinda hate
* when some third party library
* has a class that looks like it's
* hundreds and hundreds of lines
* with most of them comments and
* the odd method or property here
* and there. And I demonstrated
* what I mean lol
*************************/
I suppose it depends a bit on what your reason for writing code is. When I'm at work I comment a lot less, but that's probably more out of just pressure to get things done and being sloppy. Even then I try to write self documenting code.
But I'm mostly writing tutorials to help other people understand. So for the tutorials I comment profusely.
I became a big fan of commenting after I learned Assembly Language. There, you can't even read your own code a week later if you don't comment. That was one comment per operation. That's a lot of comments. There is no such thing as self documenting code in Assembler.
So there are several practices I generally try and follow for commenting.
First, I was taught that you should never have any part of your code be longer than one page; if you do, you need to break it up into a function or method. I generally shoot for two pages, but occasionally break the rule and go a little long. But if I see more than two pages in any part of my code, I consider breaking it up.
Then I do a "header" for every function/method. For tutorials, that may be one page of comments to one page of code. There I describe what that function or method does and why to try and help anyone who reads it understand my intent and the high level overview of the function/method.
I also try to put all my variables at the top of the function/method where they are easy to find. If it's only two pages of code, this may be a little overkill, but it works well for me. Then I can write a comment out to the side to describe the intent of each one.
What I find problematic is comments on separate lines. I do find that to feel like clutter. So, I rarely do that. That's kind of the job of my headers.
But I often write comments at the end of the line. I try to leave a lot of white space at the end of every line so that there is room for this, which is sometimes challenging since I like to write 30-character variable names as part of my "self documenting" goal, such as "PlayerVelocityInMeterPerSecondEachSecond". But with that white space and a 16:9 display, I generally tab a couple of times to leave a fair amount of white space between the code and the comments, and align the comments to make them less obtrusive.
Here's an example where I have almost as much comment as code. Of course, commenting is somewhat a matter of preference and style, and different people are going to prefer different approaches. (The forum doesn't display my tabs correctly, but the right-side comments were mostly aligned in the original where possible while keeping the whole line on the screen. It also unaligned some of the code. I use indentation to group code, as well as white space between lines.)
//=====================================================================================================================
// DirectX11Game::CreateDeviceAndSwapChain()
//
// Purpose:
// To create the Device, Device Context, and Swap Chain. Basically, initializes DX.
//
// Input:
// unsigned int Width - Width of the display resolution of the computer monitor.
// unsigned int Height - Height of the display resolution of the computer monitor.
//
// Output:
// bool - true if no errors occurred.
//
// Notes:
// This method is the core of starting up DX. This one method, by calling D3D11CreateDeviceAndSwapChain, almost single-handedly
// initializes DX. Three of the most important parts of DX are set up here in this one call: the Device, the Device Context, and the
// swap chain.
//
// I have to admit that I'm still basically trying to understand exactly what a device and a device context are. Here is MS's
// documentation on the matter:
// https://msdn.microsoft.com/en-us/library/windows/desktop/ff476880(v=vs.85).aspx
//
// Basically, the best explanation I've heard comes from MS: a device is used to put together the resources
// that the graphics card uses such as textures and vertex buffers. And a device context is what actually does the work. At least,
// that's my current understanding of the matter.
//
// Regardless, the combination of the device and device context ARE Direct3D/DirectX. This is the COM object that is
// DirectX and that does all the drawing.
//
// The swapchain consists of the front buffer and one or more back buffers. The front buffer is the area of memory where
// the graphics card looks to determine what color every pixel on your computer screen will be drawn with. By drawing all the
// pixels on the computer screen in the correct color, you get the image on your computer screen. All the front buffer contains
// is a color for every pixel to be drawn on the screen (I say screen, but you could be drawing picture-in-picture or multiple
// cameras, which would each need their own swap chain). But it is 1-to-1, with each value in the front buffer representing nothing
// but a color for a pixel on the screen. Whatever is in the front buffer IS what's on your computer screen.
//
// You could theoretically draw to the front buffer, but we don't. The primary reason is that if you do, you get what is called
// screen tearing. You can google that to get some great example images of what that is. But basically it means that when something
// changes between frames part of the screen still has the image for the previous frame and part of the screen has the image for the
// new frame and you can see the line where they don't agree with one another.
//
// So, instead we create an identical buffer called the back buffer which we draw to instead. Then we do a SUPER fast change between
// frames all at once to flip the buffers. Actually, the difference between the front buffer and the back buffer is just which one the
// pointer points to. Flipping the back buffer and front buffer is called "presenting" and all that happens is that between frames the
// back buffer pointer is changed to point to the front buffer memory and the front buffer pointer is changed to point to the back buffer
// memory. Then the new frame is presented to the video display all at once without any tearing. Changing two memory pointers is super
// fast, not to mention that this can be coordinated with the drawing so that it occurs between screen refreshes. The screen refresh rate
// is also set here if you like; 1/60th of a second is pretty standard.
//
// Two buffers is called double buffering. But you can have even more back buffers if needed for some reason. The front buffer
// and all the back buffers is what is known as the "swap chain".
//
// If D3D11CreateDeviceAndSwapChain succeeds then it returns a pointer to the Device, Device Context, and swap chain. If it fails
// then DirectX is not going to be able to draw anything to the screen, and we might as well close the application and call it
// quits. This method returns false if such a catastrophic failure occurs.
//
// This isn't really designed to work with multiple graphics cards or even multiple monitors. So, you're pretty much on your own
// modding this code to make it handle that. It pretty much goes with the first graphics port it finds.
//
//=====================================================================================================================
bool DirectX11Game::CreateDeviceAndSwapChain(unsigned int Width, unsigned int Height)
{
    bool DriverFound = false;                               //Assume we fail unless everything goes right.
    unsigned int Driver = 0;
    unsigned int NumberOfDriverTypes;
    unsigned int NumberOfFeatureLevels;
    unsigned int DeviceCreationFlags = 0;
    D3D_DRIVER_TYPE DriverTypes[] =
    {
        D3D_DRIVER_TYPE_HARDWARE, D3D_DRIVER_TYPE_WARP,
        D3D_DRIVER_TYPE_REFERENCE, D3D_DRIVER_TYPE_SOFTWARE //The driver types we want to support. Usually only the hardware driver.
    };
    D3D_FEATURE_LEVEL FeatureLevels[] =
    {
        D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_11_0 //,
        //D3D_FEATURE_LEVEL_10_1,
        //D3D_FEATURE_LEVEL_10_0                            //The different versions of DX hardware we want to support.
    };

    NumberOfDriverTypes = ARRAYSIZE(DriverTypes);
    NumberOfFeatureLevels = ARRAYSIZE(FeatureLevels);

    DXGI_SWAP_CHAIN_DESC swapChainDesc;
    ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));      //Initialize the memory by zero'ing it all out.
    swapChainDesc.BufferCount = 1;
    swapChainDesc.BufferDesc.Width = Width;
    swapChainDesc.BufferDesc.Height = Height;
    swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
    swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.OutputWindow = WindowHandle;              //Connect DirectX to the window so that DX can draw in the application's window.
    swapChainDesc.Windowed = true;                          //Create windowed or full screen.
    //swapChainDesc.SampleDesc.Count = 1;                   //No anti-aliasing
    swapChainDesc.SampleDesc.Count = 8;                     //Anti-aliasing
    //swapChainDesc.SampleDesc.Quality = 0;                 //No anti-aliasing
    swapChainDesc.SampleDesc.Quality = 4;                   //Anti-aliasing
    swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; //Allow full screen switching.

#ifdef _DEBUG
    DeviceCreationFlags |= D3D11_CREATE_DEVICE_DEBUG;       //If you're debugging, you might want a little extra info as to what went wrong.
#endif

    do
    {
        if(SUCCEEDED(D3D11CreateDeviceAndSwapChain(0, DriverTypes[Driver], 0, DeviceCreationFlags, FeatureLevels, NumberOfFeatureLevels,
            D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &GraphicsDevice, &GraphicsCardFeatureLevel, &GraphicsDeviceContext)))
        {
            DriverType = DriverTypes[Driver];
            DriverFound = true;                             //We can now draw to the screen. Exit after finding the first working graphics driver type.
        }
        ++Driver;
    }
    while(Driver < NumberOfDriverTypes && !DriverFound);

    if(!DriverFound) DXTRACE_MSG("Failed to create the Direct3D device!");

    return DriverFound;
}
//=====================================================================================================================
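The pointer-swap "presenting" described in the Notes above can be sketched in plain C++ without any DirectX at all. This ToySwapChain type and its names are invented for illustration; it just shows why the flip is fast (no pixels are copied) and why the screen never shows a half-finished frame:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A toy swap chain: two pixel buffers and a pointer swap for "presenting".
struct ToySwapChain
{
    std::vector<unsigned int> bufferA;   //Pixel colors, one value per pixel.
    std::vector<unsigned int> bufferB;
    std::vector<unsigned int>* front;    //What the "monitor" reads from.
    std::vector<unsigned int>* back;     //What we draw into.

    ToySwapChain(std::size_t pixelCount)
        : bufferA(pixelCount, 0), bufferB(pixelCount, 0),
          front(&bufferA), back(&bufferB) {}

    // Draw a frame into the back buffer only; the front buffer is untouched,
    // so the "screen" never shows a half-finished frame (no tearing).
    void DrawFrame(unsigned int color)
    {
        std::fill(back->begin(), back->end(), color);
    }

    // "Present": just exchange the two pointers. No pixel data is copied,
    // which is why the flip between frames is so fast.
    void Present()
    {
        std::swap(front, back);
    }
};
```

A real swap chain adds synchronization with the monitor's refresh, but the core idea of presenting really is this pointer exchange.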
Alternatively, I rely on descriptive, informative names for my classes/methods/variables, etc.