The Point of the Temp-Folder

11 comments, last by Norman Barrows 7 years, 2 months ago

Hello forum : )

I was wondering why applications use Windows' Temp folder, especially nowadays when OSes are installed on SSDs. Using the temp folder there is "wasting" write cycles, right?

Yes, I know modern SSDs have long lifetimes, but that is not something we can count on.

What is the problem with simply using your own temp folder inside the directory where the application is installed? Bare exes have an excuse.. but putting files into your own folder never harmed anyone...

Then again, why do applications not clean up the temp folder? Some do, yes. Others do not, or crash and never get to do it. But still, on the next start they should check whether they left anything there and clean up (in my opinion), since Windows does not clean the temp folder itself and it just clogs up.

Is it because we can expect to have write permission for the temp folder, but not for the directory where our application resides?

Should I use the temp folder? My application is meant to be cross-platform, and I would prefer to avoid platform-specific adjustments.

Thanks for taking the time : )!

You may not have write access to where the program is installed, e.g. trying to write in /usr/<anywhere> ain't gonna work in Unix unless your user is completely crazy.

As for cleanup, ymmv on different OSes. Unix allows deleting a file even while it's still open and being used (the disk blocks are released once you close the file). Windows doesn't, afaik.
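
A minimal sketch of that Unix behavior (POSIX-only; the file name "scratch.tmp" is made up):

    // POSIX-only: unlink the scratch file immediately after creating it.
    // The name disappears, but the data stays reachable through the
    // descriptor until close(), at which point the blocks are released.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { std::perror("open"); return 1; }
        unlink("scratch.tmp");   // no other process can find it now
        write(fd, "data", 4);    // still perfectly usable
        close(fd);               // the disk blocks are released here
    }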

There is a system call for asking for "temp file storage" (not sure how cross-platform it is, maybe it's Unix-only?). Useful for things like compilers or sorting big data sets.

If you really need temp filesystem space, either fix the location at compile time, or make it configurable, e.g. by means of an environment variable. Keeping things in memory is, however, likely a better strategy if the data is not too big: it gives you fast access and doesn't disappear suddenly. It also eliminates a few attack vectors around file creation, and you don't need to clean up the mess afterwards.
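
As a rough sketch of both ideas, using the C standard library's tmpfile() (MYAPP_TMPDIR is a hypothetical variable name, only there to show the configurability part):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Optional override via an environment variable (MYAPP_TMPDIR
        // is a made-up name for illustration).
        if (const char* dir = std::getenv("MYAPP_TMPDIR"))
            std::printf("user wants scratch files under: %s\n", dir);

        // tmpfile() creates an anonymous temporary file in an
        // implementation-defined location; it is removed automatically
        // when closed or when the program exits normally.
        std::FILE* f = std::tmpfile();
        if (!f) return 1;
        std::fputs("scratch data\n", f);
        std::fclose(f);   // the file disappears here
    }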

Folders/directories on a local disk are largely a virtual structure. Files written to the temp folder effectively go to random places on the physical disk (like all new files), so using a centralized location as opposed to an application-specific one has no practical effect on disk wear.

Niko Suni

Folders/directories on a local disk are largely a virtual structure. Files written to the temp folder effectively go to random places on the physical disk (like all new files), so using a centralized location as opposed to an application-specific one has no practical effect on disk wear.

Yes, but when I install my OS on an SSD, files written to the temp folder will be saved somewhere on that SSD, right? Even when I put my games or general-purpose applications on the HDD, they will use my temp folder, which is on the SSD.

Am I mistaken?

You may not have write access to where the program is installed, e.g. trying to write in /usr/<anywhere> ain't gonna work in Unix unless your user is completely crazy. As for cleanup, ymmv on different OSes. Unix allows deleting a file even while it's still open and being used (the disk blocks are released once you close the file). Windows doesn't, afaik.

Oh, thanks for the insight : ) I will definitely need to consider where to save my games then (unrelated to this topic, but it got me thinking).

There is a system call for asking for "temp file storage" (not sure how cross-platform it is, maybe it's Unix-only?). Useful for things like compilers or sorting big data sets. If you really need temp filesystem space, either fix the location at compile time, or make it configurable, e.g. by means of an environment variable. Keeping things in memory is, however, likely a better strategy if the data is not too big: it gives you fast access and doesn't disappear suddenly. It also eliminates a few attack vectors around file creation, and you don't need to clean up the mess afterwards.

Ah, thanks again!

About keeping things in memory: I agree. This was more about curiosity regarding the temp folder, which a lot of applications use.

On another note, some text-editor applications create temporary files while editing the file. Why, when they cannot be sure that this is possible/allowed? LibreOffice does, as far as I know.

Do they simply try, and on failure fall back to the temp directory?

It is actually possible to move the temp directory to a different disk. On Windows, this is a registry setting (though I don't remember the exact path right now). I know that the tmp directory can also be remapped in Linux.

Do note that some applications do not respect the OS setting and will try to write temp files in a semi-hardcoded path anyway.
Also, consider that temp files may not take physical disk space unless they are actually flushed.

Niko Suni

Go into the advanced system properties in "Computer" (or whatever they call "My Computer" now), open "Environment Variables", and look for two variables called TEMP and TMP; change them to a location on your non-SSD drive.

Note this may slow down your system, and it won't affect the user's temp dir, which is within the user profile.

Hope this helps!

On Windows there are a number of different temp folders.

C:\temp and C:\Windows\temp still exist but for backwards compatibility only. You should definitely not be hard-coding usage of these anywhere.

Windows also has a per-user temp folder, which is what the %temp% environment variable expands to. On a default setup this is going to be in C:\users\username\AppData\Local\Temp, but once again you should never rely on it being there. Use the correct API call instead.
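
For example, a minimal Windows-only sketch using GetTempPathW, which is one such API call:

    #include <windows.h>
    #include <iostream>

    int main() {
        // Ask the OS where the per-user temp directory actually is,
        // instead of hard-coding C:\temp or expanding %temp% yourself.
        wchar_t path[MAX_PATH + 1];
        DWORD len = GetTempPathW(MAX_PATH + 1, path);
        if (len == 0 || len > MAX_PATH) return 1;  // failed or truncated
        std::wcout << L"temp dir: " << path << L"\n";
    }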

Yes, but when I install my OS on an SSD, files written to the temp folder will be saved somewhere on that SSD, right? Even when I put my games or general-purpose applications on the HDD, they will use my temp folder, which is on the SSD.


Not everybody is on an SSD + HDD setup.

Some of us are SSD only.

Some of us are HDD only.

Some of us are even on networks with folder redirection enabled.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

First you need to recognize that machines with this much physical memory are a very recent thing in computer history. There is an enormous body of legacy code written for smaller amounts of memory. It used to be standard procedure for every program to work with a temp folder and scratch space. It is far less common than it used to be, but many programs still do it. You can often tell the programs from the small-memory days because they can edit files larger than 2 GB. The newer stuff often has limits for files the size of available memory, or no more than 2 GB total for editors, but the older stuff can work with files of any size that is legal for the file system.

Even in modern code you may not be able to require a 64-bit architecture, so 2 GB is your maximum address space, and since your program itself takes room, about 1.5 GB is your practical maximum. Many programs go through far more memory than that. Virtual memory can work in some cases, but you still hit the limit of the virtual address space.

Temporary files are great for scratch data. When you know the data is going to exceed available memory, you've got that as an option. For bonus points you can arrange and organize the file and apply some fancy CS data structures to the on-disk data.

If you are doing something that requires huge amounts of memory -- imagine a program like Photoshop -- then a scratch area on disk works extremely well. The heavily-compressed source file is around 100-300 MB, meaning the decompressed in-memory version possibly expands to three or five or ten gigabytes or so. If the system has enough physical memory then it is fine to keep it loaded. But if the OS starts giving you messages that it is time to trim your memory usage, you can discard things or push them to scratch files.
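
A rough sketch of that trim-or-spill idea (the budget value and the names kMemoryBudget and maybe_spill are made up):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Keep a block in RAM while it fits under a hypothetical budget,
    // otherwise push it out to a scratch file and release the buffer.
    constexpr std::size_t kMemoryBudget = 512u * 1024u * 1024u;  // 512 MB

    void maybe_spill(std::vector<char>& block, std::FILE* scratch) {
        if (block.size() <= kMemoryBudget)
            return;                        // still fits, keep it in memory
        std::fwrite(block.data(), 1, block.size(), scratch);
        block.clear();
        block.shrink_to_fit();             // actually give the memory back
    }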

As for document editors, it is more frequent to keep a backup copy available, and it is also typical to use a save-and-swap pattern for data integrity reasons, just like programs often use a copy-then-swap pattern to avoid corruption from exceptions: the entire file is written out and preserved, and only then is the old file removed and the successfully-written file renamed to the original name. Having the intermediate file written as a temp file gives some benefits of constant saving without actually destroying the old file, so you can recover your work in case of a catastrophic problem like a power outage.
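
A minimal C++17 sketch of that save-and-swap pattern (safe_save and the .tmp suffix are made-up illustrations):

    #include <filesystem>
    #include <fstream>
    #include <string>

    namespace fs = std::filesystem;

    // Write the new contents to a temp file next to the original, then
    // rename it over the original. If anything fails mid-write, the old
    // file is untouched.
    void safe_save(const fs::path& target, const std::string& contents) {
        fs::path tmp = target;
        tmp += ".tmp";                                // e.g. notes.txt.tmp
        {
            std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
            out << contents;                          // flushed on scope exit
        }
        fs::rename(tmp, target);  // replaces the existing file
    }

Because the rename only happens after the write has fully succeeded, a crash mid-save leaves the original file intact.
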
On another note, some text-editor applications create temporary files while editing the file. Why, when they cannot be sure that this is possible/allowed?

The point of an editor is to change a file and write it to disk, or else the user cannot save his work.

So it naturally has space for writing files, at least in the user's home directory. You often see that in the form of .bak files, or files with other extensions, in the same directory as the file being edited. This makes sense, as it is not uncommon to have several (source) files with the same name on the disk, and you don't want all those .bak files landing in one folder then :)

Writing the temp file in the same folder also has the advantage that the user knows where to look for the file.

My vim editor even stores the full edit history on disk while I use the program, so it can restore a file to the point of the last edit after a crash or (more commonly) an unexpected system shutdown (loss of power, or someone throwing the power switch while I was working at the machine :P ). Believe me, it beats working on a RAM disk for 3 hours and then having a power outage.

Note that if your compiler supports it, you can use std::filesystem::temp_directory_path to get the temp directory in a cross-platform way.

If not, there's always Boost.Filesystem (which std::filesystem is based on), but you may or may not want that dependency.
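
For example (C++17):

    #include <filesystem>
    #include <iostream>
    #include <system_error>

    int main() {
        // Let the standard library figure out the platform's temp dir;
        // on POSIX systems this typically checks TMPDIR and friends,
        // on Windows it goes through GetTempPath.
        std::error_code ec;
        auto tmp = std::filesystem::temp_directory_path(ec);
        if (ec) {
            std::cerr << "no usable temp dir: " << ec.message() << "\n";
            return 1;
        }
        std::cout << tmp << "\n";
    }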

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight

This topic is closed to new replies.
