Imagine having a giant map where the tool is meant to show pathfinding data, but can only load parts of the navmesh at a time. There are lots of paths you could find/see, but there are also some paths that couldn't be found/displayed. The results would still be useful but incomplete, and sometimes those missing pieces are exactly what matters.
In this situation I would say you need hierarchical pathfinding; if you attempt to use a flat navmesh that takes several GB of memory to represent, you're in for a bad time.
If that's not your actual problem space, well, the advice still applies: you're trying to brute-force a problem that almost assuredly has elegant algorithmic solutions that use far fewer resources and are amenable to streaming or other dynamically loaded setups.
It's not my actual problem domain, no, and we don't have control over the input format.
Imagine an undirected graph where nodes are stored in some arbitrary order, and the position of each node (except the first) is stored as an offset from a node that has appeared earlier in the list (it might be the previous node, or it might be a node 20 steps back). The list can only be traversed forward. Then there's a list of edges, where the first node is referenced by its final position and the other node by its relative index in that first list. The edge list can also only be traversed forward. The entire file might even be compressed, preventing seeking within it. (Edit: the format ends up this way because the data is generated in an online, performance-sensitive environment and processed in an offline environment, so the strategy is pretty much "throw everything needed in there and let the offline processor figure it out".)
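To make that a bit more concrete, here's a stripped-down sketch of what reading something like that looks like. The names and exact binary layout are invented for illustration (not our real format); the point is that a node's position can reference any earlier node, so a forward-only reader can't safely evict anything it has already read.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

struct Node { public float X, Y; }

static class Demo
{
    // Each node record: (backRef, dx, dy). backRef == 0 means (dx, dy) is an
    // absolute position (used for the first node); otherwise the position is
    // relative to the node 'backRef' entries earlier in the list.
    static List<Node> ReadNodes(BinaryReader r, int count)
    {
        var nodes = new List<Node>(count);
        for (int i = 0; i < count; i++)
        {
            int backRef = r.ReadInt32();
            float dx = r.ReadSingle(), dy = r.ReadSingle();
            if (backRef == 0)
            {
                nodes.Add(new Node { X = dx, Y = dy });
            }
            else
            {
                Node baseNode = nodes[i - backRef]; // may point arbitrarily far back
                nodes.Add(new Node { X = baseNode.X + dx, Y = baseNode.Y + dy });
            }
        }
        return nodes;
    }

    // Each edge record: first endpoint by its final index in the node list,
    // second endpoint as an index relative to the first.
    static IEnumerable<(int A, int B)> ReadEdges(BinaryReader r, int count)
    {
        for (int i = 0; i < count; i++)
        {
            int a = r.ReadInt32();
            int relative = r.ReadInt32();
            yield return (a, a - relative);
        }
    }

    static void Main()
    {
        // Tiny in-memory example: 3 nodes, 2 edges.
        var ms = new MemoryStream();
        var w = new BinaryWriter(ms);
        w.Write(0); w.Write(1f); w.Write(1f);   // node 0 at (1,1), absolute
        w.Write(1); w.Write(2f); w.Write(0f);   // node 1 = node 0 + (2,0)
        w.Write(2); w.Write(0f); w.Write(3f);   // node 2 = node 0 + (0,3)
        w.Write(2); w.Write(1);                 // edge: node 2 -- node 1
        w.Write(1); w.Write(1);                 // edge: node 1 -- node 0
        ms.Position = 0;

        var r = new BinaryReader(ms);
        var nodes = ReadNodes(r, 3);
        foreach (var (a, b) in ReadEdges(r, 2))
            Console.WriteLine($"({nodes[a].X},{nodes[a].Y}) -- ({nodes[b].X},{nodes[b].Y})");
    }
}
```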
It's easy to work with if you can load the full set and create a linked structure, but trying to work with a partial set or stream the data from disk would be incredibly annoying and/or inefficient. Perhaps we could do a pre-pass and convert it to another format, but that conversion would be difficult and time-consuming, not to mention it would use storage on the user's system. That's fine for 10 MB data sets, but suboptimal for 10 GB ones.
Also... who are your users? If this is a quick fix you're looking for to act as a band-aid until a fundamentally better solution can be worked out, and if your users are internal or expected to have an IT budget, the cheapest, most expedient solution might be "buy more RAM" or "rent some time on a big VM in the cloud".
Our users are developers, as in programmers. We want to tell them to get more RAM, but we need to not crash their systems first.
A 4 GB system might be able to handle something like 85% of all input sets, a 16 GB system might handle 97%, etc., so I don't want to impose an arbitrary system requirement of 16 GB of RAM or similar.
There might be something buried in the Windows Management Instrumentation (WMI) system that you can access from C# that gives you enough information to make that kind of decision. I haven't used it very often, nor for memory specifically, but I suspect what you're looking for will be in there if it's anywhere.
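For instance, the Win32_OperatingSystem class carries free/total physical memory figures (reported in kilobytes). A quick sketch, assuming a reference to System.Management (the System.Management NuGet package on .NET Core, Windows only):

```csharp
using System;
using System.Management;

class MemoryInfo
{
    static void Main()
    {
        // Query physical memory figures via WMI; Win32_OperatingSystem
        // reports these values in kilobytes.
        var searcher = new ManagementObjectSearcher(
            "SELECT FreePhysicalMemory, TotalVisibleMemorySize FROM Win32_OperatingSystem");

        foreach (ManagementObject os in searcher.Get())
        {
            ulong freeKb = (ulong)os["FreePhysicalMemory"];
            ulong totalKb = (ulong)os["TotalVisibleMemorySize"];
            Console.WriteLine($"Physical: {freeKb / 1024} MB free of {totalKb / 1024} MB");
        }
    }
}
```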
We are looking at this right now, but I think we might be leaning towards PerformanceCounters. My problem is kind of which metric to look at. For example, if I look at available physical memory, Windows might swap things out so that there's always some physical memory free. If I look at total available system memory, then I assume it includes swap space in some form, which again means Windows might perform tricks like growing the swap file to always have some memory available. Do I want to include swap space? I have no idea. It feels like there are lots of different scenarios I need to account for.
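For reference, what we're poking at looks roughly like this (just a sketch; which of these counters is actually the right one to trust is exactly the open question):

```csharp
using System;
using System.Diagnostics;

class MemoryProbe
{
    static void Main()
    {
        // "Available MBytes" is physical memory the OS considers available;
        // "Committed Bytes" / "Commit Limit" include the page file, which is
        // where the "do I count swap space?" question comes in. Windows only.
        using var availableMb = new PerformanceCounter("Memory", "Available MBytes");
        using var committed = new PerformanceCounter("Memory", "Committed Bytes");
        using var commitLimit = new PerformanceCounter("Memory", "Commit Limit");

        Console.WriteLine($"Available physical: {availableMb.NextValue()} MB");
        Console.WriteLine($"Commit: {committed.NextValue() / (1024 * 1024):F0} MB " +
                          $"of {commitLimit.NextValue() / (1024 * 1024):F0} MB limit");
    }
}
```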