Note that changing code at run time is generally possible, but the ease or difficulty of doing so depends on what programming language you are using. For dynamic languages like Ruby, replacing function definitions at run time is essentially trivial. For compiled languages, you have to do evil things like disabling data-execution protection and rewriting code pages.
FWIW, there is a "slightly less evil" strategy of making use of function pointers.
the basic idea being that the logic is mostly broken down into pieces ("basic operations"), with structs and function pointers used to glue everything together. as things change around, the various structures and function pointers can be swapped around, ...
when executed, the logic can jump fairly directly from one operation to the next, essentially walking the structure.
if done well, this can be fairly flexible and actually a fair bit faster than "general purpose" logic (long chains of complex if/else branches or switches).
the drawback, however, is that it is typically much more bulky and nasty-looking, and potentially difficult to understand and debug.
generally, I think reorganizing code where possible is preferable to resorting to something like this.
generally, in my case, it is mostly confined to use in my scripting VM and dynamic type-system and object-system facilities (*1).
I had a few times tried using it in image codecs, but typically it is faster/easier just to write special-case versions of functions (say: "hey, we are using 4:2:0 subsampling, YCbCr, and an RGBA image buffer" -> use a version of the logic specialized for exactly this case, falling back to the general case when encountering an unexpected combination of parameters).
the tradeoff is that specialized versions of code also contribute a fair bit of bulk, and are inherently "not really all that flexible", since each only addresses a few specific parameters (say, with 3 subsampling modes, 4 color-space transforms, and 4 image-buffer layouts, naively this would mean 48 versions of the image transform, which is impractical; so usually only 2 or 3 "very common" cases get special handling).
sometimes a "mix and match" compromise is possible, with some special-casing of the logic, mixed with the use of function-pointers in other places.
going beyond this, there is crafting specialized machine-code sequences at run time, which can execute faster but opens up a lot of issues (namely portability and complexity). however, machine code can accomplish a few things which function pointers can't (it is free from the constraints of the language and ABI, ...).
direct self-modifying code and writing into compiler-generated code generally seems like a bad idea IMO.
*1: actually, they tend to use a mixture of structs and function pointers, and directly-generated native code.
in some tests (for script code), a straight-C interpreter route has gotten within about 10x of native, and about 2x-3x of native C for cases where a form of "call-threaded code" is used (the "native" code being a sequence of calls to C functions implementing the logic), with occasional operations generated directly as machine code (typically things like variable loads/stores, arithmetic operations, and similar).
typically, this is good enough.
a much more "advanced" strategy is to use (or implement) a full compiler, but this is another level of pain and adds its own costs.
using such a code generator for the original problem though would likely be severely overkill.
or such...