ToohrVyk

Name this pattern


This is an approach I have seen used in ASP.NET 2.0 and several other places, and I've also found a use for it in one of my compilers. What is its name?

The approach consists of delaying an action on an object until the point when the object becomes known. For example, in my compiler, I will usually read a function call, and only some time later read the function's declaration. The usual way of handling this is a second pass, once all functions are defined, to check that every called function actually exists. The approach I use instead consists of saying "check that function FOO exists". This sets an "error" flag which, if it is still present at the end of processing, results in an error. As soon as the FOO function is defined, all function calls referencing it are automatically updated: the number and types of arguments are checked, and so on.

When applied to sets of objects, this becomes even more useful. For instance, I could decide that for each type constraint on the arguments of FOO, I want to perform some simple operation (typically propagating that type constraint locally). When I read a function call, the function may have already been partially defined, so half its constraints are present and the other half isn't yet. "For each constraint" would then execute the operation immediately for the existing constraints, and execute it for the rest as they are added.

This is similar to what the ASP.NET 2.0 framework does: when you add a Button to a CompositeControl, even long after all events have been handled, any events that were intended to be activated on the button are activated immediately, as if the button had been there from the very beginning.
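A minimal Python sketch of the deferred-check idea described above (all names here are my own invention, not taken from any real compiler): callers register actions against a symbol; if the symbol is already defined, the action runs immediately, otherwise it is queued and fires the moment the definition arrives, and anything still queued at the end is an error.

```python
class SymbolTable:
    def __init__(self):
        self.defined = {}   # name -> declaration info
        self.pending = {}   # name -> actions waiting on that name

    def on_defined(self, name, action):
        """Run `action(decl)` now if `name` is known, else defer it."""
        if name in self.defined:
            action(self.defined[name])
        else:
            self.pending.setdefault(name, []).append(action)

    def define(self, name, decl):
        """Declare `name` and flush every action waiting on it."""
        self.defined[name] = decl
        for action in self.pending.pop(name, []):
            action(decl)

    def unresolved(self):
        """Names that were referenced but never defined."""
        return sorted(self.pending)


table = SymbolTable()
calls = []

# A call to foo() is read before its declaration exists.
table.on_defined("foo", lambda decl: calls.append(("foo", decl["arity"])))

# Later the declaration arrives; the queued check fires automatically.
table.define("foo", {"arity": 2})

# A call to bar() whose declaration never shows up.
table.on_defined("bar", lambda decl: None)

print(calls)               # [('foo', 2)]
print(table.unresolved())  # ['bar']
```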

I think it's reflection you're referring to, although I haven't used it outside of a Java programming class.

With it you can instantiate a class at runtime using an interface and check whether a function exists.

I'm not certain that I've correctly understood your post, but it reminds me a bit of how one can solve such problems in Oz using dataflow variables and threads.

In case you're not familiar with Oz's dataflow variables, the concept is simple: you create a variable, which is initially unbound. Using = you can bind it via a unification algorithm à la Prolog (automatic destructuring, etc.): if it is still unbound, it's bound to the new value; if it's already bound, the binding only succeeds when the values are equal, and otherwise raises an exception. If, however, you try to use the variable's value when it has not been bound yet, the thread will block until it is (in the hope that another thread will make the binding for you).

Using this, I would create a "skeleton" data structure consisting of a number of unbound fields. Each constraint would be put in a separate thread (threads are cheap in Oz), which means the constraint-checking logic would automatically run as soon as the necessary fields in the data structure become bound. You could easily have it work with constraints that work both ways: if you've got A, then you can deduce B, and vice versa. You'd just create two different threads, one for A=>B and one for B=>A; one of them will run first, and the other will just check.

In the case of a compiler accumulating pieces of information as it goes, you would just create new such data structures and constraint threads while going through the code, fill in the structures with the appropriate values, and the system will take care automatically of propagating constraints in the background. At the end of the run, everything should automatically be filled in.
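For readers without Oz at hand, the block-until-bound behaviour can be roughly simulated in Python with a thread and an event. `DataflowVar` and the constraint below are illustrative stand-ins, not the Oz API, and unification is reduced to a plain equality check:

```python
import threading

class DataflowVar:
    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def bind(self, value):
        if self._event.is_set():
            if self._value != value:   # already bound: values must agree
                raise ValueError("unification failure")
        else:
            self._value = value
            self._event.set()

    def get(self):
        self._event.wait()             # block until some thread binds us
        return self._value


a, b = DataflowVar(), DataflowVar()
results = []

# Constraint thread: as soon as `a` is known, deduce `b` (here b = a + 1).
def constraint():
    b.bind(a.get() + 1)

t = threading.Thread(target=constraint)
t.start()

a.bind(41)       # binding `a` wakes the blocked constraint thread
t.join()
results.append(b.get())
print(results)   # [42]
```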

I have to admit, I've no idea how efficient this would be, but I certainly like the concept of it. And if I were to give it a name, I'd call it "Komk'onfretenOz".

Guest Anonymous Poster
Without having read more than the first line of the second paragraph: could you happen to be referring to "lazy evaluation" (Wikipedia)?

Your description sounds more like a subsystem than a single pattern. You need to keep a collection of the dependents (which is an "observer" pattern), and also some condition for evaluation (typically, a "future" of some sort). And, yes, you can "observe" some concept that doesn't (yet) have a concrete realization -- you're in fact observing the realization thereof.

I'd say for sure that it's not "reflection" and it's not "lazy evaluation" though. Lazy evaluation means you don't evaluate until you're asked for the value of something, but the point in the original post is that you're actually asked for the status, but you decide to defer until you actually know, and add an observable link back to the requester.

Quote:
Original post by iminyourbrain
I think it's reflection you're referring to, although I haven't used it outside of a Java programming class.

With it you can instantiate a class at runtime using an interface and check whether a function exists.


All .NET languages make heavy use of reflection, and it's a pretty damned powerful tool.

To a limited degree, C++ can have the same ability. Google RTTI, or Run-Time Type Information. It allows you to look at the methods and properties of an object at runtime instead of compile time, allowing you to do all kinds of late-binding tricks.
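The thread's examples are .NET reflection and C++ RTTI; for comparison, the same kind of runtime lookup is built into Python. The `Button` class below is just an invented stand-in:

```python
class Button:
    def click(self):
        return "clicked"

obj = Button()

# Runtime introspection: does this object expose a callable `click`?
has_click = hasattr(obj, "click") and callable(getattr(obj, "click"))
print(has_click)   # True

# Late binding: look the method up by name and invoke it.
result = getattr(obj, "click")()
print(result)      # clicked
```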

I'm not sure that's what he is talking about, though.

Quote:
Original post by hplus0603
Lazy evaluation means you don't evaluate until you're asked for the value of something, but the point in the original post is that you're actually asked for the status, but you decide to defer until you actually know, and add an observable link back to the requester.
Sounds like declarative concurrency to me.

This means: spawn off some function and let it do its work, leaving you a placeholder. When it is complete, the placeholder evaluates to the result of the function. Futures and Oz's logic variables are different forms of these placeholders.

If this is what you mean, search for "call by future".
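As a concrete illustration, Python's standard library ships futures directly: submitting work returns a placeholder immediately, and asking for the result blocks until the computation finishes. `expensive` is just a placeholder computation:

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    return n * n

with ThreadPoolExecutor() as pool:
    placeholder = pool.submit(expensive, 7)   # returns at once
    # ... other work can happen here while `expensive` runs ...
    value = placeholder.result()              # blocks until ready

print(value)  # 49
```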

Quote:
Original post by Rebooted
This means: spawn off some function and let it do its work, leaving you a placeholder. When it is complete, the placeholder evaluates to the result of the function. Futures and Oz's logic variables are different forms of these placeholders.

If this is what you mean, search for "call by future".


That was it. Thank you.

I didn't know it had a name, but I apparently use declarative concurrency in data flow analysis for my decompiler.

Here's the scenario: input code is arranged in 'basic blocks', which are in turn held within functions (which are technically unnecessary at this stage).

Inside each block are any number of Instructions which mostly perform read/write operations. The main task of data flow analysis is to figure out which read instructions depend on which write instructions and create links between them.

The naive algorithm is to process each block, recording the last instruction responsible for writing to each register or memory location (I only care about registers at the moment).

This works great until a loop is encountered. Because the instructions inside the loop can write to registers that are then read again in earlier parts of the next iteration of the loop, the entire loop has to be repeatedly processed until no new links are discovered. This is pretty inefficient, especially when the loop calls several functions.
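The "repeatedly process until no new links are discovered" behaviour can be sketched as a tiny fixpoint iteration in Python. The three-block CFG and register names below are invented, and kill sets are ignored for brevity; the point is that the self-loop forces an extra pass:

```python
# Block -> list of predecessors ("loop" feeds back into itself).
preds = {"entry": [], "loop": ["entry", "loop"], "exit": ["loop"]}
# Block -> registers it writes.
writes = {"entry": {"r1"}, "loop": {"r2"}, "exit": set()}

reaching_out = {b: set() for b in preds}   # writes live at each block's exit
changed, passes = True, 0
while changed:
    changed, passes = False, passes + 1
    for block in preds:
        incoming = set().union(*(reaching_out[p] for p in preds[block])) \
            if preds[block] else set()
        out = incoming | {(block, r) for r in writes[block]}
        if out != reaching_out[block]:
            reaching_out[block] = out
            changed = True

print(passes)                  # 2: the loop edge needs a second pass
print(reaching_out["exit"])    # writes from both "entry" and "loop" reach exit
```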


On my second try, I devised the following scheme: Every block defines its input data flow in terms of its predecessor(s)' output, then evaluates each local instruction and produces an output data flow expression which still might have references to the predecessors (or not). This avoids having to "loop until no changes detected" -- I don't have to waste memory storing what the last iteration discovered.

After each block figures out its input and output expressions, I perform what I called "flattening" on the expressions: each expression is simplified to remove all block references, replacing them with the output equation of the referenced block. This is extremely complicated when taking cross-function data flow into account, but is MUCH faster than the previous method, and uses very little memory in comparison.
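A toy sketch of that substitution step in Python. The expression representation here is invented and far simpler than what cross-function analysis would need: an output is either a concrete write or a reference to a predecessor block's output, and flattening just chases references until a write is found.

```python
# Output expressions per block and register:
#   ("write", block, reg) -- a concrete write in that block
#   ("pred", block)       -- "same as that predecessor's output"
out_expr = {
    "b1": {"r1": ("write", "b1", "r1")},
    "b2": {"r1": ("pred", "b1"), "r2": ("write", "b2", "r2")},
    "b3": {"r1": ("pred", "b2"), "r2": ("pred", "b2")},
}

def flatten(block, reg):
    """Chase predecessor references until a concrete write is found."""
    expr = out_expr[block][reg]
    while expr[0] == "pred":
        expr = out_expr[expr[1]][reg]
    return expr

print(flatten("b3", "r1"))  # ('write', 'b1', 'r1')
print(flatten("b3", "r2"))  # ('write', 'b2', 'r2')
```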

