UML IT infrastructure

Started by
18 comments, last by Tribad 11 years, 11 months ago
I think I missed something.
What do you want to make differently than UML does?

In your very first post you used a language again. If you want a more powerful language than Java or C++ or something similar, create one.
Yeah I did not do a good job of describing what I have in mind.

The idea is defining a system that combines syndication of UML models with storage of data that complies with the model.
Both the model and the data can be accessed and modified by clients. The system would be a middleware that wraps the actual databases and removes the business logic from most programs. And the UMLMS systems would need to be able to connect to each other for syndication.

Most programs would not need to know the data structure any more. They would ask that service what model they should work with (through UMLQL) and then collect data (also through UMLQL). Model and data could be defined and altered (UMLML for instances/data, UMLDL for the model), and UMLCL would add permission management.
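None of these languages exist yet, of course; to make the idea concrete, the client-facing side of such a service might look roughly like this Java sketch (every class, method, and model string here is invented for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed middleware's client-facing API.
// All names and the model notation are invented for illustration.
class UmlModelService {
    private final Map<String, String> models = new HashMap<>();

    // UMLDL role: define or alter a model
    void defineModel(String name, String definition) {
        models.put(name, definition);
    }

    // UMLQL role: a client asks which model it should work with
    String fetchModel(String name) {
        return models.get(name);
    }
}

public class Demo {
    public static void main(String[] args) {
        UmlModelService service = new UmlModelService();
        service.defineModel("Customer", "class Customer { name: String }");
        System.out.println(service.fetchModel("Customer"));
    }
}
```

A real UMLMS would of course sit in front of a database and speak a query language rather than hand back raw strings; this only shows the model/data split.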

I imagine the specification for UMLMS could be general like that and implemented in pretty much any language and support pretty much any kind of database.

The Java implementation I mentioned was the project I have started to test the waters and to move towards a proof or disproof of concept.
And I should not have mentioned scripting without defining the context in which I think it would play a role.
The system would be low-level, and tools built on top of that middleware would need much less hardcoded business logic.

I also noticed that I broke one of my rules ... I have not quite tried to understand first (before seeking to be understood). SAP was mentioned twice. I just noticed I don't quite know how well that can be used to define business models that go beyond ... well ... business. Guess I will do some more research based on the input I already got.
Maybe you should have a look at the OpenAmeos UML-Tool.
The tool stores the meta model in a database, and the tool itself is completely independent from something like the UML-Meta-Model.
AFAIK it is based on Software-Through-Pictures from Aonix.
If you download and install the UML-Tool you get a lot of documentation about how it works and you get all scripts that make up the UML-Tool with it.
The UML-Tool has been created with a lot of scripts that define how the data should be handled.

Have a look at it. It's a fun, flexible system.

[quote]The idea is defining a system that combines syndication of UML models and storing data that is in compliance with the model.[/quote]

XML schemas, SQL tables with constraints - they are both examples of carrying structure along with the data, in a parseable form. What does moving this metadata to UML actually gain us? (keeping in mind that UML is designed for human readability, not machine comprehension)
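To make that concrete: with the standard Java validation API, a program can already check incoming data against a published XML schema (a minimal sketch with inline strings; a real schema would live in a file):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class SchemaCheck {
    // Returns true when the document conforms to the schema:
    // the structure travels with the data, in machine-checkable form.
    static boolean isValid(String xsd, String xml) {
        try {
            Validator validator = SchemaFactory
                .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(xsd)))
                .newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {   // SAXException on structural violations
            return false;
        }
    }

    public static void main(String[] args) {
        String xsd = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
            + "<xs:element name='user'><xs:complexType><xs:sequence>"
            + "<xs:element name='username' type='xs:string'/>"
            + "</xs:sequence></xs:complexType></xs:element></xs:schema>";
        System.out.println(isValid(xsd, "<user><username>alice</username></user>"));
        System.out.println(isValid(xsd, "<user><password>x</password></user>"));
    }
}
```

The schema rejects the second document because `password` is not a declared element; a UML-based model layer would have to offer something beyond this to justify itself.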

[quote]Both the model and the data can be accessed and modified by clients.[/quote]
Can you give me a real-world example where client modification of the model is beneficial? (and by beneficial, I mean worth the risk that changes will invalidate all other client software)

[quote]The system would be a middleware that wraps the actual databases and removes the business logic from most programs.[/quote]
How exactly does the existence of a published model remove the need for business logic? I may be missing something essential, but if my program relies on the username, I have the alternative of using data['username'] vs data[model['username']]. The latter allows me to more easily adapt equivalent data from a different source - but that could equally well be accomplished by a stream filter/whatever, and it in no way impacts my actual logic.
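In Java terms, the two access styles would look roughly like this (plain `Map`s standing in for the data record and the published model):

```java
import java.util.Map;

public class AccessStyles {
    public static void main(String[] args) {
        // The data record as delivered by some source, which happens
        // to call the field "user_name" rather than "username".
        Map<String, String> data = Map.of("user_name", "alice");

        // Direct access: data.get("username") would hardcode the key
        // and silently fail against this source (it returns null).

        // Model-indirected access: the published model maps the logical
        // field name to whatever key this particular source uses.
        Map<String, String> model = Map.of("username", "user_name");
        String name = data.get(model.get("username"));

        System.out.println(name);
    }
}
```

This prints `alice`, but as argued above, the same adaptation could live in a stream filter at the boundary; the indirection buys nothing for the business logic itself.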

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Maybe it is important to understand where I am coming from.

1.) As a reader of Abundance and fan of XPrize.org and Singularity University, I believe in DIY innovation, barrier-free environments and collaboration.
What bothers me is that we never think about how to pool our ideas and data. All efforts are isolated - to some degree they are brought together in universities, but that is not good enough. And we don't remove borders by applying the DEAL (definition, elimination, automation and liberation) principles. It is extremely hard to contribute to big projects. There should be better authoring tools, and those tools should play well together.
Eventually computers will need to "understand" the world ... think Turing test. The big challenges will require syndication and exchange of data (power grids, sensors - http://www.nokiasensingxchallenge.org etc.).

2.) As a programmer I hate my job right now. Too many clients for the size of the company, too many products and too many different versions of our software and products that are hard to refactor and don't play well together.
The one good thing we have is a code generator that creates stubs for forms from xml descriptions.

So I went from wondering: "why can't we also create DAOs from meta descriptions?"
to "why can't we define the whole software in an abstract way and generate most of the source code with the help of builder services?".
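As a toy illustration of that first step, emitting a DAO stub from a one-line meta description could look like this (the description format and the generated stub are invented for illustration):

```java
public class DaoStubGenerator {
    // Invented meta description format: "EntityName:field1,field2,..."
    static String generate(String meta) {
        String[] parts = meta.split(":");
        String entity = parts[0];
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(entity).append("Dao {\n");
        for (String field : parts[1].split(",")) {
            String cap = Character.toUpperCase(field.charAt(0)) + field.substring(1);
            src.append("    public String get").append(cap)
               .append("() { /* TODO: load '").append(field)
               .append("' from storage */ return null; }\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate("Customer:name,email"));
    }
}
```

Going from this to "generate most of the source code" is exactly the leap the rest of the thread argues about.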

There were many times where we hated that we were stuck with our software architecture.
We would have loved to sell all our products in one suite edition, sell a "Single session, multiple users" standalone version of a product or plugin or even sell one feature of one of our products as a cheaper product.
So I wanted a CASE tool that knows about tiers, roles, use cases, layers, classes, interfaces, functions and structograms ... and that allows users to define (capturing as data, not hardcoded) what to do with any business model in order to realize a certain kind of build.

Then I started to wonder "why can't we use the same business model that our clients use?" ... for example, share our model with the help of a model service.

UML is more than just class diagrams ... you can define what a service is supposed to look like etc.



And combining both trains of thought, I ended up thinking that a syndication of UML management systems is the way to go ... imagine most current LAMP and WAMP servers added that middleware and all different kinds of authoring-tool projects started using it.
How different are our applications really? How many use the concept of scene graphs? Would a service that knows physics and updates a generic scene graph, streamed from and then back to a client, make sense?

Go from there ... and I think the questions that arise are just small details:
* render software as binary builds?
* turn OS into a browser that can deploy any application in any way with generated scripts?
* ...

And we would not even have to share all data if that is what you are afraid of now.
A new program could embed a UML management system and establish a P2P connection to the same program running on a friend's computer.
They could exchange instances for a model they fetch from the cloud without sharing those instances with anybody else.
Companies could still develop proprietary models by working with their intranet cloud and not sharing their additions to the public model.
The driving force behind this combination of naivety and basic MDA/SOA delusion is the OP's bad experience with a badly designed software product line, where copy&paste reigns supreme and tools like a generator of "stubs" encourage massive duplication of write-only and very boring code.

Improving this situation is a laudable intent, but if the current mess is considered "hard to refactor" it means that it's already exceeding the technical ability of the whole company: hoping to make a sophisticated CASE tool that understands and automates the whole product line design is obviously more difficult than a more gradual and simple redesign effort, and therefore it cannot be considered anything but wishful thinking.

Serious software engineering would consider the difficulty of writing such a CASE tool (high, and higher than alternatives), its cost (obviously in the bet-the-company range), its risk of failure (horrible, as it is only useful if complete and flawless), its provided value (limited and uncertain) and figure out something more humble and practical, like a component-based plan that starts by imposing some order on existing code and replaces modules with better designed and more configurable ones over time.

But naive software engineering is able to neglect practical constraints (like time, money and skill), turn a blind eye to difficulties (for example, assuming that UML is useful) and dream of far less realistic systems: instead of a tool for the specific sort of products the OP's company is making, which could contain a lot of valuable complex design and genuine knowledge of "business models", a silver bullet tool for everything, able to model and understand generic processes: it would be so generic that it would go full circle to a combination of programming language and runtime platform, like e.g. Java, and it would be no better than the currently vague language design and libraries it would consist of.

The final layer of delusion is a compound free lunch expectation: the AI will put together working software that satisfies requirements without supervision; from "business models" which will be simple to write; sparing programming effort by reusing other people's "syndicated" models and data whenever possible. It goes without saying that all three points range from completely undemonstrated to fantastic.

Omae Wa Mou Shindeiru

It would not be one silver-bullet tool. UML model syndication would be one addition to our existing infrastructure. Like I said, any language could use that service if it can use a database system.

It would be fully scalable ... every person and enterprise could decide how to use it - in other words, we could actually decide to go for a humble approach. These days we need to employ specialists and come up with a model that has already been defined millions of times by other groups and individuals ... I don't see what's humble about that.

Through divide and conquer, things would become simpler, not more complex or complicated.

[quote]It would be fully scalable ... every person and enterprise could decide how to use it - in other words, we could actually decide to go for a humble approach. These days we need to employ specialists and come up with a model that has already been defined millions of times by other groups and individuals ... I don't see what's humble about that.[/quote]


I've got to ask, have you actually developed software AND the related processes for a big organization, and seen it through to production/roll-out?

Many similar subprocesses have been defined formally or informally by many organizations, yes. But the devil is in the details, and process details depend on their context. Working with processes, you have to make sure they integrate at all levels of abstraction, which means you can't design/define them in isolation. Even a single human intelligence isn't enough. For instance, I've seen banks overhauling processes: they have large teams devoted to the "paper designs" for years, and they're in constant debate with the rest of the organization about how everybody and every system are to work together.


[quote]Through divide and conquer, things would become simpler, not more complex or complicated.[/quote]

Which means: Don't try to design a single system/process/framework to solve everything! ;-)
Designing a single piece of software is not the plan. The plan is offering a shared infrastructure that provides the best way to bring carefully designed software components together - as a "seamless software user experience".
Those software components would have to be designed in an abstract form in order to allow different kinds of deployments - and I see a lot of room for automation and elimination that would automatically be exposed with that kind of infrastructure.

Yeah I have worked in a big company where I saw what different teams (support, development and operating departments) struggle with. And I currently work in a small company as a programmer where I have witnessed and contributed to many projects and development cycles.

And I have come to the conclusion that teams and individuals constantly struggle with problems, and have to answer questions, that should not have come up in the first place.
What you are trying to do is create a language to define abstract components and give them some parameters to fit a problem.

That is what any good software engineer does.

If I want to count the number of children crossing the street, I can use the abstract code fragment that increments a variable whenever a child crosses the street.
But to make that possible you need someone, the software engineer, who is able to do the abstraction. The software engineer identifies the requirements for the software, takes the code fragment for such a counting loop out of the database named "brain", and uses it.
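That abstraction, in Java, is just a generic counting loop plus a problem-specific condition (ages stand in for the "children crossing the street" events here):

```java
import java.util.List;
import java.util.function.Predicate;

// The engineer's abstraction of "count things that satisfy a condition":
// the generic loop is reusable; only the predicate is problem-specific.
public class Counter {
    static <T> int count(List<T> events, Predicate<T> condition) {
        int n = 0;
        for (T event : events) {
            if (condition.test(event)) {
                n++;   // increment whenever the condition holds
            }
        }
        return n;
    }

    public static void main(String[] args) {
        // Pedestrians reduced to their ages, for illustration.
        List<Integer> pedestrianAges = List.of(7, 34, 9, 51, 12);
        int children = count(pedestrianAges, age -> age < 18);
        System.out.println(children);   // 3
    }
}
```

Choosing to reduce the problem to "ages under 18" is exactly the abstraction step a human performs; the loop itself is the easy part.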
So what you are trying to do here is teach a piece of software the interpretation/abstraction of any requirement that may come to mind and let the software decide how to solve it.

That is A.I. at its best.

How often software engineers struggle depends on the experience they have.

My opinion.

This topic is closed to new replies.
