
# Why is XML all the rage now?

65 replies to this topic

### #41 Washu - Senior Moderators - Reputation: 7590

Posted 03 August 2013 - 07:13 PM

> Plenty of talented people out there are still trying to do the best they can, but don't get a chance because higher up the chain "good enough" is what they want, and it's on to the next feature. I've lost track of the number of things I've had to check in where I know I could have improved them, but the time wasn't there because "feature Y" now needs to be done in a week. You fight the battle; sometimes you win, and more often than not you lose.

They don't even really want "good enough." They want "Runs for me in the sales demo."

Having had to deal with those kinds of people for a long time, I can honestly say that I've never "lazied" code. But I have fudged it and written shit just to get it "working", and had to leave that code behind. Feature creep is damn annoying, and happens on all projects. Having features change entirely? Happens all the time. Having to have stuff done LAST WEEK that was only brought up TODAY? Yep. Happens all the time.

Decent programmers aren't lazy, just swamped with a hundred other things on their plate.

Edited by Washu, 03 August 2013 - 07:15 PM.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.
ScapeCode - Blog | SlimDX

### #42 Sik_the_hedgehog - Crossbones+ - Reputation: 2671

Posted 03 August 2013 - 08:15 PM

Being lazy isn't necessarily a bad thing: it can lead to the programmer writing simpler code for the sake of doing less work (both now and later, during maintenance). The real problem is incompetence (seriously, most horrible code actually takes a lot of effort to write, so it's hard to argue it's lazy), and in some cases not knowing the implications of what the code does, so that things are done in a suboptimal way (high-level languages can sometimes get rather bad about this).

But yeah, incompetent managers are way too common and a serious source of creep. Or maybe they think that by driving too hard they can get higher pay or something. Or possibly both. (It depends on who's in charge, really, as well as company culture.)

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

### #43 samoth - Crossbones+ - Reputation: 8209

Posted 04 August 2013 - 04:54 AM

> Being lazy isn't necessarily a bad thing: it can lead to the programmer writing simpler code for the sake of doing less work (both now and later, during maintenance). The real problem is incompetence (seriously, most horrible code actually takes a lot of effort to write, so it's hard to argue it's lazy), and in some cases not knowing the implications of what the code does, so that things are done in a suboptimal way (high-level languages can sometimes get rather bad about this).

There's a lot of truth in this, in my opinion, not only in respect of "managers" but also in respect of the original topic, "why use XML".

Being "lazy" by doing less work means nothing but showing competence in using the available work time. That's at least true as long as the end-user-observable result is identical (which is the case here).

Now, XML may be unsuitable for your task; then you should indeed use something different (for example, I would not use it to serialize data that goes over the network, even though even that "works fine", as has been proven). But on the other hand, it might just be good enough, with no real and serious disadvantage other than being less pretty than you'd like. You have working libraries that you know by heart to handle the format, it plays well with your revision-control system, and in the final product it's either compiled into a binary format anyway or the load time doesn't matter. Maybe you don't like one feature or another, but seriously, so what?

In the rather typical case of "no visible difference in the end product", one needs to ask which shows more competence: using something that works, or investing extra time so one can use something that... works.

### #44 BGB - Crossbones+ - Reputation: 1562

Posted 04 August 2013 - 11:56 AM

> > Being lazy isn't necessarily a bad thing: it can lead to the programmer writing simpler code for the sake of doing less work (both now and later, during maintenance). The real problem is incompetence (seriously, most horrible code actually takes a lot of effort to write, so it's hard to argue it's lazy), and in some cases not knowing the implications of what the code does, so that things are done in a suboptimal way (high-level languages can sometimes get rather bad about this).
>
> There's a lot of truth in this, in my opinion, not only in respect of "managers" but also in respect of the original topic, "why use XML".
>
> Being "lazy" by doing less work means nothing but showing competence in using the available work time. That's at least true as long as the end-user-observable result is identical (which is the case here).
>
> Now, XML may be unsuitable for your task; then you should indeed use something different (for example, I would not use it to serialize data that goes over the network, even though even that "works fine", as has been proven). But on the other hand, it might just be good enough, with no real and serious disadvantage other than being less pretty than you'd like. You have working libraries that you know by heart to handle the format, it plays well with your revision-control system, and in the final product it's either compiled into a binary format anyway or the load time doesn't matter. Maybe you don't like one feature or another, but seriously, so what?
>
> In the rather typical case of "no visible difference in the end product", one needs to ask which shows more competence: using something that works, or investing extra time so one can use something that... works.

yes.

this is partly a reason behind the current funkiness of using both XML and S-Expressions for a lot of stuff...

a lot comes back to my interpreter projects, as most of the other use-cases had been "horizontal outgrowths" of these, and most unrelated systems had typically ended up using line-oriented text-files (partly because, in simple cases, these tend to be the least implementation effort).

note that in my case, JSON is mostly treated as a horizontal side-case of the S-Expression system (different parser/printer interface, but they map to the same basic underlying data representations).

the main practical difference then (in-program) is the dominance of types:

S-Expression data primarily uses lists, and generally avoids maps/objects (non-standard extension);

JSON primarily uses maps/objects, and avoids lists, symbols, keywords, ... (non-standard extensions).

secondarily, this also means my "S-Expression" based network protocol naturally handles serializing JSON style data as well (it doesn't really care too much about the differences).

for largish data structures, the relative costs of the various options tend to weigh in as well, and (in my case) objects with only a few fields tend to be a bit more expensive than using a list or array (though objects are a better choice if there are likely to be a lot of fields, or if the data just naturally maps better to an object than to a list or array).

this leads to a drawback for JSON in this case: by convention it tends to rely fairly heavily on these more-expensive object types, and for my (list-heavy) data-sets it also tends to produce slightly more "noisy" output (lots of extra/unnecessary characters). both formats can be either "dumped" or printed with formatting.
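to make the "noisy output" point concrete, here is a tiny illustrative sketch (Python, not code from any of the projects above) that dumps the same list-heavy data both ways and compares sizes:

```python
import json

def to_sexpr(x):
    # Dump nested Python lists and atoms as an S-expression string.
    if isinstance(x, list):
        return "(" + " ".join(to_sexpr(e) for e in x) + ")"
    return str(x)

rows = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
s = to_sexpr(rows)    # "((1 2 3) (4 5 6) (7 8 9))"
j = json.dumps(rows)  # "[[1, 2, 3], [4, 5, 6], [7, 8, 9]]"
# the list-heavy data picks up extra brackets and commas in JSON
```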

brief history (approximately of the past 14 years):

at one point, I wrote a Scheme interpreter, and it naturally uses S-Expressions.

later on, this project partly imploded (at the time, the code became mostly unmaintainable, and Scheme fell a bit short in a few areas).

by its later stages, it had migrated to a form of modified S-Expressions, where essentially:

macros were expanded; built-in operations used operation-numbers rather than symbols; lexical variables were replaced with variable-indices; ...

there was also a backend which would spit out Scheme code compiled to globs of C.

elsewhere, I had implemented XML-RPC, and a simplistic DOM-like system to go along with it.

I had also created a type-system initially intended to assist with data serialization, and partly also to add some dynamic type-facilities needed to work effectively with XML-RPC. pretty much all types were raw pointers to heap-allocated values, with an object header just before the data, and it was initially separate from the memory manager (later on, they were merged). (in this system, if you wanted an integer value, you got an individually-allocated integer, ...).

later on, I implemented the first BGBScript interpreter (BS.A) (as a half-assed JavaScript knock-off), using essentially a horridly hacked/expanded version of the XML-RPC logic as the back-end (it was actually initially a direct interpreter working by walking over the XML trees, and was *slow*...). later on, it had sort of halfway moved to bytecode, but in a lame way (it was actually using 16-bit "word code", and things like loops were handled using recursion and stacks). the type-system was reused from the above. (it also generated garbage at an absurd rate... you couldn't do "i++;" on an integer variable without the thing spewing garbage... making it ultimately almost unusable even for light-duty scripting...).

the second BGBScript interpreter (BS.B) was built mostly by copying a bunch of the lower-end compiler and interpreter logic from the Scheme interpreter, and essentially just using a mutilated version of Scheme as the AST format, while retaining a fairly similar high-level syntax to the original. it used ("proper") bytecode up-front, and later experimented with a JIT. it ended up inheriting some of the Scheme interpreter's problems (and notably problematic was the use of precise reference-counted references from C code, which involved a lot of boilerplate, pain, and performance overhead).

the C compiler sub-project mostly used the parser from BS.A and parts of the bytecode and JIT from BS.B. it kept the XML-based AST format. this sub-project turned out to have a few ultimately fatal flaws (though some parts remain and were later re-purposed). this fate also befell my (never completed) Java and C# compiler efforts, which were built on the same infrastructure as the C compiler.

the 3rd BGBScript interpreter (BS.C) was basically just a reworking of BS.B onto the type-system from BS.A, mostly as it was significantly less of a PITA to work with. this resulted in some expansion of the BS.A type-system (such as to include lists and cons cells, ...). (and, by this time, some of the worse offenses had already been cleaned up...).

the changes made broke the JIT in some fairly major ways (so, for the most part, it was interpreter only).

the BS VM has not undergone any single major rewrite since BS.C, but several notable internal changes have been made:

• migration of the interpreter to threaded code;
• migration of the interpreter (and language) mostly to using static types;
• implementation of a new JIT;
• migration to a new tagged-reference scheme, away from raw pointers (*1);
• ...

what would be a 4th BS rewrite has been considered, which would essentially be moving the VM primarily to static types and using a Dalvik-like backend (Register IR). this could potentially help with performance, but would take a lot of time/effort and would likely not be bytecode compatible with the current VM.

*1: unlike the prior type-system changes, this preserves 1:1 compatibility with the pointer-based system (via direct conversion), though there are some cases of conversion inefficiencies (mostly due to differences in terms of value ranges). both systems use conservative references and do not use reference-counting (avoiding a lot of the pain and overhead these bring).
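for illustration only, the general idea behind such a tagged-reference scheme can be sketched like so (Python; the 2-bit tag layout here is invented for the example, not the VM's actual encoding):

```python
# Word-aligned pointers leave their low bits free, so a small integer
# ("fixnum") can be packed directly into the reference word with a tag,
# avoiding a heap allocation per integer value.
TAG_BITS = 2
TAG_FIXNUM = 1  # low bits 01 mean "this word holds an integer"

def box_fixnum(n):
    return (n << TAG_BITS) | TAG_FIXNUM

def is_fixnum(ref):
    return (ref & ((1 << TAG_BITS) - 1)) == TAG_FIXNUM

def unbox_fixnum(ref):
    return ref >> TAG_BITS  # arithmetic shift preserves the sign

assert unbox_fixnum(box_fixnum(-42)) == -42
assert not is_fixnum(0x7f3a20)  # aligned pointer-like word: tag bits are 00
```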

or such...

Edited by cr88192, 04 August 2013 - 12:54 PM.

### #45 leeor_net - Members - Reputation: 307

Posted 07 August 2013 - 10:07 AM

Would like to throw my two cents in here as well.

I understand that people may not be crazy about XML, and that it was used, overused and abused to no end for many, many years. But I personally find it a very useful format for encoding basic data that doesn't need to be binary and is never really intended to be sent over a network. Effectively, I use it to define animation states and object properties in games. I also use it to great effect for localization strings.

I find JSON problematic for these cases, and frankly, YAML isn't as easy to put together, particularly when you have a number of sub-objects (it's not as intuitive, but that could simply be because it hasn't been in as wide use as XML).

Not to mention, you have really great libraries that are well tested and mature. I'm using TinyXML to great effect -- no need for the extra stuff like schemas and validation and whatnot; I just handle that myself because the definitions I'm using are so basic in nature.
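For flavour, the schema-free approach looks something like this with Python's stdlib ElementTree (TinyXML in C++ is similar in spirit; the element and attribute names below are invented for the example, not taken from any real project):

```python
import xml.etree.ElementTree as ET

# Hypothetical animation-state definition, purely for illustration.
doc = """
<animations>
  <state name="walk" fps="12">
    <frame x="0" y="0" w="32" h="32"/>
    <frame x="32" y="0" w="32" h="32"/>
  </state>
</animations>
"""

root = ET.fromstring(doc)
# No schema, no validation: just pull out the attributes you care about.
states = {
    s.get("name"): [(int(f.get("x")), int(f.get("y"))) for f in s.iter("frame")]
    for s in root.iter("state")
}
# states == {"walk": [(0, 0), (32, 0)]}
```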

### #46 CodeDemon - Members - Reputation: 363

Posted 11 August 2013 - 02:38 AM

> brief history (approximately of the past 14 years):
>
> at one point, I wrote a Scheme interpreter, and it naturally uses S-Expressions.
>
> later on, this project partly imploded (at the time, the code became mostly unmaintainable, and Scheme fell a bit short in a few areas).
>
> by its later stages, it had migrated to a form of modified S-Expressions, where essentially:
>
> macros were expanded; built-in operations used operation-numbers rather than symbols; lexical variables were replaced with variable-indices; ...
>
> there was also a backend which would spit out Scheme code compiled to globs of C.

Quite the array of language projects you have there!

I too am fond of the use of S-expressions over that of XML, and have had experience using them for data and DSLs in a number of projects. You can't beat the terseness and expressive power, and it's not hard to roll your own parser to handle them.
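Indeed, a workable reader fits in a handful of lines. A minimal sketch in Python (illustrative only; it handles atoms and nesting, no strings, comments, or quoting):

```python
def tokenize(src):
    # Pad parens with spaces, then split on whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return lst
    return tok  # atom, kept as a plain string

def read_sexpr(src):
    return parse(tokenize(src))
```

For example, `read_sexpr("(a (b c) d)")` gives the nested list `["a", ["b", "c"], "d"]`.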

I share many of the opinions from: http://c2.com/cgi/wiki?XmlIsaPoorCopyOfEssExpressions

As for my own projects, I've also built a custom R6RS parser in C++, and have done some interesting things with it. For specifying data as maps/sets/vectors, I added support for handling special forms which yield new data-structure semantics, added Clojure-like syntactic sugar to the lexer/parser where braces and square brackets can be used to define such data structures, and added a quick tree-rewriting pass to the data compiler to convert from the internal list AST node representation to the appropriate container type.

For simple data, sometimes I just go with simple key-value text files if I can get away with it (less is more! strtok_r does the job well enough), and I've recently been experimenting with using parsing-expression-grammar generators to quickly create parser combinators for custom DSLs that generate more complex data or code as S-expressions or C++.
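A parser for that kind of key-value file is similarly tiny. A sketch in Python (the format details here, "key = value" pairs with "#" comments, are just one plausible convention, not a spec):

```python
def parse_kv(text):
    # "key = value" per line; "#" starts a comment; blank lines ignored.
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

cfg = parse_kv("width = 800\nheight = 600  # window size\n")
# cfg == {"width": "800", "height": "600"}
```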

A shame that many of the "big iron" game studios still use XML for a lot of things, although I've managed to convince a number of people that it's time to move on. I dread the days when I am tasked with working on anything touching the stuff.

In short, if you're still using XML, you're needlessly wading through an endless swamp of pain, suffering and obtuse complexity. Things can be better.

### #47 CodeDemon - Members - Reputation: 363

Posted 11 August 2013 - 02:58 AM

> I understand that people may not be crazy about XML, and that it was used, overused and abused to no end for many, many years. But I personally find it a very useful format for encoding basic data that doesn't need to be binary and is never really intended to be sent over a network. Effectively, I use it to define animation states and object properties in games. I also use it to great effect for localization strings.
>
> I find JSON problematic for these cases, and frankly, YAML isn't as easy to put together, particularly when you have a number of sub-objects (it's not as intuitive, but that could simply be because it hasn't been in as wide use as XML).

S-expressions are just as powerful, yet more terse. Naughty Dog uses them in the Uncharted Engine for similar things.

### #48 phantom - Members - Reputation: 10326

Posted 11 August 2013 - 03:16 AM

If the data format is going to be read, 99% of the time, by a tools pipeline and not by a human, then I don't consider terseness a virtue, to be honest.

If your pipeline/tools are based around .NET, then with XML, between the XDocument/XElement classes and LINQ, you've got 99% of your processing/tree-walking requirements covered - writing a bit of LINQ to parse an XDocument is pretty trivial.

### #49 swiftcoder - Senior Moderators - Reputation: 17560

Posted 11 August 2013 - 07:01 AM

> If the data format is going to be read, 99% of the time, by a tools pipeline and not by a human, then I don't consider terseness a virtue, to be honest.

My experience has been that even when that is the intention, it's not the end result.

We seem to spend a lot of time hand-tweaking the (XML) output of our pipeline.

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #50 phantom - Members - Reputation: 10326

Posted 11 August 2013 - 07:21 AM

I can say with absolute certainty that no tool-produced XML needs tweaking in our setup. We do have one config file which is XML, but that is "legacy" as much as anything ("it works, we aren't going to change it"). The only other hand-edited config file we have is the renderer setup, which is in JSON - although I'm not convinced that was the right call; I wanted to use a "JSON/Python-inspired syntax", but that's a whole other barrel of bitterness.

### #51 BGB - Crossbones+ - Reputation: 1562

Posted 11 August 2013 - 11:09 AM

> > brief history (approximately of the past 14 years):
> >
> > at one point, I wrote a Scheme interpreter, and it naturally uses S-Expressions.
> >
> > later on, this project partly imploded (at the time, the code became mostly unmaintainable, and Scheme fell a bit short in a few areas).
> >
> > by its later stages, it had migrated to a form of modified S-Expressions, where essentially:
> >
> > macros were expanded; built-in operations used operation-numbers rather than symbols; lexical variables were replaced with variable-indices; ...
> >
> > there was also a backend which would spit out Scheme code compiled to globs of C.
>
> Quite the array of language projects you have there!
>
> I too am fond of the use of S-expressions over that of XML, and have had experience using them for data and DSLs in a number of projects. You can't beat the terseness and expressive power, and it's not hard to roll your own parser to handle them.
>
> I share many of the opinions from: http://c2.com/cgi/wiki?XmlIsaPoorCopyOfEssExpressions
>
> As for my own projects, I've also built a custom R6RS parser in C++, and have done some interesting things with it. For specifying data as maps/sets/vectors, I added support for handling special forms which yield new data-structure semantics, added Clojure-like syntactic sugar to the lexer/parser where braces and square brackets can be used to define such data structures, and added a quick tree-rewriting pass to the data compiler to convert from the internal list AST node representation to the appropriate container type.
>
> For simple data, sometimes I just go with simple key-value text files if I can get away with it (less is more! strtok_r does the job well enough), and I've recently been experimenting with using parsing-expression-grammar generators to quickly create parser combinators for custom DSLs that generate more complex data or code as S-expressions or C++.
>
> A shame that many of the "big iron" game studios still use XML for a lot of things, although I've managed to convince a number of people that it's time to move on. I dread the days when I am tasked with working on anything touching the stuff.
>
> In short, if you're still using XML, you're needlessly wading through an endless swamp of pain, suffering and obtuse complexity. Things can be better.

I was working at the time with R5RS.

by the time R6RS came out, I had mostly stopped using Scheme, and looking at it briefly, it looked like a bit of a jump from what R5RS was.

the AST format later used for BGBScript was based partly on R5RS, but differs in a lot of ways, mainly ones that make it a better fit for an HLL with a more C/JS/AS3-like syntax: different special-forms for defining things, forms representing control-flow constructs (for/while/switch/...), ...

also, it generally moved to the use of explicit special-forms for things like function calls and operator applications, ...

some elements of Scheme also were worked into the HLL design as well (tail-calls / tail-position, implicit return values, lists, ...).

early on, both Self and Erlang were also influences for the language design.

later on, Java, C#, and AS3 became influences.

basically, while it started out dynamic and prototype based, static-types, classes, packages, ... were later glued on, partly for performance reasons, and also because they are more effective for a lot of use-cases (can do stronger compile-time checking, ...).

though, the language still retains most of its dynamic funkiness (including a Self-derived scoping model, scoping semantics are fun in my language...). not going to try to explain the type-system and scoping model here though.

for parsers, I have most often used hand-written recursive-descent.

I started out with RD, and pretty much every non-trivial syntax I have encountered seems to work fine with RD.

XML and S-Expressions both have some use-cases.

granted, my XML APIs have since diverged somewhat from DOM, becoming generally a lot more operation-centric, and much less about treating XML nodes as objects (and generally, the "Document" metaphor is all but absent in-use). basically, the API focuses a lot more on composition and decomposition of data, rather than on node manipulation. ironically, it isn't used much at all with external tools (typically about the only time most of this is actually seen is in debugging dumps).

theoretically, it could also matter if/when I needed to interact with other things which use XML, or if by some off-chance I decide to use XML-RPC again (currently unlikely...).

granted, from an ease-of-use perspective, lists are hard to beat, as they are generally a lot easier to work with, with a lot less code.

granted, my approach to this (C-side) has been to build a big chunk of Lisp-like APIs in C (basically, a bunch of Lisp and CLOS-like stuff glued onto C).

granted, it took several iterations before really settling on a usable set of tradeoffs (getting something that is both usable and performs well).

a lot of the infrastructure is shared between my script-language and C parts of the project.

I had considered (binary) XML for my network protocol, but ended up opting instead for lists.

my network protocol consists basically of large nested list structures, generally passed along to/from specific "targets" (such as between client-side and server-side versions of an entity, ...). initial versions used Deflate-compressed textual serializations, but I later implemented a direct entropy-coded binary serialization.
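the first approach is easy to sketch: print the message as text and Deflate it (Python's zlib shown here purely for illustration; the actual protocol's encoding is its own thing):

```python
import zlib

# a list-structured message, serialized as text and Deflate-compressed
msg = "(chunk-delta (origin -240 416 48) (size 16 16 16) (voxeldata ...))"
packed = zlib.compress(msg.encode("utf-8"))
unpacked = zlib.decompress(packed).decode("utf-8")
assert unpacked == msg  # lossless round-trip
```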

this protocol is also used for my voxel-terrain, though it is sort of a hybrid (generally, the actual voxel-chunk data is passed using large byte arrays, with the chunk-data being flattened out and RLE compressed). partly this is because passing every voxel as a list-based message would be a bit of a stretch...

```
(chunk-delta (origin -240 416 48) (size 16 16 16) ...
  (voxeldata
    (voxel :type dirt :aux 0 :slight 240 :vlight 0 ...)
    (voxel :type dirt ...)
    ...))
```

it is basically a problem of 16x16x16 * 32*32*8 * 4 * ... which would take some fairly absurd numbers of cons-cells...

so, passing the chunk data in a byte-serialized format seemed like a "reasonable" compromise here.

so, instead it is something more like:

```
(wdelta
  ...
  (voxdelta ...
    (rgndelta ... #Ah( ... ))
    (rgndelta ...)
    ...)
  ...)
```

where wdelta = world-delta, voxdelta = voxel-delta, rgndelta = region-delta, and #Ah( ... ) is a 1D byte array.
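the flatten-and-RLE step for the chunk bytes can be sketched like this (Python, illustrative only; the real wire format differs):

```python
def rle_encode(data):
    # flatten runs of equal bytes into (count, value) pairs; runs are
    # capped at 255 so each count fits in a single byte
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += [run, data[i]]
        i += run
    return bytes(out)

def rle_decode(data):
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

chunk = bytes([1] * 100 + [0] * 28 + [2] * 5)  # 133 flattened voxel bytes
packed = rle_encode(chunk)                     # 6 bytes: three (count, value) pairs
assert rle_decode(packed) == chunk
```

mostly-uniform chunks (big runs of air or stone) collapse to almost nothing, which is why this beats per-voxel list messages by a wide margin.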

Edited by cr88192, 11 August 2013 - 11:15 AM.

### #52 swiftcoder - Senior Moderators - Reputation: 17560

Posted 11 August 2013 - 01:04 PM

It's more a matter of our process being broken.

We don't sit next to our artists (nor even in the same time zone), so if an asset comes through buggy, you either learn to use the DCC tool, wait 6 hours for a fresh edition of the asset, or patch the XML up by hand. The latter option wins surprisingly often (hint: programmers mostly don't like using DCC tools).


### #53 Flimflam - Members - Reputation: 665

Posted 11 November 2013 - 02:42 AM

Honestly, I think XML is hated a bit too religiously these days. The biggest things make the biggest targets for criticism. Unless you're going full-featured in your XML usage it's plenty readable, and if you make proper use of attributes, it isn't that much bigger than the likes of JSON.
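A quick size check backs this up: attribute-style XML for a small record lands within a few characters of the JSON (Python, with an invented example record):

```python
import json
import xml.etree.ElementTree as ET

item = {"id": 7, "name": "sword", "damage": 12}

as_json = json.dumps(item)
as_xml = '<item id="7" name="sword" damage="12"/>'

# the same data round-trips from the attribute form...
parsed = ET.fromstring(as_xml)
assert parsed.get("name") == "sword"
# ...and the two encodings are within a few characters of each other
assert abs(len(as_xml) - len(as_json)) <= 5
```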

### #54 DulcetTone - Members - Reputation: 103

Posted 29 December 2013 - 08:20 PM

XML was the CSV of Y2K

tone

### #55 Postie - Members - Reputation: 1420

Posted 31 December 2013 - 04:45 PM

I consider XML to be in the same category as COLLADA: i.e., designed to reliably convey data between different systems, and nothing else.

• They are both text-based and inefficient at storing data when compared to binary formats.
• They can have complex structures that lead to slow parsing, especially on larger files.

The only difference I see is that people realised what COLLADA was intended for and treated it accordingly, whereas XML was (and still is) abused.

I've worked on a project where the original authors thought it would be a good idea to create an entire XML document on the fly using string concatenation, pass it to a stored procedure, and query it as a table to pull out a few parameters.

Guess what brought down the entire system?... "&".

Granted, that's not XML's fault, but still...
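For the record, the failure mode is easy to reproduce: a bare "&" makes the document ill-formed, and escaping the data fixes it (Python stdlib shown; any strict XML parser behaves the same way):

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

user_input = "Ben & Jerry"

# naive string concatenation yields ill-formed XML: a bare "&" must be
# written as an entity reference, so a strict parser rejects the document
bad = "<name>%s</name>" % user_input
try:
    ET.fromstring(bad)
    raise AssertionError("unreachable: the parse fails")
except ET.ParseError:
    pass  # the same failure that can bring down a whole system

# escaping the data first keeps the document well-formed
good = "<name>%s</name>" % escape(user_input)  # "<name>Ben &amp; Jerry</name>"
assert ET.fromstring(good).text == "Ben & Jerry"
```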

Currently working on an open world survival RPG - For info check out my Development blog:

### #56 Ravyne - GDNet+ - Reputation: 13365

Posted 31 December 2013 - 07:44 PM

Well, COLLADA is an XML application, so it makes sense that they are similar. I agree very much that in most respects XML is best left on the near side of your build, the exception being when you actually need human-readable markup as part of your program's content (or arguably, it's not the worst choice you could make for configuration data *if* you've already taken the dependency anyway).

I tend to disagree that XML isn't human-readable, though -- the language itself is plenty readable, but many of its applications are too complex and/or verbose for that to be true in practice. Another sin some XML applications commit is not using the language correctly: using attributes when child elements would be more apropos (or vice versa), introducing too many or not enough "container" elements, improper use of namespaces, or failing to provide a means of validation for the application.

A straight-forward, well-designed, and well-supported XML application is usually a joy to use, modify, and build tooling around.

throw table_exception("(ノ ゜Д゜)ノ ︵ ┻━┻");

### #57 ambershee - Members - Reputation: 532

Posted 02 January 2014 - 08:12 AM

Edit: Nevermind. Accidentally replied to something months old. I'm an idiot.

Edited by ambershee, 02 January 2014 - 08:12 AM.

### #58 RedactedProfile - Members - Reputation: 169

Posted 15 January 2014 - 12:24 AM

Honestly, JSON is my preferred data serializer. I use it in everything in lieu of XML.

The existing YAML parsers are, from my experience of trying to integrate them with C++, pretty bad or incomplete, haha, but it's also a good format when it's working. YAML 1.2 actually falls back on JSON, which is cool (but I haven't tested it). I tend to prefer YAML for config-type files, and JSON for just about everything else (data stores, data transfer, web services, etc.).

Signed: Redacted

### #59 DocBrown - Members - Reputation: 273

Posted 24 February 2014 - 12:38 PM

From a strictly professional/commercial standpoint - I do EDI development as my day job: things such as EDIFACT, X12, HL7, FIX, etc.

Back several years ago, many of these large, business-type data standards decided to try to push the market from length-encoded textual files to markup files via XML tags. It went horribly. Those that implemented it probably wish they hadn't, and those that didn't still have to deal with those that did. Here's an example of HL7v2 and HL7v3 (XML-based). Can you pick which one you'd rather troubleshoot and view data in? I pick option #1. I honestly wish XML would die.

HL7v2

```
MSH|^~\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4
PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^
^STATESVILLE^OH^35292||(206)3345232|(206)752-121||||AC555444444||67-A4335^OH^20030520
OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730|||||||||
555-55-5555^PRIMARY^PATRICIA P^^^^MD^^|||||||||F||||||444-44-4444^HIPPOCRATES^HOWARD H^^^^MD
OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105|H|||F<cr>
```



HL7v3

```xml
<POLB_IN224200 ITSVersion="XML_1.0" xmlns="urn:hl7-org:v3"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<id root="2.16.840.1.113883.19.1122.7" extension="CNTRL-3456"/>
<creationTime value="200202150930-0400"/>
<!-- The version of the datatypes/RIM/vocabulary used is that of May 2006 -->
<versionCode code="2006-05"/>
<!-- interaction id= Observation Event Complete, w/o Receiver Responsibilities -->
<interactionId root="2.16.840.1.113883.1.6" extension="POLB_IN224200"/>
<processingCode code="P"/>
<processingModeCode nullFlavor="OTH"/>
<acceptAckCode code="ER"/>
<device classCode="DEV" determinerCode="INSTANCE">
<id extension="GHH LAB" root="2.16.840.1.113883.19.1122.1"/>
<asLocatedEntity classCode="LOCE">
<location classCode="PLC" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.2" extension="ELAB-3"/>
</location>
</asLocatedEntity>
</device>
<sender typeCode="SND">
<device classCode="DEV" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.1" extension="GHH OE"/>
<asLocatedEntity classCode="LOCE">
<location classCode="PLC" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.2" extension="BLDG24"/>
</location>
</asLocatedEntity>
</device>
</sender>
</POLB_IN224200>
```
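Part of why v2 is easier to eyeball: it is just a delimited format, with one segment per line, fields split on "|" and components on "^", so a couple of splits recover the structure (Python, purely illustrative):

```python
# Two segments from the HL7v2 sample above, as one string.
msg = (
    "MSH|^~\\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4\n"
    "PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F"
)

# Split each line into fields, then a field into components.
segments = [line.split("|") for line in msg.splitlines()]
patient_name = segments[1][5].split("^")  # ["EVERYWOMAN", "EVE", "E", ...]
```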


Edited by DocBrown, 24 February 2014 - 12:39 PM.

### #60 JTippetts - Moderators - Reputation: 11912

Posted 24 February 2014 - 12:46 PM

> From a strictly professional/commercial standpoint - I do EDI development as my day job: things such as EDIFACT, X12, HL7, FIX, etc.
>
> Back several years ago, many of these large, business-type data standards decided to try to push the market from length-encoded textual files to markup files via XML tags. It went horribly. Those that implemented it probably wish they hadn't, and those that didn't still have to deal with those that did. Here's an example of HL7v2 and HL7v3 (XML-based). Can you pick which one you'd rather troubleshoot and view data in? I pick option #1. I honestly wish XML would die.
>
> HL7v2
>
> ```
> MSH|^~\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4
> PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^
> ^STATESVILLE^OH^35292||(206)3345232|(206)752-121||||AC555444444||67-A4335^OH^20030520
> OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730|||||||||
> 555-55-5555^PRIMARY^PATRICIA P^^^^MD^^|||||||||F||||||444-44-4444^HIPPOCRATES^HOWARD H^^^^MD
> OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105|H|||F<cr>
> ```
>
> HL7v3
>
> ```xml
> <POLB_IN224200 ITSVersion="XML_1.0" xmlns="urn:hl7-org:v3"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
> <id root="2.16.840.1.113883.19.1122.7" extension="CNTRL-3456"/>
> <creationTime value="200202150930-0400"/>
> <!-- The version of the datatypes/RIM/vocabulary used is that of May 2006 -->
> <versionCode code="2006-05"/>
> <!-- interaction id= Observation Event Complete, w/o Receiver Responsibilities -->
> <interactionId root="2.16.840.1.113883.1.6" extension="POLB_IN224200"/>
> <processingCode code="P"/>
> <processingModeCode nullFlavor="OTH"/>
> <acceptAckCode code="ER"/>
> <device classCode="DEV" determinerCode="INSTANCE">
> <id extension="GHH LAB" root="2.16.840.1.113883.19.1122.1"/>
> <asLocatedEntity classCode="LOCE">
> <location classCode="PLC" determinerCode="INSTANCE">
> <id root="2.16.840.1.113883.19.1122.2" extension="ELAB-3"/>
> </location>
> </asLocatedEntity>
> </device>
> <sender typeCode="SND">
> <device classCode="DEV" determinerCode="INSTANCE">
> <id root="2.16.840.1.113883.19.1122.1" extension="GHH OE"/>
> <asLocatedEntity classCode="LOCE">
> <location classCode="PLC" determinerCode="INSTANCE">
> <id root="2.16.840.1.113883.19.1122.2" extension="BLDG24"/>
> </location>
> </asLocatedEntity>
> </device>
> </sender>
> ```