Being lazy isn't necessarily a bad thing; it can lead to the programmer writing simpler code for the sake of doing less work (both in the moment and later during maintenance). The real problem is incompetence (seriously, most horrible code actually takes a lot of effort to write, so it's hard to argue it's lazy), and in some cases not knowing the implications of what the code does, so things get done in a suboptimal way (high-level languages can sometimes get rather bad about this).
There's a lot of truth in this in my opinion, not only in respect of "managers" but also in respect of the original topic, "why use XML".
Being "lazy" by doing less work is nothing but a show of competence in using the available work time. That holds at least as long as the result observable to the end user is identical (which is the case here).
Now, XML may be unsuitable for your tasks, in which case you should indeed use something different (for example, I would not use it to serialize data that goes over the network, even though even that "works fine", as has been proven). But on the other hand, it might just be good enough, with no real and serious disadvantage other than being less pretty than you'd like. You have working libraries that you know by heart to handle the format, it plays well with your revision-control system, and in the final product it's either compiled into a binary format anyway, or the load time doesn't matter. Maybe you don't like one or another feature, but seriously, so what.
In the rather typical case of "no visible difference in the end product", one needs to ask which option shows more competence: using something that works, or investing extra time so one can use something that... works.
this is partly a reason behind the current funkiness of using both XML and S-Expressions for a lot of stuff...
a lot of this comes back to my interpreter projects, since most of the other use-cases have been "horizontal outgrowths" of these; most unrelated systems typically ended up using line-oriented text files (partly because, in simple cases, these tend to take the least implementation effort).
note that in my case, JSON is mostly treated as a horizontal side-case of the S-Expression system (different parser/printer interface, but they map to the same basic underlying data representations).
the main practical difference then (in-program) is which types dominate:
S-Expression data primarily uses lists, and generally avoids maps/objects (non-standard extension);
JSON primarily uses maps/objects, and avoids lists, symbols, keywords, ... (non-standard extensions).
secondarily, this also means my "S-Expression" based network protocol naturally handles serializing JSON style data as well (it doesn't really care too much about the differences).
for largish data structures, the relative costs of the various options tend to weigh in as well, and (in my case) objects with only a few fields tend to be a bit more expensive than a list or array (though objects are the better choice if there are likely to be a lot of fields, or if the data just naturally maps better to an object than to a list or array).
this leads to a drawback for JSON in my case: by convention it tends to rely fairly heavily on these more-expensive object types, and for my (list-heavy) data-sets it tends to produce slightly more "noisy" output (lots of extra / unnecessary characters). both formats can be either "dumped" or printed with formatting.
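to make the "noise" difference concrete, here is a rough made-up example of the same record in list-heavy S-Expression style versus conventional object-heavy JSON (the field names and layout here are purely illustrative, not from any of my actual formats):

```
; list-heavy S-Expression style: positional fields, few delimiters
(vertex 1.0 2.0 3.0)

; object-heavy JSON style: named fields, more delimiter noise
{"type": "vertex", "x": 1.0, "y": 2.0, "z": 3.0}
```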
brief history (approximately of the past 14 years):
at one point, I wrote a Scheme interpreter, and it naturally uses S-Expressions.
later on, this project partly imploded (at the time, the code became mostly unmaintainable, and Scheme fell a bit short in a few areas).
by its later stages, it had migrated to a form of modified S-Expressions, where essentially:
macros were expanded; built-in operations used operation-numbers rather than symbols; lexical variables were replaced with variable-indices; ...
there was also a backend which would spit out Scheme code compiled to globs of C.
elsewhere, I had implemented XML-RPC, and a simplistic DOM-like system to go along with it.
I had also created a type-system, initially intended to assist with data serialization and partly to add some dynamic type-facilities needed to work effectively with XML-RPC. pretty much all types were raw pointers to heap-allocated values, with an object header just before the data, and the system was initially separate from the memory manager (later on, they were merged). (in this system, if you wanted an integer value, you would get an individually-allocated integer, ...).
the second BGBScript interpreter (BS.B) was built mostly by copying a bunch of the lower-end compiler and interpreter logic from the Scheme interpreter, essentially using a mutilated version of Scheme as the AST format while retaining a fairly similar high-level syntax to the original. it used ("proper") bytecode up-front, and later experimented with a JIT. it ended up inheriting some of the Scheme interpreter's problems (most notably the use of precise reference-counted references from C code, which involved a lot of boilerplate, pain, and performance overhead).
the C compiler sub-project mostly used the parser from BS.A and parts of the bytecode and JIT machinery from BS.B. it kept the use of an XML-based AST format. this sub-project ultimately turned out to have a few fatal flaws (though some parts remain and were later re-purposed). the same fate befell my (never completed) Java and C# compiler efforts, which were built on the same infrastructure as the C compiler.
the 3rd BGBScript interpreter (BS.C) was basically just a reworking of BS.B onto the type-system from BS.A, mostly because that type-system was significantly less of a PITA to work with. this resulted in some expansion of the BS.A type-system (such as adding lists and cons cells, ...). (and, by this time, some of the worse offenses had already been cleaned up...).
the changes made broke the JIT in some fairly major ways (so, for the most part, it was interpreter only).
the BS VM has not undergone any single major rewrites since BS.C, but several notable internal changes have been made:
migration of the interpreter to threaded-code;
migration of the interpreter (and language) mostly to using static types;
implementation of a new JIT;
migration to a new tagged-reference scheme, away from raw pointers (*1).
what would be a 4th BS rewrite has been considered, which would essentially be moving the VM primarily to static types and using a Dalvik-like backend (Register IR). this could potentially help with performance, but would take a lot of time/effort and would likely not be bytecode compatible with the current VM.
*1: unlike the prior type-system changes, this preserves 1:1 compatibility with the pointer-based system (via direct conversion), though there are some cases of conversion inefficiencies (mostly due to differences in terms of value ranges). both systems use conservative references and do not use reference-counting (avoiding a lot of the pain and overhead these bring).
Edited by cr88192, 04 August 2013 - 12:54 PM.