We have a near ubiquitous data storage and data transmission format for intranets and the internet, yet many want to poison the interoperability well by increasing the number of incompatible formats that are called 'XML'.
--Dare Obasanjo on the xml-dev mailing list, Monday, 22 Nov 2004
Namespaces are an intrinsic part of an element. A furniture store <table> doesn't become an XHTML <table> just because you moved it to an XHTML document.
--Jason Hunter on the jdom-interest mailing list, Sunday, 28 Nov 2004
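Hunter's point can be seen directly in any namespace-aware parser. As an editorial illustration (not from the quote itself), here is a sketch using Python's standard-library ElementTree; the furniture namespace URI is invented for the example, while the XHTML URI is the real one:

```python
import xml.etree.ElementTree as ET

# A furniture-store <table> and an XHTML <table> parse into different
# qualified names, even though the local name "table" is the same.
furniture = ET.fromstring('<table xmlns="http://example.org/furniture"/>')
xhtml = ET.fromstring('<table xmlns="http://www.w3.org/1999/xhtml"/>')

print(furniture.tag)  # {http://example.org/furniture}table
print(xhtml.tag)      # {http://www.w3.org/1999/xhtml}table
print(furniture.tag == xhtml.tag)  # False
```

The namespace travels with the element: moving the furniture `<table>` into an XHTML document does not change its qualified name.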
My wife discovered that her computer had been infected by spyware and trojans despite the anti-virus, regular Windows updates, having the good sense not to open attachments, using a firewall, and avoiding any type of seedy activities online. As best we can tell, someone exploited IE transparently while she searched for medical information to help our nephew.
The cleanup from these types of infections is great fun. I spent not less than 5 hours running just about every spyware prevention program known to man, each one searching for those pesky files and registry settings. The worst thing of all was that, once I cleared them off the disk, simply starting Internet Explorer would reinfect the whole system. Seriously, it was great fun and I did, eventually, have the satisfaction of beating the problem. That's right - a system administrator for 10 years with a degree in computer science and an RHCE CAN clean up a single spyware infection in 5 hours.
I hope you see what I am really saying here. How on this earth are people who aren't trained in Information Technology going to do it? As a Linux desktop user, I had never been exposed to this type of problem. Having now battled with spyware, I am finally motivated to speak up and say something to the world. I want to get a single message across:
It's time for anyone running a Windows PC to switch to Linux.
You see, the Windows platform is not just insecure - it's patently, blatantly, and unashamedly insecure by design, and for all the lip service to security it's really not going to get better, ever. To make matters worse, it's more expensive and gives you fewer necessary applications right out of the box than Linux. Everyone, even Microsoft, knows this - they are just too afraid to say it. The tide is coming in. Nothing on this planet can stop it.
--Chris Spencer
Read the rest in Linux Opinion: An Open Letter to a Digital World (LinuxWorld)
the trick with xslt is that it's a specification of what to do, not an instruction to do something. ie it is truly non-procedural.
the paradigm shift is from programming (giving a clear step by step set of instructions) to specifying (if you have an x then do y). this is subtle but critical.
my experience training practicing programmers to make this paradigm shift is that they struggle. they're programmers because they can build a detailed set of instructions. if you talk to managers you'll find they can actually do this sort of work better because their day-to-day work is based on broad directions not detailed instructions.
--Rick Marshall on the xml-dev mailing list, Thu, 11 Nov 2004
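Marshall's "if you have an x then do y" paradigm can be sketched outside XSLT too. As an editorial illustration (the rule names are invented), here is a toy template-rule dispatcher in Python: each rule declares what to produce when a given element is seen, and the traversal order belongs to the engine, not to the rules — which is the shift he describes:

```python
import xml.etree.ElementTree as ET

# Declarative "template rules" in the spirit of XSLT: each entry says
# "when you encounter element X, produce Y". No rule controls the walk.
rules = {
    'title': lambda e: f"<h1>{e.text}</h1>",
    'para':  lambda e: f"<p>{e.text}</p>",
}

def apply_templates(elem):
    """The engine owns traversal; rules only specify what to do on a match."""
    out = []
    for child in elem:
        rule = rules.get(child.tag)
        if rule:
            out.append(rule(child))
        out.extend(apply_templates(child))
    return out

doc = ET.fromstring('<doc><title>Hi</title><para>Body</para></doc>')
print(''.join(apply_templates(doc)))  # <h1>Hi</h1><p>Body</p>
```

Adding support for a new element means adding a rule, not rewriting the traversal — the specification grows while the "program" stays put.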
The real problem with DOM is that it is good enough for many purposes; it has far too many methods, many overlapping in purpose and not named consistently. Committees and legacies do that. Despite that, everyone already has a DOM library handy -- not just Java programmers, but also programmers of many other languages. It is too easy to just choose DOM because it is widespread and available.
Although I would not generally choose to write in the Java language if I had the option to write Python (or maybe Ruby, or even Perl), XOM really does everything better than DOM. XOM is more correct, easier to learn, and more consistent. Most of its capabilities have not been covered in this introduction, but rest assured it incorporates the usual collection of XML technologies: XPath, XSLT, XInclude, the ability to interface with SAX and DOM, and so on. If you are doing XML development in the Java language, and you are able to include a custom LGPL library in your application, I strongly recommend that you give XOM a serious look.
--David Mertz
Read the rest in XML Matters: The XOM Java XML API
One thing I think was pretty significantly stupid on Microsoft's part is that it built its virtual machine environment so it would support languages like C and C++, and we thought long and hard about doing that. As soon as you support C and C++, you blow away most of the security story.
So C# added these unsafe regions, and the CLR (Common Language Runtime) allows you to do unrestricted pointer operations; and as near as I can tell, the way that the standard Microsoft APIs are built, you have to drop into these unrestricted pointer environments a lot.
As a part of its "support all possible languages," Microsoft basically gave up on security. Microsoft is getting hammered over and over and over again about this, and has been for years, and the company says a lot of good words, but it doesn't actually seem to do anything really significant. It issues a lot of patches. It doesn't actually think about things from the ground up.
Security is one of these things that you don't add by painting it on afterward.
--James Gosling
Read the rest in ACM Queue - A Conversation with James Gosling - James Gosling talks about virtual machines, security, and of course, Java.
Market share does not predict security. Apache has more market share than Microsoft IIS, which has more holes than Apache.
--Ben Goodger
Read the rest in Unearthing the origins of Firefox | Newsmakers | CNET News.com
XSLT's XML format was one of its huge advantages over DSSSL, which it effectively replaced the way XML replaced SGML (and DSSSL had far less success than SGML). It's much easier to read "</xsl:if></xsl:for-each></xsl:if></xsl:variable>" and know exactly what kinds of structures are being ended, in what order, than to look at "))))" and know the same thing.
--Bob DuCharme on the xml-dev mailing list, Tuesday, 9 Nov 2004
For big systems with open-ended scaling requirements, architectures that are asynchronous and queued rather than call and response generally seem to win, big time. It’s not an accident that IBM and Tibco were making millions selling big robust asynchronous queuing infrastructure long before anyone started talking about “Web Services.”
--Tim Bray
Read the rest in ongoing · Web Services Theory and Practice
I've heard a number of talks recently advocating that we should use URIs whenever we want to identify anything, and I simply don't think that's the right direction. To my mind <postcode>RG4 7BS</postcode> is a perfectly good identifier (for a small piece of geography in which my house is found), and any technology that requires me to write it differently if I'm going to use it for linking purposes is too constraining.
--Michael Kay on the xml-dev mailing list, Friday, 22 Oct 2004
The whole WS standards thing has more moving parts than a 747. Much of it recently invented, untested and unproven in the real world.
Given that there are no exceptions to Gall's Law: A complex system that works is invariably found to have evolved from a simple system that worked.
I believe WS-YouMustBeJoking is doomed to collapse under its own weight.
--Sean McGrath
Read the rest in Sean McGrath, CTO, Propylon
you just can’t get both the necessary flexibility and performance that you need for XML unless you are prepared to move away from a purely relational approach.
--Philip Howard
Read the rest in IBM moves the database goalposts | The Register
RSS is clearly, far and away the most successful web service to date. And it kind of demonstrates something that happens a lot in technology, which is that something simple and easy-to-use gets overloaded (in the sense that object oriented programming uses the term). I mean it's the classic example of Clayton Christensen's innovator's dilemma. When HTML came out everybody said "Hey this is so crude, you can't build rich interfaces like you can on a PC - it'll never work". Well it did something that people wanted, it kind of grew more and more popular, became more and more powerful, people figured out ways to extend it. Yes a lot of those extensions were kludges, but HTML really took over the world. And I think RSS is very much on the same track. It started out doing a fairly simple job, people found more and more creative things to do with it, and hack by hack it has become more powerful, more useful, more important. And I don't think the story is over yet.
--Tim O'Reilly
Read the rest in Read/Write Web: Tim O'Reilly Interview
Mozilla came back to life and is now improving, which is no longer the case for IE. If Mozilla brings a good run time environment for intranet apps, then things may change and we may have an alternative option to XAML/IE/Longhorn. Mozilla teams should listen more to developers' needs and less to W3C in order to succeed.
--Didier PH Martin on the xml-dev mailing list, Wednesday, 07 Jul 2004
Whenever I hear about a new text editor that’s “better than BBEdit,” the first thing I do is open its Find and Replace window. Then I run back to BBEdit.
--Michael Tsai
Read the rest in BBEdit 8
Python is essentially Scheme with indentation instead of parentheses.
--John Cowan on the xml-dev mailing list, Wednesday, 23 Oct 2002
A subtle revolution is going on, and parts of it emerged in this conference. While there are many (especially those vendors) who want to declare the matter closed and have effectively turned a deaf ear to the plea for re-examining schemas, the most promising alternative candidate, Relax-NG (also given as RNG) has quietly been showing up in all sorts of interesting places. For instance, last year the SVG 1.2 specification was published using RNG and not XSD as the schema. Tools such as Oxygen are making RNG available as their primary validation scheme, and content management engineers, who have known for some time the deficiency inherent in XSD, have been switching over DTDs to RNG and bypassing the XSD spec altogether.
--Kurt Cagle
Read the rest in Metaphorical Web: Conferences and Google
the canonical documentation of the Scheme and Lisp standards is maintained not in S-expression syntax but in LaTeX syntax. If S-expressions were easier to edit, it would be most logical to edit the document in S-expressions and then write a small Scheme program to convert S-expressions into a formatting language like LaTeX. This is what XML and SGML people have done for decades, because they really do believe that their technologies are better for document editing and maintenance than LaTeX. The Lisp world seems to have come to a different conclusion about S-expressions versus LaTeX.
--Paul Prescod
Read the rest in XML is not S-Expressions
IE doesn't really do namespaces, it vaguely emulates them with a hack. The whole Mozilla family on the other hand implements them as per spec, as do (or soon will) browsers of the KHTML family.
--Robin Berjon on the xml-dev mailing list, Thu, 28 Oct 2004
This is a different situation than in Java, because compared to Java code, XML is agile and flexible. Compared to Python code, XML is a boat anchor, a ball and chain. In Python, XML is something you use for interoperability, not your core functionality, because you simply don't need it for that. In Java, XML can be your savior because it lets you implement domain-specific languages and increase the flexibility of your application "without coding". In Java, avoiding coding is an advantage because coding means recompiling. But in Python, more often than not, code is easier to write than XML. And Python can process code much, much faster than your code can process XML. (Not only that, but you have to write the XML processing code, whereas Python itself is already written for you.)
If you are a Java programmer, do not trust your instincts regarding whether you should use XML as part of your core application in Python. If you're not implementing an existing XML standard for interoperability reasons, creating some kind of import/export format, or creating some kind of XML editor or processing tool, then Just Don't Do It. At all. Ever. Not even just this once. Don't even think about it. Drop that schema and put your hands in the air, now! If your application or platform will be used by Python developers, they will only thank you for not adding the burden of using XML to their workload.
--Phillip J. Eby
Read the rest in dirtSimple.org: Python Is Not Java
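Eby's point — that in Python the flexibility people reach for XML to get is already in the language — can be made concrete. As an editorial sketch (the handler names are invented), what would be an XML-driven dispatch table in a Java framework is just a dict of functions in Python:

```python
# What might be an XML configuration file mapping message types to
# handler classes in Java can be a plain dict in Python: the
# "configuration" is ordinary code, testable and reloadable.
def handle_order(payload):
    return f"order:{payload}"

def handle_refund(payload):
    return f"refund:{payload}"

DISPATCH = {
    'order': handle_order,
    'refund': handle_refund,
}

def process(kind, payload):
    # Look up the handler for this message kind and invoke it.
    return DISPATCH[kind](payload)

print(process('order', 42))   # order:42
print(process('refund', 7))   # refund:7
```

There is no schema to write, no parser to invoke, and no "XML processing code" to maintain — which is exactly the burden Eby says Python developers will thank you for sparing them.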
For the user who spends 50 percent of the time in the Web browser and another 40 percent in the mail client, the Linux desktop is already there.
--Andy Hertzfeld
Read the rest in Technology Review: An Alternative to Windows
The First Amendment can't give special rights to the established news media and not to upstart outlets like ours. Freedom of the press should apply to people equally, regardless of who they are, why they write or how popular they are.
--Eugene Volokh
Read the rest in The New York Times > Opinion > Op-Ed Contributor: You Can Blog, but You Can't Hide
China is censoring Google News to force Internet users to use the Chinese version of the site which has been purged of the most critical news reports. By agreeing to launch a news service that excludes publications disliked by the government, Google has let itself be used by Beijing.
--Reporters without Borders
Read the rest in Internet News Article | Reuters.com
Just because the spammers are sociopaths is no reason for webmasters to behave in an equally offensive manner.
--Alan Eldridge
Read the rest in Using Apache to stop bad robots : evolt.org, Backend
RSS is a syndication format. It's not well-suited to carrying ads. It's designed for syndicating content, and content only. No navigation, no design, no advertisements.
--Andy Baio
Read the rest in Wired News: Advertisers Muscle Into RSS
The bottleneck for XML processing in an application is dependent on the application. This is all old ground. To some people the wire size is important so the added cost of compressing/decompressing is fine. For others, processing time is the bottleneck so compounding XML parsing with the cost of compressing/decompressing XML makes things worse not better. There is no one-size-fits-all solution to optimization problems.
--Dare Obasanjo on the xml-dev mailing list, Wednesday, 18 Aug 2004
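Obasanjo's trade-off is easy to demonstrate. As an editorial illustration using only the standard library, compressing a redundant, markup-heavy document shrinks it dramatically — but the compression itself costs CPU time, so whether it helps depends on whether the wire or the processor is your bottleneck:

```python
import gzip
import time

# Highly redundant XML (repeated element names and attributes)
# is exactly the kind of input that compresses well.
xml = b"<items>" + b"<item id='1'>value</item>" * 10_000 + b"</items>"

start = time.perf_counter()
packed = gzip.compress(xml)
elapsed = time.perf_counter() - start

print(len(xml), len(packed))       # compressed form is far smaller
print(f"compression took {elapsed:.4f}s")  # nonzero CPU cost
```

If bandwidth is scarce, the smaller payload wins; if parsing time already dominates, adding compress/decompress cycles only makes the pipeline slower — there is no one-size-fits-all answer, as the quote says.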
apparently all of the talk of alternative XML encodings being much more efficient than text XML is based on consistent use of document classes for each test, so this consistency means the kind of redundancy that makes compression much easier. When someone prototypes an encoding that is orders of magnitude more efficient for arbitrary XML, which Mike said that no one had done yet, I'll more seriously consider the possibility that a binary XML standard might be worth the trouble.
--Bob DuCharme on the xml-dev mailing list, Monday, 22 Nov 2004
Why is it that even simple questions asked about straightforward aspects of Unicode somehow mutate into hairsplitting arguments about who exactly meant what and which version does which...?
--Mark E. Shoulson on the Unicode mailing list, Tuesday, 23 Nov 2004
A company has employees. The current company policy is that the minimum age of employees is 16. What happens when a 15-year-old whiz kid is hired? Validation by the IT department of the data file for this new employee will result in sending up error flags. Should the IT department run the business, or should the business run the IT department?
--Roger L. Costello on the xml-dev mailing list, Tuesday, 24 Aug 2004
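One way to read Costello's question is that business-policy constraints belong in a layer the business can override, rather than as hard schema failures. As an editorial sketch (the rule and record shapes are invented), validation can report policy violations as warnings instead of rejecting the data outright:

```python
# "Soft" validation: policy rules produce warnings the business can
# waive, instead of hard errors that block the data file entirely.
MIN_EMPLOYEE_AGE = 16  # current company policy, subject to change

def validate(record):
    """Return a list of policy warnings; an empty list means no flags."""
    warnings = []
    if record['age'] < MIN_EMPLOYEE_AGE:
        warnings.append(
            f"age {record['age']} below policy minimum {MIN_EMPLOYEE_AGE}"
        )
    return warnings

print(validate({'name': 'whiz kid', 'age': 15}))  # one warning raised
print(validate({'name': 'veteran', 'age': 30}))   # []
```

The whiz kid's record still gets stored; the flag goes to a human who knows the policy changed, which keeps the business, not the schema, in charge.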
post XML Schemas, the W3C brand is fairly diminished as far as new specs are concerned. XBC could easily go the way of XPointer, XML 1.1 and XML Fragment Interchange: like a quarrelsome but beautiful neighbour, decorative but to be avoided.
--Rick Jelliffe
Read the rest in Binary XML? What about Unicode-aware CPUs instead?
I'll admit that there may be people who actually want the features SOAP provides and REST doesn't, maybe even for reasons other than relentless marketing. Counter to the past few years of Web Services propaganda, however, I'd also argue that they're a minority of cases, in projects if not in wallets. Most of the time using SOAP is just adding wasted overhead in the service of an architecture that isn't generally necessary.
Most developers don't need CORBA, nor do most developers need Web Services. The real problem here from my perspective isn't whether or not SOAP sucks, but that gleeful vendors tried to pretend the market for it was much larger than it actually was, and weren't keen on hearing from people who pointed out that most of the time SOAP isn't a particularly clean solution. In fact, much of the time, it's poison.
XML-based technologies seem particularly susceptible to the "if we standardize it, everyone will use it" fallacy. Somehow people seem to have absorbed the standardization aspect of XML while missing its flexibility and the fact that it was explicitly designed as "SGML for the Web". They've just kept going with an insane urge to create more sort-of standards...
--Simon St. Laurent
Read the rest in Eric Newcomer's Weblog: More on WS-* Complexity
As useful as the features and usability tweaks are, there is something much more interesting about Firefox and Thunderbird, and that is the sense you are dealing with well-polished end user applications and not collections of components. Firefox and Thunderbird represent a new breed of open source projects that are first and foremost, products. They have a clear focus on end users, well articulated missions, and critically, keen brand awareness.
--Bill de hÓra
Read the rest in Bill de hÓra
Why do you need a nul? They're not exactly legal characters in plain text; I know of no program that would do anything constructive with them in plain text. A file with arbitrary control characters in it is generally not a plain text file; an escape code certainly has no fixed meaning and where it does have meaning it does things, like underlining and highlighting and other things, that aren't exactly plain text.
--D. Starner on the unicode mailing list, Sunday, 14 Nov 2004
actual data ownership is maybe less important, in some areas, than people think. When we talk about user-contributed data, we're not just talking about my data proper (as in having your mail stored on Gmail or Yahoo! Mail or whatever.) We're also talking about a kind of content that users are contributing to a collective work. So for example, Amazon Reviews - people don't really care about that in the same way. They're not saying "Oh I created that review and I want to be able to export it to Barnes & Noble as well". They're creating it in a particular context of that community.
And when you think about ownership, it really gets portrayed as black and white - when in fact it's grey. It's kind of like valence electrons, where data has a center of attraction but it also is free to move. So when I write an Amazon review, it is mine in some sense - and you'll find that when people submit reviews to Amazon, they may also submit them to somewhere else because they have a copy of it. And nobody particularly cares. It's that data mobility zone that actually creates a lot of the free-flow ideas on the Net.
--Tim O'Reilly
Read the rest in Read/Write Web: Tim O'Reilly Interview, Part 1: Web 2.0
The W3C also doesn't help with its wonderful rule that "Cool URIs should be as hard as possible to remember". Throwing a random year in your namespace URIs is considered Good Practice. I guess we should be thankful they're not URNs.
--Robin Berjon on the xml-dev mailing list, Wed, 27 Oct 2004
SVG is something of a platypus, Ornithorhynchus anatinus (the name of which I remember, curiously enough, from a Mister Rogers' Neighborhood song). It is a graphics format. It is an animation format. It is an interactive GUI format. It is a DOM for performing integrated web services. It's becoming a publishing format. Like the duckbill platypus, it seems like it was stuck in some kind of bizarre transmogrifier ray, a la Vincent Price's The Fly, neither bird nor mammal but somewhere in between.
There's never really been anything like it, to be perfectly honest. Flash often comes to mind as the point of comparison, but in reality, Flash lacks the capabilities for abstraction that are intrinsic to SVG. Don't get me wrong on this - Flash is a very powerful tool for creating impressive looking graphic animations. The difference between Flash and SVG, however, is that Flash is a self-contained world; SVG on the other hand is beginning to shape up into an application that entwines itself within other specifications.
This will become more obvious when SVG moves more into the native space of browsers and operating systems, rather than being a plug-in. The significance of the Mozilla SVG effort, even at its current nascent stage, is that you can create interactive and animated graphics inline to other markup such as XHTML or XUL. This means, among other things, that the graphics on a page are immediately accessible as part of the DOM, are integrated into the whole fabric of a web page both programmatically and visually.
--Kurt Cagle
Read the rest in Metaphorical Web: SVG and the Search for <elegance>
More people than you'd like are either using a browser with no JavaScript support, or have disabled JavaScript in their browser. Current stats (Browser Statistics at W3Schools, TheCounter.com) indicate that this is 8-10 percent of web users. Search engine robots currently don't interpret JavaScript very well either, even though there are reports that Google are working on JavaScript support for Googlebot. If your site requires JavaScript to navigate, don't expect great search engine rankings.
--Roger Johansson
Read the rest in Web development mistakes | Lab | 456 Berea Street
I use XML day in and day out and have learned everything I know by trial and error. I've made many mistakes along the way. I've tried my best to learn from them, but Effective XML was the book that made everything click for me. The best part is that the book went well beyond just helping me see my errors. I've already applied some of the ideas to new work I've done recently and have been able to head off some of the problems I would have encountered. Effective XML is by far the best XML book I've ever read, and quite possibly the best tech book I've read all year.
--Norman Richards
Read the rest in Review: Effective XML
Microsoft was a latecomer to the browser market and scrambled to catch up. Early on, the company stumbled and the first couple of attempts at a Web browser weren't any good. But this was a make-or-break proposition; Microsoft couldn't afford to let Netscape's Web browser displace Windows as the primary interface sitting on the computer between application developers and users.
By the third try, Internet Explorer had pulled even and later became the better Web-browsing application. The rest is history. Unfortunately for Web surfers, it's as if the calendar stopped in 1999.
Actually, that last statement is not fully accurate. There is one major change you can ascribe to Internet Explorer: The PC browser world is in much worse shape. Because management took so long to tackle Internet Explorer's security woes, Microsoft allowed virus writers to exploit vulnerabilities in the browser and wreak untold havoc on unsuspecting computer users.
--Charles Cooper
Read the rest in Why I dumped Internet Explorer | Perspectives | CNET News.com
I think it's an advantage when all the involved languages are based on the same (XML) convention: the data description (XML), the transformation (XSLT) and the output GUI (XHTML). It's easier to play with XHTML and apply the changes to the XSLT, compared to integrating with any scripting language. When using XSLT, the original XHTML tags stay "as is". Therefore, it's easier to understand the XHTML within the XSLT doc than from a script.
--Amir Yiron on the xsl-list mailing list, Wed, 19 May 2004
The original impetus behind XML, at least as far as I was concerned back in 1996, was a way to exchange data between programs so that a program could become a service for another program. I saw this as a very simple idea. Send me a message of type A and I'll agree to send back messages of types B, C, or D depending on your A. If the message is a simple query, send it as a URL with a query string. In the services world, this has become XML over HTTP much more than so called "web services" with their huge and complex panoply of SOAP specs and standards. Why? Because it is easy and quick. Virtually anyone can build such requests. Heck, you can test them using a browser. That's really the big thing. Anyone can play. You don't have to worry about any of the complexity of WSDL or WS-TX or WS-CO. Since most users of SOAP today don't actually use SOAP standards for reliability (too fragmented) or asynchrony (even more so) or even security (too complex), what are they getting from all this complex overhead. Well, for one, it is a lot slower. The machinery for cracking a query string in a URL is about as fast as one can imagine these days due to the need services have to be quick. The machinery for processing a SOAP request is probably over ten times as slow (that's a guess). Formatting the response, of course, doesn't actually require custom XML machinery. If you can return HTML, you can return XML. It is this sort of thinking that being at a service company engenders. How do you keep it really simple, really lightweight, and really fast. Sure, you can still support the more complex things, but the really useful things may turn out to be simplest ones.
--Adam Bosworth
Read the rest in Adam Bosworth's Weblog: KISS and The Mom Factor
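Bosworth's "anyone can play" claim rests on how little machinery a query string needs. As an editorial illustration (the service URL is made up), cracking one in Python is a couple of standard-library calls — no envelope, no WSDL, and you can test the same request by pasting it into a browser:

```python
from urllib.parse import urlsplit, parse_qs

# A "send me a message of type A as a URL with a query string" request.
url = "http://example.com/stock?symbol=IBM&format=xml"

parts = urlsplit(url)            # scheme, host, path, query, fragment
query = parse_qs(parts.query)    # query string -> dict of lists

print(parts.path)          # /stock
print(query['symbol'][0])  # IBM
print(query['format'][0])  # xml
```

That is the entire request-parsing layer; the response can be any XML the service cares to emit, which is why Bosworth guesses the SOAP machinery for the same exchange is an order of magnitude slower.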
Firefox is suffering from a success crisis. The bad news is so many people can't get to the site. The good news is its popularity.
--Stephen Pierzchala
Read the rest in Firefox 1.0 fans clog Mozilla site | CNET News.com
Speaking for myself, when I see mixed capitalization, I switch from thinking in acronym/abbreviation mode to thinking in word mode. The goal with ID should be to immediately identify that you're referring to the word Identification, as opposed to Id. Of course, a millisecond's thought figures this out, but I see this as a kind of cognitive speed bump.
--Chris B. Behrens
Read the rest in Capitalization: Should you use ID or Id
I'm actually still a little disappointed by today's Web browsers.
Typographically they're back in the 1970s or early 1980s in some ways. Hung punctuation? Hyphenation? ffl ligatures (for Latin scripts)?
In terms of hypertext, yes, the distributed mostly-working Web was a great success, but Web browsers today haven't caught up to documentation viewers from 1994, nor to CD-ROM authoring software of years before.
--Liam Quin on the xml-dev mailing list, Thursday, 21 Oct 2004
correctness always comes first, however rare the scenario; and I also try to live by the principle that a clean API is more important than a 2% performance improvement. If you want a 2% performance improvement, just wait for next week's hardware.
--Michael Kay on the xml-dev mailing list, Thursday, 28 Oct 2004
you could in fact drop the notion of "Resource" from the TAG's Web Architecture document, and it would work about as well in terms of keeping the software running smoothly.
This would seem a little perverse, though. After all, the "R" in URI ought to stand for something, and if only for the mental comfort of our readers, we ought to say what.
But in practical terms, in the Web as implemented, a resource is simply "that which is named by a URI." That's all the system knows about. Any further assumptions about what a Resource can or can't be, from the Web software's point of view, are simply vacuous, because they have no observable effect. More damning, such assertions are non-scientific, because there is no falsifiable hypothesis that can be constructed to test them.
--Tim Bray
Read the rest in ongoing - On Resources
Server-side processing language gives you much flexibility in terms of applying logic upfront. XSLT takes this to a whole nother level. Before XSLT (for me) HTML was a dead horse, and if I wanted to get crafty with it, I had to intertwine the HTML code with my server-side code. This gets messy messy messy, and further complicates the server-side code. XSLT allows for full separation of presentation and server-side processing. Not just that, but XSLT allows you to *program* (I call it that) at the presentation level.
--Karl J. Stubsjoen on the xsl-list mailing list, Wed, 19 May 2004
Binary XML in my opinion flies in the face of loosely-coupled interoperability. By adding a "standard" binary XML format (be it based on ASN PER/BER or some other scheme) the interoperability gets bifurcated and the advantage of a single, auditable, interoperable format to be used in loosely-coupled environments disappears. In closely-coupled systems, you can use something else than XML (or a binary format). Since the coupling is close, you do not need to follow a standard (although there are some reasons why you still may use XML).
--Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003
I like building web apps that (from the user's point of view anyway) have only one url. Remembering that xslt is xml and itself easy to manipulate with xslt or dom code, it's perfectly possible, indeed practical, to have one asp/aspx page which has a resource of many interfaces (real(on disk) or virtual(created on the fly) (probably cached)) and many data sources which map together to form a whole application. Sort of an application that generates itself in response to the user's input (according to rules of course).
--Rod Humphris on the xsl-list mailing list, Wed, 19 May 2004
Stupid is not illegal.
--Norm Walsh on the www-tag mailing list, 28 Jul 2003
W3C seems like a parliament too far away from practical needs, caught up in political vested interests or simply jammed into ethereal dialogs.
--Didier PH Martin on the xml-dev mailing list, Wed, 07 Jul 2004
I started using XQuery back in the summer of 2002. What struck me immediately was that XQuery coding was fun — it had an appeal reminiscent of Servlet and Java programming. By contrast a lot of the other XML work I've been doing with Schema and DOM are, well, a lot less fun. If a particular technology is important-yet-tedious, it may succeed but won't blossom because people won't feel any allegiance to it. In this respect, XQuery is happily more like servlets than like Schema or DOM and I find that to bode well for its future.
--Jason Hunter
Read the rest in A Conversation with Jason Hunter on XML and Java Technologies
The last two weeks have really shown off the difference between open source projects and closed source to me.
The short version is “Closed source software can go to hell.”
libxml2 is supported on the GNOME XML mailing list by Daniel Veillard at Red Hat. The responsiveness on the mailing list is utterly amazing.
I got one response in 8 minutes, the next one in 2 hours.
Compare and contrast with a closed source vendor that shall remain nameless, but the first one to pop into your head is probably the right one.
9 days and counting so far for a useful meaningful response.
Repeat after me: “I will not be a share cropper.”
--Victor Ng
Read the rest in crankycoder - vlibxml2 - first usable release
When you have two different organizations trying to push two different vocabularies for solving the same problem, it doesn't help the supply chain. If you're a small guy, supporting a bunch of different schemas gets difficult.
--Ron Schmelzer, ZapThink
Read the rest in XML: Too much of a good thing? | CNET News.com
The DOM is the infant of HTML DOM 0 implementations, XML when it didn't have namespaces, and then some heavy-duty namespace grafting on top. It's a tribute to its inceptors that it's not far more monstrous.
--Robin Berjon on the xml-dev mailing list, Wed, 27 Oct 2004
Insofar as the web architecture is concerned, it seems to me that we need not speak about whether one resource is actually referring to the other. It might. It might not. And even if it does, it is likely impossible to tell based on the representation itself. What truly counts is that, because the representation contains the link, a *web* relationship between the two resources can be deemed to exist, in terms of that link.
It may be that the only actual relationship between any two resources is a web relationship expressed by some link in the representation of one of those resources. But it's the web relations (not other kinds of relations) that matter for the web machinery. Who/what is actually doing the referring is not central to making the web work.
--Patrick Stickler on the www-tag mailing list, Monday, 25 Oct 2004
The 'single schema' approach only works in the limited case where the information is a small set, usually in a highly regulated and constrained domain.
Also, it often works when people just do a 'one-off' interchange with limited participants. So they have some initial success, then try to scale across a whole community, and then discover it is not going to be a simple linear growth path. Not to mention the need to express more than just the simple constraint rules and share those across the community.
--David RR Webber on the xml-dev mailing list, Tuesday, 07 Sep 2004
There are a lot of reasons why the Web has been a huge hit with the developer crowd. But, IMHO, the main "success" factor in developing for the Web is visibility. The core specs of the Web (HTTP, HTML, CSS) are all there ready to be uncovered. Countless times I have leveraged the visibility of the Web to troubleshoot problems, learn how to solve new problems, and gain invaluable insight, all because the "guts" of the Web were there ready to be digested. It is a very important aspect of the Web that must be utilized in the WS-* world.
Therefore, I think it is imperative that WS-* vendors engender visibility of the specs at the protocol level (a binary infoset... yuck) and simplified exposure of the specs at the programming model level. Trust me, you will be very relieved when you have to troubleshoot an interop problem with a Google service in five years.
--Dave Bettin
Read the rest in Show me the angle brackets
To do to XML what the relational model has done to CODASYL, I think that we need not only a query language but also to break tree fragments into atoms that are easier to manipulate, query, and recompose, and RDF strikes me as the candidate that seems (at least technically) able to do so today (by RDF, I mean the RDF triples basic data model, not the XML syntax).
In other words, IMO the join operation isn't enough if all you can join are tree fragments that are by nature not "merge friendly", and one of the biggest (and usually underestimated) benefits of RDF is its ability to "auto-merge" information from multiple sources.
--Eric van der Vlist on the xml-dev mailing list, Saturday, 23 Oct 2004
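The "auto-merge" property van der Vlist describes can be sketched with plain Python sets, treating each RDF statement as a (subject, predicate, object) tuple. The URIs and property names below are invented for illustration only:

```python
# RDF's basic data model: a graph is just a set of
# (subject, predicate, object) triples.  Merging information from
# two independent sources is plain set union -- no tree alignment
# or schema negotiation is needed.

source_a = {
    ("urn:book:42", "dc:title", "XML in a Nutshell"),
    ("urn:book:42", "dc:creator", "urn:person:7"),
}

source_b = {
    ("urn:person:7", "foaf:name", "Alice"),
    ("urn:book:42", "dc:title", "XML in a Nutshell"),  # duplicate is harmless
}

merged = source_a | source_b  # the entire merge operation
assert len(merged) == 3       # the duplicate triple collapses automatically
```

Merging two XML trees that describe the same book, by contrast, requires application code that knows which elements correspond to which.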
To me, the ultimate boon coming from the XML world will probably end up being a much easier way to create a standardized document format that would be open source and universal. Think about a format that would embed annotations and commentary, revision marks, stylesheets, and the like in a format that every other word processing vendor besides Microsoft would directly support. It's a crime that more than twenty-five years after the PC-friendly word processor was written there is still no definitive open standard.
--Kurt Cagle on the "Computer Book Publishing" mailing list, Sunday, 25 Mar 2001
So a benefit of the RDF solution is that instead of leveraging existing investments in relational data stores that are commonplace in the enterprise, one can use a different, potentially incompatible data store? Have you missed the occurrences within the database world in the past few years with regard to Object Oriented Databases and Native XML databases? This should be taken to heart whenever one touts some new data storage technology as a replacement for relational stores.
--Dare Obasanjo on the xml-dev mailing list, Thursday, 24 Apr 2003
But instead I stay awake and keep clicking. The news channels melt into each other. I recognise the names but it is the first time I'm seeing many of these anchormen and reporters. I stick with Wolf Blitzer of CNN: that's a face I know.
The news channels here are not like the news channels I am used to. You should try watching al-Jazeera - Bad news! Serious news! More bad news! - and see what it does to your day. These people here are doing a live entertainment show, not news. The breakfast shows are the ones that annoy me most. I can't stand all this happiness this early in the morning. News about explosions in Baghdad and American troops refusing to follow orders is sprinkled with the cheerful banter of Mr Weatherman and jokey Miss Anchorwoman, and it all gets watered down.
--Salam Pax
Read the rest in Guardian Unlimited | Special reports | The Baghdad Blogger goes to Washington: day four
Open source is the ticket out of the banality Microsoft has imposed.
--Louis Suárez-Potts
Read the rest in Technology Review: An Alternative to Windows
I don't have a lot of time or patience to fiddle around getting my different applications to play nice. So when forced to decide between competing software alternatives, yours truly has nearly always gone with the Microsoft offering.
Okay, I'm a wimp who takes the path of least resistance. I'm also less interested in creating the ultimate computing experience known to mankind than in making sure things work the way they should. That's the upside of sticking with a convicted predatory monopolist: You can assume a high degree of uniformity and application integration.
--Charles Cooper
Read the rest in Why I dumped Internet Explorer | Perspectives | CNET News.com
If there's one thing that the RSS Draconian Wars taught us, it's that you don't want to be involved in any discussion of XML and error handling.
--Phil Ringnalda
Read the rest in phil ringnalda dot com: PHP turns evil
When you use defined standards and valid code you future-proof your documents by reducing the risk of future web browsers not being able to understand the code you have used.
--Roger Johansson
Read the rest in Developing With Web Standards | 456 Berea Street
Bloggers and radio hosts pound newspapers for bias that pales in comparison to their own. The same people who pilloried former New York Times Executive Editor Howell Raines for mounting a "crusade" against the Augusta National Golf Club’s men-only policy devoted their energies to the swift boat story with an obsessiveness impossible to contemplate in a general news publication. The same critics who stomped up and down when the Los Angeles Times made the mistake of saying none of the Swift Boat Veterans served on a boat with Kerry (actually, one did) seemed altogether blasé when the coverage for which they’d been begging exposed the accusatory veterans as being very far from scrupulously truthful. (For instance, in the original commercial, military doctor Van O’Dell said, "John Kerry lied to get his Bronze Star....I know, I was there, I saw what happened." In fact he wasn’t there, neither when Kerry was wounded nor when he gave his account of the incident.)
--Matt Welch
Read the rest in Reason: A Swift Boat Kick in the Teeth: How the mainstream media grapple with partisans
From my markup-centric perspective, RDF is ugly, high-level, and excessively charged with meaning encoded so abstractly as to be nearly cryptographic. Oh, and it's painfully constraining since it can't figure out how to deal with mixed content, a common human construct.
--Simon St.Laurent on the xml-dev mailing list, Tuesday, 20 Aug 2002
Market Dominance
Netscape had it by being first.
Microsoft has it by being everywhere.
Firefox will have it by being best.
--Ben Goodger
Read the rest in Inside Firefox: Market Dominance
Your job, should you choose to accept it, is to build reliable, available, maintainable, scalable systems on a schedule of twelve to sixteen weeks or less, on a very tight budget (where very means 10-100 times less than what people paid in the past), where some of the sponsors think what you're building is just some kind of fancy-dan web site anyway. To do this you need smaller, tighter teams, not only because of cost factors, but because even medium-size teams just aren't going to get a whole lot done in 3 months due to coordination overhead. You also need guerrilla development approaches, not only because of the time factors, but because services and cross-business integrations quickly dispatch any quaint notions of 'staging' and 'rollout' you might have carried over from database-backed websites or middleware. In those circumstances you need to be fastidious in driving out all forms of waste and inefficiency from systems building. So you could be forgiven for thinking that protocol construction is not the best use of anyone's time. Relax. Take one off the shelf. More often than not, it'll be HTTP.
Why do this? That's easy - designing protocols is hard work. It takes smart people a long time to come up with good ones, and the skill and mind sets to do it are rare, much rarer than folks who can design great APIs. Indeed protocol and API design are dealing with sufficiently different sets of problems that Sun's Geoff Arnold reckons they could be exclusive to some degree and Mark Baker thinks to switch from one to the other requires zen-like mental gear shifting. GUI toolkits built on top of protocol construction toolkits won't altogether save you from banging your head off the monitor in frustration as you design the thing. Consider that being able to reinvent something really really fast might not be as smart as re-using what already exists. And no matter what you might come up with for your business problem, it simply will not be battle-hardened the way globally deployed application protocols are.
--Bill de HÓra
Read the rest in Bill de hÓra: Monster Oriented
XML, used in conjunction with, for example, Java technologies and SQL, does provide developers of digital archives and libraries with a significant means for tagging data that more effectively enables interoperability between and across systems, particularly in distributed network environments. This is mostly backend stuff, i.e., it is invisible to the end-user (the client), but it enables robust search and retrieval of data in ways not possible without it.
--James Landrum on the xml-dev mailing list
People who want to do things that experience has shown are short-sighted are sometimes called innovators while their critics are labeled Luddites or Sabots. After the innovators do their damage, it is a little late to hit them with shoes. We really do need to know if a binary is something only some applications need, and therefore, a generalized spec and standard are not required. Once a binary is approved for all XML applications, XML will rarely be seen as the programmers rush for the binary format for the same reason countries fear they will be second class without nukes.
--Claude L (Len) Bullard on the xml-dev mailing list, Wed, 14 Apr 2004
I think for a lot of Eudora users, myself included, the lack of support for HTML email is a feature, not a bug.
--Robert Gruber on the WWWAC mailing list
There are a whole lot of reasons why HTML form-based Web applications work less than perfectly in many situations. But the Web browser interface also has a few advantages. A Web form's boxes and buttons limit what a programmer can ask a user to do. That's frustrating for the programmer - but it also means that users have fewer new things to learn when they're using a Web application, and that they'll be introduced to them one by one.
--David Walker
Read the rest in Shorewalker.com - Simplicity and ubiquity matter (or, How reality mugged Joel Spolsky)
there's no harm in using XML Schema to check data against the business rules, so long as you realize this is *an* XML Schema, not *the* XML Schema. We need to stop thinking that there can only be one schema.
--Michael Kay on the xml-dev mailing list, Thursday, 19 Aug 2004
I was a fool for believing that Office 2003 would open up the data generated by MS's cash cow products to 3rd party XML applications. Giving the peasants, oops sorry, "customers" some options ain't no way to run an evil empire :-)
One Word to write them all, one Access to find them, one Excel to count them all, and thus to Windows bind them.
--Mike Champion on the xml-dev mailing list, Saturday, 12 Apr 2003
The more I look at what's happening with WS*, the more I think it looks exactly like what the OMG did with CORBA - a blizzard of specs no one cares about, which tends to make vendor interop harder and harder.
--James Robertson
Read the rest in Smalltalk Tidbits, Industry Rants: What CORBA got wrong?
XML is really really good for interchange and really really irritating for in-memory manipulation. I think we all ought to be more up-front about this.
--Tim Bray on the xml-dev mailing list, Wed, 21 Aug 2002
On the last system I worked on, we were struggling with SOAP and switched to a simpler REST approach. It had a number of benefits.
Firstly, it simplified things greatly. With REST there was no need for complicated SOAP libraries on either the client or the server; we just used plain HTTP calls. This reduced coupling and brittleness. We had previously lost hours (possibly days) tracing problems through libraries that were outside of our control.
Secondly, it improved scalability. Though this was not the reason we moved, it was a nice side effect. The web server, the client HTTP library, and any HTTP proxy in between understood things like the difference between GET and POST and when a resource has not been modified, so they could offer effective caching, greatly reducing the amount of traffic. This is why REST is a more scalable solution than XML-RPC or SOAP over HTTP.
Thirdly, it reduced the payload over the wire. There was no need for SOAP envelope wrappers, and it gave us the flexibility to use formats other than XML for the actual resource data. For instance, a resource containing the body of an unformatted news headline is simpler to express as plain text, and a table of numbers is more concise (and readable) as CSV.
--Joe Walnes
Read the rest in Joe Walnes, REST and FishEye
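Walnes's first two points can be sketched with nothing but the Python standard library; the URL and date below are illustrative, not from his system:

```python
import urllib.request

# A REST-style call is just a plain HTTP request -- no SOAP
# envelope, no generated client stubs, no WSDL.
req = urllib.request.Request(
    "http://example.com/headlines/42",
    headers={
        # Conditional GET: any HTTP cache or proxy along the way
        # understands this header and can answer "304 Not Modified"
        # itself, which is where the scalability win comes from.
        "If-Modified-Since": "Sat, 01 Jan 2005 00:00:00 GMT",
    },
)

assert req.get_method() == "GET"  # safe and cacheable by default
# urllib.request.urlopen(req) would then return the resource body as
# plain text, CSV, or XML -- whatever representation suits the data.
```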
When developers argue on a list about the best way to do something, it's the first person to code it who wins. Even when they don't "win" per se, the one with the code has a great advantage. If someone else said it can't be done, you've just proven them wrong.
--Joshua Marinacci
Read the rest in java.net: My 1 year anniversary at Java.net: the social side of software. [August 21, 2004]
XML in general doesn't consider the difference between CDATA and other text semantically meaningful; XSLT in particular discards that distinction on input. Trying to treat CDATA boundaries as meaningful is a Very Bad Idea.
--Joseph Kesselman on the xalan-j-users mailing list
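Kesselman's point is easy to check with any conforming parser; here is a minimal sketch using Python's standard ElementTree, which, like XSLT, reports CDATA content as ordinary character data:

```python
import xml.etree.ElementTree as ET

# Two documents that differ only in CDATA bracketing...
plain = ET.fromstring("<msg>a &lt; b</msg>")
cdata = ET.fromstring("<msg><![CDATA[a < b]]></msg>")

# ...are indistinguishable after parsing: CDATA is purely a lexical
# convenience for authors, not part of the document's information.
assert plain.text == cdata.text == "a < b"
```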
It's very easy for people to switch to a new search engine. It costs little effort and no money to try a new one, and it's easy to see if the results are better. And so Google doesn't have to advertise. In a business like theirs, being the best is enough.
The exciting thing about the Internet is that it's shifting everything in that direction. The hard part, if you want to win by making the best stuff, is the beginning. Eventually everyone will learn by word of mouth that you're the best, but how do you survive to that point? And it is in this crucial stage that the Internet has the most effect. First, the Internet lets anyone find you at almost zero cost. Second, it dramatically speeds up the rate at which reputation spreads by word of mouth. Together these mean that in many fields the rule will be: Build it, and they will come. Make something great and put it online. That is a big change from the recipe for winning in the past century.
--Paul Graham
Read the rest in What the Bubble Got Right
It's a problem that people should have to pay for a whole OS upgrade to get a safe browser. It does look like a certain amount of this is to encourage upgrade to XP.
--Michael Cherry, Directions on Microsoft
Read the rest in Microsoft to secure IE for XP only | CNET News.com
The web browser wars are over and Microsoft won, right?
Well someone's forgotten to tell Ben Goodger and his team at the Mozilla Foundation because this Kiwi software engineer is taking market share from Internet Explorer (IE) with Firefox, the browser that's smaller yet smarter than just about anything else available.
--Paul Brislen
Read the rest in New Zealand News - Technology - Kiwi leads effort to build a better browser
Such is often the case when designing with CSS. When working with semi-complex layouts, I frequently encounter challenges that end up slowing me down. I’m getting familiar with these road blocks, and can often predict where I’ll find them. Having patience, or knowing what to try to get around them prevents head from going through monitor.
Without a doubt, the biggest challenge I encounter each time is in wrangling Microsoft’s Internet Explorer browser. This devil does not play fair. It often follows no rules, and its behavior defies all common logic. It will double margins for no apparent reason. Borders disappear, 62 pixels magically turn into 143 pixels. It dodges left when other browsers go right. I’ve decided to call this phenomenon “the IE Factor”.
--Douglas Bowman
Read the rest in Stopdesign | The IE Factor
Note that “works in any web browser” does not mean “looks the same in every web browser”. Making a document look identical across browsers and platforms is next to impossible. Not even using only images will make a website look exactly the same everywhere. Documents that are published on the web will be accessed by a wide variety of browsing devices on several operating systems, with monitors of differing size and quality (or no monitor at all), by users who may have changed their browser’s default text size and other preferences. Accepting this will make your life a lot less frustrating.
--Roger Johansson
Read the rest in Developing With Web Standards | 456 Berea Street
The problem is that the specs keep changing in non-backwards-compatible ways. Sure, I can implement WS-Security. Which version? Each version has its own namespace. WS-Addressing, again: there have been several revisions, each with its own namespace. Am I supposed to wait until I think they will never change again before I implement a solution using one of these specs? I may implement a solution today and have perfect interoperability today, but in 3 months, the spec will get a minor revision, a new namespace, and if any of the collaborative peers decides to move to the new spec, they can't talk to me anymore. What's even worse is when my software can't talk to a slightly old version of itself because we decide to use an implementation of WS-Whatever that is slightly newer.
I don't expect these things to not change. And on some level, changing the namespace seems like the obvious way to version the spec, but it sure does throw a wrench into the real world, where deployed software has to *stay* interoperable for years and not just months.
--Erv Walter
Read the rest in WS-* Specifications: Are there too many and are they too complex?
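The breakage Walter describes is mechanical: XML tooling matches elements by namespace-qualified name, so a revised namespace URI makes otherwise-identical messages invisible to upgraded code. A minimal sketch with Python's standard ElementTree (both namespace URIs are invented):

```python
import xml.etree.ElementTree as ET

# A message written against the "2003" revision of a hypothetical
# WS-Whatever spec.
message = ET.fromstring(
    '<env xmlns:ws="http://example.org/ws-whatever/2003">'
    '<ws:ReplyTo>http://client.example.org/</ws:ReplyTo>'
    '</env>'
)

OLD = "{http://example.org/ws-whatever/2003}ReplyTo"
NEW = "{http://example.org/ws-whatever/2004}ReplyTo"

# Code built against the old namespace finds the header...
assert message.find(OLD) is not None
# ...but code upgraded to the new revision silently sees nothing,
# even though the element's local name and meaning are unchanged.
assert message.find(NEW) is None
```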
Web Services emerged at a time when some of us actually believed that XML was a uniform solution to disparate problems, and there was a long time when XML and Web Services were treated as synonyms. Maybe what's happening now is the result of recognizing that a large number of programmers and users aren't actually enterprise developers - we have no more need of WS-Transfer than we have of an S/390 running a dedicated message queue system. For the most part, Web Services and the WS-* set of specifications address problems many people just plain don't have.
--Simon St. Laurent
Read the rest in Are Web Services receding?
Whether a blog leans left, right or sideways, as a collective force they are working to keep reporters honest. Journalists may not like their methods -- having your work sliced and diced in public is no fun -- but the end result may be better-quality news.
--Adam L. Penenberg
Read the rest in Wired News: Blogging the Story Alive
Metaphorically, a statement is like a molecule in which the predicate is the chemical bond between two atoms. The only structures in RDF are statements, and each statement associates exactly one subject with exactly one object. More complex structures, like topic map associations, must be built up one statement at a time.
--Thomas B. Passin on the XML Developers mailing list, Friday, 04 Jun 2004
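Passin's molecule metaphor maps directly onto code: each statement bonds exactly one subject to exactly one object, and anything richer must be assembled one statement at a time. A small sketch in plain Python, with all the vocabulary names invented:

```python
# Each RDF statement associates exactly one subject with exactly one
# object through a predicate -- one "bond" per tuple.
graph = set()

def state(subject, predicate, obj):
    graph.add((subject, predicate, obj))

# A three-way association (which a topic map could express directly)
# has to be built up: invent a node for the association itself and
# attach each member with its own binary statement.
state("_:assoc1", "rdf:type", "ex:Employment")
state("_:assoc1", "ex:employee", "ex:Alice")
state("_:assoc1", "ex:employer", "ex:Acme")

# Every edge in the graph remains a simple three-part statement.
assert all(len(stmt) == 3 for stmt in graph)
assert len(graph) == 3
```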
The debate over who is and isn’t a journalist is worth having, although we don’t have time for it now. You can read a good account of the latest round in that debate in the September 26th Boston Globe, where Tom Rosenthiel reports on the Democratic Convention’s efforts to decide “which scribes, bloggers, on-air correspondents and off-air producers and camera crews” would have press credentials and access to the action. Bloggers were awarded credentials for the first time, and I, for one, was glad to see it. I’ve just finished reading Dan Gillmor’s new book, We the Media, and recommend it heartily to you. Gillmor is a national columnist for the San Jose Mercury News and writes a daily weblog for SiliconValley.com. He argues persuasively that Big Media is losing its monopoly on the news, thanks to the Internet – that “citizen journalists” of all stripes, in their independent, unfiltered reports, are transforming the news from a lecture to a conversation. He’s on to something. In one sense we are discovering all over again the feisty spirit of our earliest days as a nation, when the republic and a free press were growing up together. It took no great amount of capital and credit – just a few hundred dollars – to start a paper then. There were well over a thousand of them by 1840. They were passionate and pugnacious and often deeply prejudiced; some spoke for Indian-haters, immigrant-bashers, bigots, jingoes, and land-grabbers. But some called to the better angels of our nature -- Tom Paine, for one, the penniless immigrant from England, who, in 1776 – just before joining Washington’s army – published the hard-hitting pamphlet, Common Sense, with its uncompromising case for American independence.
It became our first best seller because Paine was possessed of an unwavering determination to reach ordinary people – to “make those that can scarcely read understand” and “to put into language as plain as the alphabet” the idea that they mattered and could stand up for their rights.
So the Internet may indeed engage us in a new conversation of democracy. Even as it does, you and I will in no way be relieved from wrestling with what it means ethically to be a professional journalist. I believe Tom Rosenthiel got it right in that Boston Globe article when he said that the proper question is not whether you call yourself a journalist but whether your own work constitutes journalism. And what is that? I like his answer: “A journalist tries to get the facts right,” tries to get “as close as possible to the verifiable truth” – not to help one side win or lose but “to inspire public discussion.” Neutrality, he concludes, is not a core principle of journalism, “but the commitment to facts, to public consideration, and to independence from faction, is.”
--Bill Moyers
Read the rest in Society of Professional Journalists - SPJ National Convention
93.7% still seems like a really daunting market share for IE. But turn it around: the remaining 6.3% is more than one out of every 20 web users (also known as "potential customers" to commercial websites). Just three months ago it was slightly less than one in 20; today it's trending toward one in 10. That's significant.
Many companies write web applications that support only IE. Although I've never agreed with that strategy, I can see how some are convinced that it's a reasonable one. But I suspect the problems with an IE-only approach will quickly become clearer.
--Glenn Vanderburg
Read the rest in You Can't Ignore Firefox
unique visitors are an irrelevant statistic. Most such visitors are sampling a single page to get an answer, rather than engaging with your site. Instead of tracking them, count loyal users as a key metric for site success.
--Jakob Nielsen
Read the rest in When Search Engines Become Answer Engines (Jakob Nielsen's Alertbox)
The problem of linking things together on the Web takes an almost vertical ascent into complexity, with layer of abstraction piling on layer of abstraction very quickly.
All you need to do is move slightly beyond the "link this to this" model of HTML and you are in deeply complex philosophical territory. If you doubt that this is the case, I would suggest you take a look at the HyTime standard. The sheer size and complexity of it amply demonstrate the enormity of the problem hidden behind the simple term "linking".
--Sean McGrath
Read the rest in ITworld.com - XML IN PRACTICE - XLink: A Hyperspace Oddity
If you're an Internet Explorer user, you owe it to yourself to download Firefox and see how a real browser works.
--Preston Gralla
Read the rest in The World's Best Browser Just Got Better
Because people's appetites for esoteric sports statistics are so insatiable, the data reports that get exchanged and formatted for display are often incredibly intricate. For our industry, the benefits of XML are clear: consistent input no matter what the provider, what the sport, what the native language.
--Alan Karben, chairman of the SportsML Working Group
Read the rest in XML: Too much of a good thing? | CNET News.com
If you’re going to package up data in messages to ship it around, XML is usually a good way to package it up. There are exceptions, of course: large binary objects like video streams are an obvious one. But across the universe of business data, XML hits a sweet spot for packaging up a high percentage of it.
There are a few nice things about XML (in particular the internationalization) but the key advantage is that it’s a thick enough buffer that when you get an XML message from me, you typically can’t peek through it to see whether I’m running Windows and SQL server or Solaris and MySQL. This is a Good Thing.
--Tim Bray
Read the rest in ongoing · Web Services Theory and Practice
Despite rumors to the contrary, the adult entertainment industry is not developing its own dialect of Extensible Markup Language dubbed XXXML.
Aside from that, it's hard to find an industry or interest that isn't taking advantage of the fast-growing standard for Web services and data exchange. In the six years since the main XML specification was first published, it's spawned hundreds of dialects, or schemas, benefiting everyone from butchers to bulldozer operators wishing to easily exchange information electronically.
--David Becker
Read the rest in XML: Too much of a good thing? | CNET News.com
The DOM very rarely makes sense, especially when it comes to namespaces. If you want to retain your sanity, avoid it.
--Michael Kay on the xml-dev mailing list, Friday, 3 Sep 2004
HTML will never die. Gencoding never does. There should always be that easy-to-learn, easy-to-apply vocabulary that gets jobs done fast.
--Claude L (Len) Bullard on the xml-dev mailing list, Friday, 30 Apr 2004
As many of you may know, the sites hosted by ibiblio are not accessible from the People's Republic of China. This is due to ibiblio hosting Tibet-related sites, censored by the Chinese government. What you might not know is that that kind of censorship wouldn't be possible without the cooperation (complicity?) of some large U.S. corporations: Cisco Systems, Google Inc., Yahoo Inc., Microsoft Corp., Sun Microsystems, inter alia.
--Paola E. Raffetta on the webgroup mailing list, Tuesday, 7 Sep 2004
No, it doesn't work 100% of the time. It works 95% of the time, and it reduces the problems you'll have twenty-fold. Like everything else in sociology, it's a fuzzy heuristic. It kind of works a lot of the time, so it's worth doing, even if it's not foolproof. The Russian mafia with their phishing schemes will eventually work around it. The idiot Floridians in trailer parks trying to get rich quick will move on. 90% of the spam I get today is still so hopelessly naive about spam filters that it would even get caught by the pathetic junk filter built into Microsoft Outlook, and you've got to have really lame spam to get caught by that scrawny smattering of simplistic search phrases.
--Joel Spolsky
Read the rest in Joel on Software - It's Not Just Usability
I should note, to be fair, that the XML-on-the-Web idea fizzled just as fast as 3D-on-the-Web did. All of the supposed client-side HTML killers in the late 1990s either died quickly (VRML, XML, ActiveX controls), are on life support (Java applets), or have found niches and learned to cohabitate peacefully (Flash, JavaScript, PDF).
--David Megginson on the XML Developers mailing list, Friday, 30 Apr 2004
File-sharing seems 2 occur most when people want more QUALITY over quantity. One good tune on a 20-song CD is a rip. The corporations that created this situation will get the fate they deserve. 4 better or 4 worse, 4 every action there is a reaction. An MP3 is merely a tool. There is nothing 2 fear.
--Prince
Read the rest in Wired 12.09: PLAY
In the long run, I think many people will be using XQuery and XML. Sometimes the storage will be basically relational with XML extensions, sometimes it will be basically XML with extensions to optimize typical XQuery operations. But the queries will look rather similar in either case, and vendors of relational databases and native XML databases will be working hard on solving many of the same problems.
For now, XML databases do seem to be a niche market. People are very conservative when it comes to changing their database technology...
--Jonathan Robie on the xml-dev mailing list, Wed, 25 Aug 2004
once I understood that a piece of XML can have any number of valid schemas then I really started to see some of the benefits of XML based messaging. System A can think of a piece of XML as Schema A, System B can see the same piece of XML as Schema B. They are both right.
--James Avery
Read the rest in The 7 Fallacies of XML Validation
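Avery's point can be sketched without any formal schema language at all: below, two hypothetical systems apply their own validity checks to the same document, using Python's standard ElementTree. The element names and the two "schemas" (here just validator functions standing in for formal XSD or RELAX NG schemas) are invented:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<order><id>42</id><total>19.95</total><note>rush</note></order>"
)

# "Schema A": System A only cares that an order carries a numeric id.
def valid_for_system_a(order):
    return order.findtext("id", "").isdigit()

# "Schema B": System B only cares that a parseable total is present.
def valid_for_system_b(order):
    try:
        float(order.findtext("total", ""))
        return True
    except ValueError:
        return False

# The same bytes satisfy both views; neither is *the* schema.
assert valid_for_system_a(doc) and valid_for_system_b(doc)
```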
XHTML is a stopgap, but the gap needs stopping -- alternatives such as XML+XSLT aren't universally supported (e.g. Mark Pilgrim's Atom feed looks terrible in Opera and Safari). Likewise, XML+XSLT has the problem that if you have M XML source formats and N output formats (e.g. for different browsers, different devices, different classes of users ...) then you have M x N stylesheets to maintain. With XHTML as the single source format, you have N stylesheets to maintain. Sure, that has the cost of the kludges you must do to force-fit information into XHTML, but it's not at all clear to me that the cost of this outweighs the practical benefit. And of course you can use some better XML format as the single source format, but then you have to design it, deploy it, change it, manage the versions ... and deal with the snotty users who don't believe you've added enough value over (X)HTML to justify all these costs.
I'm no fan of XHTML or the W3C process that has produced it, but I'm beginning to think that it's like democracy -- the worst of all possible approaches, except for the known alternatives.
--Michael Champion on the XML Developers mailing list, Wed, 14 Jul 2004
Word asserts, 70% of all users want a bulleted list right here! or a URL with a blue underline right here! -- and annoys the hell out of 30% of all users. Worse, you will inevitably move from the 70% group to the 30% group several times in each document.
Microsoft hires very smart engineers -- I would say the smartest in the business. When they see that some number of their users have some writing problem they believe a computer could be trained to solve, they do a better job than anyone at writing the code to solve that problem. They talk all the time about "knowledge workers" and their needs. What the Word team lacks, in my view, is an awareness that, when a user is trying to get his or her own work done, the user is always smarter than the technology. Assuming that smart people aren't their market is the surest way to produce a bad word processor, which is exactly what I think they've done.
--Marc Hedlund
Read the rest in O'Reilly Network: Microsoft Word and "Smarter Than"
It took far too long for the JCP to acknowledge that those in the trenches knew EJB and other J2EE doodads had issues in a way that no amount of visual tooling and enterprise patterns could paper over. Huge pressure had to bubble up from the developer community in the form of OpenSymphony, Spring, Pico, Hibernate and bestselling books like Bitter Java and Bitter EJB. This lack of feedback seems to result in disdain for J2EE as a development platform and produces one reactionary OS project after another addressing the same issues (the web frameworks situation is so bad it's getting its own section). Some of these projects are valuable, but many result in buyer's regret and legacy issues if the project dies as the leads go off to do something else cool without leaving a community behind them (Hani Suleiman deserves immense credit for highlighting this problem). There are no winners here. Today, a significant issue is how J2EE fits with integration styles where the Web, documents, messaging, interop-uber-alles and, most importantly, tight budgets dominate the landscape.
--Bill de hÓra
Read the rest in Bill de hÓra: Java unhinged
I always had this great vision of the internet bringing tons of brilliant people together to produce brilliant software. The more people, the better the software. I have found the most successful software to be developed by a small number of people, or at least with a very strong leader. The reason: focus. Well focused software is better software. I guess The Mythical Man Month was right.
--Joshua Marinacci
Read the rest in java.net: My 1 year anniversary at Java.net: the social side of software. [August 21, 2004]
Anyone who has doubts about the intrinsic crunchy goodness of URIs is liable to have an aneurysm during any serious encounter with RDF.
--Simon St.Laurent on the xml-dev mailing list, Sunday, 17 Nov 2002
I predict that within ten years, we'll have clothing that runs screensavers, and what's more, we'll have gangs of people running around with synchronized displays to show that they "belong". Schools will then outlaw gang screensavers, and impose uniform screensavers on their students. Someone will hack into your clothes processor just to get you into trouble with the teachers. Norton and McAfee will sell software to make sure your clothes keep saying what you want them to say, and not what someone else wants them to say. Or show...
Or maybe by then your shirt will be able to authenticate all the IPv6 addresses it communicates with. The hard part is going the other way — how are you going to authenticate your shirt to someone else? Are you going to bother to set up an unspoofable identity for every shirt in your closet?
Of course, if your shirt is programmable, you really only need one of them. Or maybe you need two, for when the other one is in the wash. I suppose geeks can get away with owning a single programmable shirt. For some definition of "get away with". Maybe it's more like "get away from", as in "get away from me".
--Larry Wall
Read the rest in Perl.com: The State of the Onion
Real business-level validation checks a lot more than syntax. It's not enough that a name and address field are filled in; they usually have to unambiguously match a known customer. It's not enough that a date is in the format specified by the schema; it has to be in an appropriate timeframe (usually the recent past or the recent future). Given that you have to validate all that stuff anyway, and that you have tools such as XPath to extract the needed information from a more general context rather than a rigid syntax, lots of people find that the exercise of defining, agreeing to, and validating against a syntax-level schema doesn't add enough benefit to justify the cost.
--Michael Champion on the XML Developer mailing list, Tuesday, 8 Jun 2004
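A hedged illustration of Champion's distinction (the customer list, element names, and 90-day window below are all invented): a schema can say the date *looks like* a date, but only business-level code can say it names a known customer and falls in the recent past.

```python
# Business-level validation: extract fields XPath-style, then apply checks
# no syntax-level schema could express.
import datetime
import xml.etree.ElementTree as ET

KNOWN_CUSTOMERS = {"Acme Ltd"}

ORDER = """
<order>
  <customer>Acme Ltd</customer>
  <placed>2004-06-01</placed>
</order>
"""

def business_valid(xml_text, today):
    root = ET.fromstring(xml_text)        # well-formedness comes for free
    name = root.findtext("customer")      # XPath-style extraction
    placed = datetime.date.fromisoformat(root.findtext("placed"))
    # Beyond syntax: a known customer, and a date in the recent past.
    recent = datetime.timedelta(0) <= (today - placed) <= datetime.timedelta(days=90)
    return name in KNOWN_CUSTOMERS and recent

print(business_valid(ORDER, datetime.date(2004, 6, 8)))  # True
print(business_valid(ORDER, datetime.date(2005, 6, 8)))  # False: too old
```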
REST is an architectural style -- a way of organizing a system into components and governing the interaction between those components such that the resulting system remains stable while accomplishing the desired tasks.
The reason HTTP is involved in REST is because I had to shrink and redesign HTTP/1.0 to match those features that were actually interoperable in 1994, which turned out to be the core of the REST model (it was called the HTTP object model at the time) and that was carried forward into designing the extensions for HTTP/1.1. Thus, the two are only intertwined to the extent that REST is based on the parts of HTTP that worked best. There is absolutely no reason that a new protocol could not be a better match for REST than HTTP. Older protocols, however, typically do not supply enough metadata or require too many network interactions.
--Roy T. Fielding
Read the rest in Adam Bosworth's Weblog: Learning to REST
1. Think 'online first' - Don't let folks in your company treat the web as an afterthought. Even if your business is not primarily transacted on the web, the web is probably an important front door for your customers to learn about your products, get support, and possibly take delivery of products and services. If your products contain software or a service, they probably need to call back to the headquarters site. Yet, though it sounds ludicrous, many product teams in many companies spend all of their time on traditional media like printed brochures and advertising... and then try to repurpose stuff to the web as an afterthought. The result will always be a user experience ("UE") that falls flat at best and is highly confusing and frustrating for web users who by all rights ought to be the first to get previews of your new products or announcements and actually want an online relationship with you.
--Martin Hardee
Read the rest in Sun.com Usability & Useful Stuff
W3C still maintains a distinction between HTML and XHTML, and still offers both specifications on its site. HTML is not deprecated.
--Doug Ewell on the Unicode mailing list, Sunday, 15 Aug 2004
If this is the future of web services, it looks awfully complicated, and complicated specs in this industry often turn into either (a) hypernovas of non-interoperable (read "proprietary") implementations or (b) withered leaves on a vine.
Back up the truck there Tonto! Get back to basics. You have URIs, you have XML messages. You need reliable, asynchronous message exchange. It's not complicated. It's a simple HTTP exchange pattern on top of a MOM. Synchronous? That is just fast asynchronous. Remember LU 6.2? Remember MQSeries?
Speed? Nacht! You can compile away asynchronous messaging into direct point-to-point API calls at runtime. It's not complicated.
--Sean McGrath
Read the rest in Sean McGrath, CTO, Propylon
the tools you use aren't just tools, they're risks. Every time you use somebody's framework, or IDE, or library, or interface, or what-have-you, you are essentially investing in that team/person/company. If that tool suddenly goes away, or takes a sharp right turn when you were hoping for a left, then your project might suffer as a result. Additionally, when you ship applications to your customers, if something goes wrong (even deep down in some third-party library you got from the Internet), your customer will come back after you, not the third party. Your application is always your responsibility; if you make use of a buggy framework, those bugs will reflect back on you.
--Justin Gehtland
Read the rest in ONJava.com: Better, Faster, Lighter Programming in .NET and Java
It is little wonder that MS has paid so much attention to ensuring that Direct X is at the cutting edge of gaming graphics technology so that game developers use it in the creation of their latest masterpiece. This has had the very neat effect of making those games run well on Windows and ensuring they don't run at all on their competitor's OS's. It is much harder for a game developer to shift a game created using Direct X over to the Apple or GNU/Linux OS's than it is if the game is OpenGL based.
This is one reason why id Software have always produced Linux versions of their games alongside the Windows version as they use OpenGL. Unfortunately, they are very much the exception and are likely to remain so unless those associated with competing OS's take action to redress the situation.
Until they do so, Microsoft will continue to have a major competitive advantage over Apple and GNU/Linux.
--Ian McKenzie
Read the rest in Why Games Matter - OSNews.com
HTML is not XML (unless you are very lucky!)
--Daniel Joshua on the xsl-list mailing list, Thursday, 20 May 2004
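Joshua's one-liner is easy to demonstrate: typical HTML, with its unclosed tags, is rejected outright by an XML parser. A minimal sketch:

```python
# "HTML is not XML (unless you are very lucky!)": feed everyday tag-soup
# HTML to an XML parser and it refuses to play.
import xml.etree.ElementTree as ET

html = "<p>One paragraph<p>Another, with no closing tags"
try:
    ET.fromstring(html)
    well_formed = True
except ET.ParseError:
    well_formed = False

print(well_formed)  # False
```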
UTF-8 is kind of racist. It allows us round-eye paleface anglophone types to tuck our characters neatly into one byte, lets most people whose languages are headquartered west of the Indus river get away with two bytes per, and penalizes India and points east by requiring them to use three bytes per character.
--Tim Bray
Read the rest in ongoing Characters vs. Bytes
Another common effect I've seen is the tendency to create multi-megabyte or even gigabyte monolithic XML files. XML is so flexible for data representation because of its nature as an annotated hierarchy. But this very nature also makes efficient processing quite difficult, especially with regards to scaling according to number of nodes.
So my first line of advice has always been: don't go processing gigabyte files in XML formats. I have been working with XML for about 8 years. I have used XML in numerous ways with numerous tools for numerous purposes. I have never come across a natural need for an XML file more than a few megabytes. There may be terabytes of raw data in a system, but if you're following XML (and classic data design) best practices, this should be structured into a number of files in some sensible way.
If you find yourself with a monster XML file -- perhaps you receive it that way from another source outside your control -- the best way to handle it is through the classic problem-solving technique, divide and conquer. Decompose the problem into smaller bits, solve each little bit to create a part solution, then combine the partial solutions into the overall solution.
Certainly this requires the problem to have certain mathematical properties, but in almost every case of monster XML files I've seen, an elegant decomposition is available.
--Uche Ogbuji
Read the rest in XML.com: Decomposition, Process, Recomposition
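Ogbuji's divide-and-conquer advice maps naturally onto streaming parsing: handle one record at a time and discard it, so a monster file never has to live in memory as a single tree. A sketch under invented element names (not his code):

```python
# Stream a large document record by record: solve each small sub-problem
# (here, summing the values), then throw the fragment away.
import io
import xml.etree.ElementTree as ET

# Stand-in for a monster file arriving from outside your control.
BIG = io.BytesIO(
    b"<records>"
    + b"".join(b"<record><value>%d</value></record>" % i for i in range(1000))
    + b"</records>"
)

total = 0
for event, elem in ET.iterparse(BIG, events=("end",)):
    if elem.tag == "record":
        total += int(elem.findtext("value"))  # the partial solution
        elem.clear()                          # free the fragment immediately

print(total)  # 499500
```

The combining step here is a running sum; in real decompositions it might be writing each fragment to its own file or feeding it to a per-record transform.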
Joel calls for a richer set of controls and events. Those who know a bit about Mozilla will immediately start thinking about XUL and XBL, and Microsoft's equivalent (XAML) is also relevant here.
Much of this stuff is already doable in Javascript, but XML languages are better for a reason fundamental to the web: They lower the barrier to processing. It is an order of magnitude easier to decipher what a document is specifying than a program: the only way for a machine to really understand a program is to execute it. Unlike Javascript or any other Turing-complete language, XML doesn't suffer from the halting problem.
Lowering the barrier is vital so that a wider range of lesser-powered web clients can understand your content, whether those web clients are mini-browsers running on embedded devices or ten-line scraping scripts. Furthermore, explicit unambiguous markup means that the client then has more freedom in rendering the document in the way it sees fit, and this freedom is vital to true web accessibility. If the speech browser for blind users knows that what it's trying to render is not just a collection of layers with links in them but a standard menu then it can render it in a much more usable way.
--Yoz Grahame
Read the rest in Yoz Grahame's Cheerleader: What I Want For WHAT
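Grahame's "lower barrier to processing" can be shown in miniature: a ten-line scraping script recovers the structure of a declarative document without executing anything. The `<menu>` vocabulary below is invented for illustration:

```python
# Declarative markup: a mini-browser, a scraping script, or a speech
# browser can all recover the menu structure without running any code.
import xml.etree.ElementTree as ET

MENU = """
<menu label="File">
  <item label="Open" href="/open"/>
  <item label="Save" href="/save"/>
</menu>
"""

root = ET.fromstring(MENU)
labels = [item.get("label") for item in root.findall("item")]
print(labels)  # ['Open', 'Save']
```

Deciphering the same menu expressed as a pile of JavaScript event handlers would, as the quote says, require executing it.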
It is interesting to me how this focus around simplicity in the services world could carry through even to the plumbing people use. For example, take so-called web services. The original impetus behind XML, at least as far as I was concerned back in 1996, was a way to exchange data between programs so that a program could become a service for another program. I saw this as a very simple idea. Send me a message of type A and I'll agree to send back messages of types B, C, or D depending on your A. If the message is a simple query, send it as a URL with a query string.

In the services world, this has become XML over HTTP much more than so-called "web services" with their huge and complex panoply of SOAP specs and standards. Why? Because it is easy and quick. Virtually anyone can build such requests. Heck, you can test them using a browser. That's really the big thing. Anyone can play. You don't have to worry about any of the complexity of WSDL or WS-TX or WS-CO. Since most users of SOAP today don't actually use SOAP standards for reliability (too fragmented) or asynchrony (even more so) or even security (too complex), what are they getting from all this complex overhead? Well, for one, it is a lot slower. The machinery for cracking a query string in a URL is about as fast as one can imagine these days due to the need services have to be quick. The machinery for processing a SOAP request is probably over ten times as slow (that's a guess). Formatting the response, of course, doesn't actually require custom XML machinery. If you can return HTML, you can return XML.

It is this sort of thinking that being at a service company engenders. How do you keep it really simple, really lightweight, and really fast? Sure, you can still support the more complex things, but the really useful things may turn out to be the simplest ones.
--Adam Bosworth
Read the rest in Adam Bosworth's Weblog: KISS and The Mom Factor
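Bosworth's "anyone can play" style fits in a few lines: the request is a URL with a query string you could paste into a browser, and the response is plain XML parsed with stock tools. The endpoint and field names below are hypothetical; no network call is made:

```python
# XML over HTTP in the simple style: a query string in, an XML document out.
# No WSDL, no SOAP envelope.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Message "A": the query, as a URL anyone can build (or test in a browser).
url = "http://example.com/stock?" + urlencode({"symbol": "SUNW"})

# Message "B": the XML the service might send back.
response = "<quote symbol='SUNW'><price>4.37</price></quote>"
price = float(ET.fromstring(response).findtext("price"))

print(url)    # http://example.com/stock?symbol=SUNW
print(price)  # 4.37
```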
We often chant the slogan: "Easy things should be easy, and hard things should be possible." But as with any slogan, there are some questionable assumptions hidden behind the sentiment. We assume that it's obvious which things should be easy or hard, and that the things that are currently easy are the things that ought to be easy. We assume that making the hard things easy will necessarily cause the easy things to become hard. But sometimes it's not obvious what should be easy or hard. Sometimes the wrong things are easy. And sometimes there are ways to make the hard things easier without making the easy things harder.
--Larry Wall
Read the rest in Apache News Blog Online
You might say that the lack of a clear technology platform was in some ways a surprise because you read so much about this and that solution being supposedly the way to great intranets. In fact, when we go and talk to those companies that have done great intranets—first of all, they all use something different, and, second, all of them say of whatever they happen to be using, "Well, we had to make a lot of changes ourselves to make it really work for us." So I think there is a big contrast between advertising and reality, and that these technologies are not all there yet. You really have to take responsibility yourself if you want to get a good solution.
--Jakob Nielsen
Read the rest in Time for a Redesign: Dr. Jakob Nielsen
It's important to distinguish between the proposition that the information is not in the form I'm used to seeing it in and the proposition that the information is not there.
--C. M. Sperberg-McQueen
at Extreme Markup Languages, 2004, Wednesday, August 4, 2004
People who want to do things that experience has shown are short-sighted are sometimes called innovators while their critics are labeled Luddites or Sabots. After the innovators do their damage, it is a little late to hit them with shoes. We really do need to know if a binary is something only some applications need, and therefore, a generalized spec and standard are not required. Once a binary is approved for all XML applications, XML will rarely be seen as the programmers rush for the binary format for the same reason countries fear they will be second class without nukes.
--Claude L (Len) Bullard on the xml-dev mailing list, Wed, 14 Apr 2004
The real reason I like Extreme is that it's a gathering of people who share a common interest. I think of it as manipulation of tagged content, but it's not that simple. It's a gathering of people who are secure enough in their knowledge and their interests that they can talk about unsolved and perhaps unsolvable problems without causing a panic. Extreme is an XML conference at which we can talk about what we cannot do. We can do that without frightening away either potential users of XML or, more likely to be frightened, the marketers who are trying to sell software to those potential new customers. Just try talking about what's broken in XML at one of the big XML conferences. You won't be very popular. Extreme is a gathering of people who are eager or, at least, willing to listen to XML heresy, to people telling us that what we've been doing all along is silly or, more likely, that what we've believed and are comfortable with is wrong. We talk about our projects, specifications, and standards we love, hate, use, ignore, admire, disdain; and the logic or philosophy behind our approaches to the definition, creation and manipulation of marked up documents.
--B. Tommie Usdin
Extreme Markup Languages Keynote, Tuesday, August 3, 2004
Another reason spam is so bad is that so many companies use Microsoft Outlook for reading e-mail. Again, because that program is written in C, it's quite easy to design a virus to go through your e-mail address book and broadcast spam to all the people you know. As soon as your company starts using Outlook, you can see emergent, horrible, almost biological things start to happen. So by using Outlook, you're not practicing safe e-mail. We need a "condomized" version of it.
--Bill Joy
Read the rest in Fortune.com - Technology - Joy After Sun
Nothing in the Namespaces Rec defines, describes, provides, specifies, suggests, entails, depends on, or constitutes a mechanism for defining globally unique names. The Namespaces Rec makes it possible to avoid one way in which names assigned in isolation might fail to be globally unique, but it neither requires that namespace owners ensure that local names are unique within a namespace, nor mentions that as a necessary or convenient step towards having globally unique names. You may have been misled by the rhetoric in the introduction to the first edition of the Namespaces Rec, but that introduction did not provide an accurate characterization of the technical content of the document.
--C. M. Sperberg-McQueen on the www-tag mailing list, 11 May 2004
It is becoming clearer every day, and we have evidence from the mobile phone market, that Flash Lite is getting its ass whooped by SVGT 1.1, even though SVGT 1.1 doesn't have all of its features and doesn't even have a multi-million dollar company pouring marketing resources into making it a success. And it's only getting worse as SVGT 1.2 is getting closer and closer and should be a recommendation within six months. Feature-wise SVGT 1.2 goes beyond Flash Lite's offerings, and with support from platforms like J2ME (through JSR-226) and Symbian (through Series 60 SE), you bet integration issues are pretty much figured out. And remember, SVGT is a standard, approved by W3C and 3GPP. No matter what play on words and rewrite of definitions Macromedia folks can come up with, Flash Lite is not standard.
--Antoine Quint
Read the rest in O'Reilly Network: A matter of trust (or lack thereof)
Google is much more dangerous to Microsoft than Netscape was. Probably more dangerous than any other company has ever been. Not least because they're determined to fight. On their job listing page, they say that one of their "core values" is "Don't be evil." In a company selling soybean oil or mining equipment, such a statement would merely be eccentric. But I think all of us in the computer world recognize who that is a declaration of war on.
--Paul Graham
Read the rest in Great Hackers
The dependency injection rule (paraphrased "Don't let high-level classes depend on low-level details") is almost never followed by great programmers (like, e.g., James Clark, Kohsuke Kawaguchi, Michael Kay (don't feel left out if you aren't on this highly personal list of programmers whose work I have examined in detail)) presumably because they, like the rest of us, require a) actual proof that their solutions work, b) don't tolerate well overheads introduced by indirection and c) (maybe, just a thought) work in a culture where dependency injection is not the norm or highly valued.
--Bob Foster on the xml-dev mailing list, Thursday, 08 Apr 2004
It is likely that mathematical proofs are the mote in the eye of the semantic web community. There is a tendency to run to math and logic when faced with uncertainty as in a story where one holds up a cross or runs to holy ground when faced with a vampire (the unknown). Logic and math, though useful, have their limits and absolutes are rare. Over time, some AI researchers such as Richard Ballard and for comparison, John Sowa point out that knowledge is not merely good logic and math. It is a theory making behavior, a sense-making behavior, more like traditional scientific method than pure mathematical modeling.
--Claude L (Len) Bullard on the xml-dev mailing list, Tuesday, 15 Jun 2004
No one bothers to write in anonymously. Unlike Group Hug and other anonymous confession sites, which allow users to spill all without revealing their identities, messages to tired@tired.com are sent from the visitor's own e-mail client. Gripes about husbands, wives, children, and commanding officers come signed with the sender's real name and address. Mike doesn't reply to these messages, and he doesn't publish them, but how do they know he won't? One theory he's encountered in his user-experience work: People trust simply designed sites. Tired.com's plain-text, unadorned format seems soothing and trustworthy, particularly when compared to the garish, on-the-make look of most sites.
--Paul Boutin
Read the rest in So Tired - Where Web surfers go when they haven't slept a wink. By Paul Boutin
I have long been an advocate of technologies -- from XML through the Semantic Web -- that would make it easier to search and process information by more clearly expressing its structure and context. The problem is that creating a critical mass of such material would require a tremendous evolution in tools and discipline -- certainly an ambitious vision. Google realizes a respectable cross-section of the promise of the XML Web generation by merely finding creative ways of harnessing the mountain of legacy from the original Web.
--Uche Ogbuji
Read the rest in Perspective on XML: Steady steps spell success with Google- ADTmag.com
The world is different now than it was even just a decade or two ago. In more and more cases, there are no paper records. People expect all information to be available at all times and for new uses, just as they expect to drive the latest vehicle over an old bridge, or fill a new high-tech water bottle from an old well's pump. Applications need to have access to all of the records, not just summaries or the most recent. Computers are involved in, or even control, all aspects of the running of society, business, and much of our lives. What were once only bricks, pipes, and wires now include silicon chips, disk drives, and software. The recent acquisition and operating cost and other advantages of computer-controlled systems over the manual, mechanical, or electrical designs of the past century and millennia have caused this switch.

I will call this software that forms a basis on which society and individuals build and run their lives "Societal Infrastructure Software". This is the software that keeps our societal records, controls and monitors our physical infrastructure (from traffic lights to generating plants), and directly provides necessary non-physical aspects of society such as connectivity.

We need to start thinking about software in a way more like how we think about building bridges, dams, and sewers. What we build must last for generations without total rebuilding. This requires new thinking and new ways of organizing development. This is especially important for governments of all sizes as well as for established, ongoing businesses and institutions.

There is so much to be built and maintained. The number of applications for software is endless and continues to grow with every advance in hardware for sensors, actuators, communications, storage, and speed. Outages and switchovers are very disruptive. Having every part of society need to be upgraded on a yearly or even tri-yearly basis is not feasible.
Imagine if every traffic light and city hall record of deeds and permits needed to be upgraded or "patched" like today's browsers or email programs. Needing every application to have a self-sustaining company with long-term management is not practical. How many of the software companies of 20 years ago are still around and maintaining their original products?
--Dan Bricklin
Read the rest in Software That Lasts 200 Years
XML is one of the few formats out there that can handle multiple encodings and Unicode decently, and much of this is due to the XML declaration.
--Thomas B. Passin on the xml-dev mailing list, Wed, 21 Jul 2004
I like WikiML and the whole notion of reduced, learnable, plain-text markup conventions, and I'll take it as a sign of real progress when one emerges with a design compelling enough, and a processing model robust enough (it'll have to go beyond "check correctness by eyeballing output"), to unseat the currently-dominant paradigm. Anything not as dead-simple as <tag>this</tag> is going to be a pain to learn, teach, maintain.
--Wendell Piez on the xsl-list mailing list, Thursday, 08 Jul 2004
XML comments are IMHO just to comment on the XML they are in, not for any outside use, say the processing of the XML or whatever use the XML has.
--Christof Hoeke on the xsl-list mailing list, Thursday, 8 Jul 2004
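Standard tooling agrees with Hoeke here: comments annotate the markup, not the data, and by default ElementTree simply drops them during parsing. A small demonstration:

```python
# Comments are notes on the markup, not data: the parser discards them,
# so downstream processing never sees them.
import xml.etree.ElementTree as ET

doc = "<config><!-- TODO: tune this --><timeout>30</timeout></config>"
root = ET.fromstring(doc)

print(root.findtext("timeout"))  # 30
print(len(root))                 # 1  (only <timeout>; the comment is gone)
```

If you find yourself needing a comment's content downstream, that content was really data and belongs in an element or attribute.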
Periodically, I am getting emails from people or institutions (including the USA government national endowment for the arts and humanities agency!) asking me to sign and return incomprehensible legalese documents so that they can re-use the WebMuseum documents.
Sorry, but... I have invested thousands of hours into building this collection, and I make my work available for free over the Internet already. What else do you need ? Why would I sign incomprehensible legalese documents, that not only do not provide me any benefit, but instead could actually backfire when I least expect it ?
Also, these documents are usually subject to some foreign law and jurisdiction, such as USA laws. Being neither a USA citizen nor resident, I have absolutely no reason to submit to such a foreign jurisdiction.
In other words, the only thing you will ever get is the WebMuseum online Copyright and License Agreement. Any email request asking me to sign any legal document will be silently ignored.
--Nicolas Pioch
Read the rest in WebMuseum: How to contribute
The word "standard" when it comes to software and computer technology is usually meaningless. Is something a standard if it is produced by a standards body but has no conformance tests (e.g. SQL)? What if it has conformance testing requirements but is owned by a single entity (e.g. Java)? What if it is just widely supported with no formal body behind it (e.g. RSS)?
Whenever I hear someone say standard it's as meaningless to me as when I hear the acronym 'SOA', it means whatever the speaker wants it to mean.
--Dare Obasanjo on the xml-dev mailing list, Wed, 28 Apr 2004
As has been said many times: one person's metadata is another person's data. Treating types as anything other than data is wrong, wrong, wrong! Types are just an attribute that someone can attach to something, and treating anything as though it has a single type only restricts future extensibility. Any XML schema mechanism that is going to be truly useful has to allow for elements to behave polymorphically with respect to type depending on the context in which the element is evaluated.
--Peter Hunsberger on the XML Developers mailing list, Thursday, 8 Jul 2004
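A sketch of Hunsberger's "types are just data" (the vocabulary and coercion table are invented): the type is an ordinary attribute, and each consumer decides in its own context whether, and how, to honor it.

```python
# The same element, typed polymorphically by context: one consumer treats
# the declared type as binding, another keeps the raw lexical form.
import xml.etree.ElementTree as ET

doc = "<value type='decimal'>42</value>"
elem = ET.fromstring(doc)

# Context A honors the type attribute...
coerce = {"decimal": float, "string": str}
as_typed = coerce[elem.get("type")](elem.text)

# ...while context B ignores it entirely.
as_text = elem.text

print(as_typed)  # 42.0
print(as_text)   # 42
```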
Linux scares Microsoft on several levels. There's this business of giving the software away for free, which is totally confusing to Bill Gates -- confusing and scary, since it undermines the entire basis of his fortune. But it's the breadth of Linux and its potential on other platforms that also scares Microsoft. At a time when Microsoft is trying to be sure its software runs on all the handhelds, set-top boxes, mobile phones and any other new machine types that just might replace in our hearts the PC, versions of Linux compete on all those platforms, too.
--Robert X. Cringely
Read the rest in PBS | I, Cringely . Archived Column
Browser support for XHTML is pretty bad; it's more faked than real. HTML works great; no reason to throw it out.
--Joshua Allen, Microsoft, on the xml-dev mailing list, Monday, 12 Jul 2004
It's obviously in the interest of a vendor that has substantial market share to achieve customer lock-in. XML does make it qualitatively harder to achieve customer lock-in, because it comes with a predisposition towards openness. It makes it harder technologically, and it also comes with social expectations. If you publish an XML format, and it's proprietary gibberish, you're going to catch some heat--from the press, from analysts, from customers. So I think it just makes it harder for a company like Microsoft to achieve lock-in.
To the extent that I've looked at the formats for Office 2003, I can deal with them. They're not simple, but then, Word isn't a simple product. But if need be, I could write a script to process a Word XML file and extract the text of all paragraphs with certain references -- which would have been a very daunting task with previous editions of Word.
So, yeah, there's room for concern. As an industry, we have to be vigilant to preserve open access to our own data. But we are moving in the right direction.
--Tim Bray
Read the rest in Taking XML's measure |CNET.com
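The script Bray describes can be sketched against a simplified WordprocessingML-like fragment (the real Word 2003 vocabulary is far richer, and the namespace URI below is the one I believe Word 2003 used; treat both as assumptions):

```python
# Extract the text of all paragraphs from a Word-style XML document:
# find every w:p, then join the w:t text runs inside it.
import xml.etree.ElementTree as ET

W = "http://schemas.microsoft.com/office/word/2003/wordml"  # assumed namespace
DOC = f"""
<w:document xmlns:w="{W}">
  <w:body>
    <w:p><w:r><w:t>First paragraph.</w:t></w:r></w:p>
    <w:p><w:r><w:t>Second paragraph.</w:t></w:r></w:p>
  </w:body>
</w:document>
"""

root = ET.fromstring(DOC)
paras = [
    "".join(t.text or "" for t in p.iter(f"{{{W}}}t"))
    for p in root.iter(f"{{{W}}}p")
]
print(paras)  # ['First paragraph.', 'Second paragraph.']
```

That this takes a dozen lines of stock tooling, rather than reverse-engineering a binary .doc file, is exactly the openness argument in the quote.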
If you are starting from a DOM, performance will almost certainly be better if you use DOMSource.
If you are starting from XML text, or from a SAX stream, performance will almost certainly be better if you use SAXSource. (When doing the comparison, remember to allow for the time spent building the DOM, which has overhead similar to that of building our internal model directly from SAX.)
--Joseph Kesselman on the xalan-j-users mailing list, Thursday, 8 Apr 2004
I have issues with the "let's just emulate IE6 with minor improvements" approach. First, Microsoft will not hesitate to break existing content -- they did it with all previous major releases of their browser software. This means that whatever MS replaces IE6 with at some point in the future will break content, and leave other browser vendors in an unpleasant position: playing catch-up. The only way in which they would not do that would be if IE6 no longer is the dominant gorilla, which requires people to switch. The problem is, people don't switch for small incremental improvements, they switch for stuff that is an order of magnitude better (in whichever direction). I work in a very standards-oriented company, and have had the hardest time getting people to switch to FireFox (or Mozilla before that). Everyone here agrees that it's a better browser, but people have habits they don't like switching out of. If I could say "Look! It does this, that, and that which are evidently so much better" they'd switch. Currently, it's only an incremental improvement, of little interest to them.
That's precisely what happened with XHTML. Ok, so you can now feed it to XML tools. That's neat. Yeah yeah I'll switch my site some day, if I get time or something. Perhaps.
To me the "small, incremental improvements to the ugly web" approach will make FireFox/Safari/etc be to IE6 what XHTML 1.x is to HTML. Neat increment, mostly a flop.
--Robin Berjon
Read the rest in Brendan's Roadmap Updates: The non-world non-wide non-web
Applications last for ten years. People who have been battling with the limitations of version 1.0 of one language for five years don't want to be told to switch to version 1.0 of another language with similar limitations. XQuery is full of deliberate restrictions in functionality, which make it highly suited as a database query language, and very difficult to use as a general-purpose XML transformation language. Handling narrative XML (anything without a rigid schema) in XQuery is really hard work.
--Michael Kay
Read the rest in XSLT 2.0 Sir?
It often seems easier to write plain text (or only lightly marked up text) but the lessons of structured documents over the decades (latex, sgml, xml, ...) is that, on balance, markup is a good thing and the more of it you get and the earlier it's added, then the happier you will be in the long run.
--David Carlisle on the xsl-list mailing list, Thursday, 8 Jul 2004
*The* big initial driver of OWL was DARPA; it is well known that OWL was incarnated as the "DARPA Agent Markup Language," and it is well known that there is a lot of interest in certain well-funded government circles regarding threat classification and pattern recognition. For example, suppose we have gobs of random information, and from these gobs we have a way to correlate the information into individual collections--suppose we can correlate a whole host of phone conversations as involving individuals. Now suppose we have lots of other types of information that involve other (as yet unnamed) individuals. What we want are a set of inferencing operations that will allow us to *equate* individuals identified by sets of phone conversations with individuals identified by financial transactions. Get the picture?
--Jonathan Borden on the xml-dev mailing list, Saturday, 12 Jun 2004
My second epiphany about this stuff came more recently -- it became brutally clear that internet, XML and web services technologies had done a lot to remove the mechanical barriers to data interchange, so exchanging well-understood documents, data records, and service invocations across platforms is no longer the painfully labor-intensive proposition it was even a decade ago. Now that the plumbing is in place, however, it is clear that the barriers to effective communication lie more in what the data *means* than in what format it is in or what protocol will be used to exchange it. One might hope that industry-wide working groups will sort out the differences for each vertical. Wheeooooffff [sound of dope smoke being inhaled ;-) ] One might hope that people will value interoperability more than inertia and adopt something like UBL [Kumbaya .... Kumbaya]. One might anticipate that some Omnipotent Entity such as the US government, WalMart, or Microsoft will just enforce uniformity [could happen, but the proles tend to resist such attempts by Big Brother].
One might much more plausibly believe, IMHO, that a) individual organizations can formalize what *they* mean by various terms, namespaces, etc. by reference to concrete documentation that describes them or software components/database fields that implement them; and b) that these private ontologies could be shared and mapped-between by those needing to exchange data across organizational boundaries. Maybe someday those will evolve into shared ontologies such as SNOMED, we shall see, but we don't need to believe in such things to use OWL, etc. to formalize and manipulate the private taxonomies/ontologies that are in actual use.
--Michael Champion on the XML Developers mailing list, Saturday, 12 Jun 2004
PDF documents are just not very suitable for online access because they are optimized for print, and they're big linear documents, and, therefore, they're not very good for search. So if you find something that's in a PDF file, it's probably on page 217 or something, and being dumped at page one doesn't really help you that much. And so often you'll miss the information even though it is, in fact, in the file.
Also, the formatting is optimized for print, so it's simply a nice brochure. It's typically letter-sized, and you kind of have to scroll it too much or the type becomes too small and hard to read. And the very first time you experience this, you don't even see the document. All you see is "Now we're loading Acrobat." So it becomes an extra delay that people hate as well.
--Jakob Nielsen
Read the rest in Time for a Redesign: Dr. Jakob Nielsen
QNames are kind of the result of a collision between a URI truck and the Name compact car where we end up driving the truck from the driver's seat of the car.
--Simon St.Laurent on the xml-dev mailing list, Sunday, 19 Jan 2003
The difference from procedural formats, is that with XML there is no need to change the format to enable new features in processing software, as long as those features do not require new information. In many cases, new information isn't needed, provided that the original markup is reasonably good. (At Ericsson, switching to SGML meant changing formats about once a decade, as opposed to switching every year, as they did before. This meant considerable savings, because they did not have to update their software as often. In effect, their software was much more robust.)
As a result, XML formats can be more stable than procedural formats, and therein lies a major advantage of XML.
--Henrik Martensson on the xml-dev mailing list, Tuesday, 08 Jun 2004
I think it's actually easier to get xhtml right first time, precisely *because* it is necessarily syntactically clean and strict. You are never left wondering if it matters if you leave a bit of syntax out, or if you should include one, because it always matters: the rules are clear. Closing your tags, quoting your attributes etc. are very good habits to get into from the start.
--Anton Prowse on the XHTML-L mailing list, Sunday, 20 Jun 2004
Mozilla and Firefox downloads have increased steadily since last fall, with the Firefox user base doubling every few months, as more people seem to have reached their threshold level of frustration dealing with problems with IE and Windows, and have found the Mozilla software a good solution to solving those problems. CERT's recommendation is just a reflection of the trend we have seen for quite some time.
--Chris Hofmann, Mozilla Foundation
Read the rest in Wired News: Mozilla Feeds on Rival's Woes
standards bodies are the wrong place to try to innovate. The right thing for standards bodies to do is to take a de facto standard (that is, a technology that is already pervasive in a particular domain) and turn it into a de jure standard (that is, one that is officially recognized). This was the role of standards groups with TCP/IP, and with the C language, and (originally) with HTML.
Watch out when a standards body tries to create a standard from scratch, or tries to make significant alterations in a de facto standard on the way to making that technology a de jure standard. Standards groups, for the most part, are political bodies rather than technical bodies. They work by reaching consensus, by compromise, and by making deals.
Good technology and innovation occur when small groups of smart people try to solve a problem in the most elegant fashion -- groups that are unwilling to do what's acceptable over what's right. Once the right thing is identified, and it is seen by everyone else to be the right thing (and that is determined by a rather long process in the marketplace, and unfortunately doesn't always occur), it is fine for a standards body to anoint that solution by making it a standard. But this is an act of recognition, not an act of creation.
--Jim Waldo
Read the rest in Jini Network Technology Fulfilling its Promise
Google is to "the semantic web" as Cleveland is to Xanadu.
--Bob Foster on the xml-dev mailing list, Thursday, 03 Jun 2004
Google is to "the semantic web" as CompuServe was to "the web".
--Joshua Allen on the xml-dev mailing list, Wed, 2 Jun 2004
The label "XML" by itself means nothing: XML is a way of defining a language, not a language in itself. It is an abuse of language--though so many people seem so happy to believe it--to say that, as long as something is XML, it is portable and every other "-able" you could wish for.
--Herve AGNOUX on the xml-tech mailing list, Monday, 2 Feb 2004
We're missing an important business model on the network that's in between being free and paying $30 per month for a subscription--a valuable market indeed. It's like executing pocket-change transactions on the network. There have been obstacles to doing it, but on a basic business-model level, the major obstacle has been that it costs much more than a quarter to settle a quarter. But now, due to innovations in microcommerce, it's possible to charge 25 cents for a transaction without prohibitive settling costs. For example, it might now cost a nickel to collect 25 cents. So a whole lot of possibilities present themselves, like premium searches or better driving directions.
--Greg Papadopoulos, Chief Technology Officer, Sun Microsystems
Read the rest in Grid Computing: A Conversation with Sun Microsystems' Chief Technology Officer, Greg Papadopoulos
IMHO learning html first and then xhtml afterwards is like being taught to drive by one's dad and, once you think you're ready for your test, only then getting professional lessons to correct any bad habits. Yes, it can work, and no, it does not necessarily mean that you will have even developed any bad habits (html *can* be written just as strictly and cleanly as xhtml), but what it does do is provide needless potential for developing such: you are going to be reading code on the web written by all kinds of people, from experts to amateurs, and it's good to already understand the stricter xhtml rules so that when learning from these people you are in a position to identify when and where their code fails to be xhtml.
--Anton Prowse on the XHTML-L mailing list, Sunday, 20 Jun 2004
There has been over-enthusiasm surrounding XML. It has been hyped, and everyone thinks it will infiltrate everything--but if it infiltrates everything, then we've got problems everywhere.
--Craig S. Mullins, director of technology planning at BMC Software
Read the rest in Date defends relational model
Web services exist because many people were under the impression that the Web couldn't be used to solve machine-to-machine integration problems. They were mistaken. Where that leaves us now is essentially with two competing architectural styles.
--Mark Baker on the xml-dev mailing list, Tuesday, 15 Jun 2004
I don't do much schema-work in general; here at Antarctica we use DTDs for all our interchange & config files (and somebody else does that). But someone said they needed a schema for the son-of-RSS work, and I volunteered. I used the compact syntax, which is just remarkably easy to read and write. So, having now done one serious (albeit small) project with RelaxNG, I really REALLY wonder why anyone would use anything else?
In particular because, if you use the Trang tool, you get an XML Schema for free. Of course the XML Schema can't cover a whole bunch of cases that are easy with RNG, and is much harder to read and understand, but hey, if that's what you want, you got it.
--Tim Bray on the XML Dev mailing list, Wed, 09 Jul 2003
We now have 80% of data living in the messy horror world of proprietary file formats and ad hoc structures inside Excel sheets and the like. If those 80% are taken over by XML, that's a big step forward.
--Alexander Jerusalem on the XML Developers mailing list, Monday, 31 May 2004
People who want to do things that experience has shown are short-sighted are sometimes called innovators while their critics are labeled Luddites or Sabots. After the innovators do their damage, it is a little late to hit them with shoes. We really do need to know if a binary is something only some applications need, and therefore, a generalized spec and standard are not required. Once a binary is approved for all XML applications, XML will rarely be seen as the programmers rush for the binary format for the same reason countries fear they will be second class without nukes.
--Claude L (Len) Bullard on the xml-dev mailing list, Wed, 14 Apr 2004
I'm not sure why one would bother with XML at all in a situation where horrible things happen when uncontrolled evolution occurs -- XML can be made to work in tightly coupled systems, but I don't see what advantage it has over proprietary object or database interchange formats if you want things to die quickly and cleanly when closely shared assumptions are violated. I can think of some, such as the classic SGML use case of maintenance manuals that must work across a wide variety of systems but must also conform to precise structural specifications. Nevertheless, the "I've got 50 customers who want to send me orders in conceptually similar but syntactically diverse formats" use case is a lot more typical IMHO. The typical options are between using a technology that can gracefully accommodate diversity and change (and paying the price of occasional breakage), and having humans transcribe information from diverse input formats into an internal standard (and paying a much higher price for every transaction ... and you still have to pay the price for human error!). Anyone who can avoid the dilemma by requiring the customers to send orders in a rigidly defined format probably doesn't need XML in the first place.
--Michael Champion on the XML Developer List mailing list, Tuesday, 8 Jun 2004
Microsoft, like almost all monopolies, has become fat and lazy. Monopolies do not engage in innovation with the same urgency because they don’t have to innovate to stay in business.
--Robert Lande, University of Baltimore
Read the rest in Seattle Weekly: News: Microsoft's Sacred Cash Cow by Jeff Reifman
WinFS, advertised as a way to make searching work by making the file system be a relational database, ignores the fact that the real way to make searching work is by making searching work. Don't make me type metadata for all my files that I can search using a query language. Just do me a favor and search the damned hard drive, quickly, for the string I typed, using full-text indexes and other technologies that were boring in 1973.
--Joel Spolsky
Read the rest in Joel on Software - How Microsoft Lost the API War
the decades of work on SGML had received little attention until XML came out. Was XML at all new? No. One of the huge improvements, at least to me, with XML was the specification by EBNF productions. Perhaps that is a technical change, but perhaps that is part of why it has such a huge adoption -- writers of parsers can be very precise about what is legal and what isn't.
--Jonathan Borden on the xml-dev mailing list, Saturday, 12 Jun 2004
We talk about world domination, but we'll neither have it nor deserve it until we learn to do better than this. A lot better.
It's not like doing better would be difficult, either. None of the changes in CUPS behavior or documentation I've described would be technical challenges; the problem is that these simple things never occurred to developers who bring huge amounts of already-acquired knowledge to bear every time they look at their user interfaces.
So, if you are out there writing GUI apps for Linux or BSD or whatever, here are some questions you need to be asking yourself:
- What does my software look like to a non-technical user who has never seen it before?
- Is there any screen in my GUI that is a dead end, without giving guidance further into the system?
- The requirement that end-users read documentation is a sign of UI design failure. Is my UI design a failure?
- For technical tasks that do require documentation, does that documentation fail to mention critical defaults?
- Does my project welcome and respond to usability feedback from non-expert users?
- And, most importantly of all...do I allow my users the precious luxury of ignorance?
--Eric S. Raymond
Read the rest in The Luxury of Ignorance: An Open-Source Horror Story
The <object> element is under-specified, and in general only very poorly implemented. As a result, all sorts of things work less well when you use it--unlike <embed>, which is an old leftover from the days when Netscape was clumsily hacking up browsers, but which tends to work correctly, if only because of its simplicity.
--Robin Berjon on the xml-tech mailing list, Friday, 06 Feb 2004
the main threat to XSLT2 is not XQuery, it is XSLT1. Currently saxon7 is the only visible implementation, although when I last speculated how XSLT2 would get out of CR status you implied that you had hopes of further implementations appearing, which would be good. But it remains to be seen how many other scenarios (aside from client side browser transforms) stay at XSLT1. For as long as that remains a significant minority, it will always be easier to achieve cross platform portability by writing in XSLT1 than 2.
--David Carlisle on the xsl-list mailing list, Thursday, 13 May 2004
CSV has been around for ages, and the way I've always written CSV parsers is to take the first line as a line of column headings, and use those to select what is done with each column's fields. And my software ignores fields it doesn't know a use for, and assumes that missing fields it expects have a NULL value, which may or may not cause higher-level code to reject the row.
Also, ASN.1 has an extension mechanism, where people using different variants of a 'schema' can still communicate; the decoder may inform the application that it had to discard some data it didn't understand, but still provides the fields that the decoder knows.
This 'flexibility' isn't something new to XML, it's inherent in any format that uses some kind of tagged values; including things like TIFF and PNG image files. And MIME headers, and SMTP email messages. Nothing special about XML in this respect! XML fans seem to have similar marketing ideas to Microsoft, picking up a good idea from elsewhere and claiming to have invented it ;-)
--Alaric B Snell on the XML Developer mailing list, Tuesday, 08 Jun 2004
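The CSV convention Snell describes--the first line names the columns, unknown columns are ignored, and expected-but-missing fields become NULL--can be sketched with Python's csv module. Field names here are invented for illustration:

```python
# Sketch of a header-driven CSV consumer: keep the fields we understand,
# default missing expected fields to None (NULL), drop the rest.
import csv
import io

KNOWN = ("id", "name", "price")  # fields this consumer understands

raw = io.StringIO(
    "id,name,colour\n"   # 'colour' is unknown to us; 'price' is absent
    "1,widget,red\n"
    "2,sprocket,blue\n"
)

rows = []
for record in csv.DictReader(raw):
    # record.get() yields None for the expected-but-missing 'price'
    rows.append({k: record.get(k) for k in KNOWN})
```

Higher-level code can then decide, per Snell's point, whether a None in a required field should cause the row to be rejected.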
The Mozilla browser and suite were a high-profile example of the ills of featuritis. While the software did everything you could possibly want it to, people still seemed to prefer other software packages that did less.
In a well run software project (several of which I fancy myself a part of), any additional feature must be proved valuable before it is incorporated. Even if a patch is already written to add a new feature, there are plenty of reasons not to accept it. More code means more potential bugs and more management. Additional user interface features can detract from other more important features.
--Steven Garrity
Read the rest in The Rise of Interface Elegance in Open Source Software
For crying out loud! If you don’t want Atom to be XML, make it something else: RFC 822 name/value pairs, comma separated values, free text, whatever. But please, do the world a favor, take out all the angle brackets and things that look like XML. There can be no virtue in a design that intentionally misleads the user.
--Norm Walsh
Read the rest in On Atom and Postel’s Law
Metadata, like semantics, is in the eye of the beholder. One person's data is another's metadata.
--Dare Obasanjo on the xml-dev mailing list, Tuesday, 8 Jun 2004
the people writing the validation rules should always write them to allow maximum flexibility, in the recognition that the system designers aren't omniscient. Validation rules, for example, should never force users to tell lies in order to get past validation (like the web sites, fortunately now rare, that require me to enter a fax number - someone somewhere is getting some strange faxes by now).
--Michael Kay on the xml-dev mailing list, Sunday, 6 Jun 2004
The long-term strategic threat to the entertainment industry is that people will get in the habit of creating and making as much as watching and listening, and all of a sudden the label applied to people at leisure, 50 years in the making — consumer — could wither away. But it would be a shame if Hollywood just said no. It could very possibly be in the interest of publishers to see a market in providing raw material along with finished product.
--Jonathan Zittrain, Berkman Center for Internet and Society
Read the rest in The New York Times > Movies > Hijacking Harry Potter, Quidditch Broom and All
IE sucks so very much. But, at least IE is a stable platform to react against. You don't have to worry about dead end products pulling the rug out from under you.
--Ian Bicking
Read the rest in Ian Bicking 28.5.2004
Always do a tag-share analysis before writing an XML up/down/cross-translate in XSLT or DOM/SAX or whatever. A remarkably small number of element types make up the bulk of the markup - regardless of the size of the schema.
--Sean Mcgrath
Read the rest in XML tag share analysis and power law distributions
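A tag-share analysis of the kind McGrath recommends takes only a few lines. This sketch (with an invented sample document) counts element types and sorts them by share of the markup:

```python
# Sketch: count how often each element type occurs before deciding which
# ones a transform must handle first. Sample document is invented.
from collections import Counter
from xml.etree import ElementTree as ET

doc = """<book>
  <chapter><para>one</para><para>two</para></chapter>
  <chapter><para>three</para><note>aside</note></chapter>
</book>"""

counts = Counter(el.tag for el in ET.fromstring(doc).iter())
total = sum(counts.values())
shares = {tag: n / total for tag, n in counts.most_common()}
# A few element types dominate; rare tags can be handled later or
# passed through by a generic identity rule.
```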
I'd love to see the semantic web movement more concerned with finding and generating usable metadata and less focused on what to do with that metadata come the revolution. FOAF files are fun, but demonstrating the potential value of the semantic web will require more metadata than information about which of our friends also have FOAF files.
--Bob DuCharme
Read the rest in OpenP2P.com: Wanted: Cheap Metadata [May. 24, 2004]
for things like SAX pipelines, deep data structures allow less concurrency; in a pipeline, you can only start on an element when you know the previous step has finished with it, and deep hierarchies obviously slow that event down. When we need a hierarchy we have a metadata model that maps the hierarchy; the data that gets mapped to this structure is flat.
--Peter Hunsberger on the xml-dev mailing list, Monday, 17 May 2004
URLs are an essential part of the Web as we know it. URIs are a parasitic outgrowth on that technology which claims to be an improvement but mostly just adds infinite layers of ambiguity.
--Simon St.Laurent on the xml-dev mailing list, Monday, 12 Aug 2002
Using a hierarchy as a general mechanism for representing relationships adds to complexity, but using a relational model to model certain forms of hierarchical structure also brings its problems. In your example the objects that ended up as elements under the root element were all independent objects, their identities were separate. Nesting elements comes into its own where there is a strong aggregation relationship between an element and its constituent elements - the identity of the constituent elements being dependent on the identity of their parent element in an invariant fashion.
--Chris Angus on the xml-dev mailing list, Monday, 17 May 2004
Because of its regular structure, relational data is “dense” -- that is, every row has a value in every column. This gave rise to the need for a “null value” to represent unknown or inapplicable values in relational databases. XML data, on the other hand, may be “sparse.” Since all the elements of a given type need not have the same structure, information that is unknown or inapplicable can simply not appear.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com-
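Chamberlin's dense/sparse contrast can be made concrete with a small sketch (element names invented): a relational row must carry an explicit NULL, while the XML representation can simply omit the inapplicable element.

```python
# Sketch: a "dense" relational row needs a NULL placeholder; the
# "sparse" XML form just leaves the inapplicable element out.
from xml.etree import ElementTree as ET

row = {"name": "Alice", "fax": None}       # dense: NULL placeholder

person = ET.Element("person")
ET.SubElement(person, "name").text = row["name"]
if row["fax"] is not None:                 # sparse: omit when absent
    ET.SubElement(person, "fax").text = row["fax"]

xml_text = ET.tostring(person, encoding="unicode")
```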
good data modeling is as important as it ever was, xml or no.
--Thomas B. Passin on the xml-dev mailing list, Monday, 17 May 2004
XML was invented to solve the problem of data interchange, but having solved that, they now want to take over the world. With XML, it's like we forget what we are supposed to be doing, and focus instead on how to do it.
--Chris Date
Read the rest in Date defends relational model
Jim Hendler told me that 4.9 years ago I asked him never to call the semantic Web AI because of this problem. Let me tell you, the semantic Web is not AI. It's just databases. And when people give the same talks, it looks as though they're giving the same talks as they did many, many years ago. I would also remind you that when the Web started, at the first Web conferences, a lot of what you heard was "Hey, you know, we've been doing hypertext for 10 years. Nobody's doing anything so different here"--but the difference is just doing it in a Web-like way, in a way that spreads virally, and the small tweaks that change the architecture to make it viral make a lot of talks look the same on the outside, but the architecture inside has been made so that it's not centralized any more, and that is the difference between the semantic web and databases. AI, in fact, is not artificial intelligence. The AI folks have got lots of code and techniques and useful languages that they've used in their search for artificial intelligence, but nobody doing the semantic web is holding their breath for something in strong artificial intelligence.
--Tim Berners-Lee at WWW2004, Friday, 21 May 2004
It's our practical observation that a loosely-coupled document based web-service ought only apply constraints to enforce tight/tighter coupling as necessary (typically production) but when possible the constraints are best kept loose - this is particularly the case during development, when data model and process may be in flux.
In other words the development philosophy is - create the service/process and constrain after the fact as necessary.
It seems like XSD and WSDL approach the world with the opposite philosophy, mandating that a schema be developed before the service.
--Peter Rodgers on the xml-dev mailing list, Wed, 19 May 2004
I always present drunk.
--Dean Jackson
Mixed Markup, WWW 2004
XSLT was originally just a small part of a tool to transform XML into print, as is the 'transform' part of DSSSL. People looked at it, found other uses for it, and the pace has never slowed since. I think it took the original developers by surprise, and I doubt if we realise the extent to which it's being used.
--David Pawson on the xsl-list mailing list, Friday, 14 May 2004
Actually, now that I've written a few XSLT2 stylesheets I am coming to quite like it. The last one I wrote could not sensibly have been written in XSLT1--perhaps with some effort in xslt1+xx:node-set(), but it was definitely far more natural in xslt2 (string handling, regexp and xslt-defined xpath functions). The schema support is an irritation (as I posted on the official comment list, I had to a) declare the schema namespace and b) use explicit casting to xs:integer all over the place), but hopefully some of that irritation can be avoided by minor tweaks to the casting rules before XSLT2 is finalised. I suspect that the schema typing would be far more annoying and intrusive in a schema-aware XSLT processor, but it's hard to make any real comments on that given the lack of implementations to try. XPath2 is of course a complete mess compared to XPath 1, but the mess seems worse (much worse) when reading the spec, and that will annoy rather few people; in practice, as used in a non-schema-aware XSLT2 engine, it's not as bad as it seems (or at least, the bad bits don't intrude as often as you might expect), which is why I'm far more relaxed about XSLT2/XPath2 than I was when I first saw the specs.
--David Carlisle on the xsl-list mailing list, Thursday, 13 May 2004
The trend for such systems is to build in generic, default behavior (for collation or for other aspects of localizable information), to support a number of high visibility and high demand particular behaviors "out of the box" and then to open the systems to end-user customization of particular combinations of behavior.
The IT industry is, of course, a long way away from perfection here, in part because the entire field of internationalization of software is considered bizarre geekiness even among your run of the mill programming geeks. But the globalization of information technology is inevitable, in my opinion, and as that globalization proceeds, the inevitable tension between central control and end user demand will play itself out in ways that make the technology eventually more flexible and adaptive.
--Kenneth Whistler on the unicode mailing list, Friday, 14 May 2004
There are now, and always have been, excellent reasons why people create specific languages or data syntaxes to address specific purposes. (The "little languages" paradigm comes to mind.) However, while purpose-built languages and syntaxes can provide tremendous benefits in the domain for which they are intended, the darn things have a nasty habit of "leaking" out of their domains with unfortunate frequency. They also have a nasty tendency to slowly accumulate more and more features as their scope and domain of usage grows.
Languages, syntaxes and even "applications" share a common tendency to grow towards some ideal "super-state" which encompasses all uses and all domains. This isn't bad in itself. The problem comes when these beasts meet in the night and compete with each other. We end up with a Babelization when, in theory, language should be a matter of choice -- not a bar to interoperability.
--Bob Wyman on the xml-dev mailing list, Wed, 21 Apr 2004
XML is good at providing a facade of openness to things which really aren't open.
--Simon St.Laurent on the xml-dev mailing list, Wed, 29 Oct 2003
in the development labs where I work, we have found IE6 to have significant bugs and inadequacies in its implementations of CSS 1 and CSS 2 which make the browser nearly useless for some of our new intranet applications. As a result, we use Mozilla which seems to have the best support for the W3C CSS 1, CSS 2, and DOM standards. We have found that Opera and Konqueror/Safari (Safari uses Konqueror's KHTML renderer with additional tweaks by Apple) come in 2nd and 3rd place, respectively, with respect to supporting the features of these standards that we are most interested in. IE always seems to come in last place on our tests.
--Edward H. Trager on the unicode mailing list, Friday, 7 May 2004
Actually, my initial reaction to Mosaic was "Well, that's nothing special." But within a few weeks I was addicted to using it. Absolutely addicted. Convenience is good. And when we get to the point that we don't have to think about the technology we're using, we've won.
--Dr. Stuart Feldman
Read the rest in Wired News: The Unfolding Saga of the Web
When I was a student, the men's lavatories in the computer lab had a row of five urinals. Four were identical; the fifth was different and carried the manufacturer's mark "Ideal Standard".
I don't know if they taught the same lesson to the female students.
--Michael Kay on the xml-dev mailing list, Wed, 28 Apr 2004
RFC 2396 should earn its place in history as an example of how not to write a specification mere mortals can understand.
--Bob Foster on the xml-dev mailing list, Tuesday, 06 Apr 2004
It is now crystal-clear that allowing qnames to escape from element & attribute names into content was a terrible mistake that we're now stuck with forever.
--Tim Bray on the xml-dev mailing list, Friday, 19 Dec 2003
Syntax is NOT trivial. While one can always make a computer-science case for a simpler syntax than XML, and one can make a case for alternative schema languages, one faces a hard sell in moving away from an established syntax, because syntax is a human user interface acquired by habit. Once acquired, it becomes easy to use, and that varies mainly by application (e.g., HTML vs. X3D) and the skill of the language designer.
--Claude L (Len) Bullard, on the xml-dev mailing list, Monday, 19 Apr 2004
I set up a test for a customer a while back to see how fast Expat could parse documents. On my 900 MHz Dell notebook, with 256MB RAM and Gnome, Mozilla, and XEmacs competing for memory and CPU, Expat could parse about 3,000 1K XML documents per second (if memory does not fail me). If I had tried to, say, build DOM trees from that, I expect that the number would have fallen into the double digits (in C++) or worse. In this case, obviously, there would be far more to be gained from optimizing the code on the other side of the parser (say, by implementing a reusable object pool or lazy tree building) than there would be from replacing XML with something that parsed faster.
I have never benchmarked SOAP implementations, so I have no idea how well they perform, but my Expat datapoint suggests that XML parsing is unlikely to be the bottleneck. In fact, you might be able to gain more by writing an optimized HTTP library that fed content as a stream rather than doing an extra buffer copy.
--David Megginson on the XML Developers mailing list, Monday, 19 Apr 2004
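Megginson's numbers are easy to sanity-check with Python's built-in Expat binding. The document size, the per-event work, and the hardware here are my assumptions, not his, so treat this as a sketch of the method rather than a reproduction of the benchmark:

```python
import time
import xml.parsers.expat

# A roughly 1K document, in the spirit of the ones in the benchmark.
DOC = "<doc>" + "<item key='v'>some text</item>" * 25 + "</doc>"

def parse_many(n):
    """Parse n copies of DOC with Expat, doing minimal work per event."""
    count = 0
    def start(name, attrs):
        nonlocal count
        count += 1
    for _ in range(n):
        parser = xml.parsers.expat.ParserCreate()
        parser.StartElementHandler = start
        parser.Parse(DOC, True)
    return count

t0 = time.perf_counter()
elements = parse_many(3000)
print(f"3000 docs, {elements} start-tags, {time.perf_counter() - t0:.2f}s")
```

Running the same loop while building a tree per document (say, with xml.dom.minidom.parseString) makes the gap Megginson describes visible immediately.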
Given the ongoing confusion and incompatibilities at the level of basics, it looks like Web services have a very long way to go before the current "wave" of WS-* proposals really gains relevance. The one exception may be WS-Security, if it does get approved by OASIS and does get supported by all the players. Meanwhile, we've got yet another layer of hype being added to the top of the Web services heap with the latest "SOA" buzz acronym... Ah well, at least it looks like the market for Web services training isn't likely to dry up anytime soon.
--Dennis Sosnoski on the xml-dev mailing list, Sunday, 04 Apr 2004
Why is there no WSDL for REST? Because HTML is the WSDL for REST, often supplemented by less understandable choreography mechanisms (e.g., javascript). That usually doesn't sit well with most "real" application designers, but the fact of the matter is that this combination is just as powerful (albeit more ugly) as any other language for informing clients how to interact with services. We could obviously come up with better representation languages (e.g., XML) and better client-side behavior definition languages, but most such efforts were killed by the Java PR machine. Besides, the best services are those for which interaction is an obvious process of getting from the application state you are in to the state where you want to be, and that can be accomplished simply by defining decent data types for the representations.
--Roy T. Fielding
Read the rest in Adam Bosworth's Weblog: Learning to REST
As a writer I have a hard time understanding why most of the current crop of XML editors have user interfaces that are a lot worse than my old SGML editor. (WordPerfect with an SGML plugin.) Nor can they measure up to other old time SGML/XML tools, such as ADEPT Editor, FrameMaker+SGML, Documentor, and others. (Yes, I know that from a developer's point of view, some of these were just horrible. Some were not exactly ideal writing tools either; it's just that they seem better than many of the things that are around today.)
Most of the current crop of XML editors (XMetaL and Arbortext Publisher are exceptions) seem to be little more than text editors with syntax highlighting. This is not what I want in an authoring tool that I am going to use several hours a day, every day. Text editors with syntax highlighting may suit programmers, but that is very different from being suitable for authors.
XML editors must make it easy to write and structure documents. Context sensitive element dialogs and validation are necessary, of course, but they are not enough, not by a long shot.
--Henrik Martensson on the xml-dev mailing list, Friday, 09 Apr 2004
Hard-to-generate inputs and hard-to-parse outputs are guaranteed to produce expletives rather than praise. I once wrote a "survey" application whose output was serialized C++ objects. A companion application, also in C++, reinstantiated the survey objects to populate a database. (The "surveys" were mailed to consumers on floppies. When the consumers mailed the floppies back, the results were retrieved.) This was a great idea until the company switched to Java. If I had added code to produce ASCII output (this was pre-XML days), I would have been cursed less often. Efficiency does less to assure a good legacy than interoperability does.
--John Reynolds
Read the rest in java.net: Coding for your own legacy [May 01, 2004]
Research indicates that 82 percent of Internet users decline to provide personal information when too many details that don't seem necessary are asked for. And 64 percent decide not to buy online because they aren't certain how their personal data might be used. High-tech firms need to wake up to the fact that sharing information without permission is bad for business. Moreover, since, on average, users abandon 20 percent of Web sites they visit due to an unsatisfactory experience, you have to wonder why more than half of high-tech firms aren't responding to questions directly posed to them. Clearly, being technologically savvy doesn't correlate directly to providing a high-quality Web site experience.
--Roger Fairchild, president of The Customer Respect Group
Read the rest in BUSINESS WIRE: The Global Leader in News Distribution
The script spectrum is inarguably a continuum, and it's a matter of how many snapshots or branches to encode, and which ones. And of course, *who* gets to make that decision. It's something to be approached with some care, but perhaps it's smarter *not* to approach it with care, since a careful, detailed study involving more than one single decision-maker is almost certain to produce nearly endless debate and no decisions!
--Mark E. Shoulson on the Unicode mailing list, Wed, 28 Apr 2004
Separating processors from syntax is why SAX is a Good Thing.
--Norman Gray on the xml-dev mailing list, Wed, 21 Apr 2004
Even in simple constructs like address, customer and so on, you need the freedom that XML provides to intermix structure with text, to nest structures, and to make structures recursive.
XML frees you from the modelling strictures created by the false dualism between objects and containers and frees you from the horrors of flattening perfectly good business constructs to fit within the strictures of normalised database tables.
A good test for whether or not an application is taking a document-centric or a data-centric approach to data modelling is *mixed content*. Mixed content cuts to the heart of the document-centric worldview and is famously ugly when represented in an OO-like modelling approach.
--Sean McGrath
Read the rest in Sean McGrath, CTO, Propylon
One powerful idiom that has become accepted and expected with XML is that, whenever at all possible, you produce precisely but accept loosely. This is a direct expansion of the IETF meta-rule that states a similar principle. Furthermore, when accepting loosely and re-emitting an existing document/object, you attempt to preserve anything originally present, even if you didn't expect it. For instance, an 'object', i.e. a complex data type, may have grown a new field. You should not die when encountering this field and if you are modifying and exporting that object, you should preserve the field. This is a very powerful way to support schema evolution, router or separation of concerns patterns, and many other cases where you do not have a fixed or completely shared schema/IDL. It is for these and other reasons that IDL-based development, fixed schemas, and native-language object representation of what could be called "network business objects" is suboptimal in terms of development and maintenance requirements.
--Stephen D. Williams on the xml-dev mailing list, Monday, 19 Apr 2004
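The "accept loosely, preserve what you didn't expect" rule falls out naturally when you edit the document tree instead of mapping it onto a fixed class. A minimal Python sketch (the element names here are invented for illustration):

```python
import xml.etree.ElementTree as ET

# An incoming "customer" record that has grown a <loyaltyTier> field
# our code never anticipated.
incoming = """<customer>
  <name>Ada</name>
  <email>ada@example.org</email>
  <loyaltyTier>gold</loyaltyTier>
</customer>"""

root = ET.fromstring(incoming)

# Modify only the field we know about...
root.find("email").text = "ada@example.com"

# ...and re-emit. The unexpected <loyaltyTier> element survives intact,
# because we never flattened the tree into a fixed set of fields.
out = ET.tostring(root, encoding="unicode")
print("<loyaltyTier>gold</loyaltyTier>" in out)  # True
```

A generated IDL binding with a fixed `Customer` class would have silently dropped the new field on round-trip, which is exactly the failure mode Williams describes.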
Steve Jobs used to talk about wouldn't it be great if software was like a radio. And then one day it was like a radio and nobody noticed. And it was the browser. My mother, who is 74, has no trouble browsing; even my father, who I think of as a technophobe, can browse. The second advantage of the browser you know about is that there's no deployment. That's a huge advantage. Because there's no deployment you don't have to bring all your people's PCs in or upgrade them all. Now software's gotten better at being adaptive and self-modifying, but that cuts both ways. I'm sick of applying my upgrades on Windows every night. And it makes me nervous that the software on my PC is constantly changing.
So I think what we want isn't a thick client, and I wasn't leading that way. But I think there will be some cases where there's a thick client. I think in general we still want to say an app is just something you point to with a URL. And you don't have to deploy it. And you can throw it out of memory at any time, and there's no code and no libraries and no interdependencies. All the great things about installation-free software that the browser gave us. And the other big thing, of course, is that if you make a change everybody sees the change. So how do I get my cake and eat it too? How can you have a model where you have a thin client just like we have today and yet it works well against the data model? And I think what you do is you have two things that you point at on the web. One thing you point at is the information and one you point at is the way you present it and interact with it. And then the browser is smarter and it knows how to cache. It already knows how to cache your pages and now it knows how to cache your information. And it knows how to do offline synch so you actually go offline and come back online and can synchronize. But other than that it's still a browser. You have to know one thing once and that's your browser. Then you just point to the URL and you run the app in the way that you do in the browser model as opposed to .EXEs.
--Adam Bosworth, BEA
Read the rest in BEA's Bosworth: The World Needs Simpler Java
SGML came out of work done in the 60s, primarily for document processing and typesetting, and has a lot of practical, hands-on features (e.g. datatag, shortref, omittag, shorttag) that may have helped adoption early on, but in the 1990s were a handicap.
--Simon St.Laurent on the xml-dev mailing list, Friday, 4 Oct 2002
A lot of people for and against XML don't take the time to consider the proper usage of XML in large documents. Those I've spoken with who hate XML, or at least feel it's used in the wrong places, usually feel that way because they are dealing with large documents and whatever application is processing them crumbles (Microsoft's BizTalk, for example, can't handle a document larger than 20 MB). And on the flipside, those for XML usually state things like XML is great for small documents, instant internet communications, etc. Both sides fail to realize that if coded properly, size matters not.
--Bryce K. Nielsen on the xml-dev mailing list, Thursday, 8 Apr 2004
XML has a few minor warts; it's still a Good Thing overall.
--Joseph Kesselman on the xerces-j-user mailing list, Wed, 7 Apr 2004
I can walk into any meeting anywhere in the world with a piece of paper in hand, and I can be sure that people will be able to read it, mark it up, pass it around, and file it away. I can't say the same for electronic documents. I can't annotate a Web page or use the same filing system for both my email and my Word documents, at least not in a way that is guaranteed to be interoperable with applications on my own machine and on others. Why not?
--Eugene Eric Kim
Read the rest in A Manifesto for Collaborative Tools
XML DSIG and XML-Encryption are based on the XPath model. Since I think signatures and encryption are crucial to the deployment of web services, and since I think it'll be a cold day in h... before the security folks get together to revise those specs to use the Infoset model, I tend to view SOAP 1.2 and its ilk as more DOA than SOA.
--Rich Salz on the xml-dev mailing list, Tuesday, 13 Apr 2004
I remain unconvinced that XML tools are usually the right prism through which to view such a serialized RDF document. Consider a document containing the serialization of an RDF triple that expresses [Noah(known by his URI), isAuthorOf, Document(known by its URI)]. While you can use XPath, XSL, and/or XQuery on the XML serialization, it's not clear to me that this is the architecturally robust way to extract the author for the document. It seems that if the data is fundamentally RDF, you usually want a query mechanism that's aware of RDF and triples. Similarly, I don't think that XML schema (or WSDL) would be a first class way of enforcing the rule that every such document must have a statement of authorship. It seems to me that using XML tools on such RDF serializations is about like using Grep on XML documents; both are useful tricks at times, but Grep is not usually the right way to extract information from an XML document and XML tools are not RDF tools.
--Noah Mendelsohn on the www-tag mailing list, Wed, 17 Mar 2004
Java objects have an awful lot of built-in memory overhead just for the java.lang.Object base class, and if you naively create a separate object for every element, attribute, attribute value, text chunk, and so on, you end up with a very large in-memory data structure. Memory aside, Java object creation and deletion is also very slow (that's why it takes so long to load an XML document into a DOM).
--David Megginson on the XML Developers mailing list, Tuesday, 06 Apr 2004
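One common workaround, implicit in Megginson's complaint, is to stream instead of building the whole tree. In Python terms (ElementTree's iterparse rather than a Java DOM, so this is an analogy, not his setup):

```python
import io
import xml.etree.ElementTree as ET

# 10,000 elements, but we never hold more than a handful alive at once.
big = io.StringIO("<log>" + "<entry>x</entry>" * 10000 + "</log>")

count = 0
for event, elem in ET.iterparse(big, events=("end",)):
    if elem.tag == "entry":
        count += 1
        elem.clear()  # discard the subtree as soon as it's processed

print(count)  # 10000
```

The point is the same in any language: object-per-node costs dominate, so lazy or streaming processing sidesteps them entirely.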
XLink is too flexible (and as a consequence too verbose) to be successful in a culture that emerged from a/@href. Wouldn't something along the lines of xlink:href + xlink:src + xlink:ref (and perhaps an xlink:multi to flag multi links) be sufficient for the needs of most, far simpler to use than the current XLink (for simple links), and more likely to succeed? It does nothing to address all the complex linking needs, but those would be easier to build later once at least the simple parts of the spec are successful.
--Robin Berjon on the xml-dev mailing list, Friday, 26 Mar 2004
the XML infoset is too granular to represent the level of abstraction at which most real world applications deal with XML. Most real world applications use an abstraction of XML that is more akin to a subset of the XPath data model (elements, attributes, and text nodes).
--Dare Obasanjo on the xml-dev mailing list, Monday, 12 Apr 2004
In my view, the most important lesson to learn from SGML is not the syntax but the concept of generic markup. Generic markup means describing things in terms of their semantics rather than their appearance. Generalizing, the lesson is to keep your data as independent as possible of assumptions about how you are going to process it. The way to do this for XML is to focus on this minimal labelled-tree abstraction. The more you build alternative abstractions on top of that and look at XML instead as a serialization of some other abstraction such as an object or a remote procedure call the more you build in assumptions about how components will process XML and the less rationale there is for using XML.
--James Clark
Read the rest in RELAX NG, O'Reilly & Associates, 2003, p. ix
Let's keep things in perspective. Microsoft's unethical business practices should be put into context. Unlike the pharmaceutical cartel or arms manufacturers, Redmond doesn't overturn democracies or kill thousands of civilians; unlike News Corporation it doesn't debase social discourse or undermine language. Unlike Google, it doesn't pretend to present "all the world's knowledge", when most of the world's knowledge isn't even on the Internet. Microsoft simply makes some fairly mediocre software and charges a lot for it.
--Andrew Orlowski
Read the rest in The Register
XLink simply doesn't fit into the layering of the XML architecture. The whole point of XML is that you can choose any names you like for your objects and attributes, and give them any semantics that you like (typically captured in schemas and stylesheets). So why should relationships be different from objects and attributes, and require fixed names and fixed semantics?
Hyperlinking is something that belongs in the user interface layer, not in the stored information. The stored information needs to hold relationship information in a much more abstract form. The hyperlinks, like all other user interface objects, should be generated by the stylesheet. It's because the hyperlinking community failed to recognize this that the idea failed to catch on.
--Michael Kay on the xml-dev mailing list, Friday, 26 Mar 2004
What I find most frustrating isn't bad software -- it's situations where we tell a company about a serious problem, but they decide to ignore it because we're under an NDA and therefore the problem won't hurt sales. If your company is knowingly advertising an insecure or untrustworthy product as secure, try to do something about it. Intentionally misleading customers is illegal, immoral, and a gigantic liability risk. (Keywords: Enron, asbestos, cigarettes.)
It's also frustrating that users keep buying products from companies that make misleading or unsupported claims about their security. If users won't pay extra for security, companies are going to keep selling insecure products (and our market will remain relatively small :-)
--Paul Kocher
Read the rest in Slashdot | Security Expert Paul Kocher Answers, In Detail
The "mature and ubiquitous" SOAP and WSDL are still in an extraordinarily confused state after several years of development. After starting off down the path of rpc/enc in SOAP 1.1, which basically provided an XML representation for simple data object graphs, this approach has now effectively been dropped in favor of doc/lit, which uses a schema description of data and leaves the interpretation of that data (as objects or whatever) to the applications. However, WS-I Basic Profile apparently couldn't swing enough support to *only* support doc/lit, so we're left with the alternative of rpc/lit also a part of the profile (neither one required, both allowed). This is not currently used by much of anybody, but at least some implementations (JAX-RPC included) plan to add this just because it's allowed by the profile.
--Dennis Sosnoski on the xml-dev mailing list, Sunday, 04 Apr 2004
Like a stored table, the result of a relational query is flat, regular, and homogeneous. The result of an XML query, on the other hand, has none of these properties. For example, the result of the query "Find all the red things" may contain a cherry, a flag, and a stop sign, each with a different internal structure. In general, the result of an expression in an XML query may consist of a heterogeneous sequence of elements, attributes, and primitive values, all of mixed type. This set of objects might then serve as an intermediate result used in the processing of a higher-level expression. The heterogeneous nature of XML data conflicts with the SQL assumption that every expression inside a query returns an array of rows and columns. It also requires a query language to provide constructors that are capable of creating complex nested structures on the fly -- a facility that is not needed in a relational language.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com
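Chamberlin's "red things" example is easy to make concrete. Here it is against a toy document, using Python's ElementTree (whose limited XPath subset is enough for this query):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<things>
  <cherry color="red"><variety>Bing</variety></cherry>
  <flag color="red"><country>Japan</country><shape>rectangle</shape></flag>
  <sign color="red" sides="8">STOP</sign>
</things>""")

# "Find all the red things": the result is heterogeneous -- three
# elements with different names and different internal structure.
red = doc.findall(".//*[@color='red']")
print([e.tag for e in red])  # ['cherry', 'flag', 'sign']
```

No relational result set can carry that mix without first flattening it into rows and columns, which is exactly the mismatch Chamberlin is pointing at.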
Anyone remember the (dare I say "good") old days when people wrote useful software that did useful things, rather than writing incomprehensible specifications?
It's not a "standard" till it's implemented and in wide usage. So it does make me wince when these specifications are blessed as "standards" even before they are fully-baked (or half-baked if you prefer) and before there are usable implementations and/or wide-spread adoption.
--Andrzej Jan Taramina on the xml-dev mailing list, Sunday, 04 Apr 2004
I am very much in favour of XML recommendations being accompanied by reference implementations whenever possible. If the people who put the recommendation together can't implement it, then who can? If it is too expensive and time consuming for them to do it, then maybe the recommendation is too complex, or simply not useful enough.
--Henrik Martensson on the xml-dev mailing list, Sunday, 04 Apr 2004
I'm not at all sure that starting by building an ontology is a wise investment. As many of you are aware, I've spent the last 1 1/2 years working on the Web Services Architecture group at W3C, which might be characterized as an attempt to define a [relatively informal, although we are toying with an OWL representation] ontology for web services concepts, terminology, and concrete instantiations. It is, to put it bluntly, a bitch: what appears to be common sense to one person or organization is heresy to another; the meaning of simple words such as "service" tend to lead to infinitely recursive definitions; and just when you think you start to understand things with some degree of rigor, a new analyst/pundit/consultant fad comes out of left field to confuse things all over again. Imposing ontologies up front is as politically/economically impossible as imposing the One True Schema, and defining them post hoc is difficult and inevitably incomplete/partially inaccurate.
--Mike Champion on the xml-dev mailing list, Friday, 19 Sep 2003
It’s no secret that the original XML cabal was a bunch of publishing geeks, and we thought we were building a next-generation general-purpose document format. Except for, the XML world charged off after B2B and Web Services and transactions and RSS and so on, and hey that’s fine, it works well for all those things. But what we had in mind is more or less exactly what OpenOffice is doing.
--Tim Bray
Read the rest in ongoing -- OpenOffice
because all XML files are text files (despite the best efforts of some people to change that) they're particularly portable between operating systems.
--Bob DuCharme on the xml-dev mailing list, Friday, 12 Mar 2004
XML standardizes only a syntax, but if you constrain XML documents directly in terms of the sequences of characters that represent them, the syntactic noise is deafening. On the other hand, if you use an abstraction that incorporates concepts such as object orientation that have no basis in syntax, then you are coupling your XML processing components more tightly than necessary. What then is the right abstraction? The W3C XML Infoset Recommendation provides a menu of abstractions, but the items on the menu are of wildly differing importance.
I would argue that the right abstraction is a very simple one. The abstraction is a labelled tree of elements. Each element has an ordered list of children in which each child is a Unicode string or an element. An element is labelled with a two-part name consisting of a URI and local part. Each element also has an unordered collection of attributes in which each attribute has a two-part name, distinct from the names of the other attributes in the collection, and a value, which is a Unicode string. That is the complete abstraction. The core ideas of XML are this abstraction, the syntax of XML, and how the abstraction and syntax correspond. If you understand this, then you understand XML.
--James Clark
Read the rest in RELAX NG, O'Reilly & Associates, 2003, pp. ix-x
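Clark's complete abstraction really does fit in a few lines of code. A Python sketch (the class and variable names are mine, not his):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Union

Name = Tuple[str, str]  # two-part name: (namespace URI, local part)

@dataclass
class Element:
    name: Name
    # Unordered attributes: two-part name -> Unicode string value.
    attributes: Dict[Name, str] = field(default_factory=dict)
    # Ordered children: each child is a Unicode string or an Element.
    children: List[Union[str, "Element"]] = field(default_factory=list)

XHTML = "http://www.w3.org/1999/xhtml"
# <p>Hello, <em>world</em></p>
tree = Element(
    name=(XHTML, "p"),
    children=["Hello, ", Element((XHTML, "em"), children=["world"])],
)
print(tree.children[1].name)  # ('http://www.w3.org/1999/xhtml', 'em')
```

Everything else in the Infoset menu (comments, processing instructions, entity boundaries) is deliberately absent from this minimal model.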
The perceived slowness is because Xerces is a conformant XML parser. "Faster" XML parsers usually gain from not implementing validation or only supporting a limited number of character encodings. So keep this in mind when evaluating parsers and pick the parser for your application appropriately.
--Andy Clark on the xerces-j-user mailing list, Sunday, 07 Mar 2004
PowerPoint can make almost anything appear good and look professional. Quite frankly, I find that a little bit frightening.
--David Byrne
Read the rest in Wired 12.04: The 2004 Wired Rave Awards
The data itself shouldn't be tied to /any/ sort of mechanism for displaying itself, nor should it be self-aware of how it might be used, because by doing so you pigeonhole the data into use in a single context or application.
--Eric Hanson on the xml-dev mailing list, Saturday, 27 Mar 2004
My two cents: I think XLink should be scrapped (even though it has my name on it - *sigh*). The XSL and CSS Working Groups should be made to sit down with a new XLink WG, and forced to hammer out the stylistic and behavioral aspects of good hyperlinking. And then the XLink folks should make a specification that describes relationships, not behavior. This should've happened years ago.
If such stylistic and behavioral support existed in the first place, maybe we'd be seeing a better quality of hyperlinking on the Web, instead of the ugly morass of scripting tricks we're seeing now.
--Ben Trafford on the xml-dev mailing list, Friday, 26 Mar 2004
Many news aggregator applications have "support" for RSS 1.0, using naïve XML parsers. However, if the RDF of the feed is serialized using a triple-oriented format analogous to TriX, most news aggregators would break. The whole ecosystem works, for now, because producers of the RSS 1.0 feeds are careful to emit files that conform to the XML format that the aggregators expect. In other words, RSS 1.0 claims to be an RDF vocabulary, but in practice it ends up being an XML schema.
--Joshua Allen on the www-tag mailing list, Thursday, 18 Mar 2004
Programming with libxml2 is like the thrilling embrace of an exotic stranger. It seems to have the potential to fulfill your wildest dreams, but there’s a nagging voice somewhere in your head warning you that you’re about to get screwed in the worst way.
Libxml2 is fast. I mean insanely fast. Nothing else even comes close. It is insanely fast and insanely compliant with all the specifications that it claims to support, and it is getting faster while gaining more features. So you just know that somewhere, someone is selling their soul to somebody, and you just hope it isn’t you.
--Mark Pilgrim
Read the rest in Beware of strangers [dive into mark]
when a 'system' has that many users, cheap convenient pet tricks recoup very large costs. Those costs come in many forms including the potential wrangling over the holy brackets (Shall We Make Curly or Pointy Holy?). But even then, the impedance mismatch that a syntax specification with a namespace and a structure can cause creates very real headaches. XML is the winner of the 'pick one' contest. Syntax can be very important. Is it important for everyone to pick one? No, but it is cheaper and convenient.
--Claude L (Len) Bullard on the www-tag mailing list, Monday, 27 Oct 2003
An even better analogy is putting XML into RDBMS by shredding the documents into tables and columns. You can make it work, and with a little care, you can make it extremely fast (i.e. you can avoid joins for tree-based operations), but the fundamental models *are* different, and there *is* impedance. Without careful design, those issues often come back and bite you in unexpected ways.
--Gavin Thomas Nicol on the www-tag mailing list, Friday, 19 Mar 2004
"if I serialize my XML carefully (no comments or no CDATA sections perhaps), it will be a bit easier to use Grep to reliably extract information from my files". True, and that might be a handy thing to do, but Grep really doesn't properly navigate the structure or model of an XML document.
--Noah Mendelsohn on the www-tag mailing list, Thursday, 18 Mar 2004
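Mendelsohn's caveat is easy to demonstrate. In this Python sketch a grep-style regex is fooled by a commented-out decoy and blind to a start-tag that spans two lines, while a real parser sees the document's structure rather than its characters:

```python
import re
import xml.etree.ElementTree as ET

doc = """<book>
  <!-- <title>Draft title</title> -->
  <title
      lang="en">Real title</title>
</book>"""

# Naive grep-style extraction: matches the decoy inside the comment and
# misses the real element, whose start-tag is split across lines.
grep_hits = re.findall(r"<title>(.*?)</title>", doc)

# A parser navigates the model, not the serialization.
parsed = ET.fromstring(doc).findtext("title")

print(grep_hits)  # ['Draft title'] -- the commented-out decoy
print(parsed)     # Real title
```

CDATA sections, entity references, and attribute defaults all create the same trap: the serialization admits variations that only a parser normalizes away.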
Recent versions of Word claim "save as XML" features of one kind or another. Maybe that "claim" is too harsh; they do create well-formed XML documents, after all. But it's XML of a spectacularly hideous form, even for simple documents -- nearly as gnarly and impenetrable to the human eye as XSL-FO.
--John Simpson
Read the rest in XML.com: From Word to XML [Dec. 30, 2003]
On the subject of error handling, one of my biggest hassles in making Saxon portable across different XML parsers has been differences in the way they handle an exception thrown by a callback such as startElement. They vary in whether or not such an exception is notified to the ErrorHandler, and they vary in whether it re-emerges intact as an exception thrown by the parse() method or whether it gets wrapped in some other exception. The specs, of course, are very unspecific on such points.
--Michael Kay on the xml-dev mailing list, Tuesday, 24 Feb 2004
What really intrigues me is that for all the theoretical interest in semantic approaches to search/discovery/analysis over the past few years, the actual advances in practical applications seem to come from metadata generation and pattern matching (Google), dirt-simple fuzzy or Bayesian classifiers (e.g. Spam Bayes), and brute force "kitchen sink" combinations of it all (e.g. IBM "WebFountain, AFAIK http://www.almaden.ibm.com/WebFountain/). I'm willing to bet that there is some good synergy between ontologies and the brute-force stuff -- for example I would like to be able to give Spam Bayes some knowledge of my world, e.g. I never spam myself, or a message with no recognizable words in it is almost certainly spam. Still, I see the "dumb" approaches working every minute of every day (about how often I get spam!) and I'm not seeing the real world success stories for the "smart" approach.
--Mike Champion on the xml-dev mailing list, Friday, 19 Sep 2003
The strengths of XML, etc. are not in computer science. Rather, XML's strengths are in the *human sciences* of sociology, psychology, and political science. XML offers us no concepts or methods that weren't completely understood in "computer science" long before ASN.1 was first implemented in the early 80's. From a "computer science" point of view, XML is less efficient, less expressive, etc. than ASN.1 binary encodings or the encodings of many other systems. However, because XML uses human readable tag names, because it is text based, easy to write, has an army of evangelists dedicated to it and many freely available tools for processing it, etc., XML wins in any system that values the needs of humans more than those of the machines.
XML's ability to "win" in the human arena has enabled a great outburst of computer science as a result of the greater interchange of information and the increased ease of interchange. However, this great outpouring of utility and enablement of new computer science work has been at the cost of accepting an interchange format which is "inferior" from the point of computer science. Of course, I think most of us accept that this cost is an acceptable one and a small price to pay in most cases.
--Bob Wyman on the xml-dev mailing list, Saturday, 14 Feb 2004
One of my favorite things about my Linux-based Sharp Zaurus PDA is that XML is its native format for its address book, calendar app, etc. I can put it in its cradle, ftp the files to a Windows machine, and do anything with them that I'd do with any other XML file.
--Bob DuCharme on the xml-dev mailing list, Friday, 12 Mar 2004
I don't buy the argument that programmers benefit from a Web Services toolkit. Such things do not build applications -- at most they automate the production of security holes. Getting two components to communicate is a trivial process that can be accomplished using any number of toolkits (including the libwww ones). The difficult part is deciding what to communicate, when to communicate it, in what granularity, and how to deal with partial failure conditions when they occur. These are fundamental problems of data transfer and application state.
--Roy T. Fielding
Read the rest in Adam Bosworth's Weblog: Learning to REST
IE is such a poor excuse for a browser that you won't be able to do much with CSS. IE only does tagsoup rendering; you're feeding caviar to the pigs.
--Robin Berjon on the xml-dev mailing list, Thursday, 04 Mar 2004
The different XML usage patterns, semi-structured documents and rigidly structured data, typically have different requirements when it comes to accessing the content within XML documents as typed data.
Consumers of such XML documents that contain rigidly structured data often want to consume the documents as strongly typed XML. Specifically, such applications tend to map the elements, attributes, and character data within the XML document to programming language primitives and data structures so that they can better perform operations on them. This mapping is usually done using either an XML Schema or a mapping language. Listing 1 is an example of a W3C XML Schema document that describes the strongly typed view of the XML document.
Consumers of semi-structured XML documents typically want to consume the documents as weakly typed or untyped content presented as an XML data model. In such cases XML APIs that emphasize an XML-centric data model, such as DOM and SAX, are used to process the document. An XML-centric view of such semi-structured documents is preferable to an object-centric view because such documents typically use features peculiar to XML, such as mixed content and processing instructions, and because the order in which elements occur within the document is significant.
--Dare Obasanjo
Read the rest in XML-Journal - Can One Size Fit All?
If you happen to use anything but the latest Microsoft browser on the latest Microsoft operating system, there’s a fair chance you’ll be unable to complete a purchase at many sites. For example, there’s some shopping system supplier that has invisible “Buy” buttons when Macintosh users try to hand over their dough, even using the latest Internet Explorer.
True, most people have Microsoft systems, but not everybody does, and the color of several million Mac-user and Linux-user credit cards is the same as that of Windows users.
The purveyor of this popular shopping system is clearly at fault for not testing its system properly, but what of their many victims, the retail websites that are losing millions because of it? No matter what kind of outsourcing you may do, you need to have an active program to verify the outside party is doing their job.
--Bruce Tognazzini
Read the rest in AskTog: Top 10 Reasons to Not Shop On Line
Perhaps the longest-standing principle in the design of XQuery is that XQuery should be a functional language incorporating the principle of compositionality. This means that XQuery consists of several kinds of expressions, such as path expressions, conditional expressions, and element constructors, that can be composed with full generality. The result of any expression can be used as the operand of another expression. No syntactic constraints are imposed on the ways in which expressions can be composed (though the language does have some semantic constraints). Each expression returns a value that depends only on the operands of the expression, and no expression has any side effects. The value returned by the outermost expression in a query is the result of the query.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com
the Web is messy behind the scenes, but I can still use it to read film reviews, keep up to date with news stories, buy books, check weather, reserve hotel rooms, file flight plans (in the U.S., anyway), download software, and so on.
I haven't heard of anyone doing things like that with Xanadu or HyTime-based systems, much less XML+XLink: that's the ultimate testament for 99:1 or 100:0 designs.
--David Megginson on the XML Developers List, Thursday, 04 Mar 2004
The ultimate testament for 80/20 designs is that 80% of the time the 80/20 is 80% subjective, caused by a lack of vision and/or a bad design and just a poor excuse to refuse legitimate features!
--Eric van der Vlist on the xml-dev mailing list, Thursday, 04 Mar 2004
Well, as all good XML developers know, there are two main (could also be called "standard") methods to parse your XML: using the DOM, which loads the whole document tree into memory and is (relatively) easy to use, or via SAX, which is extremely fast and doesn't have a large memory footprint, but requires a bit of (repetitive) coding. Since the majority of .Net (and previously VB/MSXML) developers didn't need the performance of SAX, we typically used the DOM method to parse XML in our applications. When Microsoft rolled out .Net, they didn't include a SAX parser for .Net, but something just as fast (and just as complicated to code against) called the XmlReader (which is basically the pull-style equivalent of the push-style SAX parser), and .Net developers still had basically two ways to parse XML, and most developers still used the DOM. If you wanted to parse some XML, you used the DOM. If you had to worry about memory or performance, you used one of the XmlReaders. Life was good, and as developers we fell into a DOM induced coma.
--DonXML Demsak
Read the rest in Waking Up From a DOM Induced Coma
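The DOM-versus-pull-parser trade-off Demsak describes is not specific to .Net. As a rough sketch of the same two styles using Python's standard library (document and element names invented for illustration): the DOM-style parse builds the whole tree in memory, while the pull-style loop streams events and discards elements as it goes.

```python
import io
import xml.etree.ElementTree as ET

doc = b"<orders><order id='1'/><order id='2'/><order id='3'/></orders>"

# DOM-style: parse the whole document into an in-memory tree, then navigate it.
tree = ET.parse(io.BytesIO(doc))
ids = [o.get("id") for o in tree.getroot().findall("order")]
print(ids)  # -> ['1', '2', '3']

# Pull-style: stream parse events one at a time, clearing each element after
# use, so memory stays flat no matter how large the document grows.
ids = []
for event, elem in ET.iterparse(io.BytesIO(doc), events=("end",)):
    if elem.tag == "order":
        ids.append(elem.get("id"))
        elem.clear()  # release the element's children from memory
print(ids)  # -> ['1', '2', '3']
```

For a three-element document the difference is invisible; for a multi-gigabyte feed, only the second style keeps working.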
not using XML when you claim to be creating XML has real costs that demand examination of whether using XML in the first place is reasonable. InkML appears to my eye to flunk this test with flying colors.
--Simon St.Laurent on the xml-dev mailing list, Thursday, 21 Aug 2003
pretty well all the characters from ASCII and EBCDIC and JIS and KOI8 and ISCII and Taiwan and ISO 8859 made it into Unicode. So at one level, it's reasonable to think of all these things as encodings of Unicode, if only of parts of Unicode. XML blesses this approach, and allows you to encode XML text in any old encoding at all, but doesn't provide a guarantee that software will be able to read anything but the standard Unicode UTF encodings.
--Tim Bray
Read the rest in ongoing Characters vs. Bytes
I don't have mainstream radio to count on anymore - they won't play my stuff. The Internet is the new radio. To tell the stories I want to tell, I have to use everything that's available and use it all at once. I have to go through a lot to make sure people won't perceive it as just a Neil Young record, because everybody thinks they know what that is. The challenge is to remind them that they have no idea what the hell that is.
--Neil Young
Read the rest in Wired 12.03: The Reinvention of Neil Young, Part 6
Nattering nabobs of negativism will doubtless be glad to note that XML 1.1 parsers MUST support XML 1.0 as well, and that human and mechanical XML generators SHOULD generate XML 1.0 unless there is a specific reason to generate XML 1.1.
--John Cowan on the xml-dev mailing list, Wed, 4 Feb 2004
The problem right now is that there is a moral hazard to filing a bogus patent claim: you don't face any serious consequences if you lose, so why not take a chance? In many ways, it's exactly analogous to spamming. If a company could stand to lose millions (or more) because of an irresponsible patent, the system might work a little better.
--David Megginson on the XML Developers mailing list, Friday, 13 Feb 2004
Rule 1 of writing software for nontechnical users is this: if they have to read documentation to use it you designed it wrong. The interface of the software should be all the documentation the user needs. You'd have lost the non-techie before the point in this troubleshooting sequence where a hacker like me even got fully engaged.
--Eric S. Raymond
Read the rest in The Luxury of Ignorance: An Open-Source Horror Story
Clearly, standards are important. But just as clearly, having your swanky new Web Services stack from Microsoft or IBM anointed as a standard is not going to save you if it turns out the technology has fundamental flaws. Sometimes I think I should keep a poster of the OSI Network Model over my desk just to stay clear on this key point.
--Dan Milstein
Read the rest in Edge East 2004 - A Skeptic's Tour
Microsoft's rich XML-based but IE-specific user interface language (XAML) is their client-side weapon. Doesn't the non-Microsoft world have anything similar? Well, if Microsoft releases an "innovation", it's a safe bet that someone else thought of it first. Sure enough, the inspiration for XAML can be seen to be an Open Source rich client framework from the Mozilla Foundation, called XUL.
XUL, at the most superficial level, provides richer widgets than HTML, such as scrollable tables, collapsible trees and tab folders. It is a brilliantly-architected rich client that happens to be thin as well, because it's based on XML. Microsoft, as usual, has seen the light early, and has "innovated" XAML based on XUL's pioneering features, but Sun and the rest of the industry are still caught in a user-interface time warp.
--Ganesh Prasad
Read the rest in Linux Today - Community: Beyond an Open Source Java
Honestly, the biggest hurdle is not technical. The biggest hurdle is the lack of effective laws. The biggest reason we still have a spam problem in the United States is that spam is 100 percent legal, with some very minor exceptions. If you send spam with forged headers, that's illegal, but forgery has been illegal all along.
But if your headers aren't forged, and your message starts by saying, "This is spam," and two-thirds of the way through the message it tells you that you can go to a Web site and push buttons to get off the spammer's list--that's legal. ISPs can say, "Well, our customer is complying with the federal law, and the fact that he sent you 14,000 messages you don't want isn't our problem."
If we had effective spam laws, we would be able to get the spam situation under control. It's just like the fax situation. In the 1990s, people's faxes were full of advertisements. Congress passed a very simple law stating that you cannot advertise by fax to people who haven't asked for it. This hasn't completely gotten rid of junk faxes, but it has kept our fax machines usable. Until we have a legal environment like that, we are just going to have this continuing cat-and-mouse game with spammers.
--John Levine
Read the rest in Finding a way to fry spam | CNET.com
XML is "fractally complex". At the highest level it appears only slightly complicated, but as you dig deeper you discover increasing complexity, and the deeper you go the more complicated it continues to become. Trying to be faithful to the XML standards while staying easy to use and intuitive was a definite challenge.
--Jason Hunter
Read the rest in Servlets.com Weblog: JDOM Hits Beta 10
I built IE 4 and built the DHTML and built the team that built it. And when we were doing this we didn't fully understand these points. And one of the points was people use the browser as much because it was easy to use as almost anything else. In other words I'd talk to customers and say we can add to the browser all these rich gestures. We can add expanding outlines and collapsing and right click and drag over and all that—all the stuff you're used to in a GUI. And without exception the customer would tell me please don't do that, because right now anyone can use the sites we deploy and so our training and support costs are minimal because it's so self-evident what to do. And if you turn it into GUI we know what happens, the training and support costs become huge. So one of the big values of the browser is its limits.
--Adam Bosworth, BEA
Read the rest in BEA's Bosworth: The World Needs Simpler Java
I have written two books on, or partially on, XQuery, in the last year. My take on it is that in order to do anything of interest, you need to know XPath to a fairly solid degree. By the time you get there, XSLT is more expressive and capable than XQuery.
--Kurt Cagle on the xsl-list mailing list, Saturday, 21 Feb 2004
It is harmful to allow producing incorrect results in the name of "better performance".
In fact, the best speed for producing wrong results should be as close to zero as possible. We should always do whatever is possible to decrease the speed of producing wrong results.
--Dimitre Novatchev
Read the rest in RE: [DM] IBM-DM-105: Order of comments, PI's and text given [schema normalized value] property from Dimitre Novatchev on 2004-02-18 (public-qt-comments@w3.org from February 2004)
XSLT is *not* an angle-bracket processor, it is a node processor where the nodes usually (but not always) happen to come from and go to XML angle brackets. An out-of-line system is going to require users to consider syntactic issues rather than let the processors consider syntactic issues. XSLT relieves the users of this and lets people focus on their information, not on their syntax.
The designers of XSLT make this claim up front and don't try to hide it: XSLT was not designed to preserve or manipulate the syntax of a document, it was designed to be totally general purpose with the information structure of a document: when the document is used by a processor downstream, the choice of syntax is irrelevant as long as it is correct.
--G. Ken Holman on the xml-dev mailing list, Friday, 20 Feb 2004
Infoset is a concept unknown to XML 1.0, and parsing is the necessary first step of *every* instance of XML 1.0 processing. Many things might be built upon the output of a particular XML 1.0 parse, including a tree, or a graph, or the abstraction into Infoset form of the data items identified in that parse. Yet it seems to me that an 'infoset' would not usually be the desired final product of processing an XML instance, in large part because such an 'infoset' is a terminal output product: it cannot be passed or pipelined into any other context because it is utterly specific to the circumstances in which it is produced.
What is required if the output of particular XML processing is to be passed to other XML processing is a document, which of course will first be parsed before any other processing is performed on it in that new environment. It is this conveyance of XML instances from context to context which is so well suited to the internetwork topology, and particularly to the Web-as-we-know-it. Instances are available as entity bodies which we may GET at a particular URL, process, and then republish as new instances at other URLs.
--W. E. Perry on the XML DEV mailing list, Wed, 14 Jan 2004
APIs should not be exempt from fundamental usability principles. I'm coming from the selfish perspective of one who has to actually learn to *use* these things, and, more recently, as someone who has to try to help others do the same. And if for no other reason than to make myself feel better, I'm going to suggest that if you have real trouble working out an API and you're thinking that it REALLY shouldn't be that hard... perhaps it really isn't YOU. So before you start questioning how on earth they ever gave YOU a PhD in astrophysics when you can't even work out one frickin' interface, take a deep cleansing breath and repeat this little positive affirmation: "It's not me. I AM smart. I have the <test_scores/degree/certification/hairstyle> to prove it. But oh what I wouldn't give to wrap my hands around the neck of the one who designed this API..."
--Kathy Sierra
Read the rest in To API designers/spec developers: pity those of us who have to LEARN this...
The initial buzz around SOAP was all based on the rpc/encoded use (or Microsoft's "wrapped" variation) where method calls were "transparently" exposed as web services. Now that that's effectively deprecated in WS-I BP 1.0 (for good reason) SOAP currently offers very little (if any) functionality beyond what can be done using direct XML interchange over HTTP. What *is* interesting in the web services area is WSDL, but WSDL doesn't require SOAP. And UDDI has always struck me as a solution in search of a problem - I've yet to see any practical applications that couldn't be handled just as easily with a simple web page directory of services.
--Dennis Sosnoski on the xml-dev mailing list, Saturday, 14 Feb 2004
most useful innovations tend to come from visionaries who don't fully understand the complexities of what they're unleashing on the world, and not the experts who are focusing on the details. That's more or less Clayton Christensen in a nutshell -- the experts were doing the "sustaining innovations" to make faster and more complicated mainframes while the Jobs/Wozniaks of the world were screwing around in their parents' garages creating the "disruptive innovations."
I'm reminded of the (possibly apocryphal) story that Tim Berners-Lee was ignored or scorned by the hypertext community of the late 80's / early 90's because his stuff was so trivial and didn't address the interesting problems. Of course, by ignoring the interesting problems he could deliver something that actually added value vastly disproportionate to the cost of seeing 404 messages now and then.
--Michael Champion on the xml-dev mailing list, Friday, 9 Jan 2004
As it applies to XML, ironically, Postel's Law would be difficult, even impossible, to observe if it were not exactly for the relatively stark clarity afforded by the definition of XML well-formedness. (That allegation that XML's creators "broke Postel's Law" misses the point that Postel was writing a *specification*, after all.) Imagine if XML were woolier and wafflier than it is, that its corner cases had not been fully explored and discoverable in the archives of this list. What would being liberal or conservative mean then? Among the endless debates over what was in and what was out, the horn would be sounded for being liberal in what you accept from others, and for all to play, we would have to accept more and more. Soon enough we would have bloatware and vendor lock-in (hey, where have we seen that?). Far from a wise counsel that we should work together, Postel's Law would be a recipe for disaster, if the hawks could not keep insisting that "if it's not well-formed, it's not XML". Well-formedness, however bizarre or arbitrary it may seem in some respects (no unescaped less-than signs in attribute values! no slashes in tag names!) is not just a religion, it is the placing of a boundary. If it parses as XML, well good, go ahead and move on. If it doesn't, all bets are off. Not a threat, but simply a statement of fact, that stuff that doesn't lie about what character encoding it uses (to use an example actually cited), is going to be more predictable and less troublesome in general than stuff that does.
--Wendell Piez on the xml-dev mailing list, Friday, 16 Jan 2004
Different applications have different optimization requirements and thus it is unlikely that a single binary XML standard will satisfy all scenarios (we're pretty sure it won't satisfy all the scenarios of the various individual Microsoft products) given that in some cases they are conflicting. Even if it were the case that a single binary XML standard could somehow satisfy all scenarios and not end up turning into something like W3C XML Schema, there is still the fact that this poisons the well with regards to the interoperability of XML on the Web. Given both these points we are against standardizing on binary XML format(s).
--Dare Obasanjo, Microsoft, on the xml-dev mailing list, Tuesday, 25 Nov 2003
We found that a lot of people try to cut corners on the Internet, because customer service is expensive. They think, oh, it's online, it's self-service. But that's a mistake. Online, people need help more, not less, and the human element is something that can be really a great differentiator for a brand.
--Lauren Freedman, E-tailing Group
Read the rest in Online Shopper: Toll-Free Apology Soothes Savage Beast
Do you believe it's impossible to build a generic data model & format that could be used in place of the myriad of formats you talk about there? A format which could be used, for example, to describe the state of a lightbulb as well as the state of a business process? IMO, not only do I believe this is possible, I believe we have it in our grasp today; it's called RDF (Topic Maps would also work, but they're not as Web-friendly).
--Mark Baker
Read the rest in Adam Bosworth's Weblog: Learning to REST
In my explorations, ASN.1 toolkits felt more to me like data-binding kits than XML parsers. There doesn't seem to be much notion of anything like an "ASN.1 infoset", a set of containers and properties you can explore without necessarily knowing the bindings. ASN.1 feels effectively schema-driven, designed from the outset to be optimized for a world where processes are tightly bound. There aren't general ASN.1 "parsers" in the same sense that there are XML parsers, or at least there weren't last time I looked.
Folks who actually care about XML per se are often looking for looser bindings. ASN.1 chafes against the kinds of assumptions that are common in XML, like that I might conceivably work on found documents with no accompanying metadata.
--Simon St.Laurent on the xml-dev mailing list, Friday, 3 Oct 2003
I would have been very happy to pay a few euros for, say, an editor with the features of oXygen. Unfortunately, on a powerful machine it is simply unusable. One minute to validate a thirty-line document against a simple schema in a separate thread is a bit more than 59 seconds too long.
--Robin Berjon on the xml-tech mailing list, Tuesday, 10 Feb 2004
the key thing to understand is that the value of the Web as a whole increases each time a new resource is made available to it. The systems that make use of that network effect gain in value as the Web gains in value, leaving behind those systems that remain isolated in a closed application world. Therefore, an architectural style that places emphasis on the identification and creation of resources, rather than invisible session state, is more appropriate for the Web.
--Roy T. Fielding
Read the rest in Adam Bosworth's Weblog: Learning to REST
My belief is that the failings of RSS are so great and that the quality of service we'll be able to provide with Atom feeds is so much greater than what we can currently provide, that RSS use will fall off rapidly once Atom becomes established. Users will demand it.
--Bob Wyman on the xml-dev mailing list, Friday, 6 Feb 2004
if your site has won a web graphics design award, you are likely in serious need of a redesign. You are likely featuring something useless but pretty or you wouldn’t have won it. Your job is to move product, not to win awards. Useless but pretty not only slows up transfer, it dazzles the customer and draws him or her away from an appreciation of the product you're in business to sell. Sales 101.
--Bruce Tognazzini
Read the rest in AskTog: Top 10 Reasons to Not Shop On Line
Relational data is regular and homogeneous. Every row of a table has the same columns, with the same names and types. This allows metadata -- information that describes the structure of the data -- to be removed from the data itself and stored in a separate catalog. XML data, on the other hand, is irregular and heterogeneous. Each instance of a Web page or a book chapter can have a different structure and must therefore describe its own structure. As a result, the ratio of metadata to data is much higher in XML than in a relational database, and in XML the metadata is distributed throughout the data in the form of tags rather than being separated from the data. In XML, it is natural to ask queries that span both data and metadata, such as "What kinds of things in the 2002 inventory have color attributes," represented in XPath by the expression /inventory[@year = "2002"]/*[@color]. In a relational language, such a query would require a join that might span several data tables and system catalog tables.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com
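Chamberlin's example expression can be evaluated with any XPath-capable toolkit. As a sketch, Python's stdlib ElementTree supports enough of the abbreviated XPath syntax to run it against a made-up inventory document (the element names below are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical document: two inventories, only some items carry @color.
doc = """
<catalog>
  <inventory year="2002">
    <chair color="red"/>
    <lamp/>
    <table color="oak"/>
  </inventory>
  <inventory year="2001">
    <desk color="black"/>
  </inventory>
</catalog>
"""

root = ET.fromstring(doc)
# "What kinds of things in the 2002 inventory have color attributes?"
# The query mixes data (year="2002") with metadata (element names, @color).
matches = root.findall("inventory[@year='2002']/*[@color]")
print([el.tag for el in matches])  # -> ['chair', 'table']
```

Note how the query never names the item elements at all; in a relational schema, discovering "what kinds of things" would require consulting the system catalog.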
I think that adding NEL is in fact a more correct usage of Unicode than what XML 1.0 specifies. I'm still against it because it has a lousy cost/benefit trade-off. XML 1.0 achieves a pretty good balance of making the benefits of Unicode available without incurring much in the way of cost for users of legacy text-processing systems. I think the linebreak aspects of Blueberry are a step back in that respect.
--Tim Bray on the xml-dev mailing list
Mosaic and the web browser in general made the web go, and the pundits said it would ever be so. But the heyday of the web browser is ending or so it is said. Long live the PC Client for where there might be one icon before, now there will be many each with its own non-interoperating XML format. Standards schmandards, I want candy.
What isn't waning? HTTP and URLs.
So you have a case for the architecture over the implementation.
--Claude L (Len) Bullard on the xml-dev mailing list, Monday, 12 Jan 2004
At this point, when you put something up on the Web, you don't have to say who put it up there, you don't have to say where it really lives, the author could be anyone. Which is supposedly its freedom. But as a user, I'm essentially in a position where everyone can represent themselves to me however they wish. I don't know who I'm talking to. I don't know if this is something that will lead to interesting conversation and worthwhile information -- or if it's a loony toon and a waste of my time.
I'm not a big control freak, I don't really know who would administer this or how it would be. But I would just like to see that a Web page had certain parameters that are required: where it is and whose it is. I would like to have some way in which I could have some notion of who I'm talking to. A digital signature on the other end.
--Ellen Ullman
Read the rest in Salon | 21st
internal subsets shouldn't ever be supported, at least not in the form of passing in a string; guaranteeing that this string is well-formed is way too expensive. Generating an internal subset would require a full DTD-generating API, which is hardly necessary these days, IMAO.
--John Cowan on the xml-dev mailing list, Wed, 21 Jan 2004
Xerces is bigger, slower, has more features and fewer bugs.
--Michael Kay on the xml-dev mailing list, Sunday, 23 Nov 2003
SGML was successful in a narrow niche: the niche of people with huge, expensive publishing problems, for example Boeing and the European Parliament. But it never touched the lives of most computer users or programmers. When HTML came along, it was occasionally advertised as being an application of SGML, but that claim was deeply bogus on a bunch of levels.
On the other hand, it did prove out a whole bunch of good ideas that were shamelessly stolen by the designers of XML, including me, and will live forever for that reason alone.
--Tim Bray
Read the rest in ongoing - TPSM 2: Technology Losers
ASN.1 would have been a useful interop format for data exchange if it had been simpler and only text-based. By providing the different binary encodings, it hampered its adoption for interoperability because of the added complexity and costs for tools, up to the point when XML came along, did this right, and provided the additional benefit of being usable for markup and data. I don't think making XML more complex will be a good idea given my experience with ASN.1 over the last 12 years.
--Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003
the notion that there has to be one format, even if that format is only for interchange (which it rarely is), speaks volumes about our fears and very badly about our initiative.
--Simon St.Laurent on the xml-dev mailing list, Friday, 23 Jan 2004
It seems to me that it is a little pompous to elevate a pragmatic and practical design choice by Postel in the specification of TCP to a "law" of supposed universal truth - Postel's Law (sic). Postel put forward an approach which was intended to produce particular practical results in the context of TCP. Postel's so-called "Law" is, in fact, a rule of thumb. It may be applicable outside its originally intended scope. But intelligent assessment of its relevance or otherwise in specific settings with potentially different functional requirements is required. Always remembering that the fifth digit is not the most cogitatively-endowed part of the human anatomy.
--Andrew Watt on the xml-dev mailing list, Wed, 14 Jan 2004
there is nothing in the DOM specification that states an implementation of that API must be thread-safe. So application writers should never assume that a particular DOM implementation is thread-safe.
--Andy Clark on the xerces-j-user mailing list, Monday, 12 Jan 2004
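Since no thread safety is promised, a defensive pattern is for the application to serialize access to a shared tree itself. A minimal Python sketch (document contents invented; the external lock is the application's responsibility, not a feature of any DOM implementation):

```python
import threading
import xml.dom.minidom as minidom

# A shared in-memory DOM tree, mutated by several threads.
doc = minidom.parseString("<counters><hits>0</hits></counters>")
lock = threading.Lock()

def bump():
    # Guard every read-modify-write of the shared tree with one lock,
    # because the DOM spec makes no thread-safety guarantees.
    with lock:
        node = doc.getElementsByTagName("hits")[0]
        node.firstChild.data = str(int(node.firstChild.data) + 1)

threads = [threading.Thread(target=bump) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(doc.getElementsByTagName("hits")[0].firstChild.data)  # -> 8
```

Without the lock this may still happen to work for small counts, which is exactly why assuming thread safety is dangerous: the failure is intermittent.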
There are two interpretations of Postel's Law: the conservative and the liberal. The conservative interpretation is that if there is an ambiguity in a specification then you should:
1. Be careful not to produce data whose compliance is ambiguous.
2. Be careful to accept any data which might reasonably be considered to comply with the specification.
Also, maybe you should fix the specification.
The liberal interpretation of Postel's Law is that even data streams which unambiguously do not comply with the specification should be accepted.
I'm fully in favour of the conservative interpretation but I think anybody who supports the liberal interpretation rather misses the point of having a specification in the first place.
--Ed Davies
Read the rest in On Atom and Postel’s Law
I realize that no committee can come up with something that works for absolutely everyone. But, when you have something evolving on its own, completely organically, it evolves into tag soup, and commercial enterprises don't want to use tag soup. They're nervous enough about the differences between RSS 0.9, 1.0, 2.0, and Atom.
--Bob DuCharme xml-dev mailing list, Friday, 23 Jan 2004
Hypertext is a particular application of linking, a more basic notion. It's a mistake to think of links solely in the hypertext context.
--John Cowan on the xml-dev mailing list, Thursday, 22 Jan 2004
Postel's law isn't a law of nature, or of logic. It's a law like almost all the laws we have: a ceteris paribus law. But as is so often the case, other things _aren't_ always equal. As an engineering principle, tho', it seems to hold enough of the time that it's reasonable to take it as a default position. But that doesn't exclude counter-examples; nor is the presence of counter-examples sufficient to render the rule worthless.
--Miles Sabin on the xml-dev mailing list, Tuesday, 13 Jan 2004
Postel's Law is being invoked on the one hand, out of context -- as if by saying "be liberal" Postel were arguing that users of the protocol he is defining can go ahead and break the rules, what the heck, because being liberal and forgiving in what you accept is the rule. But Postel (as has been pointed out) doesn't fairly warrant this: his assumption is that users will be following the explicit rules. Only where the rules fail to be perfectly clear, he adds (in a metastatement that is not, after all, a specification of a protocol but rather -- in the "Philosophy" section -- a hopeful instruction to his readers about how to behave), they should be "conservative ... and liberal". Note *both* conservative and liberal. To suggest that he's therefore licensing rule-breaking (whatever the rule may be) is to miss how he's simultaneously insisting on conformity ("be conservative in what you do"). The Law is a paradox.
--Wendell Piez on the xml-dev mailing list, Friday, 16 Jan 2004
Typically, big e-commerce sites have pretty good usability because their managers obsess over the smallest usability metrics, knowing that any usability improvement translates directly into hundreds of thousands of dollars in increased sales. In contrast, big established companies and government agencies frequently have sites that completely ignore customer needs because they don't know how often their users leave in disgust.
--Jakob Nielsen
Read the rest in Competitive Testing of Website Usability (Jakob Nielsen's Alertbox)
Being "new" no longer has the traditional meaning when patents are concerned. Innovation is no longer a practical requirement for receiving a patent. Consider, for instance, the Microsoft XUL patent. The basic concept or "innovation" that they claim has been "obvious" since at least ALL-IN-1 used a forms based interface for defining office applications back in the early 80's. Even the MID stuff is way too recent if you're looking for prior art on the concept. However, the only thing that distinguishes the Microsoft patent is that *every* claim requires "HTML". Most of the prior art didn't use HTML to accomplish what XUL does and so is not relevant. The mere fact that HTML, or any encoding format like HTML, has the same properties as all the other encoding formats used similarly in the past, is not considered relevant by the patent office.
--Bob Wyman on the xml-dev mailing list, Thursday, 18 Dec 2003
I am uncomfortable with RDDL instances being returned as representations of XML namespaces because they do not in fact (necessarily or usually) enumerate the terms grounded in that namespace (and I consider such an enumeration a necessary component of any complete representation of an XML namespace), yet may further contain any arbitrary information about any other resource even remotely related to that XML namespace or some term grounded in that namespace. It seems to me that every example RDDL instance I've ever seen is not a valid representation of the XML namespace -- as I would perceive a valid representation.
--Patrick Stickler on the www-tag mailing list, Thursday, 23 Jan 2003
Experience with RSS feeds has shown an overwhelming willingness on the part of content producers to FIX THEIR ERRORS.
Ask them to; use a third party if you don't want to do it yourself.
But above all, DO SOMETHING to help eliminate the problem.
--Bill Kearney on the xml-dev mailing list, Friday, 16 Jan 2004
The majority of my users don't know they are using XML, but I definitely like the fact that when my app creates something broken I clearly know it. Otherwise, my bugs would have children, grandchildren, ad infinitum; tracing their ancestry would be such a pain.
It is such a simple thing to provide a well-formed instance that I don't understand the issue. It is not like every instance *has* to be validated against some XML Schema (or whatever). It makes things much simpler for me, so I can provide UIs to users who couldn't care less about XML.
--Robert Koberg on the xml-dev mailing list, Friday, 16 Jan 2004
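Koberg's point that well-formedness is cheap can be sketched in a few lines. This is only an illustration (the element names are invented): using a real XML API such as Python's standard xml.etree.ElementTree, rather than string concatenation, makes well-formed output automatic, because the serializer escapes markup characters and closes every tag for you.

```python
# A minimal sketch: build XML through an API and well-formedness
# comes for free -- special characters are escaped, tags are balanced.
import xml.etree.ElementTree as ET

root = ET.Element("catalog")
item = ET.SubElement(root, "item", id="42")
item.text = "Fish & chips <cheap>"   # markup characters in the content

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
# prints: <catalog><item id="42">Fish &amp; chips &lt;cheap&gt;</item></catalog>

# Round-trip check: the serialized output parses cleanly.
ET.fromstring(xml_text)
```

The same discipline applies in any language: hand-printed angle brackets produce tag soup sooner or later, while a serializer cannot.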
Give a man* a valid XML document and he has a valid XML document. Give a man a tool that produces valid XML documents and he has valid XML documents for life. Expect a man to deal with tag soup, and you're telling him to fish.
--Danny Ayers
Read the rest in Raw: Thought experiment?
Whatever 'liberal' and 'conservative' might have meant in Postel's original usage, in the context of XML instances which we can GET at URLs 'liberal' in what we accept means that we acknowledge the instance is not likely to be in a form which our process can use directly. The only form which our process could use directly would be a very particular data structure. In their own terms, processes operate only upon specific data structures, and neither a concrete instance document nor an abstract infoset is what a process can use directly. The difference is that the process can be designed to be liberal in accepting numerous schematically differing concrete instances to parse and then to instantiate the output of the parse as the particular data structure which the process requires. That liberality cannot be extended to some sort of 'infoset' as input, because input which is not a parseable document must either correspond perfectly to a very specific and typed schematic or be useless to a particular process, and that is the most illiberal of demands.
--W. E. Perry on the xml-dev mailing list, Wednesday, 14 Jan 2004
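Perry's argument can be made concrete with a small sketch (the document shapes and names here are invented for illustration): a process can be liberal about which concrete instance documents it accepts, precisely because a parse-and-map step normalizes each shape into the one typed structure the process actually operates on.

```python
# Liberal in the concrete XML shapes accepted, strict in the internal
# structure produced: every accepted document is mapped to the single
# (name, price) tuple this hypothetical process requires.
import xml.etree.ElementTree as ET

def load_product(doc: str) -> tuple[str, float]:
    root = ET.fromstring(doc)  # the input must at least be well-formed
    if root.tag == "product":  # shape 1: <product name=".." price=".."/>
        return root.get("name"), float(root.get("price"))
    if root.tag == "item":     # shape 2: <item><title/><cost/></item>
        return root.findtext("title"), float(root.findtext("cost"))
    raise ValueError(f"unrecognized document shape: <{root.tag}>")

print(load_product('<product name="lamp" price="9.99"/>'))
print(load_product('<item><title>lamp</title><cost>9.99</cost></item>'))
# both print: ('lamp', 9.99)
```

Note that the liberality stops at the parser: a byte stream that is not a parseable document never reaches the mapping step at all, which is Perry's point about why "infoset" input cannot be treated the same way.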
For those not familiar with XQuery -- it is like XPath + XSLT + Methamphetamines.
--Brian McCallister
Read the rest in Things I Wish I Had Time to Code in 2004
When I first started talking to Roy about the Web, most of what he said was over my head too. Learning about Web architecture, for me at least, has been a journey of self-exploration more than a lesson in distributed computing (though it's been that too); revisiting past assumptions, reinterpreting past experience, rebuilding mental models, etc...
I suppose that I only understand what I do now because I was making an honest attempt to learn why the Web was so successful (as you appear to be doing now, kudos). I wasn't expecting to discover the bigger truth of the Web - I didn't even know there was one when I started. But I think that approaching things from that point of view - humility, I suppose - is a productive thing to do. It encouraged me to spend far too much time dissecting the words of Roy, TimBL, and DanC, because I respected the massive success that their work had seen.
--Mark Baker
Read the rest in Adam Bosworth's Weblog: Learning to REST
To understand the value of REST, you have to understand the value of the Web. People don't need to be programmers to interact with the Web, or even to build Web pages. People don't need to understand a new language in order to access interesting resources; they only need a URI. The Web increases in value whenever a resource is made available via a URI. A company's resources, whether they be bank accounts, seats on a plane, or simple marketing materials, can either be made available as a set of state operations or as an opaque application (whether that be a poorly written CGI, ISAPI, or JSP application is largely irrelevant -- the user has to learn each site's behavior independently). There is no role for control-based integration in such systems -- all of the value is in the data. That is why applications that behave like the Web work better than those that merely gateway to the Web.
For the same reason, it would be foolish to use REST as the design for implementing a system consisting primarily of control-based messages. Those systems deserve an architectural style that optimizes for small messages with many interactions. Architectures that try to be all things to all applications don't do anything well.
--Roy T. Fielding
Read the rest in Adam Bosworth's Weblog: Learning to REST
The last time all the peoples of the earth spoke the same language, they were Smote Down. Perhaps there's something to be said for Tag Soup.
--Rich Salz on the xml-dev mailing list, Friday, 9 Jan 2004
World domination isn't my thing, but if it was, I'd be using XML.
--Norman Walsh on the xml-dev mailing list, Friday, 09 Jan 2004
XML potentially helps by providing a universal, verifiable data stream to work with (for starters). Things like XSL provide definable ways to manipulate data (transfer functions), and careful construction of specs (DTD/schema) means we can handle pre-conditions -- a lot of discussion on this list is about this exact point. This could be a very significant development for computing.
--Rick Marshall on the xml-dev mailing list, Tuesday, 06 Jan 2004
Most browser makers couldn't implement HTTP/1.1 correctly if their lives depended on it.
--Wesley Felter
Read the rest in Re: HTTP Digest Authentication
the problem I have with so many tools today is that the engineers have succeeded spectacularly at enabling us to get information into the systems, and failed almost as spectacularly at enabling us to get information out.
--Claude L (Len) Bullard on the XML Developers List, Monday, 5 Jan 2004
I've seen a lot of sites that do silly heavy-Javascript navigation stuff, the kind that (when you visit their home page) show you a page saying that they've noticed your browser isn't the most recent version of IE, so please visit http://www.microsoft.com/ and download the latest version.
I usually email them a big long rant about how their HTML developers are ripping them off, investing all that extra effort to make the site usable by fewer browsers, all for a few flashy drop-down menus. Judging from the responses I receive, the support staff seem to think that adding support for extra browsers is something they would need to pay the Javascript developers more to do, not something that would have been there if they had paid the Javascript developers *less* in the first place. I wonder who could have put THAT idea in their heads, eh?
--Alaric B Snell on the XML-DEV mailing list, Monday, 05 Jan 2004
Most of the time you really don't need to know which XML parser you are using. I recently found that a particular job had been running on Xerces for months when I thought it used Crimson.
--Michael Kay on the xml-dev mailing list, Sunday, 23 Nov 2003
RELAX-NG is getting good buzz these days not because it's based on the formalism(s) of hedge automata and tree regular expressions, but because it's elegant -- simple yet powerful. RELAX/TREX are elegant because Makoto Murata and James Clark very deeply understand both the underlying formalism and XML itself. No amount of post-hoc formalism can create elegance when it does not exist in the core of a design.
--Mike Champion on the xml-dev mailing list
The excitement of having a human-readable data format (XML) overwhelmed a lot of people, many to the point that they decided (probably from having to hand-code HTML) that it should be human-writable as well. But it's not even particularly human-readable; it's probably best described as human-tolerable. If you need to delve in and investigate what's going on with your XML, you can, just like you can go look at Java byte codes, or assembly opcodes, etc. XML is easier than those things, but it certainly isn't a great way to write.
As I've said before, lots of programmers will say "all you have to do is this here, and that there, and this other thing, and voila!" Each one of those things is a bit of noise, and reduces your productivity. I'll maintain that XML should never be used for something that is written by humans, just as you shouldn't try to use it as a programming language, although people have been so in love with XML that they've tried to do both.
--Bruce Eckel
Read the rest in Bruce Eckel's MindView, Inc: 1-1-04 Why we use Ant (or: NIH)
XML is the foundation for the evolving architecture of the Web. When XML 1.0 was released by the W3C five years ago no one knew how far-reaching its effects could be. Two years after XML's release, VoiceXML was born to bridge the gap between the Web and the phone. VoiceXML stands as a classic XML success story. It allows businesses to bring the power, flexibility and quality of their Web applications to the phone. Today, Fortune 500 companies depend on VoiceXML to power thousands of phone systems and answer millions of calls every week, creating unprecedented customer satisfaction and saving companies millions of dollars.
In just three years, VoiceXML has achieved widespread industry adoption, making it the most broadly supported and implemented voice standard in the world. The open-standard framework of XML has made all of this possible, and in the specific case of VoiceXML, will continue to drive innovation on the phone in the years to come.
--Brad Porter, TellMe Networks
Read the rest in XML makes its mark - Tech News - CNET.com
Like a stored table, the result of a relational query is flat, regular, and homogeneous. The result of an XML query, on the other hand, has none of these properties. For example, the result of the query "Find all the red things" may contain a cherry, a flag, and a stop sign, each with a different internal structure. In general, the result of an expression in an XML query may consist of a heterogeneous sequence of elements, attributes, and primitive values, all of mixed type. This set of objects might then serve as an intermediate result used in the processing of a higher-level expression. The heterogeneous nature of XML data conflicts with the SQL assumption that every expression inside a query returns an array of rows and columns. It also requires a query language to provide constructors that are capable of creating complex nested structures on the fly -- a facility that is not needed in a relational language.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com
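Chamberlin's heterogeneity argument is easy to see with a toy document (invented here for illustration; the Python standard library's XPath subset stands in for the spirit of an XQuery path expression). One query for everything red returns three elements with entirely different internal structures -- nothing a uniform table of rows and columns can express.

```python
# One query, three structurally different results: the cherry, the
# flag, and the stop sign share an attribute but not a shape.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<things>
  <cherry color="red"><pit/></cherry>
  <flag color="red"><stripes>13</stripes><stars>50</stars></flag>
  <sign color="red" shape="octagon">STOP</sign>
  <sky color="blue"/>
</things>
""")

# Roughly the spirit of the XQuery path //*[@color = "red"]:
red_things = doc.findall(".//*[@color='red']")
for el in red_things:
    print(el.tag, [child.tag for child in el])
# cherry ['pit']
# flag ['stripes', 'stars']
# sign []
```

A relational result set would force all three into one column layout; the XML result is simply a sequence of whatever matched, which is why XQuery needs constructors for building nested structures on the fly.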