Quotes about XML in 2005

Saturday, December 31, 2005
From personal experience, I'd have to say that complex DTDs are slightly more penetrable than XSDs. As a user, I'm usually just trying to find out one or two things and I can do this by chasing entities through the DTD with a text editor. I give up completely when faced with a complex XSD document.

--Ronald Bourret on the xml-dev mailing list, Monday, 01 Nov 2004

Friday, December 30, 2005
"View Source" is terribly important. I know a huge number of people who know HTML at varying degrees of expertise and for every one *without exception*, "view source" provided a large part of their education.

--Tim Bray on the WWW-Tag mailing list, Friday, 04 Oct 2002

Thursday, December 29, 2005

Since SGML and XML provide no semantic primitives in terms of which documents can be interpreted, they are frequently described as being “just syntax.” One may wonder whether a purely syntactic notation constitutes a real step forward. Does defining a labeled bracketing, a tree structure, and a formalism for document grammars really suffice to make XML interesting or important? In some ways, of course, the answer is no; none of these is a particularly complex problem. What graduate student in computer science would regard the problem of developing a notation for serializing trees as requiring more than a weekend’s worth of work? Any competent programmer can write a program to parse any reasonably clean notation. Constraint checking is a bit more difficult, but if constraints are checked in a Turing-complete programming language, we may have a better chance of actually expressing all the constraints we would like to express.

In many other ways, however, the answer is yes; purely syntactic notation is a big step forward. XML is interesting to people who wish to exploit their data, because it provides enough commonality among documents to make possible the construction of generic tools instead of tools specialized for single notations. For those who want to spend their time working with their data, rather than building new tools, such generic tools are a great step forward. Generic syntax-directed editors, guided by nothing more complicated than a schema or DTD, make it easier to create good data. Generic tools for data validation make checking for many errors possible, and allow programmers to spend more time on processing the data and less time on checking that the input is clean. Generic browsers and display engines, supported by good style-sheet languages, make it possible to display data in multiple styles tailored for different users or processes.

--C. M. Sperberg-McQueen
Read the rest in ACM Queue - XML and Semi-Structured Data - What role can XML play in solving the semi
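
Sperberg-McQueen's "weekend's worth of work" quip is easy to check. A toy sketch (not real XML, and deliberately ignoring attributes, escaping, and text content) shows that a labeled-bracketing serializer for trees is a few lines:

```python
# A toy illustration of the quote's point: serializing a labeled tree
# as nested brackets, XML-style, is a small job. Trees here are
# (label, [children]) pairs; this handles none of XML's real details.

def serialize(node):
    """Write a labeled tree as nested empty/element tags."""
    label, children = node
    if not children:
        return "<%s/>" % label
    body = "".join(serialize(c) for c in children)
    return "<%s>%s</%s>" % (label, body, label)

tree = ("doc", [("title", []), ("body", [("p", []), ("p", [])])])
print(serialize(tree))  # <doc><title/><body><p/><p/></body></doc>
```

The hard part, as the quote goes on to argue, was never the notation; it was the ecosystem of generic tools the notation enables.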

Wednesday, December 28, 2005

Of course, we're advocating OpenOffice.org, not because it's free in the sense of gratis, although that's certainly an argument, but free in the sense that they can configure it now and in the future. So even if they don't necessarily want to make their own modifications now, maybe in the future they might want to make a better spell checker or make it an “on demand” software or re-brand it however they like. It's all possible. Thus, someone in Brazil has just done a grammar checker using Java. It's quite good. And at the same time, one of the Summer of Code students worked on his own grammar checker, using quite different strategies, I believe. Who benefits? Everyone. Choice is good.

The thing about open source is that it's perfectly unscripted in this way. We like to let the invisible hand of the market actually work. Open source generally works best in that way for certain areas, where you let the invisible hand determine what people need and want. But at the same time, governments are interested in predictability, and want to have a very visible hand in the process. So if governments want to participate, they certainly can put up money for this process. They can say, “Okay, I want this or that feature, and I'm willing to give a developer X amount of money to let them work on it.” That's fine. That's what Japan Inc. did, and it worked wonderfully. Japan Inc. is now doing this with some open-source projects, and other governments are certainly encouraged to do the same.

--Luis Suarez-Potts
Read the rest in :: Interviews : OpenOffice.org 2.0: An Office Suite With No Horizons

Tuesday, December 27, 2005
there is no such thing as a CDATA node. CDATA is just a nice way to write something so that you don't have to worry about escaping all the < and other special characters. There are only text nodes.

--Jon Gorman on the xsl-list mailing list, Tuesday, 13 Dec 2005 09:17:52
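
Gorman's point is easy to verify with the standard library: after parsing, CDATA content is indistinguishable from ordinary escaped text.

```python
# Two ways of writing the same text node: a CDATA section and an
# entity-escaped literal. Once parsed, they are identical.
import xml.etree.ElementTree as ET

a = ET.fromstring("<x><![CDATA[a < b]]></x>")
b = ET.fromstring("<x>a &lt; b</x>")

print(a.text)           # a < b
print(a.text == b.text)  # True -- there is no CDATA node to find
```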

Monday, December 26, 2005
Our brains aren't wired to think in terms of statistics and probability. We want to know whether an encyclopedia entry is right or wrong. We want to know that there's a wise hand (ideally human) guiding Google's results. We want to trust what we read. When professionals--editors, academics, journalists--are running the show, we at least know that it's someone's job to look out for such things as accuracy. But now we're depending more and more on systems where nobody's in charge; the intelligence is simply emergent. These probabilistic systems aren't perfect, but they are statistically optimized to excel over time and large numbers. They're designed to scale, and to improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale.

--Chris Anderson
Read the rest in The Long Tail: The Probabilistic Age

Saturday, December 24, 2005
It took the markup and then the web community a long time to come to the realization that binding types strongly to a syntax standard is a bad design for reach and scale even if it works for a particular product.

--Claude L (Len) Bullard, on the xml-dev mailing list, Wednesday, 12 Oct 2005 13:17:38

Friday, December 23, 2005
For all kinds of application design, the most common mistake is to jump right in and start adding features to whatever it is you already have (or have copied from someone else). For various reasons this doesn’t work: big piles of hit-and-miss features, chosen thoughtlessly, are less desirable than small piles of good features, chosen carefully. Consider this: what makes a good feature? It’s not an abstract quality: goodness means a problem is solved for a user. If you don’t spend some time considering who these people are and what they’re doing, odds are slim you’ll find features that matter much.

--Scott Berkun
Read the rest in How to build a better browser

Thursday, December 22, 2005
We need architectural forms painfully. SGMLers sold us out by making them very unnecessarily gnomic. OK just kidding, SGMLers, but when even Mike Kay admits he doesn't grok them, someone fell asleep at the tutorial wheel.

--Uche Ogbuji on the xml-dev mailing list, Friday, 02 Dec 2005 08:42:55

Wednesday, December 21, 2005
The progression, IMHO, is really between the generally high skill set of your average XForms author and the comparatively low skill set of your average HTML author. It is quite possible to write highly accessible HTML (even if it uses scripting) and quite possible to write highly _in_accessible XForms. In both cases, the author is violating either best practice guidelines or actual rules of the language.

--Ian Hickson on the www-forms mailing list, Friday, 11 Mar 2005 16:13:18 +0000

Tuesday, December 20, 2005
Google Earth is not acquiring new imagery. They are simply repurposing imagery that somebody else had already acquired. So if there was any harm that was going to be done by the imagery, it would already be done.

--John Pike, Globalsecurity.org
Read the rest in Google Offers a Bird's-Eye View, and Some Governments Tremble

Monday, December 19, 2005
Pay attention to physics. As a simple rule of thumb, an individual server can normally handle perhaps 100 requests a second (I’ll say within one order of magnitude up if simple ones and down if very hard ones). If the goal is for each server to support 1,000 concurrent users, therefore, try to avoid an event model in which each user pings the server more than once per 10 seconds, let alone one. In short, avoid a fine-grained model of code on the server responding to events at the level of mouse moves or keys typed or each time the scrollbar is dragged three millimeters because if you do, you’ll be generating about 10 events per second or two orders of magnitude too many. By the way, even if supporting only 10 concurrent users per server is acceptable, communications are often fragile, and it isn’t a great idea to be extremely fine-grained because a small rare glitch in communications can much more easily break the system.

--Adam Bosworth
Read the rest in ACM Queue - Learning from THE WEB
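
Bosworth's rule of thumb reduces to simple arithmetic. The numbers below come straight from the quote (~100 requests/second per server, 1,000 concurrent users, ~10 UI events/second for a fine-grained model):

```python
# Back-of-the-envelope capacity budget, using the quote's own numbers.
requests_per_second = 100   # rough per-server throughput
concurrent_users = 1000     # target users per server

# Each user's ping budget: at most this many requests per second.
max_pings_per_user = requests_per_second / concurrent_users
print(max_pings_per_user)   # 0.1 -> once per 10 seconds, as the quote says

# A fine-grained event model (mouse moves, keystrokes) generates
# roughly 10 events/second per user: two orders of magnitude too many.
fine_grained_rate = 10
overshoot = fine_grained_rate / max_pings_per_user
print(overshoot)            # 100.0
```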

Thursday, December 15, 2005
changing a schema (DTD, or whatever structure representation) and associated documents is quite common. We keep doing this in ActiveMath. Changing the POJOs to match is a bit more delicate (and typically has influence much deeper into the code than just the beans). Allow me to add that XPath (which can be used in XOM, DOM, JDOM, XSLT, DOM4j and many others) is the best flexibility and readability you can afford. The performance is smaller, indeed (but not enormous), but the manageability is much greater!

--Paul Libbrecht on the xml-dev mailing list, Monday, 05 Dec 2005 10:50:40
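
The flexibility Libbrecht describes is visible even in the limited XPath subset that Python's standard library supports: a descendant path keeps working across a schema change that would break code bound to a fixed object hierarchy. (The two document shapes below are made up for illustration.)

```python
# The same location path survives a structural change in the document.
import xml.etree.ElementTree as ET

v1 = ET.fromstring("<doc><item>a</item><item>b</item></doc>")
v2 = ET.fromstring("<doc><section><item>a</item><item>b</item></section></doc>")

# './/item' matches items at any depth, so the new wrapper element
# in v2 is invisible to this query.
for doc in (v1, v2):
    print([e.text for e in doc.findall(".//item")])  # ['a', 'b'] both times
```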

Sunday, December 11, 2005

The unquestioning regurgitation of administration spin through the use of anonymous sources is the fault line of modern American journalism. You'd think that after all we've seen -- from the horrific reporting on WMD to Judy Miller and Plamegate (to say nothing of all the endless navel-gazing media panel discussions analyzing the issue) -- these guys would finally get a clue and stop making the Journalism 101 mistake of granting anonymity to administration sources who use them to smear their opponents.

The Washington Post, for example, citing an anonymous "senior Bush official," reported on Sept. 4 that, as of Saturday, Sept. 3, Louisiana Gov. Kathleen Babineaux Blanco "still had not declared a state of emergency" ... when, in fact, the declaration had been made on Friday, Aug. 26 -- more than two days before Katrina hit Louisiana. This claim was so demonstrably false that the paper was forced to issue a correction just hours after the original story appeared. It's time for the media to get back to doing their job and stop being a principal weapon in Team Bush's damage-control arsenal.

--Arianna Huffington
Read the rest in Wired News: Arianna Learns to Love the Blog

Saturday, December 10, 2005
there are at least five individuals on the XQuery WG who could have produced a perfectly usable spec two years ago if the other four individuals had not been present.

--Michael Kay on the xml-dev mailing list, Thu, 19 Aug 2004

Friday, December 9, 2005
Atom. It can be tricky to explain to anyone who hasn’t spent time coding around the stuff why it was needed when there is RSS. As I see it, Atom has three key benefits over RSS 2.0: a clear, community-consensus spec; it mandates the identifiers of the Web (URIs); and its content model isn’t broken (apart from the general opaque messiness, escaped HTML in content is fundamentally flawed - think silent data loss).

--Danny Ayers
Read the rest in Danny Ayers, Raw Blog : » Presenting syndication

Thursday, December 8, 2005

Traditional tools require the data schema to be developed prior to the creation of the data. Unfortunately, sometimes the data schema emerges only after the software is already in use—and the schema often changes as the information grows. A typical example is the information contained in the item descriptions on eBay. It seems impossible for the eBay developers to define an a priori schema for the information contained in such descriptions. Today, all of this information is stored in raw text and searched using only keywords, significantly limiting its usability. The problem is that the content of item descriptions is known only after new item descriptions are entered into the eBay database. EBay has some standard entities (e.g., buyer, date, ask, bid...), but the meat of the information—the item descriptions—has a rich and evolving structure that isn’t captured.

Traditional software design methodology does not work in such cases. One cannot rigidly follow the steps:

  1. Gather knowledge about the data to be manipulated by the software components being designed.
  2. Design a schema to model this information.
  3. Populate the schema with data.

We need software and methodologies that allow a more flexible process in which the steps are interleaved freely, while at the same time allowing us to process this information automatically.

--Daniela Florescu
Read the rest in ACM Queue - Managing Semi-Structured Data

Wednesday, December 7, 2005
It seems that everyone using CSS eventually resorts to using absolute positioning with widths specified in pixels, to get everything to work. How, exactly, is this better than using tables?

--Phillip Pearson
Read the rest in Second p0st: I hate CSS | Generating Motorola S19 checksums in Python

Tuesday, December 6, 2005

Another important question is the extent to which the Open Document file format itself supports or fails to support accessibility. This comes up for things like storing the alternate text tag for an image, or noting the relationships of labels with the objects they label in on-line forms. While a thorough examination of the file format specifically for these issues still needs to be done, much of ODF is based on standard web technologies like SMIL for audio and multimedia, and SVG for vector graphics, which have been and continue to be vetted by the World Wide Web Consortium's Web Accessibility Initiative processes. We also know that two of the existing applications that currently read/write ODF can export Tagged PDF files in support of PDF accessibility, and Adobe has already conducted some tests to verify that accessibility across that translation is preserved (and thus must exist in the original ODF file). Finally, at this very moment the OASIS Technical Committee that created ODF is looking into forming a specific subcommittee to examine ODF for just these accessibility issues and address any shortcomings found.

This is in stark contrast to proprietary file formats like those used by Microsoft Office. Those formats are totally opaque, with no peer review of accessibility issues possible. Thus we cannot objectively tell how well the Microsoft Office file format supports accessibility, or say whether it does a better or worse job than ODF.

--Peter Korn
Read the rest in Peter Korn's Weblog

Monday, December 5, 2005
In the general case, REST is XML over HTTP. In the general case, SOAP is XML over HTTP. Anything you can do with one, you can do with the other. Using SOAP standardizes some things you'd have to reinvent if you go with REST. Using SOAP+WSDL does so even more.

--Dare Obasanjo on the xml-dev mailing list, Wednesday, 30 Mar 2005 08:44:34
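
Obasanjo's common ground, plain XML over HTTP, can be sketched with the standard library. The endpoint URL and payload below are made up; the point is that the request looks the same whether the body is a bare XML document (REST-style) or an XML document wrapped in a SOAP envelope:

```python
# Building an XML-over-HTTP request; nothing here is SOAP-specific.
# (The URL is hypothetical and the request is constructed, not sent.)
import urllib.request

body = b"<order><item>book</item></order>"
req = urllib.request.Request(
    "http://example.com/orders",              # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/xml"},
)
print(req.get_method())  # POST -- a Request with a body defaults to POST
```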

Sunday, December 4, 2005
Don't make premature demands on users who aren't ready to buy. For example, don't require registration to read whitepapers or see a demo. If you do, you'll scare away many users who otherwise might have converted at a later date.

--Jakob Nielsen
Read the rest in The Slow Tail: Time Lag Between Visiting and Buying (Jakob Nielsen's Alertbox)

Saturday, December 3, 2005

An example of where an inappropriate extension has been made to the protocol to support features that contradict the desired properties of the generic interface is the introduction of site-wide state information in the form of HTTP cookies [73]. Cookie interaction fails to match REST's model of application state, often resulting in confusion for the typical browser application.

An HTTP cookie is opaque data that can be assigned by the origin server to a user agent by including it within a Set-Cookie response header field, with the intention being that the user agent should include the same cookie on all future requests to that server until it is replaced or expires. Such cookies typically contain an array of user-specific configuration choices, or a token to be matched against the server's database on future requests. The problem is that a cookie is defined as being attached to any future requests for a given set of resource identifiers, usually encompassing an entire site, rather than being associated with the particular application state (the set of currently rendered representations) on the browser. When the browser's history functionality (the "Back" button) is subsequently used to back-up to a view prior to that reflected by the cookie, the browser's application state no longer matches the stored state represented within the cookie. Therefore, the next request sent to the same server will contain a cookie that misrepresents the current application context, leading to confusion on both sides.

Cookies also violate REST because they allow data to be passed without sufficiently identifying its semantics, thus becoming a concern for both security and privacy. The combination of cookies with the Referer [sic] header field makes it possible to track a user as they browse between sites.

As a result, cookie-based applications on the Web will never be reliable. The same functionality should have been accomplished via anonymous authentication and true client-side state. A state mechanism that involves preferences can be more efficiently implemented using judicious use of context-setting URI rather than cookies, where judicious means one URI per state rather than an unbounded number of URI due to the embedding of a user-id. Likewise, the use of cookies to identify a user-specific "shopping basket" within a server-side database could be more efficiently implemented by defining the semantics of shopping items within the hypermedia data formats, allowing the user agent to select and store those items within their own client-side shopping basket, complete with a URI to be used for check-out when the client is ready to purchase.

--Roy T. Fielding
Read the rest in Fielding Dissertation: CHAPTER 6: Experience and Evaluation
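
Fielding's complaint can be made concrete with the standard library's cookie parser: a Set-Cookie header binds the cookie to a set of resource identifiers (here, a site-wide path), not to any particular application state in the browser.

```python
# Parsing a (hypothetical) Set-Cookie value. The Path attribute scopes
# the cookie to resource identifiers, exactly the mismatch Fielding
# describes: it will ride along on every future request to the site.
from http.cookies import SimpleCookie

c = SimpleCookie()
c.load("basket=item42; Path=/")

print(c["basket"].value)     # item42
print(c["basket"]["path"])   # /  -- one cookie for the entire site
```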

Friday, December 2, 2005
When someone says "my code doesn't work in your browser, which you claim to be DOM-compliant", but it works in Safari, I am much happier to tell them "that is because Safari is not DOM compliant" rather than telling them "that is because DOM isn't interoperable between two implementations in this feature, even though you complied with the standard". That is why making exceptions optional in this sort of case is a bad idea.

--Ray Whitmer on the www-dom mailing list, Sunday, 1 Dec 2005 07:46:16

Thursday, December 1, 2005
Simplicity takes effort-- genius, even. The average programmer seems to produce UI designs that are almost willfully bad. I was trying to use the stove at my mother's house a couple weeks ago. It was a new one, and instead of physical knobs it had buttons and an LED display. I tried pressing some buttons I thought would cause it to get hot, and you know what it said? "Err." Not even "Error." "Err." You can't just say "Err" to the user of a stove. You should design the UI so that errors are impossible. And the boneheads who designed this stove even had an example of such a UI to work from: the old one. You turn one knob to set the temperature and another to set the timer. What was wrong with that? It just worked.

--Paul Graham
Read the rest in Ideas for Startups

Wednesday, November 30, 2005
given that search engines are pretty much the only reason that the Web works as well as it does (HTML & HTTP are nice, but I remember the Web before search engines, and it was rather grim), I don't see how the Semantic Web can ever work either without search engines. In practical terms, nobody has a clue how to make the Web work without search engines; at least, nobody has ever demonstrated such a thing that I am aware of. Which is to say that getting Google and other search engines on board with a common and workable approach to Semantic Web content has to be a #1 priority, because it's really the only game in town.

--Anthony B. Coates on the www-tag mailing list, Monday, 28 Nov 2005 08:44:14

Tuesday, November 29, 2005
XML does not have null values. It just has character content (#PCDATA) and nested element content (and attributes). If you want to model a database entry that may be null, there are several ways you may do this: by allowing the element to be empty, or by having an explicit child element, say <null/>, or by having an explicit attribute. It is your choice how to model this in the XML, and you must then code your application to understand this model. However you choose to model the null value, the DTD declaration will reflect that model; you can't declare "null" in a DTD.

--David Carlisle on the xml-dev mailing list, Tuesday, 29 Nov 2005 11:58:00
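
Carlisle's advice in practice: pick one convention and code the application to it. The sketch below assumes the empty-element convention (an application-level choice, not anything mandated by XML or DTDs):

```python
# Mapping an application convention -- "an empty element means null" --
# back to Python's None. The document shape is made up for illustration.
import xml.etree.ElementTree as ET

doc = ET.fromstring("<row><name>Ann</name><phone/></row>")

def value(elem):
    """Return the element's text, treating an empty element as null."""
    return elem.text if elem.text else None

print(value(doc.find("name")))   # Ann
print(value(doc.find("phone")))  # None
```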

Monday, November 28, 2005
Frequently updated web sites perform better in organic search engine results. This enables the business to attract customers to its web site without paying for sponsored links.

--Robert J. Miller
Read the rest in java.net: Fitnesse Testing for Fast

Sunday, November 27, 2005
some usability issues are different for users with disabilities than for those without, but the overlap is remarkably large. Also, it's an oversimplification to distinguish between users with and without disabilities as if that were a dichotomy. It's really a continuum of people with more or less severe disabilities. For example, most users over the age of 45 have somewhat reduced vision and need resizable fonts, even if they don't qualify under the official definition as "low-vision users."

--Jakob Nielsen
Read the rest in Accessibility Is Not Enough (Jakob Nielsen's Alertbox)

Saturday, November 26, 2005
With very few exceptions, any attempt to devise a suitable upper bound for any 'maxOccurs' value is bound to involve wild-ass-guessery. How many paragraphs should one allow in an HTML document? You can take this from a business logic standpoint ("what's the longest web page anyone is ever going to want to produce"), or a processing standpoint (how many <p> elements can Mozilla cope with? What about MSIE? Does your answer change depending on how old the user's computer is?), but you'll never be able to come up with a satisfactory number. Whatever number you choose will either be too large as a meaningful resource constraint, or it will be too small for some existing or future document.

--Joe English
Read the rest in xml-dev - Re: xml-dev] Constrain the Number of Occurrences of Elements in your XML

Friday, November 25, 2005

Buy Nothing Day has become this huge phenomenon around the world. It's sort of like an edgy Earth Day and people are doing all sorts of things including blogs. But I have to tell you that there is also a downside to blogs. There are a number of people who think they are activists if they start a blog and talk sustainability.

I think there is more to it than that. The downside of the internet is that it has spawned a generation of activists who are actually very passive, who basically forward an e-mail to a friend and they think they are being some kind of an activist, and to me that is not the sort of activism that is effective.

--Kalle Lasn
Read the rest in Wired News: Put Your Money Where Your Mind Is

Thursday, November 24, 2005

When XSLT is a good fit for a problem, it is a phenomenally good fit. When it's a bad fit it's incredibly frustrating. Unfortunately it's totally possible to take an approach to a problem that could be a good fit, but find that the approach makes it a very bad fit.

The fact that it's so different from procedural and object oriented languages only exacerbates this for newcomers, who are vastly more likely to choose bad fits than good ones. But the great strengths XSL has are inextricably tied to these differences.

--Nathan Young on the xsl-list mailing list, Sunday, 8 Sep 2005 14:43:21

Wednesday, November 23, 2005
Sounds like it might be interesting but you know, I'm tired, and just not up to the aggravation of clicking on something that ends in ".pdf". And I bet I'm not the only one.

--Tim Bray on the xml-dev mailing list, Monday, 25 Aug 2003

Tuesday, November 22, 2005
Never make users register, unless you need to in order to store something for them. If you do make users register, never make them wait for a confirmation link in an email; in fact, don't even ask for their email address unless you need it for some reason. Don't ask them any unnecessary questions. Never send them email unless they explicitly ask for it. Never frame pages you link to, or open them in new windows. If you have a free version and a pay version, don't make the free version too restricted. And if you find yourself asking "should we allow users to do x?" just answer "yes" whenever you're unsure. Err on the side of generosity.

--Paul Graham
Read the rest in Web 2.0

Monday, November 21, 2005
All the alternatives to MS Office suck eggs, and they have all tried to copy all the worst things about the MS tools. I have news for the ODF fans - an open document format doesn't mean anything if the base tool is horrid - and Open Office is, quite simply, horrid.

--James Robertson
Read the rest in Microsoft and the Mountain

Saturday, November 19, 2005
Poorly designed XSLT is hard to maintain, like poorly designed code of any type. Well designed XSLT is a treat to work with.

--Charles Knell on the xsl-list mailing list, Tuesday, 02 Aug 2005 10:21:59

Friday, November 18, 2005
Data integrity is one of those idealistic formal criteria that sound and look great on the drawing board, but tend to lose their street appeal as soon as they get implemented and the rubber meets the road, so to speak. Like that shopping mall that looks fabulous on the computer-generated screen, but gives us the chills of repulsion once it gets built and is twice as hideous as anyone could’ve imagined, the concept of data integrity offers a rather laughable sense of false security that only people who spend nights at the computer screen and never go out can buy into.

--Alex Bunardzic
Read the rest in Ethical Software by Alex Bunardzic » The Myth Of Data Integrity

Thursday, November 17, 2005
Sometimes a rather thin, syntax-oriented, semantically vacuous layer of commonality is all that is needed to simplify things dramatically.

--C. M. Sperberg-McQueen
Read the rest in ACM Queue - XML and Semi-Structured Data - What role can XML play in solving the semi

Friday, November 11, 2005
WS long ago degenerated into a joke to all but a few marketing professionals, industry analysts and committed developers. Originally it was supposed to be an improvement over the likes of COM and CORBA, this improvement coming because somehow the use of XML would work a salubrious magic. So much for that fantasy. WS-*, as Robertson points out, and many others have before, is now a far more complex "stack" than the entire OMA (of which CORBA is but a part) and with much less grounding in practice. Too bad for WS folk. XML folk don't care. Why? Because just as XML was never going to magically save an under-architected system from itself, XML was never likely to be substantively damaged by the fact that it was considered the keystone of said under-architected system.

--Uche Ogbuji
Read the rest in More proof that Web services are poison to XML

Thursday, November 10, 2005
You can tell Java developers who understand XML from those that don't. Those who don't often tend to have overspecified object model hierarchies and like to boast of using hundreds of classes - the ones who do typically have far simpler interfaces, prefer to put most of the hierarchical complexities into the XML, and usually can be found at the beach during the summer.

--Kurt Cagle on the xml-dev mailing list, Friday, 28 Jan 2005 10:54:18

Wednesday, November 9, 2005
Another common problem in managing today’s information is the lack of agreement on vocabularies and schemas. Existing information-processing methodologies require that all the communities involved in generating, processing, or consuming the same information agree to a given schema and vocabulary. Unfortunately, different people, organizations, and communities have inherently different ways of modeling the same information. This is independent of the domain to be modeled or the target abstract model being used (e.g., relations, Cobol structures, object classes, XML elements, or RDF [Resource Description Framework] graphs). Reaching schema agreements among different communities is one of the most expensive steps in software design. Database views have been designed to alleviate this problem, yet views do not solve the schema heterogeneity problem in general. We need to be able to process information without requiring such a priori schema and vocabulary agreements among the participants.

--Daniela Florescu
Read the rest in ACM Queue - Managing Semi-Structured Data

Tuesday, November 8, 2005
if you have 5GBs of data, you probably should not be keeping it in one XML file.

--Jason Robbins on the jdom-interest mailing list, Sunday, Mar 2005 14:33:12
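
Even when a single huge XML file is unavoidable, the usual escape hatch is to stream rather than load the whole tree. A minimal sketch with ElementTree's iterparse (the tiny in-memory document stands in for a multi-gigabyte file):

```python
# Streaming over a document element-by-element so memory stays flat.
import io
import xml.etree.ElementTree as ET

big = io.BytesIO(b"<log><rec>1</rec><rec>2</rec><rec>3</rec></log>")

total = 0
for event, elem in ET.iterparse(big, events=("end",)):
    if elem.tag == "rec":
        total += int(elem.text)
        elem.clear()   # discard the processed subtree instead of keeping it
print(total)  # 6
```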

Monday, November 7, 2005
The wisdom of crowds works amazingly well. Successful systems on the Web are bottom-up. They don’t mandate much in a top-down way. Instead, they control themselves through tipping points. For example, Flickr doesn’t tell its users what tags to use for photos. Far from it. Any user can tag any photo with anything (well, I don’t think you can use spaces). But, and this is a key but, Flickr does provide feedback about the most popular tags, and people seeking attention for their photos, or photos that they like, quickly learn to use that lexicon if it makes sense. It turns out to be amazingly stable. Del.icio.us does the same for links (and did it first, actually). Google’s success in making a more relevant search was based on leveraging the wisdom of crowds (PageRank). RSS 2.0 is taking off because there is a critical mass of people reading it and it is easy to read/write, so people have decided to leverage that when publishing content. It isn’t that it is a good or bad format for things other than syndicated content (for which I think it is very good). Rather, it works well enough.

--Adam Bosworth
Read the rest in ACM Queue - Learning from THE WEB

Sunday, November 6, 2005

Thou shalt remember the customer's phone number. This means you, computer and cellphone companies. We call for help; we're asked to type in our 10-digit phone numbers or 20-digit customer numbers; then when an agent picks up, we're asked for that number again.

What - did you think we actually moved and changed our identities since placing the call?

If they can write software that sends a man to the moon, they can surely write call-center software that passes on to the agent the information we've already typed in.

--David Pogue
Read the rest in 10 Ways to Please Us, the Customers

Friday, November 4, 2005
In my experience working with developers and users, this is the biggie: There is a certain hard core of SGML/XML people who just 'grok' the XSLT development style and can use it to great advantage. But there are a lot of people (and I count myself among them, to my shame) that just can't get anything non-trivial done in XSLT without an example to work from and a reference manual in hand. Many of those same people can grok the basics of XQuery pretty quickly ("oh, it's a lot like SQL except ...."). My rough estimate from talking to XML users (as opposed to geeks) over the years is that SQL/XQuery grokkers outnumber XSLT grokkers by something like 10:1.

--Michael Champion on the xml-dev mailing list, Monday, 8 Nov 2004

Thursday, November 3, 2005
many stylesheets consist of two-thirds data to be copied into the result tree, and one-third instructions to extract data from the source document. An XML-based syntax is beneficial for the two thirds that is data, because it means the code in the stylesheet is a template for the final result. This also facilitates a development approach that starts with graphical designers producing a mock-up of the target HTML or XSL-FO page, and then handing it over to a programmer to add the control logic. (XQuery has recognized this by using an XML-like syntax for element constructors, but there's a lot of difference between being XML-like and being XML.)

--Michael Kay on the xml-dev mailing list, Wednesday, 8 Dec 2004

Wednesday, November 2, 2005
Charsets are easy to accommodate with forethought, but hell to deal with if you just assume (or don't think of it at all, which is an implicit assumption). Nor is it a new problem. Just one that's ignored too often.

--Greg Guerin on the java-dev mailing list, Saturday, 3 Sep 2005 19:53:04

Monday, October 31, 2005
They're all right, but let me add one more little thing. XSLT is a template language, and when written in its more functional/template-based nature XSLT is really quite easy and doesn't require complexity to get proper results. The problem as I see it (and Colin pointed this out with a nice example of using XPath... people tend to force XPath to do TONS of heavy lifting when in reality you just have to let the XML fall gracefully until targeted by the best matching template and then captured and processed accordingly) is that our core focus for so many years has been to have COMPLETE and TOTAL control at every moment using procedural coding, so we all tend to force the issue with XSLT (you will do this and do this now! kind of thing) instead of letting the issue happen and then dealing with it when it does... laaaaazzzzyyyy style ;) When you step back from that mentality and simply design your XML and XSLT in such a way that there is no need to climb up and down the tree to process the data, then XSLT is never complex. It may take a little getting used to the style and as such seem a bit complex, but after you just "let go and let God" as the saying goes, your mind adjusts and it simply makes sense. It seems to me that just about the time you find yourself having to write complex code to make something work in XSLT is right about the time you need to rethink things and realize you are doing things the hard way (the procedural side of your brain if you will) and that there is a much simpler approach that will work twice as efficiently if you block the procedural line of thinking and embrace the template-based functional side of your brain.

--M. David Peterson on the xsl-list mailing list, Sunday, 17 Apr 2005 12:50:54
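
Peterson's "let the XML fall until the best matching template targets it" approach is what XSLT practitioners usually call push processing. A minimal, hypothetical sketch of the style (the element names here are mine, not from the post): no loops, no explicit tree navigation, just templates waiting for nodes to land on them.

```xml
<!-- Hypothetical push-style sketch: each node "falls" through
     apply-templates until a matching template captures it. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- a person becomes a list item wherever it turns up -->
  <xsl:template match="person">
    <li><xsl:value-of select="name"/></li>
  </xsl:template>

  <!-- a group becomes a list; apply-templates lets the children
       fall through to whatever template matches them -->
  <xsl:template match="group">
    <ul><xsl:apply-templates/></ul>
  </xsl:template>

</xsl:stylesheet>
```

The procedural reflex Peterson describes would instead write one big template full of for-each loops and long navigation paths; the push version stays small because each template only has to know about its own node.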

Friday, October 21, 2005
Once you've fundamentally handicapped yourself by tying your data model/programming language to XML Schema then you are already broken. It's like saying "I did the best job I could painting my apartment with a toothbrush". The thing to do isn't asking for better ways to paint apartments with a toothbrush but instead realizing you picked the wrong tool for the job.

--Dare Obasanjo on the xml-dev mailing list, Friday, 3 Dec 2004

Thursday, October 20, 2005
"Make the right thing easy and the wrong thing hard." If designers followed that one clear principle, there'd be a lot more happy users. I'd get a lot more work done instead of struggling with a counterintuitive interface. Writing software would be easier because APIs would simply make sense, with less chance of blowing up at runtime. I could use my car stereo.

--Kathy Sierra
Read the rest in Creating Passionate Users: Making happy users

Wednesday, October 19, 2005
Many big companies, such as Microsoft and Nortel, in their quest to gain shares of the large Internet market in China, transform China into an information prison by collaborating with the Chinese regime on questions of censorship. They should not forget all moral principles under the temptation of financial gain.

--Guo Guoting
Read the rest in Battle blogging for profit

Tuesday, October 18, 2005

Classic network file systems are designed around the idea of mapping system file procedures into network transactions. A lot of naive designs that use XML do much the same thing: they wrap a relatively small amount of data up in a call, and get a relatively small amount of data in return (fopen, fseek, fread equivalents).

XML enables larger granularity. Rather than sending "commands" and getting "return values", you can send and receive larger documents. This corresponds to the way that WebDAV works, for instance. Instead of "opening, seeking, reading", you just get the document/file. Fewer, somewhat larger network transactions; in many cases one can rely on low-level infrastructure to speed the operations. Compression is also more effective in this scenario. Parser start-up times contribute overhead; if the documents are larger, the overhead starts to recede to insignificance.

For increasing performance, increasing the message size (that is, increasing the content, increasing the granularity of operations) is often far more effective than attempts to bum a few cycles from fine-grained operations.

--Amelia A Lewis on the xml-dev mailing list, Thu, 11 Nov 2004

Monday, October 17, 2005
the reason most current XForms documents are of a higher markup quality (more conformant, more accessible) than the average HTML document is that the average XForms author today is more competent than the average HTML author. It has nothing to do with whether XForms is easy or hard.

--Ian Hickson on the www-forms mailing list, Sunday, 13 Mar 2005 17:22:13 +0000

Saturday, October 15, 2005

The internet is not an accident. The internet was not bound to happen. There was no guarantee that the internet would reach its current state as a side effect of emerging digital processing and communications capabilities. We did not recover complex alien technology.

The internet, that place where all eventual business will be transacted, all content and media will be distributed, all correspondence will be exchanged, all history will be recorded, and all pornography will be admired, has a design - and it's meant for exactly these purposes.

Many of the principles that led to this design are still with us today, although I would challenge you to ascertain them by observing the mainstream technologies being peddled by leading vendors, publications, and analyst firms - firms that rose to power in a much different environment, where the short-term profits of disconnected, dead-end business software were deemed more important than laying a fertile ground where millions of new ideas (and hence new profits) could bloom.

But the dead-end has long been reached and so these industry leaders have turned their attention to this new place, built on principles and values very different from their own, and have somehow reached the conclusion that this thriving ecosystem must be re-arranged such that they have somewhere to place their baggage. Instead of embracing the people, principles, and technologies that gave rise to this phenomenon they have chosen to subvert its history and to implant the ridiculous notion that it is “incapable of meeting the stringent demands of the business community.”

Not only have these business radicals claimed the internet as their own but they have also somehow gained the confidence of all the world's industry in their ability to deliver a new and sparkling internet, one no doubt capable of reproducing the complexities and flaws that plague existing mediums so as to make it feel more like home. They’ve brought their own principles and agendas, asserting them as obvious and correct while ignoring the wisdom we’ve gained and shared and gained and shared over years of collaborative practice and observation of working systems at this scale.

--Ryan Tomayko
Read the rest in Motherhood and Apple Pie [@lesscode.org]

Friday, October 14, 2005
XLink is neat conceptually, but in practical use, there are easy ways to do the same thing without using XLinks. Without a compelling use case shared by a near universal set in the user community, it gets very little play outside implementations of link data bases

--Bullard, Claude L (Len) on the xml-dev mailing list, Friday, 14 Jan 2005 12:12:34

Wednesday, October 12, 2005
There comes a point where some of these functionalities, such as the seamless interoperation of the Internet, are too important to leave to the private interest of businesses. We like to think that people won't do antisocial things, but when push comes to shove they will defend their economic interests even at the expense of the public.

--Mark Cooper, Consumer Federation of America
Read the rest in Net blackout sparks talk of new rules

Tuesday, October 11, 2005

When you are using a platform-specific data binding tool, it will generate good-quality code for serializing the data into XML/SOAP and then recreating the objects at the other end just fine. And it will also generate an XML Schema, if it didn't use a custom annotated one for its template.

But that XML schema may be suspect or unacceptable to other tools. But no worries, the other end probably does not validate with the XML Schema or use its typing, anyway.

The paradox? The way to use XML Schemas successfully in multi-vendor web services is to just pretend to use them. The schema and/or WSDL become just documentation, and should not be considered translatable specifications.

--Rick Jelliffe on the xml-dev mailing list, Sunday, 10 Jul 2005 05:15:29 +1000

Sunday, October 9, 2005
People are already calling the XQuery data model the XML data model. They point to how vendors rallying behind SQL made relational DBMS valuable to the mainstream. The problem with this comparison is that SQL is not an omni-tool; it is designed to work on very highly normalized data sets. XML, by contrast, derives much of its power from denormalization; as a result, there is no way to have a single catchall data model for XML that is suitable for all purposes.

--Uche Ogbuji
Read the rest in Is XQuery an omni-tool?- ADTmag.com

Saturday, October 8, 2005

Netscape 4 was a really crappy product. We had built this really nice entry-level mail reader in Netscape 2.0, and it was a smashing success. Our punishment for that success was that management saw this general-purpose mail reader and said, "since this mail reader is popular with normal people, we must now pimp it out to `The Enterprise', call it Groupware, and try to compete with Lotus Notes!"

To do this, they bought a company called Collabra who had tried (and, mostly, failed) to do something similar to what we had accomplished. They bought this company and spliced 4 layers of management in above us. Somehow, Collabra managed to completely take control of Netscape: it was like Netscape had gotten acquired instead of the other way around.

And then they went off into the weeds so badly that the Collabra-driven "3.0" release was obviously going to be so mind-blowingly late that "2.1" became "3.0" and "3.0" became "4.0". (So yeah, 3.0 didn't just seem like the bugfix patch-release for 2.0: it was.)

--Jamie Zawinski
Read the rest in Groupware Bad

Friday, October 7, 2005
Perhaps I am missing the point, but it seems to me that the answer to the question "Xforms or Web Forms" is an emphatic yes. There is room and need in the Web universe for both the evolutionary Web Forms approach, to support backwards HTML compatibility, and the revolutionary XForms approach, to support a new XML based paradigm.

--Eric S. Fisher on the www-forms mailing list, Monday, 14 Mar 2005 17:00:47

Thursday, October 6, 2005

Flash should not be used to jazz up a page. If your content is boring, rewrite text to make it more compelling and hire a professional photographer to shoot better photos. Don't make your pages move. It doesn't increase users' attention, it drives them away; most people equate animated content with useless content.

Using Flash for navigation is almost as bad. People prefer predictable navigation and static menus.

--Jakob Nielsen
Read the rest in Top Ten Web Design Mistakes of 2005 (Jakob Nielsen's Alertbox)

Tuesday, October 4, 2005
A person from the UK makes an online purchase from a US supplier. The online supplier requires entry of a two-letter code in the "State" box and a numeric value in the "postal code" box, despite the fact that the person entered UK as the country. So, the person entered "ZZ" as the state and 12345 as the postal code. Does validation result in forcing people to supply incorrect information?

--Roger L. Costello on the xml-dev mailing list, Tuesday, 24 Aug 2004

Monday, October 3, 2005
We need XML Schema so that other WGs know what not to do. It is a signpost of pain warning each and every specifier of standards gone astray.

--Robin Berjon on the xml-dev mailing list, Tuesday, 16 Aug 2005 14:52:34

Saturday, October 1, 2005
I don't recommend Real Media to anyone these days because the free player has become such a nuisance for end-users to install. It endlessly tries to force you to register, trick you into paying for a premium player, and then once they do get it installed, the video display is tiny because the pane is crowded with their "content", little mini browsers and so-called "channels". Bleah.

--David Kaufman on the WWWAC mailing list, Sunday, 8 Sep 2005 17:11:32

Friday, September 30, 2005

But comma-delimited ASCII doesn't work just as well.

First: comma-delimited. What if the fields contain commas? Or newlines? They need to be quoted (and the developers need to know that they need to be quoted), which means you've already got an interop problem, namely, which of the half-dozen flavors of CSV are you going to use?

Second: ASCII. 'Nuf said.

Third, and most important, is the shape of the data. Not everything fits in a list of homogeneous records, which is CSV's natural shape. Of course you can wedge data that isn't shaped like an N by M table into a CSV file, but then you have to devise your own encoding scheme.

XML's basic building blocks of elements, attributes, and text are flexible enough to accommodate a much broader range of data. Only a few things fit in a regular table, but a lot of things can fit in a tree.

--Claude L (Len) Bullard, on the xml-dev mailing list, Wednesday, 1 Jun 2005 15:41:15 -0500
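
The interop hazard described above (fields containing the delimiter, and the need for quoting) is easy to demonstrate. A small sketch using Python's csv module; the sample record is illustrative:

```python
import csv
import io

# A record whose first field itself contains a comma.
row = ["Bullard, Claude L (Len)", "xml-dev"]

# Naive comma-joining mangles the record: splitting it back on
# commas yields three fields instead of two.
naive = ",".join(row)
assert len(naive.split(",")) == 3

# The csv module quotes the troublesome field, so a round trip
# preserves the original shape.
buf = io.StringIO()
csv.writer(buf).writerow(row)
parsed = next(csv.reader(io.StringIO(buf.getvalue())))
assert parsed == row
```

Even here the two ends must agree on one of the "half-dozen flavors" of quoting; the csv module exposes those flavors as dialects for exactly that reason.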

Wednesday, September 28, 2005
One of the ironies of technological evolution is that even when we *intend* there to be a Darwinian weed-out, sometimes the opposite happens -- we get more variety and speciation -- just as sometimes when we hope/expect many alternatives to flourish, along comes something and dominates, and almost everyone switches to that. XML has seen both kinds of wrong guesses in its short history. For example, way back when, we thought there might be lots of different document formats, but it turns out that to the extent this is the case, the "bespoke" formats are fairly private, and in public one sees pretty much the same thing over and over (Docbook, TEI, OpenOffice XML, etc.). Yet on the other hand, the expectation that we'd have *a single* schema language was met by the proliferations of DTD, W3C Schema, RelaxNG and Schematron (not that there hasn't been any Darwinian weeding here), plus a smattering of ingenious more local solutions -- so the opposite has happened, and to be an expert on "schemas in/for XML" you have to be conversant with all these.

--Wendell Piez on the xsl-list mailing list, Wednesday, 24 Aug 2005 12:42:31

Tuesday, September 27, 2005
many customers have XML files in the 5 gig size range and XML content sets exceeding 5 terabytes. I think we'll see larger XML files and XML content sets as people get familiar with the advanced tools capable of handling them. As a comparison, probably no one has a 100 million row Excel table, but a 100 million row relational database is common.

--Jason Hunter on the jdom-interest mailing list, Monday, 14 Mar 2005 21:47:19

Monday, September 26, 2005
I have been studying the SVG specs when I happened upon the SVG path tag. Much to my dismay, the W3C decided to stuff a micro-language within a single parameter! That is one of the most awful hacks I have ever seen. This completely defeats the purpose of XML! Well so much for the original goals of simplicity and genericity of XML representations. Apparently this is old news. It's a shame the spirit of XML is being ignored in such a blatant manner.

--Christopher Diggins
Read the rest in XML down a slippery slope

Sunday, September 25, 2005
K-12 teachers tend to simplify English style into rules. The idea is benevolent enough - by following these so-called rules, the children will produce writing that works reasonably well AND that the teacher can grade without much mental effort, leaving time and brain cells for the rest of the multitude of required chores. But as a result, many people grow up believing in the linguistic equivalent of Santa Claus: Do things according to "the rules" (say, don't use parentheses, or don't begin a sentence with "but"), and your writing will be good writing. And any writing that doesn't follow these rules will be bad writing - even if you like it.

--Hilary Powers on the cbp mailing list, Saturday, 09 Jul 2005 10:19:55

Friday, September 23, 2005
The Binary Infoset issue is small compared to the larger one that has crippled fundamental standards at the W3C: the chaotic development of DTD-replacing layers (xml:include, xml:base, xml:id, xlink, XML Schemas) without having a corresponding dependable processing sequence like that of DTDs.

--Rick Jelliffe on the xml-dev mailing list, Tuesday, 12 Apr 2005 04:05:06 +1000

Thursday, September 22, 2005

The Times has just placed all of their "name" op/ed columnists behind a pay wall ($49.95 per year). The question I have to ask is, why would anyone pay for that? Forget the political persuasion of any of the writers - there are tons of voices on the net that cover the opinion spectrum. The vast majority of them are free, and can be read quickly and easily with a news aggregator (and the masses will be using syndication, once IE7 ships).

So why would you pay for the privilege of reading the Times' stable of writers? Are they really better than the free pundits? I don't think so, and I think that the Times is in for a rude awakening. The reality is, they just opted out of the political conversation. Up until now, bloggers of all stripes linked to the Times writers, either to agree or disagree. That's not going to happen now - even if a given blogger subscribes, he'll know that most of his readers won't.

--James Robertson
Read the rest in Smalltalk Tidbits, Industry Rants: The New York Times vs. Free Content

Tuesday, September 20, 2005
It often takes some time to internalize the design of XSLT, especially if your background is in procedural programming. When you are accustomed to writing programs that say "Do this, and then do that until this is true.", writing and understanding XSLT can be a challenge.

--Charles Knell on the xsl-list mailing list, Tuesday, 02 Aug 2005 10:21:59

Monday, September 19, 2005
it's just not clear to me that XQuery really does what XSLT does in a developer-friendly way. It's certainly a lot more approachable as a way to query databases than XSLT, but is it dramatically more convenient as a way to filter, merge, and transform XML data and services? Is anyone even marketing it that way any more? Most of the commercial interest in XQuery these days is as a database interface.

--Michael Champion on the xml-dev mailing list, Sunday, 12 Dec 2004

Sunday, September 18, 2005
The problem isn't that the stories I care about aren't being covered, it's that they aren't being covered in the obsessive way that breaks through the din of our 500-channel universe. Because those 500 channels don't mean we get 500 times the examination and investigation of worthy news stories. It often means we get the same narrow, conventional-wisdom wrap-ups repeated 500 times. Paradoxically, in these days of instant communication and 24-hour news channels, it's actually easier to miss information we might otherwise pay attention to. That's why we need stories to be covered and re-covered and re-re-covered and covered again -- until they filter up enough to become part of the cultural bloodstream.

--Arianna Huffington
Read the rest in Wired News: Arianna Learns to Love the Blog

Friday, September 16, 2005
Microsoft hasn't really improved on their browser for five years. That's a long time not to update a product and especially when it's the most used product in the world.

--John Von Tetzchner, Opera Software
Read the rest in Gussying up for the Opera | Newsmakers | CNET News.com

Wednesday, September 14, 2005
I use Word for book creation (don't bother commenting about that; I've heard all the "you should use this" comments, and every one of them would require much more work to use than Word does. And OpenOffice, much as I love the concept, won't handle a book. When I find a truly better solution, I'll be the first to switch.).

--Bruce Eckel
Read the rest in Java Threads

Tuesday, September 13, 2005
I have been a web developer for ten years. It's not 1997 anymore. I cannot think of any misguided "feature" that would require a particular browser, except testing guidelines. That is: THIS IS NOT A TECHNICAL PROBLEM. THIS IS A BUREAUCRACY PROBLEM.

--Boyd Waters
Read the rest in MacInTouch: timely news and tips about the Apple Macintosh

Monday, September 12, 2005

XOM has the best API ever.

In my app we churn business objects into XHTML then XSL:FO and finally PDF. XOM makes it super easy to build the XHTML tree. And if I play my cards right, I might be able to turn that XHTML into FO without serializing it to bytes first. Amazing.

XOM makes XML fun again! Get rid of SAX, DOM and hardcoded "<html>". Get XOM, be happy.

--Jesse Wilson
Read the rest in Public Object: Get XOM

Sunday, September 11, 2005
I am a Web developer for a non-profit that is trying to help families with special health needs affected by Hurricane Katrina. Why in the world would FEMA, a federal agency subject to Section 508 provisions, build a web site that is only accessible via Internet Explorer for Windows? I know of no *good* technical reason to do so. In fact, if your web team is worth its salt, they should be developing to W3C standards anyway, which would mean that your web resources would be available to anyone with a web browser, even an old one on an old system! As a Macintosh user, I'm used to this kind of marginalization, but I find it outrageous that a resource as critical to people in desperate straits as this one would exclude millions of people for no reason other than ignorance of best practices.

--Andrew Hedges
Read the rest in MacInTouch: timely news and tips about the Apple Macintosh

Saturday, September 10, 2005

The XML Spec says:

"Names beginning with the string "xml", or with any string which would match (('X'|'x') ('M'|'m') ('L'|'l')), are reserved for standardization in this or future versions of this specification."

http://w3.org/TR/2004/REC-xml-20040204/#sec-common-syn

Reserved does not mean invalid or forbidden. The meaning is: "You can still use such names but you have been warned. Some nice day you'll wake up to find that the nice name "xmlns2" you used has been given exclusively to "XML Namespaces -- 2" and your source xml documents have suddenly changed their meaning".

--Dimitre Novatchev on the xsl-list mailing list, Friday, 11 Feb 2005 23:07:16
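
The production Novatchev quotes is easy to check mechanically. A small sketch in Python; the helper name is mine:

```python
import re

# Matches names whose first three characters are x, m, l in any case,
# per the (('X'|'x') ('M'|'m') ('L'|'l')) production quoted above.
_RESERVED = re.compile(r"[Xx][Mm][Ll]")

def is_reserved(name):
    """True if the name falls in the reserved 'xml' prefix space."""
    return _RESERVED.match(name) is not None

assert is_reserved("xmlns2")      # the example from the quote
assert is_reserved("XMLthing")    # the match is case-insensitive
assert not is_reserved("myxml")   # only the prefix is reserved
```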

Friday, September 9, 2005
DTDs are not an option for browsers. Browsers have to say "No!" to DTDs for performance reasons. None of Mozilla, Opera and Safari actually load any of the XHTML 1.x DTDs. The reason why the XML spec made external DTD loading optional was that the writers of the spec considered external entities incompatible with browsers.

--Henri Sivonen on the xml-dev mailing list, Saturday, 3 Sep 2005 01:46:51
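
The same default holds for common non-browser parsers. For instance, Python's expat-based ElementTree never fetches the external subset, which this sketch shows by pointing the DOCTYPE at a deliberately unresolvable DTD:

```python
import xml.etree.ElementTree as ET

# The DOCTYPE names a DTD that does not exist; a parser that tried
# to load it would fail or block on the network.
doc = """<!DOCTYPE html SYSTEM "http://example.invalid/missing.dtd">
<html><body>hello</body></html>"""

# expat skips the external subset entirely, so parsing succeeds
# without any network access.
root = ET.fromstring(doc)
assert root.find("body").text == "hello"
```

The flip side, of course, is that any entities defined only in that external DTD would go unresolved, which is exactly the trade-off the XML spec's optional-loading rule anticipates.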

Thursday, September 8, 2005

A few years ago, I did a major restructuring of the Open Source FlightGear simulator, moving as much of the configuration as possible out of the C++ and into XML (the physics models for aircraft were already using a pseudo-XML, but everything else was hard-coded). A short while after that, the contributor base grew enormously, drawing in people with little or no programming experience but other useful skills, such as 3D modelling, aerodynamics, etc.

So, in this case, the majority of *coders* know C++ better than XML, but the majority of *contributors* know only XML. They don't know it all that well, but XML doesn't have to be hard -- just make sure the tags match, escape the special characters, quote the attributes, and remember that names are case sensitive.

--David Megginson on the xml-dev mailing list, Friday, 28 Jan 2005 08:53:51
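
Megginson's checklist (tags match, special characters escaped, attributes quoted) is exactly what a well-formedness check enforces. A quick sketch of such a check with Python's standard parser; the sample documents are mine:

```python
import xml.etree.ElementTree as ET

def well_formed(text):
    """Return True if the document parses, False otherwise."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

assert well_formed('<aircraft model="747"><wing/></aircraft>')
assert not well_formed('<aircraft><wing></aircraft>')  # tags don't match
assert not well_formed('<aircraft model=747/>')        # unquoted attribute
assert not well_formed('<note>fish & chips</note>')    # unescaped &
```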

Wednesday, September 7, 2005
elementFormDefault is quite obviously the worst design decision ever made in a W3C specification

--Robin Berjon on the xml-dev mailing list, Sunday, 30 Jun 2005 19:04:05

Tuesday, September 6, 2005
Personally I'm no great fan of data binding tools: it's much better to write your application in a language that has XML Schema as its native type system (i.e., XSLT or XQuery) than to mess around converting your data between two very different representations.

--Michael Kay on the xml-dev mailing list, Tuesday, 30 Aug 2005 11:55:40

Monday, September 5, 2005

Many W3C Working Groups try very hard to minimize any work in response to reviews. Good strategies here are

  • ignoring comments altogether
  • waiting several months or years to get back to reviewers
  • addressing feedback by explaining things in mails but not updating deliverables as if only the very reviewer is too dumb to understand it in the first place
  • authoring documents such that they are inaccessible to large parts of the intended audience
  • maintaining comments-only mailing lists
  • moving important illustrations and clarifications to out of band deliverables such as primers and FAQs
  • making it impossible for reviewers to schedule reviews by maintaining impossible schedules about the working group's expected progress
  • using misleading maturity levels (Fourth Last Call, this time we /really/ mean it!)
  • infrequent publication

Some strategies are less common, for example, applying twisted logic to the design process like developing a solution first and then trying to find use cases that fit to the solution, or simply plain ignorance, "It's our language and we can do that."

--Bjoern Hoehrmann on the xml-dev mailing list, Friday, 19 Aug 2005 01:16:05

Sunday, September 4, 2005
Desktop software that supports OpenDocument and PDF in the future is acceptable; Microsoft's proprietary XML formats are not.

--Eric Kriss, Secretary of Administration & Finance for the Commonwealth of Massachusetts, during a telephone interview Friday
Read the rest in InformationWeek > XML > Microsoft Blasts Massachusetts' New XML Policy > September 2, 2005

Friday, September 2, 2005
The modern day H1b program in the United States of America is no better than slave labor. It is bad for the Alien, and it is bad for the US Denizen. It is however good for a lot of enterprising "body shops" - which I prefer to call "pimps" - who are able to make a quick buck by bringing fresh meat to the United States to sell in the software industry.

--Sahil Malik
Read the rest in Sahil Malik [MVP C#] : 21st Century Slave Labor

Thursday, September 1, 2005

If you want to do something that's going to change the world, build software that people want to use instead of software that managers want to buy.

When words like "groupware" and "enterprise" start getting tossed around, you're doing the latter. You start adding features to satisfy line-items on some checklist that was constructed by interminable committee meetings among bureaucrats, and you're coding toward an externally-dictated product specification that maybe some company will want to buy a hundred "seats" of, but that nobody will ever love. With that kind of motivation, nobody will ever find it sexy. It won't make anyone happy.

--Jamie Zawinski
Read the rest in Groupware Bad

Tuesday, August 30, 2005

Let’s say you buy a new computer and use it for three years. That’s about 1,000 days. Your first-run experience – the experience you encounter the first time you boot the machine after taking it out of the box – therefore constitutes about one-thousandth of your entire experience with the machine. I think that’s the sort of logic that has driven most companies not to put that much effort into designing the first-run UI – it’s only going to happen once, and if it isn’t smooth, so what?

Whereas I think Jobs looks at the first-run experience and thinks, it may only be one-thousandth of a user’s overall experience with the machine, but it’s the most important one-thousandth, because it’s the first one-thousandth, and it sets their expectations and initial impression.

The first-run experience with a new Mac – or a new installation of Mac OS X – is much better now than it ever was before. The music is nice, the animation looks cool, and the new Migration Assistant makes it easy for anyone to move their important files from an old machine to a new one.

--John Gruber
Read the rest in GUIdebook > Articles > Interview with John Gruber

Monday, August 29, 2005
it takes six years for software to become mature; so, for libraries and applications, XML is mature, XML Schemas still has 2 years to go, and XQuery has maybe 7 years to go. Caveat emptor: when you use semi-mature technologies you will have problems that you wouldn't (or shouldn't) have with mature technologies.

--Rick Jelliffe on the xml-dev mailing list, Friday, 29 Apr 2005 17:06:32

Sunday, August 28, 2005

CDATA is just an authoring convenience, within the marked region < acts like &lt; and & acts like &amp; so if you have a large chunk of XML that you want to quote as data rather than as part of the XML tree, you can do

<x><![CDATA[<p>zzz <span>dddd &nbsp; ...</span></p>]]></x>

but XSLT will see the same input as if you had gone

<x>&lt;p&gt;zzz &lt;span&gt;dddd &amp;nbsp; ...&lt;/span&gt;&lt;/p&gt;</x>

--David Carlisle on the xsl-list mailing list, Friday, 1 Apr 2005 10:18:17
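
Carlisle's point, that CDATA is pure authoring convenience and invisible after parsing, can be checked with any XML parser. A sketch using Python's ElementTree, with a shorter document than his:

```python
import xml.etree.ElementTree as ET

# The same character data written two ways: once inside a CDATA
# section, once with the markup characters escaped by hand.
with_cdata = "<x><![CDATA[<p>zzz</p>]]></x>"
escaped = "<x>&lt;p&gt;zzz&lt;/p&gt;</x>"

# After parsing, the distinction is gone: both documents carry
# identical text content, which is all an XSLT processor sees.
assert ET.fromstring(with_cdata).text == ET.fromstring(escaped).text
assert ET.fromstring(escaped).text == "<p>zzz</p>"
```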

Saturday, August 27, 2005
Part of the value of XML is its nearly universal interoperability. XML data can be repurposed over and over again, sometimes for uses not originally anticipated. You can take most any XML and read it into Excel, import it into a variety of databases, transform it with widely available XSL tools, etc. While in principle one could re-release all the software that's already out there to include new drivers for binary XML, in practice there will for years be software that only understands the text form. Even if binary is successful, we will bear for the indefinite future the cost of conversion between the two, e.g. when editing in Emacs is desired.

--Noah Mendelsohn on the www-tag mailing list, Sunday, 7 Apr 2005 12:59:58

Friday, August 26, 2005

WXS hasn't got a type *system*. It contains a collection of "simple" types, which includes, for example, nine different unrelated types associated with dates, times, and durations, three numeric types (four if you include boolean) which are unrelated, two encoded byte-stream types which are not related to one another, the obscure NOTATION type, string, two types (qname and anyuri) which arguably ought to be subtypes of string (unless they're separately typed because they act as pointers within XML, in which case one could wish for some relation between them, somehow), no expanded name type, no types associated with currency, no consistency, rhyme, reason, or evident method.

Despite this inconsistency and incompleteness, the collection is closed to extension on the ur-type (where one might reasonably wish to create a type for, say, geographic position, or a rich enough "number" type that it could include rational numbers (fractions) or imaginary and irrational numbers). Realms that need *those* as basic types are effectively excluded, unless they can plead, cajole, beg or threaten the committee into including *their* pet types.

As a counter-example, and much as I hate Lisp in general, the Lisp type system encompasses, or can encompass, all of the WXS types, systematically. And more. And is extensible. Partly *because* it's a system.

--Amelia A Lewis on the xml-dev mailing list, Saturday, 1 Jan 2005 23:28:38

Thursday, August 25, 2005
Blogs scare the MBAs witless. Blogs out branding, they out attempts to use language to hide the truth, they out the con artists and the fakirs who can push a stock price into a bubble by gutting the employees' benefits without doing anything to get product out the door. The blogs scare the ownership society. They force them to confront the customers face on without the protection of their spin meisters and closed door meet-and-greets. They can co-opt blogs, but they can't change the essential openness. Vampires don't like bay windows on their crypts.

--Claude L (Len) Bullard, on the xml-dev mailing list, Friday, 12 Aug 2005 10:42:44

Wednesday, August 24, 2005
With hindsight we can say XML could have been simpler, and so could XML Schema. This isn't unique to the W3C: FTP could be simpler, and TCP (as in TCP/IP) had some features that turned out not to scale well in all cases, such as "Slow Start". The local building codes here in Ontario aren't the simplest either. But I'd rather have all of these things than not have them at all, and in practice that's usually the choice.

--Liam Quin on the xml-dev mailing list, Friday, 19 Aug 2005 18:11:03

Tuesday, August 23, 2005
The whole Web Services space is one that is based more on hope than any real evidence of success. There are dozens of companies that have invested heavily in this space and thousands of individuals who have tied their hopes to the idea of Web Services. The mere fact that nothing useful seems to be coming out of the expenditure of all this energy doesn't seem to have dimmed the desire to make it happen.

--Bob Wyman on the xml-dev mailing list, Friday, 2 Apr 2004

Monday, August 22, 2005
Implicitness fails us I think. There's a correlation between bozo technology and implicit or axiomatic approaches to specification. Personally speaking, I think I have wasted a lot of time since coming into this industry in having to decipher the consequences of specs or interpret things through to some form of conclusion that may or may not be what was expected. And then to go back to the source or the community only to be told that's not the preferred interpretation. And then watch the bugs and interop reports come in. At that point the typical response is that it's too late to fix now. When I think of a spec written in implicit form I tend to think of XML Namespaces.

--Bill de hÓra on the xml-dev mailing list, Saturday, 20 Aug 2005 18:17:44

Sunday, August 21, 2005
Effectively, the Web services architecture is a separate architecture from the Web architecture. There are almost no services that are generally considered Web services (that is, use SOAP and/or WSDL) that are on the Web. Further, Web services clients rarely see Web applications. The attempt to unify the architectures in SOAP 1.2 with the SOAP-response MEP simply has not been deployed. This could be because the community has not moved to SOAP 1.2, perhaps partially because of the lack of WSDL 2.0 deployment. However, I think it is much more that Web services authors perceive little benefit in offering their services on the Web. For example, the WS-ResourceFramework[1] and WS-Transfer[2] provide generic operations in SOAP for achieving state transfer, despite proposals such as WS-REST[3] that could have integrated the resource framework with the Web.

--David Orchard on the www-tag mailing list, Monday, 13 Jun 2005 11:32:51

Friday, August 19, 2005
It's not surprising that XForms is raising hackles and ruffling feathers since it has the potential to displace a language that a lot of people have spent a lot of time learning and building businesses around. But just as I gave up assembler programming to use C, so I now gladly drop spaghetti-script for XForms. It's not perfect and there are plenty of new features that I would like to see introduced, but its most important role has been to demonstrate that we don't need to be wedded to the old approaches for ever, and that new solutions are possible, and here today.

--Mark Birbeck
Read the rest in Internet Applications: The "XForms Myth" Myth

Thursday, August 18, 2005
The problem is that a lot of customers don't really know what they want when they first start out using a new technology. They haven’t had enough experience with the technology and their use-cases to know all the ins and outs. XML has demonstrated itself to be unfortunately bad in this. Customers think they only need to worry about X (some simplified subset of XML) until 2 weeks before ship, when someone points out that they have been completely ignoring xml-namespaces / whitespace / mixed-content / processing-instructions / etc... I have seen this time and time again.

--Derek Denny-Brown
Read the rest in only this, and nothing more: Serving Many Masters

Wednesday, August 17, 2005

I wrote an XML Schema for SVG Full 1.1, and another for SVG Tiny 1.1. Doing so taught me a number of things:

  • 85% of XML Schema is thoroughly useless and without value;
  • the few useful features are weak and without honour;
  • creating a modularized XML Schema is easier than with DTDs, but nowhere near as simple as with RNG;
  • while a zillion useless features have been included in the spec, anything useful such as making attributes part of the content model has obviously been weeded out with great care, basically leaving one with DTDs supporting namespaces, a few cardinality bits, no entities, and loads of cruft;
  • tools like XML Spy that are supposed to help one write schemata will produce very obviously wrong instances, meanwhile the syntax of XML Schema was obviously produced by someone who grew up at the bottom of a deep well in the middle of a dark, wasteful moor where he was tortured daily by abusive giant squirrels and wishes to share his pain with the world;
  • the resulting schema is mostly useless anyway as there is no tool available that will process it correctly.

--Robin Berjon on the xml-dev mailing list, Sunday, 09 Jun 2005 11:59:45

Tuesday, August 16, 2005
Different applications have different processing requirements, so having the syntax specification define any one model would be wrong. For example, editing applications could well require a different API/InfoSet than XML databases. IMHO. This was one of the most significant issues in the early DOM specification process: the tension between different application domain requirements.

--Gavin Thomas Nicol on the xml-dev mailing list, Friday, 6 May 2005 14:54:36

Monday, August 15, 2005
standardisation of 'object mapping', even within a set of today's best of breed technologies such as Java/C#/Python is a little dangerous given XML is about exchanging documents, or at least interoperating with those who want to work with XML directly. What goes on behind the XML curtain is very much a per-implementation concern.

--Paul Downey on the xml-dev mailing list, Wednesday, 13 Jul 2005 16:48:14

Sunday, August 14, 2005
XML is not the best format but it's at least one everyone seemed to agree on.

--Didier PH Martin on the xml-dev mailing list, Wednesday, 01 Jun 2005 21:08:41

Friday, August 12, 2005
Unless there were compelling arguments in favor of using XSD, I'd use RNG. Even if there were compelling arguments, I might very well use RNG and create the XSD with trang.

--Norman Walsh on the docbook mailing list, Wednesday, 02 Feb 2005 18:43:16
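For reference, trang picks its input and output schema languages from the file extensions, so the conversion Walsh describes is a one-liner (assuming trang is installed and on the PATH; the file names are illustrative):

```shell
# RELAX NG in, W3C XML Schema out -- formats inferred from extensions
trang docbook.rng docbook.xsd
trang docbook.rnc docbook.xsd   # the compact syntax works too
```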

Thursday, August 11, 2005

if the domain model behind the service changes (due to integration or evolution) and this (naturally) propagates to the service API, conversation with former clients becomes impossible because they do not know the new semantics (and API). With REST you at least *can* have a conversation, and the server can, for example, provide the client with the necessary information to adapt (e.g. transformation code can be sent across to transform the new schema into the old one).

IMHO, this capability of REST to handle domain model evolution without having to re-program existing clients makes it a very interesting choice not only for internet-scale/outside-firewall situations but also for common enterprise IT (behind the firewall) systems.

--Jan Algermissen on the xml-dev mailing list, Tuesday, 05 Apr 2005 19:07:54

Wednesday, August 10, 2005
We wound up purchasing the codezoo.com domain name from someone who owned it already, and in the process, I had a brutal introduction to the world of domain name registrar transfers. In the name of openness and deregulation, ICANN allows many companies to register a domain name, and then manage the registration of that domain, for you or your company. The past owner of codezoo.com had registered it with a registrar named Omnis, and O'Reilly manages its domains with another firm. We had to transfer the name from Omnis to our registrar in order to use it. Of course, that doesn't benefit Omnis at all, so they did everything possible to prevent the transfer from occurring: holding requests for as long as they could, directly lying to us on the phone about it, day after day, and eventually refusing to take our calls at all unless we wanted to sign up for services from them. Towards the end of this process, when we had launched the site as codezoo.net since Omnis had prevented us from launching with the name we'd bought, I realized: "Wait a minute! I'm dealing with the phone company, that's what this is!" Ma Bell is gone, and in her place, the new locus of bureaucratic metastasization is domain registries. I guess the Internet really has grown up! Don't do business with Omnis, and if you plan to transfer a domain name, allow more than a month for the whole thing to happen.

--Marc Hedlund
Read the rest in Some notes on the building of CodeZoo

Tuesday, August 9, 2005
What amazes me when I study the work of the truly great programmers such as Dimitre and his FXSL Functional XSLT library is that there tends to be one very common trait among them -- simplicity.

--M. David Peterson on the xsl-list mailing list, Sunday, 17 Apr 2005 15:52:04

Monday, August 8, 2005
One of the oddities of the open-source world is that I don't know very much about what my users are doing. I know that there are around 250 downloads of Saxon a day, of which around half are the 7.x version, but I have very little idea who is downloading it and what they are doing with it (if anything). Most of the feedback I get is either from a small group of experts who know the technology inside out and are stretching its boundaries, or from beginners who don't know where to start. There's a silent majority in between that I never hear from.

--Michael Kay
Read the rest in An Interview with Michael Kay

Thursday, August 4, 2005
Anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool.

--Tim Bray
Read the rest in ongoing - On Postel, Again

Wednesday, August 3, 2005
I am a great believer in the bogosity of authorial intent.

--Walter Perry, Extreme Markup Languages 2005, August 3, 2005

Tuesday, August 2, 2005
Mixed content, entities, PIs, etc. are undeniably central to document apps, but an annoyance to data-centric apps. Types, the PSVI, and nil are central to data apps but an annoyance to pure document folks. But that doesn't mean we can all just go our separate ways: there is a huge middle ground (e.g. most InfoPath documents) that has features from both worlds, because real business documents have both semi-structured text and strongly typed data in them.

--Michael Champion on the xml-dev mailing list, Sunday, 14 Jul 2005 13:07:02

Monday, August 1, 2005
we're only just scratching the surface with what can be done with xml. Getting the ideas and selling them is the easy part. Writing them is a little harder. Debugging them harder again. Production testing is time-consuming.

--David Lyon on the xml-dev mailing list, Tuesday, 3 May 2005 09:18:18

Sunday, July 31, 2005

In the web platform team that I lead, our top priority is (and will likely always be) security – not just mechanical “fix buffer overruns” type stuff, but innovative stuff like the anti-phishing work and low-rights IE. For IE7 in particular, our next major priority is removing the biggest causes of difficulty for web developers. To that end, we’ve dug through a lot of sites detailing IE bugs that cause pain for web developers, like PositionIsEverything and Quirksmode, and categorized and investigated those issues; we’ve taken feedback from you directly (yes, we do read the responses to our blog posts) on what bugs affect you the most and what features you’d most like to see, and we’ve planned out what we can and can’t do in IE7.

In IE7, we will fix as many of the worst bugs that web developers hit as we can, and we will add the critical most-requested features from the standards as well.

--Chris Wilson
Read the rest in IEBlog : Standards and CSS in IE

Saturday, July 30, 2005
frames may be useful, if used in the right way, in intranets and certain web applications. For a public website, however, frames have too many accessibility and usability problems. Bookmarking problems, printing difficulties, trouble with deep linking, and having to do search engine workarounds are a few of the drawbacks to using frames.

--Roger Johansson
Read the rest in Web development mistakes, redux | Archive | 456 Berea Street

Friday, July 29, 2005
you cannot write a normative DTD for a namespaced language. That's an issue that crops up regularly in document-based communities where people have tried to learn from HTML's errors and recommend validation against a DTD. Unfortunately, since it produces false-negatives, you then end up with content producers wondering why their content doesn't validate when according to every knowledge of the spec they can muster it is perfectly fine.

--Robin Berjon on the www-tag mailing list, Monday, 04 Apr 2005 01:13:16
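The root of the problem is that a DTD constrains lexical element names, while a namespace-aware consumer sees expanded names. A sketch with Python's ElementTree (the SVG namespace is used purely for illustration):

```python
import xml.etree.ElementTree as ET

# Two namespace-equivalent documents with different lexical element names.
prefixed = ET.fromstring('<s:svg xmlns:s="http://www.w3.org/2000/svg"/>')
default = ET.fromstring('<svg xmlns="http://www.w3.org/2000/svg"/>')

# A namespace-aware parser sees the same expanded name for both,
# but a DTD can only validate one fixed prefix -- hence the false negatives.
assert prefixed.tag == default.tag == '{http://www.w3.org/2000/svg}svg'
```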

Thursday, July 28, 2005
We use XML for messaging so we can stop talking about what format to use whenever we want to exchange something. That lets us get our work done efficiently, leaving us lots of time to hang out on XML-Dev and discuss what other formats might have been better.

--Jonathan Robie on the xml-dev mailing list, Wednesday, 01 Jun 2005 16:52:33

Wednesday, July 27, 2005
Xerces has many wonderful characteristics that make it the right choice for many purposes, but it is nowhere near the fastest parser you can write for many important high-performance applications.

--Noah Mendelsohn on the www-tag mailing list, Sunday, 7 Apr 2005 10:31:18

Tuesday, July 26, 2005
The OFWeb (Old-Fashioned Web) primarily uses http: URIs as a means to an end, namely retrieving encoded character streams with some kind of rendering semantics, the encoding and semantics typically being signalled in the http header. The SemWeb primarily uses http: URIs as an end in themselves, as the constituents of RDF triples i.e. as names for things and relationships between them.

--Henry S. Thompson on the www-tag mailing list, Sunday, 28 Apr 2005 16:34:52

Monday, July 25, 2005
There IS an art to working with XML, just as there is with any other language, a perceptual understanding of XML trees in a set mode, just as the way you approach programming using SQL differs from the way you approach C++. Linux programmers in general usually tend to come up the ranks as either shell coders (pure proceduralists) or C/C++ devs, and getting them to shift their thinking to XML can be as hard as it is on the Windows side (I note that most highly proficient XML developers still come from the web/eCommerce side, where declarative programming structures are far more common).

--Kurt Cagle on the xml-dev mailing list, Friday, 28 Jan 2005 11:17:32

Sunday, July 24, 2005
Over the years I have worked with a good number of XML developers, ranging in skill from occasional user to expert. In almost every case I have found a lack of understanding of namespaces -- or, in the presence of understanding, hands-on confusion in working with and debugging namespace-related issues. XML namespaces, as defined by the current specification, are a departure from the perl-hacker-should-be-able-to-create-an-XML-parser-in-two-weeks credo; it takes more than two weeks just to understand the nuances of XML namespaces. The XML namespace FAQ gives the flavor of this confusion.

--Parand Darugar
Read the rest in Abolish XML namespaces?
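One of the nuances Darugar alludes to: a default namespace applies to elements but not to unprefixed attributes. A quick demonstration with Python's ElementTree (the URI is made up):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<root xmlns="urn:example"><child id="1"/></root>')

# The element is in the default namespace...
child = doc.find('{urn:example}child')
assert child is not None
# ...but its unprefixed attribute is in no namespace at all.
assert child.get('id') == '1'
assert '{urn:example}id' not in child.attrib
```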

Friday, July 22, 2005
Evil design is where they stop you from doing what you are trying to do, like putting an advert over the top of the page. That's the wrong way to do it. Google has made billions by putting the ads where people do want them, rather than where they don't want them.

--Jakob Nielsen
Read the rest in Guardian Unlimited | Online | Lazy, stupid and evil design

Thursday, July 21, 2005
The early adopters of Firefox include a lot of people in IT departments. Those are the people you want to have, as they will be the ones who can convince their management to migrate to Firefox. We have high hopes that we'll do better and better in that space with Windows 2000 users. If users don't upgrade to Windows XP they won't get IE 7, but 50 percent of businesses are still using Windows 2000. We're excited about Microsoft launching IE 7 — it will remind a lot of people that if they want better features they have to spend hundreds of dollars upgrading. Even if we stopped supporting Windows 98, a company can support themselves as it is open source. This is one of the advantages of open source — you can avoid the forced update cycle.

--Asa Dotzler
Read the rest in Firefox: Doing it for love

Wednesday, July 20, 2005

Hmmmm, what Does SOAP/WS Do that A REST System Can't?

Lots:

1) Provides big vendors with new vendor lock-in and marketing spin opportunities.

2) Adds additional complexity and obfuscation that are unnecessary in many circumstances.

3) Adds a lot of overhead.

4) Provides a marked reduction in scalability, especially when used by the unwashed masses as an RPC mechanism.

5) Highlights the beauty of KISS-based approaches.

6) Lets you create an acronym that has no formal expansion, which sounds like an oxymoron.

7) Gives the XML-Dev group, and the industry as a whole, lots to discuss/argue about.

8) Opens up the door to silly questions.

--Andrzej Jan Taramina on the xml-dev mailing list, Sunday, 07 Apr 2005 12:30:55

Tuesday, July 19, 2005

If you are going to make the switch to XML, you need to understand that, to be truly effective, you probably will have to read a number of books.

You are learning a new programming paradigm. You will need to combine technologies: parsers, schemas, namespaces, XPath, XQuery, and transformations. In the short time I've been working with XML (3-6 months, off and on) I have had to study all of these things. From my experience, learning "XML" is really learning a family of technologies. Certainly, this complicates the communication; it is not a trivial, pick-it-up-and-go family of technologies. XML, in my opinion, is a *transport* mechanism that, by itself, is of limited use; what makes XML really powerful is all those other manipulators you can bring to bear on the base document - learning those takes time.

--Robert A. Jacobs on the xom-interest mailing list, Friday, 6 May 2005 11:03:39

Monday, July 18, 2005
Another great advantage of an XML-based syntax is that it's so easy to automate the manipulation of resources expressed in that syntax. Outside of XSLT, I can't think of another language in which it's so easy to write programs that read and manipulate other programs written in the same language, and that includes LISP and Scheme (")))))))...."). Many, many production applications out there have automated the reading and writing of XSLT stylesheets as part of their workflow.

--Bob DuCharme on the xml-dev mailing list, Tuesday, 9 Nov 2004
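Because an XSLT stylesheet is itself well-formed XML, any XML API can read or rewrite one. A minimal sketch in Python that lists a stylesheet's template match patterns (the stylesheet content is invented for illustration):

```python
import xml.etree.ElementTree as ET

XSL = '{http://www.w3.org/1999/XSL/Transform}'
sheet = ET.fromstring(
    '<xsl:stylesheet version="1.0"'
    ' xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
    '  <xsl:template match="/"/>'
    '  <xsl:template match="para"/>'
    '</xsl:stylesheet>')

# A stylesheet is just a tree; here we enumerate its template rules.
matches = [t.get('match') for t in sheet.iter(XSL + 'template')]
assert matches == ['/', 'para']
```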

Saturday, July 16, 2005

About 5-6 years ago, there was an interesting comparison of the size of a binary EDI (CEFACT) message versus the same transaction using an XML/EDI document. The size difference was about 1K (EDI) versus 11K (XML/EDI). For a single transaction, the "XML overhead" was about 10K. Bandwidth and disk storage were comparatively more expensive than MIPS, so economics favored using cheap CPU cycles to compress documents.

Disk and bandwidth costs are lower today than in 1998. Compression seems less important, but securing documents has become more important because we're using them as the basis for critical services. We may not need to invest in MIPS for compressing documents, but (in some cases) we need to do so to secure them using digital signatures and encryption. Changes in the regulatory environment (Sarbanes-Oxley, HIPAA, Basel II, SEC/NASD) are trending us towards securing more data and document exchanges.

--Ken North on the xml-dev mailing list, Thu, 19 Aug 2004
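The trade-off North describes is easy to quantify: repetitive markup compresses very well, so cheap CPU cycles buy back most of the "XML overhead". A rough sketch with Python's zlib (the document and the ratio asserted are illustrative):

```python
import zlib

# A highly repetitive XML document, as transactional data tends to be.
doc = b'<orders>' + b'<order><id>1</id><qty>2</qty></order>' * 50 + b'</orders>'
packed = zlib.compress(doc)

# The redundant tags nearly vanish on the wire.
assert len(packed) < len(doc) // 5
```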

Friday, July 15, 2005
What I really wanted from XQuery was its data manipulation language and I'm very disappointed that it doesn't even appear to be on the horizon. Instead I've had to devise my own XML update language as an extension of XUpdate.

--John Watson on the xml-dev mailing list, Thu, 11 Nov 2004

Wednesday, July 13, 2005
As a consumer and something of a student of history, I always question people that are highly motivated to protect their jobs and money. Did big tobacco say their products were safe long after they knew it wasn't true? Might Microsoft be inclined to say that their products provide better total cost of ownership (TCO) and security than another product despite knowing it wasn't true?

--Chris Spencer
Read the rest in Linux Opinion: An Open Letter to a Digital World (LinuxWorld)

Tuesday, July 12, 2005
In most XML applications that actually do anything significant with the parse events, parsing overhead is a tiny fraction of total processing time, say 1% of the total. In other words, making the XML parser twice as fast might reduce processing time by 1/200.

--David Megginson on the xml-dev mailing list, Saturday, 1 Jan 2005

Saturday, July 9, 2005

XLinks go beyond identification of the resource and enable typed links. Given that a link for the web is the URI, the rest of that information is annotative; that is, a link processor of some kind might use it. For example, you often want the links in the bibliography, the links in the table of contents, and the links in the index at the back of a book to be displayed differently, and certainly, to behave differently. A click on a table of contents entry should take you to the location of that resource. A click on the inverted index might display all, none or one of the resources.

So linking as a concept seems simple, but it has overtones of identity, display and control over navigation. Not everyone agrees or can agree on how that fits into a single set of concepts or implementations. Also, most anything one can do with a multiway link, one can do with multiple simple links given some control to display it in. The classic example is a popup menu.

-- Claude L (Len) Bullard on the xml-dev mailing list, Friday, 14 Jan 2005 12:12:34

Friday, July 8, 2005
My definition of a mainstream language is that Computer Weekly has job ads asking for people with three years' experience. XSLT passes that test with no trouble.

--Michael Kay on the xml-dev mailing list, Sunday, 2 Dec 2004

Thursday, July 7, 2005
It strikes me that most of my complaints vis-a-vis XML are not so much with *XML* as with the things built around or on top of it.

--Gavin Thomas Nicol on the xml-dev mailing list, Tuesday, 26 Oct 2004

Wednesday, July 6, 2005

we ought to have learned by now that standards are terrific when applied to proven industry practice but high-risk in the domain of theory and science. SQL and XML were both exercises in writing down something that had already been proven to work.

When committees get together either in an informal cabal or an official standards process, and go about inventing new technologies, the results are usually pretty bad. ODA (Never heard of it? Exactly); OSI Networking; W3C XML Schemas. The list goes on and on.

--Tim Bray
Read the rest in ongoing · Web Services Theory and Practice

Tuesday, July 5, 2005
No one wants a repeat of the mess that is W3C XML Schema.

--Dare Obasanjo on the xml-dev mailing list, Thu, 21 Oct 2004

Sunday, July 3, 2005
Your data is going to change, that's why you're using XML to begin with. Do NOT lead your clients down a path that leads them to overly coupling their systems to your changing data. I.e. do not encourage them to use some kind of XML data binding tool that's strongly coupled to a particular schema. This also means you need to be very selective in what you consider required data. Adding additional required data to an existing operation is going to cause breakage. If data is not absolutely 100% for certain required then just make it optional. Adding additional optional data should never break the system. Another reason XML Schema based systems are so horribly brittle is they tend to break on any schema change, not just required data changes.

--Kimbro Staken
Read the rest in Inspirational Technology: 10 things to change in your thinking when building REST XML Protocols
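Staken's rule about optional data can be made concrete: if new fields are optional and readers supply defaults, old documents keep working. A sketch with Python's ElementTree (the element names are invented):

```python
import xml.etree.ElementTree as ET

old = ET.fromstring('<order><id>1</id></order>')
new = ET.fromstring('<order><id>2</id><giftwrap>true</giftwrap></order>')

# findtext's default makes the new field optional: old documents still work.
assert old.findtext('giftwrap', default='false') == 'false'
assert new.findtext('giftwrap', default='false') == 'true'
```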

Saturday, July 2, 2005
Properly used, a PI does not change the data semantics, but as with everything else, XML Doesn't Care, and PIs give people rope with which they have cheerfully hanged themselves. Of course, they also provide plenty of rope to tie up loose ends, e.g. letting an instance offer hints about what stylesheet to use to display it.

--Michael Champion on the xml-dev mailing list, Sunday, 6 Feb 2005 23:09:58
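The stylesheet hint Champion mentions is the xml-stylesheet processing instruction, which sits outside the document element. Python's minidom, for example, preserves it (the stylesheet file name is illustrative):

```python
from xml.dom import minidom

doc = minidom.parseString(
    '<?xml-stylesheet type="text/xsl" href="style.xsl"?><root/>')

# The PI is a sibling of the root element, not part of the data.
pi = doc.childNodes[0]
assert pi.target == 'xml-stylesheet'
assert pi.data == 'type="text/xsl" href="style.xsl"'
```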

Friday, July 1, 2005
What a lot of people have not figured out yet--though the W3C and IETF are moving in this direction--is that open standards depend on open source reference implementations. It is extremely difficult to specify everything that needs to be specified in a technical standard; in some notorious cases, such as the way Microsoft perverted Kerberos, vendors have used that underspecification as a loophole to create implementations that locked in their customers. Open source reference implementations are the only known way to prevent such abuse.

--Eric S. Raymond
Read the rest in ONLamp.com: ESR: "We Don't Need the GPL Anymore"

Thursday, June 30, 2005
Whenever a vendor tries to do a new OSS license (cf. IBM, Apple, Sun, Mozilla), their corporate lawyers keep trying to retain control of things (as good lawyers should do), and the effect is to introduce a new license with controversy and integration uncertainty. Picking something well known means you say "LGPL" when you are asked what license you use, and nobody needs to discuss details.

--Steve Loughran on the xom-interest mailing list, Wednesday, 9 Feb 2005 12:30:47

Wednesday, June 29, 2005
Putting XML inside other XML is still too hard. Part of that happens because envelope metaphors are taken too literally. Part of it happens because XML grammar efforts, especially container formats, are usually chartered to think up a storm about what might go on inside their format, but are remarkably thought-free about what might go on when their format is inside someone else's. This is why the default namespace is broken as designed - it enables container markup to happily trash what's being contained. I'm amazed it hasn't been deprecated.

--Bill de hóra on the xml-dev mailing list, Tuesday, 08 Feb 2005 01:06:22
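The cut-and-paste hazard de hÓra describes is easy to reproduce: an unprefixed element silently rebinds to whatever default namespace encloses it. A sketch with Python's ElementTree (the URIs are invented):

```python
import xml.etree.ElementTree as ET

standalone = ET.fromstring('<item xmlns="urn:payload"/>')
wrapped = ET.fromstring('<env xmlns="urn:envelope"><item/></env>')

# Pasted without its xmlns, the same <item> now lives in the envelope's namespace.
assert standalone.tag == '{urn:payload}item'
assert wrapped[0].tag == '{urn:envelope}item'
```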

Tuesday, June 28, 2005
I won't argue that XSLT 2.0 doesn't bring useful features and I must admit I use it occasionally, but I consider that the flaws of the PSVI-based architecture promoted by XSLT 2.0 generally outweigh the benefit of these features, and that's what prevents me from using it on a large scale.

--Eric van der Vlist on the xsl-list mailing list, Tuesday, 19 Apr 2005 12:48:41

Monday, June 27, 2005

I remember the networking people complaining just as much a decade and a half ago about the inefficiencies of IP, TCP, etc. I had to fight hard as late as 1993-94 to get a university lab to use TCP/IP because the administration had been convinced by their vendor (Novell) that TCP/IP was too slow for serious work.

I think that we might be at the same place right now with XML. The self-anointed analysts are telling the networking people that they'll have to handle enormous volumes of XML network traffic in a few years, and the networking people are freaking out over hypothetical future problems like verbosity and parsing time. I don't think that we have much of a clue yet whether (a) there actually will be much XML network traffic over the next few years, or (b) what it might look like, so any optimization is waaaay premature.

--David Megginson on the XML Developers mailing list, Monday, 19 Apr 2004

Sunday, June 26, 2005
This isn't really a question of data loss, data protection or data safeguarding. That, my friends, is a red herring. The real question is why corporations need to store all of this personal data in the first place. Why does my credit card company need to store my social security number? Why does Amazon need to store my credit card number? Why shouldn't every company store only what I tell them they can store? And why shouldn't the data that they store be as little as they possibly need to conduct business?

--Eric Norlan
Read the rest in The red herring of data protection | Perspectives | CNET News.com

Saturday, June 25, 2005
While it is possible to serialize a Xerces2 DOM using Java object serialization, we recommend that a DOM be serialized as XML wherever possible instead of using object serialization. The Xerces2 DOM implementation does not guarantee DOM Java object serialization interoperability between different versions of Xerces. In addition, some rough measurements have shown that XML serialization performs better than Java object serialization, and that XML instance documents require less storage space than object-serialized DOMs.

--Elena Litani and Michael Glavassevich, IBM
Read the rest in Improve performance in your XML applications, Part 2

Friday, June 24, 2005
Surprisingly, after all these years, users of standard-compliant browsers are still faced with sites that do not support their browser or with a link suggesting they download Internet Explorer.

--Deri Jones, SciVisum
Read the rest in BBC NEWS | Technology | Websites alienate Firefox users

Thursday, June 23, 2005
The Soviet Union collapsed not because of communism or central planning, but because of corrupt accounting. They couldn't organize the means of production because everybody was lying about everything. It was a game of fake numbers, and when you do that, you get crap for answers.

--Bill Joy
Read the rest in Fortune.com - Technology - Joy After Sun

Monday, June 20, 2005
One interesting thing about client/server XML is that the computing load is asymmetric. It's a heck of a lot easier to generate XML than it is to consume and process it.

--Rich Salz on the xml-dev mailing list, Friday, 12 Nov 2004

Sunday, June 19, 2005
Many developers and publishing systems use query strings to send variables from one document to another within a website, and the ampersand is the most widely used query string separator. The problem is that the ampersand is also used in HTML and XHTML to start a character entity. If a query string contains an unencoded ampersand which is followed by a sequence of characters that resemble a character entity, browsers will convert the ampersand and those characters to the character represented by the entity. That will obviously break the query string, and can make things stop working. Having been bitten by this bug myself, I make sure any ampersands under my control are properly encoded.

--Roger Johansson
Read the rest in Web Standards Group - Ten Questions for Roger Johansson
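
Johansson's bug is easy to reproduce. In strict XML (and XHTML served as XML) the problem is even starker than in forgiving HTML browsers: an unencoded ampersand followed by entity-like characters makes the document not well-formed at all. A quick sketch with Python's standard library (URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

# An unencoded "&" followed by entity-like characters ("copy") is
# not well-formed XML; the parser rejects the document outright.
broken = '<a href="page?id=1&copy=2">link</a>'
try:
    ET.fromstring(broken)
    parsed_ok = True
except ET.ParseError as err:
    parsed_ok = False
    print("parse failed:", err)

# Writing &amp; keeps the document well-formed, and the parsed
# attribute value contains a literal "&" for the query string.
fixed = '<a href="page?id=1&amp;copy=2">link</a>'
elem = ET.fromstring(fixed)
print(elem.get("href"))  # page?id=1&copy=2
```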

Saturday, June 18, 2005
a CDATA section is just a syntactic convenience, <x><![CDATA[aaa]]></x> means exactly the same as <x>aaa</x> so schema languages do not allow any significance to be placed on CDATA sections, just as they do not allow significance to be placed on whether " or ' is used to surround an attribute value: <a b="2"/> means the same as <a b='2'/>.

--David Carlisle on the xml-dev mailing list, Friday, 22 Apr 2005 11:59:07
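
Carlisle's equivalences are easy to verify with any conforming parser; here is a small sketch using Python's standard library:

```python
import xml.etree.ElementTree as ET

# A CDATA section and plain character data produce identical content.
a = ET.fromstring('<x><![CDATA[aaa]]></x>')
b = ET.fromstring('<x>aaa</x>')
print(a.text == b.text)  # True

# Likewise, double and single quotes around an attribute value are
# interchangeable; the parsed value is the same either way.
c = ET.fromstring('<a b="2"/>')
d = ET.fromstring("<a b='2'/>")
print(c.get('b') == d.get('b'))  # True
```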

Friday, June 17, 2005

I personally like XForms. However, I think it is a lost cause on the popular Web for three reasons:

  1. It's not backwards compatible with existing Web content and existing Web browsers.
  2. It uses too many levels of abstraction to be understood by most authors.
  3. It requires the use of many namespace prefixes.

None of these problems really have anything to do with XForms features per se. It is IMHO possible to bring many of XForms features (especially the declarative constraints ideas) to authors without introducing the other problems. For instance, by just extending HTML's form features instead of making a totally new language.

--Ian Hickson
Read the rest in Internet Applications: The "XForms Myth" Myth

Thursday, June 16, 2005
For the data you receive just take what you need and pass through everything else. If there's extra data in the stream it is NOT an error as long as all the other data you need is there as well. Just ignore the extra data. This is one reason why XML Schema based systems are so fragile, they take too literal of a view about the shape of the data.

--Kimbro Staken
Read the rest in Inspirational Technology: 10 things to change in your thinking when building REST XML Protocols

Wednesday, June 15, 2005
The hacks that authors use are not to get around browser bugs -- if only that were the case. The reason authors are forced to carry out hacks is because they are having to do things that HTML and JavaScript were never designed to do. Every day we see sites having to validate data and create clever pop-up solutions for help text and calendar widgets. But whilst the HTML programmer is messing around with mouseover events and scripting, the XForms author writes the following (declarative) mark-up:


<input ref="sn">
<label>Surname:</label>
<hint>Please enter your surname</hint>
</input>

Note that whilst the JavaScript hack has to rely on platform-specific features such as the mouse, our XForms mark-up does not rely on anything like that, which means that the same form functions just as easily on a voice system.

--Mark Birbeck
Read the rest in Internet Applications: The "XForms Myth" Myth

Tuesday, June 14, 2005

CDATA is not considered to be information-bearing in the XPath data model. In other words,

<![CDATA[xyz]]>

is exactly the same content as

xyz

just as a="3" and a='3' are considered simply as two different ways of writing the same thing.

--Michael Kay on the xsl-list mailing list, Friday, 1 Apr 2005 10:28:28

Monday, June 13, 2005
early in the 20th century, the law became latched to a device that would produce unimagined changes in copyright's reach. For in 1909, through a mistake in codification (literally: it was an error in the wording used in the statute), the exclusive right that copyright protected was defined to be not only the right to "publish" or "republish" but the right to "copy." That change didn't matter much in 1909: the machines for making copies were still printing presses, and no one believed a schoolchild writing out a poem 50 times so as to memorize it was committing a federal offense. But as the machines that copied became more and more common, the reach of copyright law became more and more extensive. At first it was commercial machines that bore the burden of the law: player pianos, radio, cable TV. But in the 1970s, and for the first time, a printing press to which the common folk had access--the "copier"--became the target of extensive litigation.

--Lawrence Lessig
Read the rest in The People Own Ideas!

Friday, June 10, 2005
There was a lot more pain involved in implementing even simple SOAP services, and debugging them once they were up and running. We're now using RESTful APIs and these have proved to be successful. They also feel easier to work with: I've been merrily applying XSLT to RESTful interfaces to generate HTML forms for applications, shell scripts using curl to implement management and testing tools. Feels a lot more agile.

--Leigh Dodds on the xml-dev mailing list, Wednesday, 30 Mar 2005 20:33:01

Thursday, June 9, 2005

Shredding is very useful for storing certain classes of data-centric documents, especially when non-XML documents need to access that data. In fact, I'd bet that shredding accounts for the vast majority of XML use with relational databases.

However, shredding shouldn't be viewed as a substitute for native XML data storage as it doesn't preserve sibling order, comments, processing instructions, etc. and usually doesn't support mixed content. It also requires design-time mapping of the XML schema to the relational schema -- something that won't work with schemaless documents. So it shouldn't be viewed as a substitute for native XML storage but as a complementary technology.

--Ronald Bourret on the xml-dev mailing list, Friday, 15 Apr 2005 09:05:28
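
A minimal sketch of the shredding Bourret describes, using Python's standard library (the table and element names here are hypothetical): each <order> element maps to one relational row, and sibling order, comments, and processing instructions are simply dropped along the way.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = """<orders>
  <order id="1"><customer>Ada</customer><total>9.50</total></order>
  <order id="2"><customer>Grace</customer><total>12.00</total></order>
</orders>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")

# Shred: each <order> element becomes one row. Anything the relational
# schema has no column for (comments, PIs, document order) is lost,
# which is exactly why shredding complements rather than replaces
# native XML storage.
for order in ET.fromstring(doc).iter("order"):
    conn.execute(
        "INSERT INTO orders VALUES (?, ?, ?)",
        (int(order.get("id")),
         order.findtext("customer"),
         float(order.findtext("total"))),
    )

rows = conn.execute("SELECT customer, total FROM orders ORDER BY id").fetchall()
print(rows)  # [('Ada', 9.5), ('Grace', 12.0)]
```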

Wednesday, June 8, 2005
I have no real interest in XQuery. It's a non-queriable, non-transformable subset of XSLT 2.0. I should care why? No, don't answer that. There are lots and lots of good reasons for XQuery and I'm not moved or excited by any of them.

--Norman Walsh on the xml-dev mailing list, Tuesday, 23 Nov 2004

Tuesday, June 7, 2005
For my money, XQuery is a heroic effort by a bunch of incredibly smart people which is crippled - we don't know how seriously - by its insistence on cohabiting with XSD.

--Tim Bray on the xml-dev mailing list, Friday, 03 Dec 2004

Monday, June 6, 2005

Clearly if flexibility is paramount, order and hierarchy are a pain, and there's not much gain *if* you have unique identities for everything and the identity is all you need to know to figure out what to do with the information.

On the other hand, if *context* is important, i.e. the interpretation / semantics / meaning / processing paradigm of some bit of information depends more on where it stands in relation to other information, then order and hierarchy are critical. That's where the XML "data model" (by which I mean "fairly deeply hierarchical labelled trees in which order is preserved", not the InfoSet per se) comes into its own.

--Michael Champion on the xml-dev mailing list, Monday, 17 May 2004

Sunday, June 5, 2005
Being able to carry content in any language and being able to use anything in element names are two totally different things. The first one is crucial. The latter is not. In fact, the world keeps turning with XHTML, DocBook, SVG, OOo XML, Atom etc. using English-based ASCII element names. The point is that the content can be in any language. I think i18n political correctness goes overboard when interoperability is sacrificed in order to change the characters allowed in programmer-visible identifiers.

--Henri Sivonen on the xml-dev mailing list, Saturday, 14 May 2005 00:26:11

Friday, June 3, 2005
The piano isn't the music.

--Alan Kay quoted on the WWWAC mailing list, Sunday, 17 Mar 2002

Wednesday, June 1, 2005
I use XInclude *heavily* as a means to write articles about XML. I usually write listings as external files, for easy testing, and then I use <xi:include parse="text"... to bring it seamlessly into the article for sending to the editor. All escaping and encoding issues are pretty much taken care of. Very handy.

--Uche Ogbuji on the xml-dev mailing list, Friday, 29 Apr 2005 13:40:24
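
The same workflow can be sketched with Python's standard library, which ships a small XInclude processor. The file name and listing content below are hypothetical; the custom loader stands in for the filesystem so the sketch is self-contained. Because parse="text" pulls the listing in as character data, the <, >, and & in the code need no manual escaping:

```python
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

article = ET.fromstring(
    '<article xmlns:xi="http://www.w3.org/2001/XInclude">'
    '<pre><xi:include href="listing.py" parse="text"/></pre>'
    '</article>'
)

# Hypothetical loader standing in for reading listing.py from disk.
def loader(href, parse, encoding=None):
    listings = {"listing.py": 'if x < 1 and y > 2: print("a & b")'}
    return listings[href]

# Resolve the xi:include; the listing becomes the text of <pre>,
# and the serializer handles all the escaping.
ElementInclude.include(article, loader=loader)
print(ET.tostring(article, encoding="unicode"))
```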

Tuesday, May 31, 2005

Leading vendors, including BEA, IBM, Microsoft, Oracle and Sun, are cooperating in groups like the W3C and the Web Services Interoperability Organization (WS-I) to make sure that there will be effective interoperability among their different protocol stack implementations. These platform vendors will need to understand and implement all of these standards as part of creating their interoperable protocol stacks, but typical developers should never need to see them.

My strong advice to Java Web services developers is to program to the high-level JAX-RPC and JAXB models, and wherever possible leave the XML plumbing and protocol details to the platform libraries. This will allow us to implement the interoperability details for you, including adding future support for new protocols without disrupting your applications.

--Graham Hamilton
Read the rest in SD Times

Sunday, May 29, 2005
XML is text, often UTF-8. As an industry we went and cooked up APIs that pass around all the strings as UTF-16, which to be fair is common on many platforms. Not surprisingly, there are conversion overheads, and I agree they are very significant.

--Noah Mendelsohn on the www-tag mailing list, Sunday, 17 Mar 2005 14:46:45

Friday, May 27, 2005
a conforming XML parser might decide only to report half the elements, and to rename them all as foobar. The reporting requirements in the XML spec are notoriously incomplete. Fortunately most writers of XML parsers want their products to be not only conformant, but useful.

--Michael Kay on the xml-dev mailing list, Friday, 6 May 2005 11:31:43

Thursday, May 26, 2005
the Web is different from 'the enterprise'. XML technologies over the past few years have been hijacked by enterprise concerns. It is telling that XQuery is now primarily being driven by relational database vendors and WS-* is basically taking on the use cases of DCOM/CORBA/etc for the Enterprise. These may all be the right solutions on the intranet or within the firewall (maybe) but they are too complex for the worse-is-better world that is the Web.

--Dare Obasanjo on the xml-dev mailing list, Saturday, 23 Apr 2005 12:50:01

Wednesday, May 25, 2005

CDATA sections are un-escaped character data, not elements.

They are transparent to XSLT applications, so

<foo><![CDATA[.....snip.....]]></foo>

and

<foo>.....snip.....</foo>

are logically the same.

--Pete Kirkham on the xsl-list mailing list, Friday, 01 Apr 2005 10:11:20

Tuesday, May 24, 2005

Web Forms is backward-looking and more or less automatically compatible with the current generation of browsers. XForms is forward-looking and is more concerned with being an open and compatible player in the XML based Web services arena than in being compatible with earlier technologies. In order to have XForms capability in current browsers, you have to download a plug-in, just like Macromedia Flash, Real Player and Apple QuickTime, to name just three.

I see no reason at all to consider XForms a dead end just because it is not supported natively in current browsers. If this were true, Macromedia Flash, Real Player and Apple QuickTime would also be limited this way -- and I have never heard users of any of these technologies complain because they had to download a plug-in.

--Eric S. Fisher on the www-forms mailing list, Wednesday, 16 Mar 2005 09:50:10

Monday, May 23, 2005

One approach to the upper ontology, or any ontology really, is to accept that it is, like law, an artifice. It works as well as it works when it works and that is as well as it will work. Like your car, it gets a job done and when it doesn't, you or someone else can fix it.

The question of the semantic web is the golem problem: how much power and authority will you give the artifice over your choices? Otherwise, don't mistake a tool for the truth of the results of using the tool. A computer doesn't know how to add 2 + 2. It can be used to simulate that operation and give a repeatable result. If 2 + 2 = 4 for an acceptable number of uses, it is a useful tool. If you hit the one context in which that isn't true, it fails. So understand in advance what you are committing to and what the bet is.

-- Claude L (Len) Bullard on the xml-dev mailing list, Tuesday, 9 Nov 2004

Sunday, May 22, 2005
XML Schemas are to act as documentation and spot validation; do not use them as a straitjacket around the data. W3C XML Schema driven systems are horribly brittle in operation because they don't bend. If the data doesn't exactly match what is expected they blow up, even on something as simple and non-critical as an extra attribute from a different schema being added to an element. Yes, validate the data when you need to, but do not overly constrain it.

--Kimbro Staken
Read the rest in Inspirational Technology: 10 things to change in your thinking when building REST XML Protocols

Saturday, May 21, 2005
If your users are writing Greasemonkey scripts, it shows that your users care. They are sending a message that your user interface isn't up to scratch.

--Simon Willison
Read the rest in Wired News: Firefox Users Monkey With the Web

Friday, May 20, 2005
XML dialects of course have semantics 'in there' that people can see. But to see that semantics clearly you typically want to reformat the XML and display it in ways easy for people to understand, or work with. That's why good XML editors don't show tags -- they use a layout that shows the semantics more cleanly, that's easier for people to read and understand.

--Ian Graham on the 'XML Developers List' mailing list, Sunday, Feb 2005 17:09:48

Thursday, May 19, 2005
In general, when a new technology shows up, it creates new opportunities. New opportunities for some often translate to a threat to the established few; it is traditional for the established few to cry foul:-) The Web has always been a disruptive technology. It enabled a young upstart browser first from a university, later from a Valley startup to challenge the mighty. The rest of course is history.

--T. V. Raman on the www-forms mailing list, Sunday, 17 Mar 2005 17:16:09

Wednesday, May 18, 2005
Much of XML is disappearing into infrastructure, so that you're more likely to work with it indirectly instead of directly unless you are a real XML-zealot. Beyond the aforementioned web services, I find that XML is making its way into application frameworks, publishing has largely completed the first wave of migration to XML and is undergoing a second, XML is found handling graphics in Linux (KDE 3.4 will be largely SVG based), being used for UML to multi-language generation, is quickly moving into the database sphere, and is showing up on nuclear submarines (I caught a fascinating presentation about the use of XML in managing and maintaining missile launch sequences a few months back).

--Kurt Cagle, Thu, 27 Jan 2005 22:56:18 -0800, on the xml-dev mailing list

Tuesday, May 17, 2005
Don't use a | instead of : for the drive. This was never part of a formal spec. All reasonably recent URL handlers will understand file:///c:/temp/yyy.xsd Some will *NOT* understand file:///C|/temp/xxx.xsd.

--J.Pietschmann on the xml-dev mailing list, Wednesday, 11 May 2005 22:35:34
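
Library code follows the same convention: Python's pathlib, for one, emits the colon form when building file: URIs from Windows paths (the path below is hypothetical):

```python
from pathlib import PureWindowsPath

# pathlib keeps the drive letter's colon rather than substituting "|".
uri = PureWindowsPath(r"C:\temp\yyy.xsd").as_uri()
print(uri)  # file:///C:/temp/yyy.xsd
```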

Monday, May 16, 2005
XML syntax is simple: Essentially, you must balance the opening and closing tags. Yet I wish I had a proverbial nickel for every e-mail I've received saying, "I'm trying to process the attached XML document through such and such tool and it fails -- could you recommend a better tool?" Invariably, I open the document and find an obvious syntax error such as an empty tag without the closing slash (it should look like this: <empty/>). If the document does not completely adhere to XML syntax, then it's not an XML document; if it's not an XML document, XML tools cannot process it. XML has a very precise and formal syntax. Either a document adheres to the syntax fully, or it is not recognized. Simple as that.

--Benoit Marchal
Read the rest in Working XML: Safe coding practices, Part 1

Sunday, May 15, 2005
My mother tongue is not ASCII-safe. It also isn't invariant under canonical decomposition. When I design an XML vocabulary, I use English-based ASCII element and attribute names. I don't want to ever spend a single minute debugging an app because someone was being politically correct and used umlauts in element names and then the app expected the decomposed form but the document had them in the precomposed form (or vice versa).

--Henri Sivonen on the xml-dev mailing list, Saturday, 14 May 2005 00:26:11

Friday, May 13, 2005
the cost in changing an XML parser is not in changing the code, it's distributing it. The changes between xml 1.0 and xml 1.1 are more or less restricted to changing the tables of allowed characters and white space normalisation which is a minor change, but the exorbitant cost of getting updates distributed has more or less killed xml 1.1.

--David Carlisle on the xsl-list mailing list, Friday, 11 Feb 2005 16:17:29

Thursday, May 12, 2005
Extra space in Firefox compared to *what*? IE? If so, then it's probably IE that's wrong. Gecko is the best rendering engine that exists for standards compliance, and that's the one that I think everyone should start their testing on.

--David W. Fenton on the WWWAC mailing list, Sunday, 24 Apr 2005 17:08:34

Tuesday, May 10, 2005
maybe we should just scrap the whole thing, and let developers use URIs to represent countries, languages, and combinations thereof. Country and language codes are up there with MIME types in the list of "someone thought this needed a registry", when registries of all sorts are of course fragile and evil - as this very moment demonstrates.

--Simon St.Laurent on the xml-dev mailing list, Sunday, 21 Sep 2003

Saturday, May 7, 2005
My friend Nelson told me he planned to look at the site in Lynx as a way of testing it, and what do you know, it broke in Lynx. Does that matter for anyone other than Nelson? Well, Lynx is a good lowest common denominator, and if it works in Lynx, it probably works on cell phones, airport web terminals, WebTV sets, old versions of Netscape, and whatever other freaky browser a user might choose to use. You know what? All of the browsers I just mentioned already show up in CodeZoo's access logs. So yes, Lynx is a great test. Use it.

--Marc Hedlund
Read the rest in Some notes on the building of CodeZoo

Friday, May 6, 2005
My own personal regret is that I have spent so much of my energy since 1997 implementing W3C specs and such with excessive literal-mindedness, including DOM, XPath, XSLT, etc. Developments such as SOAP, XQuery and DOM L3 finally made it clear to me that the W3C* and many other such organizations seem bent on maximum complexity in XML. Many of my fellow Pythoneers tried to convince me earlier to plump for simplicity and Python idiom. Amara XML Toolkit represents the fact that I'm just now coming around to properly appreciating their point of view.

--Uche Ogbuji on the xml-dev mailing list, Sunday, 30 Dec 2004

Thursday, May 5, 2005

XML and SGML are not comparable to XHTML and HTML.

XML and SGML are "metalanguages", that is, sets of rules that let you define your own vocabularies (SGML is the ancestor; XML is a simplification of it).

HTML and XHTML are vocabularies defined by the W3C, using SGML (for HTML) and XML (for XHTML), to describe web pages.

--Eric van der Vlist on the xml-decid mailing list, Tuesday, 26 Apr 2005 12:48:05

Wednesday, May 4, 2005
Ask Jeeves has a search engine that nobody really wants to go to. To get users to come, they push these toolbars. But if the toolbars are installed without proper notice and consent, then the entire business collapses. They have no legitimate business source of any substantial traffic to their Web site.

--Ben Edelman
Read the rest in Spying on the spyware makers

Tuesday, May 3, 2005

even though most Web services today continue to be delivered by classic Web servers (Apache and Microsoft IIS, for example), database systems are starting to listen to port 80 and directly offer SOAP invocation interfaces. In this brave new world, you can take a class—or a stored procedure that has been implemented within the database system—and publish it on the Internet as a Web service (with the WSDL interface definition, DISCO discovery, UDDI registration, and SOAP call stubs all being generated automatically). So, the “TPlite” client/server model is making a comeback, if you want it.

Application developers still have the three-tier and n-tier design options available to them, but now two-tier is an option again. For many applications, the simplicity of the client/server approach is understandably attractive. Still, security concerns—bearing in mind that databases offer vast attack surfaces—will likely lead many designers to opt for three-tier server architectures that allow only Web servers in the demilitarized zone and database systems to be safely tucked away behind these Web servers on private networks.

Still, the mind spins with all the possibilities our suddenly broadened horizons seem to offer. For one thing, is it now possible—or even likely—that Web services will end up being the means by which we federate heterogeneous database systems? This is by no means certain, but it is intriguing enough to have spawned a considerable amount of research activity. Among the fundamental questions in play are: What is the right object model for a database? What is the right way to represent information on the wire? How do schemas work on the Internet? Accordingly, how might schemas be expected to evolve? How best to find data and/or databases over the Internet?

--Jim Gray and Mark Compton
Read the rest in ACM Queue - A Call to Arms

Sunday, May 1, 2005

Dashboard goes a few steps further. Dashboard widgets can be augmented with compiled Objective-C code. This allows them to interact with parts of the system that are not accessible from within a web browser (e.g., Mac OS X's built-in address book database). Apple has also added a few new features to Web Kit in support of Dashboard widgets. There are two new controls (a slider and a rounded search field) plus a JavaScript interface to a subset of the Core Graphics API.

The new Web Kit features are also present in Safari, of course, and this has caused much controversy in the web developer community. I side with the other web developers here. I wish the new, proprietary Web Kit features were confined to Dashboard where they are appropriate and useful. I see little value, and much danger, in allowing them to work in Safari by default.

I pessimistically predict a proliferation of slider controls and rounded search fields on Mac-centric websites in the wake of Tiger's release. Please, Mac web masters, exercise restraint. Don't make this the MARQUEE tag all over again.

--John Siracusa
Read the rest in Mac OS X 10.4 Tiger : Page 17

Saturday, April 30, 2005

XML is a metalanguage, in other words a grammar!

By analogy with human languages, it does not tell you which words (tags) to use, but it does require you to arrange them into sentences (the structure of the document).

XHTML is an instance of XML, in other words a particular language created by following XML's syntax and structure rules, just as French is an instance of French grammar!

--Didier Courtaud on the xml-decid mailing list, Tuesday, 26 Apr 2005 12:26:57

Friday, April 29, 2005

RELAX NG only imposes schema-instance constraints; that is, the rules specified in the schema dictate what the instance can and can't contain.

XML Schema also has rules of this kind, but additionally it has schema-schema constraints, where parts of the schema dictate what other parts of the schema can and can't be. For example, if the schema says that the 'foo' type is an extension-by-restriction of the 'bar' type, then the child elements and attributes allowed for elements of type 'foo' are a subset of those allowed for type 'bar'.

XML Schema also has instance-instance constraints, where parts of the instance are dictated by the content of other parts. DTDs and RELAX NG have these only in the form of ID, IDREF, and IDREFS attributes.

--John Cowan on the xml-dev mailing list, Sunday, 25 Nov 2004

Thursday, April 28, 2005

Much of the opposition to XLink has come from its top-down imposition of a single way of looking at links. The many battles over show and actuate and whether they should even be in the spec, the fights over links as sets vs. links as traversal paths, and concerns about integrating this with existing vocabulary and practice all look to me like well-earned friction. XLink's thorough failure to make friends, even within the W3C reinforces my suspicion that its approach was simply bone-headed to start with.

"There's more than one way to do it" seems especially important in linking, where even just the hypertext field of linking is marked by incredible diversity and a wide range of conflicting opinions. I'm not sure what gave the XLink working group the moxie to think that they could develop the one true way for linking XML documents or the W3C TAG the gall to expect the HTML WG to conform to that way, but I regard it as perhaps the saddest chapter of XML history.

--Simon St.Laurent on the xml-dev mailing list, Friday, 23 Jan 2004

Wednesday, April 27, 2005

Because all XML is is a labelled hierarchy of information and nothing more, using namespaces provides a rich method of writing the labels. This method distinguishes labels as being from different "owners" and, for each "owner", distinguishes labels from each other in a collection of labels.

I put the word owner in quotes because depending on the URI method you use for the namespace URI it may actually be owned through public registration (such as domain names in a URI) or it may not actually be owned but just private use (such as private-use conventions allow in a URN).

--G. Ken Holman on the xml-dev mailing list, Tuesday, 19 Apr 2005 11:02:37
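
Holman's "owner" distinction is visible in any namespace-aware API. In Python's standard library, for instance, the expanded name carries the namespace URI right alongside the local label (the URIs below are hypothetical):

```python
import xml.etree.ElementTree as ET

# Two vocabularies can each define a "title" label; the namespace URI
# keeps the two owners' labels distinct within one document.
doc = ET.fromstring(
    '<item xmlns:a="http://example.com/books" '
    '      xmlns:b="http://example.com/people">'
    '<a:title>XML in a Nutshell</a:title>'
    '<b:title>Dr.</b:title>'
    '</item>'
)

# ElementTree exposes the owner as part of each expanded name.
tags = [child.tag for child in doc]
for tag in tags:
    print(tag)
# {http://example.com/books}title
# {http://example.com/people}title
```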

Tuesday, April 26, 2005

CDATA means Character data and specifically it means "THIS IS NOT XML MARKUP"

If you have

<x><p>...<br/>...</p></x>

in a source and you want that HTML copied to the result you just want

<xsl:copy-of select="x/p"/>

and the p element and all its descendants will appear in the result.

If on the other hand you have

<x><![CDATA[<p>...<br/>...</p>]]></x>

Then you have gone to the trouble to carefully flag that the stuff inside the x element that looks like XML is just character data and not XML markup at all. If you then change your mind and want it to appear in the result tree as XML (which is usually what people want) then life is harder, there are various possibilities (all FAQ's) but the usual advice is "don't start from here"

--David Carlisle on the xsl-list mailing list, Friday, 1 Apr 2005 11:45:57
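
The distinction shows up immediately in any parser. A sketch with Python's standard library: real child markup is navigable structure, while the CDATA-wrapped version is just one string with nothing inside it to select or copy.

```python
import xml.etree.ElementTree as ET

# Real markup: <p> is an element node you can select and copy.
real = ET.fromstring('<x><p>hello<br/>world</p></x>')
print(real.find('p') is not None)   # True

# CDATA-wrapped "markup" is character data: there is no p element.
flagged = ET.fromstring('<x><![CDATA[<p>hello<br/>world</p>]]></x>')
print(flagged.find('p'))            # None
print(flagged.text)                 # <p>hello<br/>world</p>
```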

Monday, April 25, 2005
XML is a technology similar in level and scope to the RDBMS, destined to have a similarly huge effect on the way computers and networks are applied. That there appears to be no coherent purpose, nor order in the way we collectively figure out what we can do with it should be no surprise.

--Allen Razdow on the xml-dev mailing list, Saturday, 23 Apr 2005 10:01:54

Sunday, April 24, 2005
XQuery makes programming with XML easier because it directly supports the things that matter most for XML - construction, path expressions, and restructuring with FLWOR. This is true of XSLT too, but (1) XSLT has not been successfully optimized for large data stores, (2) some programmers don't wrap their brains around XSLT, either because of the syntax or the heavily recursive programming idioms.

--Jonathan Robie on the xml-dev mailing list, Sunday, 02 Dec 2004

Saturday, April 23, 2005
I tried to wrestle my way through the arcana of WSDL, but I'm giving up. I don't doubt that it can be done, but I don't care enough to work my way through it.

--Norm Walsh
Read the rest in WITW: WSDL: 1, Norm: 0

Friday, April 22, 2005
There are a lot of people, most especially Microsoft, that would like to see the XForms effort fail. Truly open standards are fundamentally incompatible with lock-in strategies of any sort. XForms opens the door to a number of truly astounding applications not invented yet, and its openness provides the user and developer communities with options for innovation and competition that would be unavailable otherwise.

--Eric S. Fisher on the www-forms mailing list, Wednesday, 16 Mar 2005 09:50:10

Thursday, April 21, 2005

Namespace names are URIs, and they were chosen this way back in 1999 largely (in the XML community) because of their useful syntactic uniqueness properties and (in the nascent RDF community) because of the emerging grander ambitions for URIs.

For some years, I steadfastly argued that these URIs were just names and don't you worry your pretty little head about what they point at. This position turned out to be untenable; the user population really wanted to dereference these and get something back.

--Tim Bray on the www-tag mailing list

Wednesday, April 20, 2005
data typing in XML is nothing more than a specialized interpretation of text. That is the crux of the matter: XML is text, and only text. Other layers such as data typing are mere interpretations of that text (and should be optional interpretations). The moment you lose sight of that, you're in for all sorts of unforeseen complications.

--Uche Ogbuji
Read the rest in Thinking XML: State of the art in XML modeling

Tuesday, April 19, 2005
In essence, the WHAT-WG approach is a band-aid solution created by people who are not very familiar with the needs of forms applications.

--John Boyer on the www-forms mailing list, Wednesday, 16 Mar 2005 10:26:15

Monday, April 18, 2005
I've yet to understand why this is so hard for people to grasp: Apple does what's in *Apple's* interests. Period. Yet time and again, no matter how many times Apple does what's in Apple's interests, people keep expecting Apple to put *other* people's interests first, and then get "outraged" when Apple does what benefits Apple. How many times does Apple have to act like a greedy, money-grubbing corporation before people figure out that that's what Apple *is*?

--Glen Fisher on the java-dev mailing list, Monday, 07 Mar 2005 15:38:20

Sunday, April 17, 2005
scripting played a key role on the Web during the late 90's in helping Web authors discover the next set of things beyond HTML4 that they would like. Since the script interpreter was built into the browser, advanced Web programmers could experiment to their hearts' content, and what's more, even deploy the result of their experiments to a wide audience to do real-life usability testing. I believe this to be a first in the somewhat fledgling field of user interface engineering. Now that we have done this level of experimentation, it is time for the next turn of the crank in "democratizing the web", namely, opening it up from the software programmers to the document authors. What we have done in the XForms design starting in 2000 was to carefully enumerate the most common Web programming tasks for which authors have to resort to scripting, and build the next-generation design to obviate those explicit programming tasks.

--T. V. Raman on the www-forms mailing list, Friday, 11 Mar 2005 14:59:26

Saturday, April 16, 2005

Ever since Amazon slipped its unfortunate “one click” patent by the patent office, the rest of the ecommerce industry seems to have become resigned to making their customers’ lives a living hell.

The lack of a single, supported standard for check-out is costing the ecommerce community billions of dollars in lost sales.

You may be feeling pretty good, knowing your log files show that only around 25% of your customers are bailing on your check-out pages. Don’t. I spent 15 years in bricks-and-mortar retail, and the instances of customers bailing after the deal is made are vanishingly small (except in the case of high-pressure sales, like automobiles, etc.).

Not only are you leaving a lot of money on the table today, but the customer you harass today is far less likely to return tomorrow.

--Bruce Tognazzini
Read the rest in The AskTog Bug House: Three New Persistent Design Bugs

Friday, April 15, 2005
The problem with file-based XML systems is that they are awful to manage. Databases are so much cleaner to maintain. If the documents are filed nicely inside memo/xml fields, well, I think, how easy is that?

--David Lyon on the xml-dev mailing list, Friday, 15 Apr 2005 08:44:36

Thursday, April 14, 2005
A lot of studies have been made on the subject and it's a well-known fact that past a certain width, reading text in a wide window becomes slower. The sweet spot is probably somewhere between eighty and a hundred columns, and the three-pane display is a pretty good illustration of that.

--Cedric Beust
Read the rest in Outlook 2003

Wednesday, April 13, 2005
the infoset would be more interesting to more people if it weren't so clearly a compromise between DOM, XPath, and SAX, and if it weren't possible to make a refrigerator infoset-compliant.

--Amelia A Lewis on the xml-dev mailing list, Sunday, 10 Apr 2005 19:36:12

Tuesday, April 12, 2005

I never cease to be surprised at the miserable usability of university websites. The Web was invented to disseminate academic papers, but it's almost impossible to find research results on academic websites.

In this case, I didn't know the paper's title, as it wasn't reported in the newspaper. I did have the lead author's name, so I searched for it and was promptly led to a faculty homepage at the University of Pennsylvania. Unfortunately, this page was useless, as are most faculty member homepages. The most recent entry on the "selected publications" list was from 2002. The professor's main research interest was presented in colored text, offering a strong perceived affordance of clickability. Nevertheless, it offered no link. The biography page offered no further information about the professor's research either. It did link to his full curriculum vitae (in PDF, oh woe), but it hadn't been updated since March 2003 and also had no links.

Looking for the author failed to produce any information about the research. What about the academic institution responsible for the project? The newspaper handily provided the department's full name, making for an easy search. The top search result was the correct one, but the page title -- CCEB -- had almost no information scent. Further probing revealed that CCEB stands for "Center for Clinical Epidemiology and Biostatistics." With an entire line available to spell out their names, you'd think organizations would want to help poor outside users by doing so.

But this was far from the worst problem. Sadly, the university has almost no idea of how to use the Web for PR. On a day when the "CCEB" was featured on the front page of The New York Times' business section, the department's latest News page entry was ten months old.

--Jakob Nielsen
Read the rest in Medical Usability: How to Kill Patients Through Bad Design (Jakob Nielsen's Alertbox)

Monday, April 11, 2005
Plain Old Text still reigns supreme, despite the many predictions of its demise.

--Bob Foster on the xml-dev mailing list, Sunday, 10 Apr 2005 15:11:13

Saturday, April 9, 2005

There is a general process problem here that bedevils most XML standards efforts. You get a group of volunteers together, and then put together what you can with limited resources. The problem is that so much of what you do is theoretical, because no-one is testing along the way.

Where I think Java got it so *right* was that conformance was always part of the standard, so there was always a reference implementation and a set of conformance tests. I think that is where all XML standards have to get to, if they are going to avoid this type of problem as a general consequence.

--Anthony B. Coates
Read the rest in XInclude, xml:base, and validation

Friday, April 8, 2005
Is RNG that much better than XSD? I'm still trying to ascertain that myself, though early indications are that it is a better schema language. RNG was designed from the bottom up to be a compelling language for defining schemas; XSD was designed from the top down as a wish list of features for vendors to support. Having written a couple of books on XSD over the years, I think I can speak with some authority on this -- it has some REAL problems.

--Kurt Cagle
Read the rest in Metaphorical Web: Conferences and Google

Thursday, April 7, 2005
XQuery 1.0 is XSLT 2.0 minus template rules, grouping, analyze-string, format-number, format-date, keys, generate-id, unparsed-text and a few other things besides. True, XQuery has a more composable syntax and thus avoids some of the duplication that XSLT has between the XSLT and XPath levels of the language, and it's less verbose. But in terms of functionality, there is only one small feature in XQuery that's not available in XSLT, namely the ability in ORDER BY to sort on a criterion that isn't a function of the items being sorted, and I don't think many XSLT users have asked for that.

--Michael Kay, on the xml-dev mailing list, Tuesday, 9 Nov 2004

Wednesday, April 6, 2005
every time when I was pointing out to my ex-fellows BEA engineers that the messy Java code that they were writing could have been written SO much simpler in XQuery, I consistently received the same answer: "YES, maybe that's true, but I KNOW Java, and I do NOT know XQuery. And BTW, there are no good debuggers for XQuery."

--Daniela Florescu on the xml-dev mailing list, Tuesday, 30 Nov 2004

Tuesday, April 5, 2005
Is 2005 the year of XQuery? Well if you believe the vendors pushing XQuery products, it very well may be. Of course, there is still that little problem of XQuery not even being out of Working Draft status. My prediction for 2005 is that come 2006, XQuery still won't be a recommendation and the vendors will proclaim 2006 as the year of XQuery.

--Kimbro Staken
Read the rest in Inspirational Technology: DataDirect Technologies Announces Top 10 XQuery and XML Predictions for 2005

Monday, April 4, 2005
Web services may collapse under their own weight. No one at the conference said this. Those are my words. I'm beginning to feel that all the disparate web service specs and fragmented standards activities are way out of control. Want proof? Ask one of your IT folks to define web services. Ask two others. They won't match. We asked folks around the room - it was pretty grim. It's either got to be simplified, or radically rethought. As you know, I also believe simplicity and volume always win - and that today's web services initiatives are in danger of vastly overcomplicating a very simple (really simple) solution.

--Jonathan Schwartz
Read the rest in Jonathan Schwartz's Weblog

Sunday, April 3, 2005
The only compelling reason for a binary standard would be if the cost/benefit ratio is appropriate to begin with. Which boils down to "how much faster? And perhaps, how much smaller?" vs. "how much additional complexity and pain does it cause".

--Wolfgang Hoschek on the xml-dev mailing list, Wednesday, 9 Feb 2005 15:21:00

Saturday, April 2, 2005
I tend to view XQuery as the beginning of a possible shift beyond SQL since it is designed in such a way that people can use it to query any data — of course it works great for XML, but it has application for relational data, legacy data sources, or anything else. XQuery is particularly beneficial in the data aggregation area, for example, in creating views of distributed data.

--Mike Olson
Read the rest in An Interview with Mike Olson on XQuery and Database Technologies

Friday, April 1, 2005
Instead of using domain-specific technologies optimized for that target domain, various parties want to usurp interoperable text-based XML with some binary derivation of it. Sometimes I look at efforts like SOAP and wonder if we wouldn't have all been better off if they had picked ASN.1 instead. Trying to satisfy the needs of a person editing business documents with WordML and a developer building distributed applications using similar toolkits has proven to be a problem no one has quite figured out how to truly solve. I'd rather see the XML world solve the API and toolkit problem first before further fragmenting itself into text-based XML and binary XML.

--Dare Obasanjo on the xml-dev mailing list, Tuesday, 8 Feb 2005 09:52:04

Thursday, March 31, 2005
XML on day one had zero presentation. The separation of content and presentation is kind of an elusive goal that, in almost no application, is ever really fully achievable. But to the extent you can achieve it, you usually win. XML at least doesn’t get in your way as you try to achieve that.

--Tim Bray
Read the rest in ACM Queue - A Conversation with Tim Bray - Searching for ways to tame the world’s vast stores of information.

Wednesday, March 30, 2005

beyond ontologies and ontology designers, XML is also enabling a new category of developers/architects called "language designers" who design XML-based, XML Schema-defined domain-specific languages that can be used by users who want to program in the large/higher level, while others implement those languages by programming in the small/lower level.

This is all the more possible NOW, since:

- the value of a language is less and less related to how rich its API is (Java or C# is cool b/c the APIs of J2EE or .NET are full of things, but with WS and SOA, I can virtually call any "library" from any language supporting WS), and more and more about the kinds of abstractions, expressiveness and overall "adhocness" to a problem it offers;
- particularly in Service-Oriented software, the implementation language does not matter as much, as long as it matches the service contract.

--Guillaume Lebleu on the xml-dev mailing list, Wednesday, 01 Dec 2004

Tuesday, March 29, 2005
The API problem is certainly far more interesting, but I'm not convinced the XML community as a whole can get together to fix it. I think the whole problem with DOM and SAX is that they are LCDs for all the various languages and platforms interested in XML. I think the best APIs are environment-specific. From COmega to HAXML, the right work is already progressing. Let's just keep the W3C out of it.

--Uche Ogbuji on the xml-dev mailing list, Monday, 14 Feb 2005 18:34:03

Monday, March 28, 2005
the press has a duty, too, and that's to give our readers the information they need to see through the FUD and to make informed purchase decisions. In the past, that duty has often not been performed nearly as well as it should be, because the easy thing to do was to just print the stuff that the vendors would spoon feed you. And one of the great things about weblogs and online publishing in general is that "news" sources that just re-print press releases are becoming extinct.

--Ed Foster
Read the rest in The Gripe Line Weblog by Ed Foster

Saturday, March 26, 2005
Ontologies are unlikely to get much traction, IMHO, as long as they are called "ontologies", and one is forced to be conversant with formal semantics, formal logics, etc. in order to use them in an enterprise IT project. Somehow this approach has to be repackaged in a way that rests cleanly on the foundation of semantic web theory/technology, but exposes only those concepts (and terminology) that are accessible to ordinary mortals.

--Michael Champion on the xml-dev mailing list, Tuesday, 9 Nov 2004

Friday, March 25, 2005

When an XML document gets validated against a schema, the result isn't just a pass or fail: every element and attribute gets labeled with the schema-defined type that it validated against.

So you will have elements and attributes labeled as strings, integers, or dates, or as instances of user-defined types such as geographic coordinates, postal addresses, or taxpayer reference numbers. In a schema-aware stylesheet or query, you can write functions to process objects of a particular type, just as you would in Java or C#: the schema becomes the type system of the language. And you basically get the same benefits -- many programming errors are picked up sooner, which gives you a faster debugging turnaround, which means you can deliver working code more quickly.

At the coding level, you can declare the argument types of your variables, templates, and functions, and you can write path expressions and match patterns that select nodes according to their schema type. That means, for example, that you can select "all inline elements" in an XHTML document, without having to list all the elements that are classified as inline elements. Apart from anything else, that makes your code more resilient to changes in the schema.

--Michael Kay
Read the rest in An Interview with Michael Kay

Thursday, March 24, 2005
I remember listening many years ago to someone saying contemptuously that HTML would never succeed because it was so primitive. It succeeded, of course, precisely because it was so primitive. Today, I listen to the same people at the same companies say that XML over HTTP can never succeed because it is so primitive. Only with SOAP and SCHEMA and so on can it succeed. But the real magic in XML is that it is self-describing. The RDF guys never got this because they were looking for something that has never been delivered, namely universal truth. Saying that XML couldn’t succeed because the semantics weren’t known is like saying that Relational Databases couldn’t succeed because the semantics weren’t known or Text Search cannot succeed for the same reason.

--Adam Bosworth
Read the rest in Adam Bosworth's Weblog: ISCOC04 Talk

Wednesday, March 23, 2005
the web at large is not a meritocracy, but a global marketplace of ideas and cheap thrills that is strongly resistant to control of any kind, including quality control.

--Bob Foster on the xml-dev mailing list, Tuesday, 15 Mar 2005 19:24:14

Tuesday, March 22, 2005
Open source is not happening only in the US. In fact, much of OSS is happening in India, China, Brazil, Europe, where governments are trending toward FOSS (free/open source software) and away from proprietary and monopolistic software for reasons that are as much political as pragmatic and economic. Political, because FOSS offers a way out of the hegemony of US software products and English-only applications, and pragmatic because not only is it cheaper but FOSS encourages the formation of local software production and distribution economies. An engineer learning and working with Linux contributes more to local wealth, not to the wealth of a remote company; even if there are local Microsoft training companies, he will still end up supporting Microsoft's business. And there is the language issue. Since WWII, the lingua franca of business has been English, and software used reflects that. To be sure, there are Japanese, Chinese, French, German, etc., versions of proprietary software, but these are usually inadequate and imperfect; and the user has no choice but to use them. Open source has changed that. By giving the user--the native speaker--access to the localization process (translation, customization), the application can evolve to meet the speaker's real needs. Thus, our Welsh localization, our Indic language localizations, our Japanese, Czech, Serbian, Swahili, Basque, etc., localizations, all these are superior to proprietary versions, for they were created by the community actually using the application according to an open model of development. Errors are corrected, innovations incorporated.

--Luis Suarez-Potts
Read the rest in AdityaNag.org › An Interview with the OpenOffice.org Team.

Monday, March 21, 2005
The idea that 'ownership' of a DNS name might allow you to say "well, when I say ftp://mydnsname.com/blah, I don't really mean 'use the ftp protocol to connect to mydnsname.com and access 'blah', I really mean 'use the http protocol' -- well, it's nonsense. It's a Humpty Dumpty kind of argument, where words mean whatever the speaker wants them to mean, outside of any intrinsic or social meaning. URIs have meaning that is delegated through the scheme definition, and resource owners can merely manipulate the resources so that the URIs match what the resource owners might want the URIs to connect to.

--Larry Masinter on the www-tag mailing list, Tuesday, 15 Mar 2005 21:24:23

Sunday, March 20, 2005

Today people complain about a particular browser vendor having 90+% market share and therefore dictating the destiny of the Web. Let's remember that this same tyranny under a different browser made life just as unpleasant in the mid-90's, when every Monday morning saw a new tag or an ill-named event attribute showing up. I believe the Web is still recovering from that mess, and has in the process stagnated during the period 98--04.

In fact the move to XML-based technologies at the W3C, with XHTML 1 as the first step, was to introduce some method to the madness, and create something well-formed and predictable that the rest of the world, not just a couple of browser vendors, could build on.

I believe this broad XML vision driven by the W3C and its member companies has succeeded in spades as epitomized by the number of companies who are able to create products around these standards.

--T. V. Raman on the www-forms mailing list, Sunday, 17 Mar 2005 17:16:09

Friday, March 18, 2005
both RNG and XSD define a grammar for your document. The difference is that in RNG it is often easier to express the desired grammar than in XSD, and your grammar is not constrained by the many restrictions that are present in XSD. OTOH, the restrictions in XSD guarantee that you can unambiguously map the content of elements and attributes to the data types defined in XSD. This is very important when you want to do things like data binding of XML to a particular object representation. RNG schemas can contain ambiguous content models that are problematic when you want to do data binding.

--Jirka Kosek on the xml-dev mailing list, Sunday, 25 Nov 2004

Thursday, March 17, 2005
XForms is not a Web standard. It's a relatively new spec seeing early-adopter use in intranets.

--Brendan Eich
Read the rest in Browser battle shakes Net apps: ZDNet Australia: Insight: Software

Saturday, March 12, 2005
if you have the money and playtime that Google has, then sure you can build a passable email client and a map site with JavaScript ... But then given long enough I could dig a canyon with a teaspoon

--Mark Birbeck
Read the rest in Internet Applications: The "XForms Myth" Myth

Friday, March 11, 2005
HTML is a media-independent, device-neutral semantic markup language at its core. Most accessibility problems with HTML pages come from people abusing APIs that aren't even standardised (such as .innerHTML) or using standard APIs in illegal ways (for example, using document.write() on documents that were not created via document.open()).

--Ian Hickson on the www-forms mailing list, Friday, 11 Mar 2005 16:13:18 +0000

Thursday, March 10, 2005
For my current contract they could have saved months of effort by using XForms instead of standard HTML forms. We're talking thousands and thousands of lines of JSP code that could be thrown in the trash. I love nothing more than throwing code away ... love it.

--Kimbro Staken
Read the rest in Inspirational Technology

Wednesday, March 9, 2005
the basic ideas behind Web Services are really great! Distributed applications are, in fact, a really wonderful idea. They were back when we first started implementing them seriously in the early 80's, and they still are today. The idea of having standard interchange formats and describing your interfaces with formal definition languages like WSDL is also a good one that has been well proven over time. ASN.1, CORBA IDL, many RPC interfaces, XML, and, of course, lots of IETF and ISO standards have validated these ideas. We should also be particularly supportive of the attention that the WS-* folk give to reuse of standards. The mere fact that they seem to insist on defining new standards before they are willing to reuse them should not take away from our appreciation of their appreciation of reuse as a concept.

--Bob Wyman on the xml-dev mailing list, Friday, 2 Apr 2004

Tuesday, March 8, 2005
the reason for xsl:import's existence is that you should be able to take 10000000000 lines of Norm's docbook stylesheets, xsl:import them and then define a couple of your own templates for specific elements where you want your own processing. You can be sure that your templates will win, even if you have not fully digested the publicly available stylesheets that you were importing.

--David Carlisle on the xsl-list mailing list, Monday, 31 Jan 2005 16:07:04

Monday, March 7, 2005

You started at the wrong end of the pipe. SOAP always was and always will be an API-centric worldview.

That is not the future.

That is the past.

Re-implemented badly.

--Sean McGrath
Read the rest in WS-MD? No thanks

Sunday, March 6, 2005
Moore's law doesn't really apply to batteries, nor quite to the amount of heat your processor puts out. It's technically quite possible to put a very beefy processor in the space you have inside one of the larger devices, so long as you don't mind having to recharge it four times an hour and needing to have oven gloves and ear protection to handle it.

--Robin Berjon on the xml-dev mailing list, Monday, 07 Feb 2005 14:36:23

Friday, March 4, 2005

On controversial topics, the response can be especially swift. Wikipedia's article on Islam has been a persistent target of vandalism, but Wikipedia's defenders of Islam have always proved nimbler than the vandals. Take one fairly typical instance. At 11:20 one morning not too long ago, an anonymous user replaced the entire Islam entry with a single scatological word. At 11:22, a user named Solitude reverted the entry. At 11:25, the anonymous user struck again, this time replacing the article with the phrase "u stink!" By 11:26, another user, Ahoerstemeir, reverted that change - and the vandal disappeared. When MIT's Fernanda Viégas and IBM's Martin Wattenberg and Kushal Dave studied Wikipedia, they found that cases of mass deletions, a common form of vandalism, were corrected in a median time of 2.8 minutes. When an obscenity accompanied the mass deletion, the median time dropped to 1.7 minutes.

It turns out that Wikipedia has an innate capacity to heal itself. As a result, woefully outnumbered vandals often give up and leave. (To paraphrase Linus Torvalds, given enough eyeballs, all thugs are callow.) What's more, making changes is so simple that who prevails often comes down to who cares more. And hardcore Wikipedians care. A lot.

--Daniel H. Pink
Read the rest in Wired 13.03: The Book Stops Here

Thursday, March 3, 2005
I just started to use XOM in my beanshell scripts and have found it intuitive and very simple to use. It produces code that is very clear at a higher level of abstraction than I usually am forced to work.

--Gary Furash on the xom-interest mailing list, Wednesday, 9 Feb 2005 11:12:29

Wednesday, March 2, 2005
Pick one language for your tags and be done with it - preferably using some kind of naming rules based on ISO 11179. I suggest that the language (used for markup) be English. It's ubiquitous among IT and business folks and is the least ethnocentric (i.e., the vast majority of its speakers are non-native, including myself). Are you confusing markup (tags) with the data to be carried between those tags? Non-English languages, and indeed languages not written in the Latin alphabet, can and will be used for data content - this is the data that the consumer or non-programmer will see on his Web form or report.

--William J. Kammerer on the xml-dev mailing list, Sunday, 27 Feb 2005 11:23:12
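Kammerer's separation of markup language from data language looks like this in practice; a small sketch with invented element names, parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# English tag names carry the structure; the character data between
# them can be in any language or script.
doc = "<customer><name>山田太郎</name><city>東京</city></customer>"
root = ET.fromstring(doc)

assert root.find("name").text == "山田太郎"
assert root.find("city").text == "東京"
```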

Tuesday, March 1, 2005
I think of XML as just syntax: having syntactically correct XML doesn't tell you anything about what the XML means. Applying human intelligence to the vocabulary can let you add that meaning. It is nice to be human... Schema languages are then tools for just restricting the expressions you can write. But again, this doesn't mean any of those XML expressions are semantically meaningful (without that brain again, and additional information outside of the XML). So I tend to think of schemas as just prescribing higher-level syntactic rules, and nothing more.

--Ian Graham on the xml-dev mailing list, Sunday, Feb 2005 17:09:48

Monday, February 28, 2005
A lot of the spec. world isn't about IP, it's about trying to standardize business practices. Trying to use specs. to leverage first-mover advantages is a bit like <bad-metaphor> trying to keep the wilderness pristine by building a theme park in it</bad-metaphor> (though when someone does manage to sneak -- Netscape like -- under the radar, the advantage can be enormous for a while).

--Peter Hunsberger on the xml-dev mailing list, Monday, 5 Apr 2004

Sunday, February 27, 2005
I have coded a lot more C in my career than Python (or REXX or the like) in my life, and there is nothing to make me miss the specious type checking of a C compiler. I code a lot fewer bugs in dynamically typed languages, especially when they allow me greater flexibility in expression. There is nothing in the WXS type system that convinces me it is any less specious.

--Uche Ogbuji on the xml-dev mailing list, Saturday, 01 Jan 2005

Saturday, February 26, 2005
If the XBC discussion takes off, expect all the usual baloney, in particular the extremely fragrant sausage that if we reduce the number of different tags that XML recognizes, it will speed up parsers in some significant way. (Baloney because there are efficient ways of implementing parsers so that tests for rare tags don't penalize the common cases. For example, an optimized parser could detect that a document has no DOCTYPE declaration, and then switch to an implementation that does not need to do any buffer reallocation to handle entity inclusion. Java even provides jump tables to make simple parsing fast.)

--Rick Jelliffe
Read the rest in Binary XML? What about Unicode-aware CPUs instead?

Friday, February 25, 2005
denial has now been replaced with panic. The RESTful approach has now borne fruit. Applications like BlogLines, Flickr, Mappr, Del.icio.us and 43Things are revealing that the proof is in the pudding. All of these have shunned SOAP in favor of more RESTful designs. In stark contrast, the SOAP high priests have nothing to show but fancy visual tools. Their followers are at the brink of rebellion and if they don't act quickly they could find themselves swinging from a high treetop.

--Carlos Perez
Read the rest in SOAP is Comatose But Not Officially Dead!

Thursday, February 24, 2005
XML and XSLT have completely changed the way I approach internet applications. In fact, to such a degree that I have avoided much of the .NET development only because my intrigue with these two technologies was overwhelming!

--Karl J. Stubsjoen on the xsl-list mailing list, Wednesday, 19 May 2004

Wednesday, February 23, 2005
RDDL was designed with human limitations and expectations in mind. The RDF-based approaches, including the RDDL/RDF piece, seem to be marching down the road toward maximum complexity and ever-steeper learning curves.

--Simon St.Laurent on the xml-dev mailing list, 15 Apr 2002

Tuesday, February 22, 2005
Formal libraries are generally better than command-line tools, and so are preferred, but they're more work to create so are not as common.

--Bryce Harrington
Read the rest in Achieving higher consistency between OSS graphics applications - OSNews.com

Monday, February 21, 2005
people often just consider Schematron a front end for XSLT, therefore some kind of cheating. The default query language binding is indeed for XPaths a la XSLT, but other query languages are possible (and have been used by people). It is a general framework for making assertions. You could make an application-dependent (e.g. geography aware, time aware, currency aware, widget aware, business-process-aware) query language and use it in a Schematron.

--Rick Jelliffe on the xml-dev mailing list, Wednesday, 25 Aug 2004
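Jelliffe's framing of Schematron — select nodes with a query, assert a condition over them, report a message on failure — can be sketched outside XSLT as well. A minimal Python analogue, with an invented document and a single business rule, using ElementTree's XPath subset as the query language binding:

```python
import xml.etree.ElementTree as ET

DOC = """<order>
  <item price="3.50" qty="2"/>
  <item price="1.25" qty="4"/>
  <total>12.00</total>
</order>"""
root = ET.fromstring(DOC)

# Schematron-style rule: a pattern selects the context nodes, an
# assertion tests them, and a human-readable message reports failure.
failures = []
expected = sum(float(i.get("price")) * int(i.get("qty"))
               for i in root.findall("item"))
if abs(expected - float(root.findtext("total"))) > 0.005:
    failures.append("total does not equal the sum of item prices")

# An empty list means the document satisfies the rule.
```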

Saturday, February 19, 2005

The value of XML is that people can interchange structured data without limiting or defining exactly how that data is to be processed. The extent to which XML documents are self-describing is the extent to which people can take XML documents and process them in new ways. Part of that processing is to build and manipulate some sort of data structure, defined by a set of types and operations and often called a data model.

There's no limit to the number of different ways in which a given piece of XML can be processed.

So it's a strength of XML that there is no single data model and no single API.

--Liam Quin on the xml-dev mailing list, Sunday, 2 Dec 2004
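Quin's observation that there is no single data model or API shows up even within one standard library: the same bytes can be consumed as a tree or as a stream of parse events. A small Python sketch:

```python
import io
import xml.etree.ElementTree as ET

DOC = b"<log><entry level='info'/><entry level='error'/></log>"

# One processing model: build the whole tree, then query it.
tree_count = len(ET.fromstring(DOC).findall("entry"))

# Another: walk parse events and keep no tree at all.
stream_count = sum(
    1
    for event, elem in ET.iterparse(io.BytesIO(DOC), events=("start",))
    if elem.tag == "entry"
)

# Same document, two data models, same answer.
assert tree_count == stream_count == 2
```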

Friday, February 18, 2005
the rigid abstract layers of web service plumbing are all anonymous, endless messages flowing through under the rubric of the same URL. Unless they are logged, there is no accountability. Because they are all different and since the spec that defines their grammar, XML Schema, is the marriage of a camel to an elephant to a giraffe, only an African naturalist could love these messages. They are far better, mind you, than the MOM messages that preceded them. Since they are self describing, it is possible to put dynamic filters in to reroute or reshape them using XPATH and XSLT and XML Query and even other languages all of which can easily detect whether the messages are relevant and if so, where the interesting parts are. This is goodness. It is 21st century. But the origination and termination points, wrapped in the Byzantine complexity of JAX RPC or .NET are still frozen in the early bound rigidity of the 20th.

--Adam Bosworth
Read the rest in Adam Bosworth's Weblog: ISCOC04 Talk

Thursday, February 17, 2005

XML markup is case-sensitive because the cost of monocasing in Unicode is horrible, horrible, horrible. Go look at the source code in your local java or .Net library.

Also, not only is it expensive, it's just weird. The upper-case of é is different in France and Quebec, and the lower-case of 'I' is different here and in Turkey.

XML was monocase until quite late in its design, when we ran across this ugliness. I had a Java-language processor called Lark - the world's first - and when XML went case-sensitive, I got a factor of three performance improvement; all the time was being spent in toLowerCase().

--Tim Bray
Read the rest in Tyranny of the geeks
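The case-mapping oddities Bray describes are easy to verify in any Unicode-aware language; in Python:

```python
# Unicode case mappings are neither one-to-one nor locale-independent.

# German sharp s: one character becomes two when uppercased.
assert "ß".upper() == "SS"

# Turkish dotted capital I (U+0130): lowercasing yields 'i' plus a
# combining dot above, i.e. two code points.
assert len("İ".lower()) == 2

# A locale-blind lower() maps 'I' to 'i', which is wrong for Turkish
# text, where the answer should be dotless 'ı'.
assert "I".lower() == "i"
```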

Wednesday, February 16, 2005
XSLT is the Lisp of native XML programming. XQuery is ... more like a SQL with report writing capabilities.

--Jonathan Robie on the xml-dev mailing list, Sunday, 02 Dec 2004

Tuesday, February 15, 2005
For search engines to be able to index a frame-based website you need to provide links to all content pages. Visitors that come to the site from a search engine will also encounter problems, since they are likely to request a document that is missing important parts of the site, like navigational links. Some frame-dependent websites try to get around this by using the file robots.txt to tell search engines not to index sub-pages. On other sites, JavaScript is used to send anyone who arrives at the site from a search engine to the homepage. Both of these methods may work, if the goal is to get fewer visitors.

--Roger Johansson
Read the rest in Developing With Web Standards | 456 Berea Street

Monday, February 14, 2005

You may think you're still in good shape because all your competitors have websites that suck, too. Guess again. Your real competition is bricks-and-mortar stores. The web bubble collapsed when Wall Street discovered that's where everyone was going again. Why? Because websites suck. The web experience is a miserable experience, so bad that people will actually leave the comfort of their home and go through the miserable experience of fighting their way through traffic, crowds, and uniformed salespeople just to avoid having to deal with your site.

The abysmal state of ecommerce in particular and web checkout specifically has delayed the promise of the web for more than a decade.

Fix it!

--Bruce Tognazzini
Read the rest in The AskTog Bug House: Three New Persistent Design Bugs

Saturday, February 12, 2005

I think we should stop comparing XSLT and XQuery. There is no winner among the two. They are computationally equivalent, and they are fortunately compatible with each other.

They each use different programming styles, and each one is better under certain circumstances. I personally prefer XQuery when the data has an expected shape, and I prefer XSLT while dealing with unpredictable data shapes.

I think XQuery is easier to optimize, simply because it is simpler to do dataflow analysis (i.e. to know what operations are applied to each data item, and what data items are given as input to each operation), and that's the bread and butter of any compiler/optimizer technique. But that doesn't mean that it is not possible in XSLT.

--Daniela Florescu on the xml-dev mailing list, Sunday, 2 Dec 2004

Friday, February 11, 2005
It's important to understand that using XML simply implies a certain amount of overhead to parse and process XML. Same deal as processing SQL. However, when you put things in perspective, the overhead of processing XML is often a small price to pay given the tremendous benefits like increased interoperability and flexibility that XML has to offer – as I mentioned a moment ago, you can use it to tackle, elegantly and easily, applications that would have you coding in knots using SQL.

--Mike Olson
Read the rest in An Interview with Mike Olson on XQuery and Database Technologies

Thursday, February 10, 2005
Historically, a lot of the motivation for XSLT being in XML was the experience of DSSSL, where the unfamiliar LISP-like syntax was widely regarded in retrospect as the reason for the lack of take-up. It was intended that XSLT should be writable by non-programmers, and I believe that often happens. In fact I have heard it said that non-programmers have far fewer conceptual problems with the language than JavaScript or SQL programmers do.

--Michael Kay on the xml-dev mailing list, Wednesday, 8 Dec 2004

Wednesday, February 9, 2005
RE: Capitalization: Should you use ID or Id

Exactly, especially when most of my applications are database apps. When I see ID I immediately associate that with a primary key / foreign key field in the database. Plus with camel case being so popular, Id looks like a word, not like a key identifier.

--Eric Wise
Read the rest in Capitalization: Should you use ID or Id

Tuesday, February 8, 2005
We honestly weren't trying to make a commercial that would get rejected, but we were making a commercial that we hoped would get noticed. We worked hard to make sure we didn't cross the line, but we poked fun at censorship and guess what? We were censored. It's kind of scary.

--Paul Cappelli, chief executive of the Ad Store
Read the rest in The New York Times > Business > Media & Advertising > Advertising: Super Bowl Spot Provokes After Only One Broadcast

Monday, February 7, 2005
most people in charge of websites are at the extreme high end of the brainpower/techno-enthusiasm curve. These people are highly educated and very smart early adopters, and they spend a lot of time online. Most of the teens they know share these characteristics. Rarely do people in the top 5 percent spend any significant time with the 80 percent of the population who constitute the mainstream audience.

--Jakob Nielsen
Read the rest in Usability of Websites for Teenagers (Jakob Nielsen's Alertbox)

Saturday, February 5, 2005
Use XML tools as much as possible. XPath and XSL-T are enormously powerful tools for working with XML data, traditional programming languages are generally very poor at working with XML. Trying to bend traditional programming language techniques to working with XML data is what leads to horrible systems like W3C XML Schema and static binding of programming language objects to schemas. If you want to build truly robust systems do not fall into this trap.

--Kimbro Staken
Read the rest in Inspirational Technology: 10 things to change in your thinking when building REST XML Protocols
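
As a small illustration of the point, the standard javax.xml.xpath API lets a single XPath expression replace a hand-written tree walk. The sample document and element names below are invented for the sketch:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id='1'><total>9.50</total></order>"
                   + "<order id='2'><total>12.00</total></order></orders>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // One declarative expression instead of nested getChildNodes() loops.
        String total = xpath.evaluate("/orders/order[@id='2']/total", doc);
        System.out.println(total); // prints "12.00"
    }
}
```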

Friday, February 4, 2005
We're an adult network. We reserve the right to program our network for adults. In a universe of 500 channels, we beg you - watch something else. Please.

--Doug Herzog, Comedy Central
Read the rest in Wired 13.02: Building the Fun Bomb

Thursday, February 3, 2005
Ever since Internet Explorer toppled Netscape in 1998, browser innovation has been more or less limited to pop-up ads, spyware, and viruses. Over the past six years, IE has become a third world bus depot, the gathering point for a crush of hawkers, con artists, and pickpockets.

--Josh McHugh
Read the rest in Wired 13.02: The Firefox Explosion

Wednesday, February 2, 2005
So far, no one has shown me a DOM, XSLT, or XQuery-based app that is not at least an order of magnitude or two slower than a hand-rolled streaming application, and that's not even considering the memory overhead. As a matter of fact, many XML consultants get called in precisely because someone designed a prototype using XSLT that ran just fine in the demo but melted in a puff of goo and smoke on the first load test. Unfortunately, the ones who do not call in the consultants simply conclude that XML is too slow and abandon it completely.

--David Megginson on the xml-dev mailing list, Monday, 27 Dec 2004
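
The streaming approach Megginson is contrasting with DOM and XSLT can be sketched with plain SAX: memory use is proportional to the handler's own state, not the document's size. The document and element names are invented for the example:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class StreamCount {
    public static void main(String[] args) throws Exception {
        String xml = "<log><entry/><entry/><entry/></log>";
        final int[] count = {0};
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local,
                                     String qName, Attributes atts) {
                if ("entry".equals(qName)) count[0]++; // no tree is ever built
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println(count[0]); // prints "3"
    }
}
```

The same handler processes a three-element document and a three-million-element document in constant memory, which is why hand-rolled streaming code wins the load tests Megginson describes.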

Tuesday, February 1, 2005
the RSS/Atom experience has very clearly shown that it is far easier to hack software to process tag soup than to motivate and educate even fairly geeky people to write valid XML.

--Michael Champion on the xml-dev mailing list, Wednesday, 27 Oct 2004

Monday, January 31, 2005

While XML itself may not be proprietary, there have been an alarming number of patents issued and applied for that claim the use of XML in specific applications. Thus, while XML itself may not be proprietary, its use may be.

Personally, I believe that those of us who rely on XML every day should be doing whatever we can to get the law or practice changed to stop the sort of silliness that is allowing people to chip away at the scope of applications for which we can use XML. Any claim that says something like "Doing x with XML." should be rejected. If "Doing x." isn't good enough to be a claim or if "Doing x with a generic encoding format." isn't good enough, then the claim should fail. The idea of claiming a patent on something because you do it with XML or HTML instead of doing it with ASN.1, XDR, CSV, etc. is ridiculous.

--Bob Wyman on the xml-dev mailing list, Monday, 6 Dec 2004

Saturday, January 29, 2005
Working, as I do, in a place where the data comes in many forms from many different sources, converting it all into xml distributes the complexity away from the point of building the interface. I have always found it very quick and easy to roll my own if the storage system doesn't do it for me.

--Rod Humphris on the xsl-list mailing list, Wednesday, 19 May 2004

Friday, January 28, 2005

It just isn't possible to be even a single order of magnitude faster in the general case. An alternate encoding for XML can provide gains in 2 ways: file size, or parser CPU usage. I've never seen a format which provides significant file size gains without high CPU cost (although you can sometimes play games and push all the CPU cost to the writer, so that the reader is cheap, for example). Most of the CPU cost of parsing is related to the abstract model of XML, not the text parsing: Duplicate attribute detection, character checking, namespace resolution/checking. Every binary-xml implementation I have researched which improves CPU utilization does so by skipping checks such as these. At that point you are no longer talking about XML.

Note the key constraint in my opening sentence: "in the general case". Some scenarios clearly benefit from an alternate encoding, but different scenarios demand different trade-offs. I have yet to hear of any proposed solution which successfully balances the different demands. I'm not sure it is possible, without creating a homunculus.

--Derek Denny-Brown on the xml-dev mailing list, Monday, 22 Nov 2004

Thursday, January 27, 2005
the "ability to see things from others' points of view," also known as empathy, is a crucial skill for creating great user interfaces, because you have to be able to see things through the eyes of the user; and not just one user but the whole crazy gamut of them, all with varying technical expertise and life experience. Understanding human nature is crucial in both UI design and storytelling, as well as lots of other spheres of endeavor. In a way, marketing is storytelling, too.

--Andy Hertzfeld
Read the rest in MacDevCenter.com: The Insanely Great Story of How the Mac was Made -- An Interview with Andy Hertzfeld

Wednesday, January 26, 2005
A resource is identified by URI and may emit representations. There's no way to tell from the representations what the resource "is"; I tend to believe a resource is what its publisher says it is as a good rule of thumb. But it doesn't affect the software very much.

--Tim Bray on the xml-dev mailing list, Wednesday, 23 Jul 2003

Tuesday, January 25, 2005
XSL is an incomplete answer. You see, XSL is a constraint language. In XSL, you can specify how large the pages are, how many columns they have, the sizes of fonts, and a myriad other parameters. What you don't specify directly are where the page breaks necessarily occur, or which words get hyphenated, or where exactly any of the actual marks are going to wind up on paper. The XSL Formatting Objects (FO) document is input to a formatter, a composition tool that renders marks on paper, typically these days in the form of a PDF file. Producing quality printed output is devilishly hard. Of all the various sorts of software systems I've encountered, a formatter is hands down the hardest to implement well.

--Norm Walsh
Read the rest in webarch.pdf

Monday, January 24, 2005
New users of Linux are almost always exposed to it through a member of the userbase, insuring that they have at least one person on-hand who can answer their inevitable questions and undo their horrible mistakes. The above is a romanticized description of the Linux experience, because it implies that the ubiquitous Linux veteran is not a factor. Unfortunately, Linux was not designed for end-to-end ease of use -- in that respect, it was not "designed" at all.

--Garrett Birkel
Read the rest in The Command Line In 2004

Sunday, January 23, 2005

The problem with XQuery is not that it doesn't use XML syntax; no one moans that C doesn't use XML syntax. The problem is that it uses (or one could say abuses) XML syntax when it isn't XML.

<foo> ʚ </foo>

_looks_ for all the world like XML, and people will put it in XML documents, or try to write it with XML editors, but if in fact it is XQuery rather than XML then things will go wrong, and worryingly they won't go wrong straight away, they will just go wrong sometimes when you hit the obscure (or not so obscure) edge cases where the XML and XQuery grammars parse the same string in different ways.

--David Carlisle on the xml-dev mailing list, Sunday, 9 Dec 2004
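
Carlisle's divergence is easy to demonstrate. Curly braces are ordinary character data to an XML parser, but an enclosed expression to an XQuery processor, so the same string means two different things. A minimal sketch using the JDK's DOM parser:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class CurlyBraces {
    public static void main(String[] args) throws Exception {
        // As XML, the braces below are plain character data.
        String s = "<foo>{1 + 1}</foo>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(s.getBytes("UTF-8")));
        System.out.println(doc.getDocumentElement().getTextContent());
        // prints "{1 + 1}" -- whereas an XQuery processor would treat the
        // braces as an enclosed expression and evaluate the very same
        // string to <foo>2</foo>
    }
}
```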

Saturday, January 22, 2005

I've always been skeptical about many of the visions of ubiquitous computing, whether Jini technology-based or not. The idea that my refrigerator will know its contents and will suggest that I make certain dishes (or conspire with my stove and mixer to insure that I will) sounds like the Nutramatic machine from the Hitchhiker's Guide to the Galaxy -- which, after much analysis, always served you something that you didn't want. Many of these uses seem pretty silly to me.

But this is not to say that this technology won't be useful; it will be used in ways we don't currently envision. For instance, it would be useful for my car to be running constant diagnostics, to warn me of needed maintenance problems before they develop, and to give feedback to the auto manufacturers so that they can receive data to make a car that is more efficient, safer, and durable. I'd like my home appliances to be able to interact with the electric grid to conserve energy. And fairly simple medical monitoring connected to existing wireless telephone networks could decrease medical costs and save lives.

In my vision of the future of ubiquitous computing, the role of humans in the network is pretty minor. Most of the information will be passed to other computers, which will do the analysis of the data, and perhaps pass that analysis on to other computers. This is a different kind of network than those that we have designed, for the most part, in the past. Without human intervention, mechanisms for finding, navigating, and interaction will need to be designed to be used by machines and software objects, not people. And machines and software have very different capabilities and skills than humans. This will require a different approach to the construction of networks. That was one of the challenges we had when designing Jini technology, which was designed around machines and software objects interacting with other machines and software objects. We got a lot of it right, but there is a lot more that needs to be done in this area.

But not all of the questions regarding this subject are technical in nature. What do we want in such a world? For instance, insuring that the information collected benefits those it refers to is essential. Will the government be the guarantor of privacy, or will we need to protect ourselves against government intrusions?

I don't think that we can simply stop the trend towards ubiquitous computing. Its efficiencies are too large, and benefits are too great, and its savings too significant. We need to ask the right set of questions at the beginning of the process, and not wait until the technology dictates the answers.

--Jim Waldo
Read the rest in Jini Network Technology Fulfilling its Promise

Friday, January 21, 2005
The choice between JSP and XSLT comes down to whether the pages are just dynamic, or dynamic and customized. If they are dynamic *and* customized, and you're in a J2EE environment anyway, you probably want to use JSP rather than XSLT -- JSPs compile to Java code, so they can change content without requiring a reparse; XSLT usually compiles to static HTML, so any change in content requires a reparse. If you're simply publishing news stories or chapters of a technical manual, where every user sees the same thing for (say) an hour before the content changes, then XSLT and a good cache will be a great solution; if you're showing user account information, shopping carts, the current weather in Tulsa, or anything else that is the result of a real-time database query, then JSP templates will give a significant performance advantage, since the HTML from the XSLT will not usually be cacheable.

--David Megginson on the xml-dev mailing list, Wednesday, 29 Dec 2004

Thursday, January 20, 2005
1.1 after 2.0?! The two camps really should have used different names for their formats instead of duelling version numbers. It's as though IMAP had been named "POP 4". At least the Atom folks didn't name their format "RSS 3.0"!

--Jens Alfke
Read the rest in inessential.com: Weblog: Comments for ‘RSS 1.1’

Wednesday, January 19, 2005

One of the common misconceptions about writing XML applications is that creating a parser instance does not incur a large performance cost. On the contrary, creation of a parser instance involves creation, initialization, and setup of many objects that the parser needs and reuses for each subsequent XML document parsing. These initialization and setup operations are expensive.

In addition, creating a parser can be even more expensive if you are using the JAXP API. To obtain a parser with this API, you first need to retrieve a corresponding parser factory -- such as a SAXParserFactory -- and use it to create the parser. To retrieve a parser factory, JAXP uses a search mechanism that first looks up a ClassLoader (depending on the environment, this can be an expensive operation), and then attempts to locate a parser factory implementation that can be specified in the JAXP system property, the jaxp.property file, or by using the Jar Service Provider mechanism. The lookup using the Jar Service Provider mechanism can be particularly expensive as it may search through all the JARs on the classpath; this can perform even worse if the ClassLoader consulted does a search on the network.

Consequently, in order to achieve better performance, we strongly recommend that your application create a parser once and then reuse this parser instance.

--Elena Litani and Michael Glavassevich, IBM
Read the rest in Improve performance in your XML applications, Part 2
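
The recommendation above can be sketched in a few lines: pay the JAXP factory lookup and parser construction cost once, then reuse the instance for every document. (Note that a SAX parser is not thread-safe, so this pattern assumes single-threaded use; in JAXP 1.3 and later, call reset() between uses if you change features.)

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class ReuseParser {
    // Factory lookup and parser setup happen exactly once.
    private static final SAXParser PARSER = createParser();

    private static SAXParser createParser() {
        try {
            return SAXParserFactory.newInstance().newSAXParser();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // The same instance parses every subsequent document.
        for (int i = 0; i < 3; i++) {
            String doc = "<doc n='" + i + "'/>";
            PARSER.parse(new ByteArrayInputStream(doc.getBytes("UTF-8")),
                         new DefaultHandler());
        }
        System.out.println("parsed 3 documents with one parser instance");
    }
}
```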

Tuesday, January 18, 2005

Now imagine what it would be like if instead of using our algorithms we relied on the news suppliers to put in all the right metadata and label their stories the way they wanted to. "Is my story a story that's going to be buried on page 20, or is it a top story? I'll put my metadata in. Are the people I'm talking about terrorists or freedom fighters? What's the definition of patriot? What's the definition of marriage?"

Just defining these kinds of ontologies when you're talking about these kinds of political questions rather than about part numbers; this becomes a political statement. People get killed over less than this. These are places where ontologies are not going to work. There's going to be arguments over them. And you've got to fall back on some other kinds of approaches.

The best place where ontologies will work is when you have an oligarchy of consumers who can force the providers to play the game. Something like the auto parts industry, where the auto manufacturers can get together and say, "Everybody who wants to sell to us do this." They can do that because there's only a couple of them. In other industries, if there's one major player, then they don't want to play the game because they don't want everybody else to catch up. And if there's too many minor players, then it's hard for them to get together.

--Peter Norvig, Google
Read the rest in Semantic Web Ontologies: What Works and What Doesn't :: AO

Monday, January 17, 2005

Since the basic question is fairly simple -- "Is my beach going to be hit by a destructive tsunami and when?" -- and the required data sources are limited, I figure we won't need a supercomputer.

The seismographs are online, we gather the data using XML, continuously crunch it using the codes I am assuming already exist, then we need the warning, which I would flash on the screen of my PC down at the surf shop using a Javascript widget built with Konfabulator, the most beautiful widget generator of all. Looking just like a TV weather map, the widget would flash a warning and even include a countdown timer just like in the movies.

You don't need an international consortium to build such a local tsunami warning system. You don't even need broadband. The data is available, processing power is abundant and cheap. With local effort, there is no reason why every populated beach on earth can't have a practical tsunami warning system up and running a month from now. That's Internet time for you, but in this case, its application can protect friends everywhere from senseless and easily avoidable death.

--Robert X. Cringely
Read the rest in PBS | I, Cringely . Archived Column

Sunday, January 16, 2005

There are many people who seem to think that 99% of the world's data is held in relational databases. They are badly wrong.

I had some involvement with an internet banking application a few years ago. When the customer logged on, all the relevant customer and account information was assembled from various back-end systems into an XML document which was then used as session data for the duration of the session. If I remember rightly, there were about six back-end systems involved, and only one was relational. The main operational transaction system was accessed by sending an application-level CICS transaction to a mainframe. Many of the other databases were ad-hoc, for example information about recent interactions with the telephone call centre was in Lotus Notes.

The beauty of using XML for this kind of information integration is that it allows the resulting data to have a much richer structure. This makes it much easier to combine the structured data with the relatively unstructured, and handle the semantic conflicts that occur: for example if two databases record different phone numbers for a customer, you just capture both. That is very much harder to do if the result has to be in relational form.

--Michael Kay on the xml-dev mailing list, Wednesday, 15 Dec 2004

Saturday, January 15, 2005
SOAP-using apps should either avoid the use of SOAP messages to do information "gets" or should provide an alternate interface that allows real HTTP get. There will always be more tools out there that know how to do HTTP gets (proxies, spiders, browsers) than tools that know how to interpret your SOAP method as a getter. And more to the point, things that are GET-able through HTTP have URLs that can be linked to and thus integrated into distributed information collections (such as Google or Meerkat)

--Paul Prescod
Read the rest in RestWiki: HowSoapComparesToRest

Friday, January 14, 2005
One of the things I've found interesting about discussions with the RDF/Semantic Web crowd is that many of them fail to see that moving to ontologies and the like basically is swapping one mapping mechanism (e.g. transformations using XSLT or regular code in your favorite OOP language) for another (e.g. creating ontologies using technologies like OWL or DAML+OIL). At the end of the day one still has to transform format X to format Y to make sense of it; whether this mapping is done with XSLT or with OWL is to me incidental. However the Semantic Web related mapping technologies don't allow for the kind of complex and messy mappings that occur in the real world.

--Dare Obasanjo on the xml-dev mailing list, Tuesday, 30 Nov 2004

Thursday, January 13, 2005
I guess it worries me when people say that the problem with XML is that it doesn't do ENOUGH. Most of the problems I see stem from trying to do too much, too soon, and biting off more before the previous mouthful was chewed.

--Michael Champion on the xml-dev mailing list, Friday, 29 Oct 2004

Wednesday, January 12, 2005

Can you imagine how you would react if the bricks-and-mortar stores did to you what you do to your customers?

You go into the supermarket. Every item you're interested in is inside a box. You press the button on the front of the box. It kicks off a 30 second time delay. You open the box. There's a fuzzy picture of the item along with a note that says it's out of stock.

Nonetheless, you finally fill your cart with items that are in stock and go to checkout. You hand over your ATM card, and they hand you a long, three-page form to fill out, similar, yet different than the one you filled out in another four stores down the block earlier today. The same one you filled out in this store last week, and the week before that, and the week before that.

Across the street is another supermarket that has all the food on display, instantly available, and will take your ATM card with a smile, then seems to have the advanced technology known as a computer to gain all the information it needs about you without bothering you with a form.

Where would you be shopping tomorrow?

--Bruce Tognazzini
Read the rest in The AskTog Bug House: Three New Persistent Design Bugs

Tuesday, January 11, 2005
I've gone through the source code of several open source projects. Some are excellent and some are just awful. On the whole Sun's code is better. I assume it is better tested as well. Many open source projects have trivial, or worse yet, no testing in place. What I think is great about open source is exactly what its name says. The source code is available. If there is a bug I can find it, report it and correct it. I can wrap the offending class, use all of its non-buggy implementation and re-implement the buggy parts. Sun provides me with the source, so generally, I'm happy. I can do all of the above with Sun's Java.

--Glen Ezkovich on the JavaDev mailing list, Friday, 26 Nov 2004

Monday, January 10, 2005
Forget the WS-I lunacy. Web applications for computers were happening before the web services standards junk. Amazon would still be providing their interfaces with or without SOAP, WSDL and UDDI, and indeed all the evidence is that their users prefer to use the simpler HTTP/XML APIs anyway. As far as the web is concerned, the WS-* work is about sprinkling XML pixie dust on a failing idea.

--Edd Dumbill
Read the rest in Edd Dumbill's Weblog: Behind the Times

Sunday, January 9, 2005

is it possible that people are beginning to react to XML technologies that they feel were forced down their throat by bypassing those XML structures in favor of more ad-hoc ones? There is little in the way of true empirical evidence, but there is a lot of anecdotal evidence that suggests this. SOAP has definitely achieved success in the hard business sphere, but is being adopted much less readily by many companies that are dealing with document content. WSDL (and the implied RPC model that it brings to the table) has likewise been successful in a much smaller niche than the proponents of these specs had hoped.

A word to such companies -- your customers are looking for the best solution to their dilemmas, not yours. While it is necessary to place a stake in the ground over a particular technology periodically, making the argument that it's too late to make changes will only make the foundation that these technologies are building just that much more fragile. It also may make your technologies out of synch with the rest of the world, and in a world that is becoming increasingly heterogeneous, this can result in some serious discontinuities to the bottom line when a critical mass of users of the alternative standard is reached.

--Kurt Cagle
Read the rest in Metaphorical Web: Conferences and Google

Saturday, January 8, 2005
XML is fine... it's all the stuff dumped *on* and *around* it that cause problems.

--Gavin Thomas Nicol on the xml-dev mailing list, Sunday, 31 Oct 2004

Thursday, January 6, 2005
A lot of people are quick with the cattle prod for throwing every bit of useful code out to the waiting world. But they forget just how much work it actually is to release software. If you try releasing code with a minimum of effort, you're subject to licensing flames and users who expect bottomless tech support and four score reams of professionally-edited documentation.

--Uche Ogbuji on the xml-dev mailing list, Sunday, 30 Dec 2004

Wednesday, January 5, 2005
One problem I see, considering how long people have been talking about the Semantic Web, is that there's still surprisingly little data to form into a web. (I'm just talking in terms of publicly available non-transient RDF.) I wonder how far XML would have gotten if we'd all spent the first few years writing DTDs and only occasionally created little document instances to demonstrate how our DTDs might be used. That didn't happen because people coming out of the SGML world already had plenty of real-world applications in which to use DTDs and documents, and the dot com boom gave people lots more ideas, but the amount of practical, usable RDF data still seems remarkably small. I've been compiling a list at rdfdata.org, and it's getting harder and harder to find new entries.

--Bob DuCharme on the xml-dev mailing list, Tuesday, 9 Nov 2004

Tuesday, January 4, 2005

Hierarchical and CODASYL databases provided speedy inserts and updates, but the link/pointer model isn't so great for querying -- particularly for ad hoc queries. That's why the relational model and SQL became important. You tell an engine what results you want instead of telling it how to navigate to the data. The same phenomenon applies to the Web.

Serendipity is appealing -- browsing the web and uncovering gems by following links -- but there's a point when the document collection becomes too large. That's when using an engine to do the work of finding information becomes more appealing than specifying the mechanics of navigation. Search engines are an example of that phenomenon. So is XQuery. We'll need those and other information retrieval solutions as the universe of searchable documents continues to expand.

--Ken North on the xml-dev mailing list, Sunday, 2 Dec 2004

Monday, January 3, 2005

Nobody should start to undertake a large project. You start with a small _trivial_ project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision.

So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way _useful_ first, and then others will say "hey, that _almost_ works for me", and they'll get involved in the project.

--Linus Torvalds
Read the rest in Linux Times : An Online Linux Magazine - Linus Torvalds: ''Desktop Market has already started''

Sunday, January 2, 2005

I have trouble believing that RDF will go anywhere as long as it depends on URIs like:

   http://www.example.com/terms/editor

Let's face it. I want to write about Bob, Bob's house, the address of Bob's house, and the color of Bob's house, not:

   http://www.example.com/terms/person
   http://www.example.com/terms/name
   http://www.example.com/terms/house
   http://www.example.com/terms/owner
   http://www.example.com/terms/address
   http://www.example.com/terms/color
   ...

--Ronald Bourret on the xml-dev mailing list, Tuesday, 09 Nov 2004

Saturday, January 1, 2005
While most people using the web are sighted, all are not. And there is no way of making the web WYSIWYG. There will always be variations as long as people use different browsers, operating systems, monitor sizes, screen resolutions, window sizes, colour calibration, and font sizes. The web is not print or television. Make your design flexible.

--Roger Johansson
Read the rest in Web development mistakes | Lab | 456 Berea Street

Quotes in 2004 | Quotes in 2003 | Quotes in 2002 | Quotes in 2001 | Quotes in 2000 | Quotes in 1999


[ Cafe con Leche | Books | Trade Shows ]

Copyright 2005 Elliotte Rusty Harold
elharo@ibiblio.org
Last Modified at Monday, January 2, 2006 5:02:38 AM