Most of the unfulfilling argument surrounding it springs from the assumption that, since namespace names *look* like URLs, they should *act* like URLs -- that is, that one should be able to point a Web Browser at them and retrieve something useful since they look like something one might point a Web Browser at. This assumption, while not unreasonable, is explicitly disclaimed by the namespaces spec. Namespace names are Identifiers, not Locators.--Joe English on the xml-dev mailing list
I propose instead we have a Millennial Namespaces Treaty of Wulai (named after the hot springs where I have been soaking today, looking out at a waterfall on the pretty green opposing valley, feeling like a boiling egg with no timer), where we all agree that in some limited but important circumstances (namespace URI=schema URI) makes sense, and should be allowed, even though the general case should not support (namespace URI=schema URI), though in no case does (namespace=schema). Then the discussion would be on how to cope with this diversity both hygienically and respectfully.--Rick Jelliffe on the xml-dev mailing list
I don't really see any reason that anyone would use XHTML modularization; XHTML is already "extensible" in the sense that anything in XML is -- why the need for another extensibility scheme added onto that? My prediction is that modularization will seem quite useful at first, until developers realize that it does very little for them, and it will soon be a discarded technology full of promise but of little actual use.--Kynn Bartlett on the XHTML-L mailing list
Separating document structure from its presentation may not make any sense for documents authored solely for display on one platform, but as soon as you have to redeploy that content on another platform or within another context, it becomes very important.--Steven Champeon on the xhtml-l mailing list
I have no faith in Semantic Web plans -- they sound too much like the Information-Highway plans that *lost* to the Internet and Web a decade ago. I see a role for RDF or something similar on the Web for pure data, like (say) mutual fund values or GIS information -- stuff that users plan on processing automatically by machine (and that you would use delimited text or a spreadsheet file for right now). In this view RDF would simply be the data counterpart of HTML rather than a silver bullet that will enable intelligent searching and machine understanding.--David Megginson on the xml-dev mailing list
Take a look at the dot.com fallout. People will believe and invest and will lose their shirts and others will take their shirts and build big pink houses on the hillsides outside San Jose with them.--Claude L Bullard on the xml-dev mailing list
I'm very happy with a lot of the work I see in Topic Maps, RDF, schemas, and other information modeling systems. I'm deeply unhappy, however, with the strange visions of a Grand Unified Information Model (GUIM) they seem to produce in some people. I'd like to take something of an Extreme Programming view on this project, evolving vocabularies and architectures from pieces which we can make work today without nearly as much concern for the larger vision set forth in the various requirements of the GUIM. I don't think it will lead us directly to the GUIM, but it might let us get more work done in the meantime.--Simon St.Laurent on the xml-dev mailing list
The Unicode standard *is* universal, at least in intent. Virtually every script on the planet is either (a) encoded, (b) proposed for encoding, or (c) in need of more information before encoding is practical.--Doug Ewell on the Unicode mailing list
I'd love to hear more folks taking the contingent nature of communications, including XML communications, more seriously - as an accepted foundation, not as a problem. There's more going on than just nailed-down semantics and well-understood content, and there always will be in any large-scale project.
Local understandings - as Walter Perry has made clear a number of times - really do matter. Maybe it's frightening to people who want something more solid to hold on to, but solid often seems to equal brittle...--Simon St.Laurent on the xml-dev mailing list
The more tools you have an understanding of, the better you will be at resolving any problem placed before you. XML, Java, C, even Fortran have their appropriate uses. It should be our goal as developers to know when to use which tool. If you only learn XML and Java, you will always try to use them, even when a simple script would be the better solution. Not everything needs to be cross-platform, web accessible, and scalable. Know the requirements and choose the appropriate tool to solve the problem.--Nicole Curcio on the XML-INTEREST mailing list
the right of anonymity, which people are beginning to see they might have some stake in, and what we now call privacy, by which this time we mean control over personal information, both depend upon encryption based solutions. If we are going to have the ability to read what we want without being surveilled in reading it, that's because we are using agents to do our reading, which are unidentifiable and which restore content to us in an encrypted stream. That's how we get around people who establish surveillance blockages or interception points to find out what we're reading and whether we're paying for it, doing something seditious because of it, or just looking at naked people, or whatever. Our ability to behave anonymously on the net, and our ability to control the flow of information about ourselves both depend on our ability to encrypt what we do.--Eben Moglen
According to Mark Tilden's calculations, building a traditional robot typically takes about eighteen months of manpower and programming. BEAM robots can be built in considerably less time. Somehow this does not reassure me. The instruction manual is filled with blurry pictures and daunting explanations. ("When the voltage level matches a preset point at the 1381, it sends a pulse out to the 2N3904 transistor, turning it on, then some power splits off through the 2.2K resistor and goes into the 2N3906 transistor.") Hmm.
I decide to assemble a crackerjack team of experts (my architect roommate and computer whiz boyfriend) to help me build my robots. They are excited, as is every male friend that I tell about the project. All boys love robots.--Jenny Offill
What we have here are two different structures of the distribution of cultural product. You have a set of people whose fundamental belief is that cultural products are best distributed when they are owned, and they are attempting to construct a leak-proof pipe from production studio to eardrum or eyeball of the consumer. Their goal is to construct a piping system that allows them to distribute completely dephysicalized cultural entities which have zero marginal cost and which in a competitive economy would therefore be priced at zero, but they wish to distribute them at non-zero prices. In the ideal world, they would distribute them at the same prices they get for physical objects which cost a lot of money to make, move and sell, and they would become ferociously profitable. They are prepared to give on price, but at every turn, as with the VCR at the beginning of the last epoch, their principle is any ability of this content to escape their control will bring about the end of civilization.
This is an absurd claim. Nobody believes it but studio executives. My students are beginning to believe, to my shock, a communist thing - namely, it's our music, and how dare they take it away from us - which is an enormously important and suggestive development.--Eben Moglen
When you create a sophisticated XSLT style sheet you are programming. At Talva we develop a lot of stylesheets and discovered that using the extreme programming practice leads to better written stylesheets and also that the knowledge is percolating more easily among the team members, especially when the team has to master XSLT in addition to: WML, VoiceXML, SVG, XHTML, SMIL.--Didier PH Martin on the xml-dev mailing list
It's hellishly difficult entering data on the phone. The buzz starting later this year will be more about voice-based systems, voice interfaces and VoiceXML technology.--Richard Barnwell, CTO, Zefer Inc.
The most dismaying thing I learned in the course of The Plant's run (a run that's not over but only lying dormant until next summer) is that there's a profound crevasse of misunderstanding between the smart guys of the business world and the talented goofballs who make entertainment in this increasingly entertainment-hungry society. Publishers, investors and media watchers see a venture like The Plant and say, "Ah, King is moving into e-commerce!" in the tones of 1940s newscasters relaying the news that Hitler is moving east. King, in the meantime, is thinking something along the lines of, "Hey guys! My uncle's got a barn! Let's put on a show!" It's a goofy thing, in other words. Not a business thing at all. Which, may I add, isn't the same thing as saying there's no money in it. Or cultural clout. Just ask the goofball who thought up Napster.--Stephen King
Following a UK field study, 70% of users decided not to continue using WAP. Currently, its services are poorly designed, have insufficient task analysis, and abuse existing non-mobile design guidelines. WAP's killer app is killing time; m-commerce's prospects are dim for the next several years.--Jakob Nielsen
We're about to reach the end of what might be known as the golden age of personal computer software. Like the automobiles of the 1950's, the software of the 1990's delighted and amused us despite its many flaws and shortcomings. In the 1950's what was good for the car industry was good for the US -- an argument that in ways has applied to the software and "dot com" industries in the 1990's. As with car quality in the 1950's, it is widely argued that it is a disservice to stockholders to make software more reliable than the market has demanded. Instead of solid engineering values, fancy features and horsepower are the two factors used to sell computing systems. While this euphoric era of desktop computing will be remembered fondly by many, its days are numbered.
The current era of desktop computing will pass soon, just as it did for automobiles when the combination of oil shortages and Japanese manufacturing prowess threw Detroit from the leading force in the economy into part of the rust belt. It will only be possible to pin down the triggering factors for the demise of the current "golden era" of computer software in retrospect. But, it seems likely that the shift to a new era will involve factors such as the globalization of the software business, the adoption of desktop computers as an essential business tool rather than an occasional productivity enhancer, and the continuing proliferation of computers into embedded systems that form the new infrastructure of our society. The issue of what is eventually said to cause the transition to this new era of computing systems is, however, not as important as the fact that it is inevitable in our changing world.--James H. Morris, Dean, Carnegie Mellon School of Computer Science
While efforts to streamline e-commerce may seem innocent, we have all seen the zeal with which businesses pursue our most private personal information and their complete willingness to sell our information for their own profit. It is absolutely wrong.--Senator Richard Shelby, R-AL
A central theme of the new era of computing will be an absolute requirement for high dependability (but, without the traditionally exorbitant price tag usually associated with critical systems). Public computing fiascoes such as probes crashing into Mars or on-line services of all sorts going belly up for hours at a time are only the tip of the iceberg of this need. Every one of us personally experiences computing system meltdowns on a regular basis, and it would be no surprise if our children develop a quick reflex for pressing control-alt-delete before they've memorized their multiplication tables. While stories of bad software killing people are still rare, they exist and may portend the future. The lessons of the Y2K experience are twofold: such problems can indeed be overcome by dint of extraordinary effort and expenditures, but just as importantly, we rely upon computers far more than we fully realize until we're forced to step back and take notice of the true situation.
The point is that enthusiasm for computers has progressed to the point that our society is already completely committed to using them, and is becoming utterly dependent on them working correctly and continuously. But, commercial computer systems, as we currently build them, simply aren't worthy of our unreserved trust and confidence.--James H. Morris, Dean, Carnegie Mellon School of Computer Science. Read the rest in The High Dependability Computing and Communication Consortium
What is changing is that higher-level languages are becoming much more important as the number of computer-involved people increases. Things that began as neat but small tools, like Perl or Python, say, are suddenly more central in the whole scheme of things. The kind of programming that C provides will probably remain similar absolutely or slowly decline in usage, but relatively, JavaScript or its variants, or XML, will continue to become more central. For that matter, it may be that Visual Basic is the most heavily used language around the world. I'm not picking a winner here, but higher-level ways of instructing machines will continue to occupy more of the center of the stage.--Dennis Ritchie
Working with Mac OS X Beta, I feel slightly as if an old friend had been kidnapped by aliens. Something has taken over my Macintosh, and I'm not quite sure what.--Angus McIntyre
Experienced standards and specification developers do not rely on Schemas of any sort as the complete definition for an interoperable system. They provide an object model and/or API with the Schema.--L Claude Bullard on the XML-Dev mailing list
99 times out of 100, if you are using disable-output-escaping then you are doing something wrong.--David Carlisle on the xsl-list mailing list
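A minimal sketch of the mistake Carlisle means (the br element here is just an illustration): markup gets smuggled through as escaped text when the stylesheet could simply create the node.
<!-- the common misuse: forcing markup through as escaped text -->
<xsl:text disable-output-escaping="yes">&lt;br/&gt;</xsl:text>
<!-- what is usually wanted: a literal result element, no escaping games -->
<br/>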
I've always felt that SYSTEM identifiers were the way to go, and that this wasn't really a problem operationally. My feeling is that PUBLIC identifiers are a legacy concession to the SGML community, who had made good use of them, and a reflection of the fact that in that space, you can't do anything no matter how trivial without access to the DTD. But lots of smart people disagree with me and think that you should build your addresses around PUBLIC identifiers; history will tell. It is the case that the design of XML 1.0 is clearly biased toward the use of SYSTEM identifiers.--Tim Bray on the xml-dev mailing list
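For readers who have not met the two forms, this is the choice as it appears in a document type declaration; the book.dtd address is invented, while the XHTML identifiers are the W3C's published ones.
<!-- SYSTEM identifier: nothing but an address for the DTD -->
<!DOCTYPE book SYSTEM "http://example.com/dtds/book.dtd">
<!-- PUBLIC identifier: a well-known name, with a SYSTEM identifier as fallback -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">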
With XHTML we finally have a chance to get back to the strengths of SGML without the complexity; we have the chance to encourage browser vendors to supply us with tools that actually work instead of managing tag soup by adding bloat; we have the chance to correct the widespread corruption of HTML by people who don't want to learn anything more than how to make their pages *look pretty* in their browser; we finally have the chance to migrate from a hideous morass to a logical foundation; we finally have the chance to do search engine and knowledge management *right*; we finally have the chance to use markup the way it was meant to be used: as a structurally oriented means of applying semantics to otherwise meaningless content and leave presentation to the receiver. XHTML becomes the vehicle for the introduction of sanity into an insane world, by making up for the mistakes of the past eight years through reintroducing the concepts of validity, well-formedness, and semantic markup.--Steven Champeon on the XHTML-L mailing list
The current approach to XML's continuing development may in fact stifle more possibilities than it creates. If complexity grows faster than capability, the net benefits are only apparent to those for whom complexity is not a cost.
XML's initial rise had something to do with its approachability, and the tools which supported it. While the tools are continuing to improve, however, the overall approachability is (IMHO) declining.--Simon St.Laurent on the XML-Dev mailing list
It's not easy to answer the following question in a manner which most current, year 2000, web designers will find useful:
What will XHTML 1.0 do for me that HTML 4 doesn't do for me?
Without any answer to that -- from the perspective of the average front-line web designer (a perspective I feel is lacking in a number of circles, including the XHTML working group) -- there's no way that someone is going to want to suddenly pick up XHTML.--Kynn Bartlett on the XHTML-L mailing list
not very many ordinary (not in the field) folks (who probably spend the most money on Web-related books, technologies, classes and products for the very reason that they aren't in the market) who use the Web or build Web sites have heard of XHTML. Why not? Well, ask them and you'll hear things like, "Is it on Dreamweaver?" "Is it being offered this term at Houston Community College?" "XHTML...is that like HTML?" "My headhunter said I need HTML, what the hell is XHTML?"--Martin L. de Vore on the XHTML-L mailing list
The true genius behind simplicity is when it does not limit the complex from being possible. This is the balance we must all strive for as we continue to drive the evolution of XML and its related family of technologies.--Chris Lovett on the XML-Dev mailing list
Netscape has been heavily criticized for its deficiencies. These criticisms do seem a little one-sided however, given that the release of Netscape 6 is overwhelmingly good news for web developers, since support for things like the DOM means that DHTML will no longer have to be rewritten with each new browser release (in fact many people have criticised Mozilla for this - saying that because it doesn't support the document.all object, which is just as proprietary as Netscape 4's layers, it therefore does not support 'DHTML').
Given that Netscape's focus on standards has been unprecedented, these criticisms are unbalanced, if not entirely unjustified (obviously any standards support problem is undesirable). In particular, it is easy to take an open bug database and pick some bugs and say that for those bugs the product should not be released - the openness of Mozilla does not help it from a PR point of view - when in fact every browser, every product ships with bugs - it's just that when (for example) previous buggy versions of Netscape shipped, there was not an open database from which to search for its problems.
Furthermore, it is important to emphasize that for every linux distribution that comes with Netscape 6 rather than 4, for every Windows user that upgrades their browser, and for every Netscape/AOL product that comes with a Gecko-based rendering engine rather than Netscape 4, there is one fewer headache for web developers, and one step closer to the time when huge savings in terms of development time, efficiencies from using CSS properly, can be truly realized.
Rather than the impression many people have received from the general hype surrounding Netscape -- that Netscape 6 is not a standards-compliant browser -- the true picture is that it is the most compliant browser ever, with some unfortunate bugs that are highly noticeable thanks to Bugzilla. Those bugs notwithstanding, it is much, much, much better from a developer's point of view for end users to be using Netscape 6 rather than 4, and the only way that this will happen is for Netscape 6 to be released.--Matthew Brealey
Even if we don't get 100 percent compliance in the first 6.0 release, we will certainly have a browser that is more standards-compliant than anything we've ever seen on the Web. I can't imagine a Web developer on earth who isn't looking forward to that.--Jeffrey Zeldman, WaSP
If you look at how long it takes to write a software product from start to finish, it wasn't such a long time. We're proud of how quickly the product came out the doors.--Sol Goldfarb, Netscape
Too many designers still forget that the Web is not paper, and you cannot guarantee the same display experience for every visitor. Period. The name of the game is graceful degradation, and you do that best by being standards compliant.--Ann Navarro on the "Computer Book Publishing" mailing list
Anyone that designs purely for MSIE outside of a highly-controlled environment is poking a significant number of visitors in the eye.--Daniel Gray on the "Computer Book Publishing" mailing list
We actually discussed including world peace and an end to hunger as possible entries in the XML Schema requirements document, but decided against it in the interests of getting to REC in good time :-)--Henry S. Thompson on the xml-dev mailing list
Maybe a few years of trying to shoehorn "namespaces" onto everything might make some lightbulbs go on somewhere, but until then, I'm fondly recalling February 1998, when XML really *was* simpler than SGML ...--Jelks Cabaniss on the XHTML-L mailing list
English is much easier to learn poorly and to communicate in poorly than any other language. I'm sure that if Hungary were the leader of the world, Hungarian would not be the world language. To communicate on a day-to-day basis -- to order a meal, to book a room -- there's no language as simple as English.--Michael Henry Heim, UCLA
people say "business rules" when they mean "every constraint that cannot be represented by my schema language". So a business rule is a constraint (or function) that the database designer can leave to the applications programmers while (s)he concentrates on DDL or whatever.--Rick Jelliffe on the xml-dev mailing list
Let's pass on the SGML bias stuff. XML is SGML. Trying to make it seem otherwise is populist flame bait.--Len Bullard on the xml-dev mailing list
SGML is much simpler than XML. I can have an SGML entity with no explicit markup at all, and use tag minimization, short-refs and so on so that the parser implies the tags. That is simpler for writers: more WYSIWYG. HTML is simpler than XML for this reason.
XML is much simpler than SGML. The XML spec is shorter.
HTML is much simpler than XML. You don't need extra conventions and stylesheets to actually do anything.
XML is much simpler than HTML. Dave Raggett's tidy still has not managed to capture all the syntactic permutations in various HTML systems, but there are many pretty compliant XML parsers.--Rick Jelliffe on the xml-dev mailing list
I'm pretty much a standards-wacko myself, but if the shoe doesn't fit, don't wear it. There are a lot of folks asking tough questions about whether XML Schema really fits many problems.--Simon St.Laurent on the XML-INTEREST mailing list
I want a jack on the wall that says "World Here" and if I plug a computer into it, I'm on the net, if I plug a TV into it, I see a show, and if I plug my phone into it, I talk to mom.--Martin Focazio on the wwwac mailing list
the W3C is naive in its methodicalness, thinking it has all the time in the world to do things right, when the inevitable outcome is that the marketplace will forge ahead and do what's possible (useful or not) now, probably in a way that is not very well designed nor very compatible, but that raises the technology bar for everyone else as companies jockey for position. Anticipation of W3C (re)action is not a compelling substitute for the competition between applications in the free market, which can rarely afford to wait for standards to be handed down from an ivory tower.--Mike Brown on the xsl-list mailing list
If you've read the CSS documents - themselves pretty long - there isn't a whole lot of new material in XSL-FO. Fortunately, the differences are clearly explained, but on the whole I can't find much to get excited about in XSL-FO. The differences appear to be what excited the typographers, but implementing those new features in CSS wouldn't exactly be rocket science, either.--Simon St.Laurent on the XHTML-L mailing list
I can see XHTML potentially splitting the web into the "haves" (those organisations who are XML-literate) and the "have nots" (who are frozen with using HTML and scripting languages). The split won't be caused by XHTML 1.0 (because conversion of well-written HTML 4 into XHTML 1.0 is trivial, if potentially tedious) but by XHTML Modularization.--Andrew Watt on the XHTML-L@egroups.com mailing list
XML on its own does, essentially, nothing. Let's add the approximately 90 pages of the XSLT Recommendation to express our XML as HTML or XHTML. Then we can add the 500 pages of the XSL-FO draft if we want the potential advantages of Web and paper output from the same source data. And if we want to illustrate our pages with Scalable Vector Graphics (SVG) images we can add another 492 pages of reading. But, let's not forget that SVG has dependencies on Cascading Style Sheets (CSS), has a variant Document Object Model, and is dependent on the SMIL Animation drafts for animation, which in turn has dependencies on the SMIL 1.0 Recommendation and the SMIL 2.0 Working Draft.
Now, just who was it that claimed that XML was "simple"?--Andrew Watt on the xml-dev mailing list
Unlike relational databases, XML documents do not require a pre-defined schema. Thus an XML repository/information retrieval system could receive new documents containing new elements without breaking the system. Yes, with a relational database, you could simply add a new table, but that's a manual task of changing the *schema*. I'm not suggesting that XML is inherently more flexible when it comes to designing schemas. What I'm saying is that schemas are inherently inflexible. This is what the semi-structured data crowd has been shouting from the rooftops. XML describes its own structure.--Evan Lenz on the xml-dev mailing list
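A toy illustration with an invented contact vocabulary: yesterday's feed sends the first record, and today a sender adds a phone element. A repository that demands only well-formedness keeps accepting the feed unchanged, whereas the relational equivalent needs its schema altered before the new field can even be stored.
<contact><name>Ada</name><email>ada@example.org</email></contact>
<contact><name>Ada</name><email>ada@example.org</email><phone>555-0100</phone></contact>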
The W3C is neither a charity nor a corporation. Effectively, the W3C is the legislature of the Web, drafting laws that its members and others should follow. It is, of course, hobbled by lack of a court system and has only a tiny executive, but it does have law-making powers that affect far more than its members.
While many legislatures do have occasional executive sessions, much work is done with the public in the room - it's generally considered a hallmark of good government, and not just in the US. (Even the UN is opening up more and more.)
The W3C is presently accountable only to its members, who represent a tiny share of the people using its specs. While I'm very grateful that W3C public documents are published without licensing or reproduction restrictions, I continue to find it troubling that these influential documents are controlled by a vendor consortium with strict confidentiality rules and pay-to-play limits on participation.--Simon St.Laurent on the xml-dev mailing list
HTML was an important demonstration of some of the weaknesses in SGML, notably laxness in parsing as related to markup minimization. (in other words, not a botch, but a grand, magnificent, *splattery* botch).--Amy Lewis on the xml-dev mailing list
My advice to a new XML user would be to learn XML 1.0 itself, XML Namespaces, and (if she's a coder) at least one of the XML-related APIs. A glance at a Unicode tutorial might be a good idea as well.
After that, she should ignore the other specs until she has a serious problem that she cannot easily solve otherwise; if she never ends up reading RDF, SMIL, DOM, SAX, XML Schemas, XLink, XPointer, XSLT, SOAP, RSS, CSS, XHTML, XHTML modules, etc., then she didn't need them in the first place.--David Megginson on the xml-dev mailing list
Nobody (well, practically nobody except some over-zealous marketers) is telling people who want to use SGML that they can't. XML is not a replacement for SGML, it is something new that provides an attractive technology option for many people. XML has what most people need in a markup metalanguage most of the time. And that's wonderful! And for requirements not met by XML there's SGML.--B. Tommie Usdin on the xml-dev mailing list
Since when is the current web the be-all and end-all of user-facing Internet technology? And especially, since when is HTML the be-all and end-all of presentational markup? The amazing thing about HTML is how something so bad could be so successful, right? Sure, HTML makes it simple to do simple things. And this has got us the broad adoption we see today. But it's soooo hard to do hard things with HTML. This is why we need to move forward, and why XML is important. I don't think the portrayal of XML as a "new HTML" is just clever hype from the W3C. It's the real deal.--Matthew Gertner on the XML-Dev mailing list
XML really does make things *much* easier. I remember writing libraries to handle various data formats, and it was tedious busywork, and the resulting software was also often unreliable. Think about how many incompatible implementations of RTF there are. Remember that before XML, data formats were generally specified with text and examples, and rich data formats were hard to describe precisely.--Jonathan Robie on the xml-dev mailing list
In the beginning was the white space. And it was without content. Until the standards gods breathed on the data and there was ASCII but it was without form. Then came commas, then came round brackets, then came curly brackets, then came the pointy bracket, and SGML was good. Then came Berners-Lee to say, but SGML Is Too Hard. Eat the fruit of HTML and all the world will be the same. You will be as gods and nothing that is known will be unknown to you. Then came the Yahoo and it spake, "but it takes too long to find anything in this sameness. Before I know anything, I am as one dead or too bored to care."
And then came XML. Then came the begattings...--Len Bullard on the xml-dev mailing list
XML (Extensible Markup Language), so it has been claimed, is a "subset" of SGML (the Standard Generalized Markup Language). SGML, for better or worse, is notorious for its complexity and lack of accessibility. XML did, at least initially, remove some of the complexities of SGML but to promote XML as a "simple format" is profoundly misleading.
If XML ever was "simple", can it seriously be suggested that that remains true after the addition of SMIL, XSLT, XPath, RDF and XHTML and the imminent emergence of SVG, XPointer, SMIL 2.0, SMIL Animation, CC/PP, Canonical XML, XML Digital Signatures etc?--Andrew Watt on the xml-dev mailing list
The world can always vote with its feet and override the votes of any standards committee. If the XML family of technologies does indeed prove too obfuscated for the needs of industry, we can expect "YML" or "ZML" or whatever to come along and rectify the mistakes that we refuse to face up to. We've seen Java get a lot of acceptance by addressing the "mistakes" in C++, and we see C# trying to address the "mistakes" in Java. We already see JDOM addressing the "mistakes" of the DOM, RELAX addressing the "mistakes" of XSD, etc. The marketplace of money and ideas, not the W3C or ISO, will ultimately decide which specs prevail.--Michael Champion on the xml-dev mailing list
While the discovery of XML may be a religious experience for some, XML's primary virtue is utility. The few XML-related disaster stories I've run across involved sacrificing an engineer's common sense for a marketeer's enthusiasm.--Tyler Sperry on the Developer.com Express mailing list
the last 10% of conformance can cost dearly in performance.-- Michael Kay on the xsl-list mailing list
XSLT is good because it is a declarative, pseudo-functional programming language. No question that in this day and age, procedural coding at the developer/problem levels that one gets on the web is just a form of collective madness. Declarative programming has been the future for about forty years.--Bill de hOra on the XHTML-L mailing list
SAX has gone through two versions in the same amount of time that some of the W3C projects have gone from draft to recommendation. Technology is moving much faster than the W3C can manage to keep up with. The xml-dev group, on the other hand, is the technology (or at least the group that is driving technology) and therefore has driven SAX to keep up with everyone's needs. Giving SAX over to the W3C will most likely slow any future revisions to a crawl.--Seairth Jacobs on the xml-dev mailing list
One spec good, two specs bad. XSL-FO takes CSS warts and all: when I was reading through it I noticed it had adopted quite a few errors from CSS - it really doesn't make sense to have one spec mirroring the other; errors and differences will creep in and be a nightmare to maintain.--Matthew Brealey on the www-style mailing list
I'd _love_ to switch all of my writing over to XML, but my publishers still seem devotedly hung up on Word, each with their own template.--Simon St.Laurent on the "Computer Book Publishing" mailing list
XML+XSLT+CSS will be the market leader for all-days-life solutions. Industry/Defense will probably need XML+XSLT+XSL but the requirements can be very different: no need for dynamic browsers.--Daniel Glazman on the www-style mailing list
What happens with Apache and Mozilla and other Web-centric open source projects is WAY more important to the usability question than what happens on the desktop. (I should broaden that to say Internet-centric, because email, and soon chat-related applications, are going to be hugely important.) Heck, most people work in their Web browser and email client as much or more as they live in any traditional office applications.--Tim O'Reilly
We are probably only in year two or three of a 10- or 15-year build-out of an incredible network. Everything, including these light bulbs, will someday be connected to the Internet in one way or another.--Ed Zander, Sun
The judge decided to invent a new category of speech that does not enjoy First Amendment protection. Besides the old standards (libel, fraud, obscenity, incitement to riot and copyright infringement), the court's new category is, essentially, "anything that potentially threatens the profits of Time Warner and Disney."
That ought to scare the hell out of everyone. If the government can suppress information that is true fact -- as opposed to speech that has a direct effect like inciting people to riot -- then we're all in trouble.--David Touretzky
Nobody ever found "very clear answers" or even very clear statements in anything emanating from the TEI. It's a priesthood thing.--Michael Beddow on the xsl-list mailing list
We now have about thirty terabytes of archival material that we data mine. And that's 1.5 times the size of all of the books in the Library of Congress. So we're now at an interesting point, we're now beyond the largest collection of information ever accumulated by humans. We've gotten somewhere! We use as our original inspiration the Library of Alexandria. Because they were the first people that tried to collect it all. And they started to actually understand the intersection between completely different self-consistent belief systems. They knew what the Egyptians, Romans and Greeks, Hebrews, Hittites, Sumerians, Babylonians -- they knew the mythologies, because they had it all in one place. And they had the scholars to stare at it and try to make the disjunctions conjunctions and start to get an idea of what humans are. The dream is that we're in another one of those positions. They got up to five hundred thousand books. Of course, they were scrolls. The Library of Congress -- the largest library now -- is seventeen million. Only thirty four times more than what we had in 300 B.C. It indicates that the technology hasn't scaled. But now we've broken through into a new technology that allows us to bypass the Library of Congress in very little time, and the sky's the limit. What can we discover about ourselves as a species? As different peoples? Are we couch potatoes or do we actually have independent will? Do we have interests that go beyond the fifteen demographics of slotted marketing hell? And what we're finding is, people are interesting, diverse and peculiar. They are constantly looking for new things that are of interest to them.--Brewster Kahle
Flash content is inaccessible to blind and otherwise disabled users. Back when Flash was all about animation and motion graphics, Flash accessibility wasn't all that important. But Flash has evolved into an environment for building complete Web applications. Indeed, a growing number of sites are being built entirely in Flash.
To disabled users, these sites are saying, "We don't want your kind here. We don't need your business. Go away." That bothers me. If Barneys department store were to remove its handicapped ramps and bathroom grab handles, it would be sued. But its inaccessible Web site wins awards.--Jim Heid
I would recommend anyone beginning to learn XSLT now to avoid the old MSXML which came with IE5.0 and, surprisingly, with the recently released IE5.5 and focus on standards compliant XSLT.--Andrew Watt on the XHTML-L mailing list
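One concrete check, for what it's worth: a standards-compliant stylesheet declares the XSLT 1.0 namespace, http://www.w3.org/1999/XSL/Transform, as in the skeleton below, whereas stylesheets written for the old MSXML dialect use the draft http://www.w3.org/TR/WD-xsl namespace and will not behave the same way in a conforming processor.
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- a deliberately minimal stylesheet; the point is the namespace declaration above -->
  <xsl:template match="/">
    <xsl:apply-templates/>
  </xsl:template>
</xsl:stylesheet>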
If you are interested in using XML as a single consistent data markup method for eBusiness, then vocabulary inconsistencies should scare you right down to your core.--Mark Crawford on the XML-L mailing list
Playing with building blocks is more profound intellectually than anything a child could do in front of a computer.--Alison Gopnik
Schemas are not the salvation for the world of Markup Languages, just as DTDs aren't the embodiment of evil.--Ann Navarro on the XHTML-L mailing list
If anyone on this list believes that XHTML 1.0--much less modularization and other concepts under discussion--is being employed and studied by, or in some cases even known to the majority of professionals in the industry (excluding the high-end developers and standards-oriented few), it's time for me to pour you a fresh cup of my extra-strong coffee.--Molly E. Holzschlag on the XHTML-L mailing list
XML strikes me as being somehow the revenge of IT departments, which were taken completely by surprise by the success of the web: they found that users had created web sites, including a huge amount of ad hoc CGIs in whatever language they felt like using. XML offers those IT people a wonderful opportunity to wrestle the power back from the hands of unsuspecting users. XML developments, being more complex than mere HTML generation with CGI.pm, follow the "standard" IT procedures, including using the standard languages in the organization. Especially with wonderful additions such as namespaces, schemas, XLink, etc., which will discourage even the most stubborn non-programmers.--Michel Rodriguez on the XHTML-L mailing list
building on Schemas is basically building on a fault line. While some people do in fact love it, a remarkable number of people don't.--Simon St.Laurent on the XHTML-L mailing list
Everything you can do in Zope you can do faster and with smaller efforts in PHP (our favourite) or a similar product (pick one: Perl; ASP; JSP; Python; or pick again: Zend; Vignette; ColdFusion).--Catalin Braescu on the WWWAC mailing list
Short of getting rid of the ability to link to Web images from Word documents, there really is no solution to being able to track Word documents using Web bugs.--Richard M. Smith
A processor that does not faithfully reproduce the whitespace characters in an xsl:text element is not a conforming XSLT processor.--Norman Walsh on the xsl-list mailing list
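For instance, in a template like this sketch (invented element names, producing tab-separated text output), the tab and newline written inside xsl:text must reach the output exactly as given:
<xsl:template match="item">
  <xsl:value-of select="code"/>
  <!-- &#9; is a tab, &#10; a newline; a conforming processor passes them through untouched -->
  <xsl:text>&#9;</xsl:text>
  <xsl:value-of select="price"/>
  <xsl:text>&#10;</xsl:text>
</xsl:template>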
I can tell you for a fact that Dreamweaver 3.0 does not correctly read some forms of HTML structure very well, particularly div tags and span tags. Sure, you can get around things like this if you want to waste a lot of time and build it all from scratch in Dreamweaver, but in an environment where everyone else is hand coding I really don't think it makes sense to drop in your devout Dreamweaver user, especially if the code is CSS heavy. Haven't you ever opened an HTML doc in Dreamweaver only to find that half the page is lined in bright yellow error lines?--Giulio Pellegrini on the wwwac mailing list
The Infoset is the unfortunate standard to which those in retreat from the radical and most useful implications of well-formedness have rallied. At its core the Infoset insists that there is 'more' to XML than the straightforward syntax of well-formedness. By imposing its canonical semantics the Infoset obviates the infinite other semantic outcomes which might be elaborated in particular unique circumstances from an instance of well-formed XML 1.0 syntax. The question we should be asking is not whether the Infoset has chosen the correct canonical semantics, but whether the syntactic possibilities of XML 1.0 should be curtailed in this way at all.--W. E. Perry on the xml-dev mailing list
The whole point is that the infoset does *not* limit the *syntax* of XML documents; rather it specifies what variations in syntax are "significant" and what aren't. Insignificant variations in syntax (such as the use of a character reference rather than a literal character, or different orders of attributes) map many-to-one into single infoset contributions. This isn't the same goal things like SML and Common XML have, which is to constrain the range of syntactically-equivalent forms.--Eric Bohlman on the xml-dev mailing list
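A pair of documents shows the distinction Bohlman is drawing (doc is an invented element): they differ in attribute order and in whether the accented character is a reference or a literal, yet they yield the same infoset, so an infoset-level application cannot tell them apart.
<doc a="1" b="2">caf&#xE9;</doc>
<doc b="2" a="1">café</doc>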
A document type *definition* does *not* specify a document's root element; it's a document type *declaration* that does. Of course, the declaration needs to specify an element defined in the definition, but that's pretty much it. So, for example,
<p>This is a paragraph</p>
*is* valid according to the XHTML DTD *if* you tell the processor to treat <p> as the root element. Nothing in the XHTML DTD demands that <html> be the root element; that knowledge comes from outside the DTD itself.--Eric Bohlman
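In instance form, the point is that a complete document like the following is valid against the XHTML 1.0 Strict DTD, because its document type declaration names p as the document element, even though it is plainly not a conforming XHTML page:
<!DOCTYPE p PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<p>This is a paragraph</p>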
new specifications that normatively reference XML 1.0 should also normatively reference XML Namespaces--Richard Tobin on the xml-dev mailing list
XML-RPC gives you enough rope to hang yourself, but SOAP gives you enough rope to hang yourself and a lot of other people too. Rope is, of course, useful for other things as well!--Simon St.Laurent on the xml-dev mailing list
Namespaces do a very good job of what they were designed to do - give elements and attributes globally unique names that allow software to recognize and differentiate. They have been implemented by pretty well every interesting piece of XML software in the world (hint: it's not hard) and in practice, in real-world implementations, just work.
Yes, namespaces and DTDs don't get along very well. At the end of the day, the conclusion was that uniquifying names in a simple and clean way was an important enough piece of the puzzle that that was an acceptable cost. Although, to be honest, nobody foresaw how far into excessive overengineering schemas were going to veer...--Tim Bray on the xml-dev mailing list
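The "recognize and differentiate" job looks like this in practice (both namespace URIs are invented for the example): software keys off the pair of namespace URI and local name, so the two title elements never collide even though their designers have never heard of each other.
<record xmlns:bk="http://example.com/ns/book"
        xmlns:hr="http://example.com/ns/staff">
  <bk:title>Hamlet</bk:title>
  <hr:title>Staff Engineer</hr:title>
</record>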
XML took a lot of static in its early days because it was "just syntax" - there are certainly a lot of people who want to think only in terms of object models (groves, DOMs, whatever) and see the syntax as disposable fluff. Me, I think syntax is crucial. Because describing data structures in a straightforward, interoperable way is really hard to get right and very often fails. At the end of the day, if you really want to interoperate, you have to describe the bits on the wire. That's what XML does.
Think of it another way... a promise like "my implementation of SQL (or posix, or DOM, or XLib) will interoperate with yours" is really hard to keep. A promise like "I'll ship you well-formed XML docs containing only the following tags and attributes" is remarkably, dramatically, repeatably more plausible in the real world.--Tim Bray on the xml-dev mailing list
XML is a serialization of a logical document structure defined by the XML Infoset. The Infoset uses the DOM as an API. If an XML document is defined by the character stream, the document is also defined by the SAX event stream (which may result from parsing the XML document, but also may result from another event source).--Jonathan Borden on the xml-dev mailing list
Not that SAX and the DOM and so on aren't good things. But at the end of the day, de facto and de jure, XML is syntax.--Tim Bray on the xml-dev mailing list
XML Schema seems to be in that worst-of-both-worlds state. On one hand the tool vendors say they can't support it because it's not an approved standard (so we'll have to produce DTDs for the XML editor, since none of the candidate products with a customization API sufficient for our requirements has any Schema support). On the other hand, the working group says the specification documents are in the "Last-Call" stage, and that no further changes in functionality can be expected.--Bob Kline on the xml-dev mailing list
no matter how much I love independent bookstores, they just weren't getting the books to the people. In many smaller cities across America, the Borders and B&N megastores represent the first time there's been a decent selection of books available. I love the fact that B&N means that worried gay teenagers can read XY magazine, even if they live in Kansas City. I love the fact that B&N means that 2600 magazine is available nationwide. I love the fact that I can buy an XML reference manual at 11:30 PM in my neighborhood... before B&N, even in New York City, you had to go to McGraw Hill in midtown for good computer books, and they closed at 5 PM promptly.--Joel Spolsky
We have this notion of "attributes" in C# that allows you to add declarative information to types and members. Just as you can say a member is public or private, you also want to be able to say this one's transacted, or this one's supposed to be a Web service, or this one is supposed to be serializable as XML. So we've added attributes to provide this generic mechanism, but then we utilize it in all of our Web services and XML infrastructure. We also give you the ability to put attributes on classes and on fields in your classes that say: "When this class goes to XML, it needs to become "this" tagname in XML and it needs to go into "this" XML namespace." You want to be able to say a specific field in one place becomes an element, and that another becomes an attribute. You also want to control the schema of the XML that goes out; control it where you're writing your class declaration, so that all of the additional declarative information is available. When attributes are properly used in this way to decorate your C# code, the system can simply turn a specific class into XML, send it over the wire, and when it comes back we can reconstitute the object on the other side. It's all done in one place. It's not like additional definition files or assorted infos and naming patterns. It's right there. It gives you statement completion when you build it in the IDE, and we can then provide you with higher-level tools that do the work for you.--Anders Hejlsberg
If you use JDOM you won't need JAXP. JAXP is primarily a crutch needed for DOM.--Jason Hunter on the jdom-interest mailing list
I hope that the Napster experience teaches Congress that next time they alter the copyright law, twenty million voters have stated by their daily actions that they want it looser, not tighter -- despite the dollars dangled by pro-copyright lobbyists.--John Gilmore
XML will become devalued and meaningless (although not useless) unless everyone sticks to pure XML. If the goal of XML is to improve the intercommunication of products within the computing world then it will only work if the rules are followed.--Anthony Channing on the xml-dev mailing list
MS DOM may not parse the XML directly (although the load() method hides from the user the fact that it is using a separate parser to read in the XML), but it does "provide access to the content and structure" to various XML applications. It does not do anything (as far as I know) with the XML aside from provide access to the content and structure - it does not *apply* the XML in any way: it does not display it, it does not process it. Thus, even though it sits on top of another XML processor, this does not mean it is not an XML processor itself.--Jeni Tennison on the xml-dev mailing list
The frustration with the O2K format is over the embedding of XML chunks (excuse me, "islands") within strange MSHTML markup that makes any XML parser choke. (And I don't care if Navigator doesn't choke on it--it's not standard HTML because it's not W3C HTML, but a proprietary extension of it.) Why does Microsoft brag about their use of XML in Office if they have erected barriers to the use of this XML by others? Because it's such a trendy standard? It comes off as trying to take credit for providing the advantages of the trendy standard without actually doing so. Of course, this is what marketing people are paid to do.--Robert DuCharme on the xml-dev mailing list
Better bad XHTML than good RTF.--Claude L. Bullard on the xml-dev mailing list
Microsoft have undoubtedly caused a great deal of confusion in the marketplace by shipping a general release mass market product that implemented an approximation to an early draft of a standard, and describing the product as if it implemented the standard.--Michael Kay on the xsl-list mailing list
We considered that this behavior is licensed by the following language in the XML Rec:
# Note: The colon character within XML names is reserved for
# experimentation with name spaces. Its meaning is expected to be
# standardized at some future point, at which point those documents
# using the colon for experimental purposes may need to be updated.
In the WG's view, the meaning of colon has now been standardized, and documents using it in non-Namespace-conformant ways deserve to lose.--John Cowan on the xml-dev mailing list
If a million people use a Web site simultaneously, doesn't that mean that we must have a heavy-duty remote server to keep them all happy? No; we could move the site onto a million desktops and use the internet for coordination. The "site" is like a military unit in the field, the general moving with his troops (or like a hockey team in constant swarming motion). (We used essentially this technique to build the first tuple space implementations. They seemed to depend on a shared server, but the server was an illusion; there was no server, just a swarm of clients.) Could Amazon.com be an itinerant horde instead of a fixed Central Command Post? Yes.--David Gelertner
it's plain that the W3C's vision of URIs is heavily based on using them to name *abstractions*, and that a good part of the URI community is having a hard time fitting this into their mental models. I'm starting to wonder whether overloading the notion of a URI (particularly in the URL form) to encompass both addressing sequences of bits and identifying abstract statements in a form of higher-order-logic is really a good idea. Maybe the distinction between floor waxes and dessert toppings actually serves a useful purpose in practice if not in theory and shouldn't be blurred solely in the name of mathematical elegance. Imagine a programming language in which all arithmetic had to be done in terms of set-theoretic primitives. It would be extremely elegant, but not very useful.--Eric Bohlman on the xml-dev mailing list
This point needs to be re-emphasized, especially to the folks who WROTE these specs and know what they really say and really mean: you've created a monster that has escaped from the lab :~) Whatever the intent and wording of the URI spec, people use URIs (well, URLs anyway) everyday and *think* that they know exactly what they are. For better or worse, many many web developers are horribly confused by the notion of an HTTP URI that identifies an abstract resource rather than a concrete piece of data.--Michael Champion on the XML-Dev Mailing list
.NET is a desperation "Hail Mary" pass attempt over the heads of the DOJ, the PDA/wireless market that has scorned Windows, the light client/application service provider business model that potentially threatens to make the Windows API irrelevant to enterprise developers, the consumers not upgrading their Windows systems and applications as soon as the "new, improved" version is available, and the general sense that MS is no longer at the cutting edge. But how anyone but MS and its close allies would benefit if they catch this pass is not a bit clear to me. And why the rest of the industry would let them catch it boggles the imagination.--Michael Champion on the xml-dev mailing list
So why is the DOM API so bloated and ugly? Why doesn't it use Java collections? Why are there so many non-obvious steps required just to parse a document?
I already know why, because you told me. W3C's job was to make several different implementations happy with one API. The result is an API that doesn't make anyone happy.--Eric Hodges on the general@xml.apache.org mailing list
Dependence by users on proprietary technologies in a Web-based production environment does not support the Web. Users frequently suffer when relying on proprietary technology that is not available across browsers, or even different platforms for the same browser. Companies are best advised to implement W3C specs and work on the development of new ones.--Janet Daly
SQL Server 2000 was basically a discussion of the merits of loose coupling applications. AKA, flat file integration. Screw that. It's just an excuse to scream XML everywhere. MS is holding their heavy use of XML (the MS way) as proof they are open, while at the same time engaging in this anti-competitive Java crap.--J. Scott Bushey
having XML support that doesn't bind to the DOM, or that doesn't allow styling via CSS, is meaningless. So, the question of "which comes first" is meaningless as well: you need to have an acceptable level of support for all three--Steven Champeon on the xhtml-l mailing list
in the world of standards there is basically only the choice between commercially-dominated groups, Western/English-dominated groups or ISO and national standards bodies. The idea that somehow W3C is a good ground for simplicity or "technology for the rest of us" is belied by the facts: look at the current generation of WDs, CRs and LCs.--Rick Jelliffe on the xml-dev mailing list
HyTime was only terrible if you tried to pick it up without the help of a book explaining it such as deRose and Durand's book. Because it didn't become mainstream, and because the hypertext community suddenly expanded a millionfold forcing a devolution phase (one reason for devolution is a sudden explosion of members forcing conservation of resources), it hasn't been given that much attention by writers. Unless XML Schema has critical flaws, I expect that it will pick up more support once some good books are available, but not before.--Len Bullard on the xml-dev mailing list
XML via ADO finally, however, all the damn demos were in C#, yet almost no one knows it. They are forcing developers to learn their new language simply to understand sample code in order to implement all these new features in their language of choice...I hate being forced into something like this. It does the developers, their customers and employers, and the industry in general a grave disservice, and makes these sessions far less useful. During every session the presenters were told to mention the "excitement in the air." Why? Because Gates got so little applause, and no one was very excited. It was ominous, to tell the truth. In fact, there were people sleeping in the aisles during the presentations. I guess MS still doesn't realize that just because they order the troops to maintain a lie, it doesn't become truth. Either that, or we were being ordered to be excited about this stuff.--J. Scott Bushey
Many of the problems originally noted in IE 4 remain unfixed in IE 5.5. The company has delivered interesting new technology, but has not finished the job on the Web standards it already supports (somewhat incompletely), and has not committed to a timeline for delivering those missing pieces, or for fully supporting XML and the DOM.--Jeffrey Zeldman
...before XML came along, developers either used EDI standards like X12 or they used proprietary formats and custom programs to handle them (usually something simple like tab-delimited or fixed-length columns). The latter was orders of magnitude more prevalent. There were plenty of people saying the whole world should be using X12 (or EDIFACT or whatever), but few did because the barriers to entry were too high. The standards were too difficult to understand for novices and too difficult to implement. Available applications were expensive. So most IT shops simply wrote custom point-to-point interfaces, usually low-tech scripts shuffling files around via FTP.
Today, most developers using XML are still just trying to solve business problems. They are still writing simple low-tech point-to-point interfaces rather than buying expensive middleware products. But more and more of them are using XML for one simple reason: it's easy to understand, and it's easy and cheap to implement. XML is an easy sell for these reasons.--Michael Brennan on the xml-dev mailing list
I have become increasingly worried about where the XML world is headed. The complexity seems to be growing exponentially and the XML schema stuff is about to bless the complexity for all time...
Something ain't right.
XML is about describing hierarchical data structures, dammit! How hard should that be?--Sean McGrath on the sml-dev@egroups.com mailing list
> The "paperless office" is a bad idea because paper is one of the most useful and valuable media ever invented.--David Gelernter
A development manager who says "I promised it with the specified features and within budget by the end of the second quarter, so it will ship on June 30th or heads will roll" is essentially saying "it's OK to have flaws". He/she may deny it, but we all know that that's the reality.
Likewise, if the Schema spec goes to Recommendation status before there is extensive implementation experience by *independent* developers and *proof* that the independent implementations of the spec interoperate cleanly, then the W3C is essentially saying "it's OK to have flaws ... we'll fix them later ... but we have to get the spec out now [for some reason or other]."--Michael Champion on the xml-dev mailing list
A validating parser checks that the document conforms to whatever rules the sender wants it to conform to, not that it conforms to the rules required by the recipient. Since DTDs are only capable of expressing a small subset of the application-level validity rules anyway, I've found it easier in practice to do all the validation at application level.--Michael Kay on the xml-dev mailing list
The problem is that we're not providing the tools. We're providing the specs. That's a whole different ball game. If tools existed for actually making really interesting use of RDF and XLink and XInclude then people would use them. If IE and/or Mozilla supported the full gamut of specs, from XSLT 1.0 to XLink and XInclude (OK, so they're not quite REC's, but with time...) then you would find people using them more. Especially XLink. Bob DuCharme's talk at XMLDevCon about XLink was interesting, but without browser (or even any other application) support, it's a fairly dull technology.--Matt Sergeant on the xml-dev mailing list
Would it be too much to ask that MS put a big flashing red warning on all their XSL pages warning newcomers that "XSL" in IE5 documentation does not mean the same thing as "XSL" anywhere else?--David Carlisle on the xsl-list mailing list
We are going to celebrate the first anniversary of XSLT/XPath Rec, and there is still no sign of whether it goes to replace MS-XSL in IE. In the meantime, MSXML 2.0 usage proliferates, generating confusion; and Microsoft does not seem to care about it. I don't think this can be explained by technical time required to write code/docs: Michael Kay alone built a 100% compliant implementation and wrote an excellent book (that includes a detailed documentation of MSXML ;-)). I wonder if MS developers aren't motivated enough to work as hard as Michael?--Nikolai Grigoriev on the xsl-list mailing list
> we never seriously considered supplying the text for online publication as HTML. We knew from ten years of editing the OED on screen how helpful it was to be able to search the text using the information on structure stored in the generalized markup language: for example, we had found it useful to search for a word just within etymologies or just within illustrative quotations. We wanted our online readers to have similar search facilities. All that useful information on text structure would be lost if we converted the text to HTML, because HTML is only good at storing information about format (bold, italic, etc) and about links to other web pages - OED's authors and cross-references, quotations and definitions, would all have disappeared back into an undifferentiated blur of text distinguished only by typeface.--Laura Elliott
One reason why some people want to deny that XML is a subset of SGML is because they want to allow W3C, an institution paid and dominated by the largest companies in the world, to be able to alter XML willy nilly as it suits them. Of course, ISO committees are also dominated by commercial concerns, but to a far lesser extent.--Rick Jelliffe on the xml-dev mailing list
XSLT is a declarative language, not a functional language.--Matt Sergeant on the xsl-list mailing list
If BT-PLC thinks hyperlinking technology prior art starts with the WWW, they are hopeless and seeking a full employment economy for lawyers. This has been tried already and is frivolous. Hyperlinking is unpatentable.--Len Bullard on the xml-dev mailing list
The XML specification makes it clear that all character data must be passed to the application. This includes leading and trailing whitespace, even when xml:space="default". If you want to strip leading/trailing whitespace, then that must be done in the application.--Andy Clark on the xerces-j-dev mailing list
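As the quote says, trimming has to happen on the application side. A minimal sketch of one way to do that, assuming the document has already been parsed into a DOM tree (the class and method names here are our own, not part of any parser API):

    import org.w3c.dom.Node;
    import org.w3c.dom.Text;

    public class WhitespaceTrimmer {
        // Recursively trim leading/trailing whitespace from text nodes,
        // dropping nodes that contain nothing but whitespace. The parser
        // itself must report all character data unchanged, so this kind of
        // cleanup belongs in application code.
        public static void trim(Node node) {
            for (Node child = node.getFirstChild(); child != null; ) {
                Node next = child.getNextSibling(); // grab before any removal
                if (child.getNodeType() == Node.TEXT_NODE) {
                    Text text = (Text) child;
                    String trimmed = text.getData().trim();
                    if (trimmed.length() == 0) {
                        node.removeChild(child);
                    } else {
                        text.setData(trimmed);
                    }
                } else {
                    trim(child);
                }
                child = next;
            }
        }
    }

Whether this is safe depends entirely on the vocabulary: in mixed content, or wherever xml:space="preserve" is in force, that whitespace is significant and should be left alone.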
I thought our tools would sell really well, but sales have been sluggish, and the reason is primarily that in spite of all the interest, most real-world application developers still don't understand what you can do with XML and how you can do it.--Dale Hunscher on the sml-dev mailing list
> If you read most privacy policies, there's nothing private about them. So the fact that we can now encode those in a machine language hasn't fixed the privacy problem. All that means is bad privacy policies can now be written in a form that your computer can read.--Austin Hill, Zero Knowledge
> In the software industry, things were very balkanized. With Windows, Netware, OS/390 (and many other operating systems), you slice the skills fairly thinly, with people taking special courses in their particular dialect. But it's easier if you need someone who speaks (an industry standard such as) HTML or TCP/IP. You go to any high school and you hire some good computer scientists.--Irving Wladawsky-Berger, vice president of technology and strategy at IBM
It's not the DTD's job to specify a document's root element; it's the doctype declaration's job. This flexibility is a Good Thing--it's the reason that XML (and SGML) have lent themselves so well to electronic publishing systems in which different elements were mixed and matched to create different documents all conforming to the same DTD.--Robert DuCharme on the xml-dev mailing list
Cocoon is THE tool of choice of the moment for implementing a web production process that strictly separates content from style and from logic. In other words, Cocoon seems to have been designed with a large, complicated, heavily trafficked web site in mind. At least at the conceptual level, Cocoon lends itself to this demanding application in a very natural way.--Alessandro Bottoni on the general@xml.apache.org mailing list
> Microsoft's prosperity has largely been a function of dominance -- made possible, in large part, by its control over key technologies into which other companies must plug their own products. That leaves many people, including me, wondering whether there's a catch in Microsoft's pledge of openness in these emerging Web rules of the road. Maybe the company is telling the absolute truth, but third-party suspicion has been well earned.--Dan Gillmor
> It will be critical that the implementation by browser software and by user agents sets the default as a pro-privacy rather than a pro-surveillance standard. And P3P doesn't address what those defaults are.--Joel Reidenberg, Fordham University School of Law
Web design has lost most of its fun aspect and has become just another boring job. As an aside, I think the fact that so many designers realize Web design is boring is one of the major reasons we're seeing such an interest in non-boring-but-generally-not-necessary-to-use technologies like Flash.--Vincent Flanders on the Computer Book Publishing mailing list
SVG strikes me as beautiful but dangerous, promising designers QuarkXPress-like layout without any of the flexibility or semantic markup of XML or even XHTML.--Simon St.Laurent on the XHTML-L mailing list
> we have conducted XML 1.0 conformance tests using the suites developed by OASIS and the US NIST. In our last round of tests we found the big vendors -- Oracle and Microsoft -- lagging a significant amount behind various open-source XML parsers.
The inference here is obvious -- vendors with a platform lock-in don't view interoperability with as much importance. As users though, we need to demand it. Poor interoperability even at the most basic level limits who you will be able to do business with, and increases your costs of communications. Until we really start demanding conformance, we won't get it.--Edd Dumbill
> Because no security is required in either HTTP, XML, or SOAP, it's a pretty simple bet that different people will bungle any embedded security in different ways, leading to different holes on different implementations. SOAP is going to open up a whole new avenue for security vulnerabilities.--Bruce Schneier
It is frustratingly difficult to find this kind of definitive information when the Internet RFCs refer to expensive ISO publications. It would seem to undermine the intent of standardization, especially on the Internet, to hold the standards for ransom. It is as if the IETF says "We think everybody should be following these standards on the Internet. If you want a copy of the standards please send hundreds of dollars to Switzerland and someone will mail you a set of paperweights."--Mike Brown on the Unicode mailing list
the effort in XML was in reviewing, restating, rebranding and relaunching an existing technology, not in original work. Jon Bosak is the father of XML, but only as a foster father; Charles Goldfarb et al (from IBM, GCA, ISO, etc.) developed almost all the ideas and forms.--Rick Jelliffe on the xml-dev mailing list
FO can work, in its trivializing way, because people have learnt not to care. `Hamburger and coke' HTML has taught them that. We are, I predict, entering a period of rapid decline in typesetting standards.--Sebastian Rahtz on the xml-dev mailing list
> For all practical purposes, over the next three decades, the cost of computing and storage will stay as close to nothing as we can imagine.--Bill Joy
> the current technology is rapidly turning the whole idea of copyright into a risky proposition -- not quite a joke, but something close to it.--Stephen King
> I want to be able to use other peoples' programs in ways they didn't intend.--Tim O'Reilly, JavaOne keynote
> Now that a breakup seems more likely, the omigod chorus has started singing -- "Omigod, consumers will be forced to make actual choices now, which is just a terrible idea because life is so much better if one company does everything for us." This is the Ma Bell argument, the notion that life was so much better when only one phone company did everything for us.
Ma Bell, you'll recall, was a regulated monopoly, and for excellent reason. I don't hear the save-us-from-complexity crowd making this connection. They'd grant a monopoly to Microsoft and give the company permission to keep abusing it, all in the name of making the software trains run on time. History is irrelevant to these people, and if they get their way, they'll regret it.--Dan Gillmor
What worries me is that XSL FO's parents are DSSSL and CSS. Not a very healthy start. DSSSL was never accepted by any major player, or implemented to the extent one could do top-quality typesetting with it; and CSS, well what can one say, it's like one's more embarrassing relatives, those you hope keep quiet when the boss comes to dinner.--Sebastian Rahtz on the xml-dev mailing list
If you haven't understood xsl:apply-templates then you haven't understood XSLT: it's rather fundamental.--Michael Kay on the xsl-list mailing list
the "HTML" saved by Office 2000 is an ugly mix of XML, ill-formed HTML, scripts, and if statements inside of square braces.--Robert DuCharme on the xsl-list mailing list
It sounds as if your application was written by the mythical character whom we thought had disappeared from the scene - the Desperate Perl Hacker. The defining characteristic of the DPH is that he doesn't use an XML parser because he thinks it's an overhead or because he thinks it's only three lines of code to do the parsing himself. The result is an application that rejects perfectly correct XML documents, thus causing immense aggravation to the people who write the code that generates those documents (you).--Michael Kay on the xsl-list mailing list
> New protocols - including but not limited to those using HTTP - should not attempt to circumvent users' firewall policies, particularly by masquerading as existing protocols. "Substantially new services" should not re-use existing ports.--Keith Moore
The document you are producing, if XML or HTML, is a stream of bytes that represent a physical sequence of UCS characters that in turn represent an underlying sequence of abstract UCS characters. It is the underlying sequence that is the essence of your document; parsed general entity references are part of the physical layer and do not change the meaning of the document as far as an XML or HTML application is concerned.--Mike Brown on the xsl-list mailing list
SOAP is just a way to get understandable messages back and forth (from *anything*) - what you do with the message that we craft is up to each of us.--John D. Gwinner on the xml-dev mailing list
XML 1.0, Second Edition, will incorporate the published errata into the text of the XML Recommendation. It is not XML 1.1. It is not considered a substantial change in XML.--John Cowan on the xml-dev mailing list
I'm starting to get a little impatient with people who "blame" others who are donating their free time (nights, weekends, whatever) to deliver a free product...--Keith Visco on the xsl-list mailing list
as it stands at present, the eager punter cannot compare the 3 available products, because their test files are mutually incompatible:
- XEP does April 1999, pretty well
- FOP does a mixture of drafts, quite well
- PassiveTeX does only March 2000, not bad in some respects
so someone evaluating XSL FO is going to be deeply unimpressed by the whole shebang, and probably enter the Hell of CSS.--Sebastian Rahtz on the xsl-list mailing list
Offering the world a nice tool based on an old draft of the spec is, IMHO, a mistake. Waiting until it supported the current spec would have been much better....
Which of the two scenarios is better?
a) people like the thing, get stuck into it, and we start having these nonsensical conversations about "oh you mean *that* draft of the language"
b) no-one uses it, as the language it implements isn't currently documented, and RenderX fail to get the feedback they deserve.--Sebastian Rahtz on the xsl-list mailing list
> HTML 4 - the LAST HTML - includes dozens of accessibility improvements, and it is insane for any company not to fully support that. Without full support for HTML 4, millions of web users get hurt. That's morally wrong, and it's also just plain bad for business. I think that in time, all browser companies, including Microsoft, will come to see that.--Jeffrey Zeldman
The SGML/XML semantic model may be unchanged, but it is, in fact, being replaced with something that doesn't work in the public interest, under the banner, "Down with those #$% DTDs!" We need to acknowledge that fact. It is not helpful to retreat into statements like "There is nothing semantic about XML Namespaces", when in fact most people use names in order to label things meaningfully. Most people literally can't imagine using names for any other purpose.--Steven R. Newcomb on the xml-dev mailing list
> I've never liked secrets. Secrets generally mean somebody's getting screwed. If you don't know who's getting screwed, then it's likely to be you.--Michael Robertson, CEO MP3.com
> The next stage is full separation of content from structure, and that means using HTML 4 and CSS (and eventually, replacing HTML 4 with XHTML; and eventually, migrating to XML).
We can't safely do that yet. Gecko is still in development, Netscape 4 has appalling "support" for CSS, IE5/Windows has better but far from complete support, and the only released browser that gets it right - IE5/Mac - has a 6% market share.
SOON. Not soon enough, but SOON, we will look back on this era of stupidity and laugh. Oh, how we will laugh. This is the TRON era and we are striving to reach the MATRIX era.--Jeffrey Zeldman
> Napster is usually seen as yet another nail in the coffin of the legacy media industry. It is certainly a good example of the need to start charging micropayments for content and eliminating copy-protection.
More important, however, Napster is also an example of reverting to the original networked ideology of the Internet. The Web has degraded into mainly being a one-to-one environment instead of the many-to-many environment that is inherent in the Internet. Most websites simply show their stuff to one user at a time. (Yes, that's what I do as well - this very column is a perfect example of very traditional use of the Web.) The few examples that are many-to-many, such as eBay, have been uncommonly successful.--Jakob Nielsen
SOAP/XML-RPC can (and will) be used to implement things in stupid ways that leave security holes; just like their moral equivalents, the CGI scripts of the world. But, unlike for example Outlook, using SOAP in the default way as provided out of the box is not guaranteed to make your computer vulnerable to vicious attacks by bored teenagers.--Tim Bray on the xml-dev mailing list
well-formed XML is likely to become the format in which active content -- including maliciously active content -- is sent over the wire. Even today, I could attach an XSL stylesheet containing an infinite loop that would cause the original IE5 XSL processor (and possibly other processors) to fail with a stack overflow.--Tony Graham on the xsl-list mailing list
Minimal-XML breaks compatibility with each and every XML spec except for XML 1.0. Namespaces, XPointer, XSLT, ... Minimal-XML also breaks compatibility with many XML languages - XHTML, RDF, ... It is true that there are many XML languages compatible with Minimal-XML, but it is not at all clear that these languages will remain compatible with Minimal-XML once they hit actual use in real XML systems. Take namespaces as an example. Namespaced XML-RPC is _not_ Minimal-XML.
In addition, Minimal-XML is _weak_. Every time we tried to use it to build "industrial strength" frameworks, we hit upon the brick wall of "no mixed content, no attributes". Giving them _both_ up seems to have significantly reduced its usefulness for such tasks.
So, given all that, is Minimal-XML still justified?
I'm becoming less and less convinced that it is.--Oren Ben-Kiki on the sml-dev mailing list
> Metallica is invading its fans' privacy, challenging the ability of others to move freely and privately about the Net and the Web -- perhaps the hallmark social, creative and educational feature of the Internet. The band's action will not improve the life or work of a single artist. It will advance the interests of the greedy and invasive corporatists moving aggressively to turn the Net into the cultural and commercial equivalent of a Disney theme park.
Artists have the right to fight for their interests. But Metallica's move against hundreds of thousands of music lovers is outrageous. It needs to be fought tooth and nail.
Step One: Let's Shut Down Metallica's attacks on computer users, not Napster. Stop buying the band's music. Urge everyone you know to do likewise until Metallica calls off its legal Rottweilers, leaves kids downloading music alone, and agrees to slug the issue out in court and other venues where it belongs.--Jon Katz
> The DMCA is a perfect example of the harm done when business dominates government and society. One part of the law explicitly says that only commercially significant activities are considered important (to legitimize a program which is often used to bypass technological means of controlling the users)--showing explicit prejudice against educational uses, recreational uses, communitarian uses, military uses, and religious uses.--Richard M. Stallman
for every group prepared to fight the good fight and take a spec through the relevant bodies, there are also those prepared to take it into the world via the marketing department. The fragility of the web is ever-increasing. Everything we build on top of something that's not-quite-conformant becomes even more fragile.--Edd Dumbill on the xml-dev mailing list
> "Microsoft XML" is demonstrably different from official XML. I've had the unpleasant experience of looking at the XHTML parts of a site where the webmaster had validated using MSXML, and then found that every MSXML-validated page showed a fatal error when used with a conformant XML parser. The interoperability lessons are clear: bugs like those hurt only people using non-Microsoft, standards-conformant, software.--David Brownell
MS claims they invented XML. They won't claim they invented ILOVEYOU. Which are they more responsible for?--Wendell Piez on the xml-dev mailing list
> although HTTP can be used as the transport for a "remote procedure call" paradigm, HTTP's protocol overhead, along with the connection setup overhead of TCP, can make HTTP a poor choice. A protocol based on UDP, or with both UDP and TCP variants, should be considered if the payloads are very likely to be small (less than a few hundred bytes) for the foreseeable future. This is especially true if the protocol might be heavily used, or if it might be used over slow or expensive links.--Keith Moore
> In the near future, expect to see libertarian dot-com CEOs tout the societal value of Chapter 11 bankruptcies.--Michael Kanellos
> There has been no progress in client software for the last seven years: Mosaic defined the Web feature set in 1993, and since then there has only been more fancy page layouts, no better user interfaces.--Jakob Nielsen
XML explicitly blesses anyone who wants to create their own language for their own purposes. That's what the X stands for. The "premise" never included everyone using the same language.--Tim Bray on the xml-dev mailing list
> Most DOM implementations are memory bound. That is to say, the entire document is read into memory before processing begins. For reasonably sized database dumps, memory bound DOM is at best inefficient -- at worst, unusable.--Sean McGrath
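The usual way around that limit, then and now, is an event-based API such as SAX, which reports markup as it is read instead of building a tree, so memory use stays flat no matter how large the dump is. A minimal sketch (the element name "record" and the file name dump.xml are hypothetical) that counts records in a large export without ever loading it whole:

    import java.io.File;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class RecordCounter extends DefaultHandler {
        private int count = 0;

        // Called once per start tag as the parser streams through the input;
        // nothing is retained except the running count.
        public void startElement(String uri, String localName, String qName,
                                 Attributes attributes) {
            if ("record".equals(qName)) {
                count++;
            }
        }

        public static void main(String[] args) throws Exception {
            RecordCounter handler = new RecordCounter();
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("dump.xml"), handler);
            System.out.println("records: " + handler.count);
        }
    }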
Two years ago, everyone was pushing XML like a tower of Babel that magically makes all data formats compatible. That sounded a lot better than the truth -- that XML is a simple, common layer for representing tree structures in a character stream.--David Megginson on the xml-dev mailing list
XML does *not* solve the problem of semantic heterogeneity (a topic discussed in reams of database articles). XML never meant to; nor does it seem a feasible task. From my perspective, one of the best things I expected from XML was simply that it would make the *real* semantic problems visible: it is no longer so easy to miss the substantive differences behind a veil of syntactic obfuscation. By making the problems visible, at least we can deal with them consciously and directly.
Well, now we're there. XML lets us all see, in plain text and schemas, just what the differences between two competing systems were to begin with; and companies that want to interchange, convert, or compromise can do so explicitly and clearly. The process also helps directly: as companies and whole industries move to XML, they get a perfect chance to review and perhaps fix schema problems that have been there a long time.
Companies that are arch-rivals may avoid compromise, or even introduce different schemas just to create incompatibility; I don't think there's anything we can do about that. But there's a Darwinian process going on here, and being gratuitously different can lead to extinction. Working together is more efficient than working alone, and economies of scale favor standards in the long run.
At the same time, perhaps we will find that some of the mavericks have seen something everybody else missed, and we'll all be tweaking our schemas to add it.--Steve DeRose on the xml-dev mailing list
Namespaces in XML often don't solve the problems that seem like logical problems to ask them to solve. They really serve a quite simple purpose -- nothing at all like "maintaining a dictionary of... tags," or merging documents.--John E. Simpson on the XML-L mailing list
XML gets rid of totally pointless incompatibility over how to punctuate your data structures when you pass them around, but it doesn't solve the problem of disagreeing over the real data structures that ought to be passed.--Steve DeRose on the xml-dev mailing list
> If I were a bettin' man, as the saying goes, I'd probably not bet in favor of much across-the-board adoption of standard XML schemas. I say this despite the fact that several industries are putting a lot of collective effort into developing these schemas, such as those in repositories at Biztalk.org and Xml.org. If XML is to fulfill hopes that it'll be the data interchange lingua franca, we do need standards, but schema repositories just don't provide adequate support for tag names.--William J. Lewis
I also hope that any future standards will have fewer ambiguities and vaguenesses than the current specs, rather than more. Everyday language, which is very good at using the same word to mean different things and different words to mean the same thing depending on the context, has no place in standards.--Michael Kay on the xml-dev mailing list
XML saves writing a parser for every app; it saves worrying about what to do with CR/LF, or what the 3rd bit over in the 'charformat' mask byte is; it saves worrying whether *this* RDB's comma-delimited export form escapes "," with backslash or by quoting -- all that mindless, useless junk that has nothing to do with your information, but causes a continuous drip of wasted programmer-years.--Steve DeRose on the xml-dev mailing list
> There is no compelling reason for a Java API to manipulate XML to be complex, tricky, unintuitive, or a pain in the neck.--JDOM Mission Statement
When I saw the headline "Steve Case announces the release of Netscape 6" it was days before I felt clean again. Half of Netscape's appeal was the total lack of an evil empire behind it.
I wonder what version of Mosaic they're up to.--Doug Wise on the XML-L mailing list
XML, although it claims to be a "language", is not a programming language, but a data representation standard. It means that, to use XML, you need something else which knows how to read XML and do something useful with it.--Thierry Bezecourt on the XML-L mailing list
While Tim, Paul, and Rick fight it out for the crown of Principal Pundit of Pedantry, I feel compelled to be a bore and point out that the legions of software developers working with XML these days generally aren't amused by this stuff.
Maybe it doesn't matter ... they don't read the specs anyway, they read a couple of the shelf-full of books on XML at Borders and get on with life ... or they read the help pages on MSN to get XML According to Bill and get on with life. In either case, it would seem that we're breeding a sort of mutant form of XML developers who DON'T CARE ABOUT THE STANDARD! (Geez ... why would ANY sensible person just doing their day job care about a standard in which what everybody calls a "tag" in their code is Really and Truly an "element type name" but in an even higher plane of reality has the Abstract Metaphysical Buddha Nature of nameness and typeness ... as we learn from Tim Bray's annotation?)--Michael Champion on the xml-dev mailing list
For me, XML is so exciting because it provides an infrastructure for capturing (some of) our current human2human discourse. Wherever humans communicate we get ontological loss. But some of this may be avoidable and XML highlights where the problems lie (it doesn't solve them, but it helps).--Peter Murray-Rust on the xml-dev mailing list
From the point of view of store footprint in embedded systems it seems to me that the current MinXML spec is a little conservative.--John Wilson on the sml-dev mailing list
Unicode is a standard in this true sense, arrived at by dialog and consensus among competing and conflicting interests, as is ISO 10646 as well as ISO 8859, 2022, 4873, 6429, and 646 before them. MIME character-set registration, on the other hand, blithely undercuts this time-honored and proven practice by conferring the aura of "standard" on quite literally any character set any company cares to make up, and this makes life increasingly difficult for everybody -- developers and users alike -- as a plethora of private character sets appears on web pages, in email, in netnews, and everywhere else. There is no excuse for this. If you make email or Web-authoring software, you have more than enough standard character sets to choose from, including Unicode. Use them.--Frank da Cruz on the Unicode mailing list
a Good Idea supported by Running Code is probably the most workable alternative to the W3C standards slowgress; it's undoubtedly more effective than mailing list flame wars.--Colin Muller on the xml-dev mailing list
Any use of disable-output-encoding is _always_ a hack.
In a few situations it is useful (I have used it sometimes in my sheets) but it is _always_ a bad idea, and means that your output tree will not work as expected unless it is going to be reparsed. Usually it is only needed to overcome unfortunate markup in the input document (like useful structure being hidden in a CDATA section).--David Carlisle on the xsl-list mailing list
The document() function is THE MOST USEFUL item in XSLT, unless you count the entire concept of XSLT.--Scott Sanders on the xsl-list mailing list
We have sometimes discussed on XML-DEV whether it is possible to have schemas such that they are completely machine-interpretable. [I use this to mean "If my machine gets a *.xml + *.xsd from some other machine and there is no prior agreement, can my machine do something useful with the *.xml (other than print it out for a human to read)?"] The general consensus was that this was an ultimate goal but probably beyond most XML-ers' immediate vision. Therefore there has to be some prior agreement about semantics and ontology.--Peter Murray-Rust on the xml-dev mailing list
All files are text files. Automatic compression is easy and for many sexp^WXML versions of originally non-XML data the compression of the XML version today beats the compression of the non-XML version--Jay Sulzberger on the wwwac mailing list
> XML revolutionizes business because it lets you use commodity components and existing Web infrastructures to transfer information. There's nothing XML can do that couldn't be done by other formats riding other protocols, but XML will let you do it cheaper. You never have to be locked into tools coming from a single vendor, and you can monitor and manage the flows using whatever tools you like.
Effectively, this reduces the level of agreement that businesses need to reach in order to exchange information. XML can travel over email, over HTTP, over whatever. The processing involved is pretty generic - there are no magic tricks to be figured out. You can layer cryptography or workflow on top of XML without having to change the information inside the document. None of these layers are tightly bound. There's no special X-HTTP Web servers need to support in order to transfer XML.--Simon St. Laurent
We all agree that every effort must be made to move from 7- and 8-bit character sets to Unicode. Good, let's do it! However, we must not believe that CP1252 is in some way a step in that direction, let alone a NECESSARY step. It isn't -- it is nothing but a time waster, and advocating its use on the Web, in email, or for any other interchange purpose is just plain wrong.--Frank da Cruz on the Unicode mailing list
The current syntax exposes (a) whether a DTD is available and (b) whether it will change the infoset. The application has to decide whether to read it, depending on the application's needs. I don't believe in compulsory schema reading, i.e. I don't believe that you can or should expect a signal in the instance saying what to do. Thus at the end of the day, the only sensible way forward is for software to make it possible and in fact easy to turn DTD-reading on or off.--Tim Bray on the xml-dev mailing list
Netscape lost the PC browser battle, but definitely not the war. Mozilla is essentially a guerrilla tactic against an occupying force, and it looks much more impressive than I expected it to when the initial plan was announced. I really never expected that they'd be able to build a smaller, faster, 100% standards-based browser that didn't, well, stink like Unix GUIs. What they've come up with is above and beyond what MS or the old Netscape had ever shown the capability of producing.--Nathaniel R. Merriam on the WWWAC mailing list
> In a lot of ways, HTML's built-in style information (shared understandings about what headers, paragraphs, tables, etc. look like) has limited CSS. CSS1 was about supplementing HTML's built-in capabilities, and only with CSS2 did the W3C really start thinking about working from a clean slate and using CSS to describe all of the formatting necessary to display a document.--Simon St. Laurent
Does XML's inefficiency /really/ matter that much, given Moore's law and corresponding advances in bandwidth, storage and, to a lesser degree, memory?--Chris Kaminski on the WWWAC mailing list
> Remarkably few people know what XML is.--Brett McLaughlin at the O'Reilly Conference on Java
> The thing one needs to realize about computers when looking at the current XMLification of everything is that everything digital can be made to fit into everything else. Witness ken and justin once using the netnews group names to transmit a message, by registering alt.justin.what.are.you.doing.tonight. The medium is completely plastic. If you want to store or transmit a message, you can do cryptography, steganography, normalized database tables, huffman codes, elliptic curves, web pages, morse code, semaphores, java bytecode, bit-mapped images, wavelet coefficients, s-expressions, basically anything you can possibly dream up which codes for some bits. In all cases, if you're coding bits and you are using a lossless system, the only thing which matters is how convenient the encoding is. There's nothing which makes one encoding "do it better" than any other, aside from various external measurements of convenience such as size, speed of encoding, speed of decoding, hand-editability, self-consistency, commonality, etc.--graydon hoare
The people that design web pages think design first (and their schooling that emphasized traditional mediums, not mediums that encouraged information technology concepts like compatibility and efficient processing) and aren't usually paid to think about the ramifications of non-standard work (In their world, "standard" means viewable by average non-disabled American English speakers using Internet Explorer or Netscape Navigator).--Adrian Havill on the Unicode mailing list
The XSLFO revolution, whenever it comes, will not be less powerful or profound for happening a little later. But it can only happen on a standards basis, not a proprietary one. If it does not suddenly overwhelm the marketplace, and it takes even longer for its advantages to be realized by those to whom they are not immediately self-evident, that's fine too. Something with more bang right off may win out for now; but that doesn't make the basic principle of coupling descriptive encoding with non-proprietary query, output and behavior specifications, any less powerful in the longer run.--Wendell Piez on the xsl-list mailing list
> in some fields, such as development of medicine, patents are beneficial because the expense of any new development (even a fairly simple system in terms of the number of parts) is very great.
Software is at the opposite extreme, though.--Richard M. Stallman
XSLT is a shiny new language, and it has a standard. If everyone agrees to stick to it, we can move towards the utopia of cross-platform and cross-client transparency. If we don't, then the last year or so has been a waste of time for me; I may as well have stuck to HTML and JavaScript - since XSLT will not become the universal language everyone hopes it will.--Ben Robb on the xsl-list mailing list
> We singled out Amazon for a boycott, among the thousands of companies that have obtained software patents, because Amazon is among the few that have gone so far as to actually sue someone. That makes them an egregious offender. Most software patent holders say they have software patents "for defensive purposes", to press for cross-licensing in case they are threatened with patent lawsuits. Since this is a real strategy for self-defense, many of these patent holders could mean what they say. But this excuse is not available for Amazon, because they fired the first shot.
Bezos's letter reaffirms Amazon's continuing intention to engage in unrestricted patent warfare, saying that the decision of when and where to attack will be decided by "business reasons".--Richard M. Stallman
> I'm sure there'll be XML data in some of these systems for hundreds of years... I don't think anything will replace XML...--Bill Gates
At the end of the day, textual formats are better, at a deep level, than binary formats, because computers exist only to serve humans, and humans can read text - no matter how obfuscated - if they need to.--Tim Bray on the xml-dev mailing list
XML and XML implementations have finally reached the point where I can talk about this to Web developers in their own language, and I think programmers are at a similar point where the tools are maturing, though slowly. Hopefully, we can provide some solid backing for the last two years of hype.--Simon St.Laurent on the xml-dev mailing list
the first rule of mailing lists and newsgroups is....
.... never believe anyone who posts untested solutions late at night, they always retract in the morning.--David Carlisle on the xsl-list mailing list
Long live Omnimark, Perl, Python, ... ! The more tools the better.--James Robertson on the xsl-list mailing list
The SOAP spec has "Microsoft crud" written all over it (as in, overload it with as many features and complexities as it takes to convince people that they really shouldn't bother trying to implement it themselves but just get um, "authorized" software from um, "official" sources.) Alternatives like WDDX, XML-RPC, and LDO go a long way to show that the "problem" isn't nearly as complicated as a reading of the SOAP spec might lead one to believe.--Arjun Ray on the xml-dev mailing list
> judging by that reaction, Sun was scared. They're realizing that once you have standard XML on the wire, their lock-in that they're trying to get with Java goes away.--John Montgomery, Microsoft
comparing XSLT to OmniMark is like comparing an orange tree to a bunch of oranges coming down a high speed assembly line--Rick Geimer on the xsl-list mailing list
ISO has its own very small printshop in Geneva (I'd estimate fewer than 25 employees there), where they produce all the standards and handbooks in editions of 500 copies a time. The fixed production overheads are quite significant, because they throw away the offset plates after printing only 500 issues and they use very small sheet presses and a manual binding process. They have a huge warehouse where they can store up to 500 copies of each of the 12000 standards plus the camera-ready copy (often some paper cut & paste product) to produce more. [Whenever a fly shits onto the camera-ready copy in ISO's warehouse, you run the risk of having a new decimal point added to the authoritative master version of an ISO standard forever, which is the true reason why ISO strictly prefers decimal commas ... ;-]--Markus Kuhn on the unicode mailing list
I am not a fan of engineering for free, but to get XML into Microsoft, it was worth it because that put markup, one of the truly open technologies and the best hedge we have against information death by denial of serviceability, into a lot of hands.--Len Bullard on the xml-dev mailing list
I have no real complaint with using XML Schema for documents, but if W3C starts selling XML Schema as the standard schema language for all XML applications, I have to disagree.
With industry specific XML standards popping up daily, how does XML Schema help reduce the chaos? How do those slapstick "We are the World" schema repositories help? How does borrowing of schema fragments from other XML formats equate to reuse? How do namespaces that focus only on building walls between vocabularies help? Let the schema population get out of control and we will soon see a Cultural Revolution of our own.--Don Park on the xml-dev mailing list
Looking at the DOM in particular, it's actually easy to apprehend the difficulty of designing such a standard. We all can design a tree like structure, there is really nothing very fancy in this, and most students in computer science, if not all, have done it at least once. Now, imagine we all get together in a room and decide we must design ONE such thing...--Arnaud Le Hors on the general@xml.apache.org mailing list
When business communities wake up to the fact that they must choose between ...
(1) using AFs
or
(2) de facto ownership and control of their industry's B2B vocabulary by a single software vendor,
... AFs will be the rule, rather than the exception. AFs are, quite simply, the object-oriented way of supporting reliable, vendor-neutral information interchange. There is just no avoiding the object oriented paradigm; it offers too many advantages, and it saves too much money. In the long run, there is also no avoiding the requirement of industries to control the nature of their own lifeblood: the information that flows between business partners.--Steven R. Newcomb on the xml-dev mailing list
A parser system should always default to simple, stupid, and fast and only engage advanced functionality when it's asked for.--Dean Roddey, IBM, on the xerces-dev mailing list
SVG is going to change the face of the Web.--Tim Bray on the xml-dev mailing list
Inventions are used to the extent they are needed once you get above the new, different, opportunistic thresholds. If RDF or SomethingLikeIt meets a clearly understood requirement, it will get used. XML is a good example of a technology disregarded until implementors discovered they needed it. E.g., SGML worked well. HTML worked great. HTML quit being adaptable. Selling a project to subset SGML was easy after that. Now it is a huge success story instead of a CALS cause celebre.--Len Bullard on the xml-dev mailing list
RDF is nowhere near as difficult as people think. And it is incredibly significant. I don't think it's an exaggeration to say that RDF and RDFS will become *the* most important XML technologies as we try to build a web of information - not presentation. In fact if you re-read all the hype and 'promises' of XML, you'll find that XML on its own cannot actually implement them - RDF can.--Mark Birbeck on the xml-dev mailing list
> secrecy, and the kinds of closed intellectual properties that are bound up with secrecy, are not efficient. They're not an effective way to generate value in software.--Eric S. Raymond
> I like Gilmore's quote about the Internet interpreting censorship as damage and routing around it. These days I like to generalize it to: The Internet interprets attempts at proprietary control as damage and routes around it.--Eric S. Raymond
Specifying a point-size for screen display is unimplementable, because you don't know the screen resolution of the target screen. Mac being the old gent of the GUI world sets its standard at 72DPI, but uncle Bill switched to 96DPI, screens being that much better when he got around to finishing a GUI, and incompatibility being good for business.--Philip Bath on the mrj-dev mailing list
> more people use the Web today than ever used MS-DOS--D. F. Scott
Having all your content removed from a database and stored physically as XML files, you'll lose the lightning-fast query times that enterprise relational databases can give the portal. The strategy most are going for is dynamically serving "slices" of data from (sometimes fairly hairy and finely-performance-tuned) SQL queries as dynamic XML content--Steve Muench on the xsl-list mailing list
While XML started out 'small enough to learn', the forest growing up around it is much harder to grasp and integrate. As a larger audience tries to grasp what XML is becoming, I think we're going to see more and more people opting for lowest common denominator solutions - because the full power and glory is once again too damn hard to learn and use.--Simon St.Laurent on the xml-dev mailing list
I am not too happy with SAX2 because it seems bloated. Each small steps we have taken seems very logical, yet the result is less satisfying than I expected. Perhaps the approach we have taken is wrong.--Don Park on the xml-dev mailing list
hand-held devices will be browsing the Web in great numbers in the near future. XML and related standards make that possible. Look at the history of radio, which is almost a non-issue nowadays except in portable devices. Not everyone has no life away from their 17 inch monitor and really wants to surf the Web like early radio listeners spent hours glued to their custom consoles by headphone cords.--Lee Anne Phillips on the xml-dev mailing list
There is no *point* to supporting XML in the browser if you don't support the DOM. If all you want is to display nice-looking stuff to humans, HTML does an excellent job of that.--Tim Bray on the xml-dev mailing list
the world desperately needs a universal data abstraction. I think that one of the reasons that XML has gotten so much attention is that it *looks like* such an abstraction (even though it's not).--W. Eliot Kimber on the xml-dev mailing list
The name collision problem is largely illusory: it stems from the unnecessary supposition that validation means that a well-formed document must validate in its *entirety* to *one* DTD (hence the vexing question of how to merge DTDs syntactically - a more or less useless exercise.) All that's really needed is to map the various pieces of a document to the *relevant* schemes, and validate each "projection" separately.--Arjun Ray on the XML-L mailing list
The quintessence of XML is that it's a linear syntax for labelled trees. (Other characterizations are possible, of course.) The tree view is the key to the superiority of CSS and XSLT as an abstraction mechanism over text-macro based markup languages (like TeX/Latex)--Nils Klarlund on the XML-Dev mailing list
The most curious point about 3.3.3 is that it exists at all. Why the $#%%!@! should attribute values be "normalized" anyhow? This was a pure process failure: at no point during the 18-month development cycle of XML 1.0 did anyone stand up and say "why are you doing this?" I'd bet big bucks that if someone had, the silly thing would have died a well-deserved death.--Tim Bray on the xml-dev mailing list
When the time comes to rationalize all the W3C's XML auxiliary specs with each other, the cleanup cost is likely to be quite frightening. A tremendous amount of good-faith effort, on the part of many people who have been trusting the W3C to provide rational design leadership, will be lost, and at least some of XML's current forward momentum will have been dissipated.--Steven R. Newcomb on the xml-dev mailing list
XML isn't really a client-side technology, although that sector will be growing soon enough. Thus, for all that we talk about Internet Explorer 5.0 as being the only XML browser, the important thing to keep in mind here is that most of XML's uses are currently on the server, or are used by the server to help create "standard" HTML output. To concentrate on IE5 is to get misled into thinking that XML is a Microsoft technology, is something that can only work through one browser, and really has no relevance to the users of your technology. In your case especially, that is far from the truth.--Kurt Cagle on the "Computer Book Publishing" mailing list
The best hope we have is to base what we develop directly on concepts that we can assume have been somewhat understood through training or mathematical intuition. Groves appear well thought-out for their purpose, but the mathematical abstractions they embody are not necessarily any easier to grasp than if those abstractions were applied directly to the problem domain at hand. In fact, it is my experience that formal methods frameworks are often a hindrance to exposing simple ideas. Talking mathematically is not bad, but the talking must be in mathematics that's known: sets, maps, trees, etc, not in a little-known lingo that requires extra training.--Nils Klarlund on the xml-dev mailing list
As I've spent more time looking at this stuff I've become less paranoid about the "separate standards" issue. The transformational component of XSL, XSLT, seems to make it so simple to turn one kind-of-thing into another that the Tower of Babel dilemma shouldn't (famous last words) actually cause insurmountable problems.--John E. Simpson on the "Computer Book Publishing" mailing list
On XML-Dev and a few other XML-centric lists, I feel like we're conquering the world. On almost every other list on which I participate, I feel like the lone XML whacko.--Simon St.Laurent on the xml-dev mailing list
failing to take Lynx and other text-mode browsers into account when you design a site is quite probably engineering malpractice and an invitation to an ADA lawsuit for many sites, assuming the site does anything useful for which a person with a disability might need equal access and reasonable accommodation to use.--Lee Anne Phillips on the xml-dev mailing list
> The fact that DoubleClick is not disclosing the names of the companies who are feeding them consumers' names is a shameful hypocrisy. They are trying to protect the confidentiality of the violators of privacy.--Jason Catlett, JunkBusters
Standards don't mean that innovation stops any more than standardizing on using SAE (or metric) bolts, four rubber wheels, a gas pedal, and a steering wheel as the basis for an automobile means that we're all still driving Model T Fords. The Model T was made possible by standardization, and quickly surpassed custom-made cars within a few years. That doesn't mean that custom car makers are out of business; there are probably more custom car makers now than there ever were. But how many people do you know who own one?
The problem with browsers has been that heretofore the browser makers have been inventing their own nuts and bolts, so the tool kits of every designer have had to include tools to deal with all of them at great expense. While this is typical of an immature industry, standardization quickly drives custom solutions to the edge of the marketplace rather than the center.
If you look at the history of tools, *real* innovations and improvements in productivity have followed quickly on standardization. When SAE (and metric) nut and bolt sizes were agreed upon, socket sets and air wrenches followed because they were newly possible. Before that you had the choice of adjustable wrenches or custom wrenches designed for the particular item and that was it. A socket was too expensive to build when it only fit one nut and an air wrench needs sockets to be practical.--Lee Anne Phillips on the xml-dev mailing list
we should all remember that most of the human race do NOT have access to high-tech PC's. There is a massive movement of 386 & 486 machines from the US and Europe into places like Africa. Timbuktu, for example, has precisely 2 PC's, both 486's. The doctors there use them to access medical information over the web which they couldn't afford to buy via medical journals. They claim this has saved hundreds of lives in the last year alone. Now maybe I'm an idealist, but I happen to think getting IT into the third world is a little more important than allowing some company to make an extra 2% profit.
My message to developers is thus:- don't raise the tech barrier, lower it. Don't wish for better technology, work out how to do more with less.--Brandt Dainow on the xml-dev mailing list
WYSIWYG just isn't a viable concept where (X|HT|XHT)ML is concerned, and it never was; the notions of interoperability and WYSIWYG authoring are inherently opposed to each other. The best you can do on the Web is make an educated guess - and I have news for you; "everyone's using IE5" just isn't even close to right.-- Robert L. Hood on the xml-dev mailing list
--Tim Bray on the xml-dev mailing listI used to be a regular speaker at the Seybold conferences. The Seybold crowd (heavy design geeks) initially kind of hoped the Web would go away, because they couldn't have the fine control over pixels and so on that they were used to. I remember like yesterday going to one in '96 (maybe '95) when there was a keynote-with-demo of the first release of Adobe's PageMill... the demo featured drag&droping pictures into place on web pages, and having them come up just right in the browser. WYSIWYG for the Web! The crowd exploded in a roaring ovation. I thought "These people Just Don't Get It".
4 years later, that crowd generally does get it. And is living with the fact that you can have design excellence without WYSIWYG. And PageMill is still a piece of crap.--Tim Bray on the xml-dev mailing list
The most important lesson I took from XSchema/DDML is that running code rules. While we had great community participation, we never got real code on the ground. The spirit of cooperation, in the absence of real code, was effectively the end of the project.--Simon St.Laurent on the xml-dev mailing list
If there's an underlying lesson, it's that the Web is all about doing a lot with a little; the HTML/HTTP/URI trio have to count as one of the great 80/20 point bullseyes in the history of technology.--Tim Bray on the xml-dev mailing list
The W3C's secrecy policies are there because it is a treaty organization of competitors, not a friendly group of collaborators. Imagine the chaos if IBM, Oracle, Sun, Microsoft, et al. were allowed to leak each other's proposals to the press in order to put their own spin on everything to press a competitive advantage. Imagine if the technical people working on the specs had to answer to their PR people every time they posted to a W3C mailing list! Look at the relatively open US political system, with the press and pundits and consultants/lobbyists entangled in the whole process, and nobody willing to take a coherent controversial stand on anything. That's what a truly open W3C would look like. It would make the current W3C look like a thing of beauty by comparison.--Michael Champion on the xml-dev mailing list
The Internet standards process started in the RFC methodology, which, though sometimes awkward, chaotic, and slow, allowed rapid innovation and standardization when warranted and was fully public, ensuring participation by the *real* stakeholders in the process, the community served, rather than being dominated by the vendors who want to sell products to them.--Lee Anne Phillips on the xml-dev mailing list
The widespread popular use of the term "XML file" is also imprecise and misleading. I use the term "XML instance" because an "XML file" is not necessarily even well-formed, much less validatable, much less interoperable. "XML instance" refers to the logical thing that is a complete XML expression. "XML file" refers to a storage object that contains some data that has something to do with XML, but it does not imply that the contents of the file constitute a complete XML instance. Therefore, while it is meaningful to speak of the "interoperability" of an "XML instance", it is not meaningful to speak of the "interoperability" of an "XML file".--Steven R. Newcomb on the xml-dev mailing list
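Newcomb's distinction shows up immediately in code: a storage object holds whatever bytes it holds, and only parsing tells you whether those bytes amount to a complete XML instance. A small probe using only Python's standard library, with invented sample data:

    # Sketch: a file's contents are only an XML instance if they parse as one.
    # Standard library only; the sample strings are invented.
    from xml.etree import ElementTree

    candidates = {
        "complete instance": "<order><item qty='2'>widget</item></order>",
        "just bytes in a storage object": "<order><item qty=2>widget</order>",
    }

    for label, text in candidates.items():
        try:
            ElementTree.fromstring(text)
            print(label, "-> well-formed XML instance")
        except ElementTree.ParseError as err:
            print(label, "-> not well-formed:", err)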
A Web site is the same as the press. It's the same thing as a person publishing a newspaper. The press has an exemption under the law; the press is allowed to endorse candidates, and it doesn't have to register with the government.--Zack Exley, gwbush.com
Steve Case is the Benedict Arnold of the digital age. He means open access on AOL Time Warner terms, like AT&T proposed doing last month with MindSpring. These are rules that favor them and not a level playing field. We need enforceable federal rules to ensure nondiscriminatory access to cable broadband infrastructure.--Jeff Chester, executive director of the Center for Media Education
it's still too early to simply dive into using XML software without knowing anything about XML itself. That is to say, don't expect to find something like FrontPage for XML. Be prepared to do a lot of reading (books, online resources, and/or mailing lists) before you step forth.--John E. Simpson on the XML-L mailing list
schema validation should not be a hostage to connectivity and/or URL stability. Our approach was, however, NOT to design YACM (Yet Another Catalog Mechanism), but to allow for ANY alternative schema location mechanism which people come up with.--Henry S. Thompson on the xml-dev mailing list
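Read in code, Thompson's point is that the mapping from schema identifier to schema document can live anywhere you like rather than on the network. A sketch assuming the third-party lxml library, with invented URIs and file names:

    # Sketch: resolve schemas from local copies instead of fetching location URLs.
    # Assumes the third-party lxml library; the URI and file names are invented.
    from lxml import etree

    # Any lookup mechanism works here: a dict, a catalog file, a database table...
    LOCAL_SCHEMAS = {
        "http://example.org/ns/invoice": "schemas/invoice.xsd",
    }

    def schema_for(namespace_uri):
        """Load the schema for a namespace from a local copy."""
        return etree.XMLSchema(etree.parse(LOCAL_SCHEMAS[namespace_uri]))

    doc = etree.parse("invoice.xml")
    schema = schema_for("http://example.org/ns/invoice")
    print("valid" if schema.validate(doc) else schema.error_log)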
Junk in standards is like rat feces in chocolate bars -- there's always going to be some, but we try to keep it to a minimum. Think of ANSI C trigraphs, XML unparsed entities, and Unicode combining characters, three things that nobody but the hard-core-I'm-going-to-prove-they're-useful-if-it-kills-me zealots are ever going to use.--David Megginson on the xml-dev mailing list
Consider these two:
<html:a href="foo">
<html:a html:href="foo">
The namespace spec could have said one of three things:
1. These must always be treated as identical
2. These must always be treated as different
3. Applications can make up their minds
The then-Working Group eventually went for #3. It's kind of like non-Euclidean geometries; each of the three options above produces a self-consistent universe, so the only question is, which one do we want to live in?--Tim Bray on the xml-dev mailing list
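A namespace-aware toolkit makes the choice visible. Python's standard ElementTree, for one, reports the unprefixed attribute with no namespace at all and the prefixed one under the full XHTML namespace URI -- in effect choosing option #2 for itself, which is exactly the per-application decision option #3 permits. A sketch (the XHTML namespace URI is real; the wrapper document is invented):

    # Sketch: how one namespace-aware parser reports Bray's two cases.
    # Standard library only; the wrapper document is invented for illustration.
    from xml.etree import ElementTree

    doc = ElementTree.fromstring(
        '<root xmlns:html="http://www.w3.org/1999/xhtml">'
        '<html:a href="foo"/>'
        '<html:a html:href="foo"/>'
        '</root>'
    )

    for anchor in doc:
        print(anchor.tag, anchor.attrib)

    # Typical output: the unprefixed attribute carries no namespace, the prefixed one does.
    #   {http://www.w3.org/1999/xhtml}a {'href': 'foo'}
    #   {http://www.w3.org/1999/xhtml}a {'{http://www.w3.org/1999/xhtml}href': 'foo'}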
most people who use SAX haven't even read the XML 1.0 REC much less the Namespaces REC, and I wouldn't expect them to have done so. After all, they're programmers who have to deal with XML as one (often small) part of their work, not XML specialists.--David Megginson on the xml-dev mailing list
I think one of the biggest design mistakes in XML was keeping it SGML-compatible, but it's easy to say this now and I think I would have probably made the same mistake.--Stefano Mazzocchi on the Apache XML mailing list
If you think about it, the namespace idea is the real key to XML success: namespaces are "versors" of an "infinite-dimension" solution space. Topologically speaking, while SGML is a single infinite dimension, XML is an infinite set of infinite dimensions. Mathematically speaking, you can create a one-to-one relationship from all the points of XML to SGML (as is done when, for example, "xsl:stylesheet" is treated as a one-dimensional SGML element rather than the "stylesheet" element of the "xsl" namespace). This may seem to imply that SGML and XML have the same "multidimensional" volume. Thus, namespaces don't alter the topological tissue of XML (which remains flat and one-dimensional), but simply add "classes" of elements, allowing elements to share the same name but have different meanings.--Stefano Mazzocchi on the Apache XML mailing list
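The "classes" Mazzocchi describes are concrete in most namespace-aware toolkits, where an element name is really a (namespace URI, local name) pair rather than a single flat string. A small illustration with Python's standard library (the XSLT namespace URI is real; the prefixes are arbitrary):

    # Sketch: a namespaced name is a (namespace URI, local name) pair, not the
    # flat string "xsl:stylesheet". Standard library only.
    from xml.etree import ElementTree

    a = ElementTree.fromstring(
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform"/>'
    )
    b = ElementTree.fromstring(
        '<t:stylesheet version="1.0" '
        'xmlns:t="http://www.w3.org/1999/XSL/Transform"/>'
    )

    # ElementTree spells the expanded name in Clark notation: {uri}localname.
    print(a.tag)           # {http://www.w3.org/1999/XSL/Transform}stylesheet
    print(a.tag == b.tag)  # True -- different prefixes, same element "class"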
2000 was the year that XML stepped away from its research background and became a real technology. Now it's got the support of the software vendor community, as well as the user community, who've proven its usefulness in their strategic projects.--Phil Costa, Giga Information Group