Topic maps, RDF, and mushroom lasagna: [Closing Keynote Address]

C. M. Sperberg-McQueen


Kitchen wisdom sheds some unexpected light on technical issues.

Keywords: Markup Languages

C. M. Sperberg-McQueen

C.M. Sperberg-McQueen is a member of the technical staff at the World Wide Web Consortium; he serves on the W3C XML Schema Working Group, the XSL Working Group, the XML Processing Model Working Group, and the Service Modeling Language (SML) Working Group, and chairs the XML Coordination Group.

C. M. Sperberg-McQueen [World Wide Web Consortium, MIT Computer Science and Artificial Intelligence Laboratory]

Extreme Markup Languages 2007® (Montréal, Québec)

Copyright © 2007 C. M. Sperberg-McQueen. Reproduced with permission.

Sometimes, finding the right way to talk about something can be difficult. But the right word or the right metaphor can make all the difference, because there is usually more than one way to say a thing. I have a friend who tells me that long years of experience have taught her that if she hosts people for dinner and puts out two dishes of lasagna, one unlabeled and one labeled “vegetarian,” the meat lasagna will be consumed, maybe with a little left over, and maybe 20% of the vegetarian lasagna will be taken. If, on the other hand, they’re labeled “beef” and “mushroom,” then about 70% of each will be taken.1

How you frame the difference, how you frame the distinction, makes all the difference. Partly, of course, it’s a question of a predominantly negative definition, saying what something is not — “this does not contain meat” — versus a more positive description, both in the logical and in the psychological sense, in terms of what it does contain. But, of course, that is not an absolute distinction. Whenever you define anything, you delineate the fines, the limits, the boundaries, and therefore, you are necessarily defining and identifying both what is on this side of the boundary and what is on that side of the boundary. So, it’s not an absolute distinction, but it makes a huge amount of difference.

Sometimes it’s a difference of positioning, of branding or marketing. Some people here will remember the discussions in 1996 about what to call the stripped down version of SGML that several hundred of us were trying to design, and the strong sentiment on the part of some people that it really needed to contain some indication that it was a stripped down version of SGML and needed therefore a name that included the name or acronym SGML, and the ultimate decision that it should have a name that did not contain the name SGML. This has had both positive and negative consequences; one of the negative consequences is that there is a small but indefatigable group of people who believe that this was the first unmistakable sign that the developers and promoters of XML were intellectual thieves who did not want to acknowledge their profound and total intellectual debt to the developers of SGML. Every couple of years I have an email exchange with someone who describes XML in terms like “carcinoma” and “excrescence” and “ingrates” and so forth. Sometimes I have to look them up, but they are all unpleasant medical terms. I once asked him “When you said the tragedy of XML was that it lost all the expressive power of SGML, which features of SGML that were omitted in XML did you find essential to your work?” And he said, “Oh, well, I’ve never actually got into those arcana of SGML, but ....” That is, he had no idea what features of SGML had been omitted from XML, but he was certain that in omitting them XML had lost all the wonderful properties of SGML. It was about that point that I decided to do my best to make sure that our email correspondence happened once every couple of years instead of more frequently. A more serious source of pain to some of us is the repeated utterances of people as smart as Len Bullard, who clearly believe that XML doesn’t pay sufficient homage to its heritage.

On the other hand, it has had a remarkably positive set of consequences as well, and those are, I think, the consequences that were hoped for at the time. If the technology we developed had mentioned SGML in its name, then all of the people who had spent thirty minutes or thirty seconds looking at SGML and deciding it was too complicated and not useful and at best could be described as “sounds good, maybe later” — all of those people would not need to spend even thirty seconds looking at XML: they would know in advance they didn’t need to look at it again and didn’t need to use it, didn’t need to care about it. Because their knowledge of the relevant technologies was so deep, however, the simple mechanism of omitting the letters “SGML” from the name meant that they had to look at it again. They didn’t notice that it was substantially identical to SGML in every aspect that counted, and they looked at it and said “Oh, this is great, unlike that SGML stuff which would never work; this has what we need.” And the result is that those of us interested in markup theory in a world containing XML live in a world that in some ways is rather happier than the one we had before XML. We were certainly happy ten and twelve and fifteen years ago, and we enjoyed ourselves, but there was this omnipresent puzzle: Why was the technology that we cared about not being taken up more widely? And now at least we don’t have that problem. We have a lot of idiots using the technology and not knowing how to use it correctly, but there are an awful lot of tools that make it easier for those of us who do know what we’re doing — or think we know what we’re doing — to get done what we think we want to do.

Marketing can make a lot of difference.

Sometimes, of course, finding the right word or the right metaphor is not really a question of marketing; it’s a question of capturing, or in some cases, catalysing an insight. I think, for example, of Steve DeRose’s remark on Monday during the Overlap Workshop that the distinction between some classes of approaches to overlap and other classes could be thought of by analogy with the distinction between graph theory or topology on one side and geometry on the other [DeRose 2007]. That is a metaphor which I think has legs. It enables us to look again at questions of overlap and the characterization of systems like LMNL or NITE or XConcur and so forth and understand the differences. It gives us hints about where to go, what other things we can usefully think about, what other angles we can usefully look at the problem from. Here the right metaphor throws new light on the problem.

Of course, some problems resist being completely revealed by new light being thrown on them from any single point of view. For some problems it turns out that what you really want is to be able to have two or more terms or metaphors or spots of focus, and to be able to shift your focus at will from one to the other, like students of physics who spend years training themselves so that at will they can think of light as a wave phenomenon or as a particle phenomenon. As far as I can tell, possibly no one except Bohr has ever managed to think of it as both at once. But being able to shift back and forth is better than nothing.

And, of course, the week has been full of those kinds of dichotomies or bifocal problems: theory versus practice, power versus simplicity or speed, high-level abstractions versus low-level abstractions (or non-abstractions ... concretizations? low-level details), syntax versus semantics, precision versus recall, technical merit versus marketing. It would be wrong to suggest that it won’t do to focus only on one side or the other. On the contrary, many of us love this conference because we can come here and for four days focus exclusively on technical merit and not think at all about marketing. We focus on what is right and not on what will be the next hot thing, because, let’s face it, the mass appeal of an idea is neither a guarantee of its merit nor a guarantee of its lack of merit. It’s just completely orthogonal. So we come to Extreme in order to think about technical issues and not marketing; at least that is what the marketing for the conference says.

I have found it very rewarding this week to allow my focus to be shifted, from one talk to the next, from one side of each of these dichotomies to the other. On the one hand, we have, for example, had a number of new installments in our continuing discussion about the theory of meaning and how meaning is made in markup and how markup means things, with implications for the design of vocabularies. Anne Wrightson’s deployment of game theory [Wrightson 2007] (or, as Henry Thompson re-interpreted it in the discussion afterwards, execution traces for the programming language interpreter in our heads) — that is an idea that, I think, has a number of applications that will help us understand why some vocabulary designs seem to work well and others don’t seem to work quite so well. Yves Marcoux’s continuing elaboration of his theory of meaning and ways to document it, which extends his account to make it handle more aspects of our markup languages [Marcoux 2007]; David Dubin’s exploration of the unsolved problem of identity and reification in RDF [Dubin 2007]; Fabio Vitali’s suggestion of a design discipline which intentionally trades off expressive power for simplicity and which provides (or will provide when they do the proofs) formal guarantees for certain kinds of equivalence relations [Dattolo 2007]; David Birnbaum’s discussion of tables [Birnbaum 2007]. All of those at what I’ll call the theoretical end.

And, on the other hand, lots of reports of straight practical work illustrating the benefits of having good semantics and good knowledge of our data, now, even if some aspects of the formal theory of semantics remain unsolved problems. Moody Altamimi on MathML [Altamimi 2007]: finally, practical ways of writing mathematics in our documents in a form that software can actually understand as mathematics, not just as typography. It’s been a long time coming — it will still be a long time coming, I’m sure it’s going to be a very long curve. But what could be better than that?

Felix Sasaki demonstrating that if you can get convenient access to key parts of your schema definitions, you can in a mechanical way suitably localize (or de-localize) both the schema and document instances that conform to it [Sasaki 2007]. And as Martin Bryan told us, “You get the right semantics, XML can even help you get your car fixed.” [Bryan 2007] Hard to beat that!

Access to good semantics, good understanding of the data, seems to me at the heart of some other talks that don’t look overtly to be about semantics at all. Michael Kay illustrated on Tuesday how much you can do to optimize XSLT in XSLT, if you have performed a suitably deep and intelligent semantic analysis of the expression, and if you have managed to design a syntactic representation of the expression structure, such that the structure of the expression matches up with the structure of the semantics well enough to allow pattern matches operating on a purely syntactic level to capture semantically interesting classes of instances [Kay 2007]. That’s a wonderful instantiation of power through semantics. I feel the same way about Roy Amodeo’s discussion of manipulating program languages by translating them into markup constructs [Amodeo 2007], or Mario Blažević’s discussion of template languages [Blažević 2007]; they illuminate the benefits we can get by having good representation of semantics.

One of the dichotomies that struck me this year is the alternation of attention between what I’ll call “cool stuff” and what I think of as “grownup stuff,” although this is not what the television in your hotel room means when it talks about “mature themes.” The cool stuff may be obvious: all of us may have our own list of fun topics here. I think particularly of the talks and discussion on overlapping structures and their representation, like the work of the research group in Lyon (represented here by Souha Kaouk) on MultiX [Chatti 2007], or the several papers on one concrete instance of the overlap problem, namely multi-layered linguistic annotation: Jean Carletta’s overview of the NITE system developed in Edinburgh [Carletta 2007], Richard Eckart’s discussion of the work being done in Darmstadt [Eckart 2007], or Andreas Witt and his colleagues’ description of the GENAU effort in Tübingen [Witt 2007].

I may think of overlap as a fun topic just because I’m obsessed with it, but we will all have fun topics to think of here.

And next to those and alternating with them, a series of talks which are themselves fun and interesting in their own way but which call our attention to topics which we must not neglect even if we don’t find them fun in the short run, if we want to make XML make a lasting difference in information technology practice and enable the long-term preservation of our cultural heritage. Michael Kay, again, on optimizing XSLT [Kay 2007]. Also Jianhui Li from a very different angle on the same problem of making XSLT tractable for high performance [Jones 2007]. Pekka Kilpeläinen on validator performance and the question “Are our validators fast enough that we can use them keystroke by keystroke in an editor?” [Saesmaa 2007] José Carlos Ramalho on what it takes to record a relational database in a way that makes it still useable even after the SQL industry is swallowed in the dust of history [Ramalho 2007]. (If of course that can happen — there are some people who say “No, the disappearance of the SQL industry is one of the signs of the apocalypse. They will never be able to test whether José Carlos’ representation can survive.”) Tomasz Müldner on XML compression and its performance in a peer-to-peer context [Demmings 2007], or Stefano Zacchirolli’s work on improving (and detecting) the streamability of XPath expressions [Marinelli 2007]. The outstanding example in my head at the moment, of course, is the work of Henry Thompson and John Cowan on the utterly grownup problem of grappling with our legacy data and with HTML [Thompson 2007]. My hat is off to Henry and John for being willing to deal with this. It requires a degree of emotional maturity that I personally am not prepared to bring to these problems.

[Sidebar: Laughter, followed by John Cowan saying, “I really did it because it’s cool” to which Michael responds, “Okay, now that I can relate to. If you did it because it’s cool, then I feel less overawed. I’m still impressed, but less overawed.” More laughter.]

Optimization, of course, can be hard, and all of these grownup topics can require us to turn our attention from the beautiful, high-level abstractions that we work so hard to achieve to the sometimes less beautiful, sometimes less inspiring nitty-gritty of our implementation details. But it matters. It matters because, as Donald Knuth once memorably said, “It doesn’t matter how fast the CPU gets, an algorithm that is twice as fast will still execute in half the time.”2 And sometimes that can make the difference between something you can do in theory but cannot do in practice, and something that you can do in practice.

It’s perhaps fitting that here at Extreme — the Markup Theory and Practice Conference — I have been most struck (as often happens here) by the constructive duality of theory and practice, long-term goals and strategies and short-term practice. Our long-term goals remain the same: understanding how we understand things, how we manage information, understanding what texts and documents and subjects and resources really are, so that we can represent them and manage them successfully.

I don’t know how many of you spend time looking at the Jules Verne-style illustrations on some of the walls of this hotel, but I’ve spent some time doing so, and I find myself thinking when I look at those illustrations about what kinds of information technology that world has. Surely, in such a wonderful world, information technology (like all other technologies) has been perfected. Surely, every subject of common interest has a published subject identifier. Surely, I can find the right one when I need it, so there is no reason not to use it when it’s appropriate. So not only do we have names that distinguish all the things we want to talk about, so we have solved the problem of ambiguity, but we have just the names we need, with only trivial exceptions, so that by and large we have also solved the problem of synonymy or aliasing, and the few synonyms we have are either accidental and soon repaired, or else they are harmless and tolerated like pet fleas. Having solved the problems both of ambiguity and synonymy, we would have a system that would make Leibniz smile, and when the speaker and the questioner at the microphone had a difference of opinion, Leibniz would stand up and, speaking immaculate Latin, wave us over to the calculator to resolve the question once and for all.

How can we build that Jules Verne universe?

In that world, surely, we could all save time if I just told you that the identifier for the specific concatenation of ideas in this talk is “AcfX jmNG wUBS svpZ QWri PYTY qDhT A==”. You can look it up, and we can all go home much faster.

However, until we have that registry of all ideas and all the subjects that we might ever want to think about and all of their concatenations, what do we do? I have to say that the strongest impression made on me at this conference is probably by a set of papers that persuaded me of something that I had basically expected not to be persuaded of, ever: Tom Passin’s paper on RDF [Passin 2007], Jim Mason’s organ topic map [Mason 2007], Steve Pepper’s discussion of topic maps for the Dublin Core [Pepper 2007], Lars Marius Garshol’s discussion of using topic maps in practice to improve search and retrieval [Garshol 2007]. That one sounded almost like it was from Jules Verne’s world, except there was a clue that it was about this world and not the world of Jules Verne: the clue was that he mentioned some things that work pretty well but not perfectly. And that was when I knew, no, he really is talking about this world and not that other world to which he has access through some mechanism like the one in William Gibson’s story about the Gernsback Continuum [Gibson 1981].

Topic maps and RDF appeared to me in these talks for possibly the first time not as important technologies that I need to track and that may eventually be crucial to my work, but as things I might want to try in the next six months not as a demonstration and not to teach myself something but to solve some problem. That is, they seemed practical; they seemed tractable in ways that, in some earlier talks that focused more on the Jules Verne-like aspects, they had not always seemed.

Now, of course, big questions remain. In particular, when we face a huge problem like building that Jules Verne information technology, a problem so big that we know we cannot just solve it, attack it whole with a frontal attack, we are, of course, tempted to try to divide and conquer. Is there a way to know in advance, looking at a big problem, whether it is susceptible to solution by divide and conquer techniques? I think the answer is “No, there is no such way. You have to go on your gut feelings. You have to make a leap of faith.” Negotiating the competing demands of the long-term and the short-term is hard, and the failures of our attempts to solve the problem of naming illustrate just how hard it is. I have colleagues — some of them very smart people, some of them very senior people — who believe that the true advantage and the correct use of namespaces is to allow semantic reuse. Once one person has invented an idea, has defined and tagged and named a certain thing, the rest of us don’t have to reinvent it; we can just refer to that definition using a namespace-qualified name. The problem is that when I need that solution in a particular context, I as a vocabulary designer, and almost any user of my vocabulary, would really rather have a name that is suitable in this context than whatever name seemed appropriate in the context in which the thing was originally invented or described.

Even if the name is the same, the users of a particular vocabulary do not typically want to be distracted by having to remember for each concept in the vocabulary not only what it is so that they can correctly identify what’s in their text but also where the idea came from so they can remember which namespace qualifier to use. When the HTML vocabulary borrowed some ideas from the Text Encoding Initiative, they did not put them in a TEI namespace. They integrated them into the HTML namespace. I don’t resent this at all. I would have done the same thing. I did do the same thing: when the TEI vocabulary stole ideas from earlier vocabularies, we certainly wouldn’t have put them in a namespace even if namespaces had been invented, because we wanted a coherent set of names that worked together. We didn’t want to be bound by other people’s naming disciplines. Naming is intimately associated with the way we think of things and what kind of things we think there are.

Architectural forms, on the other hand, got that bit right; they understand the importance of being able to rename things. And architectural forms work much better in that way. On the other hand, architectural forms, it must be admitted, work best if there is an architectural form that I know about and I consciously design a specialization of that architectural form. They work a little less well — sometimes not at all — for retrofitting descriptions of things that I invented in complete ignorance of the architectural form.

So that in effect despite the utterly sincere rhetoric about letting 100 flowers bloom and letting decisions about information be made in a distributed way, both namespaces and architectural forms succeed only in putting a thin veneer of distributed responsibility over the brutal reality that they work best if there is centralized, preconcerted decision-making. If we want the Jules Verne world of information technology, we’re going to have to do better, and I don’t know how we’re going to do better. Relying on identity of concepts is very strong when it works, but it works in a vanishingly small set of cases. Being able to rely on subsumption, which is easy with architectural forms — a little less easy but still possible with RDF and OWL — is a little better but also still effectively a minority case. If we allow individual projects and organizations and people to think about things and decide on their own ontologies — which is one of the fundamental values of the markup community — then we will find for better and for worse that people don’t all think the same way. The problems of translating from ontology to ontology will be — even though ontologies are formal languages, not natural languages — essentially the same as those of natural language translation. And purely automated tools will have exactly the same problems that purely automated machine-translation tools have. They will work in some restricted areas well enough that you don’t necessarily want to spend the time doing hand fixup. But in a lot of areas and in particular the areas we care most about ourselves, we will find those results unsatisfactory.

A few years ago a very prominent promoter of the semantic web gave a talk in which he described how important it is that pre-arrangement, prior agreement, not be necessary, that we be able to integrate information sources developed independently. As an illustration, he said “You and I don’t have to agree in advance about what terms to use for color. You develop your own color ontology and name things, and I’ll develop my own color ontology and name things, and when we need to communicate, then we’ll simply say what is identical to what.” And at that point, I burst out laughing in what I fear may have been rather a rude way. And I expected the entire audience to burst out laughing. As it happens, I was the only one so I may have been more prominent in my laughter than I had expected. [Laughter] I explained later to the person sitting next to me that ... well, perhaps it is just because I come from a humanities and philological background, and in first year courses in general linguistics, fundamental problems with untranslatability are typically illustrated with color terms. Henry Thompson has told me that anthropologically speaking it is true that if you have in different languages the same number of primary color terms, then the archetypal values for the colors tend to be similar. But if you have a language that has three primary colors built into it and another that has four or some other number, then, of course, the core values, the paradigmatic instances of the various color terms vary somewhat, and there is no possibility of one-to-one translation. Even within a language, historians of language and literature will point out to you that things change. If you speak modern French and read seventeenth-century French poetry, you might be surprised and curious to know why they considered the night to be “brun” or “brown.” It doesn’t look particularly “brown” to us. 
But if you study the color usage carefully, attending closely to the texts, you may conclude, with most historians of the French and German lexicons, that the term is a false friend. It didn’t look “brown” to them either. The word “brun” didn’t mean “brown”; it just meant “dark.” The specialization to the particular hue that we think of as “brown” came later. The same phenomenon occurs in German, which, of course, got it by direct translation from French.

When we design similar, but different, systems, our choices of names and metaphors vary because we think differently. We may think differently from other people, and we may think differently from ourselves on another day. Those differences in terminology and relations are deeply embedded in the way we think and in the way our organizations work. Even when you have a complex network of information that works perfectly in a complicated system and even when it’s very well documented, if you lift it out of its original context and try to transplant it into a different context, it frequently resists. Like mandrake roots, many of our systems will resist re-potting, and they can make a hell of a noise when we try to do it anyway. Sometimes, for those and other reasons, I wonder whether our daydreams about IT in the futuristic world of those Jules Verne illustrations are all idle fantasies and will come to nothing — will come to grief, I wonder, perhaps, on the fundamental, inescapable fact that we don’t always think about things in the same way. That difference extends not only to not agreeing about how things are, but to not agreeing about what things there are for us to have opinions about. Like many things, this may have been put best by Jorge Luis Borges in his story Tlön, Uqbar, Orbis tertius [Borges 1961]. (If you haven’t read it, you should read it. It is one of the world’s best tutorials on ontology.)

Hume noted for all time that Berkeley’s arguments did not admit the slightest refutation nor did they cause the slightest conviction. This dictum is entirely correct in its application to the earth, but entirely false in Tlön. The nations of this planet are congenitally idealist. Their language and the derivations of their language — religion, letters, metaphysics — all presuppose idealism. The world for them is not a concourse of objects in space; it is a heterogeneous series of independent acts. It is successive and temporal, not spatial. There are no nouns in Tlön’s conjectural Ursprache, from which the “present” languages and the dialects are derived: there are impersonal verbs, modified by monosyllabic suffixes (or prefixes) with an adverbial value. For example: there is no word corresponding to the word “moon,” but there is a verb which in English would be “to moon” or “to moonate.” “The moon rose above the river” is hlör u fang axaxaxas mlö, or literally, “upward behind the on-streaming it mooned.”

The preceding applies to the languages of the southern hemisphere. In those of the northern hemisphere (on whose Ursprache there is very little data in the Eleventh Volume), the prime unit is not the verb, but the monosyllabic adjective. The noun is formed by an accumulation of adjectives. They do not say “moon,” but rather “round airy-light on dark” or “pale-orange-of-the-sky” or any other such combination. In the example selected the mass of adjectives refers to a real object, but this is purely fortuitous. The literature of this hemisphere (like Meinong’s subsistent world) abounds in ideal objects, which are convoked and dissolved in a moment, according to poetic needs. At times they are determined by mere simultaneity. There are objects composed of two terms, one of visual and another of auditory character: the color of the rising sun and the faraway cry of a bird. There are objects of many terms: the sun and the water on a swimmer’s chest, the vague tremulous rose color we see with our eyes closed, the sensation of being carried along by a river and also by sleep. These second-degree objects can be combined with others; through the use of certain abbreviations, the process is practically infinite. There are famous poems made up of one enormous word. This word forms a poetic object created by the author. The fact that no one believes in the reality of nouns paradoxically causes their number to be unending. The languages of Tlön’s northern hemisphere contain all the nouns of the Indo-European languages — and many others as well.

We may never quite reach that Jules Verne world. But, like the character in Gibson’s story [Gibson 1981], parts of it may come to us, and that was the sense I got from some of the talks on topic maps and RDF at this conference. It may come to us. We may be able to use some of those techniques here even in a world where we can’t all agree on what things there are or what to call them. Whether we call it “vegetarian lasagna” or “mushroom lasagna,” the fact remains that when it is properly prepared, it tastes very good when you’re hungry.

Let’s go to lunch. [Applause]



My thanks to Tonya Gaylord of Mulberry Technologies both for making and for transcribing the tape of my remarks. Descriptions of audience reaction are hers. I have taken the opportunity to clean up the structure of some sentences which lost their way while in progress, and Tonya has supplied bibliographic references to the conference papers mentioned in my remarks.


I have scanned all the books by Knuth I have on my shelves trying to find the location and exact source of this bon mot, but thus far I have been unsuccessful. It does sound like the kind of thing he might say, but it’s also the kind of thing someone might make up and attribute to him, consciously or unconsciously.


[Altamimi 2007] Altamimi, Moody E., and Abdou S. Youssef. “A More Canonical Form of Content MathML to Facilitate Math Search.” In Proceedings of Extreme Markup Languages 2007®.

[Amodeo 2007] Amodeo, Roy. “Applying Structured Content Transformation Techniques to Software Source Code.” In Proceedings of Extreme Markup Languages 2007®.

[Birnbaum 2007] Birnbaum, David J. “Sometimes a table is only a table: And sometimes a row is a column.” In Proceedings of Extreme Markup Languages 2007®.

[Blažević 2007] Blažević, Mario. “Composable Templates.” In Proceedings of Extreme Markup Languages 2007®.

[Borges 1961] Borges, Jorge Luis. “Tlön, Uqbar, Orbis Tertius,” tr. James E. Irby. New World Writing 18 (April 1961); rpt. in J.L.B., Labyrinths: Selected Stories & Other Writings, ed. Donald A. Yates and James E. Irby. New York: New Directions, 1964. In the New Directions paperback, the passage quoted here appears on pp. 8-9.

[Bryan 2007] Bryan, Martin, and Jay Cousins. “MYCAREVENT: OWL and the automotive repair information supply chain: In Praise of the Noble OWL.” In Proceedings of Extreme Markup Languages 2007®.

[Carletta 2007] Carletta, Jean. “The NITE approach to overlap.” In International Workshop on Markup of Overlapping Structures.

[Chatti 2007] Chatti, Noureddine, Souha Kaouk, Sylvie Calabretto, and Jean Marie Pinon. “MultiX: an XML based formalism to encode multi-structured documents.” In Proceedings of Extreme Markup Languages 2007®.

[Dattolo 2007] Dattolo, Antonina, Angelo Di Iorio, Silvia Duca, Antonio Angelo Feliziani, and Fabio Vitali. “Converting into pattern-based schemas: a formal approach.” In Proceedings of Extreme Markup Languages 2007®.

[Demmings 2007] Demmings, Brian, Tomasz Müldner, Gregory Leighton, and Andrew Young. “Using XML Compression to Increase Efficiency of P2P Messaging in JXTA-based Environments.” In Proceedings of Extreme Markup Languages 2007®.

[DeRose 2007] DeRose, Steve. “Trojan Markup and other XML milestone-tagging techniques.” In International Workshop on Markup of Overlapping Structures.

[Dubin 2007] Dubin, David. “Instance or expression? Another look at reification.” In Proceedings of Extreme Markup Languages 2007®.

[Eckart 2007] Eckart, Richard. “Limits of XML Super-models for Linguistic Annotation.” In Proceedings of Extreme Markup Languages 2007®.

[Garshol 2007] Garshol, Lars Marius. “Semantic Search with Topic Maps.” In Proceedings of Extreme Markup Languages 2007®.

[Gibson 1981] Gibson, William. “The Gernsback Continuum.” In Universe 11, ed. Terry Carr. Garden City, NY: Doubleday, 1981; rpt. in Gibson’s Burning Chrome. New York: Arbor House, 1986; Ace, 1987.

[Jones 2007] Jones, Kevin, Jianhui Li, and Lan Yi. “Building a C++ XSLT Processor for large documents and high-performance.” In Proceedings of Extreme Markup Languages 2007®.

[Kay 2007] Kay, Michael. “Writing an XSLT Optimizer in XSLT.” In Proceedings of Extreme Markup Languages 2007®.

[Marcoux 2007] Marcoux, Yves, and Élias Rizkallah. “Exploring intertextual semantics: A reflection on attributes and optionality.” In Proceedings of Extreme Markup Languages 2007®.

[Marinelli 2007] Marinelli, Paolo, Fabio Vitali, and Stefano Zacchiroli. “Streaming validation of schemata: The lazy typing discipline.” In Proceedings of Extreme Markup Languages 2007®.

[Mason 2007] Mason, James David. “Organized Mapping: Documenting a Complex Musical System.” In Proceedings of Extreme Markup Languages 2007®.

[Passin 2007] Passin, Thomas B. “Easy RDF For Real-Life System Modeling.” In Proceedings of Extreme Markup Languages 2007®.

[Pepper 2007] Pepper, Steve. “Expressing Dublin Core using Topic Maps.” In Proceedings of Extreme Markup Languages 2007®.

[Ramalho 2007] Ramalho, José Carlos, Miguel Ferreira, Luís Faria, and Rui Castro. “Relational Database Preservation through XML modelling.” In Proceedings of Extreme Markup Languages 2007®.

[Saesmaa 2007] Saesmaa, Mikko, and Pekka Kilpeläinen. “On-the-fly Validation of XML Markup Languages using off-the-shelf Tools.” In Proceedings of Extreme Markup Languages 2007®.

[Sasaki 2007] Sasaki, Felix. “Localization of Schema Languages.” In Proceedings of Extreme Markup Languages 2007®.

[Thompson 2007] Thompson, Henry Swift. “Declarative specification of XML document fixup.” In Proceedings of Extreme Markup Languages 2007®.

[Witt 2007] Witt, Andreas, Oliver Schonefeld, Georg Rehm, Jonathan Khoo, and Kilian Evang. “On the lossless transformation of single-file, multi-layer annotations into multi-rooted trees.” In Proceedings of Extreme Markup Languages 2007®.

[Wrightson 2007] Wrightson, Ann. “Is it Possible to be Simple Without being Stupid?: Exploring the Semantics of Model-driven XML.” In Proceedings of Extreme Markup Languages 2007®.
