In Praise of the Edge Case: [Opening remarks at Extreme Markup Languages® 2005]

B. Tommie Usdin

Abstract

Opening Keynote: The Extreme Markup Languages conference, the organizers are sometimes told, devotes too much time to edge cases. This complaint inspires reflection on the value of exploring, learning about, and learning from the technological edge. Remember: today’s mainstream application was yesterday’s edge case.

Keywords: Markup Languages

B. Tommie Usdin

Tommie Usdin has been working with XML and XSLT since their inception, and with SGML since 1985. Ms. Usdin chairs the Extreme Markup Languages conference. She was co-editor of Markup Languages: Theory & Practice, a peer-reviewed quarterly published by the MIT Press. Ms. Usdin has led teams doing document analysis and vocabulary (DTD/schema) development for medical reference works, scientific and technical textbooks, industrial manuals, legal treatises, and historical literature. She has taught SGML and XML to executives, managers, technical writers, publications staffs, and typesetters. Her courses have ranged from high-level overviews of the concepts underlying SGML and XML to the impact of conversion to these markup languages on the workplace, the technical details of DTD development and maintenance, document analysis, how to tag and correct autotagged documents, and the details of particular SGML and XML applications.

In Praise of the Edge Case

[Opening remarks at Extreme Markup Languages® 2005]

B. Tommie Usdin [Mulberry Technologies, Inc.]

Extreme Markup Languages® 2005 (Montréal, Québec)

Copyright © 2005 B. Tommie Usdin. Reproduced with permission.

Last year’s Extreme feedback

As those of you who are regulars at Extreme know, we ask that each of you fill out an evaluation form at the end of the conference. We even bribe you to fill them out: in the conference closing we do a drawing of the evaluation forms, and the winner gets a free conference registration; the next few names drawn get free books.

We read what you write on these forms very carefully. We want to know which talks you liked and which you disliked. We want to hear your suggestions on improving the conference.

I have always enjoyed reading the conference evaluations, and have learned a lot from them. One consistent pattern is that the same talks show up in the “favorites” and the “least-favorites”; most talks don’t appear in either list. From time to time we have a dud, a talk with no redeeming features that shows up on “worst” lists but not “best” lists. While sitting in the room watching a speaker lay an egg I squirm and feel sorry for the speaker and the audience, but in the larger scheme of things I don’t mind, because that is the price of taking risks on the program. (This year we took a few risks, too, hoping that they would introduce us to new ideas and knowing that we might have a dud or two.) We do listen to you on this: if a speaker gave a real dud at one Extreme, the criteria for that speaker getting in again are far more stringent; not only must the peer reviewers like the proposal, but the committee must also be convinced that the speaker has done the work necessary to make the newly proposed talk genuinely interesting.

Every year there are a few talks that show up once or twice in one list or the other; that’s to be expected. But every year there are two or three talks that are mentioned over and over, evenly divided between the best and worst lists. These are often the talks I found most exciting.

I have considered several possible explanations for the way a few talks each year polarize the audience.

A few years ago I taught a series of XML classes for a large company. You know the type of company; they have a big training department that administers internal training, and a whole host of policies and procedures that probably made sense once but are now followed just because “that’s the way we do things here”. One of those policies was that at the end of every class we had to ask every student to fill out an evaluation form, and we got a “bonus” if our average score was better than a certain threshold. I noticed that most of our students gave us 1’s and 2’s (the top scores), and in every class there were a couple of 5’s and 6’s. This bothered me, until I read the comments on the forms — and noticed that the 5’s and 6’s came from people who wrote on the comment lines things like “Best class I’ve taken this year” and “Really learned a lot”. So, I changed the procedure. Before I handed out the evaluation sheets, I wrote on the board in front of the class: 1 = Good, 6 = Bad. Then I handed out the forms. And my average score went way up!

But I don’t think it’s that people at Extreme don’t know whether they are saying they like or dislike the talks. I think they know exactly what they are saying. And we as a group are split when we hear a talk that “pushes the envelope” or “straddles the bleeding edge”. Some of us are excited about it, some are bored, and some get quite cranky — which makes me think perhaps some of us are a bit insecure when faced with something new.

Over the years the talks that have been on both best and worst lists have been about new ways of thinking about old problems, about new problems that can be addressed with old tools, and about case studies of projects (often academic projects) that have seriously pushed the limits of what can be done with XML. Some of the best and worst papers have been about graph theory, RDF, topic maps, parsing, SOAP, and artificial intelligence. I think of these papers as the core of Extreme; the papers we as a community both love and despise are what Extreme is all about. I take pride in helping to provide a place where these ideas can be aired, discussed, and grown.

So perhaps you can imagine my concern when I read in the evaluations of last year’s Extreme:

  • Too many edge cases; too little useful content
  • More mainstream talks
  • Ditch the overlap
  • Too much topic maps
  • More practical content

Wow! I was shocked. It may be that you won’t model overlap this year or next, but that doesn’t mean that it isn’t useful to think about it. And while you may not be using Topic Maps or RDF, that doesn’t mean that you can’t learn useful things from the talks about them.

Frankly, if you are here at Extreme because you want to learn tricks or techniques that you will be able to put into practice next week or that will make your job easier immediately, I think you are going to be disappointed. Either you got that yesterday in a tutorial, or you didn’t. Extreme is about ideas, not tools. (Actually, Extreme is MOSTLY about ideas, not tools. This year we do have a couple of talks about tools — that you could use next week. I have to be careful about absolutes.)

Anyway — what’s this about too many edge cases? The edge is where we grow. The edge is where new ideas come from. The edge is the future!

Besides, today’s edge case is tomorrow’s mainstream application

The edge case is about things we can’t do, or can’t do as well as we want to. Edge-case talks tend to be by people who are willing to live with failure — sometimes for a long time — because they think they have an interesting or important problem to solve. These tend not to be people who work in corporate IT departments, because corporate IT managers (and developers in other large-organization environments) are success-driven. They need to define relatively short projects in which they can be successful, on budget and on schedule. If someone asks them to do something they don’t know how to do (and they don’t know anyone who knows how to do it), they declare it beyond the state of the art and refuse the project! Back in the bad old days when I worked in corporate America, I had several very cool projects cancelled by IT directors who said, “That can’t be done,” by which they meant “I don’t know how to do it, we can’t do it with our existing tools and capabilities, I don’t know how to budget it, it’s outside my comfort zone, and if we try we may fail”.

I don’t blame these people; they were in a situation in which success was required. However, I do know that the things I wanted to do were possible, and that other people have since done most of them — although not necessarily easily.

Corporate IT people often dismiss as “nut cases” the people who work in other environments. People who can, and do, attempt the impossible. People who take “I don’t know how to do it and neither does anyone else” as a challenge, not a refusal. These people tend to work in academia, as consultants, independently, or in isolated corners of the corporate world. They are the untrainable — the people who didn’t learn, when they were told that they couldn’t do something, that they didn’t need to do it and weren’t allowed to want it.

But you know what? Sometimes those nut cases succeed. They manage to do the impossible, and they get up at conferences like Extreme and tell everyone what they did, how they did it, and why it is important. Or, more likely, they just tell us what they did and how they did it, leaving us to figure out why it’s important. And sometimes the rest of us, in the more mainstream world, look at what they did and say, “That would solve a problem I have been defining away, or a problem I’ve been failing to solve gracefully for ages”. And most of the time we just pack it away in our mental toolboxes, and months or years later may find a use for it.

Yesterday’s edge case is today’s “normal” use

Just last evening one of the speakers at this conference said, “I was really freaked out when I saw a computer that had lower case. If it could do lower case, it could do anything”.

I remember when text processing was an edge case. Computers were for use with structured data: numbers and things that could be expressed in simple lists of values (each of which could be assigned a number). This was, I was told, the nature of computers. Since computers worked in 0’s and 1’s, they were number-oriented. Prose was about words, and that was what typewriters were for. Using a computer to write text was misuse of the technology. Using computers to analyze text was, well, if not misuse of the technology, at least so expensive, so slow, and so silly that academic computer centers gave those of us who did it bottom priority. Our jobs took forever, ran only at night, and were postponed whenever the time was wanted by someone re-running an SPSS job with the same data and the same specifications in hopes of getting a result they liked better. Making lists of the words in a text corpus and making keyword-in-context and keyword-out-of-context (KWIC and KWOC) indexes were way past the bleeding edge. And nobody in the corporate world would consider such silliness. Well, try telling corporate computer users (that’s everyone except perhaps the people who vacuum the floors) that they can’t process their text! What was an edge case is now so mainstream that MOST of you have never touched a typewriter!

I remember when relational databases were an interesting concept, but everyone knew that there would never be enough computing power to actually put one into practice. They simply couldn’t be built. Relational databases were an impractical, theoretically interesting edge case. Relational databases are no longer an edge case.

It is interesting to contemplate mixed content in the context of edge cases.

Attempts to describe mixed content have led to some of the best, and silliest, analogies in the XML literature:

  • Chunks floating in soup, or sometimes tortellini in broth
  • Raspberry-swirl ice cream (as contrasted with Neapolitan ice cream, where the flavors are separated and in a predictable sequence)

It is mixed content, and the ability to handle it gracefully, that characterizes a text processing environment. If your tools can’t handle mixed content gracefully, they simply are not text processing tools. It used to be, a very very long time ago, that you really couldn’t do much useful with text in computers because you couldn’t handle what we now call mixed content. And then, long enough ago that most of you don’t remember it, and some of you actively don’t care because it was before the advent of internet time, along came the notions of generic markup and SGML, and there were ways to identify and handle mixed content.
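For anyone who has not met the term, a minimal sketch may help (the element names here are invented for illustration). In element content the children come in a predictable structure; in mixed content, character data is freely interleaved with elements:

  <!-- element content: children only, in a fixed pattern -->
  <!ELEMENT recipe (title, ingredient+, step+)>

  <!-- mixed content: text with elements scattered through it -->
  <!ELEMENT para (#PCDATA | emphasis | term)*>

  <!-- an instance of the mixed-content model -->
  <para>Add the <emphasis>cold</emphasis> butter slowly,
    until the <term>roux</term> thickens.</para>

The second model is the raspberry swirl: there is no predicting where in the text the elements will turn up.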

Nobody even considered creating SGML tools that couldn’t handle mixed content. I don’t know if this is because while we talked about using SGML for non-text applications we actually used it primarily for text, or because it didn’t occur to anyone to subset SGML in that way. So, for a while, in the markup community, mixed content was not an edge case.

But I am hearing now, from quite a few XML users and developers of XML tools, that mixed content is an edge case now; that “nobody needs it”, and that it should be removed from the next version of XML. Well, aside from noting that “nobody needs it” rarely means more than “I don’t think I need it for what I’m doing right now”, this is an interesting claim. It means that XML has come so far from its roots that the uses on which it was based are now considered, at least by some, to be unimportant edge cases.

Today’s edge cases are tomorrow’s use cases

I am confident when I say that some of what we see as an edge case today will be a mainstream use tomorrow. I am not as confident when I try to identify which, specifically, those cases will be.

Come to think of it, it is not necessarily easy to recognize an edge case if (or is it especially if?) you are elbow-deep in it. Few techno-geeks identify their work as being an obscure edge case; most of us think that what we are doing is important and interesting.

There are a few ways to identify edge cases:

Have you ever joined a group and known that there was something nobody was talking about? You get to a family reunion, and something is going on — but you don’t know what. There is a certain stiffness of conversation as everyone works hard not to be the first to openly acknowledge the elephant in the living room. Eventually you find out that your 15-year-old second cousin is pregnant, or that an uncle is being investigated for stock fraud, or that .... Well, I don’t know what the scandals in your family are. And the first thing that tipped you off is that there was something you were spending a lot of energy NOT TALKING ABOUT! In the technology world, if there is something we are spending a lot of energy not talking about, it’s a good bet it’s an edge case — and probably an important one.

A second clue that there is an edge case somewhere in sight is that people are being told to do something “because I said so”. Rules that are declared and enforced but that are not explained make me suspicious. Perhaps we are not allowed to “look in that cupboard” or “buy an ice cream cone” or “look ahead at data we haven’t reached yet” for good reasons with which we would be sympathetic. But perhaps it’s because Aunt Mary hides her liquor in that cupboard, we can’t afford ice cream this month, and we thought you weren’t smart enough to write tools reliably with look-ahead.

In the markup community, there are several things we are actively NOT talking about, and some that a few of us are talking about and scandalizing the rest. At Extreme, we are allowed to talk about these. (Perhaps that’s because we don’t have the managers of possible future XML users here, so we can’t scare them away.) Perhaps it is because this is a gathering of people who aren’t socially adept enough to know that they shouldn’t talk about it. Perhaps it is because this is a gathering of people who have banged into the elephants in the living room often enough to be confident that other people see them too.

At Extreme, we talk about overlap, but I haven’t seen much about it at any other markup-related events. (Actually, there is a TEI committee to deal with overlap issues, which I think is very cool. But I would be very surprised to see discussion of overlap at the November XML conference because that’s not something we discuss when the children — umm, excuse me — the customers, are present.)

From the other comments on the evaluation forms of the people complaining about edge cases last year, I think their prime complaint was the amount of time spent discussing overlap. And, in fact, we did have an overlap love-in last year. We have been having papers about overlap for years, and we have a few overlap papers this year.

It seems likely to me that some mechanism to recognize and deal with overlap will become mainstream in the reasonably near future — perhaps in the next 5 years. So far we have been ignoring this elephant in the XML living room, or defining it away. We specify that in XML, element boundaries may not overlap, and when we teach XML we explain this with hand gestures. Elements may not do this:

[Graphic: two element ranges whose boundaries cross each other]
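In angle brackets, the forbidden pattern looks something like this (a sketch; the element names are placeholders):

  <!-- not well-formed XML: b starts inside a but ends outside it -->
  <a>this range <b>and this one</a> cross</b>

Ranges may nest; they may not cross.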

Why? Because I said so! I remember, actually, telling that to a group of linguists who were learning SGML. One of the students stuck his hand up in the air, waved it frantically, and then shouted out, “You must be wrong about that. Nobody would be stupid enough to write a specification like that, and if they did, nobody would be stupid enough to adopt it. It’s like ... it’s like ... it’s like tying one hand behind your back and then changing a tire.” When I assured him that this really was what the specification said, and proved it by reading the relevant part of the specification, he gathered his stuff and stomped out of the class. He wasn’t going to waste his time learning such a stupid language. He was very unusual; most people are willing to accept rules like this because the spec says so.

Most of us are so obedient that we simply refuse to acknowledge that there are times when this doesn’t match the real world we are trying to model. I suspect that, right about now, some of you have just decided that this talk is about useless airy-fairy scholarly things and that nobody doing real, serious markup work needs overlap. You know that because it’s against the rules, because the people working on it all seem to be too academic to be cool, and because nobody working on overlap seems to have any money or any power. Nonsense. Who do you think is concerned with change control and effectivity markers? Who wants to know what has changed between one version of a document and another, and who changed it, and when? Who wants to be able to display the state of a constantly changing document as it was at any specified date in the past? Can you really dismiss the military and the courts as “airy-fairy scholars”? I don’t think so.
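To make the change-control point concrete, here is a hypothetical fragment (the ins element and its attributes are invented for the example): a tracked insertion that begins in the middle of one paragraph and ends in the middle of the next overlaps the paragraph structure, and cannot be expressed as well-formed nesting:

  <!-- ill-formed on purpose: the insertion crosses the paragraph boundary -->
  <p>The committee <ins who="editor2" when="2005-03-15">agreed that
    the schedule</p>
  <p>would slip</ins> by one full quarter.</p>

There are well-known workarounds (milestone elements, fragmenting the insertion, stand-off annotation), but each of them gives up some of the directness that ordinary element markup provides for everything else.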

But in my opinion overlap is the small grey elephant in the XML living room. The big pink elephant that we are all ignoring is subject access. We have a lot of people arguing passionately about the appropriate syntax and methodology for storing and manipulating metadata. We have a lot of absurd discussions about the line between metadata and content. We have “fights” between the RDF and Topic Map advocates, and peacemakers talking about how we can have both RDF and Topic Maps. But you know what? I don’t think it matters until we deal with the hard part. I don’t care whether an application uses RDF, or a topic map, or an old-fashioned linked list; if we don’t have the information we need to store, it doesn’t matter how we intend to store it! If we don’t have the information we need to search, it doesn’t matter what tools we have with which to search it! XQuery isn’t the elephant in the living room; RDF isn’t the elephant in the living room; Topic Maps aren’t the elephant in the living room; subject indexing is. As long as we can talk about using one of these technologies to provide high-quality access to large quantities of information without talking about how those index terms, subject codes, taxonomic designators, or topic indicators are going to be associated with the content, we are ignoring the biggest of the elephants in our living rooms.

At Extreme, we do reject submissions, but not for being edge cases

In fact, we sometimes reject material for being too mainstream.

One of the great pleasures in working on Extreme is reading the comments of the peer reviewers. We send anonymized copies of each submission to several reviewers and ask them several questions about the papers. Our reviewers do a great deal of work; some of our reviews have been multi-page detailed suggestions on how papers could be improved, and many of the reviewers write very articulate notes to the authors and/or committee.

Here are a few excerpts from the peer reviews of papers we did not accept:

You make a very interesting and important point, but it will be totally lost on the Extreme audience because they will walk out on you (or turn to their email) after two paragraphs. We have heard way too many papers that ‘place XML in context’ by starting with pre-history. We don’t need it, we don’t want to hear it, and we won’t listen to anything that starts this way.

This particular dead horse has been beaten so many times there’s little left but tattered flesh. Whatever the ‘holistic approach’ was supposed to be, it was never articulated, and if that was the point of this paper, we didn’t find that out until the next-to-last page.

The paper reads like marketing material. (Repeated claims of ‘ease of use’, bulleted list of ‘Features and Benefits’ on p. 1, etc.)

Relation of topic to XML is very weak. I don’t see anything interesting from XML point of view inside the article.

A thinly-veiled product pitch, and the thinly-veiled product isn’t even very new or interesting. These folks haven’t done their research very well — products like this are a dime a dozen.

It didn’t seem new or exciting to me. I’m tired of all the work involved in learning new ways to make programming less work.

Given the recent news of conference submissions being spoofed by automated software, I can’t say that this is not an example of such a submission.

A one-trick pony. This is a well-described idea that can be described in ten minutes or less. What would they say for 45?

On the other hand, we did take papers that got these responses:

Probably not rocket science, but rang a bell in my head (‘of course, that’s how I should do it’)!

Poke a little harder at your ‘cannot’s; I think some of them can.

Yes, Extreme is the appropriate venue because no one else in the world could possibly care. Oh, but track it. Most of us won’t care, either.

This paper is very theoretical: a lot of definitions, rules and theorems. It would be much easier for readers to follow if the authors could add more concrete examples explaining these definitions, rules and theorems. Or at least, the authors could provide some intuitive insights before presenting those formal definitions, formula, etc.

My major concern with this paper is its accessibility to non-experts. It’s true that at Extreme this paper can get the interest and scrutiny it deserves, and I am not suggesting the paper be ‘dumbed down’. Nevertheless, I think it would be strengthened by placing its argument into a wider context. What is the benefit of being able to conduct the kind of analysis the paper describes? How can this research be applied in practical ways to solve problems? Too much here is left for the reader to fill in — and when the reader (or listener) doesn’t have the depth that the authors do, filling in becomes impossible.

Somewhat before you hit your first example, I was pleading with you for examples. Now I’ll just plead with you for more examples — smaller examples — earlier.

Don’t explain the obvious. Don’t drag in RELAX NG gratuitously just because it’s fashionable.

The work seems solid but rather Quixotic. HyTime has already tried and failed to do this. There doesn’t really seem much chance that this will succeed, and there doesn’t seem to be that much point in trying.

I suggest that you assume that the audience has no clue about any of the imperatives that drive this work, or the motivations of its authors. (It’s a very valid assumption for you to make, by the way.) Start by outlining explicitly and briefly these imperatives and motivations. Then make sure that the concerns and criteria for design decisions, etc., are all explained in terms of these imperatives and motivations. Examples are far more compelling and understandable than disembodied theories and broad generalities.

By the middle of the abstract I was dismissing the author as a crank with a bee-inhabited bonnet. By the end of it, I was shaking my head in patronizing indulgence.

The only reason why not [to listen to this presentation] is that the paper is very clear and, having read it, I’m not sure the talk would or could add much.

Hard soap, yes. Soft soap, no. That is, yes, he’s selling something. But not for money, and the sales pitch is to shine a bright light on all the technical details.

I find the very premise of this paper aggravating. It naively accepts hierarchies as good and useful things, seems to think that what worked for objects should work for XML, etc. While I disagree with all of that, this does a splendid job of presenting that position and should be excellent fodder for further conversation.

Welcome to Extreme

So, here you are at Extreme, faced with a variety of presentations, all on topics related to markup in some form or another but with little else in common. Some are practical, some theoretical. Some are case studies describing ongoing projects. A couple of talks are about products!

I urge you to listen carefully to presentations about edge cases and think about how your world will change if that edge case becomes mainstream. Listen to people who are doing something strange or foreign, and think about how it can relate to what you do. Figure out what’s important about each presentation. Don’t wait for the speaker to tell you what’s important; assume that the speaker does not know what’s important to you. Open your ears, and your minds. Welcome to Extreme Markup Languages 2005.

