Slashing brambles and sating digital hunger

Neill A. Kipp
neill@kippsoftware.com
Sam Hunting
info@etopicality.com

Abstract

Computing, markup, and knowledge technologies can and should be commoditized through a radical simplification of Internet protocols, XML, and programming languages. Such simplification would open vast new markets to software development and information technology. Consumers would benefit because software and appliances would become cheaper and more reliable. In addition, consumers’ “digital hunger” for relevant and timely information would be more easily satisfied using an increasingly humane information appliance and computing environment.

Keywords: Technology Adoption

Neill A. Kipp

Neill A. Kipp is the Principal Architect of Kipp Software (kippsoftware.com). He has more than 12 years of experience as a software architect and keeps bringing creative product concepts to market. In his most recent accomplishment, he designed the user interface for the best-in-class Wealth Management System (multi-million dollar deployment to thousands of users and double-digit growth). He also invented AtomicML, a markup language whose documents are 20-50% smaller than XML and whose parsers fit in one page of code. He has written and published more than 20 articles and newsletters, has authored and maintained more than five different Web sites, has given at least six workshops, and has spoken at more than 12 industry and academic conferences. He is a member of the ACM (acm.org) and Denver’s Internet Chamber of Commerce (icc.org), is an instructor at CU-Denver, and is usability advisor to CTEK.biz, a Boulder-based business catalyst.

Sam Hunting

Sam Hunting is the president of eTopicality, Inc. (etopicality.com), a consultancy whose service offerings include topic map development, topic map training, content analysis, and DTD and schema development. He is a co-editor of the Topic Maps Model for ISO 13250. He was a founding member of TopicMaps.Org, which developed the XML Topic Maps (XTM) specification, and an editor of the initial release of that specification. He is a co-author of the XTM 1.0 DTD. He is the technical editor of XML Topic Maps: Creating and Using Topic Maps for the Web, published by Addison-Wesley. He is a co-founder of the GooseWorks project (gooseworks.org) for creating open source topic map tools. He is a frequent speaker at conferences and user groups. He has been working with markup technologies for more than 10 years.

Slashing brambles and sating digital hunger

Neill A. Kipp [Kipp Software]
Sam Hunting [eTopicality, Inc.]

Extreme Markup Languages 2003® (Montréal, Québec)

Copyright © 2003 Kipp Software Corporation & Robert S. Hunting. Reproduced with permission.

Digital hunger manifesto

Sating one’s appetite for order — taming the wild — is a recurring theme in evolution. Electricity combines methane, ammonia, hydrogen, and water into amino acids. Amino acids hunger for community and pull together to become single-celled organisms. Organisms evolve into plants and animals. Survival instinct urges animals to make tools, plant crops, explore, conquer, negotiate, civilize — and perform other information processing tasks. Visionaries at DARPA, Berkeley, CERN, and elsewhere identified the “digital wilds” to be tamed. They persevered through distractions like EBCDIC and bang-path addressing to achieve forward-thinking results: asynchronous messaging, overnight product delivery, and global collaboration.

But decay follows growth. The dinosaurs no longer roam. The Roman senate became corrupt and a dark age followed. Similarly, our online world is overgrown with unsolicited email, network viruses, popup advertisements, and identity theft. Information competes for our attention and interrupts us in our ongoing quest for knowledge. Even well-domesticated information — email, bookmarks, Web sites, blogs, Wikis, chat transcripts, word processing documents, and spreadsheets — seems to rot or sprout thorns when left idle. As a result, the digital wilds are again upon us — now we are information workers combating the thickets of unmanaged digital volume.

Sating our appetite for knowledge is the next frontier. Prehistoric humans beat back the brambles by fashioning tools. The spade replaced the hand. The plow and ox replaced the spade. Surely, we can improve knowledge tools and infrastructure to sate our knowledge appetite — to satisfy our digital hunger.

“Digital hunger” is the cognitive need [Maslow 1971] of an entity (human or otherwise) to acquire relevant information (and not irrelevant information) at the lowest possible cost (expended resources) to satisfy the ongoing need to make decisions, inspire action, negotiate agreements, compete in markets, combat entropy, and produce more information.

In the accompanying scenario (Figure 1), Harriet’s interaction with information is complicated and painful. Her authoring tools are not connected. The travel reservation system cannot find her frequent flyer number. Her search engine cannot give a list of hotel vacancies sorted by price and distance from the conference center. She is much more interested in her paper topic than in making last-minute hotel reservations! How often has she abandoned the Web feeling stupid, frustrated, and cheated?

Figure 1: Usage scenario

Harriet accesses many resources preparing for the conference.

Surely, we can provide a knowledge infrastructure that meets Harriet’s digital needs — and profitably so. As humankind commoditized the food-providing industry to sate its physical hunger, so must we commoditize the information industry to sate our digital hunger.

Technology shakeout

If history repeats itself, the computing industry will soon endure a “back-to-basics” reduction. Products follow a typical adoption curve: low adoption, surge, and decay [Moore 1991]. Subsequent growth after the surge is achieved by adding a product feature, extending a brand, monopolizing (i.e., ensuring vendor lock-in), or finding and serving an entirely new market. Inferior competitors are squeezed out as the market matures, leaving only a few major players to compete for shrinking margins as the price and volume wars begin. Recent technology shows considerable feature pile-up, which tells us that the digital marketplace is on its post-growth downward slope and the competition shakeout is nigh.

Some technologies have already lost ground. Feature-challenged FORTRAN, popular in the 1970s, bowed to the richer and simpler C language. Similarly, the modular Pascal language has been replaced by the object-oriented languages C++ and Java. XML simplified SGML in the 1990s, and syntactically convoluted Perl, once the language of choice for Web scripting, may have met its ultimate competition in the more straightforward Python and PHP.

Many more technologies have not yet shaken out. Popular protocols now include IP, UDP, TCP, SMTP, FTP, Telnet, Z39.50, rsh, ssh, HTTP, https, and SOAP. High-level languages include C++, Java, Visual Basic, JavaScript, JScript, ECMAScript, C#, LISP, Linda, Eiffel, Perl, Python, ASP, JSP, PHP, and XSLT. Document syntaxes now include SGML, HTML, PDF, XML, MathML, and ebXML. Image formats include GIF, TIFF, JPEG, BMP, PNT, EPS, WMF, EMF, PNG, SVG, and DrawML. Operating systems have piled up: DOS, Windows 3.x/95/NT/2000/CE/XP, Linux, LinuxCE, BSD, MacOS, OSX, Solaris, HP-UX, MUMPS, Palm, Symbian, and RIM. Software libraries have seen the greatest expansion: stdio, string, std, regexp, math, XPath, XSLT, SAX, DOM, MFC, COM, JAWT, Swing, JDBC, JAXB, and JAXM, to name a famous few. The JVM [Java Virtual Machine] runs Java, Jython, and even COBOL. Microsoft’s .NET Common Language Runtime platform hosts C#, VB.NET, JScript, and anything that used to speak COM (Figure 2). Legacy systems endure (as we saw with COBOL and the Y2K problem) while new technologies appear and demand more attention and more dollars.

Figure 2: Pile-up before shakeout

Operating systems, protocols, libraries, file formats, markup languages, programming languages, and platforms have been piling up.

Most of the aforementioned will shake out to become exhibits in the Museum of Dead Technology, along with punch cards, SNOBOL, and CP/M. What software technologies will survive? Those that best satisfy the business, popular, and engineering requirements of a system that sates digital hunger. In the end, consumers (not marketers) will declare the victors [Kotler, et al. 2002].

The “Mother of All Information Appliances”

Progress toward commoditization puts satisfying the consumer first. Having generic prescriptions, for example, means that more people can afford life-sustaining medication. Having an electrical grid with standards for sockets and current means that more people can afford power. Marketers combat commoditization on the decay side of the adoption curve through feature creep and lock-in, among other tactics [Shapiro 1998]. But these tactics simply entangle users and prevent them from satisfying digital hunger. What Harriet needs is commoditized computing.

For Harriet, having commoditized computing would mean having an integrated information appliance with the following features: (1) an online library of information sources with bibliographic management, (2) an authoring platform with collaboration facility and submission templates, and (3) an automated and federated travel and passport agent.

For Harriet, this appliance would find topical information, cull out the irrelevant information (possibly through an ongoing reference interview), present her the information with millisecond drill-down capability and with similarly efficient navigation to related topics, set up and perform statistical analysis (histogram, linear regression, trend spotting, etc.), set up and execute models (financial, political, biological, etc.), manage all her human contacts and communications, arrange face-to-face communication appointments, design and confirm her travel arrangements, follow her everywhere (she could not leave it in a taxi), work anywhere, and survive component obsolescence.

Harriet’s appliance would collect information; propose alternatives; present and accept offers on her behalf; act quickly, accurately, and efficiently; produce more high-quality information; make her lots of money; and not knowingly lose her any money. It would be aesthetically pleasing, fun to be around, polite, energetic (as opposed to being a desktop-sprawling, feature-bloated, mind-sucking, energy sink), not make her feel stupid, improve her quality of life and that of those around her, and provide her plenty of leisure time [Cooper 1999].

Harriet would appreciate metaphors like butler, chauffeur, agent, manager, secretary, staff, and jester [Morgan 1992], [Asimov 1950]. The system would involve features from the best appliances, PDAs, Web services, and speech interpreters. For Harriet, this all-encompassing, hyper-nurturing computing environment would be “MA” — the “Mother of all Information Appliances” (Figure 3).

Figure 3: The MA

Harriet needs the Mother of all Information Appliances.

The appliance, a user-facing client, would focus on display and communications while Internet servers would provide the large and expensive software components, including speech recognition and gross storage. Appliances, software, and operating systems would work together to provide a standard platform with standard connectors to effect this dream.

Unfortunately, today’s Internet, with its concomitant “World Wide Wait”, cannot support such a robust application. The Internet protocols are too slow and too prone to intrusion; languages are too complicated; and applications are not interoperable enough.

To bring MA into being, we must radically improve the four fundamental components of network applications: (1) communications paths between components, (2) message transfer between those components, (3) languages and libraries that implement message interpreters, and (4) application creation and interoperability.

First, however, for those not familiar with TCP/IP, we must give some detail on the infrastructure that we plan to improve.

TCP/IP and structured documents

The Internet is an unfathomably large, widely distributed, structured-document-transferring machine [Fielding 2000].

The Internet is an ever-growing network of hosts and routers that transfer packets of structured data following IPv4 [Internet Protocol standard, version 4]. IP packets have a header and a body. The packet header conveys (among other things) the host number of the destination machine; the port number of a listening process travels in the enclosed transport-protocol header. The body carries the payload — data to be sent to the receiving host. Packets pass through routers — relays that understand local characteristics of the network topology. Packets sent from one host to another need not arrive in the order they were sent, and they need not have traveled the same path to get to their destination. This increases each packet’s likelihood of arriving even if some paths between source and destination are cut or overloaded. Packets can be dropped or lost in transit due to cycle termination (expired time-to-live counters), repeated packet resolution, router queue overflows, and lossy physical connections [Postel 1981a], [Postel 1981b].

The transition to IPv6 has already begun. IPv6 boasts a simplified header structure and mandatory security support (IPsec). The new Internet will offer increased confidentiality, interoperability, normalization, and efficiency [Kent and Atkinson 1998].

The TCP [Transmission Control Protocol] uses IP. The TCP sender divides its message into IP packets and calls the operating system to send these packets to a TCP receiver, naming the target host number and port number in each packet. The TCP receiver assembles the packets that comprise the TCP message. If the TCP receiver detects that the message is missing packets, it requests that the sender retransmit them. TCP gives up (“times out”) if the transmission is not acknowledged in a timely fashion. Because of the acknowledgment component of TCP, receivers must participate synchronously in the sender’s transmission.

While IP is a one-way, packet-based protocol, TCP is a bidirectional streaming protocol. Synchronous TCP communication can continue indefinitely until terminated by sender, receiver, or interrupted transmission. Various port numbers have been reserved for different uses of TCP. FTP uses port 21. Telnet uses port 23. SMTP (send mail) uses port 25. POP3 (get mail) uses port 110. HTTP (the Web protocol) uses port 80.

Programmers can easily access TCP using the “sockets” API provided by IP-compatible operating systems. The server program (1) creates a socket, (2) binds to a port number, (3) listens for a connection, (4) accepts the connection, (5) receives and transmits data until done, and then (6) closes the socket. The client program (1) creates a socket, (2) connects to a port, (3) sends and receives data until done, and then (4) closes the socket. TCP programmers determine what data will be sent and when to close the connection.
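
As a concrete illustration, the following Python sketch mirrors those steps for a minimal echo service. The host, port, and payload here are ours for illustration, not part of any standard:

    import socket

    def echo_server(host="127.0.0.1", port=5000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # (1) create a socket
        sock.bind((host, port))                                   # (2) bind to a port number
        sock.listen(1)                                            # (3) listen for a connection
        conn, _addr = sock.accept()                               # (4) accept the connection
        data = conn.recv(1024)                                    # (5) receive and transmit
        conn.sendall(data)                                        #     data until done
        conn.close()                                              # (6) close the socket
        sock.close()

    def echo_client(host="127.0.0.1", port=5000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # (1) create a socket
        sock.connect((host, port))                                # (2) connect to a port
        sock.sendall(b"hello")                                    # (3) send and receive
        reply = sock.recv(1024)                                   #     data until done
        sock.close()                                              # (4) close the socket
        return reply

Note that nothing in these calls names the structure of the data exchanged; as the next paragraph shows, each application layers its own conventions on the byte stream.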

The structures of transmitted data differ widely among TCP applications. SMTP and telnet implementations use a line-based protocol with an end-of-transmission convention. Z39.50 follows a run-length encoding paradigm to transmit, among other things, MARC metadata records between library catalogs. The Web’s HTTP [Hypertext Transfer Protocol] uses a combination of single lines, text regions, HTML, binary data, end-of-section markup, and run-length encoding. SOAP (formerly, the Simple Object Access Protocol) uses XML for message structuring and HTTP for message transfer (Figure 4).

Figure 4: Reengineering SOAP

Typical SOAP transaction. Reengineering the highlighted areas should improve performance.

Novus ordo seclorum: TP, SX, and PL

We propose three technical innovations to commoditize computing: (1) the TP [Transfer Protocol] will handle communications paths between components; (2) SX [Simplified XML] will expedite message transfer between those components; and (3) the PL [Programming Language] will simplify the languages and libraries that implement message interpreters. The result will be socioeconomic change: a marketplace for commoditized computing components will emerge.

TP [Transfer Protocol]

We propose a simplified TP [Transfer Protocol] that uses IPv6 and implements the Facade design pattern [Gamma 1995]. Like TCP, TP connections follow the familiar client-server model. Instead of host number and port number, TP uses host name and protocol name, respectively. TP encapsulates access to IP packet data, including socket type, protocol type, host number, port number, and buffer size. TP also encapsulates the operating system interface for sending packet objects (Figure 5).

Figure 5: TP [Transfer Protocol]

TP [Transfer Protocol] implements the Facade pattern.
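
TP exists only as a proposal, but a Python sketch suggests how little of IP the Facade would expose to applications. Every name below (TPConnection, send, receive) is hypothetical:

    class TPConnection:
        """Hypothetical TP Facade: clients name a host and a service;
        socket type, host number, port number, and buffer size stay hidden."""

        def __init__(self, host_name, protocol_name):
            self.host_name = host_name          # e.g., "example.com"
            self.protocol_name = protocol_name  # e.g., "mail-submission"

        def send(self, forest):
            # Would serialize a forest of nodes into IPv6 packets (Figure 6)
            # via the operating system's packet interface.
            raise NotImplementedError("proposal sketch only")

        def receive(self):
            # Would reassemble incoming packets into events or a tree.
            raise NotImplementedError("proposal sketch only")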

Again unlike TCP, TP senders transmit packetized, serialized forests instead of byte streams. This can substantially reduce the transmission size of structured documents. Packet boundaries substitute for markup characters, and TP thereby obviates the need to buffer and parse the transmission stream (Figure 6).

Optimizing structured document transfer is paramount not only for XML interchange, but also for graphic standards (e.g., GIF, PNG, JPEG) that serialize colormaps, boundaries, and run-length encodings; email that serializes headers, body, and attachments; and PDF that serializes embedded page markers, fonts, and text areas. Indeed, all electronic documents are and must be structured (some less visibly so than those in XML). We believe that eliminating this overhead can make transmission of XML documents over TP at least twice as fast as over TCP.

Figure 6: TP and IP

TP [Transfer Protocol] builds IP packets directly from preorder tree traversal.

In TP, trees serialize as follows: The name and attributes of a node are placed in a packet (or packets, if overflow), possibly followed by tree-traversal commands. Command “push” indicates the next node is the first child of this node. Command “pop” indicates the list of siblings has ended and the next node is a sibling of a higher level node. The default action indicates that the next node is the sibling of the current node [Kipp 2000].
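
A minimal Python sketch of this serialization rule (the Node class is a stand-in, and packet overflow handling is omitted):

    class Node:
        def __init__(self, name, attrs=None, children=None):
            self.name = name
            self.attrs = attrs or {}
            self.children = children or []

    def serialize(node, out):
        out.append(("node", node.name, node.attrs))  # payload: name and attributes
        if node.children:
            out.append(("push",))                    # next node is the first child
            for child in node.children:
                serialize(child, out)
            out.append(("pop",))                     # the sibling list has ended
        # default (no command): the next payload is a sibling of this node
        return out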

Depending on configuration, the TP receiver can process the packetized events as they come in (as with SAX), or it can build a tree starting with the root node, child nodes, their children, and so on until the tree is completed (as with DOM). To build the tree from packets received, TP must keep a stack of node pointers that respond to the tree-building commands. On “push”, the stack is pushed, and the child is added to its parent. On “pop”, the stack is popped. The default action replaces the stack top with the next node, and the child is added to its parent.
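
Continuing the sketch above, the receiving side rebuilds the forest with exactly such a stack of node pointers:

    def deserialize(events):
        roots = []    # TP transmits forests, so several top-level trees may arrive
        stack = []    # open parents awaiting children
        current = None
        for event in events:
            if event[0] == "node":
                node = Node(event[1], dict(event[2]))
                if stack:
                    stack[-1].children.append(node)  # the child is added to its parent
                else:
                    roots.append(node)               # a new top-level tree
                current = node                       # default: replaces the stack top
            elif event[0] == "push":
                stack.append(current)                # the stack is pushed
            elif event[0] == "pop":
                stack.pop()                          # the stack is popped
        return roots

Round-tripping a tree through serialize and deserialize reproduces it exactly; a SAX-style receiver would simply consume the event tuples instead of building nodes.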

TP promises tremendous advantages over TCP and its application protocols. TP obsoletes port numbers — the operating system places the TP service on its one available communications portal (Singleton design pattern) and calls a named service. TP subsumes firewalls — the TP service rejects requests from specific clients or for specific services based on its configuration. TP removes the usability workaround that inspired “tunneling through port 80,” because any new service would be registered by name with the operating system, thereby directly defining and securing servers by the services they implement, not the numbered ports they listen to. TP expedites Web services — firms install the listener component on a host operating system and publish its API in distributed component directories; and clients attach to the service, send requests, and receive responses as if they were using a local library API. Finally, SOAP applications could use TP for direct transfer of envelopes (in place of HTTP and TCP).

TP furthermore allows the implementation of the REST [Representational State Transfer] distributed hyperdocument architecture [Fielding 2000]. TP can transmit hyperdocuments — it does so as forests. Server state need not be maintained between transmissions. Transmissions can be cached and forwarded. Transmissions can be initiated by the client. TP allows separation of data and formatting.

SX [Simplified XML]

SX simplifies XML, SAX, and DOM by attaching document generator components directly to document consumer components using TP. Visual editors will make creating, editing, and managing structured documents easier for users, and document workers will never have to bother with the details of the underlying XML, SGML, or HTML syntax. An improved transmission and archive format will make file sizes smaller and protocol transmissions more efficient without sacrificing usability. SX visual editors, by hiding the underlying data structure implementation from users, will implement the Facade design pattern.

SX has two node types: element and data (DOM, by comparison, has twelve). An element node is a multipurpose container with the following abstract structure: a name, a set of named attribute values (map, hash), and a list of zero or more contained child nodes. A data node simply contains data, be it text or binary (Figure 7).

Figure 7: SX [Simplified XML]

UML for SX [Simplified XML] nodes.

Any XML or SGML document can be parsed into this structure, ideally with a normalized conversion process. XML can be generated trivially from any instance of this structure for archival and historical purposes. Rows and cells of relational tables can be structured and communicated conveniently with SX. Graphs (e.g., hypertexts, topic maps) are also easily represented.
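
A Python sketch of the two node types and the trivial XML generation (the class names and the omitted escaping are our simplifications):

    class SxElement:
        """Multipurpose container: a name, an attribute map, and child nodes."""
        def __init__(self, name, attrs=None, children=None):
            self.name = name
            self.attrs = attrs or {}
            self.children = children or []

    class SxData:
        """Leaf node holding text or binary data."""
        def __init__(self, data):
            self.data = data

    def to_xml(node):
        # Trivial XML generation for archival purposes (escaping omitted).
        if isinstance(node, SxData):
            return str(node.data)
        attrs = "".join(f' {k}="{v}"' for k, v in node.attrs.items())
        body = "".join(to_xml(child) for child in node.children)
        return f"<{node.name}{attrs}>{body}</{node.name}>"

    # Example: a one-row relational table rendered as XML.
    row = SxElement("row", {"id": "1"}, [SxElement("cell", {}, [SxData("42")])])
    print(to_xml(row))  # <row id="1"><cell>42</cell></row>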

SX nodes can participate in the Command design pattern and, of course, the Memento design pattern (e.g., for marshaling or pickling). The trees of SX can be visited in the Visitor design pattern.

PL [Programming Language]

We propose the generic PL [Programming Language] to replace specific programming language syntaxes with an easy-to-use, well-managed, all-purpose syntactic abstraction (Figure 8). Already, Microsoft’s Common Language Runtime (CLR) and Sun’s JVM [Java Virtual Machine] support multiple simultaneous programming languages and libraries, so implementation of PL should pose no great difficulty.

Figure 8: PL [Programming Language]

PL [Programming Language] works with multiple syntaxes.

Visual PL editors would allow programmers to write code using their favorite object-oriented legacy syntax (e.g., Python, Java, C++) or design their own by setting preferences in the editor. Access to library calls would be managed as it is in today’s integrated development environments — users choose a specific method from a context-sensitive list of those available.
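
PL is speculative, but a toy Python sketch conveys the idea: one abstract program form, rendered on demand in whichever surface syntax the programmer prefers. The node shapes and renderers below are invented for illustration only:

    def render(node, syntax):
        # Render one abstract PL node in a chosen surface syntax.
        kind = node[0]
        if kind == "assign":
            _, name, value = node
            if syntax == "java":
                return f"int {name} = {render(value, syntax)};"
            return f"{name} = {render(value, syntax)}"   # python-style
        if kind == "add":
            _, left, right = node
            return f"{render(left, syntax)} + {render(right, syntax)}"
        if kind == "num":
            return str(node[1])

    # The same stored program, shown to two different programmers:
    program = ("assign", "total", ("add", ("num", 2), ("num", 3)))
    print(render(program, "java"))    # int total = 2 + 3;
    print(render(program, "python"))  # total = 2 + 3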

Software object libraries of PL could be easily created to implement SX, algorithms, database access, graphic user interfaces, and the various protocol applications of TP — email clients and servers, Web browsers and servers, etc.

PL software programs would be perfectly portable as long as the software libraries they used behaved identically across platforms. Large libraries need not be replicated across clients — rather, they can be implemented as a TP service and controlled, managed, and maintained by the service provider.

PL saves corporations money because all source code is stored in its abstract form and can be retrieved for editing in any syntax. Legacy systems (remember COBOL and Y2K?) would no longer be problematic because PL source code can be made available for editing and sharing in any programmer’s preferred syntax. For example, two programmers, one fluent in C# syntax and the other in Java syntax, can work on the same PL code base without causing problems. As a result, companies can hire excellent programmers of any PL-compatible language without imposing a syntactic prerequisite.

New roles and industries

We predict that the TP/SX/PL platform will begin to commoditize software application development. As we reach this common computing platform, a marketplace for the computing components will emerge, with the following players.

Network Engineers will install and maintain physical storage and network hardware and be responsible for setting up and maintaining TP systems and security.

Platform Engineers will create and maintain operating systems that support TP, virtual machines that run PL, and libraries that support SX. They will troubleshoot interoperability problems between versions of low-level libraries and components.

Application Engineers will use SX, maintain and aggregate PL programs, implement GUIs, and create and publish PL libraries. They will troubleshoot application integration issues and be able to upgrade application components as better licensing terms become available.

Application Architects will continue to collect user requirements and collaborate with Application Engineers on software application designs. Types of application architects include Information Architects (because, as in all publishing, the information is the application), Database Architects, GUI designers, Interaction Designers, and Usability Engineers.

Market Analysts will continue to identify user groups and user needs, obtain funding, and build teams of stakeholders, architects, and engineers to generate new software applications. With the very low overhead provided by this simplified and stable infrastructure, institutions and individuals will generate a hugely increased volume of low-cost applications and application components.

Energetic small teams (e.g., a market analyst, an application architect, and two application engineers) will replace entire departments [Brooks 1975]. They will populate desktops with focused tools that leverage common library components and server-based bulk processing to enable information workers to aggregate, eliminate, synthesize, and publish new and useful information with increasing quality and speed.

New markets will emerge, and new industries will serve them.

Component Brokers — the digital “Home Depots” — will provide client and server licenses to interoperable software components for applications.

Application Brokers will be digital interior decorators that piece together functional components to meet specific functionality needs.

Integration Diagnosticians will work like digital plumbers or auto mechanics to diagnose post-deployment application integration problems.

Application Integration Managers will be the digital general contractors for large scale projects.

Conflict Resolution Engineers will resolve contractual and liability problems resulting from substandard application construction. (You thought lawyers would become extinct?)

Simplification and the coming second renaissance

TP simplifies the Internet protocols for their most common payload — structured documents. SX simplifies XML by eliminating syntactic constraints. PL lets software developers focus on bug-free implementations without worrying about portability. Application programs will be easier and faster to develop as a result. Consumers will benefit because software can be made more cheaply and more reliably. Digital hunger will be satisfied far more easily in a computing environment that is potentially humane and nurturing.

The asymptote of the desktop application is the singular MA (the “meta” application) that meets the needs of information workers, such as Harriet, by employing its own auto-configuration capability. After a critical mass of workers has the MA technology, there will no longer be a digital divide — natural sciences will flourish, as will medicine (leveraging medical and pharmaceutical informatics), exploration, and the arts. As the renaissance followed the archaeological discovery of Greek and Roman manuscripts, the “neo-renaissance” will remove the info-glut — the (digital) earth that has covered past philosophy.

We conclude, therefore, that to maximize long-term human benefit, efforts that impede platform singularity should be eschewed and efforts that support singularity should be redoubled. Markup and knowledge technologists must endure the technological contractions as uncomfortable but necessary steps in the path to a more ordered and less hungry world.

Epilogue

Harriet sits at her computer monitor preparing a paper for a familiar conference. All her research and research tools are online — books, papers, proceedings — and available in a federated information space. Her library membership gives her unlimited access to older titles, and she has purchased licenses to a number of more recently published books and journals. The system keeps a detailed network of citations to all material she has read or visited, and maintains a bibliographic database of her favorite titles.

Harriet’s authoring tool works with a combination of typing and speaking. It permits her to allow other authors and reviewers to edit and annotate; color codes indicate where her coauthor has been working.

Harriet and her computer can converse (chatterbot call center) to make travel reservations, order products, and arrange meetings. She can license additional conversation modules.

She develops her ideas and fleshes out the outline with the help of her coauthor, who occasionally appears in a video window on her display, although four time zones and an ocean away. She checks over the final draft and drags the completed paper to the submission inbox provided by the conference. The system archives the submitted version and gently reminds Harriet of her dinner engagement, showing a map and offering to order transportation.

“No thanks, I’ll walk,” says Harriet, already headed to the door.


Acknowledgments

This work was sponsored by Kipp Software Corporation and eTopicality, Incorporated, who share its copyright. The authors wish to thank the anonymous conference reviewers whose detailed comments helped tame, shape, and focus the submitted version of this paper.


Bibliography

[Asimov 1950] Asimov, Isaac. I, Robot. 1950.

[Brooks 1975] Brooks, Frederick P., Jr. The Mythical Man Month. Addison Wesley, 1975.

[Cooper 1999] Cooper, Alan. The Inmates are Running the Asylum. Sams, 1999.

[Fielding 2000] Fielding, Roy Thomas. “Architectural Styles and the Design of Network-based Software Architectures.” Doctoral dissertation, University of California, Irvine, 2000.

[Gamma 1995] Gamma, Erich, et al. Design Patterns. Addison Wesley, 1995.

[Kent and Atkinson 1998] Kent, S., and R. Atkinson. “Security Architecture for the Internet Protocol.” 1998. http://www.ietf.org/rfc/rfc2401.txt.

[Kipp 2000] Kipp, Neill A. “AtomicML: An Extremely Usable Markup Language.” In Proceedings of Extreme Markup 2000. Graphic Communications Association, August 2000.

[Kotler, et al. 2002] Kotler, Philip, et al. Marketing Moves: A New Approach to Profits, Growth, and Renewal. Harvard Business School Press, 2002.

[Maslow 1971] Maslow, Abraham. The Farther Reaches of Human Nature. Viking Press, 1971.

[Moore 1991] Moore, Geoffrey A. Crossing the Chasm. HarperCollins, 1991.

[Morgan 1992] Morgan, Eric Lease. “A Day in the Life of Mr. D.” In Thinking Robots, An Aware Internet, and Cyberpunk Librarians, R. Bruce Miller and Milton T. Wolf, eds., pp. 151-156, 1992.

[Postel 1981a] Postel, Jon. “Internet Protocol.” 1981. http://www.ietf.org/rfc/rfc0791.txt.

[Postel 1981b] Postel, Jon. “Transmission Control Protocol.” 1981. http://www.ietf.org/rfc/rfc0793.txt.

[Shapiro 1998] Shapiro, Carl, and Hal Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press, 1998.


