Extending Representation Capability: Representation Extension Through the Corresponding Metaphor Process

David Dodds
drdodds@open-meta.com

Abstract

Markup is a means of specifying tokens with standardized start-end delimiters. Each unique token is given its meaning through inclusion in a namespace, which defines its usage. A representation is an object with semantic properties. This paper is about a programmatic means of extending the capability of representation in computers. In many if not most computer systems, storage is purposely fine-grained, using either a numeric value or an enumerated type of some kind. This paper presents discussion and computer code, as examples of technique, for a means which broadly expands this fine-grained representation. An associated set of context mechanisms is shown to strongly leverage this process.

Representation is extended by the employment of metaphor-making and metaphor-perceiving software. One means of accomplishing this is provided by the mechanism of the Transfer and Corresponding Metaphor Process. What these are and how they may be implemented is discussed in some detail in this paper. Java, known by many, is the language almost universally used with XML technologies (along with (J)Python and XSLT), and a few brief examples of Java appear in this paper to provide the logical processing means that accompany the (spatial) ontologies shown.

Perhaps the single most important idea to take away from this paper is that programmatically performed metaphor processing brings to ontological items the controlled multiplicity or nuance of meaning that speakers of natural languages, such as English, enjoy (mostly unaware) with each of the words they use daily.

Keywords: Knowledge Representation

David Dodds

David Dodds has worked with computers since 1968, when he wrote Continuous Systems Models (CSMP) on IBM mainframe computers at university. Later he worked at Nortel (Northern Telecom / Bell Northern Research), where he designed and developed graphical interfaces and scientific and technical visualization systems and wrote text-understanding software; these were in-house projects to allow extraction of content from telephony specification documents for use by graphical-interface programs. He also wrote expert systems in C and Prolog. Prior to that, in university environments, he programmed a speech synthesis system, which produced the first ever machine-spoken Coast Salish, and designed and developed technical scientific models and simulations, including a simulated town council in a continuous-system Forrester Limits to Growth model. He was a Sessional Lecturer and taught computing science in a university computing science department.

He has been working for the last several years on the various emerging XML technologies. He was on the W3C SVG (Scalable Vector Graphics) workgroup that developed the specification for SVG 1.0, and on the Idealliance committee that developed the XML Topic Map (XTM) specification. David has published numerous papers in robotics and on fuzzy systems. Two of these papers were published in the SPIE proceedings Space Station Automation III. He was lead author of the book WROX Professional XML Meta Data. He also worked as technical reviewer for Kurt Cagle’s SVG book. He has presented a number of papers on XML SVG and RDF, and on intelligent and content-aware graphics systems.

David presented two papers at SVGOpen 2003, one on Accessing SVG Content Linguistically and Conceptually, the other on Programming SVG Advanced Access Using Metadata and Fuzzy Sets. He presented a paper, Natural Language Processing and Diagrams, about the use of ontologies and logic, at The 2004 International Conference on Machine Learning: Models, Technologies and Applications, which is part of The 2004 International Multiconference in Computer Science and Computer Engineering.

Extending Representation Capability

Representation Extension Through the Corresponding Metaphor Process

David Dodds [Open-Meta Computing]

Extreme Markup Languages 2004® (Montréal, Québec)

Copyright © David Dodds 2004. Reproduced with permission.

Background

    The Most Important Ideas In This Paper
  • Metaphor generation and discovery hugely and qualitatively increase the representational capability of computers.
  • Programmatic discovery that an object may continue to exist even after it is no longer visible (and that this is important).

A possible breakthrough idea is proposed: do hybrid computing, digital plus analog. Perform the fuzzy-set processing as analog calculations, in the manner of an analog computer such as CSMP (Continuous Systems Modeling Program). Use XSLT to interpret an XML data structure as input describing which analog computation to invoke and with what parameters (as the author did in the XGMML XSLT stylesheet named XGMMLContext1.xsl), or use Java or JavaScript external computation via XSLT extensions, with a processor such as Apache Xalan or SAXON. Note also that a schema may be used instead of a DTD. The ideas this paper discusses (what an ontology is, and the process of Corresponding Metaphor, the programmatic means whereby the computer compares two non-identical multi-component semantic objects and detects a metaphorical correspondence) are introduced in full in the Introduction below.

http://www.open-meta.com

Fuzzy set calculations comprise part of the system of this paper. They are implemented by means of continuous mathematical functions, which map a variable on the fuzzy function’s x-axis to a membership value ranging from 0 to 1. These mathematical functions are the analog part of the computation which implements this system. The analog calculation provides continuous coverage of the logical range 0:1 and is a computationally legitimate way of providing a non-binary, non-enumerated-value scope.
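The exact formulas appear in the figures below (from [Dodds 1988] and [Dodds 1989]); the following is a minimal illustrative Java sketch, not the author's production code, assuming the common logistic form with k1 as the curve's center and k2 as its slope:

    // Sketch only: sigmoidal fuzzy membership, assuming a logistic curve
    // with center k1 and slope k2 (see Figures 1 and 3).
    public class FuzzyDistance {
        public static double far(double x, double k1, double k2) {
            // Membership rises from 0 toward 1 as x increases past k1.
            return 1.0 / (1.0 + Math.exp(-k2 * (x - k1)));
        }
        public static double near(double x, double k1, double k2) {
            // Near is the complement of Far: 1 - f(x).
            return 1.0 - far(x, k1, k2);
        }
        public static void main(String[] args) {
            // With the center at 60 and a gentle slope, x = 90 is mostly "far".
            System.out.println(far(90.0, 60.0, 0.1));   // approx. 0.95
            System.out.println(near(90.0, 60.0, 0.1));  // approx. 0.05
        }
    }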

Figure 1: The fuzzy function f(x) calculates a sigmoidal curve, with k1 being the center of the curve, and k2 the slope of the curve. The concepts of Near-Far, for example, may be represented using this function. Far is computed by f(x), and Near is computed as its complementary function f'(x) = 1-f(x). [From the proceedings Dodds 1989, see the bibliography]
Figure 2: g(x) calculates the fuzzy function labelled “at_the_center”, k1 is the “center” or peak membership value of the abscissa (x-axis), k2 is the “Q” or slope of the function (“sides”). [From the proceedings Dodds 1989, see the bibliography]
Figure 3: Graphical depiction of the fuzzy function f(x) “sigmoidal”, in this case x ranging from x=0 to x=+120. MV axis (y-axis) is the grade of membership value. The sigmoidal labelled Far is the graph of f(x), while the sigmoidal labelled Near is its complement, 1 - f(x). [From the paper Dodds 1988, see the bibliography]
Figure 4: Graphical depiction of the fuzzy function g(x) “at the center”, in this case centered on x=0, x ranging from x=-100 to x=+100. MV axis (y-axis) is the grade of membership value. [From the paper Dodds 1988, see the bibliography]
Figure 5: Computation equation for “at the center” using particular values for k1 and k2.
Figure 6: Plan view of the surface areas designated as labelled in the picture by the use of fuzzy sets (functions). The labels Left and Right should be swapped. [From the paper Dodds 1988, see the bibliography]
Figure 7: Photograph of the “Grunt” autonomous robotic vehicle from Frontline Robotics. It has a (simultaneous) 360-degree viewing sensor plus infra-red.
Figure 8: Object - Concept - Symbol relationship diagram. Copyright John F. Sowa
Figure 9: Concept of representation diagram. Copyright John F. Sowa
Figure 10: Copyright John F. Sowa
Figure 11: Example display of an ontology from the Protege system
Figure 12: A representation of the SAW Situation Awareness Ontology.

Lakoff spatial metaphors have an “origin” (locus) or “starting place” (“zero”) based on the body of the person using the metaphor. With substantial ease, adult humans assign the origin of this space to the eyes of the self, or more generally to the outerbody surface (“my outside”) and, as needed, to the innerbody volume bounded by that outerbody surface (“my insides”). Humans are able to effectively “move” the origin from the body of the self to some other location. This other place then gives the “zero perspective”.


The SVG animation file which accompanies this set of slides is an instance of a deBono diagram combined with a Lakoff spatial metaphor. This particular deBono diagram depicts goal-oriented group behaviour in pursuit of a goal, blocked by an obstruction; Lakoff spatial metaphors use orientation in three-dimensional Euclidean space as representation. deBono diagrams, shown in Atlas of Management Thinking, depict particles, which have trails, progressing from a starting point along a path to some end point. The particles represent agents or actors (people) and the path represents spatial progress toward some location. The agents move toward a labeled goal area with the passage of time. There are “physical constraints” in the picture which prevent the paths from going just anywhere. These constraints represent contexts, such as “focus”. The diagrams have an implied “gravity” (a naive-physics model), and visual objects (like walls) are not permeable. There is a temporal factor as well as a spatial one.

Visually, the deBono diagram shows that the spatial progress of two “agents” is halted by an obstruction which moves into place as they approach near the goal, and the HML terms referenced depict a concomitant socio-cultural aspect to such deliberate deflection or blockage. This animation, then, visually presents the following metaphor through time. “Actors” proceed from initial starting points directly towards a goal area. The path represents effort or a journey, and in this example is oriented directly toward arriving at the goal area after some time has passed in transit. Just as the actors approach near the goal area, a physical barrier slides in front of them, preventing their further approach. A tunnel object in the picture prevents the path from going around either side to circumvent the obstruction. This is a visual depiction of constrained activity via the actors; it is a depicted context. The non-permeable barrier (obstruction) object visually represents a constraint. Another actor, the obstructor, causes the occurrence of the obstruction and enters the context of the animation near the end of the animation's duration. The animation stops with the actors blocked from achieving the collective goal.

The metaphor expressed via this computer representation is that people act through time to achieve a goal. Metaphorically they take a journey along a path and draw ever closer to the goal (a fixed goal in this case). Constraints in the world affect their path and control their efforts. Achieving a goal takes time; it is not instantaneous and not without constraints. An actor can introduce deflections or blockages of progress. There is concomitant social and other affect which precedes actions and which is produced as the result of actions and events. These are modeled by use of HML. deBono diagram paths and non-permeable barriers are visual metaphors for activity and constraints in the world.

The Lakoff spatial aspect of this visual animation of metaphor is that directionality is imposed on the visual space. The top is “UP”, and the bottom of the visual space is “DOWN”. While this seems commonsensical, computers have no such built-in notions and must be programmed with that information. Superimposed on top of deBono’s naive-physics model of agent paths is Lakoff’s metaphor “SUCCESS IS UP”. The closer the agent gets to the UP location (in this case the top of the picture), the greater his SUCCESS. This metaphor is often accompanied by “GOOD IS UP” as well, resulting in sentences such as “I felt really up having achieved my goals”. Because the two meaning contexts (deBono diagram and Lakoff spatial) are superimposed, the animated diagram can depict, in a way meaningful to the computer, agent activity and constraints on that activity, and can model or associate relevant HML terms with the events and the actors participating. Example: for most of the period of the animation the actors are increasingly nearing the goal area; they are becoming increasingly successful. When the obstruction to further progress occurs they no longer have increasing success. There is a measure of success, but it is incomplete. There is thwarting of goal achievement, from which frustration could be inferred. HML is able to provide representation for this, and a production system which uses HML terms can be used to infer (“predict” / detect) likely social outcomes from events in the actors’ physical world (such as deliberate thwarting). The Human Markup Language (HML) structured vocabulary provides a standardized reference for the representation of socio-cultural information conveyed and implied in the deBono (Lakoff) diagram. Contexts and inferencing provide the means for the computer to determine appropriate term use.

Constructing Metaphors: Because the metaphors are depicted visually and use XML SVG and other XML technologies, it is possible for the computer to do the following. The locations, colors, sizes, etc. of all the graphics elements constituting an animation can be read by a program and used in inferencing. The SVG code can be created by XSLT to produce animations anew. The animations could be designed to reflect a set of situations depicted by a graph structure such as context graphs, or those indicated by HML terms, which may have been participants in a chain of inferences. DAML is used in OpenCYC and SUMO to express taxonomic interrelationships among the general physical, cultural and social knowledge coded there. Terms like #$PurposefulAction and #$performedBy are related to other CYC concepts represented in such a way that a reasoner can perceive “connections” not directly stated in input.

Metaphors permit people to convey their understanding of one thing through the symbol associated with another. A representation of such metaphors provides a useful means to characterize the commonalities among different complex sequences of action that are observed, or among spatial correspondences.

Introduction

This paper discusses several ideas: what a context is, what an ontology is, and the process of Corresponding Metaphor. The latter is the programmatic means whereby the computer is able to compare two non-identical multi-component semantic objects and detect a metaphorical correspondence. Metaphors in this paper are data structures and ontological items; they are not passages from Shakespeare. The sciences of Physics and Astronomy legitimately and powerfully employ technical metaphors as means of explanation and elucidation, such as the popularized terms “Black Hole”, “Event Horizon” and “Dark Matter”.

A Context Ontology

An ontology defines the common words and concepts (the meaning) used to describe and represent an area of knowledge. We see code a little later in the paper illustrating an example of a computer ontology. The W3C designed a semantic information representation language, called RDF, to be the underpinning of the Semantic Web. It is considered hard to use and limited, so DARPA designed DAML-OIL to improve upon the representational capabilities of XML RDF. It was found through use to have some shortcomings of its own. The W3C has designed the successor to DARPA’s DAML-OIL, called OWL, the Web Ontology Language. Sometimes it is called Ontology Web Language just so that “OWL” makes sense.

Ontologies are meant to represent concepts and their relationships. Ontologies are like nouns; what is needed is the equivalent of a verb to accompany such nouns so that actions (in the computer) may be undertaken. Actions are things like conditionals and assertions: respectively, “is something the same as something else” and “put this value into this variable name”. Later in this paper we see easily understood examples of Java code showing actions executed in concert with the respective ontology.

Following is our first example of an ontology, a snippet from a larger ontology located at the University of Ottawa, Canada. This ontology uses the DAML-OIL markup language, an XML namespace, and is presented here as an example of what the contents of an ontology for context might look like. Note that terms in a context ontology, such as actorLocation, have, in my system, associated logic external to the ontology which is used to obtain or set a “value” for that term. (No such code appears to be posted, associated or referenced for the uottawa.ca/Context ontology at the UOttawa site.) A further code example later in this paper shows how this associated external logic works in my system. The author has provided an external XSLT program, named uottcontext1.xsl, which scans the uottawa.ca/Context ontology and outputs the names and parameters of the function programs activated by the content of the ontology, as used to further define context starting from the basis context called POV. Ontologies tend to be verbose, so complete ontologies are not shown in-line in this paper. Those interested in seeing the complete uottawa.ca/Context ontologies need only link to them at http://site.uottawa.ca/Context#.

An Ontology in DAML-OIL

    uottawa.ca/Context  (a small subset is shown here)
    
    <daml_oil:ObjectProperty rdf:ID="actionExpectedDuration">   
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#ActionProfile"/>
      <daml_oil:range rdf:resource="http://www.w3.org/2000/10/XMLSchema#duration"/>
    </daml_oil:ObjectProperty>
    
    <daml_oil:ObjectProperty rdf:ID="precondition">   
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Application"/>
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Resource"/>
      <daml_oil:range rdf:resource="http://www.daml.org/2004/03/daml+oil#Thing"/>
      <daml_oil:range rdf:resource="http://www.w3.org/2000/10/XMLSchema#anyType"/>
    </daml_oil:ObjectProperty>
    
    David's comment: One of the components of my notion of "situatedness"; the 
    location of any actors involved in the current context. Note that the ontology 
    defines actorLocation but is not able to "get" a (location) value for it. That 
    is up to the (executable) logic which exists and is used outside the ontology.
    Later in this paper we see code that does this. The uottawa.ca/Context ontology 
    does not provide such code.
    
    <daml_oil:ObjectProperty rdf:ID="actorLocation">   
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Actors"/>
      <daml_oil:range rdf:resource="http://site.uottawa.ca/Context#Location"/>
    </daml_oil:ObjectProperty>
    
    David's comment: One of the components of my notion of "situatedness"; the 
    identification of exactly what the constituents of the context are. Note that
    the ontology defines Physical but is not able to "get" a value for it. That 
    is up to the (executable) logic which exists and is used outside the 
    ontology. Program logic, in a Java program called Physical, provides the 
    information for ontology slots like Physical (the ontology item) such 
    that all of the ContextFeatures in the Context are determined. (i.e. an OWL 
    version of the Context ontology would have a slot for 
    Context.ContextFeature.Physical, and the slot would be populated with a 
    value by the Java code.)
    
    <daml_oil:Class rdf:ID="Physical">   
      <rdfs:subClassOf rdf:resource="#ContextFeature"/>
    </daml_oil:Class>
    
    David's comment: Here we see how the ontology conveys to the computer the 
    distinction between an actor who is a person (a human), and an actor which 
    is not a human (software or a machine). The distinction is made via the 
    disjointWith part of the statement below. There are scenarios where the 
    expected sophistication of an action can be estimated by knowing what kind 
    of actor is performing the action. Through inferencing, then, the system 
    would be able to determine that evidence of stealth should be sought when 
    the actor is not an agent. Such inferencing on the Context provides the 
    means to add new, otherwise tacit, items to the Context, thereby making it 
    more explicit or concrete in its coverage of the situation.
    
    <daml_oil:Class rdf:ID="Agent">   
      <rdfs:subClassOf rdf:resource="#Actors"/>
      <daml_oil:disjointWith rdf:resource="#Person"/>
    </daml_oil:Class>
     
    David's comment: Notice with these next three predicates that knowledge of
    Actors and Roles (which is based on their appearance in the resource parts 
    of the daml:oil expression of the knowledge-base) is used to detect and define
    the constituency of the context (under construction via use of these 
    predicates). A context of the SAW Situation Awareness type is populated / 
    constructed incrementally / dynamically by utilizing the (usually Java-based
    or XSLT) action code associated by name with such predicates, and the 
    further predicates whose relevancy is based on discovery (of, in this case, 
    other Actor- or Role-related ontological knowledge associations). 
    Simple relevancy is discovered by searching the predicate base for (rdf) 
    resources which are Actors or Roles, in this case. One might here liken 
    Actors and Roles to context focus (POV), which may be used to inform the 
    discovery process. (This might be performed by an XSLT search of the 
    predicate base looking, in a predicate's rdf:resource, for "Actor" and 
    "Role".)
     
    <daml_oil:ObjectProperty rdf:ID="actorRole">   
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Actors"/>
      <daml_oil:range rdf:resource="http://site.uottawa.ca/Context#Role"/>
    </daml_oil:ObjectProperty>
    
    <daml_oil:ObjectProperty rdf:ID="thingName">   
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#ThingRole"/>
      <daml_oil:range rdf:resource="http://www.w3.org/2000/10/XMLSchema#anyType"/>
    </daml_oil:ObjectProperty>
    
    <daml_oil:ObjectProperty rdf:ID="hasCurrentAction">   
      <daml_oil:range rdf:resource="http://site.uottawa.ca/Context#Action"/>
      <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Actors"/>
    </daml_oil:ObjectProperty>
    
    The appearance (presence) of the predicate "hasCurrentAction" in the current 
    context is used by my system to activate the action-code of the same name 
    (i.e. hasCurrentAction()). A simple looping Java-based or XSLT-based scan of 
    the current context ontology (its data structure) can be used to detect any 
    new entry in the context, such as the initial or new appearance of the 
    predicate "hasCurrentAction", and then run the function of the same name, 
    hasCurrentAction(), which examines (other) data structures for information 
    relevant to detecting and determining what any identified actors are 
    currently doing. (In the example used in this paper, a diagram animation, 
    anim01.svg, there are three actors, and they are identified as such because 
    they show change in location ("movement") in the animation. The function 
    hasCurrentAction() need only, in this case, find the "animate" SVG term in 
    the SVG diagram picture, i.e. in the SVG code.) Later in this paper we see 
    how the SVG program is examined for other information, such as tacit spatial 
    relationships. That is also driven by ontological and contextual knowledge.
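As an illustrative sketch of that scan-and-dispatch loop (the actual system uses XSLT, such as uottcontext1.xsl, or Java; the ActionLibrary object and predicate list here are hypothetical), Java reflection can run the action method whose name matches a predicate found in the context:

    import java.lang.reflect.Method;
    import java.util.Iterator;
    import java.util.List;

    public class ContextScanner {
        // Run the zero-argument action method named after each predicate
        // found in the current context, e.g. hasCurrentAction().
        public static void dispatch(List predicateNames, Object actionLibrary)
                throws Exception {
            for (Iterator it = predicateNames.iterator(); it.hasNext();) {
                String name = (String) it.next();
                // Look up a public method whose name matches the predicate.
                Method action = actionLibrary.getClass()
                                             .getMethod(name, new Class[0]);
                action.invoke(actionLibrary, new Object[0]);
            }
        }
    }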
    
    

A context may be represented in various ways; the two used in the system under discussion are the ontology, which may be instantiated as a knowledge base, and a graph structure, which may be used as a network. The graph structure used conforms to the ISO definition describing a topic map. An example of an ontology used as a context is http://site.uottawa.ca/Context#, shown in part above, which uses the DAML-OIL markup namespace and defines the context by defining a collection of classes, subclasses and object properties. This is an example of the use of an ontology to define a context. It is not the definition source of the context (mechanism) used by the system being explained in this paper; it is simply an instance of one context defined in the real world, and it is available for linking to. Context as used in the context mechanism of this paper was described in [Dodds 2004] and is provided via http://www.open-meta.com so that it can be seen by anyone interested. It is too large to include in-line in this paper.

A Context in XGMML graph structure

Here is a context represented by a graph structure. The XML namespace is called XGMML [the eXtensible Graph Markup and Modeling Language], used here with a few additions of my own. The attribute element “att” contained in a “node” element provides the information constituting a property-list. Nodes and edges may take weight values. A timestamp value may be a property-list member, and in some circumstances may be the sole discriminator which distinguishes one otherwise identical sub-graph from another. This represents the evolution of semantic data with the passage of time, and is one of the ways the system represents (and recognizes) “persistence”.

    <?xml-stylesheet type="text/xsl" href="xgmmlContext1xsl.xsl"?>  
    <!DOCTYPE graph SYSTEM "xgmml.dtd"> 
    <graph xmlns="http://www.cs.rpi.edu/XGMML" > 
    <node id="1" label="timestamp" weight="0">
    <graphics type="circle" x="270" y="90" h="10" >
    </graphics>
    <att name="systemtimestamp" value="2004-06-27T17:47:33+07:00"/> 
    <!--EXSLT gets time and date from system puts it in node-->
    </node>
    <node id="2" label="AtRight" weight="0">
    <graphics type="circle" x="350" y="190" h="10" >
    </graphics>
    <att name="object" value="1"/>
    <att name="object" value="2"/>
    <att name="origin" value="upperleft"/>
    </node>
    <node id="3" label="object1" weight="0">
    <graphics type="circle" x="190" y="190" h="10" >
    </graphics>
    <att name="xcoord" value="110"/>
    <att name="ycoord" value="35"/>
    <att name="origin" value="upperleft"/>
    </node>
    <node id="4" label="object2" weight="0">
    <graphics type="circle" x="290" y="290" h="10" >
    </graphics>
    <att name="xcoord" value="120"/>
    <att name="ycoord" value="39"/>
    <att name="origin" value="upperleft"/>
    </node>
    <node id="5" label="object1" weight="0">
    <graphics type="circle" x="390" y="390" h="10" >
    </graphics>
    <att name="xcoord" value="210"/>
    <att name="ycoord" value="27"/>
    <att name="origin" value="upperleft"/>
    </node>
    <node id="7" label="object2" weight="0">
    <graphics type="circle" x="490" y="090" h="10" >
    </graphics>
    <att name="xcoord" value="220"/>
    <att name="ycoord" value="29"/>
    <att name="origin" value="upperleft"/>
    </node>
    <edge source="2" target="4" weight="0" label="Edge from node AtRight to node 
    object2" >
    </edge>
    <edge source="1" target="2" weight="0" label="Edge from node timestamp to node 
    AtRight" >
    </edge>
    <edge source="2" target="3" weight="0" label="Edge from node AtRight to node 
    object1" >
    </edge>
    </graph>
    
The context graph immediately above is purposely very simple, so that visual inspection of it, and reading in text what it represents, should provide an immediate grasp of what it is and what it is used for. A context graph in a production system would not be this simple or small; it would have many more nodes and links, representing (other) data objects, (other) spatial and temporal relationship predicates, other timestamps, and graph-edges (“connections”) to ontological structures in RDF, DAML-OIL, Protege, etc., some of which may represent additional contexts, including “situation”. An XGMML example of a context graph diagram rendered as an SVG visualization, with system datetimestamp, is available: the XSLT program “xgmmlContext1xsl.xsl”, downloadable at http://www.open-meta.com, scans a context graph network and translates it into a viewable SVG diagram picture, which makes it easier to visualize the relationships among a complex graph-node structure.

Figure 13: Graph Diagram Visualization

This XML markup shows an XGMML graph with a timestamp node at id=1 which points at a relationship node, AtRight, at id=2; that node in turn points at the object1 and object2 nodes at id=3 and id=4 respectively. The upshot is that a graph structure is defined of two objects in the relationship AtRight, occurring at time-date 2004-06-27T17:47:33+07:00. At some later time or date a timestamp node, say id=8, might point at id=2, but then id=2 might be pointing at id=5 and id=7. This is how change and the tracking of events are represented using a graph structure. The relationship might stay the same, as in AtRight, but the objects in the relationship change with time. A “fixed place” or “anchor” node labelled “now” could successively point at new/current timestamp nodes. Edges can be labelled with the timestamp at the time of their generation. In this way we have a (re-visitable) synchronic episodic “memory”. (These are the basic elements of a representation for A. Damasio’s “proto-self” sequences.)
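As an illustrative sketch (not the system's actual code; the file name is hypothetical, and the referenced DTD is assumed to be locally resolvable), the timestamp-to-relationship traversal just described can be performed over the XGMML with the standard DOM API:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class XgmmlTrace {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse("contextGraph.xml");
            NodeList edges = doc.getElementsByTagName("edge");
            NodeList nodes = doc.getElementsByTagName("node");
            // Follow every edge leaving the timestamp node (id="1") to the
            // relationship node it points at, and report that node's label.
            for (int i = 0; i < edges.getLength(); i++) {
                Element edge = (Element) edges.item(i);
                if (!edge.getAttribute("source").equals("1")) continue;
                String target = edge.getAttribute("target");
                for (int j = 0; j < nodes.getLength(); j++) {
                    Element node = (Element) nodes.item(j);
                    if (node.getAttribute("id").equals(target)) {
                        System.out.println("timestamp points at: "
                                + node.getAttribute("label"));
                    }
                }
            }
        }
    }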

The node attribute named “origin” indicates the type of SVG-axis graphic-origin for the SVG object represented. The default origin is in the upper left corner, but SVG allows the origin to be redefined. By including the origin as an item in the context, with its value obtained from the node attribute value (i.e. a member of the property-list for the node), the system can know which test or condition to use, because the mathematical logical-comparison (for spatial orientation information) changes as the SVG origin location changes.
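A small hypothetical sketch of that origin-dependent test follows; the method name, and the convention that the test asks whether the second object lies to the right of the first, are assumptions for illustration:

    // Sketch only: pick the comparison appropriate to the SVG origin read
    // from the node's property-list. With the default "upperleft" origin,
    // x increases to the right; if the x axis is redefined to run the
    // other way, the test must be inverted.
    public boolean atRightForOrigin(String origin, int x1, int x2) {
        if (origin.equals("upperleft")) {
            return x1 < x2;   // object at x2 lies to the right of object at x1
        } else {
            return x1 > x2;   // reversed x axis: greater x is further left
        }
    }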

The listing of the DAXSVG ontology (in RDF/S) follows. It is also available at http://www.open-meta.com. The DAXSVG spatial ontology is written in RDFS and was explained in [Dodds 2001]. It has an accompanying Java logic-set which provides the requisite actions for this ontology. While the RDFS version of this ontology cannot take values, a Protege version of the ontology can have slots defined and populated via the Java code.

An Ontology in RDFS, the DAXSVG Ontology

    <!-- Copyright 2001 - 2004 David Dodds
    Context { 
              change of state, 
              occurrence of event, 
              occurrence of particular pattern of states and or events, 
              occurrence of a sequence (detected via HMM Hidden Markov Model), 
              or graph (such as XGMML)
    }
    -->
    
<rdf:RDF xml:lang="en"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">


<rdfs:Class rdf:ID="SvgEntity">
        <rdfs:comment>The class of SVG entities, 
        referenced by their id in the SVG code. 
        THIS schema is modeled on the schema at 
        http://www.w3.org/2000/01/rdf-schema#Resource. 
        It is intended for use in tandem with it. It is called daxsvg-schema-rdf.xml
        </rdfs:comment>
        <rdfs:subClassOf rdf:resource="http://www.open-meta.com/2000/01/
        rdf-schema#Resource"/>
</rdfs:Class>


<rdf:Property ID="Near">
        <rdfs:comment>has a degree of nearness (by value). g1(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Far">
        <rdfs:comment>has a degree of farness (by value). complement of near, 
        1 - g1(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Large">
        <rdfs:comment>has a degree of largeness (by value). g2(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Small">
        <rdfs:comment>has a degree of smallness (by value). complement of large, 
        1 - g2(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Big">
        <rdfs:comment>has a degree of largeness (by value). g2(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Little">
        <rdfs:comment>has a degree of smallness (by value). complement of large, 
        1 - g2(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Fast">
        <rdfs:comment>has a degree of quickness (by value). g3(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Slow">
        <rdfs:comment>has a degree of slowness (by value). complement of fast, 
        1 - g3(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Before">
        <rdfs:comment>has a degree of occurrence (by value) at time 
        values monotonically decreasing from a value t1, on the T(x) 
        timeline. g31(x). a named ordered collection may be used instead 
        of the timeline.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="During">
        <rdfs:comment>has a degree of occurrence (by value) at time 
        values between t1 and t2, on the T(x) timeline. g32(x). a named 
        ordered collection may be used instead of the timeline.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="After">
        <rdfs:comment>has a degree of occurrence (by value) at time 
        values monotonically increasing from a value t2, on the T(x) 
        timeline. g33(x). a named ordered collection may be used instead
        of the timeline.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Often">
        <rdfs:comment>has a degree of frequency (by value). g4(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Seldom">
        <rdfs:comment>has a degree of infrequency (by value). complement of often,
        1 - g4(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Simultaneously">
        <rdfs:comment>has a degree of multiplicity (by value). g5(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Egressive">
        <rdfs:comment>has a degree of departureness (by value). g6(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Ingressive">
        <rdfs:comment>has a degree of arrivedness (by value). complement of 
        egressive, 1 - g6(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Changing">
        <rdfs:comment>has a degree of changingness or quickness (by value). 
        g7(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Static">
        <rdfs:comment>has a degree of unchangingness (by value). complement of 
        changing, 1 - g7(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Touches">
        <rdfs:comment>has a degree of same location (by value). same location as 
        object's x,y g14(z)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="AtRight">
        <rdfs:comment>has a degree of to the right (by value). g15(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="AtLeft">
        <rdfs:comment>has a degree of to the left (by value). g16(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Center">
        <rdfs:comment>has a degree of [at the] centerness (by value). g8(x). 
        SVG (maxx - minx) / 2, (maxy - miny) / 2</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Periphery">
        <rdfs:comment>has a degree of outerness (by value). complement of center,
        1 - g8(x)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Containz">
        <rdfs:comment>has a degree of containingness (by value). uses center g8(x). 
        [Note: Contains (with an s) is already defined in the schema 
        axsvg-schema-rdf at w3.org] </rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Lacunarity">
        <rdfs:comment>presents a curve (by value). Its fractal lacunarity L. g9()
        </rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="FractalDimension">
        <rdfs:comment>presents a curve, surface, or volume (by value). Its fractal 
        dimension D. g10()</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Significance">
        <rdfs:comment>has a degree of importance (by value). Product of relevance 
        g11() and novelty g12()</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Top">
        <rdfs:comment>absolute marker, maximum y value in 2D reference system. (SVG 
        miny, origin upper left)</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Bottom">
        <rdfs:comment>absolute marker, minimum y value in 2D reference system. SVG 
        maxy</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Left">
        <rdfs:comment>absolute marker, minimum x value in reference system. 
        SVG minx</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Right">
        <rdfs:comment>absolute marker, maximum x value in reference system. 
        SVG maxx</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Front">
        <rdfs:comment>absolute marker, 0 degree angle in reference system 
        x axis</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Back">
        <rdfs:comment>absolute marker, 180 degree angle in reference system 
        x axis</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Above">
        <rdfs:comment>absolute marker, 90 degree angle in reference system 
        z axis</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Below">
        <rdfs:comment>absolute marker, -90 degree angle in reference system 
        z axis</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Inside">
        <rdfs:comment>has a degree of insideness (by value). g27(x). inside is the 
        containee perspective of containz</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="Self">
        <rdfs:comment>reference to outer extents of (containz) subjective reference
        system</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="colour">
        <rdfs:comment>numerical representation of colours using rgb model
        </rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="morphology">
        <rdfs:comment>description vector defining shape of something. includes 
        fractal D and L.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="size">
        <rdfs:comment>a second order descriptor which is a relative measure of
        object area or volume. The SVG canvas information contained in the SVG 
        element of a picture is usually the basis of the relative comparison. 
        This is used as the reference frame. The box method of fractal dimension 
        may also be used to obtain a measure of size.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="location">
        <rdfs:comment>a second order descriptor which uses the Near, Far, AtRight,
        AtLeft, InFrontof, Above, Below, Behind, Inside, Containz, Beside, 
        Periphery, Center, Touches predicates as necessary and is represented 
        either by a property list representation of the resulting findings or by
        an XGMML context graph structure as needed.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="mobility">
        <rdfs:comment>description vector defining position change of something. 
        includes fractal D and L.</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>


<rdf:Property ID="time">
        <rdfs:comment>description vector defining time count of something relative
        to a beat source; or sequence. The default source of time is the computer
        system clock (i.e. timestamp).</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
</rdf:Property>
    
</rdf:RDF>
    

The above DAXSVG ontology contains the semantics for describing the important aspects of the spatial domain, including in particular relative spatial location (predicates) from a reference point, making those semantic elements situated. Situated “perception” is very important in deciding what to do to achieve goals (i.e. planning); previous systems which used context-free planning generated much more complicated plans to achieve the same goal.

The RDF Schema comment in each of the above ontology items does not contain executable information; RDF cannot read and interpret the content of a comment. Comments in ontologies are meant to be annotations, for human eyes, not machine consumption.

CYC OWL Ontology

Some ontology systems, such as CYC 0.60b OWL, have much more extensive comment sections than my DAXSVG has. A single item from CYC OWL is shown below for comparison of the use of rdfs:comment.

    <owl:Class rdf:ID="ActorSlot">
            <rdfs:label xml:lang="en">predicates describing actors in events</rdfs:label>
            <rdfs:comment>A collection of binary predicates; a
                specialization of #$Role.  Each instance of #$ActorSlot
                relates some instance of #$Event to a temporal thing
                involved in that event (here called a  participant ,
                although the thing in question might not be playing an
                active role in the event).  The first argument of every
                instance of #$ActorSlot is constrained to be an instance of
                some specialization of #$Event, and the second argument is
                constrained to be an instance of some specialization of
                #$SomethingExisting.  All instances of #$ActorSlot have
                #$actors as their #$genlPreds, directly or indirectly, so
                that the actor slots form a kind of hierarchy.  Each
                specialized actor slot indicates _how_ its participant
                participates in the event, i.e., in what role (e.g.,
                #$inputs, #$outputs, #$doneBy).  Actor slots are _not_ used
                to indicate the time of an event&apos;s occurrence, external
                representations of the event, and other more remotely
                related things that are not directly or indirectly  involved
                in the occurrence of the event.  Time and other quantities
                are relevant to events but are not instances of
                #$SomethingExisting; thus, they are related to events by
                some non-#$ActorSlot predicate.  Things which are remotely
                related to the event -- for instance, someone who is
                affected by the event but doesn&apos;t exist when the event
                occurs -- may be related using some instance of #$Role that
                does not belong to #$ActorSlot, such as #$affectedAgent.
                See also #$Role.</rdfs:comment>
            <guid>bd588029-9c29-11b1-9dad-c379636f7270</guid>
            <rdf:type rdf:resource="#PublicConstant-DefinitionalGAFsOK"/>
            <rdf:type rdf:resource="#PublicConstant-CommentOK"/>
            <rdf:type rdf:resource="#PublicConstant"/>
            <rdf:type rdf:resource="#PredicateCategory"/>
            <rdf:type rdf:resource="#AtemporalNecessarilyEssentialCollectionType"/>
            <rdfs:subClassOf rdf:resource="#BinaryRolePredicate"/>
    </owl:Class>
    

DAXSVG Ontology has Associated Java Code

In the case of the DAXSVG ontology above, the comments tell us the nature of the logic implemented by the Java code which is associated with this ontology. For example, the AtRight predicate in the above ontology has associated Java code which calculates the degree to which the predicate is true. RDF is designed to operate as a true-or-false system, but my associated Java code is able to calculate shades of grey for predicates because context is used. How context is used to do this is explained in the presentation [Dodds 2004]. Code details showing how that is done are given explicitly further on in this paper.

An example of a boolean determination of AtRight is shown next, in Java.

        // Purposely simplified boolean test. Returns true when the object
        // whose x value is x1 lies to the left of the object whose x value
        // is x2; that is, the second object is AtRight of the first
        // (default SVG origin, upper left).
        public boolean AtRight( int x1, int x2 )
        {
            return x1 < x2;
        }
    

Each of the RDF predicates in DAXSVG has a program member written in Java. These programs, like the one above, provide the required actions for the DAXSVG ontology. The Java program above is purposely simplified for illustrative purposes.

The nature of the simplification is this: the above program provides a boolean, true/false dichotomization of there being a “to the right” spatial relationship existing, at the time of testing, between two (SVG) objects whose (SVG) x-axis origin values are x1 and x2. The DAXSVG, as can be seen by reading the comments for its predicates, is used not as a boolean true/false system but rather as a continuous system implementing shades of grey. A boolean function such as the one shown above can only detect “to the right” being true or false, all or nothing. Code used in association with the DAXSVG predicates computes a degree, how much the spatial (or other) predicate is true. The predicates themselves in the DAXSVG are conceptual identifiers; they do not detect or compute the shade of grey themselves. Each DAXSVG conceptual identifier has associated with it one or more Java code modules. Together the DAXSVG predicates and the associated code modules comprise a kind of (collection of) ontological object. Objects in computing have both a data aspect and an executable aspect (such as methods).
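A minimal sketch of such a graded module follows; it is not the production code, and it assumes the sigmoidal membership function of Figure 1, with the slope constant supplied by context (for example, derived from the SVG canvas width):

    // Sketch only: degree (0..1) to which the object at x2 lies to the
    // right of the object at x1, instead of a bare true/false.
    public double atRightDegree(double x1, double x2, double k2) {
        double separation = x2 - x1;   // positive when x2 is rightward of x1
        // Sigmoid centered at zero separation: 0.5 when the objects
        // coincide, approaching 1 as x2 moves well to the right of x1.
        return 1.0 / (1.0 + Math.exp(-k2 * separation));
    }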

The programs in the Java collection have a scheduler program which runs them as needed and provides the information needed for their parameters. This scheduler and parameter-definer are components of the meta-programming system which operates the system discussed in this paper.

The above program is designed to be run to obtain a truth value for the predicate AtRight (in the DAXSVG ontology as shown above). There are two (SVG) objects being analyzed by the program, object one and object two. The context uses the item actorLocation (among other predicates, such as hasCurrentAction and actorRole, which were seen above in the section “An Ontology in DAML-OIL”). The context item actorLocation is used to activate the Java module which performs the required actions associated with that predicate. The Java program actorLocation( ) therefore is run to perform the required activities. In the program actorLocation( ) there is code to examine the SVG picture for the presence of, and corresponding values for, the properties AtRight, AtLeft, InFrontof, Above, etc., as deemed necessary in the spatial ontology. The term “actorLocation”, for example, has a (morphological) component of “location” in its name, and the system is able to determine from the spatial ontology that the spatial concept “location” is constituted by spatial elements whose predicates are Near, Far, AtRight, AtLeft, InFrontof, Above, Below, Behind, Inside, Containz, Beside, Periphery, Center, Touches (in the DAXSVG spatial ontology). The knowledge to use this particular collection of predicates is procedurally embedded in the program actorLocation( ). [An example program, uottcontext1.xsl, implements an ontology scanner (for the context ontology uottcontext1.xml) which identifies such items as actorLocation and indicates the corresponding Java module to run and its parameters. An example of a parameter would be the range over which the x value varies in spatial calculations in the module. Such ranges are defined by context.]

The meta-programming scheduler, which acts as the system's activity dispatcher, was initially hand-coded with the following knowledge, and hence “knows what to do” in order to use the context appropriate to “perceiving” (the content of) an SVG picture such as anim01a.svg.

    POV
      Context
        SpatialFeatures
          actorLocation
            AtRight
            AtLeft
            InFrontof
            Above
        TemporalFeatures
          actionExpectedDuration
    

A generalized meta-program for defining a scheduler sequence to run programs for perceiving an SVG picture could be arrived at programmatically by parsing/scanning the ontologies for spatial and temporal terms: DAXSVG, the UTContext ontology (above), and the Temporal Ontology (time and date predicates). The resulting plethora of schedulable predicates would need to be pruned or thinned by using a measure of relevancy or applicability to remove some predicates from the meta-program's scheduler list, as sketched below.
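No pruning code accompanies the paper; the following is a hypothetical sketch in which relevance( ) stands in for whatever relevancy or applicability measure is chosen:

    import java.util.Iterator;
    import java.util.List;

    public class SchedulerPruner {
        // Thin the candidate predicate list: drop any predicate whose
        // relevance to the current context falls below the threshold.
        public static void prune(List predicateNames, double threshold) {
            for (Iterator it = predicateNames.iterator(); it.hasNext();) {
                String predicate = (String) it.next();
                if (relevance(predicate) < threshold) {
                    it.remove();
                }
            }
        }
        private static double relevance(String predicate) {
            // Placeholder: a real measure might score the predicate
            // against the POV context focus (e.g. Actor- or Role-related
            // predicates score high when actors are present).
            return predicate.startsWith("actor") ? 0.9 : 0.2;
        }
    }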

Remember actorLocation from the section above titled A Context Ontology?

    <daml_oil:ObjectProperty rdf:ID="actorLocation">   
    <daml_oil:domain rdf:resource="http://site.uottawa.ca/Context#Actors"/>
    <daml_oil:range rdf:resource="http://site.uottawa.ca/Context#Location"/>
    </daml_oil:ObjectProperty>
     

The (context item) actorLocation (predicate) has associated Java code. (It would be more complex in a production system than in this simplified example.)

        // Purposely simplified: gather the locations of the actors found
        // in the SVG picture, then schedule the spatial-predicate modules
        // to run over those locations.
        public boolean actorLocation( )
        {
            Collection found = collection( getLocation( identifyActor( ) ) );
            schedule( AtRight( found ), AtLeft( found ), InFrontof( found ),
                      Above( found ) );
            return true;
        }
    

identifyActor( ) is a function which examines the SVG picture, anim01a.svg in this case, and finds actor 'arrowstreamer'; the function getLocation( ) locates 'arrowstreamer' at x="110", and similarly actor 'particlestreamer' at x="120". (See the section below titled Partial listing of SVG animation code to see the actual SVG code that is examined by the functions getLocation( identifyActor( ) ).) In this purposely simplified scenario 'arrowstreamer' and 'particlestreamer' qualify as actors because they have motion, or move: they are animated in the SVG animation anim01a.svg. This is explicit in the SVG picture code and easy to detect. (What is less simple to perceive programmatically is the difference between a moving actor and a moving direct object.)

Now the system knows that actor 'arrowstreamer' is actor1, and actor 'particlestreamer' is actor2. (The system is able to count the actors found.) It can therefore assign x="110" to x1, and x="120" to x2. Using the “ontological calculation” AtRight( ), which we just saw above, the system can "see" (determine) that actor 'particlestreamer' is to the right of (AtRight of) actor 'arrowstreamer'.
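In terms of the simplified boolean module shown earlier, that determination amounts to the following call (values taken from the SVG code):

    // x1 = 110 ('arrowstreamer'), x2 = 120 ('particlestreamer')
    boolean rightward = AtRight(110, 120);
    // true: 'particlestreamer' is AtRight of 'arrowstreamer'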

An enlarged scenario of simplified (boolean rather than degree- or magnitude-valued) code appears next. Following that is code showing how the DAXSVG Java modules can receive their parameters, such as a description of the environment in which the relationships are embedded or working. That code explains, using code modules, how degree and magnitude are computed.

Here is a further example of simplified Java code which illustrates how the system can perform logic using both ontological predicates like AtRight and actual data values obtained from the SVG diagram or picture under discussion, as in the SVG animation code shown further below.

Figure 14: An illustration in the Edward deBono book “Atlas of Management Thinking”. The non-permeable funnel represents contextual guidance or constraints, and the arrow represents action by an agent. Such diagrams as seen in this book are totally intuitive in that context; no text is required there. Illustration copyright Edward deBono.
Figure 15: Snapshot of SVG animation at its conclusion

This boolean-style Java code produces a brief narration of the spatial relationships of objects. A usage scenario for this code appears below the code.

        for (int i = 0; i < relationCount; i++)
        {
            if (relation.compareTo(Relations[i]) == 0)
            {
                switch (i)
                {
                    case 0:
                        if (Containz(x1, x2, y1, y2, r1, r2))
                        {
                            System.out.println("Object " + obj1
                                + " contains object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " does not contain object " + obj2);
                        }
                        break; // end each case so it does not fall through

                    case 1:
                        if (Center(x1, x2, y1, y2, r1, r2))
                        {
                            System.out.println("Object " + obj1
                                + " is center of object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not center of object " + obj2);
                        }
                        break;

                    case 2:
                        if (AtRight(x1, x2))
                        {
                            System.out.println("Object " + obj1
                                + " is at right of object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not at right of object " + obj2);
                        }
                        break;

                    case 3:
                        if (AtLeft(x1, x2))
                        {
                            System.out.println("Object " + obj1
                                + " is at left of object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not at left of object " + obj2);
                        }
                        break;

                    case 4:
                        if (IsAbove(y1, y2))
                        {
                            System.out.println("Object " + obj1
                                + " is above object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not above object " + obj2);
                        }
                        break;

                    case 5:
                        if (Below(y1, y2))
                        {
                            System.out.println("Object " + obj1
                                + " is below object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not below object " + obj2);
                        }
                        break;

                    case 6:
                        if (IsNear(x1, x2))
                        {
                            System.out.println("Object " + obj1
                                + " is near object " + obj2);
                        }
                        else
                        {
                            System.out.println("Object " + obj1
                                + " is not near object " + obj2);
                        }
                        break;
                }//switch
            }//if
        }//for
    }//while

}//doResults
                
                
    public boolean Containz( int x1, int x2, int y1, int y2,
                             int r1, int r2 )
    {
        // circles only: an object with no radius cannot contain or be contained
        if ( (r1 == 0) || (r2 == 0) )
        { return false; }

        // same center, strictly larger radius: object 1 contains object 2
        return (x1 == x2) && (y1 == y2) && (r1 > r2);
    }

    public boolean Center( int x1, int x2, int y1, int y2, int r1,
                           int r2 )
    {
        // the two objects share the same center point
        return (x1 == x2) && (y1 == y2);
    }

    public boolean AtRight( int x1, int x2 )
    {
        return x1 > x2;
    }

    public boolean AtLeft( int x1, int x2 )
    {
        return x1 < x2;
    }

    public boolean Below( int y1, int y2 )
    {
        // SVG y grows downward, so a larger y is lower on the canvas
        return y1 > y2;
    }

    public boolean IsAbove( int y1, int y2 )
    {
        return y1 < y2;
    }

    public boolean IsNear( int x1, int x2 )
    {
        // absolute distance, so nearness is symmetric; 20 is the threshold
        return Math.abs( x1 - x2 ) < 20;
    }
}
This Java code example was written by Lesley Evensen.
        

The meaning of x1 and x2 in the Java code for the predicates AtRight and AtLeft is more easily seen by first glancing at an example of SVG code below. It is an SVG animation which I coded to illustrate a reified spatial metaphor. Notice that SVG element id=“arrowstreamer” has an x-coordinate of x=“110” and that SVG element id=“particlestreamer” has an x-coordinate of x=“120”. Also note that the size of the SVG display space is set at 300 by 200. The SVG animation picture-element called id=“arrowstreamer” is used as the SVG object corresponding to x1, and SVG element id=“particlestreamer” is used as x2. (More exactly, x1 is set to the x-axis value of the first SVG object id=“arrowstreamer” and x2 is set to the x-axis value of the second SVG object id=“particlestreamer”.) The Java code above then tests whether x1 is less than x2 (is 110 less than 120?). Since it is, AtLeft(x1, x2) returns true: arrowstreamer is at the left of particlestreamer, which is to say that SVG object particlestreamer is to the right of arrowstreamer.

Additionally it is possible to program the Java code to emit a metadata assertion of the form

    <rdf:Description about="#arrowstreamer">
      <daxsvg:AtRight resource="#particlestreamer" />
    </rdf:Description>

and having a quantifier (of degree or magnitude) specified orthogonally to the RDF, using an XML graph (doing it this way simplifies the handling of first-order predicate logic complexities):
    <?xml-stylesheet type="text/xsl" href="xgmmlContext1xsl.xsl"?>  
    <!DOCTYPE graph SYSTEM "xgmml.dtd"> 
    <graph xmlns="http://www.cs.rpi.edu/XGMML" > 
        <node id="1" label="timestamp" weight="0">
                <graphics type="circle" x="270" y="90" h="10" >
                </graphics>
                <att name="systemtimestamp" value="2004-06-27T17:47:33+07:00"/>
                <!--EXSLT gets time and date from system puts it in node-->
        </node>
        <node id="2" label="AtRight1" weight="0">
                <graphics type="circle" x="350" y="190" h="10" >
                </graphics>
                <att name="SVGds" value="anim01a"/>
                <att name="aboutId" value="arrowstreamer"/>
                <att name="nodeSource" value="metadataAssertion"/>
                <att name="degreeOfMembership" value="42"/>
                <att name="predicate" value="AtRight"/>
        </node>
        <edge source="1" target="2" weight="0" label="Edge from node timestamp to
        node AtRight1" >
        </edge>
    </graph>
    

In [Dodds 2001] I showed how such RDF metadata could be embedded into the SVG picture itself using SVG's built-in metadata element. Next is the example metadata code I provided for the metadata element in the SVG 1.0 specification. I have here added the metadata assertion mentioned above to that specification example.

    <?xml version="1.0" standalone="yes"?>
    <svg width="4in" height="3in" version="1.1"
        xmlns = 'http://www.w3.org/2000/svg'>
    <metadata>
      <rdf:RDF
           xmlns:rdf = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:rdfs = "http://www.w3.org/2000/01/rdf-schema#"
           xmlns:dc = "http://purl.org/dc/elements/1.1/"
           xmlns:daxsvg = "http://www.open-meta.com/daxschema/" >
        <rdf:Description about="http://example.org/myfoo"
             dc:title="MyFoo Financial Report"
             dc:description="$three $bar $thousands $dollars $from 1998 $through 
             2000"
             dc:publisher="Example Organization"
             dc:date="2000-04-11"
             dc:format="image/svg+xml"
             dc:language="en" >
          <dc:creator>
            <rdf:Bag>
              <rdf:li>Irving Bird</rdf:li>
              <rdf:li>Mary Lambert</rdf:li>
            </rdf:Bag>
          </dc:creator>
        </rdf:Description>
        <rdf:Description about="#arrowstreamer">
          <daxsvg:AtRight resource="#particlestreamer" />
        </rdf:Description>
      </rdf:RDF>
    </metadata>
  </svg>
        

Next we see an SVG program which creates a bar chart. The meta-program can parse the SVG program itself, derive metadata assertions (explained above), and insert them into the SVG picture. Among other things a natural language description of the picture can be programmatically generated using those metadata predicates. The SVG program, barchart.svg, is shown next.

    <?xml version="1.0" standalone="yes" ?>
    <svg xmlns = 'http://www.w3.org/2000/svg'>
    <metadata xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
              xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" 
              xmlns:daxsvg="http://www.open-meta.com/daxschema/" >
      <rdf:Description about="#text1">
         <daxsvg:Below resource="#xbaseline"/>
      </rdf:Description>
      <rdf:Description about="#text1">
         <daxsvg:IsNear resource="#xbaseline" />
      </rdf:Description>
      <rdf:Description about="#text2">
         <daxsvg:Below resource="#text1"/>
      </rdf:Description>
      <rdf:Description about="#text2">
         <daxsvg:IsNear resource="#text1" />
      </rdf:Description>
      <rdf:Description about="#endlineleft">
         <daxsvg:AtRight resource="#line1"/>
      </rdf:Description>
      <rdf:Description about="#endlineleft">
         <daxsvg:IsNear resource="#line1" />
      </rdf:Description>
      <rdf:Description about="#endlineright">
         <daxsvg:AtLeft resource="#line13"/>
      </rdf:Description>
      <rdf:Description about="#endlineright">
         <daxsvg:IsNear resource="#line13" />
      </rdf:Description>
      <rdf:Description about="#line1">
         <daxsvg:AtRight resource="#line2" />
      </rdf:Description>
      <rdf:Description about="#line2">
         <daxsvg:AtRight resource="#line3" />
      </rdf:Description>
      <rdf:Description about="#line3">
         <daxsvg:AtRight resource="#line4" />
      </rdf:Description>
      <rdf:Description about="#line4">
         <daxsvg:AtRight resource="#line5" />
      </rdf:Description>
      <rdf:Description about="#line5">
         <daxsvg:AtRight resource="#line6" />
      </rdf:Description>
      <rdf:Description about="#line6">
         <daxsvg:AtRight resource="#line7" />
      </rdf:Description>
      <rdf:Description about="#line7">
         <daxsvg:AtRight resource="#line8" />
      </rdf:Description>
      <rdf:Description about="#line8">
         <daxsvg:AtRight resource="#line9" />
      </rdf:Description>
      <rdf:Description about="#line9">
         <daxsvg:AtRight resource="#line10" />
      </rdf:Description>
      <rdf:Description about="#line10">
         <daxsvg:AtRight resource="#line11" />
      </rdf:Description>
      <rdf:Description about="#line11">
         <daxsvg:AtRight resource="#line12" />
      </rdf:Description>
    </metadata>
    <rect id="lineval18" x="37" y="190" width="280" height="1" style="stroke:black; 
      stroke-width:1" />
    <text id="text3" x="317" y="194" style="font-family:Verdana; font-size:12.333; 
      fill:indigo">
    18
    </text>
    <rect id="xbaseline" x="37" y="200" width="329" height="1" style="stroke:blue; 
      stroke-width:1" />
    <rect id="endlineright" x="333" y="96" width="1" height="104" 
      style="stroke:black; stroke-width:1" />
    <rect id="endlineleft" x="37" y="96" width="1" height="104" 
      style="stroke:black; stroke-width:1" />
    <rect id="line1" x="40" y="160" width="20" height="40" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line2" x="60" y="140" width="20" height="60" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <rect id="line3" x="80" y="111" width="20" height="89" style="stroke:red; 
      fill:red; stroke-width:0" />
    <rect id="line4" x="100" y="130" width="20" height="70" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <rect id="line5" x="120" y="173" width="20" height="27" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line6" x="140" y="191" width="20" height="09" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line7" x="160" y="140" width="20" height="60" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <rect id="line8" x="180" y="167" width="20" height="33" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line9" x="200" y="175" width="20" height="25" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line10" x="220" y="129" width="20" height="71" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <rect id="line11" x="240" y="150" width="20" height="50" style="stroke:green; 
      fill:green; stroke-width:0" />
    <rect id="line12" x="260" y="139" width="20" height="61" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <rect id="line13" x="280" y="125" width="20" height="75" style="stroke:yellow; 
      fill:yellow; stroke-width:0" />
    <text id="text1" x="37" y="210" style="font-family:Verdana; font-size:12.333; 
      fill:black">
    87  88  89  90  91  92  93  94  95  96  97  98  99
    </text>
    <text id="text2" x="37" y="230" style="font-family:Verdana; font-size:12.333; 
      fill:brown">
    Mean High Ratings August 1999
    </text>
    </svg> 
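A minimal sketch of how one of the daxsvg assertions above might be derived from the parsed rect coordinates follows; the helper names and the string-emission format are my illustrative assumptions, not the meta-program's actual interface.

    // Sketch: derive the daxsvg:Below assertion for text1/xbaseline from
    // their y attributes as parsed out of barchart.svg.  Below follows the
    // boolean predicate listed earlier: a larger SVG y is lower on the canvas.
    public class AssertionDeriver {
        static boolean below(int y1, int y2) { return y1 > y2; }

        static String emit(String subject, String predicate, String object) {
            return "<rdf:Description about=\"#" + subject + "\">\n"
                 + "   <daxsvg:" + predicate + " resource=\"#" + object + "\"/>\n"
                 + "</rdf:Description>";
        }

        public static void main(String[] args) {
            // y values as they appear in barchart.svg: text1 at y=210,
            // xbaseline at y=200
            int text1Y = 210, xbaselineY = 200;
            if (below(text1Y, xbaselineY)) {
                System.out.println(emit("text1", "Below", "xbaseline"));
            }
        }
    }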

In the same way, ontology statements are generated and placed into a knowledge base; these represent a world model via situated context.

    Thing
      Context
        timedate.slot=(2004-06-27T17:47:33+07:00)
        SVGdata.slot(true)
          x-axis.slot(300)
          y-axis.slot(200)
          object.slot(arrowstreamer)
            x-coord.slot(110)
            y-coord.slot(180)
            animation.slot(y)
            atright.slot(particlestreamer)
          object.slot(uplabel)
            x-coord.slot(230)
            y-coord.slot(20)
            isabove.slot(leftfunnelside, somewhat)
            animation.slot(n)
          object.slot(leftfunnelside)
            stroke.slot(green)
            stroke-width.slot(10)
            path.slot(d="M 99 180 L 99 57")
            atright.slot(rightfunnelside)
            isnear.slot(arrowstreamer, very)
            animation.slot(n)
                        :
                        :
     

The code for how the DAXSVG Java modules receive their parameters, such as a description of the environment the relationships are embedded or working in, is shown above in a context constructed via the meta-program, which marshals the code building and execution in this system. A context for that environment, the SVG animation anim01a.svg, is built by “discovery”: the meta-program runs a parsing program over the SVG code itself, and the findings are used to populate the context.

This section of the paper explains, by use of code modules, how degree and magnitude are computed.

By looking at the SVG code (below) and comparing it with what is in the context above, we see that the context is basically an ontology of SVG elements and their parameters, defined as slots in the context. Other items in the context are not present in the SVG code itself, such as a timedate-stamp and predicates from other ontologies. In this case the AtRight predicate from the DAXSVG ontology occurs twice (in our example above), giving a spatial-relationship contextual item to each (SVG) object it is detected to be relevant to. The IsNear predicate from DAXSVG is detected as relevant for the SVG object whose id is leftfunnelside, and the degree or magnitude of nearness is “very”. Very is a linguistic hedge, and the computation which derives this term from the SVG scalar value “centroiddistance” is explained next. Similarly the predicate IsAbove appears in the context attributes of SVG object uplabel. Its computation is also explained next.
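A minimal sketch of the hedge derivation is shown next, assuming a fuzzy membership value has already been computed from centroiddistance; the cut points are illustrative assumptions rather than values from this paper.

    // Sketch: map a fuzzy membership value to a linguistic hedge such as
    // the "very" recorded in isnear.slot(arrowstreamer, very) above.
    public class HedgeMapper {
        static String hedge(double membership) {
            if (membership > 0.85) return "very";
            if (membership > 0.60) return "quite";
            if (membership > 0.35) return "somewhat";
            return "not";
        }

        public static void main(String[] args) {
            System.out.println(hedge(0.96)); // very
            System.out.println(hedge(0.50)); // somewhat
        }
    }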

Notice that this ontology has overlaps with elements appearing in the SAW ontology designed by Christopher J. Matheus et al. See the elements in the figure titled “A representation of the SAW Situation Awareness Ontology”.

Partial listing of SVG animation code

      <?xml version="1.0" encoding="iso-8859-1"?>
      <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20000303 Stylable//EN"   
      "http://www.w3.org/TR/2000/03/WD-SVG-20000303/DTD/svg-20000303-stylable.dtd">
      <svg xmlns:a="http://www.adobe.com/svg10-extensions" 
          a:timeline="independent"  viewBox="0 0 300 200">
      <desc>Copyright 2001 - 2004 David Dodds Example anim01a - demonstrate 
      deBono diagram SVG animation with Lakoff spatial metaphor</desc>
      <rect x="1" y="1" width="253" height="199"
          fill="black" stroke="blue" stroke-width="7" />
        
      <text id="uplabel" x="230" y="20"
          style="font-family:Verdana; font-size:12.333; fill:blue">
          UP
      </text>
      
      <text id="downlabel" x="200" y="180"
          style="font-family:Verdana; font-size:12.333; fill:blue">
          DOWN
      </text>
      
      <text id="goallabel" x="110" y="42"
          style="font-family:Verdana; font-size:12.333; fill:blue">
          GOAL
      </text>
      
      <g id="leftfunnelside">
          <path d="M 99 180 L 99 57"
              style="fill:none; stroke:green; stroke-width:10"/>
      </g>
      
      <g id="rightfunnelside">
          <path d="M 153 57 L 153 180 "
              style="fill:none; stroke:green; stroke-width:10"/>
      </g>
      
      <a:audio xlink:href="sounds1.wav" volume="10" begin="0.5s">
      </a:audio>
      <a:audio xlink:href="sounds2.wav" volume="10" begin="12s">
      </a:audio>
      
      <rect id="arrowstreamer" x="110"  width="3" height="20" >
          <animate attributeName="y" attributeType="XML"
              begin="0s" dur="5s" fill="freeze" from="180" to="55" />
          <animate attributeName="height" attributeType="XML"
              begin="0s" dur="5s" fill="freeze" from="20" to="143" />
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(0,0,255)" to="rgb(110,0,0)"
              begin="0s" dur="5s" fill="freeze" />
      </rect>
      
      <rect id="particlestreamer" x="120"  width="3" height="3" >
          <animate attributeName="y" attributeType="XML"
              begin="2s" dur="5s" fill="freeze" from="170" to="55" />
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(0,0,255)" to="rgb(110,0,0)"
              begin="2s" dur="5s" fill="freeze" />
      </rect>
      
      <rect id="misdirectedbehaviourstreamer" x="147" y="190" width="5"
        height="5">
          <animate attributeName="y" attributeType="XML"
              begin="3s" dur="5s" fill="freeze" from="190" to="130" />
          <animate attributeName="x" attributeType="XML"
              begin="3s" dur="5s" fill="freeze" from="147" to="230" />
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(0,0,255)" to="rgb(128,0,0)"
              begin="3s" dur="5s" fill="freeze" />
      </rect>
      
      <rect id="obstructor" class="hitBox" x="40" y="49" width="50" 
        height="5">
          <animate attributeName="x" attributeType="XML"
              begin="3s" dur="5s" fill="freeze" from="40" to="100" />
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(10,0,0)" to="rgb(255,0,0)"
              begin="3s" dur="3s" fill="freeze" />
      </rect>
      
      <text id="obstructorlabel" x="120" y="53"
          style="font-family:Verdana; font-size:4; fill:black">
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(0,0,0)" to="rgb(255,255,255)"
              begin="7s" dur="2s" fill="freeze" />
              obstructor
      </text>
      
      <text id="misdirectedbehaviourlabel" x="200" y="125"
          style="font-family:Verdana; font-size:4; fill:black">
          <animateColor attributeName="fill" attributeType="CSS"
              from="rgb(0,0,0)" to="rgb(255,255,255)"
              begin="9s" dur="3s" fill="freeze" />
              misdirectedbehaviour
      </text>
      

(In addition to SMIL-based animation my SVG animation above includes sound as well; it has the ability to play a motion-synchronized narrative. In the future VoiceXML might be used to produce quality synthesized speech where the words are chosen by the program rather than prerecorded.) SVG animations and pictures may be viewed in a browser such as Internet Explorer by using an SVG plugin available for free download from Adobe. There is also the Batik viewer from the Apache project. Any number of other viewer programs are listed in the SVG section of the W3C site.

In the preceding SVG animation there were three (modeled) actors. Actor1 started at location x=110 y=180 in the SVG palette co-ordinates. At the end of the animation activity Actor1 was at x=110 y=55. Java and SAX2 (an XML parser API) are able to parse this SVG animation (program) and locate these locations. The context item actorLocation from the context ontology shown above may then programmatically, using Java for example, locate and associate the (location) co-ordinates x=110 y=180 with that ontological item. Were the ontology to be represented using OWL, then a “location” slot in the OWL context-ontology could receive those co-ordinate values. The existence of the actorLocation item in the system context ontology allows the system to know to use the associated action-logic (code), such as Java, to perform situated actions.
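A minimal sketch of that SAX2 parsing step follows, assuming anim01a.svg resolves locally; the coordinate bookkeeping a production system would do is omitted.

    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: walk anim01a.svg with SAX2, reporting each rect's id and x
    // attribute, and the from/to range of any animated y attribute, which
    // together give an actor's start and end locations.
    public class ActorLocator extends DefaultHandler {
        private String currentId;

        public void startElement(String uri, String local, String qName,
                                 Attributes atts) {
            if ("rect".equals(qName) && atts.getValue("id") != null) {
                currentId = atts.getValue("id");
                System.out.println(currentId + " x=" + atts.getValue("x"));
            } else if ("animate".equals(qName)
                       && "y".equals(atts.getValue("attributeName"))) {
                // the actor moves in y from 'from' to 'to'
                System.out.println("  " + currentId + " y: "
                    + atts.getValue("from") + " -> " + atts.getValue("to"));
            }
        }

        public static void main(String[] args) throws Exception {
            SAXParserFactory.newInstance().newSAXParser()
                .parse("anim01a.svg", new ActorLocator());
        }
    }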

Context and Situation

    This section talks about
  • POV Point Of View
  • Context
  • Situation

Context and Situation will be explained in more detail during the spoken presentation by showing the progress of a Context being built/populated by input from the system environment. Situation will be shown to be a particular flavour of Context and explained more fully as well.

Point Of View, POV, in this paper is a mechanism which marshals the programs which operate the Context system. POV may be likened to a super-context or context of contexts. POV provides the parameterization of the Context mechanism and hence acts as a selector or disposition for the operation of the Context system.

The simplest POV will be illustrated via a code example in this paper. Diagrams and pictures are represented in the computer in this system by means of SVG (Scalable Vector Graphics). Figure 14 shows an example of an SVG-depicted Diagram. In order for my system to make sense of the Diagram certain things must be known. Among these things are the dimensions of the SVG display space used to present the Diagram. A program can easily parse the SVG code which constitutes the Diagram and obtain this information. In the case of the Diagram, the SVG code was presented earlier in this paper in the section titled “Partial listing of SVG animation code”. The relevant part is listed again below. The SVG statement contains the viewBox attribute, which specifies the extent of the x and the y axis, here x-axis=300 units and y-axis=200 units.

 
      <svg xmlns:a="http://www.adobe.com/svg10-extensions" 
        a:timeline="independent"  viewBox="0 0 300 200">
      

Running the program which parses the SVG Diagram (in this case the “universe” of the system) constitutes finding the point of view. Once that program has run, the POV is defined as SVGdata with x-axis=300 and y-axis=200. The start of an ontology can be constructed from this.

      POV
        SVGdata.slot(true)
        x-axis.slot(300)
        y-axis.slot(200)
     

The POV system provides parameterization to the systems it marshals (such as the Context system) by means of these populated ontologies, such as POV above. The parameterization provided is that SVG (SVG data sets) is the “universe” under consideration and that this instance of that universe has x and y extents of 300 and 200, respectively. This x and y extents information is crucial to the logic which processes the content of the SVG picture. For example, the notion of a long or large visual object in the Diagram can be determined only with respect to the relative magnitude of the picture object compared with the magnitude or extents of the picture's x and y axes.
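A minimal sketch of that relative-magnitude judgement follows; the hypothetical isLong( ) helper and the 0.5 ratio are illustrative assumptions.

    // Sketch: an object is "long" only relative to the extents the POV
    // supplies for the picture, never in absolute units.
    public class RelativeSize {
        static boolean isLong(double objectExtent, double axisExtent) {
            return objectExtent / axisExtent > 0.5;  // assumed ratio cut-off
        }

        public static void main(String[] args) {
            double povYAxis = 200;          // from POV: y-axis.slot(200)
            double funnelSideLength = 123;  // leftfunnelside path runs y 180 -> 57
            System.out.println("long? " + isLong(funnelSideLength, povYAxis)); // true
        }
    }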

When we look at how the value for the fuzzy calculation Near(x) is computed elsewhere in the paper, we see that the (POV-provided) parameters x-axis.slot(300) and y-axis.slot(200) are used as Context-based scaling for the calculation f(x), as shown in Figure 1.
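A minimal sketch of such context-based scaling is shown next; the linear fall-off is an illustrative stand-in for the actual f(x) curve of Figure 1.

    // Sketch: nearness membership scaled by the POV-supplied axis extent,
    // so the same pixel gap can be "near" in a large picture and less so
    // in a small one.
    public class ContextScaledNear {
        static double near(double distance, double axisExtent) {
            double m = 1.0 - (distance / axisExtent);
            return Math.max(0.0, Math.min(1.0, m));
        }

        public static void main(String[] args) {
            double povXAxis = 300;  // POV: x-axis.slot(300)
            // leftfunnelside (x=99) and arrowstreamer (x=110) are 11 units apart
            System.out.println(near(11, povXAxis)); // ~0.96: very near
            System.out.println(near(11, 40));       // same gap, tiny canvas: ~0.72
        }
    }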

Point of view is often spoken of by means of a spatial metaphor, such as viewing an object from different angles. Our experience in the world tells us that any three-dimensional object cannot be seen from all possible angles simultaneously by oneself. One can progressively view the object from all possible angles and then consult one's memory of that viewing but even then most people are unable to visualize the object simultaneously from all possible angles.

Point of view is also sometimes explained via the following parable. Three visually challenged individuals encounter an elephant; one feels its trunk, the second feels its leg, and the third feels its tail. They start talking, describing the elephant. The first describes the trunk and says that is what an elephant is. No, no, say the other two; the second person says that an elephant clearly is like a pillar, and he describes the elephant's leg. No, no, says the third man; an elephant obviously is like a rope, and he describes the tail. For each of these men an elephant is what they touched (their individual point of view). The elephant in this parable is like the context obtained by point of view.

Yet another way of explaining point of view is that it is a snapshot in time of sensor or system state data. Multiple temporally-adjacent snapshots provide an evolving picture, equivalent to moving and viewing (or touching) the object from a different physical angle in the parable. Thus the temporal sequence of snapshots would present a trunk POV, followed by a leg POV, followed by a tail point of view. The point of view is determined by the angle at which the metaphorical object is looked at; changing the angle means entering a new moment in time. (Time may be used as a name or index to provide a handle to a POV, especially when that time is associated with a “plan”, an action-plan: a description of what was done.)

Context in this paper is a mechanism for specifying and detecting the state of a collection of information items including (that from) a set of real or simulated sensors. Examples of sensor-data information items would be sound data from a microphone or ultra-sonic yardstick, and image data from a camera or imaging infra-red device. (All of these exist as standard robotic systems hardware.) The data structure of a context in this paper is either a graph-structure (nodes and edges) or an ontology (generally frames).

The SAW ontology system (see Figure 20) is based on the use of recurrent, real (or simulated) sensor information to develop a representation of the assessment of a situation. Since the data from which the assessment is constituted comes from local sensors and in current time, the situation derived by the SAW system can be said to be situated (i.e. relative to self). Context defines the constituents of the situation from both a required and a discovered perspective.

In the section titled A Context Ontology at the start of this paper we saw illustration of an ontology which could be used to represent a context. This ontology consisted of DAML-OIL statements which specified one kind of action-context domain, defining context terms which the DAML-OIL statements related through their class structure. This context ontology did not specify instance values for any of the constructs, which means that the conceptual structure was defined but there were no “world”-based or sensor-based values or “quantified instances”.

By treating the DAML-OIL structure as a two-dimensional (XML) plane it is possible to provide dynamic instance data for these context items through orthogonal reference using a graph structure, perhaps topic maps. The graph references point at XML IDs in the DAML-OIL statements. These graph edges are orthogonal to the DAML-OIL XML plane and dynamic, existing and removed as time passes.

The following context, represented via the XGMML graph structure, depicts how orthogonal references are accomplished into two XML “surfaces”: the aforementioned SVG animated Diagram (anim01a.svg), and the DAML-OIL Context ontology. The graph structure below is an orthogonal reference graph which depicts (is) the context in which two visible SVG objects in the Diagram animation have the spatial relationship AtRight; in which the actor, arrowstreamer, in the animated SVG Diagram executes an action that lasts 5 seconds; and in which the (final) location of the actor is 110,55. The code, next, shows how the graph-structure form of context can (orthogonally) point into or reference the XML-based SVG animation and the XML-based DAML-OIL action-context ontology.

    Copyright 2004 David Dodds
    <?xml-stylesheet type="text/xsl" href="xgmmlContext1xsl.xsl"?>  
    <!DOCTYPE graph SYSTEM "xgmml.dtd"> 
    <graph xmlns="http://www.cs.rpi.edu/XGMML" > 
        <node id="1" label="timestamp" weight="0">
                <graphics type="circle" x="270" y="90" h="10" >
                </graphics>
                <att name="systemtimestamp" value="2004-06-27T17:47:33+07:00"/>
                <!--EXSLT gets time and date from system puts it in node-->
        </node>
        <node id="2" label="AtRight" weight="0">
                <graphics type="circle" x="350" y="190" h="10" >
                </graphics>
                <att name="SVGds" value="anim01a"/>
                <att name="nodeId" value="object1"/>
                <att name="nodeId" value="object2"/>
        </node>
        <node id="3" label="object1" weight="0">
                <graphics type="circle" x="190" y="190" h="10" >
                </graphics>
                <att name="xcoord" value="110"/>
                <att name="ycoord" value="35"/>
                <att name="SVGId" value="leftfunnelside"/>
        </node>
        <node id="4" label="object2" weight="0">
                <graphics type="circle" x="290" y="290" h="10" >
                </graphics>
                <att name="xcoord" value="120"/>
                <att name="ycoord" value="39"/>
                <att name="SVGId" value="rightfunnelside"/>
        </node>
        <node id="5" label="actionExpectedDuration" weight="0">
                <graphics type="circle" x="390" y="390" h="10" >
                </graphics>
                <att name="timesec" value="5"/>
                <att name="SVGId" value="arrowstreamer"/>
                <att name="otcontext" value="actionExpectedDuration"/>
        </node>
        <node id="7" label="actorLocation" weight="0">
                <graphics type="circle" x="490" y="090" h="10" >
                </graphics>
                <att name="xcoord" value="110"/>
                <att name="ycoord" value="55"/>
                <att name="otcontext" value="actorLocation"/>
        </node>
        <edge source="2" target="4" weight="0" label="Edge from node AtRight to
        node object2" >
        </edge>
        <edge source="1" target="2" weight="0" label="Edge from node timestamp to
        node AtRight" >
        </edge>
        <edge source="2" target="3" weight="0" label="Edge from node AtRight to
        node object1" >
        </edge>
    </graph>
    
[above] XGMML example of a context graph which depicts a timestamped relationship (AtRight) between the two SVG objects leftfunnelside and rightfunnelside, and also depicts that the duration of the action of actor arrowstreamer is 5 seconds, and that the actor's location is (SVG x,y) 110,55. (Remember that the POV for this Diagram was x-axis.slot(300), y-axis.slot(200), so the system is also able to infer that the actor location is plausible.) Notice that this graph-structure provides instantiation of data values for the data-less ontology elements actionExpectedDuration and actorLocation. The context graph in effect dynamically binds not-yet-instantiated concepts from an ontology with visual metaphor elements (reified / executed via SVG animation).

Figure 16: Semantic visualization diagram. Copyright John F. Sowa
Figure 17: Semantic visualization diagram, expansion of details. Copyright John F. Sowa
Figure 18: Semantic visualization diagram, further expansion of details. Copyright John F. Sowa
Figure 19: Semantic visualization diagram, details. Conceptual Graph. Copyright John F. Sowa
Figure 20: A representation of the SAW Situation Awareness Ontology.

Metaphor in Science and Technology

In [Bateson 1979], [Jones 1982], and [Ricoeur 1978] metaphor in science is discussed, along with the role that the various kinds of metaphor play in Physics and Astronomy in the representation and modeling of phenomena which have no simple analog in geometry or simple mathematics. Metaphor in science has been a means whereby the partial-structure transfer of the metaphor process has been a great lever for creative models. Some of these models are well known to the public, such as “Black Hole”, “Event Horizon”, “Dark Matter”, and “Gravity Lens”. Metaphor is a mainstay of science because the partial-structure transfer properties of metaphors [Bateson 1979][Lakoff 1980] suggest properties that at first were not as obvious as the metaphor seed, the initial insight which provided the metaphor. Through the use of context and ontologies computers can be made capable of generating and recognizing these very powerful means of representation. In the next section we see example code and ontology data sets which illustrate how a computer can generate and “perceive” / detect metaphors.

Now we have a conceptual example: many people are familiar with the funnel diagram often used to depict the concept of both a gravity well and a black hole. The gravity-well term is easy to understand as a scientific metaphor because most everybody knows what a (water) well is and what one usually looks like. The metaphor is made whereby the (depth of the) walls of a typical generic water well are used to represent strength of gravity. By analogy depth is gravity-strength; most anyone can comprehend that. Because gravity-strength changes analogically as one descends the well, the well is depicted with its walls drawn closer and closer together to reinforce the relationship of actual gravity-strength at a given depth.

The bottom of the funnel or inverted cone is the location of the black hole and the top mouth part of the funnel is the event horizon. SVG can represent the cone diagram figure quite easily. Looking down the funnel from the mouth can be represented in SVG as radial-gradient, which is displayed visually as either a continuously changing shade of grey from the outer circumference towards the center of the circle, or from the opposite direction. SVG can also represent the same SVG radial-gradient as continuously changing colour spectrum from the outside to the inside (or vice-versa). Either way the radially changing colour pattern is a metaphor for the changing gravity strength according to location in the (metaphorical) cone or well. See Figures 21 and 22 below for SVG illustrations of colour radial gradients.

Also, most everyone has observed the swirling pattern water makes as it empties from a sink down the drain. No extraordinary imagination is needed to transpose this swirling water going down the sink drain hole onto a circular event horizon with its matter swirling around and dropping down into a black hole. It is this ability to visualize correspondences, such as swirling water draining in a sink and swirling matter draining down into a black hole, that gives us the ability to recognize and to generate metaphors. (Yes, many metaphors are inspired by visio-spatial cognition and not initially by words or language. The languaging is applied afterward to the visio-spatial realizations. A daily example of this is the ability to walk along a crowded sidewalk and not collide with others there. It is a cognitive visio-spatial projection of the others’ short-term paths; there is no language involved. Some honest introspection while walking will demonstrate this to you.)

Not surprisingly many metaphors are spatial and derived from the subjective experience of one’s own body. (My height, my size, the reach of my arm, three paces(my steps), running quickly.)

Figure 21: SVG illustration of Stepped Colour Radial Gradient
Figure 22: SVG Spec 1.0 illustration 1 of Colour Radial Gradient
Figure 23: SVG Spec 1.0 illustration 2 of Colour Radial Gradient
Figure 24: SVG Spec 1.0 illustration 3 of Black Radial Gradient

Transfer and Corresponding Metaphor Process CMP

How Transfer and the Corresponding Metaphor Process are performed using spatial and temporal ontologies and their Java support code is discussed here. The ontologies will be shown, and the Java source code as well. This section discusses what transfer is in the metaphor process. In this paper the term Corresponding Metaphor Process is used to refer to a means of implementing transfer.

In the Metaphor in Science and Technology section we saw that the metaphor of Black Hole and its Event Horizon can be represented as a visual using the SVG radial-gradient capability as a means of producing instances of such visualization. Examples of SVG radial-gradients were shown. Since the SVG code which produces these radial-gradients is simply XML text in the SVG program, the code is available for reading and analysis by other programs. By analyzing the values of the parameter settings used in a radial-gradient statement it is possible to get a numerical sense of what the picture looks like to the eye. In the case of the continuous shade-of-grey radial-gradient discussed in the Metaphor in Science and Technology section, a programmatic analysis of the SVG radial-gradient code (all three lines of it) would yield, amongst other things, a scalar value depicting the visual or picture distance across which the specified radial-gradient occurred. This is a simple numerical representation depicting the shade of grey varying with respect to (radial) distance. In the case of a conical representation of the gravity well, the cone is a simple geometry to produce in SVG as a side view. Analysis of the SVG code which produces the cone illustration returns a number of SVG object parameters, representing the locations of the lines constituting the outer edge of the cone.

The analysis program is able to analyze the findings of both illustrations and detect that one has a wall separation dependent on depth (i.e. the well) and the other has a grey level varying by distance from the center of the radial. The grey level and the wall separation both display a value according to distance (from a reference point). This is the Correspondence. A value represented by one can be systematically TRANSFERRED to the other. For example, a given shade of grey in the radial model has the analog of a CORRESPONDING wall-separation distance in the other (well) model.
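A minimal sketch of this Correspondence and Transfer follows; the two distance functions are illustrative assumptions about the analyzed pictures, not output of an actual SVG analysis.

    // Sketch: both models expose "a value that varies with distance from a
    // reference point" -- grey level in the radial-gradient picture, wall
    // separation in the side-view well.  A value in one model TRANSFERS to
    // the other by matching normalized distance (the Correspondence).
    public class CorrespondingMetaphor {
        // radial model: grey runs 0.0 (center) to 1.0 (rim) over radius 100
        static double greyAt(double r)           { return r / 100.0; }
        // well model: walls 10 units apart at the bottom, 90 at the mouth,
        // over a depth of 200
        static double wallSeparationAt(double d) { return 90 - (d / 200.0) * 80; }

        // TRANSFER: the well depth whose normalized distance CORRESPONDS to
        // a given grey shade (grey 1.0 = rim = mouth of the well, depth 0)
        static double depthForGrey(double grey)  { return (1.0 - grey) * 200; }

        public static void main(String[] args) {
            double grey  = greyAt(50);          // mid-radius grey = 0.5
            double depth = depthForGrey(grey);  // corresponds to depth 100
            System.out.println("grey " + grey + " <-> wall separation "
                + wallSeparationAt(depth) + " at depth " + depth);
        }
    }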

Another metaphor example may make the ideas clearer. This metaphor uses the spatial aspects of the human body as the reference point or source of the metaphor. (Much has been written in various fields about The Body Percept.) This reference point is known as the tenor of a metaphor. The other element which is participating in the metaphor is called the vehicle of the metaphor. The human body is an obvious subjective reference as it is the “me” who engineers/makes the metaphor and “my” subjective body is the easiest and hence most obvious source of measure. My height, my arm’s reach, my running speed, etc.

The computer (program) is provided with a data set which constitutes its body. Since we are talking about using the human body image as a tenor (metaphor source) we provide a human-body spatial representation. While an exact 3D model could be used, for the purposes of this illustration a simplified, if not somewhat casual, data set or spatial model is used. See Figure 25 for a rendering of the SVG data set used as the metaphor-tenor or source in this example.

Figure 25: SVG illustration displays Business Man Bob. It is a collection of SVG paths which constitute a visio-spatial data set processable by the computer as “lines” rather than “dots”, and which is used as the basis of a metaphor-tenor. (i.e. Visible metaphor source.) [source jpg from Radicalman site p147]

To us the rendered paths (lines) of the data set constitute a recognizable depiction of a male human, if not a caricature or cartoon thereof. It is not a photograph or hologram, nor are they needed. A texture-mapped X3D three-dimensional humanoid would be nice but is not necessary. At the same time that we recognize instantly that the rendered SVG paths depict a man, the computer, not being able to “see”, “perceives”/detects a collection of SVG paths and nothing else. (Our “seeing” of the “man” is based on non-conscious inferencing which we call (pattern) “recognition”. If we were to look at what was actually falling on our retina it would be a collection of “lines” and “dots”. It is the combination of our visual neurons and the visual cortex in our brain which pieces together those visible elements into a “man”. “Man” is a mental phenomenon in our subjective mind; there is no “man” in the picture or on the screen or paper. There are only black-coloured marks.)

The SVG path element defines a set of x,y co-ordinate pairs which describe a complex geometry. Simple geometries like circles, rectangles, etc. have their own SVG elements, such as circle, rect, etc. The SVG path collection which defines the Business Man Bob data set is a collection of named sequences of x,y-pair co-ordinates. The frame or box in which the Business Man Bob data set is seen in the illustration has the standard SVG (graphics) origin of x,y = 0,0 at the upper left corner of the frame. (The system relies on that piece of knowledge in order to make sense of the terms found in the DAXSVG ontology: “where” up and down, left and right “are”.)

The system in this paper uses the gensym (generate symbol) technique of creating names; the Business Man Bob collection of SVG paths uses the SVG Group element (g) to allow an SVG id to be defined for each path. The twenty-seventh SVG path is shown next. It has been named g27, the twenty-seventh generated gensym name (g#). Each SVG group in the Business Man Bob data set has a unique SVG id using the gensym naming system.

    <g id="g27">
       <path d="M 126 349 L 152 359 153 364 142 365 124 364 Z"
             style="fill:none; stroke:black; stroke-width:2"/>
    </g>
    
    
There is no particular information contained in the name g27 other than that it is the twenty-seventh name generated by the system (name generator). By giving the SVG path element a name (e.g. g27) it is possible for XML programs to address the path x,y values symbolically, and the name can also be used in a context or ontology. The arm-length of the figure is 123 (y-units); the arm-length of this figure is its “my arm’s reach”. The “My” body height is 275 (y-units).
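A minimal sketch of such a name generator follows; the class itself is hypothetical, and only the g-prefix-plus-counter convention comes from the data set.

    // Sketch of gensym naming: a counter produces g1, g2, ... g27, ...;
    // the name carries no information beyond its ordinal position.
    public class Gensym {
        private int counter = 0;

        public String next() { return "g" + (++counter); }

        public static void main(String[] args) {
            Gensym g = new Gensym();
            String id = null;
            for (int i = 0; i < 27; i++) { id = g.next(); }
            System.out.println(id); // g27
        }
    }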

The values shown for the SVG path element in the illustration above define the actual set of x,y co-ordinate pairs which describe a particular geometry in the Business Man Bob data set. To the human eye it is easily recognizable as a foot (specifically, the foot of Business Man Bob).

Since it is the Business Man Bob data set (shown above) which is defining to the system what a body is (visually) there is no way for the system to know magically that the SVG path called g27 “is a foot”. We call it a foot and inform the computer system of this. The system has a collection of ontologies of spatial-thing, where one of these is an ontology for human-body. The instance name for this ontology is My.

    Thing
      spatiotemporalThing
        ontologyType.slot(human-body)
        ontologyName.slot(My)
          objectName.slot(g27)
          stringName.slot(foot)
          frameRelLoc.slot(bottom:at,near)
     
The stringName is the name that is provided by a human, which is defined culturally, via linguistics. In German we might find stringName.slot(Fuss), in French stringName.slot(pied). These are shown to depict that there is no magic in and of the string “foot” itself.

That being said, we now have a name which can be used in ontologies as a linguistically derived term. When we look at the CYC knowledge-base we see that it has defined a number of valuable, useful knowledge items to do with foot. A piece of the CYC knowledge-base is shown next. This OWL (Web Ontology Language) item is just one of a myriad of OWL entries in the CYC knowledge-base. CYC may be used to infer tacit information, that is, information which is not explicitly present in an input / data set. Notice that the CYC OWL item below tells the computer, in semantic web terms, about “foot”.

    <owl:Class rdf:ID="Foot-AnimalBodyPart">
        <rdfs:label xml:lang="en">feet (types of things)</rdfs:label>
        <rdfs:comment>The collection of all vertebrates&apos; feet.  A
            foot is a terminal part of a #$Vertebrate #$Leg.  Feet are
            used in locomotion, support, balance, kicking, etc.</rdfs:comment>
        <guid>bd58be93-9c29-11b1-9dad-c379636f7270</guid>
        <rdf:type rdf:resource="#PublicConstant"/>
        <rdf:type rdf:resource="#SymmetricAnatomicalPartType"/>
        <rdf:type rdf:resource="#AnimalBodyPartType"/>
        <rdfs:subClassOf rdf:resource="#Appendage-AnimalBodyPart"/>
        <rdfs:subClassOf rdf:resource="#Individual"/>
        <owl:disjointWith rdf:resource="#Digit-AnatomicalPart"/>
        <owl:disjointWith rdf:resource="#Limb-AnimalBodyPart"/>
    </owl:Class>
     
    
This ontological item tells the computer that a foot is not a finger, arm, or leg, and that it is a terminal part of a leg (i.e. at the end of a leg, not the middle or elsewhere). This ontological semantic knowledge is valuable, for it allows the program we are discussing in this paper to identify / locate a foot in a visual data set without being committed to a particular photograph of one, nor to a drawing or X3D three-dimensional data set of a foot.

In our human-body ontology above we see that frameRelLoc.slot(bottom:at,near), the relative location in the visual frame (the box enclosing the drawing of Bob), has an instance slot value of “bottom”. Notice that “bottom” is a member of the DAXSVG ontology we saw listed earlier in the paper. The exact entry there was

  
    <rdf:Property ID="Bottom">
        <rdfs:comment>absolute marker, minimum y value in 2D reference system. 
        SVG maxy</rdfs:comment>
        <rdfs:range rdf:resource="#SvgEntity" />
        <rdfs:domain rdf:resource="#SvgEntity" />
    </rdf:Property>
     
    
Bottom, then, is a semantic term for the location in an SVG picture which corresponds to maxy. Maxy is a named constant which is set to the y-axis value having the largest possible magnitude for that SVG frame. In the case of the data set Business Man Bob the largest y-axis value of the drawing is y=364; that is the y co-ordinate of the “bottom of Bob’s foot” in the drawing. As shown elsewhere in this paper, the SVG viewBox, typically (0 0 canvasmaxx canvasmaxy), can be read by an XSLT program and the value of the largest extent in the y-axis direction, canvasmaxy in this case, placed into the working constant maxy. Since the drawing in our example case does not extend down to the very bottom of the frame (the SVG canvas), the program is able to scan all the SVG code, the SVG path groups in this example, and easily locate the path with the largest-magnitude y-axis value. Here it is path g27, with a largest y-axis value of 364. The semantic term Bottom in the DAXSVG ontology is then resolved to have the associated value of (y=) 364. That is the defined (semantic) Bottom of the picture under consideration.
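A minimal sketch of that scan is shown next; the sampled y values are illustrative stand-ins for a full pass over all of the Business Man Bob path groups.

    // Sketch: resolve the semantic term Bottom by scanning every y value
    // in the picture's path groups and keeping the largest.
    public class BottomResolver {
        public static void main(String[] args) {
            int[] yValues = { 349, 359, 364, 158, 155, 57 }; // sampled path y coords
            int maxy = 0;
            for (int y : yValues) {
                maxy = Math.max(maxy, y);
            }
            System.out.println("Bottom resolves to y=" + maxy); // 364
        }
    }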

Further, the function f(x) defined earlier in the paper, which computes a curve like that of Far (the Near-Far graph labelled Fig. 1), can be applied as f(y) to the y-axis values of the drawing to obtain “shades of grey” values for the y-axis values ranging from 0 through 364. In English what this means is that a y-axis value of 364 computes to a value of 1.00, or “white”, and as a given value for y falls below 364 that y-value computes to a progressively darker shade of grey, until y=0, which computes to a shade of grey equal to black. The f(y) function calculates a scalar value which is proportional to the nearness of a given y-axis location or value to the Bottom of the picture (which is y=364). Thus the function f(y) allows for the detection of things located NEAR the Bottom as well as things AT the Bottom. So something located at y=364 is AT the Bottom, and something located at y=347 is NEAR the Bottom. This allows the system to recognize things as being at the “Bottom”; notice the quote marks around Bottom in this case. People often use language this way: the Bottom with the quote marks around it is the figurative or colloquial Bottom. Instead of being exactly equal to one value only (i.e. Bottom = y=364) the program is able to perceive “Bottom”, being (a possible range of) location values sufficiently near the Bottom.
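A minimal sketch of that classification, assuming the linear f(y) described above and an illustrative NEAR threshold of 0.95:

    // Sketch: f(y) is 1.0 ("white") at the Bottom (y=364) and darkens
    // linearly toward y=0; AT requires equality, NEAR a high membership.
    public class BottomClassifier {
        static final int BOTTOM = 364;

        static double f(int y) { return (double) y / BOTTOM; }

        static String classify(int y) {
            if (y == BOTTOM) return "AT the Bottom";
            if (f(y) > 0.95) return "NEAR the Bottom (\"Bottom\")";
            return "not at the Bottom";
        }

        public static void main(String[] args) {
            System.out.println(classify(364)); // AT the Bottom
            System.out.println(classify(347)); // NEAR the Bottom ("Bottom")
            System.out.println(classify(100)); // not at the Bottom
        }
    }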

Next have a look at Figure 26, the photograph of Monument Valley, and also look at Figure 27.

Figure 26: Monument Valley. Illustration shows rock formation vertically oriented, compare with similar vertical orientation of source data set human figure Business Man Bob. That figure is the source or origin comparator for metaphors based on the spatial features of the human body.
Figure 27: Illustration shows rock formation vertically oriented. It is recognizable to humans as a wall or canyon side. The data set Business Man Bob serves as the basis of the metaphor-tenor, i.e. the visible metaphor source. This rock wall serves as the metaphor-vehicle, also known as the metaphor comparand.

Here is where we get to the metaphor. “The Indians were at the foot of the canyon writing ecommerce systems on their wi-fi as we came through the mouth of the gorge where the river roared and ran swiftly.” There are two metaphors in that sentence which will be discussed. The first metaphor is “at the foot of the canyon”. “At” is treated as figurative or colloquial, and hence alludes to the use of NEAR instead of only strictly AT. “Foot of the canyon” is a noun followed by an “of” prepositional phrase. A noun is a person, place, or thing. Since the noun “foot” here is neither a person nor a thing (there is no actual foot involved), “foot” must be a place. It also is “of” the canyon, about the canyon, as signified by the prepositional phrase.

In order to make sense of or detect the meaning of the metaphor the perceiving program must make some analyses of the picture painted by the sentence (Indians etc.). That there is a vertically oriented visual structure in the picture, such as the canyon wall of Figure 27, and a vertically oriented visual structure in the source drawing (of Bob) is a key feature to detect. That the term “foot” is used explicitly in the metaphor-vehicle and that the word “foot” occurs in the metaphor-tenor (i.e. the spatiotemporalThing ontology stringName.slot(foot)) allows the system to “line up” “foot” in the metaphor-tenor and metaphor-vehicle. The next thing that is done is that the system sees frameRelLoc.slot(bottom) and uses the Bottom predicate in DAXSVG to determine if the same spatial feature (bottom), which is an area or range of location, also occurs (correspondingly) in the canyon visual such as Figure 26 or 27. In the case of the Monument rock structure “Bottom” does exist; it is the sloped covered area at the base of the rock formation. Bottom can be taken to be the ground level of the rock formation base. Therefore “the foot of the canyon” is the place at or near the ground level or base part of the Monument rock formation. “Foot” refers to a metaphorical location, and not to an actual foot: the canyon does not have an actual physical foot. (The computer may not always possess as extensive a background knowledge of things like canyons and feet as we do, and so things that are obvious or common sense to us are unknown or unconsidered by a program.)
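A minimal sketch of that lining-up step follows; the two maps and the canyon feature inventory are illustrative assumptions standing in for the tenor ontology and the vehicle picture analysis.

    import java.util.Map;

    // Sketch: the tenor ontology supplies stringName "foot" with
    // frameRelLoc "bottom"; the vehicle analysis of the canyon picture
    // supplies the spatial regions the DAXSVG Bottom predicate detected.
    public class MetaphorAligner {
        public static void main(String[] args) {
            // tenor: from the human-body ontology instance "My"
            Map<String, String> tenor = Map.of("foot", "bottom");
            // vehicle: regions detected in the Monument picture
            Map<String, Boolean> vehicleHasRegion = Map.of("bottom", true);

            String word = "foot";            // the word used in the metaphor
            String region = tenor.get(word); // foot -> bottom
            if (region != null && vehicleHasRegion.getOrDefault(region, false)) {
                System.out.println("\"" + word + " of the canyon\" resolves to the "
                    + region + " region of the rock formation");
            }
        }
    }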

For most metaphors the ontologies involved in the knowledge base would be considerably bigger and more extensive than the one shown for this example. Showing hundreds of lines of ontology does not make for a clear example, however, and so the one used here was purposely kept simple to make things easier to follow.

The second metaphor, “the river roared”, is a metaphor based on process or functionality rather than location. Bob's mouth, located “at” path M 98,158 L 117,155 in the drawing, and labelled g15, is NEAR the (DAXSVG) Top of the drawing. But it isn't the appearance or location of the mouth that makes the metaphor; it is the function or process of the mouth, as will be seen. There are two such processes performed by the mouth: 1) making sound, and 2) ingesting nutrients. We know, and the system would have to know also, that “roaring” is the making of sound, usually by means of lungs and mouth. When we look at the CYC ontology element ID=“Speaking” below we see that Speaking is a type of human capability and the making of an oral sound. A mooing cow would fit with the Utterance-IBT element (below), as would the roaring of a lion, but Utterance-IBT requires an agent, which water is not. The ontological item to handle “the river roared” would require “a sound creating thing” and “sound”.

     <owl:Class rdf:ID="Speaking">
        <rdfs:label xml:lang="en">speaking (type of thing)</rdfs:label>
        <rdfs:comment>A specialization of #$Talking, which includes
            non-verbal talking such as the use of sign-language. The
            collection of actions generating utterances (c.f.
            #$Utterance-IBT) which are speech. Hence, #$Speaking
            normally includes only those utterances using some
            #$Language as a communication convention, unlike other
            utterances such as #$Booing and #$Cheering.</rdfs:comment>
        <guid>bd58bf82-9c29-11b1-9dad-c379636f7270</guid>
        <rdf:type rdf:resource="#PublicConstant"/>
        <rdf:type rdf:resource="#HumanCapabilityType"/>
        <rdf:type rdf:resource="#PublicConstant-CommentOK"/>
        <rdf:type rdf:resource="#TemporalStuffType"/>
        <rdf:type rdf:resource="#DefaultDisjointScriptType"/>
        <rdfs:subClassOf rdf:resource="#MakingAnOralSound"/>
        <rdfs:subClassOf rdf:resource="#Talking"/>
        <rdfs:subClassOf rdf:resource="#Individual"/>
    </owl:Class>
    <owl:Class rdf:ID="Talking">
        <rdfs:label xml:lang="en">talked (type of thing)</rdfs:label>
        <rdfs:comment>A collection of actions. Each instance of
            #$Talking is an action in which somebody (often by
            #$Speaking) creates a meaningful phrase.  #$Talking is often
            a subevent in various #$Communicating events. Note that not
            all instances of #$Talking involve #$Speaking, since
            sometimes people can talk using some non-oral language,
            e.g., #$AmericanSignLanguage.  #$Talking, however, is
            disjoint with #$Writing.</rdfs:comment>
        <guid>c066b286-9c29-11b1-9dad-c379636f7270</guid>
        <rdf:type rdf:resource="#ProposedPublicConstant"/>
        <rdf:type rdf:resource="#PublicConstant"/>
        <rdf:type rdf:resource="#HumanCapabilityType"/>
        <rdf:type rdf:resource="#TemporalStuffType"/>
        <rdf:type rdf:resource="#DefaultDisjointScriptType"/>
        <rdfs:subClassOf rdf:resource="#IBTGeneration"/>
        <rdfs:subClassOf rdf:resource="#LearnedActivity"/>
        <rdfs:subClassOf rdf:resource="#Individual"/>
        <owl:disjointWith rdf:resource="#Writing"/>
    </owl:Class>
    <owl:Class rdf:ID="Utterance-IBT">
        <rdfs:label xml:lang="en">utterance - i b t</rdfs:label>
        <rdfs:comment>A specialization of #$Sound.  Each instance of
            this collection is a sound initially generated by some
            #$Agent speaking or making some sound with his/her mouth (or
            other specifically sonic-information-conveying organ or
            device). Such sounds may or may not have propositional
            content -- that is -- instantiate some
            #$PropositionalInformationThing.  If such a sound is
            recorded and played back, the sound generated is still
            considered an instance of #$Utterance-IBT.  Note that only
            the sounds themselves are instances of this collection --
            not the activities of making them. This collection is not a
            specialization of #$Action. (For that, see
            #$CommunicationAct-Single and its specializations.) An
            important specialization of this collection is 
            #$AnimalUtterance-IBT.</rdfs:comment>
        <guid>41a85618-5c96-11d6-8000-0001031bfeec</guid>
        <rdf:type rdf:resource="#FirstOrderCollection"/>
        <rdf:type rdf:resource="#TemporalStuffType"/>
        <rdf:type rdf:resource="#ObjectType"/>
        <rdf:type rdf:resource="#ProposedPublicConstant"/>
        <rdf:type rdf:resource="#ProposedPublicConstant-DefinitionalGAFsOK"/>
        <rdfs:subClassOf rdf:resource="#SoundInformationBearingThing"/>
        <rdfs:subClassOf rdf:resource="#Sound"/>
        <rdfs:subClassOf rdf:resource="#Individual"/>
    </owl:Class>
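Given the three class definitions above, the agent test which rejects a literal reading of “the river roared” can be sketched in Java. The knowledge-base interface here (isAgent, makesSound) is an assumption for illustration, not the OpenCyc API:

    public class UtteranceCheck {
        interface KB {
            boolean isAgent(String thing);    // a river / water: false
            boolean makesSound(String verb);  // "roared": true
        }

        /** A literal Utterance-IBT reading needs a sounding event
         *  generated by some agent. */
        static boolean literalUtterance(KB kb, String subject, String verb) {
            return kb.makesSound(verb) && kb.isAgent(subject);
        }

        /** When the sound test passes but the agent test fails, the
         *  clause is handed to metaphor processing instead. */
        static boolean metaphorical(KB kb, String subject, String verb) {
            return kb.makesSound(verb) && !kb.isAgent(subject);
        }
    }

For “the river roared”, makesSound holds but isAgent fails, so the literal reading is rejected and the system must instead construct the “sound creating thing” correspondence described above.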
    
    

Figure 28: SVG illustration displays a barchart. See barchart.svg for the SVG code which creates this picture.

In the case of our SVG code example shown earlier in this paper, we can look at that code and see that “arrowstreamer” started its animated journey at SVG coordinate x=110, y=180 and stopped at x=110, y=55. The SVG (visible) object “obstructor” slides into place near the end of the animation; prior to that time there is an open path between any location of the tip of “arrowstreamer” and “goallabel”. The latter visibly displays the word goal in the SVG display space, the picture. From the start of the animated “motion” of “arrowstreamer” until the end of the animation, “goallabel” exists in the animated illustration. Once “obstructor” slides into place near the end of the animation, the path between “arrowstreamer” and “goallabel” is blocked (by “obstructor”). Yet even after the path is blocked, the SVG object “goallabel” continues to exist, and in an unchanging location.

If the path between the end of “arrowstreamer” and “goallabel” is defined to represent visibility of x (x in this case being “goallabel”), then “goallabel” becomes not-visible after “obstructor” slides into place. Yet “goallabel” continues to exist in the picture / diagram! When the programmatic analysis of this animation occurs, the system is able to detect something profound: when a visible thing (an ontological entity) is visible (an ontological relationship) and then becomes not-visible, it still continues to exist. (Young babies are not yet able to conceive this.) The other relationship that the programmatic analysis detects is that occlusion by a non-transparent object renders previously visible things not-visible. For you and me this is obvious; for a machine it is a realization of causality.

The SVG object “goallabel” and these analytical findings can be treated as the source of a metaphor. When some object is detected by a sensor, it can be metaphorically compared to “goallabel”, and if the sensor further detects an occlusion of the detectability / visibility path occurring, then the system can metaphorically TRANSFER, using the “goallabel” / sensed-object CORRESPONDENCE, that the sensed object may (likely) continue to exist even though it is no longer sensed. This scenario might happen if a sensed intruder suddenly ducked behind some object. It would be desirable for a sentry system to behave as though the intruder had not disappeared into thin air, as a young baby would perceive.
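The persistence inference just described can be sketched in a few lines of Java. This is a minimal sketch under assumed scene accessors (existsInScene, pathBlocked); the names do not come from the SVG DOM or DAXSVG:

    public class PersistenceMonitor {
        interface Scene {
            boolean existsInScene(String id);                  // "goallabel" in document?
            boolean pathBlocked(String viewer, String target); // "obstructor" in the way?
        }

        private boolean wasVisible = false;

        /** Called once per animation step. Returns true at the step where
         *  the target, previously visible, becomes occluded yet still
         *  exists -- the moment to assert persistence. */
        boolean persistsWhileOccluded(Scene s, String viewer, String target) {
            boolean visible = s.existsInScene(target)
                              && !s.pathBlocked(viewer, target);
            boolean occludedNow = wasVisible && !visible && s.existsInScene(target);
            wasVisible = visible;
            return occludedNow;
        }
    }

Run with viewer = "arrowstreamer" and target = "goallabel", the method returns true exactly at the step where "obstructor" blocks the path, which is when the system may assert that the goal label persists although no longer visible.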

“The nature of the mapping function used to implement linguistic-variable quantification, pioneered by Zadeh and Goguen, is shown to be governed by context.” [Dodds 1981] The mechanism which implemented context was thoroughly explained in the paper Natural Language Processing and Diagrams. Explicit computer programming used to implement context was presented; no verbal hand-waving.

“Certain types of metaphorical usages in ordinary language are shown to be implementable via a spatial, or eidetic transfer mechanism.” [Dodds 1981] To further show the programmatic implementation of this mechanism, the author presented equations and graphics of the mechanism in [Dodds 1988], in the international journal Fuzzy Sets and Systems. (Negoita [Negoita 1985] shows how fuzzy set mathematics is used in expert-system scenarios.) In the WROX book [Dodds 2001] the author provided actual code listings implementing the spatial mechanism, accompanied by a thorough explanation of how the code worked.

“...it is shown that such mechanisms can significantly expand the scope of construct representation of a computer system which is based upon a collection of spatial primitives which are used as representation extending nucleii.” [Dodds 1981]

“...occasionally look into some big ideas”, “...see far into the future”. [Dodds 1981]

Summary

An extended version of this paper will be available at http://www.open-meta.com. It covers implementing other kinds of metaphors, as well as additions to the spatial correspondence metaphor covered here. The (concepts behind the) words that people use in everyday natural language have multiple meanings, listed in the dictionary as numbered nuances. People exchange these words in conversation without explicitly stating a standardized nuance number from the dictionary, yet almost always understand the intended nuance tacitly. This is done by means of recognition and use of context: the receiver infers the context, building it up via recollection of previous utterances.

In a similar way the computer may build or populate a context using features, and their values, obtained from its input environment. [Dodds 2004] and this paper have shown how such context is recognized or built. Context is represented both by ontology structures such as we have seen above (XML structures, edited with tools such as Protégé) and by graph structures, represented in this paper by XGMML. Context “informs” the metaphor recognition system so that the range of possible metaphor “targets” is constrained to one, or a few at most. It is the context which detects and represents the “nuance” in the situation and references the correct “meaning”, the correct “transfer” or “correspondence” of the metaphor. In this way a reference or base semantic object, such as the SVG animation in this paper, may be used to allow the computer to “understand” the persistence of objects even when they are occluded from sight. Note that young babies apparently take an occluded object to have become non-existent; only when they are older do they realize that objects continue to exist when not in view. This is a big deal in abstract conceptualization, and it is a powerful addition to representation in computers that semantic-object metaphors are programmatically constructable and recognizable. The practical upshot is a sentry robot which detects an intruder who hides behind something, yet “realizes” that the person did not dematerialize or become non-existent. The robot detects that its view of the intruder is obstructed and tries a new location to continue dealing with the intrusion. A sentry which could not recognize “persistence”, as in (visually) occluded continuity via the metaphor capability, would simply roll away, blithely forgetting the intruder's existence!

The CYC footnote

WHAT IS OPEN-CYC? This section is purposely placed after the summary because it was originally intended to be a footnote; it became too large to be a footnote and so was promoted to a section. “OpenCyc is the open source version of the Cyc technology, the world’s largest and most complete general knowledge base and commonsense reasoning engine.” Read the OpenCyc frequently asked questions at http://www.opencyc.org/faq/opencyc_faq.

An ontology is “An explicit formal specification of how to represent the objects, concepts and other entities that are assumed to exist in some area of interest and the relationships that hold among them. An ontology is the attempt to formulate an exhaustive and rigorous conceptual schema within a given domain, a typically hierarchical data structure containing all the relevant entities and their relationships and rules (theorems, regulations) within that domain.”

“Cyc enhances XML by providing a powerful universal semantics for modeling objects described via XML. [Cyc natural language processor performs] word-to-concept correspondences (e.g. the word “eating” means the same thing as the Cyc concept #$EatingEvent; the word “yellow” means the same as the Cyc concept #$YellowColor). This information is used to transform parses [of English word strings] into CycL expressions.” CycL is the language in which Cyc is programmed; basic Cyc is modelled after the KIF representation. By parsing natural language strings into CycL, Cyc is then able to perform commonsense reasoning on them. The results of the reasoning can be translated from CycL back into natural language (such as English) by means of the Cyc CycL-to-string transformer, a capability built into the Cyc system.
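As a toy illustration only (this is not the Cyc API), the word-to-concept step can be pictured as a lookup from English words to CycL constants:

    import java.util.Map;

    public class WordToConcept {
        // Illustrative three-entry lexicon; Cyc's actual lexicon is vast.
        static final Map<String, String> LEXICON = Map.of(
            "eating",   "#$EatingEvent",
            "yellow",   "#$YellowColor",
            "speaking", "#$Speaking");

        /** Maps a word to its CycL concept, defaulting to #$Thing. */
        static String concept(String word) {
            return LEXICON.getOrDefault(word.toLowerCase(), "#$Thing");
        }
    }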

Following are a few examples of the CYCSUMO ontology. The large size of this ontology may be appreciated by noticing that the examples represent much less than 1% of the total system.

         (#$comment #$ableToAffect "'(#$ableToAffect AGENT THING)' means that AGENT
         is capable of causing some change in THING.  This does not imply that AGENT
         ever actually does cause any change in THING, but that THING is within 
         AGENT's 'zone of influence'.  For instance, I am able to affect the 
         ceiling panel above my head, even though I've never done anything to it.
         In contrast, I cannot affect the moon.  This is an inherently vague 
         notion, since one's ability to influence objects tends to diminish as they
         grow larger--or smaller--and farther away.  However, it's an important 
         common sense concept, since we must learn what we can and cannot affect
         in order to understand our capabilities and limitations and plan actions 
         accordingly.") 
       
         (#$comment #$above-Directly "(#$above-Directly ABOVE BELOW) means either 
         that (1) the volumetric center of ABOVE is directly above some point of 
         BELOW, if ABOVE is smaller than BELOW; or that (2) some point of ABOVE is 
         directly above the volumetric center of BELOW, if ABOVE is larger than, 
         or equal in size to, BELOW.") 

         (#$comment #$above-Generally "(#$above-Generally OBJ1 OBJ2) means that 
         the #$SpatialThing-Localized OBJ1 is more or less above the 
         #$SpatialThing-Localized OBJ2. To be more precise: if OBJ1 is within a 
         cone-shaped set of vectors within about 45 degrees of #$Up-Directly 
         pointing up from OBJ2 (see #$Up-Generally), then (#$above-Generally OBJ1 
         OBJ2) holds. This is a more general predicate than #$above-Directly 
         (q.v.), but it is a more specialized predicate than #$above-Higher (q.v.).
         It probably most closely conforms to the English word \"above.\"") 
  
         (#$comment #$above-Higher "(#$above-Higher OBJ-A OBJ-B) means that OBJ-A
         is at a greater altitude (from some common reference point) than OBJ-B.
         In terrestrial contexts (see #$TerrestrialFrameOfReferenceMt),
         (#$above-Higher OBJ-A OBJ-B) typically means that OBJ-A is at a greater 
         altitude above sea level (see the predicate #$altitudeAboveSeaLevel) than 
         OBJ-B.") 
    
         (#$comment #$above-Overhead "(#$above-Overhead ABOVE BELOW) means that 
         ABOVE is directly above BELOW (see the predicate #$above-Directly), all 
         points of ABOVE are higher than all points of BELOW, and ABOVE and BELOW 
         do _not_ touch.") (#$genlPreds #$above-Overhead #$above-Directly) 

         (#$comment #$above-Touching "(#$above-Touching ABOVE BELOW) means that 
         ABOVE is located over BELOW and they are touching.  More precisely, it 
         implies both (#$above-Directly ABOVE BELOW) and that ABOVE #$touches 
         BELOW.  Examples: a person sitting on a chair; coffee in a cup; a boat 
         on water; a hat on a head. (Note that not every point of ABOVE must be 
         higher than every point of BELOW.)") 

Note that in the above CYC predicate, (#$ableToAffect AGENT THING), the ability of AGENT to affect THING is modulated or constrained by the relative size and separating distance of AGENT and THING. The system, in order to maintain this constraint, has tests or conditionals which are automatically invoked when that predicate is used in a reasoning cycle. The test would look something like:

         (AND (NOT (large (difference (vol AGENT) (vol THING))))
              (NOT (large (separation AGENT THING))))

In English, that conditional tests both that the difference in the relative sizes of AGENT and THING is not large and that the distance separating the locations of AGENT and THING is not large; the relative sizes must be neither too large nor too small. The conditional performs a calculation which provides (a test for the presence or absence of) a common-sense context or situation whereby an agent might reasonably exert a sphere of influence upon the receiving THING. In other words, the AGENT can reasonably affect something which is near enough to it and which is neither too small nor too large to be (reasonably) manipulated or affected.
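A hedged Java rendering of that test follows. The volume and separation arguments, the reference extents, and the 0.8 threshold are illustrative assumptions rather than OpenCyc values:

    public class AbleToAffect {
        static final double LARGE = 0.8; // assumed fuzzy threshold

        /** Normalised "largeness" of a value against a reference extent,
         *  clipped to [0,1]. */
        static double large(double value, double extent) {
            return Math.min(1.0, Math.max(0.0, value / extent));
        }

        /** AGENT can affect THING when neither the volume difference nor
         *  the separation registers as large. */
        static boolean ableToAffect(double volAgent, double volThing,
                                    double separation,
                                    double volRef, double distRef) {
            double volDiff = Math.abs(volAgent - volThing);
            return large(volDiff, volRef) < LARGE
                && large(separation, distRef) < LARGE;
        }
    }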

Notice that the predicate IsNear, from the previously mentioned DAXSVG RDF Schema, can be used to compute separation(AGENT, THING). While the example of case 6 of the relation IsNear(x1,x2) shown earlier is “crisp” (its returned response is boolean, two-valued), IsNear in the DAXSVG RDF Schema is calculated via g1(x), which computes a scalar value representing the fuzzy grade of membership for that context (x being the scalar value separation(AGENT, THING)). Large calculates a scalar value from a comparison of its input value, the “difference” in this case, with the extent of the x-axis of the SVG canvas, which defines the “visual world” used here. If the canvas is, say, 700 units in extent along the x-axis, then large() compares the value of separation(AGENT, THING) with 700. This measure of largeness is therefore “situated in” the perceived world (space): it is a representation of a relative, or situated, concept of “large”.
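The situated measures themselves can be sketched as below, assuming the 700-unit canvas mentioned above and using a simple linear membership function as a stand-in for the DAXSVG function g1(x):

    public class SituatedMeasures {
        static final double CANVAS_X_EXTENT = 700.0; // the "visual world"

        /** Fuzzy grade of membership in "large", situated in the canvas. */
        static double large(double value) {
            return Math.min(1.0, Math.max(0.0, value / CANVAS_X_EXTENT));
        }

        /** Fuzzy IsNear as the complement of large separation. */
        static double isNear(double separation) {
            return 1.0 - large(separation);
        }

        public static void main(String[] args) {
            System.out.println(isNear(70.0)); // 0.9 : quite near
            System.out.println(large(630.0)); // 0.9 : quite large
        }
    }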


Acknowledgments

The author would like to thank John Searle for an interesting and validating personal discussion in Berkeley about the importance and use of situated context in actions taken by intelligent systems. The author would also like to thank George Lakoff for producing stimulating and informative work on the nature of scientific and technical metaphor; to thank Edward deBono for being insightful and creative and publishing the innovative work showing functional depiction and usage of spatial metaphors (along with some “naive physics”) made visible or non-tacit [deBono 1981], and Lotfi Zadeh for introducing fuzzy mathematical representation and reasoning to the world over a period of a few decades [Zadeh 1975]. Berkeley certainly has some insightful people.


Bibliography

[Bateson 1979] Bateson, Gregory. Mind and Nature. 1979. Dutton: NY. 0-525-15590-2.

[deBono 1981] deBono, Edward. Atlas of Management Thinking. 1981. Maurice Temple Smith Ltd. 0-140224610. pp 200.

[Dodds 1981] Dodds, David. Fuzzy Logic Computer Implementation of Metaphor from Ordinary Language. 1981. AAAS Annual Meeting (American Association for the Advancement of Science) [publishers of the journal Science].

[Dodds 1988] Dodds, David. Fuzziness in Knowledge-Based Robotics Systems. 1988. Fuzzy Sets and Systems 26: North-Holland. 179-193.

[Dodds 1989] Dodds, David. Fuzziness in Knowledge-Based Robotics Systems. 1989. Mobile Robots IV: Volume 1195. SPIE - The International Society for Optical Engineering. 56-65.

[Dodds 2001] Dodds, David, et al. Professional XML Meta Data. 2001. Wrox Press Inc. ASIN: 1861004516. 600 pages.

[Dodds 2004] Dodds, David. Natural Language Processing and Diagrams. 2004. The 2004 International Conference on Machine Learning; Models, Technologies and Applications.

[Jones 1982] Jones, Roger S. Physics As Metaphor. 1982. Meridian: NY. 0-452-00620-1. pp 254.

[Lakoff 1980] Lakoff, George. Metaphors We Live By. 1980. University of Chicago Press. 0-226-46801-1. pp 242.

[Negoita 1985] Negoita, C. V. Expert Systems and Fuzzy Systems. 1985. The Benjamin Cummings Publishing Company: Menlo Park. 0-8053-6840-X. pp 190.

[Ricoeur 1978] Ricoeur, Paul. The Rule of Metaphor. 1978. U of T Press. 0-8020-6447-7. pp 384.

[Zadeh 1975] Zadeh, Lotfi. Fuzzy Sets and Their Applications to Cognitive and Decision Processes. 1975. Academic Press. 0-12-775260-9. pp 496.


