OntoClock, The Difference Between Having Ontological Knowledge and Knowing It : Ontological Reflection Services -- The Hoarse Whisperer

David Dodds


An SVG program, named OntoClock, is used to illustrate ontological knowledge [Dodds2 2006]. The SVG program OntoClock System (OCS) contains NASA JPL ontologies about Space and Time. This paper discusses how, while OntoClock has knowledge of space and time, its awareness of this knowledge is rather dim. A model of (non-experiential) awareness is described, providing OntoClock with a circumscribed awareness of what actions its systems are performing, what goals are being serviced, and the fact that it has knowledge of / knows about space and time. This 'knowing' is a second order thing. Most machine intelligence systems HAVE knowledge but do not enact KNOWING; they are operated by "procedural knowledge", which is not in any way aware of itself, of what it is doing, or even of the state of its own situation. 'Knowing' is shown here to provide OCS a means of 'being embedded in time'.

Ontologies are a means of representing knowledge, the semantics of terms. An ontology can provide knowledge of spatial concerns, and (associated) concomitant logic can provide details of how to perform needed operations, such as knowing how to scan for spatial concerns and being able to recognize them once found. This can be done employing procedural know-how, and the metaprogramming system uses this know-how intelligently by implementing "knowing that" (which may be achieved by monitoring the activation record, a simple introspection of activity).

Keywords: Semantics

David Dodds

Bio: David Dodds has worked with computers since 1968, when he wrote Continuous Systems Models (CSMP) on IBM mainframe computers at university. Later he worked at Nortel (Northern Telecom Bell Northern Research), where he designed and developed graphical interfaces and scientific and technical visualization systems, and wrote text understanding software; these were in-house projects to allow extraction of content from telephony specification documents for use by graphical-interface programs. He also wrote expert systems in C and Prolog. Prior to that, in university environments, he programmed a speech synthesis system, which produced the first-ever machine-spoken Coast Salish, and designed and developed technical scientific models and simulations, including a simulated town council in a continuous system Forrester Limits to Growth model. He was Sessional Lecturer and taught computing science in a university computing science department.

He has been working for the last nine years on the various emerging XML technologies, was on the W3C SVG (Scalable Vector Graphics) workgroup to develop the metadata element specification for SVG 1.0, and on the Idealliance committee to develop the XML Topic Map (XTM) specification. David has published numerous papers on robotics and fuzzy systems. Two of these papers were published in the SPIE proceedings Space Station Automation III. He was lead author of the book WROX Professional XML Meta Data. He has presented numerous papers on XML, SVG and RDF, and intelligent and content-aware graphics systems.

David presented two papers at SVGOpen 2003, one on Accessing SVG Content Linguistically and Conceptually, the other on Programming SVG Advanced Access Using Metadata and Fuzzy Sets. He presented a paper, Natural Language Processing and Diagrams, about the use of ontologies and logic, at The 2004 International Conference on Machine Learning; Models, Technologies and Applications; which is a part of The 2004 International Multiconference in Computer Science and Computer Engineering. David's paper, Extending Representation Capability (Representation Extention Through the Corresponding Metaphor Process) was in the Extreme 2004 conference proceedings. He presented a paper, Components of Meta-Programming, Computer Analogies and Metaphors, at The 2005 International Conference on Programming Languages and Compilers which is a part of The 2005 International MultiConference in Computer Science and Computer Engineering.

David chaired his own session; Second Order Meta-Programming - Cognitive Fusion and Autonomous Robots ; and presented two papers; Second Order Metaprogramming, and Ontologies and Conceptual Metaphor in Autonomous Robotics; at The 2006 International Conference on Artificial Intelligence which is a part of WorldComp 2006. He chaired his own session and presented two papers at The 2007 International Conference on Artificial Intelligence, which is a part of WorldComp 2007. He presented; OntoClock, The Difference Between Having Ontological Knowledge and Knowing It; and Second Order Meta-Programming Situatedness, Awareness, Knowledge. He presented a third paper, on Situated Systems, at The 2007 International Conference on Wireless networking. David's paper; OntoClock, The Difference Between Having Ontological Knowledge and Knowing It; is in the Extreme ML 2007 conference proceedings.

OntoClock, The Difference Between Having Ontological Knowledge and Knowing It

Ontological Reflection Services -- The Hoarse Whisperer

David Dodds [Open-Meta Computing]

Extreme Markup Languages 2007® (Montréal, Québec)

Copyright © David Dodds 2007. Reproduced with permission.


    The Most Important Idea In This Paper
  • Systems may have knowledge but they must have knowing of that knowledge before it is truly useful. This is second-order metaprogramming. Reflection.


For every da Vinci and Einstein there are 500 million villagers with torches and pitchforks.


OCS can detect that it 'makes the hands of the clock move'. One of the central points of this paper is that computer based systems which implement models of cognition are not 'integrated' or 'holistic' the way biological systems are. Specifically, there is no innate or built-in process in the computer which logically unites the various assemblies and collections of programs and subprograms which might effect anything remotely resembling 'common sense'.

Declarative and Procedural Knowledge

In the computer, knowledge occurs in two forms: declarative and procedural. The declarative form of knowledge exists in the SVG program as content of the SVG metadata element. Almost all of the metadata of the OCS is embedded in the program itself. A smaller amount of the metadata is external and referenced via RDF pointer.
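As a sketch of how that embedded-versus-external split might be packaged (the ontology URI below is illustrative only, not the actual OCS source), an SVG metadata element can carry inline RDF alongside an external reference:

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <metadata>
    <rdf:RDF>
      <!-- embedded declarative knowledge would appear inline here -->
      <rdf:Description rdf:about="">
        <!-- external metadata referenced via an RDF pointer;
             this resource URI is a hypothetical placeholder -->
        <rdfs:seeAlso rdf:resource="http://example.org/ontologies/time.owl"/>
      </rdf:Description>
    </rdf:RDF>
  </metadata>
  <!-- ... clock face and hands drawn here ... -->
</svg>
```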

SVG based clock

An SVG based clock is used to illustrate some of the ideas of a second order metaprogramming system. The code can be seen by visiting the link openmeta.com/svgclockont.svg. The SVG file is much too long to include directly in this paper.


The paper is about aspects of second order metaprogramming. Rather than involving a complex system such as a robot vehicle that participates in the Third DARPA Grand Challenge, we look at a (special) wall clock and see how aspects of second order metaprogramming affect it. (Figures 3 and 4 illustrate two existing robots which are programmable and have articulated limbs and fingers, and a vision system located in the 'head' area.)

XML namespaces

We see, looking at the "code" of this SVG clock, that it is composed of a number of XML namespaces plus scripting. Looking at the "spec" for SVG (of which this author was one of the authors) we see that SVG itself is an XML namespace and that many other XML namespaces may interact with SVG. One of the elements of SVG is the metadata element, which may be used to embed and/or reference metadata of many diverse kinds (including W3C's own "RDFs"), topic maps, and others [Dodds 2001].

Multiple ontologies

In this particular clock there are a handful of ontologies in the metadata element content. The value of metadata / ontologies to "the clock" is explained in this paper as a means of depicting how (ethereal) knowledge can functionally interact with concrete machinery, to implement aspects of a second order metaprogramming system. The SVG clock "runs" and "shows the time" just like an ordinary physically real wall clock. But there are some important differences between the latter clock and the SVG clock, for the SVG clock has "a bit of a clue", and the clue is what this paper is about.

Reticular routines

If the awareness graph had a pointer to this detection of normal behaviour (DNB) then, for the time the awareness graph did point at the DNB, the system would be aware that 'everything was fine'. By having a (simple) monitor process monitor the clock's DNB, a kind of primitive reticular activation system (RAS) could be effected. Since these 'reticular-routines' would not ordinarily be pointed at by the awareness-graph, the reticular monitor activity would not be part of the system's awareness. In a human these might be called unconscious monitors. This is exactly the function of the reticular system.


If the activity or process that a reticular monitor is monitoring is observed by the monitor to have gone outside what is typical or normal, then the reticular monitor can send a signal to the awareness-graph system that that particular reticular-monitor needs attention. It literally requests attention for itself via the attention-system. The attention-system is the vehicle by which the awareness-graph is pointed at new items and 'old items' are disengaged or 'depointed'. [This is similar to what happens to a routinename when it is popped off the top of the call stack in a regular programming environment. It disappears from the context, which is maintained via the content of the call stack. In both cases the item becomes de-referenced and 'falls off the edge of the world'.]

Reticular monitor

So typically, the smooth operation of the clock is detected by the reticular monitor as typical action, so no exception is raised, no commotion occurs at the attention-system, and hence the smooth operation of the clock is not in the awareness of the system. This is typical of the human nervous system too. If your wrist-watch band does not change the pattern of its pressure on your wrist, after a while you no longer feel it on your arm / wrist. The reticular-monitor system reduces traffic to the awareness to exceptions only.
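The reticular-monitor behaviour described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and the "normal band" representation are invented here, not taken from the OCS code): the monitor stays silent while the watched value is typical and signals the attention-system only on exceptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a reticular monitor: it watches one value and
// notifies the attention-system only when the value leaves its normal band.
class ReticularMonitor {
    final String watched;              // name of the monitored activity
    final double lo, hi;               // band of "typical or normal" values
    final List<String> attentionQueue; // stand-in for the attention-system

    ReticularMonitor(String watched, double lo, double hi, List<String> q) {
        this.watched = watched; this.lo = lo; this.hi = hi; this.attentionQueue = q;
    }

    // Returns true if an exception was raised to the attention-system.
    boolean observe(double value) {
        if (value >= lo && value <= hi) {
            return false;              // normal: no commotion, stays outside awareness
        }
        attentionQueue.add(watched);   // literally requests attention for itself
        return true;
    }

    public static void main(String[] args) {
        List<String> attention = new ArrayList<>();
        ReticularMonitor m = new ReticularMonitor("tick-interval", 0.9, 1.1, attention);
        m.observe(1.0);   // typical tick: silent
        m.observe(5.0);   // clock stalled: exception raised
        System.out.println(attention);  // [tick-interval]
    }
}
```

Only the exceptional observation reaches the attention queue, which is the traffic-reduction property the text attributes to the reticular system.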

The OntoClock System

The OntoClock system (OCS) has a goal graph (GG) which is used by the second-order system scheduler to spawn programs to address goals and subgoals on it. This graph is similar in ways to the THTREE tree of Micro-Planner (uP) [Sussman 1970]. The graph is not necessarily maintained as a hierarchical tree but generally has no cycles and may be directed in part (selectively a DAG). The OCS also shares with uP/THTREE the ability to associate a description of the goal or task accomplished by an action, so that in theorem-prover-style activation OCS can place a goal on the posted agenda (implemented via the GG) which basically says "somebody do this'__' (priority P)". The "this" description is associated with the routine name used to achieve it, and the scheduler can point into the agenda and run the routine "routinename.this'_'" as a priority-P task. [Minsky 2006] [Minsky 1985] [Dennett 1987] This is like the pattern-activated programs in some LISP based systems. In some respects it amounts to nothing more than maintaining a string aliasname for the print name of a routine.

Example: "sqrt( );" function has a string aliasname of "compute the square root of the arguments".

Function aliasname

It is the burden of the matcher to know how the function is described and hence what pattern to look for. The solution to this burden is to have something more sophisticated than just simple text as the function's aliasname, because the text itself may have little or no "meaning" to the computer in and of itself. [Ingram 2005] This is the danger of implementing matching using arbitrary strings.

Symbolic routine activation

The way to improve this situation is to have symbols instead of just plain text. Symbols are referents, which can be text or other data-types, which are associated with an object, and which are pointed at by one or more instances of "clarification". This latter thing may be part of a topic map, or an ontology, or a mixture of both. In the simplest form a symbol is some text which has a topic map pointing at it which also orchestrates the reference to some items in one or more ontologies. In this way the text is "given meaning".

What ongoing things are happening? Ask the question (initiates attention / focus). This is one of the top procedural knowledge processes that iteratively runs in the OCS. [Schank 1977]

'Awareness' in this system is in some ways similar to the processing which N. Amosov talks about in his book "Modeling of Thinking and The Mind" [Amosov 1967], more specifically in Section 4, where he describes his model of attention-driven lower-level consciousness. There is no consciousness of any kind in the system of this paper. There is a model of awareness which is non-experiential; there are no qualia. There is no process in place which is the subjective consumer of the data traffic in the system. There is no homunculus present. The model of awareness in this paper is a cross between Amosov's lower-level consciousness model [Amosov 1973] [Amosov 1979], Situation Awareness systems [Matheus 2003] (such as used by the military and autonomous roboticists [Dodds 1988] [Dodds 1981] [Culbertson 1963]), and Roger Schank's "Reminding" system [Schank 1982]. Eidochronic Behaviour Grammar, written about by Colby, was a hierarchically defined graph structure which described successive "moves" or macro actions in a social setting. [Colby 1973] It can be seen that a hierarchical structure does not encompass the full range of real world behaviour and is a force fit. (Figure 2 illustrates an example of a situation awareness system, called SAW. [Matheus 2003] A larger version of this diagram may be found at open-meta.com.)

Figure 2: Example Core Situation Awareness Ontology
Figure 3: Qrio robot (Sony), an actual bi-pedal autonomous robot
Figure 4: Asimo robot (Honda), an actual bi-pedal autonomous robot
Figure 5: NASA's Robonaut

In its simple form awareness is like the contents of a call stack. The stack defines a context whereby the content of the stack is referenced. Records on the call stack are 'in context' (by virtue of being on the call stack at the time 'in context' is declared). In effect, whatever material on the call stack is 'in context' is what the system is 'aware of', that being the point of creating the context. Another way of looking at awareness is via the spatial metaphor of 'the container'. A container called 'awareness' metaphorically contains 'things', and when the awareness container contains 'things' that containment is called 'awareness' or 'being aware of'. There are common expressions which use this metaphor: "He had many thoughts swirling about in his head", "She had many chores to do in mind", "He is empty headed", "He had a thick steak on his mind".

Instead of having some named-variable 'space' defined where data could literally be copied to so as to 'implement' the containment metaphor, it is less costly to refer to or point at data that would otherwise have to be copied. A graph-structure of pointers is the implementation of the awareness reference. This may be seen as somewhat like "pass by reference" in programming function arguments. It is also suggestive of a reference in a topic map referring to an external object.

A model of 'attention' similar to Amosov's is used to direct the pointing activity. The number of different things that can be pointed at by the awareness-graph is small, kept to ten or fewer. One or more of those ten things is soon dereferenced as the attention-mechanism points to a new thing which has greater value to the system. Value (of the thing pointed at) to the system is the mechanism by which the attention-model chooses selections. Once something is pointed at it is automatically de-referenced after a period of time, in order that the ensemble of referenced things doesn't grow to an unmanageable count of things currently being processed as 'objects of / in attention'.
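The capped, value-driven awareness-graph described above can be sketched as follows. This is a hypothetical illustration (class names, the dwell constant, and the eviction policy details are assumptions for the sketch, not the OCS code): at most ten referents are pointed at, a newcomer displaces the least valuable referent only if it is worth more, and every referent is automatically dereferenced after a dwell time.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a bounded awareness-graph with value-based
// selection and automatic timed dereference.
class AwarenessGraph {
    static final int CAPACITY = 10;  // "ten or fewer" referents
    static final long DWELL = 60;    // time units before auto-dereference

    static class Referent {
        final String name; final double value; final long since;
        Referent(String n, double v, long t) { name = n; value = v; since = t; }
    }

    private final List<Referent> pointed = new ArrayList<>();

    // The attention-mechanism points at a new thing; when full, the least
    // valuable current referent is 'depointed' if the newcomer is worth more.
    boolean attend(String name, double value, long now) {
        pointed.removeIf(r -> now - r.since > DWELL);  // timed dereference
        if (pointed.size() >= CAPACITY) {
            Referent least = pointed.get(0);
            for (Referent r : pointed) if (r.value < least.value) least = r;
            if (value <= least.value) return false;    // not valuable enough
            pointed.remove(least);
        }
        pointed.add(new Referent(name, value, now));
        return true;
    }

    boolean isAwareOf(String name) {
        for (Referent r : pointed) if (r.name.equals(name)) return true;
        return false;
    }
}
```

Pointing rather than copying keeps the mechanism cheap: only references are stored, never the data they denote.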

There is a list of goals, in a "posted agenda" fashion, which the system proceeds to attain. The relevancy of an action, or of a datum associated with an action, to one or more of the goals on the agenda is the measure by which the attention-mechanism selects or chooses what to 'point at' / reference / 'attend to'. Recall that a forward chaining process is used to identify goals which a given action partially completes. A tag indicating this 'connection' or relevancy is made and associated with the action, and the attention-mechanism can use this tag as a context to once again point at the action as currently being relevant to the goal-agenda after it has been de-referenced and effectively forgotten. The tag allows the system to capture the effort (and time) it took to pair up the action with one or more items on the goal-agenda. The tag thus saves effort and time, so that the system does not have to re-perform the inferencing after the action item has been de-referenced in the awareness-graph.
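The effort-saving role of the relevancy tag is essentially memoization. The sketch below is hypothetical (the class, the stand-in forwardChain method, and the example action/goal names are invented): the expensive forward-chaining pass runs once per action, and the cached tag answers all later lookups after the action has fallen out of awareness and been re-attended.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: once forward chaining has paired an action with the
// goals it partially completes, the pairing is kept as a tag so the
// inference need not be re-run when the action is re-attended later.
class RelevanceTags {
    private final Map<String, Set<String>> tagCache = new HashMap<>();
    private int inferenceRuns = 0;  // count of expensive chaining passes

    Set<String> goalsServedBy(String action) {
        return tagCache.computeIfAbsent(action, a -> {
            inferenceRuns++;        // the costly pairing effort, done once
            return forwardChain(a);
        });
    }

    // Placeholder for the real forward-chaining process described in the text.
    private Set<String> forwardChain(String action) {
        return action.equals("moveSecondHand")
            ? Set.of("display-current-time") : Set.of();
    }

    int inferenceRuns() { return inferenceRuns; }
}
```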

Deliberation about the state of something, such as the pressure of your wrist-watch band against your wrist, allows one's awareness to 'get' a pressure-sensation once again from the band by means of changing the 'expectancy' of the signal, in this case the level of detection pressure from the band. The activity of resolution processing in the situation-awareness system allows the re-introduction of clock-tick-making and other 'clock-actions' into the awareness-graph. (It then fades out again if the new inspection of the clock operations returns that these are 'normal or expected operations'.)

Change of location of the clock

If the location (GPS, latitude/longitude, city name, etc.) of the clock is changed then the reticular monitor for clock location will initiate attention-focus on the location, so that an interpreter of the meaning of location (and, hence, of any change in location) can determine if anything of consequence has occurred by virtue of the change to a new location. Ontological knowledge in the embedded ontologies tells the clock that geospatial location defines the time-zone the clock is in. Some position changes do not change the time-zone; flying the clock on a jet airliner will likely change the timezone, possibly more than once. If there were a process for formulating questions, such as 'What happens if my location changes from Montreal to Vancouver?', then the clock could apprehend, however dimly, that flying on a jet plane from Montreal to Vancouver would change the time zone by a few hours. But there is no such code in the clock, nor intentionality, so the clock cannot speculate on the effects of a change of (its) location on its own initiative. If a question were put to the clock by a human using, say, Jena or SWRL, then the ontological system knowledge would provide an answer (about timezones and such), but the clock's awareness-system would not be hooked up to this, and hence the human would see the ontology based reasoning about timezones but the clock itself would not become aware of it. When the clock is actually moved it becomes aware of the new location and new timezone, if the location change is large enough. But there is no process in place such that the clock deliberates in any way about the (sudden) new location / timezone. It just is.
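The location-interpreter step can be illustrated with a toy stand-in for the ontological knowledge (the class name and the lookup table are hypothetical; a real system would derive the timezone from the geospatial ontologies, and the offsets below are standard-time UTC offsets):

```java
import java.util.Map;

// Hypothetical sketch: interpreting a location change. The ontology's role
// (geospatial location determines time-zone) is stood in for by a tiny
// city-to-UTC-offset table.
class LocationInterpreter {
    static final Map<String, Integer> UTC_OFFSET = Map.of(
        "Montreal", -5,    // Eastern
        "Toronto",  -5,    // Eastern: moving here changes nothing of consequence
        "Vancouver", -8);  // Pacific

    // Returns the change in clock hours implied by the move, 0 if none.
    static int consequenceOfMove(String from, String to) {
        return UTC_OFFSET.get(to) - UTC_OFFSET.get(from);
    }

    public static void main(String[] args) {
        System.out.println(consequenceOfMove("Montreal", "Vancouver")); // -3
        System.out.println(consequenceOfMove("Montreal", "Toronto"));   // 0
    }
}
```

A nonzero result is what the reticular monitor would escalate as "something of consequence"; a zero result is a position change with no timezone consequence.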

Embedded in time-stream

This is likely the way it is in a house spider and probably in a cat or dog that has been kept in a transport cage for the entire flight from Montreal to Vancouver.

Humans orient themselves cognitively with time. Another way of saying this is that the experience of consciousness is embedded in the sensation of time ("passing"). Since the activation records are time-stamped, the data is present in the clock to discover that there is a correlation between the achievement or accomplishment of a goal (via multiple micro-actions) and a range of timestamp values (a computer system's version of "the passage of time").

Some simple inferencing by the clock would allow it to discover that all of ITS actions and behaviour 'take time'. This is the rudiment that places the clock into an embedded time-stream of its own. [Dodds1 2006]

The clock, by virtue of monitoring the graphics values of the second hand, the minute hand and hour hand, is able to detect that it (the clock) causes a change in the position of a clock hand by executing one or more actions (as depicted in the activation record). The clock is also able to detect that the change in position of the hands is highly correlated with the values of the "time variable", being able to "connect" a one-second numerical change in the "time variable" with a particular change in the position of the second hand. The clock can detect an iron-clad correlation between one-second time-variable changes and particular changes of the sweep second hand. The correlation is 100%, but this is not the same as "causality". So the clock can infer the sweep second hand's one-second tick or jump, but cannot claim the number value of the time variable "caused" it, just that they always happen as a particular temporal pattern or relationship.
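The correlation-detection step can be sketched concretely. This is a hypothetical illustration (the class name and the 6-degrees-per-second convention, which follows from 360 degrees / 60 seconds on an ordinary clock face, are assumptions for the sketch): the clock checks that every one-unit change of the time variable co-occurs with a fixed jump of the second hand, a temporal pattern, with no causal claim made.

```java
// Hypothetical sketch: detecting the iron-clad correlation between
// one-second changes of the time variable and 6-degree jumps of the sweep
// second hand. A single counterexample breaks the correlation.
class TickCorrelator {
    static boolean perfectlyCorrelated(int[] timeVar, double[] handAngle) {
        for (int i = 1; i < timeVar.length; i++) {
            boolean tick = (timeVar[i] - timeVar[i - 1]) == 1;
            // angle difference, wrapped into [0, 360)
            double jump = ((handAngle[i] - handAngle[i - 1]) + 360.0) % 360.0;
            boolean sixDegrees = Math.abs(jump - 6.0) < 1e-9;
            if (tick != sixDegrees) return false;
        }
        return true;  // pattern holds in every observed pair
    }

    public static void main(String[] args) {
        int[] t = {0, 1, 2, 3};
        double[] a = {348, 354, 0, 6};  // wraps past 360 correctly
        System.out.println(perfectlyCorrelated(t, a)); // true
    }
}
```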

The clock has knowledge of time and space by virtue of the ontologies. The clock knows that it has knowledge of time and space by virtue of the activation record regarding processing that knowledge, such as the action of performing inferencing using those ontologies. It would be enlightening for the clock to record the outcome of an inference in the activation record which records the activity of performing the inference. This is, in part, how a so-called Case Based System works, but I hasten to add that no Case Based System I have heard of attempts to include awareness in its processing; they all are strictly mechanical logic, unaware.

So we have a clock which shows the time to human eyes, is sensitive to location / timezones, knows what actions it is performing and what goals it is addressing, and knows what knowledge is in its knowledge base, instead of merely having that knowledge in its knowledge base without knowing that it is there, as in most conventional ontological systems. This knowing or awareness of its own knowledge is an important step towards having an ontological reflection capability. Unless a permanent record is made of the inferences or 'findings' of the awareness system, those things which the system becomes aware of 'disappear'. The result of this can be seen in the video "The Man With the Seven Second Memory". The OCS can initiate pre-defined SWRL questions and has a limited ability to put identifiers in SWRL rules to formulate queries about knowledge content. An example of such a query on the ontology would be inferencing the ramifications of geographical location on the timezone value. When OCS initiates a SWRL query upon the embedded ontologies, an ARL record is pushed onto the ARL list stating the name / characterization of the SWRL query. By monitoring such records the awareness-mechanism can detect that an action on the posted agenda was executed as an (SWRL based) ontology query.

Call stack and symbol table

Call stack: contains string name token, time-date stamp, from where.

A tracker 'notes' start timedate and completion; it is part of the reticular monitor, and in effect performs a simple quality assurance of action completion.

Call stack ptr 1, ptr 2, ptr 3: point into layers of the call stack to provide special subcontexts as needed. Example: grouping a collection of micro-actions into a single macro-action based on a point-of-view context.

Symbol table: a symbol table in assembly language or in a compiler is a table which associates the token (string name) with type information and other data.

In this system (OCS) the function of the symbol table is to map the tokens used to make larger descriptions / phrases / sentences which convey what the token is about: a pointer into a synonym token list, a pointer into a list of functions. It implements part of Micro-Planner's THTREE, which allows symbolic reference to a goal for function activation rather than using the function name.

When an action is executed in the system an activation record is placed on the call stack and on the activation record list (ARL). The ARL is a list instead of a stack because it is meant to "contain a persistent memory" of what actions were taken, whereas the purpose of a traditional call stack is to maintain a context of active routines only. Once a routine "returns" it is popped off the call stack. It is then 'out of context' and lost to the history of the system. In the system discussed in this paper (OCS) the proper overall functioning of the system requires that a history of activation be maintained beyond the time that the processes were "active" and "returned".
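The stack/ARL pairing can be sketched directly (the class and record names here are invented for illustration): each activation is recorded in both places, and returning pops only the stack, leaving the persistent history intact.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: each activation is recorded twice -- on the call
// stack (context of currently active routines) and on the ARL (persistent
// history). Returning pops the stack; the ARL entry survives.
class ActivationRecords {
    record Activation(String routine, long timedate, String calledFrom) {}

    private final Deque<Activation> callStack = new ArrayDeque<>();
    private final List<Activation> arl = new ArrayList<>();  // never popped

    void call(String routine, long now, String from) {
        Activation a = new Activation(routine, now, from);
        callStack.push(a);   // in context
        arl.add(a);          // in history, permanently
    }

    void ret() {
        callStack.pop();     // out of context: 'fallen off the edge of the world'
    }

    int activeCount()  { return callStack.size(); }
    int historyCount() { return arl.size(); }
}
```

Because each Activation carries a timedate stamp, the ARL is also the raw material for the 'embedded in time' inference discussed later in the paper.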

An example of a system which maintained such a "post return" history was Terry Winograd's SHRDLU. SHRDLU maintained its "action" history so that it could answer questions typed by humans about what it (SHRDLU) "did". This same history was also used to deal a little with the (infamous) "frame problem". SHRDLU's universe was a simple virtual world of coloured toy blocks of rectangular solid and conical solid morphology.

It was a "closed world". Meaning that there was never a change in SHRDLU's Block World unless SHRDLU did the change itself. That is about agency! It means that physical processes, such as oxidation, are effectively "agents". In SHRDLU's world nothing ever rusts, or molds, or "fades" or grows.

Each block had dimension and point location values in a world objects database.

The mention of SHRDLU is relevant in that OCS has a "post return" history as well, the ARL, which is used both to track what actions have been executed and to infer several things of interest, such as that OCS's situatedness includes being 'embedded in time' (inferred through correlation of timedatestamp values of actions pushed onto the ARL which are synonymous with the movement of the sweep second hand of the clock). All actions which the OCS can be "aware of" always occur during a range of time. (Humans say "a passage of time", and that "things take time to do or occur".) OCS' world is also closed, so only actions which OCS performs occur. Sensed information, such as the value of the computer's system time variable, and the geospatial location of OCS via a location variable, is treated as though sensing were OCS's own action. A laptop running OCS might be moved by a human, but OCS does not move itself in the same sense, even though it "moves" the hands of the clock displayed.

Planner programs. Actions are based on an implied / inferred situation / context. For example, people do not crash into each other on the sidewalk, or, generally, when piloting cars and boats. Planning programs 'build' or 'assemble' an activity using items from a pre-defined set of micro-actions or primitives. The context or situation inferred from sensor data is used to define the goal toward whose achievement the planner strives.

A Procedural Knowledge based scheduling / polling system runs the attention-system such that the awareness-graph entries for (a) 'what I am doing', (b) 'what was I doing', (c) 'what I know ("what I am knowing")', and (d) 'what knowledge I have' are kept up-to-date. Items a and b are about goals and plans, while c and d are about actions and states.

Following is some of the CYC code which is embedded in the OCS SVG program, showing how knowledge is packaged as RDFS ontological elements.

<rdf:Property rdf:ID="above-Directly">
<rdfs:label xml:lang="en">above - directly</rdfs:label>
<rdfs:comment>(#$above-Directly ABOVE BELOW)
means either that (1) the volumetric center of ABOVE is
directly above some point of BELOW, if ABOVE is smaller
than BELOW; or that (2) some point of ABOVE is directly
above the volumetric center of BELOW, if ABOVE is
larger than, or equal in size to, BELOW.</rdfs:comment>
<rdfs:subPropertyOf rdf:resource="#above-Generally"/>
<rdfs:domain rdf:resource="#SpatialThing-Localized"/>
<rdfs:range rdf:resource="#SpatialThing-Localized"/>
<rdf:Property rdf:ID="above-Generally">
<rdfs:label xml:lang="en">above</rdfs:label>
<rdfs:comment>(#$above-Generally OBJ1 OBJ2) means
that the #$SpatialThing-Localized OBJ1 is more or less
above the #$SpatialThing-Localized OBJ2. To be more
precise: if OBJ1 is within a cone-shaped set of vectors
within about 45 degrees of #$Up-Directly pointing up from
OBJ2 (see #$Up-Generally), then (#$above-Generally
OBJ1 OBJ2) holds. This is a more general predicate than
#$above-Directly (q.v.), but it is a more specialized
predicate than
#$above-Higher (q.v.). It probably most closely conforms to
the English word above. </rdfs:comment>
<rdfs:subPropertyOf rdf:resource="#above-Higher"/>
<rdfs:domain rdf:resource="#SpatialThing-Localized"/>
<rdfs:range rdf:resource="#SpatialThing-Localized"/>
<rdf:Property rdf:ID="above-Higher">
<rdfs:label xml:lang="en">above - higher</rdfs:label>
<rdfs:comment>(#$above-Higher OBJ-A OBJ-B) means
that OBJ-A is `higher up&apos;&apos; than OBJ-B. Since
most contexts are terrestrial (see
#$TerrestrialFrameOfReferenceMt) ``higher
up&apos;&apos; typically means that the
#$altitudeAboveGround of OBJ-A is greater than that of
<rdfs:subPropertyOf rdf:resource="#spatiallyDisjoint"/>
<rdfs:domain rdf:resource="#SpatialThing-Localized"/>
<rdfs:range rdf:resource="#SpatialThing-Localized"/>
<rdf:Property rdf:ID="above-Overhead">
<rdfs:label xml:lang="en">above - overhead</rdfs:label>
<rdfs:comment>(#$above-Overhead ABOVE BELOW)
means that ABOVE is directly above BELOW (see the
predicate #$above-Directly), all points of ABOVE are
higher than all points of BELOW, and ABOVE and
BELOW do _not_ touch.</rdfs:comment>
<rdfs:subPropertyOf rdf:resource="#above-Directly"/>
<rdfs:domain rdf:resource="#SpatialThing-Localized"/>
<rdfs:range rdf:resource="#SpatialThing-Localized"/>
<rdf:Property rdf:ID="above-Touching">
<rdfs:label xml:lang="en">above - touching</rdfs:label>
<rdfs:comment>(#$above-Touching ABOVE BELOW)
means that ABOVE is located over BELOW and they are
touching. More precisely, it implies both (#$above-Directly
ABOVE BELOW) and that ABOVE #$touches BELOW.
Examples: a person sitting on a chair; coffee in a cup; a boat
on water; a hat on a head. (Note that not every point of
ABOVE must be higher than every point of
<rdfs:subPropertyOf rdf:resource="#above-Directly"/>
<rdfs:subPropertyOf rdf:resource="#touches"/>
<rdfs:domain rdf:resource="#PartiallyTangible"/>
<rdfs:range rdf:resource="#PartiallyTangible"/>

Next we see a brief snippet from a NASA JPL ontology set of 13 ontologies (SWEET). It is in OWL, the Web Ontology Language, a more expressive successor to RDF. Here it says, 'a bottom thing occurs in the most downward direction'. [Dodds3 2006] The bottom or base of a tree is the lowest part of it, as is the bottom of a well, or the bottom of a cloud for that matter. This reconciles sensed location data.

<owl:Class rdf:ID="Downward">
<owl:equivalentClass rdf:resource="#Down"/>
<owl:Class rdf:ID="Base">
<owl:equivalentClass rdf:resource="#Bottom"/>
<owl:Class rdf:ID="Bottom">
<owl:onProperty rdf:resource="#hasDirection"/>
<owl:allValuesFrom rdf:resource="#Down"/>

Next we see a brief example of some Java Reflection code. It is shown as an example of the programmatic reflection services available in Java. Examination of the code will show the kinds of things Java Reflection services can provide to a PROGRAM, not just to human eyes. Reflection in ontological systems is a similar idea, providing a PROGRAM doing the reflection with information about the content and structure of the ontologies so examined. An ontology, in and of itself, is a data structure and "doesn't do anything"; this is what procedures are for. By providing ontological reflection a PROGRAM can "see" the knowledge in the ontology. This is what is meant by "knowing" the knowledge in the ontology instead of merely having (storing) it; this is second-order metaprogramming.

This Java example shows how to get the fully-qualified and non-fully-qualified name of a
reflected object. See also e60 Getting the Name of a Class Object.

    Class cls = java.lang.String.class;
    Method method = cls.getMethods()[0];
    Field field = cls.getFields()[0];
    Constructor constructor = cls.getConstructors()[0];
    String name;

    // Fully-qualified names
    name = cls.getName();     // java.lang.String
    name = cls.getName()+"."+field.getName();     // java.lang.String.CASE_INSENSITIVE_ORDER
    name = constructor.getName();      // java.lang.String
    name = cls.getName()+"."+method.getName();    // java.lang.String.hashCode

    // Unqualified names
    name = cls.getName().substring(cls.getPackage().getName().length()+1);  // String
    name = field.getName();            // CASE_INSENSITIVE_ORDER
    name = constructor.getName().substring(cls.getPackage().getName().length()+1); // String
    name = method.getName();           // hashCode

e60. Getting the Name of a Class Object

    // Get the fully-qualified name of a class
    Class cls = java.lang.String.class;
    String name = cls.getName();        // java.lang.String

    // Get the fully-qualified name of an inner class
    cls = java.util.Map.Entry.class;
    name = cls.getName();               // java.util.Map$Entry

    // Get the unqualified name of a class
    cls = java.util.Map.Entry.class;
    name = cls.getName();
    if (name.lastIndexOf('.') > 0) {
        name = name.substring(name.lastIndexOf('.')+1);  // Map$Entry
    }

    // The $ can be converted to a .
    name = name.replace('$', '.');      // Map.Entry

    // Get the name of a primitive type
    name = int.class.getName();         // int

    // Get the name of an array
    name = boolean[].class.getName();   // [Z
    name = byte[].class.getName();      // [B
    name = char[].class.getName();      // [C
    name = short[].class.getName();     // [S
    name = int[].class.getName();       // [I
    name = long[].class.getName();      // [J
    name = float[].class.getName();     // [F
    name = double[].class.getName();    // [D
    name = String[].class.getName();    // [Ljava.lang.String;
    name = int[][].class.getName();     // [[I

    // Get the name of void
    cls = Void.TYPE;
    name = cls.getName();               // void

e114. Getting the Field Objects of a Class Object
There are three ways of obtaining a Field object from a Class object.

    Class cls = java.awt.Point.class;

    // By obtaining a list of all declared fields.
    Field[] fields = cls.getDeclaredFields();

    // By obtaining a list of all public fields, both declared and inherited.
    fields = cls.getFields();
    for (int i=0; i<fields.length; i++) {
        Class type = fields[i].getType();
    }

    // By obtaining a particular Field object.
    // This example retrieves java.awt.Point.x.
    try {
        Field field = cls.getField("x");
    } catch (NoSuchFieldException e) {
    }

e115. Getting and Setting the Value of a Field
This example assumes that the field has the type int.

    try {
        // Get value
        int value = field.getInt(object);

        // Set value
        field.setInt(object, 123);

        // Get value of a static field
        value = field.getInt(null);

        // Set value of a static field
        field.setInt(null, 123);
    } catch (IllegalAccessException e) {
    }

e116. Getting a Constructor of a Class Object
There are two ways of obtaining a Constructor object from a Class object.

    Class cls = java.awt.Point.class;

    // By obtaining a list of all Constructor objects.
    Constructor[] cons = cls.getDeclaredConstructors();
    for (int i=0; i<cons.length; i++) {
        Class[] paramTypes = cons[i].getParameterTypes();
    }

    // By obtaining a particular Constructor object.
    // This example retrieves java.awt.Point(int, int).
    try {
        Constructor con = java.awt.Point.class.getConstructor(new Class[]{int.class, int.class});
    } catch (NoSuchMethodException e) {
    }

e117. Creating an Object Using a Constructor Object
This example creates a new Point object from the constructor Point(int,int).

    try {
        // con is the Constructor obtained in e116
        java.awt.Point obj = (java.awt.Point)con.newInstance(
            new Object[]{new Integer(123), new Integer(123)});
    } catch (InstantiationException e) {
    } catch (IllegalAccessException e) {
    } catch (InvocationTargetException e) {
    }

e118. Getting the Methods of a Class Object
There are three ways of obtaining a Method object from a Class object.

    Class cls = java.lang.String.class;

    // By obtaining a list of all declared methods.
    Method[] methods = cls.getDeclaredMethods();

    // By obtaining a list of all public methods, both declared and inherited.
    methods = cls.getMethods();
    for (int i=0; i<methods.length; i++) {
        Class returnType = methods[i].getReturnType();
        Class[] paramTypes = methods[i].getParameterTypes();
    }

    // By obtaining a particular Method object.
    // This example retrieves String.substring(int).
    try {
        Method method = cls.getMethod("substring", new Class[] {int.class});
    } catch (NoSuchMethodException e) {
    }

e119. Invoking a Method Using a Method Object

    try {
        Object result = method.invoke(object, new Object[] {param1, param2, ..., paramN});
    } catch (IllegalAccessException e) {
    } catch (InvocationTargetException e) {
    }

Java Reflection tutorial info
The program first gets the Class description for method1, and then calls getDeclaredMethods to retrieve a list of Method objects, one for each method defined in the class. These include public, protected, package, and private methods. If you use getMethods in the program instead of getDeclaredMethods, you can also obtain information for inherited methods.

Once a list of the Method objects has been obtained, it's simply a matter of displaying the
information on parameter types, exception types, and the return type for each method. Each of
these types, whether they are fundamental or class types, is in turn represented by a Class
descriptor. The output of the program is:

  name = f1
   decl class = class method1
   param #0 class java.lang.Object
   param #1 int
   exc #0 class java.lang.NullPointerException
   return type = int
  name = main
   decl class = class method1
   param #0 class [Ljava.lang.String;
   return type = void
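The tutorial's source for this listing program is not reproduced above. A minimal sketch that produces output of this shape is given below; the class name method1 and its member f1 are inferred from the sample output, so this is a reconstruction, not the original code:

```java
import java.lang.reflect.Method;

public class ReflectList {
    // Stand-in for the tutorial's "method1" class; its single method
    // is inferred from the sample output shown above.
    static class method1 {
        public int f1(Object p, int i) throws NullPointerException { return i; }
    }

    public static void main(String[] args) {
        Class<?> cls = method1.class;
        // getDeclaredMethods returns the public, protected, package,
        // and private methods declared in the class itself.
        for (Method m : cls.getDeclaredMethods()) {
            System.out.println("  name = " + m.getName());
            System.out.println("   decl class = " + m.getDeclaringClass());
            Class<?>[] params = m.getParameterTypes();
            for (int i = 0; i < params.length; i++) {
                System.out.println("   param #" + i + " " + params[i]);
            }
            for (Class<?> exc : m.getExceptionTypes()) {
                System.out.println("   exc " + exc);
            }
            System.out.println("   return type = " + m.getReturnType());
        }
    }
}
```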


Reflection is a tool or capability allied with the capability to make, use, and recognize plans. There is a subfield of artificial intelligence called Planning, which we discuss later in the paper. Schank has shown the value of being able to make a plan; he has published much research in this area, including the book Scripts Plans Goals and Understanding. Allen has shown how plan recognition and understanding are important to people in their understanding of implied or tacit information and of events occurring as part of discourse spoken (or written) by others. Winograd, in his famous system SHRDLU, showed the value of a computer program being able to plan actions, execute them, and even answer typed questions about them. In none of these planning programs was any consciousness present in the computer. Reflection and planning are both components of "agency."
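The "knowing that" described earlier, achieved by monitoring the activation record, can be sketched with Java's own runtime introspection. This is an illustrative fragment under assumed names (scanForSpatialConcerns is hypothetical), not OntoClock's actual code:

```java
public class SelfAware {
    // Introspect the current activation record: element 0 of the stack
    // trace is getStackTrace itself, element 1 is currentAction, so
    // element 2 is the procedure that asked "what am I doing?".
    static String currentAction() {
        return Thread.currentThread().getStackTrace()[2].getMethodName();
    }

    // Hypothetical OntoClock-style procedure, named for illustration.
    static String scanForSpatialConcerns() {
        // The procedure not only performs its task; it can also report
        // that it is performing it, a crude second-order "knowing".
        return "now performing: " + currentAction();
    }

    public static void main(String[] args) {
        System.out.println(scanForSpatialConcerns());
    }
}
```

A procedure that can name the action it is currently servicing is the first step from merely having procedural knowledge toward knowing that it is exercising it.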


The author would like to thank John Searle for an interesting and validating personal discussion in Berkeley about the importance and use of situated context in actions taken by intelligent systems.


[Amosov 1967] N. Amosov. Modeling of Thinking and the Mind. Macmillan and Co., pp. 192, 1967.

[Amosov 1973] N. M. Amosov, A. M. Kasatkin, L. M. Kasatkina, S. A. Talayev. Automata and Mindful Behaviour. Kiev: Naukova Dumka, 1973.

[Amosov 1979] N. M. Amosov. Algorithms of the Mind. Kiev: Naukova Dumka, 1979.

[Colby 1973] B. N. Colby. A Partial Grammar of Eskimo Folktales. American Anthropologist, New Series, Vol. 75, No. 3 (Jun. 1973), pp. 645-662.

[Culbertson 1963] J. Culbertson. The Minds of Robots. University of Illinois Press, pp. 466, 1963.

[Dennett 1987] D. Dennett. The Intentional Stance. MIT Press, pp. 388, 1987.

[Dodds 1981] David Dodds. Fuzzy Logic Computer Implementation of Metaphor from Ordinary Language. AAAS Annual Meeting (American Association for the Advancement of Science), 1981.

[Dodds 1988] David Dodds. Fuzziness in Knowledge-Based Robotics Systems. Fuzzy Sets and Systems 26, North-Holland, pp. 179-193, 1988.

[Dodds 2001] David Dodds. Professional XML Meta Data. Wrox Press Inc., pp. 600, 2001.

[Dodds1 2006] David Dodds. Ontologies and Conceptual Metaphor in Autonomous Robotics. ICAI06, 2006.

[Dodds2 2006] David Dodds. SVG Using Meta-Data, Ontologies, Rules. SVGOpen 2006.

[Dodds3 2006] David Dodds. GML Processing Augmented with SVG Embedded Ontology and Meta Data. Proc. GeoWeb 2006.

[Ingram 2005] J. Ingram. Theater of the Mind. Harper Collins, pp. 304, 2005.

[Matheus 2003] C. J. Matheus, M. M. Kokar, and K. Baclawski. A Core Ontology for Situation Awareness. Cairns, Queensland, Australia, pp. 545-552, July 2003.

[Minsky 1985] M. Minsky. The Society of Mind. Simon and Schuster, pp. 337, 1985.

[Minsky 2006] M. Minsky. The Emotion Machine. Simon and Schuster, pp. 387, 2006.

[Schank 1977] R. Schank. Scripts Plans Goals and Understanding. Lawrence Erlbaum Associates, pp. 248, 1977.

[Schank 1982] R. Schank. Dynamic Memory. Cambridge University Press, pp. 234, 1982.

[Sussman 1970] G. J. Sussman, T. Winograd. AIM-203 Micro-Planner Reference Manual. MIT, 1970. The Micro-Planner interpreter implements a subset of Carl Hewitt's language PLANNER.

David Dodds [Open-Meta Computing]