Ontology then agentology:

A finer grained framework for enterprise modelling

Data integration of enterprise systems typically involves combining heterogeneous data residing in different sources into a unified, homogeneous whole. This heterogeneity takes many forms and poses significant practical and theoretical challenges, particularly at the semantic level. In this paper, we consider a type of semantic heterogeneity that is common in Model Driven Architecture (MDA) Computation Independent Models (CIM): one that arises from the data’s dependence upon the system it resides in. There seems to be no relevant work on this topic in Conceptual Modelling, so we draw upon research done in philosophy and linguistics on formalizing pure indexicals – ‘I’, ‘here’ and ‘now’ – also known as de se (Latin ‘of oneself’) or the deictic centre. This reveals that the core dependency is essential when the system is agentive, and that the rest of the dependency can be designed away. In the context of MDA, this suggests a natural architectural layering, where a new concern, ‘system dependence’, is introduced and used to divide the CIM model into two parts: a system-independent ontology model and a system-dependent agentology model. We also show how this dependence complicates the integration process – but, interestingly, not reuse in the same context. We explain how this complication usually provides good pragmatic reasons for maximizing the ontology content in an ‘Ontology First’, or ‘Ontology then Agentology’, approach.
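As a minimal, hedged sketch of the ontology/agentology split described above (the record names, fields and `resolve` helper are illustrative assumptions, not the paper's model): an agentology-style record carries indexical values that only make sense relative to the system that recorded them, while the corresponding ontology-style record resolves those indexicals so any system can integrate the fact.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Agentology-style (system-dependent) record: the field values are
# indexical, so they only make sense relative to the recording system's
# "I" and "now". (Names here are illustrative, not the paper's.)
@dataclass
class DeSeStockReceipt:
    item: str
    quantity: int
    days_ago: int          # relative to the recording system's "now"
    received_by_us: bool   # relative to the recording system's "I"

# Ontology-style (system-independent) record: the indexicals are resolved
# to explicit referents, so other systems can integrate the fact without
# knowing which system recorded it.
@dataclass
class DeReStockReceipt:
    item: str
    quantity: int
    received_on: date
    received_by: str

def resolve(rec: DeSeStockReceipt, system_party: str, today: date) -> DeReStockReceipt:
    """Substitute the recording system's identity and clock for the indexicals."""
    return DeReStockReceipt(
        item=rec.item,
        quantity=rec.quantity,
        received_on=today - timedelta(days=rec.days_ago),
        received_by=system_party if rec.received_by_us else "counterparty unknown",
    )

print(resolve(DeSeStockReceipt("widget", 10, 3, True),
              "Acme Warehouse System", date(2024, 1, 15)))
```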

Developing an Ontological Sandbox:

Investigating Multi-Level Modelling’s Possible Metaphysical Structures

One of the central concerns of the multi-level modelling (MLM) community is the hierarchy of classifications that appear in conceptual models: what these are, how they are linked and how they should be organised into levels and modelled. Though there has been significant work done in this area, we believe that it could be enhanced by introducing a systematic way to investigate the ontological nature and requirements that underlie the frameworks and tools proposed by the community to support MLM (such as the Orthogonal Classification Architecture and Melanee). In this paper, we introduce a key component for investigating and understanding these ontological requirements: an ontological sandbox. This is a conceptual framework for investigating and comparing multiple variations of possible ontologies – without having to commit to any of them – isolated from a full commitment to any foundational ontology. We discuss the sandbox framework and walk through an example of how it can be used to investigate a simple ontology. The example, despite its simplicity, illustrates how the constructional approach can help to expose and explain the metaphysical structures used in ontologies, and so reveal the underlying nature of MLM levelling.
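A toy sketch of the sandbox idea, under loose assumptions (the `CandidateOntology` structure and the single-level versus multi-level recipes are illustrative inventions, not the framework itself): each candidate ontology is treated as a construction recipe that can be run over the same sample data side by side, without committing to either.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A toy "sandbox": each candidate ontology is just a named construction
# recipe; the sandbox runs the recipes over the same sample domain
# without committing to any of them.
@dataclass
class CandidateOntology:
    name: str
    construct: Callable[[list[str]], dict[str, Any]]

def as_one_level(cars: list[str]) -> dict[str, Any]:
    # One classification level: 'Car' is a type whose instances are cars.
    return {"Car": set(cars)}

def as_two_levels(cars: list[str]) -> dict[str, Any]:
    # Two levels: 'CarModel' is a higher-order type whose instances are
    # themselves types (one per model), each of which classifies cars.
    models: dict[str, set[str]] = {}
    for car in cars:
        model = car.split("-")[0]
        models.setdefault(model, set()).add(car)
    return {"CarModel": models}

sandbox = [CandidateOntology("single-level", as_one_level),
           CandidateOntology("multi-level", as_two_levels)]

sample = ["Mini-001", "Mini-002", "Beetle-007"]
for candidate in sandbox:
    print(candidate.name, "->", candidate.construct(sample))
```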

Coordinate Systems

Level Ascending Ontological Options

A major challenge faced in the deployment of collaborating unmanned vehicles is enabling the semantic interoperability of sensor data. One aspect of this, where there is significant opportunity for improvement, is characterizing the coordinate systems for sensed position data. We are involved in a proof-of-concept project that addresses this challenge through a foundational conceptual model using a constructional approach based upon the BORO Foundational Ontology. The model reveals the characteristics as sets of options for configuring the coordinate systems. This paper examines how, ontologically, these options involve ascending levels. It identifies two types of levels: the well-known type levels and the less well-known tuple/relation levels.
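The following sketch, with illustrative names and options, shows one way the level ascent might be pictured: a position is a tuple of numbers, a coordinate system is a type characterised by configuration options over such tuples, a kind of coordinate system is a type whose instances are themselves coordinate systems, and a reading is a tuple/relation linking a position to its system.

```python
from dataclasses import dataclass
from typing import Tuple

# Level 0: a sensed position is just a tuple of numbers.
Position = Tuple[float, float, float]

# Level 1 (ascending a type level): a coordinate system characterises a
# set of such tuples by fixing configuration options (datum, axes, units).
@dataclass(frozen=True)
class CoordinateSystem:
    datum: str        # e.g. "WGS84"  (illustrative option)
    axis_order: str   # e.g. "lat-lon-alt"
    length_unit: str  # e.g. "metre"

# Level 2 (ascending again): a kind of coordinate system, i.e. a type
# whose instances are themselves coordinate systems.
GEOGRAPHIC = {CoordinateSystem("WGS84", "lat-lon-alt", "metre")}

# A tuple/relation level: a georeferenced reading relates a position
# tuple to the coordinate system it is expressed in.
@dataclass(frozen=True)
class Reading:
    position: Position
    system: CoordinateSystem

reading = Reading((51.5074, -0.1278, 11.0),
                  CoordinateSystem("WGS84", "lat-lon-alt", "metre"))
print(reading)
```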

Grounding for an Enterprise Computing Nomenclature Ontology

We aim to lay the basis for a unified architecture for enterprise computer nomenclatures by providing the grounding ontology based upon the BORO Foundational Ontology. We start to lower two significant barriers within the computing community to making progress in this area: a lack of broad appreciation of the nature and practice of nomenclature, and a lack of recognition of some specific technical, philosophical issues that nomenclatures raise. We provide an overview of the grounding ontology and how it can be implemented in a system. We focus on the issue that arises when tokens lead to an overlap between the represented domain and its system representation – system-domain-overlap – and how this can be resolved.
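A small, hedged illustration of system-domain overlap (the classes and the `EN-10025` code are hypothetical, not drawn from the paper): the same character string is both a nomenclature item in the modelled domain and a token stored inside the system that represents that domain; keeping the two roles as distinct objects is one way to resolve the overlap.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DomainName:
    """The name as an object in the modelled domain (e.g. a standard's code)."""
    exemplar_text: str

@dataclass(frozen=True)
class SystemToken:
    """A token inside the system that represents a DomainName."""
    stored_text: str
    represents: DomainName

steel_standard_name = DomainName("EN-10025")
token = SystemToken("EN-10025", steel_standard_name)

# Overlap: the same character string plays both roles; the two objects
# above keep the domain name and its stored representation distinct.
assert token.stored_text == token.represents.exemplar_text
```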

Grounding for an Enterprise Computing Nomenclature Ontology - Long Version

We aim to lay the basis for a unified architecture for nomenclatures in enterprise computer systems by providing the grounding for an ontology of enterprise computing nomenclatures within a foundational ontology. We look at the way in which nomenclatures are tools both shaped by and shaping the prevailing technology. In the era of printing technology, nomenclatures in lists and tables were ‘paper tools’ deployed alongside scientific taxonomic and bureaucratic classifications. These tools were subsequently embedded in computer enterprise systems. In this paper we develop an ontology that can be used as a basis for nomenclature ‘computer tools’ engineered for computing technology.

Implicit Requirements for Ontological Multi-Level Types in the UNICLASS Classification

In the multi-level type modeling community, claims that most enterprise application systems use ontologically multi-level types are ubiquitous. To verify this claim empirically, one needs to be able to expose the (often underlying) ontological structure and show that it does, indeed, make a commitment to multi-level types. We have not been able to find any published data showing this being done. From a top-level ontology requirements perspective, checking this multi-level type claim is worthwhile: if the datasets for which the top-level ontology is required are ontologically committed to multi-level types, then this is a requirement for the top-level ontology. In this paper, we present some empirical evidence that this ubiquitous claim is correct and describe the process we used to expose the underlying ontological commitments and examine them. We describe how we use the bCLEARer process to analyse the UNICLASS classifications, making their implicit ontological commitments explicit. We show how this reveals the requirements for two general ontological commitments: higher-order types and first-class relations. This establishes that any top-level ontology whose scope includes the UNICLASS classifications must be able to accommodate these commitments. From a multi-level type perspective, we have established that the bCLEARer entification process can identify underlying ontological commitments to multi-level types that do not exist in the surface linguistic structure. So, we have a process that can be reused on other datasets and application systems to help empirically verify the claim that ontological multi-level types are ubiquitous.
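To make the two commitments concrete, here is a minimal sketch (the codes and class names are placeholders, not quoted from the UNICLASS tables or the bCLEARer process): a higher-order type has types as its instances, and a first-class relation reifies the classification link as an object in its own right.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Type:
    """A first-order type: its instances are individual things."""
    code: str
    title: str

@dataclass(frozen=True)
class HigherOrderType:
    """A type whose instances are themselves types, not individuals."""
    code: str
    title: str
    instances: frozenset[Type]

@dataclass(frozen=True)
class Classification:
    """The classification link itself, reified as a first-class object."""
    classifier: HigherOrderType
    classified: Type

door_products = Type("Pr_25_30_20", "Door products")          # placeholder code
product_groups = HigherOrderType("Pr_25", "Product groups",   # placeholder code
                                 frozenset({door_products}))
link = Classification(product_groups, door_products)
print(link)
```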

Thoroughly Modern Accounting:

Shifting to a De Re Conceptual Pattern for Debits and Credits

Double entry bookkeeping lies at the core of modern accounting. It is shaped by a fundamental conceptual pattern: a design decision that was popularised by Pacioli some 500 years ago and subsequently institutionalised into accounting practice and systems. Debits and credits are core components of this conceptual pattern. This paper suggests that a different conceptual pattern, one that does not have debits and credits as its components, may be better suited to some modern accounting information systems. It makes the case by looking at two conceptual design choices that permeate the Pacioli pattern, de se and directional terms, which together yield a de se directional conceptual pattern. It suggests that the alternative design choices, de re and non-directional terms, leading to a de re non-directional conceptual pattern, have some advantages in modern complex, computer-based business environments.
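A hedged sketch of the contrast (the record shapes are illustrative readings of the two patterns, not the paper's schemas): Pacioli-style entries are written from the book-keeping entity's own perspective using directional debit/credit terms, whereas a de re, non-directional record names the parties explicitly and carries signed holding changes with no privileged viewpoint.

```python
from dataclasses import dataclass
from datetime import date

# --- Pacioli-style (de se, directional) ---
# Each entry is relative to "our" books: debit and credit are directions
# with respect to the recording entity's own accounts.
@dataclass
class LedgerEntry:
    account: str
    debit: float = 0.0
    credit: float = 0.0

sale_in_our_books = [LedgerEntry("Cash", debit=100.0),
                     LedgerEntry("Sales revenue", credit=100.0)]

# --- Illustrative de re, non-directional alternative ---
# One record names the parties and the resource that moved, with signed
# holding changes; each party's ledger view can be derived from it, so
# no perspective or direction is baked into the record itself.
@dataclass
class ResourceFlow:
    resource: str
    on: date
    changes: dict[str, float]   # party -> signed change in holding

same_sale = ResourceFlow("GBP", date(2024, 1, 15),
                         changes={"Customer Ltd": -100.0, "Supplier Ltd": +100.0})
print(same_sale)
```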

How an Evolutionary Framework Can Help Us To Understand What A Domain Ontology Is (Or Should Be) And How To Build One

Situating domain ontologies in a general, long-term, diachronic information technology framework helps us to better understand their role in the evolution of information. This perspective provides some innovative insights into how they should be built.

Building the foundations for better decisions

This presentation describes the Top-Level Ontology (TLO) that is being developed for the Information Management Framework (IMF). It starts with a brief outline of how the TLO emerged from the work on the IMF. It notes the initial focus on providing a foundation for Euclidean standards. It touches on the foundation, the core constructional ontology, built from a unified constructor theory with three elements: set, sum and tuple constructors. It then looks at the data components of the TLO and how these are used to build four-dimensional spacetime, taking in mereotopology, chronology and worldlines.
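As an illustrative encoding of the three constructors mentioned above (a sketch, not the IMF's actual formalism): each constructor builds a new object from existing ones, with sets as extensional collections, sums as mereological wholes, and tuples as ordered arrangements.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Individual:
    name: str

@dataclass(frozen=True)
class SetOf:
    """Set constructor: an extensional collection of members."""
    members: FrozenSet[object]

@dataclass(frozen=True)
class SumOf:
    """Sum constructor: a mereological whole fused from parts (order irrelevant)."""
    parts: FrozenSet[object]

@dataclass(frozen=True)
class TupleOf:
    """Tuple constructor: an ordered arrangement (order and repetition significant)."""
    elements: Tuple[object, ...]

alice, bob = Individual("alice"), Individual("bob")
committee_type = SetOf(frozenset({alice, bob}))   # a type as a set of members
committee_body = SumOf(frozenset({alice, bob}))   # the concrete whole made of its parts
seating = TupleOf((alice, bob))                   # an ordered relationship
```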
Presentation Structure:

  • Introducing the IMF Team
  • Background
    • Information Management Framework
    • Choice-based framework
  • TLO Initial Use
  • Situating the TLO in the IMF
  • Data Section: Core Constructional Ontology
  • Data Section: Top Level Ontology

A survey of top-level ontologies: framework and results

Launched in July 2018, the National Digital Twin programme was set up to deliver key recommendations of the National Infrastructure Commission’s 2017 report, “Data for the Public Good”:

  • to steer the successful development and adoption of the Information Management Framework for the built environment
  • to create an ecosystem of connected digital twins – a national digital twin – which opens the opportunity to release value for society, the economy, business and the environment