Demonstrating a Successful Strategy for Network Enabled Capability

Responsive, agile, collaborative planning and execution is a key requirement for the development of a successful Network Enabled Capability (NEC), whether at the national or international level. This paper makes the case that it is not possible to achieve this agility without solving the semantic interoperability problem. The semantic issues facing NATO’s Network Enabled Capability (NNEC) are also faced by its members in their national NECs. There are currently many proposed strategies attempting to address these issues. Finding one that will provide the hoped-for integration while causing only minimal changes to existing infrastructure is a major challenge. In this situation it is vital to be able to demonstrate the effectiveness of a strategy. This paper presents the findings from a project tasked with both identifying a strategy and demonstrating its effectiveness – the Joint Tactical Air Defence Integration System (JTADIS) project. This project was funded by the UK Ministry of Defence (MoD) and undertaken by QinetiQ – the semantic analysis was undertaken by BORO Solutions.

Semantic Modernisation:

Layering, Harvesting and Interoperability

There is a well understood requirement for semantic interoperability within NATO and an emerging strategy to address it. One of the strategy’s key components – the ‘semantic description’ – requires further clarification. What is less well recognised is that this ‘semantic description’ can also be viewed as a component of a wider strategic requirement for semantic modernisation. This paper describes how the semantic modernisation techniques of layering and harvesting provide a strong foundation for the production of semantic descriptions. It describes two projects that illustrate how these techniques are being used to do this. Finally, it reflects upon how this could help to refine the current NATO NEC (NNEC) semantic interoperability strategy.

A Novel Ontological Approach to Semantic Interoperability between Legacy Air Defense Command and Control Systems

In common with many other government defence departments, the UK Ministry of Defence (MoD) has realised that it has a plethora of legacy systems that were procured as domain-specific with little emphasis given to integration requirements. In particular, it realised that the lack of integration between a significant number of the legacy air defence command and control (AD-C2) systems meant it could not deliver the increased agility needed for joint force AD and that current approaches to integration were unlikely to resolve the problem. It realised that it needed a new approach that demonstrably worked. This paper describes a programme initiated by the MoD to address this problem through the formulation of a novel solution and its demonstration in the tactical AD-C2 environment using a sample of these existing legacy systems. It describes the ontological solution deployed to resolve the 'hard' semantic interoperability challenge. It outlines the physical and semantic architecture that was developed to support this approach and describes the implemented planning and collaborative execution (PACE-based) and semantic interoperability engine (SIE) solution.

What is a service?

Presentation of the report 'An Analysis of Services' prepared for the UK MoD.

This describes a forensic approach to developing a common understanding of 'service' across business and IT.

The goal of this report is to provide an in-depth common conceptual understanding of services end-to-end across the enterprise – one that encompasses business, IT and technical services and gives a picture of what, in essence, a service is. Prepared for the UK MoD in 2010.

Ontology then agentology:

A finer grained framework for enterprise modelling

Data integration of enterprise systems typically involves combining heterogeneous data residing in different sources into a unified, homogeneous whole. This heterogeneity takes many forms and there are all sorts of significant practical and theoretical challenges to managing this, particularly at the semantic level. In this paper, we consider a type of semantic heterogeneity that is common in Model Driven Architecture (MDA) Computation Independent Models (CIM); one that arises due to the data’s dependence upon the system it resides in. There seems to be no relevant work on this topic in Conceptual Modelling, so we draw upon research done in philosophy and linguistics on formalizing pure indexicals – ‘I’, ‘here’ and ‘now’ – also known as de se (Latin ‘of oneself’) or the deictic centre. This reveals, firstly, that the core dependency is essential when the system is agentive and, secondly, that the rest of the dependency can be designed away. In the context of MDA, this suggests a natural architectural layering; where a new concern ‘system dependence’ is introduced and used to divide the CIM model into two parts; a system independent ontology model and a system dependent agentology model. We also show how this dependence complicates the integration process – but, interestingly, not reuse in the same context. We explain how this complication usually provides good pragmatic reasons for maximizing the ontology content in an ‘Ontology First’, or ‘Ontology then Agentology’ approach.
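The system dependence at issue can be made concrete with a small sketch (the record fields and system names below are illustrative, not drawn from the paper). A record that in effect says 'ordered by me' only has a determinate meaning relative to the system that holds it; resolving the pure indexical into a system-independent name yields a record whose meaning no longer depends on where it resides.

```python
# Sketch: resolving a de se (system-dependent) record into a de re
# (system-independent) one. All field and system names are illustrative.

def resolve_de_se(record: dict, holding_system: str) -> dict:
    """Replace the pure indexical 'me' with the name of the holding system,
    yielding a record whose meaning is independent of where it is stored."""
    resolved = dict(record)
    for field, value in record.items():
        if value == "me":          # stands in for the pure indexical 'I'
            resolved[field] = holding_system
    return resolved

# The same de se record means different things in different systems...
order = {"item": "radar-feed", "ordered_by": "me"}
in_a = resolve_de_se(order, "SystemA")
in_b = resolve_de_se(order, "SystemB")

print(in_a["ordered_by"])  # SystemA
print(in_b["ordered_by"])  # SystemB
```

Integrating the two de se copies verbatim would wrongly conflate them; the resolved de re forms can be combined safely, which illustrates one pragmatic reason for maximizing the ontology content first.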

Developing an Ontological Sandbox:

Investigating Multi-Level Modelling’s Possible Metaphysical Structures

One of the central concerns of the multi-level modelling (MLM) community is the hierarchy of classifications that appear in conceptual models; what these are, how they are linked and how they should be organised into levels and modelled. Though there has been significant work done in this area, we believe that it could be enhanced by introducing a systematic way to investigate the ontological nature and requirements that underlie the frameworks and tools proposed by the community to support MLM (such as Orthogonal Classification Architecture and Melanee). In this paper, we introduce a key component for the investigation and understanding of the ontological requirements, an ontological sandbox. This is a conceptual framework for investigating and comparing multiple variations of possible ontologies – without having to commit to any of them – isolated from a full commitment to any foundational ontology. We discuss the sandbox framework and walk through an example of how it can be used to investigate a simple ontology. The example, despite its simplicity, illustrates how the constructional approach can help to expose and explain the metaphysical structures used in ontologies, and so reveal the underlying nature of MLM levelling.

Formalization of the classification pattern:

survey of classification modeling in information systems engineering

Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, nowhere more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the “one and the many.” Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor’s work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powersets by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt.
This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus to the ISE literature. The literature survey follows the evolution of ISE’s understanding of how to formalize the classification pattern. The various proposals are assessed using the classical example of classification – the Linnaean taxonomy – formalized using powersets as a benchmark for formal expressiveness. The broad conclusion of the survey is that (1) the ISE community is currently in the early stages of the process of understanding how to formalize the classification pattern, particularly in the requirements for expressiveness exemplified by powersets, and (2) that there is an opportunity to intervene and speed up the process of adoption by clarifying this expressiveness. Given the central place that the classification pattern has in domain modeling, this intervention has the potential to lead to significant improvements.
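The powerset benchmark can be pictured with a toy fragment of the Linnaean example (the names below are illustrative stand-ins, not a real taxonomy dataset): if individual organisms are the base elements, a species is a set of organisms – an element of the powerset of the organisms – and a genus is a set of species, an element of the powerset of that powerset.

```python
# Toy Linnaean fragment: classification levels as iterated powersets.
# All names are illustrative stand-ins for real taxa and organisms.

# Level 0: individual organisms
leo1, leo2, tigris1 = "leo-1", "leo-2", "tigris-1"

# Level 1: species are sets of organisms -> elements of P(organisms)
panthera_leo = frozenset({leo1, leo2})
panthera_tigris = frozenset({tigris1})

# Level 2: a genus is a set of species -> an element of P(P(organisms))
panthera = frozenset({panthera_leo, panthera_tigris})

# Classification is then just set membership at each level:
print(leo1 in panthera_leo)      # True  (organism : species)
print(panthera_leo in panthera)  # True  (species : genus)
print(leo1 in panthera)          # False (membership does not chain across levels)
```

The last line illustrates why the pattern is formally demanding: an organism is a member of its species but not of its genus, so tools that flatten the levels into one classification relation lose exactly the structure the powerset benchmark tests for.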

Guidelines for Developing Ontological Architectures in Modelling and Simulation

This book is motivated by the belief that “a better understanding of ontology, epistemology, and teleology” is essential for enabling Modelling and Simulation (M&S) systems to reach the next level of ‘intelligence’. This chapter focuses on one broad category of M&S systems where the connection is more concrete; ones where building an ontology – and, we shall suggest, an epistemology – as an integrated part of their design will enable them to reach the next level of ‘intelligence’. Within the M&S community, this use of ontology is at an early stage; so there is not yet a clear picture of what this will look like. In particular, there is little or no guidance on the kind of ontological architecture that is needed to bring the expected benefits. This chapter aims to provide guidance by outlining some major concerns that shape the ontology and the options for resolving them. The hope is that paying attention to these concerns during design will lead to a better quality architecture, and so enable more ‘intelligent’ systems. It is also hoped that understanding these concerns will lead to a better understanding of the role of ontology in M&S.

Coordinate Systems

Level Ascending Ontological Options

A major challenge faced in the deployment of collaborating unmanned vehicles is enabling the semantic interoperability of sensor data. One aspect of this, where there is significant opportunity for improvement, is characterizing the coordinate systems for sensed position data. We are involved in a proof of concept project that addresses this challenge through a foundational conceptual model using a constructional approach based upon the BORO Foundational Ontology. The model reveals the characteristics as sets of options for configuring the coordinate systems. This paper examines how these options involve, ontologically, ascending levels. It identifies two types of levels, the well-known type levels and the less well-known tuple/relation levels.
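A rough way to see the two kinds of level ascent (this sketch is ours; the names and options are illustrative, not the project's actual model): building a position from axis values ascends tuple/relation levels, since a coordinate is a tuple over those values, while configuration options such as datum and unit ascend type levels, since each option picks out a type of coordinate system.

```python
# Sketch of the two kinds of level ascent; all names are illustrative.

# Tuple/relation level ascent: a position is a tuple built from axis values.
position = (51.5, -0.12)            # (latitude, longitude) over numbers

# Type level ascent: configuration options pick out types of coordinate system.
coordinate_systems = {
    "WGS84-degrees": {"datum": "WGS84", "unit": "degree"},
    "WGS84-radians": {"datum": "WGS84", "unit": "radian"},
}

# The systems sharing a datum form a type of coordinate-system type:
wgs84_family = {name for name, cfg in coordinate_systems.items()
                if cfg["datum"] == "WGS84"}

print(sorted(wgs84_family))  # ['WGS84-degrees', 'WGS84-radians']
```

The point of separating the two ascents is that interoperating systems can agree on the tuple structure of a position while still disagreeing on the type-level options, and both mismatches need characterizing.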

Grounding for an Enterprise Computing Nomenclature Ontology

We aim to lay the basis for a unified architecture for enterprise computer nomenclatures by providing the grounding ontology based upon the BORO Foundational Ontology. We start to lower two significant barriers within the computing community to making progress in this area: a lack of a broad appreciation of the nature and practice of nomenclature and a lack of recognition of some specific technical, philosophical issues that nomenclatures raise. We provide an overview of the grounding ontology and how it can be implemented in a system. We focus on the issue that arises when tokens lead to the overlap of the represented domain and its system representation – system-domain-overlap – and how this can be resolved.

Grounding for an Enterprise Computing Nomenclature Ontology - Long Version

We aim to lay the basis for a unified architecture for nomenclatures in enterprise computer systems by providing the grounding for an ontology of enterprise computing nomenclatures within a foundational ontology. We look at the way in which nomenclatures are tools both shaped by and shaping the prevailing technology. In the era of printing technology, nomenclatures in lists and tables were ‘paper tools’ deployed alongside scientific taxonomic and bureaucratic classifications. These tools were subsequently embedded in computer enterprise systems. In this paper we develop an ontology that can be used as a basis for nomenclature ‘computer tools’ engineered for computing technology.

Implicit Requirements for Ontological Multi-Level Types in the UNICLASS Classification

In the multi-level type modeling community, claims that most enterprise application systems use ontologically multi-level types are ubiquitous. To be able to empirically verify this claim one needs to be able to expose the (often underlying) ontological structure and show that it does, indeed, make a commitment to multi-level types. We have not been able to find any published data showing this being done. From a top-level ontology requirements perspective, checking this multi-level type claim is worthwhile. If the datasets for which the top-level ontology is required are ontologically committed to multi-level types, then this is a requirement for the top-level ontology. In this paper, we both present some empirical evidence that this ubiquitous claim is correct and describe the process we used to expose the underlying ontological commitments and examine them. We describe how we use the bCLEARer process to analyse the UNICLASS classifications, making their implicit ontological commitments explicit. We show how this reveals the requirements for two general ontological commitments; higher-order types and first-class relations. This establishes a requirement for a top-level ontology that includes the UNICLASS classification to be able to accommodate these requirements. From a multi-level type perspective, we have established that the bCLEARer entification process can identify underlying ontological commitments to multi-level types that do not exist in the surface linguistic structure. So, we have a process that we can reuse on other datasets and application systems to help empirically verify the claim that ontological multi-level types are ubiquitous.
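The higher-order-type commitment can be pictured with a small sketch (the identifiers are invented for illustration; they are not UNICLASS codes or bCLEARer output): an individual product is an instance of a product type, and that product type is itself an instance of a second-order type such as 'product class'.

```python
# Sketch of a two-level (higher-order) type structure.
# All identifiers are invented; they are not UNICLASS codes.

instance_of = {
    "pump-0042": "CentrifugalPump",     # individual : first-order type
    "CentrifugalPump": "ProductClass",  # first-order type : second-order type
}

def type_chain(item: str) -> list:
    """Follow instance_of links upward, exposing the multi-level structure."""
    chain = [item]
    while chain[-1] in instance_of:
        chain.append(instance_of[chain[-1]])
    return chain

print(type_chain("pump-0042"))
# ['pump-0042', 'CentrifugalPump', 'ProductClass']
```

A classification whose entries are, like 'CentrifugalPump' here, both instances and types is exactly what a top-level ontology restricted to first-order types cannot accommodate.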

Thoroughly Modern Accounting:

Shifting to a De Re Conceptual Pattern for Debits and Credits

Double entry bookkeeping lies at the core of modern accounting. It is shaped by a fundamental conceptual pattern; a design decision that was popularised by Pacioli some 500 years ago and subsequently institutionalised into accounting practice and systems. Debits and credits are core components of this conceptual pattern. This paper suggests that a different conceptual pattern, one that does not have debits and credits as its components, may be more suited to some modern accounting information systems. It makes the case by looking at two conceptual design choices that permeate the Pacioli pattern; de se and directional terms - leading to a de se directional conceptual pattern. It suggests alternative design choices - de re and non-directional terms, leading to a de re non-directional conceptual pattern - have some advantages in modern complex, computer-based, business environments.
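The contrast between the two patterns can be sketched as follows (the account names, field names and amounts are illustrative, not taken from the paper): the Pacioli pattern records one event twice, as a perspectival debit and credit from each account's own point of view, while a de re non-directional pattern records the event once, naming both parties explicitly.

```python
# Sketch: the same transfer in the two conceptual patterns.
# All account names, field names and amounts are illustrative.

# De se, directional (Pacioli-style): two perspectival entries per event.
ledger_entries = [
    {"account": "Cash",  "side": "credit", "amount": 100},
    {"account": "Stock", "side": "debit",  "amount": 100},
]

# De re, non-directional: one entry naming both parties explicitly.
transfer = {"from": "Cash", "to": "Stock", "amount": 100}

def balances(transfers):
    """Derive account balances from de re transfer events."""
    totals = {}
    for t in transfers:
        totals[t["from"]] = totals.get(t["from"], 0) - t["amount"]
        totals[t["to"]] = totals.get(t["to"], 0) + t["amount"]
    return totals

print(balances([transfer]))  # {'Cash': -100, 'Stock': 100}
```

The double-entry invariant (debits equal credits) falls out of the de re form by construction, since each event names both parties once, rather than being enforced across two separately maintained perspectival entries.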

A Framework for Composition:

A Step Towards a Foundation for Assembly

Component breakdowns are a vital multi-purpose tool and hence ubiquitous across a range of disciplines. Information systems need to be capable of storing reasonably accurate representations of these breakdowns. Most current information systems have been designed around specific breakdowns, without considering their general underlying formal structure. This is understandable, given the focus on devising the breakdown and that there is not a readily available formal structure to build upon. We make a step towards providing this structure here.

At the core of the notion of a component breakdown is the component as an integral (dependent) part of the composite whole. This leads to a rich formal structure, one that requires careful consideration to capture well enough to support the range of specific breakdowns. If one is not sufficiently aware of this structure, it is difficult to determine what is required to produce a reasonably accurate representation – in particular, one that is sufficiently accurate to support interoperability.

In this report, enabled by the Construction Innovation Hub, we describe this rich formal structure and develop a framework for assessing how well a data model (or ontology) has captured the main elements of the structure. This will enable people to both assess existing models as well as design new models. As a separate exercise, as an illustration, we develop a data model that captures these elements.
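A minimal sketch of the core of this structure (the entity names are ours for illustration, not the report's data model): treating each component as an integral part of one composite whole at a time, a breakdown forms a tree in which every part depends upon its composite.

```python
# Sketch: a component breakdown as a tree of integral parts.
# Entity names are illustrative, not taken from the report's data model.

breakdown = {                   # component -> its composite whole
    "wheel": "chassis",
    "axle": "chassis",
    "chassis": "vehicle",
}

def whole_chain(component: str) -> list:
    """Walk from a component up through its composite wholes."""
    chain = [component]
    while chain[-1] in breakdown:
        chain.append(breakdown[chain[-1]])
    return chain

def components_of(whole: str) -> set:
    """The direct integral parts of a given whole."""
    return {part for part, w in breakdown.items() if w == whole}

print(whole_chain("wheel"))               # ['wheel', 'chassis', 'vehicle']
print(sorted(components_of("chassis")))   # ['axle', 'wheel']
```

Even this toy version makes the interoperability stakes visible: two systems exchanging breakdowns must agree on which relation the links encode (integral parthood here), or chains and part lists derived from them will diverge.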

Associated with the notion of component (as an integral, dependent part) is the notion of replaceable part (see Appendix A for more details). We do not characterise this here but will do so in a later report.