ISO TC211 workshop: to consider the impact of non-relational technologies on TC211 standards: BORO Solutions' experience
The presentation covers:
- Is there a workable UML profile for managing ontologies?
- What should the output of such a model be like?
- (we covered how neither UML nor OWL is ideal for this
- there are certainly problems generating OWL ontologies from the current TC211 UML profile
- the TC211 use of UML could be improved, even within its own profile)
- What Chris brings is experience (in his domain) of using UML to create/manage ontologies
- (quite probably not expressed in OWL)
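To give a flavour of the mapping problem (this is our own hypothetical illustration, not material from the workshop; the class names and namespace are invented), the straightforward part of generating OWL from a UML class model scripts cleanly, while the profile-specific constructs are where the difficulty lies:

```python
# A hypothetical illustration (not from the workshop) of a naive
# UML-class-to-OWL mapping, using rdflib. A plain UML class becomes an
# owl:Class easily; the TC211 profile's stereotypes and association
# multiplicities are where the awkward choices arise.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/tc211/")  # invented namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# A UML class maps directly to an owl:Class...
g.add((EX.Road, RDF.type, OWL.Class))
g.add((EX.Road, RDFS.label, Literal("Road")))

# ...but a UML association end (e.g. multiplicity 1..*) forces a choice
# between a bare owl:ObjectProperty and an added cardinality restriction,
# and profile stereotypes have no single obvious OWL counterpart.
g.add((EX.hasSegment, RDF.type, OWL.ObjectProperty))
g.add((EX.hasSegment, RDFS.domain, EX.Road))

print(g.serialize(format="turtle"))
```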
A Framework for Composition:
A step towards a foundation for assembly: An Introduction
The presentation is an introduction to the paper “A Framework for Composition”, which outlines ‘a step towards a foundation for assembly’. It:
- is a contribution to the Foundation Data Model (FDM), which
- is part of the Information Management Foundation (IMF), which
- is part of UK’s National Digital Twin programme (NDTp)
The paper aims to ensure composition (and so the FDM) is built upon a solid foundation. At the core of the notion of a component breakdown is the component as an integral (dependent) part of the composite whole. This has a rich underlying formal structure – which is described in the paper and outlined in this presentation. This structure, in turn, provides a framework for assessing how well a data model (or ontology) has captured the main elements of the structure, enabling both the assessment of existing models and the design of new ones.
The paper is technical, with a focus on the rich formal structure of the abstract general component breakdown architecture. This presentation provides a short overview of the concerns the paper addresses; as such, it offers a simpler introduction to the paper.
Presentation Structure:
- What is composition?
- How is composition modelled?
- What kind of formal structure is emerging?
- The proposed formal structure
BORO: Business Objects Reference Ontology
This presentation introduces BORO, a foundational approach that aims to underpin a range of enterprise systems in a consistent and coherent manner and takes data-driven re-engineering as its natural starting point for domain ontology building. It has two closely intertwined components: a foundational ontology and a re-engineering methodology.
Its origin and predominant area of application have been the enterprise. Suitability has been demonstrated in many industrial projects across a range of business domains, including finance, oil and gas, and defence.
Core Constructional Ontology (CCO): a Constructional Theory of Parts, Sets, and Relations
This presentation introduces the Core Constructional Ontology (CCO). It first provides the background to the development of this ontology. It then provides a summary of the development approach, looking at its key features and giving an overview of the formalisation.
Digitalisation Levels
An overview of the digitalisation levels being used in the National Digital Twin (NDT) programme.
A Framework for Composition: A Step Towards a Foundation for Assembly
Component breakdowns are a vital multi-purpose tool and hence ubiquitous across a range of disciplines. Information systems need to be capable of storing reasonably accurate representations of these breakdowns. Most current information systems have been designed around specific breakdowns, without considering their general underlying formal structure. This is understandable, given the focus on devising the breakdown and that there is not a readily available formal structure to build upon. We make a step towards providing this structure here.
At the core of the notion of a component breakdown is the component as an integral (dependent) part of the composite whole. This leads to a rich formal structure, one that requires careful consideration to capture well enough to support the range of specific breakdowns. If one is not sufficiently aware of this structure, it is difficult to determine what is required to produce a reasonably accurate representation – in particular, one that is sufficiently accurate to support interoperability.
In this report, enabled by the Construction Innovation Hub, we describe this rich formal structure and develop a framework for assessing how well a data model (or ontology) has captured the main elements of the structure. This will enable people both to assess existing models and to design new ones. As a separate exercise, and by way of illustration, we develop a data model that captures these elements.
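To make the core notion concrete, here is a minimal sketch (our illustration, not the paper's data model; all names are hypothetical) of a breakdown structure in which a component is recorded as an integral, dependent part of its composite whole:

```python
# A minimal, hypothetical sketch of a component-breakdown structure
# (our illustration, not the report's data model). The key point it
# captures: a component exists only as a dependent part of its whole.
from dataclasses import dataclass, field


@dataclass
class Composite:
    """A composite whole, e.g. a pump assembly."""
    name: str
    components: list["Component"] = field(default_factory=list)

    def add_component(self, name: str) -> "Component":
        # Components are created *by* the whole, reflecting their
        # dependence on it: there is no free-standing Component.
        component = Component(name=name, whole=self)
        self.components.append(component)
        return component


@dataclass
class Component:
    """An integral (dependent) part of exactly one composite whole."""
    name: str
    whole: Composite


pump = Composite("pump-assembly")
impeller = pump.add_component("impeller")
assert impeller.whole is pump  # the dependence is explicit in the model
```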
Associated with the notion of component (as an integral, dependent part) is the notion of replaceable part (see Appendix A for more details). We do not characterise this here but will do so in a later report.
Developing Thin Slices
An Introduction to the Methodology for Developing the Foundation Data Model and Reference Data Library of the Information Management Framework
This Developing Thin Slices report provides a technical description of the process at the heart of the Thin Slices Methodology, with the aim of supplying a common technical resource for training and guidance in this area. As such, it forms part of the wider effort to provide common resources for the development of the Information Management Framework.
It focuses on the process at the core of the Thin Slices Methodology. In particular, it identifies a requirement for a minimal foundation for these kinds of processes. In the companion report, Top-Level Categories (Partridge, forthcoming), the foundation adopted by the Information Management Framework is described. Together, the two reports cover the details of the developing thin slices process.
Top-Level Categories
Categories for the Top-Level Ontology of the Information Management Framework
This report identifies the top categories that characterise the top-level ontology that will underpin the Information Management Framework’s Foundation Data Model (where top categories exclusively and exhaustively divide the world’s entities by their fundamental kinds or natures). With these in place, the IMF’s top-level ontology is characterised.
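Read formally (our gloss, not the report's notation), 'exclusively and exhaustively divide' says the top categories partition the collection of all entities:

```latex
% Our gloss, not the report's notation: top categories C_1, \dots, C_n
% partition the collection E of the world's entities.
\bigcup_{i=1}^{n} C_i = E \quad \text{(exhaustive: every entity falls under some category)}
\qquad
C_i \cap C_j = \emptyset \ \text{for } i \neq j \quad \text{(exclusive: none falls under two)}
```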
A thin slices approach (described in Developing Thin Slices (Partridge, forthcoming)) has been adopted for the development of the Foundation Data Model. The category structure described in this report is being used as the foundation for that process, giving it a firm footing.
Core Constructional Ontology
The Foundation for the Top-Level Ontology of the Information Management Framework
The purpose of this report is to give an understanding of the technicalities of the foundation and formalisation underpinning a foundational ontology.
This report is directed at a technical audience interested in understanding what the foundation of the foundational ontology is and how it is formalised. In particular, we expect the report to be of interest to logicians and formal ontologists.
This is part of a project to build a unified foundation, called the Core Constructional Ontology (CCO). This stage of the project has developed a transitional framework that establishes the feasibility of building the CCO. The framework is formalised by means of a theory we call the Core Constructional Theory (CCT). Here we describe the CCT and its associated CCO. Later stages of the project will further develop and enhance this framework. Appendix E.5 gives some indication of what these enhancements could be.
This novel theory develops the idea that all the objects in the CCO emerge during construction. We start from an initial collection of objects—often called givens—and a small number of constructors, and the entire ontology unfolds from repeated constructions. So from the givens and constructors one knows, in principle, all the objects in the ontology. Using the technical resources of plural logic, the CCT formalises the arrangement of constructions in stages, where the intended ontology arises after exhausting all the stages. This report documents the CCT and provides a proof of its consistency.
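As a rough sketch of the stage picture (in our own notation, not the CCT's plural-logic formalisation, and indexing stages by the naturals for simplicity), the construction can be pictured as a cumulative closure:

```latex
% A rough sketch in our own notation, not the CCT's formalisation.
% G: the givens; K: the constructors; S_n: objects available at stage n.
\begin{aligned}
  S_0     &= G, \\
  S_{n+1} &= S_n \cup \{\, k(x_1, \dots, x_m) \mid k \in K,\ x_1, \dots, x_m \in S_n \,\}, \\
  \text{Ontology} &= \textstyle\bigcup_{n} S_n .
\end{aligned}
```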
How the IMF Team is building a four-dimensional top-level ontology
This describes the IMF's approach to building a four-dimensional top-level ontology (TLO). It starts with the background, describing the Information Management Framework (IMF) and its approach to top-level ontologies, with a focus on fundamental ontological choices that typically boil down to a choice of whether to stratify or unify. It outlines the TLO use case - 'Euclidean' standards - and the ontological scope this creates. It then situates the TLO in the Foundation Data Layer of the IMF, built upon the ground layer - the Core Constructional Ontology (CCO). It then describes the CCO and the TLO in terms of their components.
Presentation Structure
- Introducing the IMF Team
- Background
- Information Management Framework
- Choice-based framework
- TLO Initial Use
- Situating the TLO in the IMF
- Data Section: Core Constructional Ontology
- Data Section: Top Level Ontology
Building the foundations for better decisions
This presentation describes the Top-Level Ontology (TLO) that is being developed for the Information Management Framework (IMF). It starts with a brief outline of how the TLO emerged from the work on the IMF. It notes the initial focus on providing a foundation for Euclidean standards. It touches on the foundation - the Core Constructional Ontology - built from a unified constructor theory with three elements: set, sum and tuple constructors. It then looks at the data components of the TLO and how these are used to build four-dimensional spacetime, taking in mereotopology, chronology and worldlines.
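As a loose illustration of the constructor idea (our sketch, not the IMF's formalisation; all names are invented), the three constructors can be pictured as operations that build new objects from already-constructed ones:

```python
# A loose, hypothetical illustration of the three constructors named in
# the talk (set, sum, tuple) as operations over already-constructed
# objects. This is our sketch, not the IMF's formal machinery.
from dataclasses import dataclass


@dataclass(frozen=True)
class Obj:
    """A constructed object, tagged with the constructor that built it."""
    constructor: str  # "given", "set", "sum", or "tuple"
    parts: tuple      # the inputs the constructor was applied to


def given(name: str) -> Obj:
    return Obj("given", (name,))


def set_of(*members: Obj) -> Obj:
    # Set constructor: order and repetition do not matter.
    return Obj("set", tuple(sorted(set(members), key=repr)))


def sum_of(*parts: Obj) -> Obj:
    # Sum (mereological fusion) constructor.
    return Obj("sum", tuple(sorted(set(parts), key=repr)))


def tuple_of(*items: Obj) -> Obj:
    # Tuple constructor: order matters, repetition allowed.
    return Obj("tuple", items)


a, b = given("a"), given("b")
print(set_of(a, b) == set_of(b, a))      # True: sets ignore order
print(tuple_of(a, b) == tuple_of(b, a))  # False: tuples respect order
```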
Presentation Structure:
- Introducing the IMF Team
- Background
- Information Management Framework
- Choice-based framework
- TLO Initial Use
- Situating the TLO in the IMF
- Data Section: Core Constructional Ontology
- Data Section: Top Level Ontology
How to – and How Not to – Build Ontologies: The Hard Road to Semantic Interoperability
The digitalisation journey that takes us to seamlessly, semantically interoperating enterprise systems is (at the later stages, where ontology is deployed) a hard road to travel. This presentation aims to highlight some of the main hurdles people will face on the digitalisation journey, using a cultural evolution perspective. From this viewpoint, we highlight the radical new practices that need to be adopted along the journey. The presentation looks at the concerns this evolutionary perspective raises - for example, evolutionary contingency: it seems clear that if we do not adapt in the right way, we will not evolve interoperability. While we have some idea of what the practices are and what the trajectory of the journey is, this is not enough: the community also needs to find the means to (horizontally) inherit these practices. The presentation then takes a quick tour around some of the new practices that need to be adopted.
Avoiding premature standardisation: balancing innovation and conformity
Overview
- Currently work in a niche area:
- ontologies for operational system semantic interoperability – integration
- think integrating enterprise SAP and Maximo operationally
- have been working here for a while (since the late 1980s)
- not many (any?) other people working here
- Believe that:
- there are opportunities for architectural (radical and disruptive) innovation in this and other ontology area
- at this stage, the approach in my area needs to be agile
- that premature standardisation could stifle the innovation
- Want to suggest that:
- there is a need to balance the conformity (of standards) with the agility needed to produce innovation
- the balancing involves recognising when to standardise,
- so, recognising when there is premature standardisation
- it is not yet time to standardise in my area
A brief introduction to BORO
This is a brief introduction to the BORO approach and its two main components: the BORO Foundation and the bCLEARer methodology. The introduction will give an overview of both the history and the nature of the approach. It will finish with a brief look at some current enhancement work on modality and graphs, as well as implementations.
Modernising Engineering Datasheets: a bCLEARer project
A case study in migrating from legacy engineering standards
This presentation aims to provide a sense of what happens when a bCLEARer project is faced with data in the FORM paradigm (a kind of semi-structured data). It aims to show how the implicit FORM syntax can be made explicit. The presentation walks through a case history based upon a real project, but both simplified and sanitised. This is a sample of actual work, not intended as an example of best practice - or even good practice. It does, however, show some typical approaches and the challenges faced, particularly at the early alpha stage.
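As a flavour of the kind of move involved (an invented example, not project data), making the implicit FORM syntax explicit can be as simple as parsing a packed datasheet cell into separately typed fields:

```python
# An invented example (not project data) of making implicit FORM syntax
# explicit: a raw datasheet cell packs label, value, and unit into one
# string; the explicit form separates them into typed fields.
import re
from dataclasses import dataclass


@dataclass
class DatasheetEntry:
    label: str
    value: float
    unit: str


def parse_cell(raw: str) -> DatasheetEntry:
    """Parse an implicit 'Label: value unit' cell into explicit fields."""
    match = re.fullmatch(r"\s*(.+?)\s*:\s*([\d.]+)\s*(\S+)\s*", raw)
    if match is None:
        raise ValueError(f"unrecognised cell syntax: {raw!r}")
    label, value, unit = match.groups()
    return DatasheetEntry(label=label, value=float(value), unit=unit)


print(parse_cell("Design pressure: 10.5 barg"))
# DatasheetEntry(label='Design pressure', value=10.5, unit='barg')
```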