Ontology-Driven Re-engineering of Business Systems

This tutorial presents an introduction to the BORO methodology, an ontology-based systems engineering approach. The authors present both the ontological foundations of the approach and business examples of its application.

Software Stability:

Recovering General Patterns of Business

The software stability approach is required to balance the seemingly contradictory goals of stability over the software lifecycle with the need for adaptability, extensibility and interoperability. This workshop paper addresses the issue of how software stability can be achieved over time by outlining an approach to evolving General Business Patterns (GBPs) from the empirical data contained within legacy systems. GBPs are patterns of business objects that are (directionally) stable across contexts of use. The work explains, via a small worked example, how stability is achieved via a process of ‘sophistication’. The outcome of the process demonstrates how the balance that stability seeks can be achieved.

Ontology Mining versus Ontology Speculation

When we embed the building of an ontology into an information system development or maintenance process, then the question arises as to how one should construct the content of the ontology. One of the choices is whether the construction process should focus on the mining of the ontology from existing resources or should be the result of speculation (‘starting with a blank sheet of paper’). I present some arguments for choosing mining over speculation and then look at the implications this has for legacy modernisation.

The Role of Ontology in Semantic Integration

More and more enterprises are currently undertaking projects to integrate their applications. They are finding that one of the more difficult tasks facing them is determining how the data from one application matches semantically with the other applications. Currently there are few methodologies for undertaking this task – most commercial projects just rely on experience and intuition. Taking semantically heterogeneous databases as the prototypical situation, this paper describes how ontology (in the traditional metaphysical sense) can contribute to delivering a more efficient and effective process of matching by providing a framework for the analysis, and so the basis for a methodology. It delivers not only a better process for matching, but also a better result. This paper describes a couple of examples of this: for instance, how the analysis encourages a kind of generalisation that reduces complexity. Finally, it suggests that the benefits are not restricted to individual integration projects: the process produces models which can be used to construct a universal reference ontology – for general use in a variety of types of projects.
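The idea of matching through a shared general concept, rather than pairwise attribute-name comparison, can be sketched roughly as follows. This is an illustrative sketch only; the schema names, the `Party` concepts and the `matches` helper are all hypothetical and not taken from the paper.

```python
# Hypothetical sketch: two heterogeneous schemas are matched by mapping
# both onto a shared, more general ontology concept, instead of comparing
# attribute names pairwise.

# Two source schemas with different vocabularies for the same notions.
crm_schema = {"Customer": ["cust_name", "cust_addr"]}
billing_schema = {"Client": ["full_name", "postal_address"]}

# A small shared ontology: general concepts, and the source fields an
# analyst has asserted to instantiate them.
ontology = {
    "Party.name":    [("Customer", "cust_name"), ("Client", "full_name")],
    "Party.address": [("Customer", "cust_addr"), ("Client", "postal_address")],
}

def matches(concepts):
    """Derive cross-schema field matches from the shared concepts."""
    pairs = []
    for concept, fields in concepts.items():
        for i, a in enumerate(fields):
            for b in fields[i + 1:]:
                pairs.append((a, b, concept))
    return pairs

for a, b, concept in matches(ontology):
    print(f"{a} <-> {b} via {concept}")
```

The generalisation the paper mentions shows up here as the single `Party.name` concept standing in for many pairwise attribute comparisons.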

Ontology meets Big Data:

Immutability

From the perspective of enterprise computing, ontology is seen as a kind of detached pure science. When enterprise computing ventures into ontological topics it does not look to ontology to provide it with theories – it devises its own theory-lite solutions. This keynote aims to make a case for joining up the two fields by identifying an area where enterprise computing can usefully apply ontological theory. It does this using an example: immutability, a current concern in big data. It argues that ontology’s theories about change, in particular McTaggart’s analysis of ways of viewing time in terms of series, provide a strong explanatory framework for enterprise computing’s immutability and have the potential to lead to better solutions. This approach also reveals that there is an aspect of change in computing systems – the epistemic aspect – where a mutable approach (McTaggart’s A series) provides a better explanatory framework.
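The contrast between the two series can be given a rough computational reading; the following sketch is not from the keynote, but illustrates one way the distinction maps onto big-data practice. A B-series-style view treats the event history as a fixed, immutable ordering; an A-series-style view is a mutable "present" derived from it.

```python
# Illustrative sketch (not from the keynote): a B-series-style immutable
# event log, where temporal facts are fixed earlier/later relations,
# alongside an A-series-style mutable "present" view computed from it.
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: events are immutable once recorded
class Event:
    timestamp: int               # B-series position: a fixed ordering
    account: str
    delta: int

log: list[Event] = []            # append-only; past events never change

def record(ts, account, delta):
    log.append(Event(ts, account, delta))

def balance_as_of(account, now):
    """A-series-style view: 'the present balance' changes as `now` moves,
    but the underlying B-series log does not."""
    return sum(e.delta for e in log if e.account == account and e.timestamp <= now)

record(1, "acme", 100)
record(2, "acme", -30)
print(balance_as_of("acme", 1))  # 100
print(balance_as_of("acme", 2))  # 70
```

The epistemic aspect mentioned in the abstract would correspond to views like `balance_as_of`, which genuinely change as the observer's `now` moves, even though the recorded history does not.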

Re-engineering Data with 4D Ontologies and Graph Databases

The amount of data that is being made available on the Web is increasing. This provides business organisations with the opportunity to acquire large datasets in order to offer novel information services or to better market existing products and services. Much of this data is now publicly available (e.g., thanks to initiatives such as Open Government Data). The challenge from a corporate perspective is to make sense of the third party data and transform it so that it can more easily integrate with their existing corporate data or with datasets with a different provenance. This paper presents research-in-progress aimed at semantically transforming raw data on U.K. registered companies. The approach adopted is based on BORO (a 4D foundational ontology and re-engineering method) and the target technological platform is Neo4J (a graph database). The primary challenges encountered are (1) re-engineering the raw data into a 4D ontology and (2) representing the 4D ontology in a graph database. The paper discusses these challenges and explains the transformation process that is currently being adopted.
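A minimal sketch of what such a transformation might produce is given below. The node labels, relationship name and field names are assumptions for illustration, not the paper's actual model: a flat company record becomes a whole-life individual (the company's 4D extent) plus temporal parts for each recorded state.

```python
# Illustrative sketch (names are hypothetical, not from the paper):
# re-engineering a flat company record into 4D-style graph elements,
# here built as plain dictionaries standing in for a property graph.

raw_record = {
    "company_no": "01234567",
    "name": "ACME LTD",
    "previous_name": "ACME TRADING LTD",
    "name_changed_on": "2010-05-01",
}

nodes, edges = [], []

def add_node(node_id, labels, props):
    nodes.append({"id": node_id, "labels": labels, "props": props})

def add_edge(src, rel, dst):
    edges.append({"from": src, "rel": rel, "to": dst})

# The whole-life individual: the 4D spatio-temporal extent of the company.
add_node("c1", ["Company"], {"company_no": raw_record["company_no"]})

# Temporal parts: states of the company under each of its names.
add_node("s1", ["State"], {"name": raw_record["previous_name"],
                           "ends": raw_record["name_changed_on"]})
add_node("s2", ["State"], {"name": raw_record["name"],
                           "begins": raw_record["name_changed_on"]})
add_edge("s1", "TEMPORAL_PART_OF", "c1")
add_edge("s2", "TEMPORAL_PART_OF", "c1")

print(len(nodes), len(edges))  # 3 2
```

In Neo4J itself the same shape would be created with Cypher `CREATE` statements, with the 4D mereological relation expressed as an ordinary graph relationship.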

Grounding for Ontological Architecture Quality:

Metaphysical Choices

Information systems (IS) are getting larger and more complex, becoming ‘gargantuan’. IS practices have not evolved in step to handle the development and maintenance of these gargantuan systems, leading to a variety of quality issues. The community recognises the need to develop an appropriate organising architecture and is making significant efforts. Examples include the Systems Modeling Language (SysML), the Reference Model for Open Distributed Processing (RM-ODP) and 4+1 Architectural Blueprints. Most of these follow IEEE 1471-2000’s recommendation to use view models. We believe that these efforts are missing a key component – an information grounding view. In this paper, we firstly describe this view. Then we suggest a way to provide an architecture for it – foundational ontologies – and a way of assessing them – metaphysical choices. We illustrate how the metaphysical choices are made and how this can affect information modelling.

Report from the ECOOP 2004 Workshop on Philosophy, Ontology, and Information Systems

The workshop aimed at providing a forum to discuss the use of philosophical ontology in object-oriented information systems. Whilst ontology is now more widely used in computing circles – knowledge representation, system integration, legacy transformation, and the semantic web for example – initial attempts have been modest in their outcomes. This is because computing ontology to date has been used primarily for (often competing) concept definitions: pragmatically, ontologies have either been developed in an abstract sense (based on some authoritative perspective), or people have taken materials at hand (data models and the like) and tried to glue them together. A sound basis on which to properly align different views on aspects of the world in order to work towards a consistent whole is missing. With this in mind, the workshop aimed to secure a measure of agreement on:

  • What philosophical ontology is,
  • How ontology can assist in software development,
  • Key obstacles to the deployment of ontology, and
  • Possible collaborative efforts among the participants.

Selection of participants was based on short position papers and/or previously demonstrated interest in related areas of activity.

Formalization of the classification pattern:

survey of classification modeling in information systems engineering

Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, nowhere more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the “one and the many.” Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor’s work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powersets by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt.
This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus to the ISE literature. The literature survey follows the evolution of ISE’s understanding of how to formalize the classification pattern. The various proposals are assessed using the classical example of classification: the Linnaean taxonomy, formalized using powersets as a benchmark for formal expressiveness. The broad conclusion of the survey is (1) that the ISE community is currently in the early stages of the process of understanding how to formalize the classification pattern, particularly in the requirements for expressiveness exemplified by powersets, and (2) that there is an opportunity to intervene and speed up the process of adoption by clarifying this expressiveness. Given the central place that the classification pattern has in domain modeling, this intervention has the potential to lead to significant improvements.
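The powerset benchmark can be sketched concretely with a toy fragment of the Linnaean example; the individual and species names below are illustrative, not taken from the survey. Individuals are elements; a species is a member of the powerset of individuals (i.e., a set of individuals); a higher-order classifier such as "species" itself sits one level up, in the powerset of the powerset.

```python
# Illustrative sketch: the classification pattern formalized with
# powersets, using a toy fragment of the Linnaean taxonomy.
fido, rex, tom = "fido", "rex", "tom"
individuals = frozenset({fido, rex, tom})

# A species is a set of individuals, i.e. a member of P(individuals).
canis_familiaris = frozenset({fido, rex})
felis_catus = frozenset({tom})
assert canis_familiaris <= individuals   # a subset, hence in the powerset

# The higher-order type "species" lives in P(P(individuals)):
# a set whose members are themselves species.
species = frozenset({canis_familiaris, felis_catus})

print(fido in canis_familiaris)          # True: instantiation as membership
print(canis_familiaris in species)       # True: higher-order classification
```

The expressiveness at stake in the survey is visible even here: the same membership relation handles both an individual falling under a species and a species falling under a higher-order type, something many modeling notations cannot express uniformly.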

Software Stability:

Recovering General Patterns of Business

As the re-engineering of software systems becomes increasingly common amongst organisations, a software stability approach is required to balance the seemingly contradictory goals of stability over the software lifecycle with the need for adaptability, extensibility and interoperability. This paper addresses the issue of how software stability can be achieved over time by outlining an approach to evolving General Business Patterns (GBPs) from the empirical data contained within legacy systems. GBPs are patterns of business objects that are (directionally) stable across contexts of use. The approach is rooted in developing patterns by extracting the business knowledge embedded in existing software systems. The process of developing this business knowledge is done via the careful use of ontology, which provides a way to reap the benefits of clear semantic expression. A worked example is presented to show how stability is achieved via a process of ‘interpretation’ and ‘sophistication’. The outcome of the process demonstrates how the balance that stability seeks can be achieved.

Improving Model Quality through Foundational Ontologies:

Two Contrasting Approaches to the Representation of Roles

Several foundational ontologies have been developed recently. We examine two of these from the point of view of their quality in representing temporal changes, focusing on the example of roles. We discuss how these are modelled in two foundational ontologies: the Unified Foundational Ontology and the BORO foundational ontology. These exhibit two different approaches, endurantist and perdurantist respectively. We illustrate the differences using a running example in the university student domain, wherein one individual is a registered student for a period and, for part of that period, also the elected President of the Student Union. The metaphysical choices made by UFO and BORO lead to different representations of roles. Two key differences which affect the way roles are modelled are exemplified in this paper: (1) different criteria of identity and (2) differences in the way individual objects extend over time and possible worlds. These differences impact upon the quality of the models produced in terms of their respective explanatory power. The UFO model concentrates on the notion of validity in “all possible worlds” and is unable to accurately represent the way particulars are extended in time. The perdurantist approach is best able to describe temporal changes wherein roles are spatio-temporal extents of individuals.
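The perdurantist reading can be sketched in a few lines; this is an illustrative toy (the class and role names are assumptions, not the paper's models), in which a role is simply a temporal part of the individual playing it.

```python
# Illustrative sketch (example names only): the perdurantist view of
# roles, where a role is a temporal part of the 4D extent of the
# individual that plays it.
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    whole: str        # the individual whose part this is
    role: str
    start: int        # years, for simplicity
    end: int

# One individual, extended over time, with two overlapping role-parts,
# echoing the running example: a student who is also, for part of that
# period, Student Union President.
student = TemporalPart("alice", "RegisteredStudent", 2018, 2022)
president = TemporalPart("alice", "StudentUnionPresident", 2020, 2021)

def within(inner, outer):
    """Does one temporal part fall inside another's extent (same whole)?"""
    return (inner.whole == outer.whole
            and outer.start <= inner.start
            and inner.end <= outer.end)

print(within(president, student))  # True: the presidency falls within
                                   # the period of being a student
```

On this view the question "was she President while a student?" becomes a mereological question about extents, which is what gives the perdurantist representation its explanatory power over temporal change.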

A 4-Dimensionalist Top Level Ontology (TLO):

Mereotopology and Space-Time

This presentation describes the 4-dimensionalist top level ontology (TLO), based upon mereotopology and space-time, being developed for the Information Management Framework (IMF). It describes the agile, iterative, modular approach adopted. It situates the 4-dimensional approach in terms of its ontological choices. It outlines the scope of the first iteration, based upon requirements that emerge from industrial standards such as Building Smart, STEP and TC211/INSPIRE. It describes the spatio-temporal candidates for ontological analysis that emerge from these standards. It then provides a historical overview of the use of worldlines to characterise these candidates, and builds upon this for one example: coordinate systems. Finally it provides an overview of how space-time can be modularised.
Presentation Structure

  • Preliminaries - overall approach: How, broadly speaking, do we develop the ontology?
  • Situating 4D in ontological space: A requirement for space-time is central
  • Broad modularisation context
  • First iteration scope: What should the scope of the first ‘MVP’ be?
  • Top-down and bottom-up approach
  • Space-time – top-down workstream
  • Space-time: Foundation Data Model: from worldlines to spatial objects and locations
  • Space-time: top-level-ontology: from core to worldlines