https://doi.org/10.1515/auto-2022-0076

Open Access, CC BY 4.0 license. Published by De Gruyter, September 3, 2022.

KI-Engineering – AI Systems Engineering

Systematic development of AI as part of systems that master complex tasks

Julius Pfrommer, Thomas Usländer and Jürgen Beyerer

Abstract

KI-Engineering – translated as AI Systems Engineering – aims at the development of a new engineering practice at the intersection of Systems Engineering and Artificial Intelligence. Its goal is to professionalize the use of AI methods in a systems engineering context. The article defines KI-Engineering and compares it with historical examples of research fields that gave rise to engineering disciplines. It furthermore discusses the long-term challenges where further development is needed and the results already achieved in the context of the Competence Center for KI-Engineering (CC-KING).

Zusammenfassung

KI-Engineering – in full, AI Systems Engineering – describes the development of a new engineering practice at the intersection of Systems Engineering and Artificial Intelligence. The goal is to professionalize the application of AI methods in the context of complex development projects. The article defines KI-Engineering and compares it with historical examples of the emergence of engineering disciplines. It furthermore discusses the long-term development needs and the results already achieved within the Competence Center for KI-Engineering (CC-KING).

1 Motivation

Artificial Intelligence (AI) and Machine Learning (ML) have shown tremendous success in many areas of application. However, many successes are reported for fixed and unchanging benchmark datasets or simulation environments. Many additional considerations come into play when moving from these benchmark environments to real systems. What can be observed is that many applications of AI in industrial systems achieve good results in proof-of-concept implementations, yet are not making the transition into long-term productive use.

Neither in science nor in international standardization is there a universally agreed definition of AI. However, [12] defines an AI system as an “engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives”.

We consider that such AI systems are typically elements (or sub-systems) of a larger overall system, in which AI assists in achieving the system’s purpose under defined constraints. The system model we assume is depicted in Fig. 1. An overall system is eventually commissioned and delivered into productive use. The overall system decomposes into a hierarchy of subsystems, linked by their interfaces, where some sub-systems may apply AI methods. Examples of other sub-systems of an overall system comprise mechanical and electrical components, sensors, actuators, data management and human-machine interfaces. Furthermore, AI-based enabling systems can be part of the development environment (which is not deployed into operation as part of the overall system). Examples of this are databases with training samples for machine learning methods, training algorithms and simulation environments.

According to Simon Ramo, Systems Engineering considers “the design of the whole as distinguished from the design of the parts” [5]. We argue that this holistic approach has not yet been applied to the engineering of AI systems as parts of overall systems, and that this is a reason why many proof-of-concept implementations do not transition into long-term productive use. To resolve this, we propose the development of a dedicated engineering discipline for AI in large-scale and industrial applications with systematic procedures and tools. We call this discipline “AI Systems Engineering” (i. e., the engineering of AI systems), or in German, in short, “KI-Engineering”, combining the German term “Künstliche Intelligenz” with Engineering.

AI Systems Engineering addresses the systematic development and operation of AI-based solutions as part of systems that master complex tasks.

Figure 1: Conceptual view of AI systems as part of an overall system. Some subsystems of the overall system make use of AI methods. Other possible subsystems are mechanical, electrical, and so on. Note that AI methods, such as the training of a machine learning model or a simulation model for data generation, can also be part of an enabling system in the development environment. The development environment differs from the overall system in that it is not deployed into operational use after the development. If an enabling system is eventually deployed for operational use, e. g., for retraining of a model during system maintenance, then it is considered a part of the overall system itself. See [20] for additional details.

This publication is structured as follows. In Section 2 we look at historical examples where results from basic research led to engineering practices. Section 3 describes the long-term challenges to be solved by AI Systems Engineering in terms of application considerations, general development and operational challenges, as well as fundamental research questions. Section 4 outlines how we structure and organize its establishment as part of the Competence Center for KI-Engineering (CC-KING). The paper concludes in Section 5 with a summary and outlook.

2 Characteristics of emerging engineering disciplines

Shaw argues that a professional engineering discipline evolves when commercial practice rooted in craftsmanship and production expertise (i. e., best practices and routine knowledge) is combined with scientific knowledge [22]. We briefly trace the emergence of selected engineering disciplines and derive conclusions for AI Systems Engineering. The selected disciplines are Electrical Engineering, Control Engineering, Software Engineering and Systems Engineering, presented in the chronological order in which they emerged.

Electrical engineering

The theoretical body at the foundation of Electrical Engineering was laid out by Maxwell in 1873 [16]. Considering not only the flow of electrons but also electromagnetic fields is necessary for the understanding of many phenomena, including electrical motors and radio communication. But not every electrical engineer works on the level of Maxwell’s equations. Many models of electrical systems with different levels of abstraction are commonly used; examples are CAD systems for the planning of electrical installations and the SPICE family of simulators for analog circuits [19]. Electrical Engineering did not emerge only at universities, but also as a profession with vocational training for the many electricians who install and maintain infrastructure and equipment in private and professional environments. The largest professional organization for electrical engineers is the Institute of Electrical and Electronics Engineers (IEEE, founded in 1963) with more than 400,000 members as of 2022.

Control engineering

The goal of control theory [24] is to improve the behavior of dynamical systems. A feedback controller observes the current system state in real time and sets control values to act on it. For example, a controller might be tasked to approach a target set point or to follow a state trajectory. While mechanical systems for feedback control have been used for centuries, such as the governor on the Watt steam engine, the 20th century saw a change first to electrical and then to digital controllers [18]. Simple PID feedback controllers are today part of nearly every household appliance. More complex controllers govern the behavior of every modern car engine, spacecraft and robotic actuator, to name just a few applications. Recent years have seen the increased use of data-driven methods for control. Methods like Reinforcement Learning [26] forego an explicit mathematical model of the target system and learn control policies directly from empirical observations. This decreases the complexity of control engineering yet requires more data. The inherent technical complexity and the amount of work to develop and qualify suitable controllers have prevented the emergence of a vocational profession. The education of control engineers is focused on the university level, with practicing engineers mostly working in R&D where the effort to improve a product’s performance with advanced control methods is warranted. The largest professional organization for control engineers is the International Federation of Automatic Control (IFAC, founded in 1957).
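To make the classical baseline concrete, a discrete-time PID controller fits in a few lines of code. The following Python sketch is a generic textbook formulation rather than anything from the cited literature; the gains, time step and the first-order plant used in the usage example are illustrative assumptions.

# Minimal discrete-time PID controller (textbook form; gains are illustrative).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        # Error between the target set point and the observed system state.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Control value composed of proportional, integral and derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: regulate a hypothetical first-order plant dx/dt = u - x to a set point.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
state = 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=state)
    state += (u - state) * pid.dt  # explicit Euler step of the plant model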

Software engineering

Whereas software was originally subordinate to the development of computing hardware, the 1960s saw an explosion in software complexity. The resulting “software crisis” of many failed large-scale projects gave rise to software engineering as its own discipline [15], [22], [29]. Today millions of professional programmers practice their craft – many with formal training as software engineers, but also many talented autodidacts and on-the-job learners. The tools of the trade took many years to develop and refine. They include many different programming languages, source code versioning, object-oriented design, debugging tools, unit tests and many more. The last 20 years saw a big change in the way software is developed with the emergence of more iterative, agile development models [1]. The oldest professional organization for software engineering is the Association for Computing Machinery (ACM, founded in 1947).

Systems engineering

Systems Engineering is a relatively new interdisciplinary field that appeared in the wake of large-scale military and civilian development projects like the NASA Apollo missions [5], [11]. According to [3], the competence profile sits between that of a project manager and that of a specialist engineer. Systems Engineering becomes particularly important as the size of projects increases: high technical complexity and large project teams require an increased amount of project structure and coordination. For this, professional organizations such as the International Council on Systems Engineering (INCOSE, founded in 1990) define best practices and organize conferences and events.

Beyond engineering disciplines, Greenwood has described the attributes of a profession [8]: Systematic Theory, Authority, Community Sanction, Ethical Code, and Culture. Together with the historical examples we derive six characteristics of newly established engineering disciplines:

C1 Systematic Theory

A new discipline has to define the domain for which it is competent – and fulfill the promise of competency. This is ensured by a mature and internally consistent theoretical framework. In the absence of a systematic theory, e. g., if practices are purely based on speculation and fashion, there is a risk that results cannot be properly attributed and that investments into tool development will be in vain once the fashion of the profession changes.

C2 Established Tool Ecosystem

The results from basic research are packaged in reusable tools, particularly software packages. Reusable tools reduce development effort and also the risk of failures from implementation errors, as much more scrutiny can go into a tool that is used across many projects. Standard interfaces and exchange formats ensure that different tools for different development tasks can interoperate.

C3 Established Processes

Project planning in engineering cannot be a “journey into the unknown”. The project manager may not be an expert in every domain touched by the development. Relying on established processes and ways to evaluate risks helps to plan and manage the overall development process. The steps to achieve a certain result should be predictable in their sequence and, as much as possible, also in their duration. Ideally, well-defined development outcomes establish clear input-output relations between development steps.

C4 Standardization and Legal Requirements

We distinguish between recommendations and widely accepted industrial practices (de-facto standards) and normative definitions that may even become relevant from a legal perspective (de-jure standards). Industries with normative regulation include, for example, health care, agriculture, finance, machinery and the mobility sector. Such regulations typically define not only requirements but also certification procedures for engineering solutions. For new technologies, the absence of appropriate regulation can be an even bigger hurdle. For some AI-related developments, such as autonomous driving, the lack of prior experience with the certification of AI methods in a safety-critical context has delayed the deployment of solutions for years or even decades. On the other hand, the certification requirement ensures the safety of the public and drives the performance level of solutions.

C5 Education and professional training

The scientific body of knowledge as well as best practices need to be taught to the next generation of engineers, on whose consistent expertise industry can rely. For a large-scale development project, it has to be ensured that enough qualified personnel can be recruited or even trained during the runtime of the development. There should also be a baseline curriculum whose content can be expected of degree holders.

C6 Professional Societies

The historical examples show that the emergence of new engineering disciplines is accompanied by the emergence of professional associations and learned societies for research. They organize events and the exchange within the emerging community. Some of these associations have put forward technical and ethical guidelines for their members.

We come back to these characteristics in Section 4 where we discuss the current and future steps to establish AI Systems Engineering.

3 Challenges of AI Systems Engineering

Figure 2 shows a classification of the challenges addressed by AI Systems Engineering. They are aligned on two contrastive axes, from basic research to application-specific considerations and from initial development to long-term operations. Cultural differences between the participating disciplines that come together in AI Systems Engineering permeate all of them.

Figure 2: Classification of the challenges to be addressed by AI Systems Engineering. The blue arrows indicate contrastive axes from basic research to application-specific considerations and from the initial development to the long-term operation of systems. Cultural differences between the disciplines coming together in AI Systems Engineering have an influence on all the other challenges.

Figure 3: Dimensions where AI Systems Engineering applications differ from “normal AI”.

3.1 Application dimensions

AI Systems Engineering tackles applications that differ from “normal AI”. We project these differences onto three dimensions that largely characterize applications and identify the origin of application-specific challenges for AI Systems Engineering.

Criticality dimension

Refers to the impact of a non-performing system on safety, business-critical functions, data protection, or other risk areas.

Impact for AI Systems Engineering: When the criticality is high, special measures and possibly an official certification are required to ensure correct performance of the overall system that the AI solution is a part of. How to measure: Failure mode and effects analysis (FMEA), or in general (corporate) risk management.
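As a sketch of how such a risk assessment might be quantified, FMEA commonly ranks failure modes by a risk priority number (RPN), the product of severity, occurrence and detection ratings. The Python snippet below is a minimal illustration; the failure modes and the 1-10 ratings are hypothetical, not taken from the article.

# Illustrative FMEA ranking: RPN = severity * occurrence * detection (1-10 scales).
failure_modes = [
    {"mode": "ML model misclassifies a defect", "severity": 8, "occurrence": 4, "detection": 6},
    {"mode": "Sensor drift corrupts model input", "severity": 6, "occurrence": 5, "detection": 3},
]
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]
# Highest-risk failure modes first; these drive mitigation and certification effort.
for fm in sorted(failure_modes, key=lambda f: -f["rpn"]):
    print(f"{fm['mode']}: RPN = {fm['rpn']}")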

Organizational complexity dimension

Refers to the coordination overhead for the development and operation of an AI-based system. This could be due to large, distributed and heterogeneous teams or due to the need for cross-organizational alignment.

Impact for AI Systems Engineering: AI development often relies on individual key developers, akin to “cowboy coding”. Large systems development in complex organizational settings needs more structured coordination. How to measure: Organization Theory defines metrics for the coordination effort of organizational structures.

Physical reality dimension

Refers to the application being grounded in the physical world with a direct relation to natural sciences (physics, chemistry, etc.) and traditional engineering disciplines (mechanical/electrical/industrial/civil engineering, etc.). This dimension is an indicator for criticality, but not all critical applications are close to the physical reality (for example an AI-based intrusion detection for cyber security).

Impact for AI Systems Engineering: The more immediately an AI system is related to physical reality, the more natural laws and phenomenological models can be integrated as prior knowledge. This requires new methods and tools. Furthermore, collaboration with traditional engineering disciplines often requires adjusting to their practices (which are defined for good reason and possibly as a legal requirement). How to measure: Are the theories of the natural sciences applicable? Does the application require collaboration with traditional engineering disciplines?

3.2 Development challenges

The following development challenges appear to be universal and are independent of the application dimensions discussed in the previous section. In this section we treat only those challenges that originate in the development phase. Additional challenges are discussed in Section 3.4 on operational challenges; many operational challenges require a solution to be designed and implemented during system development, but the challenge itself does not originate from the development phase. We also do not discuss challenges that occur in “normal” AI development as well, and refer to existing discussions in [2], [21], [25].

3.2.1 Technical development challenges

Performance prediction

It is often the case that the performance of an AI-based approach cannot be estimated in advance, but must instead be determined empirically once a solution has been sufficiently developed for evaluation. This poses a risk for an overall development process where other subsystems are specified and implemented based on an anticipated performance of some AI-based subsystem.
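One partial mitigation is to report empirical performance together with its uncertainty instead of a single number, so that downstream subsystem specifications can budget for the spread. A minimal sketch, assuming scikit-learn and a synthetic dataset (both are illustrative choices, not from the article):

# Cross-validated performance estimate with spread (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
# Report mean accuracy plus fold-to-fold spread, not a single point estimate.
print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")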

Data availability

Data-driven methods such as machine learning require operational data very early in the development stage. This is problematic when a) data is difficult to procure or b) the system that produces the operational data is the result of the same development process as the ML solution.

Heterogeneous deployments

An AI-based solution offers increased added value if it is deployed not just in one instance of the overall system, but in many systems of this type in parallel. Many of these deployments come with subtly different external environments and boundary conditions. These can be difficult to capture as requirements or assumptions on the system environment during the development phase.

Domain knowledge integration

The classical engineering fields have preexisting theories and expert knowledge for the systems they are concerned with. The integration of domain knowledge can greatly reduce the number of data samples required for AI methods, increase confidence in the system and allow extrapolation outside the range of known sample measurements. But there is currently no universal interface between expert domain knowledge and the commonly used AI methods – particularly data-driven machine learning. So the integration of domain knowledge requires additional work in which textbook AI methods need deep integration with external sources of knowledge and engineering models for certain system aspects, e. g., in the language of differential equations. This is typically custom work that comes with increased uncertainty in terms of development duration and eventual performance.
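One possible shape of such custom work is to penalize the residual of a known differential equation in the training loss (a physics-informed formulation). The PyTorch sketch below is a hedged illustration: the decay equation dx/dt = -k*x, the network architecture, the synthetic data and the weighting factor 0.1 are all assumptions made for the example.

# Sketch: integrating a known ODE (dx/dt = -k*x) into an ML training loss.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
k = 0.5                          # known physical constant (domain knowledge)
t_data = torch.rand(64, 1)       # measurement times
x_data = torch.exp(-k * t_data)  # measured states (synthetic here)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    t = t_data.clone().requires_grad_(True)
    x = net(t)
    # Residual of the differential equation, evaluated via autodiff.
    dx_dt, = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)
    ode_residual = dx_dt + k * x
    # Data-fit term plus physics penalty; the weighting is an illustrative choice.
    loss = torch.mean((net(t_data) - x_data) ** 2) + 0.1 * torch.mean(ode_residual ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()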

3.2.2 Organizational development challenges

AI Systems Engineering, like systems engineering in general, is a transdisciplinary and integrative approach [23]. There is a need to overcome the cultural and methodological differences between the major stakeholders and roles involved in such a project, who were possibly educated and socialized in different research and engineering disciplines.

Linear vs. iterative development models

Many engineers are trained not to start the design before the requirements are fully fixed and agreed upon with the customer. Software engineers and data scientists, on the other hand, are more used to shifting requirements. As a consequence, representatives of different disciplines favor either linear, iterative (agile) or mixed (hybrid) development models. Although agile models allow more flexibility in the assignment of resources and the adjustment of solution approaches, especially when requirements change during the project, they may be very expensive, if not impossible, when there are dependencies on suppliers and/or the delivery of hardware components, as well as milestones to be achieved according to contracts.

Scaling to large heterogeneous teams

The management of large teams – or even development initiatives across organizational boundaries – requires more coordination overhead. Identifying and establishing the coordination between teams can be particularly difficult if the professional cultures and development methodologies of the teams differ too much.

Another big question is which roles need to be filled and which skills to hire for. Oftentimes a hiring decision is also a technical decision, as an expert in control theory will make use of that background, whereas an expert in Reinforcement Learning might use a wholly different technical basis for the same task.

Project planning and risk management

The use of AI-based methods can introduce additional risk. For example, an initial solution approach might simply not be feasible for the amount of available data. Or a different type of sensor becomes necessary at a stage in the project where the mechanical and electrical design ought to no longer change. Knowing about these risks and how they impact the overall project is an important task for a project manager. It becomes increasingly difficult for project managers to have sufficient insight into all the disciplines of a project – not just the AI-related ones.

3.3 Cultural differences

Cultural differences exist not only between nations but also between professions. These differences become apparent in the best practices for development processes and the tools preferred by different communities, and also in the social interaction between people. But cultural differences go beyond processes and tools. Hofstede [9] has established widely accepted dimensions for national and organizational cultures, shown in Figure 4. Some of these differences are reinforced by the self-selection of people choosing their profession and also by their education and the transmission of professional cultures.

Figure 4: Hofstede’s national and organizational culture dimensions [9].

For example, according to the stereotype, a safety engineer is trained to be process-oriented, favoring closed systems over which he exerts tight control following the normative rules of a regulated industry. As another example, an application engineer, e. g., a mechanical, electrical or chemical engineer, would tend to use mathematical models to describe relationships between physical properties, whereas a computer scientist would rather build a formal information model. In a joint development, both models have to be consistent.

3.4 Operational challenges

Apart from the initial development, Systems Engineering also includes the “use, and retirement of engineered systems, using systems principles and concepts, and scientific, technological, and management methods” [23]. Once in operation, there are further challenges for subsystems that rely upon AI and ML methods. We split the operational challenges into those arising during the commissioning phase before established use and those that arise during established use.

3.4.1 Challenges to achieve established use

Besides the technical commissioning that brings a complex system into operation, organizational challenges like personnel training, organizational processes and change management also play a role in successfully transitioning into established use. It is especially important to achieve the acceptance of a solution by the end users – even though it may not be required for them to fully understand the inner workings of an AI-based system. Criteria that are relevant for acceptance include the following:

Agency of system operators

Operators that interact with a system on a daily basis want to understand its behavior and take control over it if required. A perceived loss of control may lead to push-back from the operators. To keep their agency, operators need insight into a system – for example using techniques from Explainable AI – and appropriate user interfaces to interact with it. Note that a fully automated system that does not require any interactions may be accepted faster than a system that requires interactions but only gives partial access for control by the operator.

Usability

Even though AI-based systems can increase the degree of automation, they may require new and different work tasks from operators – for example, when manual settings of a system additionally have to be entered into a human-machine interface in order to be known to an AI component. If the usability of a system is insufficient, its performance may degrade due to a lack of interaction with human operators. It is then difficult to disambiguate the origin of these performance losses.

Visibility of improvements

Performance metrics that show improvements made by an AI-based approach help to convince daily operators as well as management to accept novel solutions.

3.4.2 Challenges during established use

Systems that are “left to their own devices” without supervision and maintenance often start to degrade and eventually fall out of use. The challenges after a system has reached established use include the following:

Distribution shift

Engineering assumptions have to be continuously monitored and validated, as ML methods rely upon training data sets built from the measured and/or simulated data that was available at the time of development. The origin of a shift may lie in gradual degradation, a change of the external operating conditions or structural changes in the underlying physical system itself. If the distribution of the actual data deviates too much from the distribution of the training data, re-training or another form of adaptation is needed to ensure correct operation, i. e., a jump back into re-development of the affected sub-system.
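A minimal monitoring sketch for such a shift, assuming SciPy is available and a single univariate input feature is observed: a two-sample Kolmogorov-Smirnov test compares operational data against a reference sample stored at training time. The data, sample sizes and significance threshold are illustrative.

# Sketch: detect a distribution shift in one input feature (SciPy assumed).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # feature at training time
live = rng.normal(loc=0.4, scale=1.0, size=1000)       # shifted operational data

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Shift detected (KS statistic = {stat:.3f}); consider re-training.")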

Usage shift

Human operators’ skills, expertise and cultural background may change. This may require adapting the explanation component of an ML-based sub-system. Furthermore, it has to be ensured that the ML methods inside still fit a possibly altered usage purpose or application domain of the originally developed AI system.

Maintenance

Typically, the maintenance activities of technical systems focus on the degradation of hardware components and its effects, and/or the update of software applications and infrastructure components. An example of this is a faulty sensor that needs to be replaced and recalibrated. In the case of AI/ML methods, it additionally has to be assessed whether the assumptions that led to the original design and configuration still hold.

Equipment availability

For a long-term deployment it can be a challenge to keep the development and runtime environment of an AI system operational. Technology moves fast and current systems quickly turn into legacy, especially if operating systems and drivers are no longer commercially available and maintained. Many technical systems, however, need to be maintained over many years or even decades. For these it is important to ensure that faulty hardware can be replaced and that software is maintained (e. g., with security patches) throughout the lifecycle of the overall system.

Transmitting technical know-how

Key people involved in the development of a system may leave the organization tasked with the operational phase. If a system runs stably for a long period of time, the knowledge required for upkeep, maintenance and updates may gradually become lost. This can pose a challenge if changes require an intervention in an AI sub-system that is little understood.

Typically, these challenges are considered at design time, and appropriate measures and processes are defined that take effect during the operational phase. In some cases, however, there is a need to jump back into a development phase for more substantial rework – especially if this impacts more than just one isolated AI sub-system.

3.5 Fundamental research questions

AI Systems Engineering aims for the use of AI in engineering practice. But it is not a mere consumer of the results of basic AI research: AI Systems Engineering also poses new challenges and motivates further basic research to remove currently existing hurdles and boundaries. We have grouped the fundamental research questions that arise in the context of AI Systems Engineering into three categories.

Low sample complexity

The term sample complexity describes the amount of data required to achieve a given level of performance. One approach for reducing the sample complexity is the integration of prior expert knowledge (including natural laws) [28]. Modern AI methods, especially those based on machine learning, receive information about the target domain mainly from empirical data. This leaves out additional domain knowledge whenever it is not formulated in a way that can be easily integrated with the AI method. As an example, differential equations are very common in physical models, but this representation of prior knowledge is largely incompatible with current machine learning methods (with a few exceptions such as [6]). The incompatibility leads to crude workarounds for representing the prior knowledge in the data, such as data augmentation or the generation of synthetic data. Further approaches to reduce sample complexity are Active Learning and adaptive experimental design to generate samples with a high information content [7], and data augmentation to artificially enlarge an existing data set by exploiting known and desired symmetries and invariants [27].
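To illustrate the last approach, the sketch below doubles a labeled image dataset by exploiting an assumed invariance of the label under horizontal mirroring. Whether such a symmetry actually holds is an application-specific assumption that must be checked.

# Sketch: data augmentation via a known invariance (horizontal flip).
import numpy as np

def augment_with_flips(images, labels):
    """Double the dataset, assuming labels are invariant under mirroring."""
    flipped = images[:, :, ::-1]  # flip each image along its width axis
    return np.concatenate([images, flipped]), np.concatenate([labels, labels])

images = np.random.rand(100, 32, 32)        # placeholder grayscale images
labels = np.random.randint(0, 2, size=100)  # placeholder binary labels
aug_images, aug_labels = augment_with_flips(images, labels)
assert aug_images.shape[0] == 2 * images.shape[0]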

Robustness and adaptiveness

Real-world systems change over time. This results in so-called distributional shifts [30], where the underlying distribution changes between the sampling of the training data and the application of a model generated from that data. Hence, if AI-based systems are deployed operationally over a long time period, they have to handle distributional shifts to achieve a high level of robustness without regular maintenance and manual adjustment. Approaches to increase robustness include the runtime monitoring, detection and localization of distributional shifts, the – possibly automated – transfer learning to adjust existing models to the new distribution, as well as Domain Adaptation. An important example of distributional shifts is the sim2real problem [10], where models are trained in a simulated environment and then transferred for an application in the physical world.

Controllability

By controllability we understand that important properties of the behavior of an overall system including AI components can be designed and predicted in the development phase. (Controllability is a well-defined term in the field of Control Theory; here we do not use the term in that narrow mathematical sense.) Interesting properties of an overall system include its performance and hard performance bounds (not just an empirical performance evaluation), the assurance of safety criteria, the explainability of decisions [4], fairness and the absence of certain biases [17], as well as the prevention of misuse.
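As a small illustration of measuring one such property, the sketch below computes the demographic parity difference, one simple bias notion from the fairness literature surveyed in [17]. The decision and group arrays are hypothetical.

# Sketch: one bias metric for model decisions (demographic parity difference).
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Difference in positive-decision rates between two groups (0/1 arrays).
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # hypothetical binary decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute
print(f"Demographic parity gap: {demographic_parity_difference(y_pred, group):.2f}")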

4 CC-KING and steps towards AI Systems Engineering

In order to lay the foundation for AI Systems Engineering as an emerging engineering discipline and to establish an interdisciplinary community, we created the Competence Center Karlsruhe on AI Systems Engineering (CC-KING). CC-KING follows a cooperative, multi-partner approach motivated by the challenges and prerequisites to be met by engineering disciplines (see Sections 2 and 3). The CC-KING approach encompasses basic research, the development of methods and tools for AI Systems Engineering, as well as validation in real-world scenarios provided by industrial partners. The working levels of CC-KING are (cf. Fig. 5):

Drivers

The main drivers for CC-KING on a strategic level are the overarching need and trend for digitalization, the ambition to innovate by exploiting the results of AI research, the potential of AI methods and technologies to solve optimization, sustainability and resilience problems, and regulations such as the emerging EU AI Act.

Domains

The discipline of AI Systems Engineering is per se domain-agnostic. However, as a starting point, CC-KING focuses on the application domains of industrial production and mobility systems, with the ambition to transfer the basic concepts, methods and tools to other application domains later on, e. g., smart cities and smart regions, medical applications or construction.

Scenarios

From the selected domains, application scenarios are chosen and formulated in terms of use cases. The purpose of the use case specification is to provide a bridge between the requirements space (representing the demands and constraints of customers) and the solution space (representing the tools and components of a development environment and the components of a targeted system). Furthermore, CC-KING places a focus on transferring the reusable and generic design artefacts, concepts and tools back to the stakeholders of the application domain with the aim of re-use and commercialization.

Development

Driven by the requirements derived from the domain-specific scenarios and use cases, but also triggered and motivated by the results of the research working level (see below), CC-KING develops methods and tools that are offered to the community by means of publications and/or commercial offers including open source.

Research

As AI Systems Engineering is an emerging discipline, there is not yet an established methodological foundation upon which methods and tools can rely. This working level comprises all the basic research activities of AI Systems Engineering, e. g., means to formally describe AI systems and methods, as well as engineering approaches to guarantee the non-functional requirements of AI systems such as dependability, controllability, security and safety.

Note that the working levels do not imply a hierarchy; rather, their activities are mutually dependent and synchronized in an agile manner. The working levels cover the prerequisites for the emergence of engineering disciplines from Section 2:

Figure 5: Working levels of the CC-KING Competence Center for KI-Engineering.

The characteristic C1 Systematic Theory is met by work on basic research that is covered neither by traditional AI research nor by Systems Engineering. Showing the gaps and establishing a new body of work in the intersection helps to carve out the distinct domain of concern for AI Systems Engineering. The characteristic C2 Established Tool Ecosystem is met by the many technical developments on the development working level of CC-KING. Many of these tools are attached to steps of the PAISE process model for AI Systems Engineering [20] – which also covers the characteristic C3 Established Processes. The characteristic C4 Standardization and Legal Requirements is approached by input to and active participation in the standardization landscape. This includes ongoing standardization on the ISO level (e. g., ISO/IEC 22989 [12], ISO/IEC 23053 [13]), the link to AI management systems (AIMS [14]) and the German standardization roadmap on AI, 2nd edition, in which the topic “KI-Engineering” was included. The standardization of AI in safety-critical systems is especially relevant; the objective is to establish a certification framework by which safety certification becomes feasible. A systematic consideration of these certification issues in AI Systems Engineering would permit the use of AI methods and technologies where it is currently prohibited because of the associated risks. The prerequisite C5 Education and professional training is met by the inclusion of KI-Engineering topics in university courses at the Karlsruhe Institute of Technology (KIT) and by the exchange with further universities to establish a common curriculum and a baseline of what should be taught to future practitioners. Regarding the exchange between professionals and research, CC-KING is not a bona fide example of the C6 Professional Societies, as it so far focuses on the Karlsruhe region only. Still, it organizes events and assists in the establishment of the community overall. Further establishment is an emerging phenomenon once the number of professionals for AI Systems Engineering becomes sufficiently large. Until then, the topic can be hosted in the appropriate sections of the professional organizations for software engineering, control engineering and systems engineering.

5 Conclusion and outlook

We have introduced KI-Engineering, translated as AI Systems Engineering, a new engineering discipline to enable advanced overall systems that leverage AI in some of their subsystems and components. As the scale of systems engineering increased, a systematization and standardization of the tools and development models has taken place. We foresee a similar development for the use of AI in the context of larger overall systems. The article further identified and discussed the long-term challenges for AI Systems Engineering as well as prerequisites for the emergence of new engineering disciplines. These are compared with the working levels and results of the CC-KING Competence Center for AI Systems Engineering.

Going forward, the first two years of CC-KING have delivered the results needed to now increase outreach and raise awareness for the topic. CC-KING can be the initial kernel that starts a much bigger trend. For this, the exchange with other researchers and practitioners to develop an overall consensus is of the greatest importance.

Funding statement: This work was supported by the Competence Center Karlsruhe for AI Systems Engineering (CC-KING, https://www.ai-engineering.eu) sponsored by the Ministry of Economic Affairs, Labour and Tourism Baden-Württemberg.

About the authors

Julius Pfrommer

Julius Pfrommer is head of a research group on “distributed cyber-physical systems” at Fraunhofer IOSB, focusing on the use of AI to solve “hard problems” in industry. He is furthermore the scientific director of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING) and holds an appointment at the Karlsruhe Institute of Technology for his course on convex optimization theory for machine learning and engineering.

Thomas Usländer

Thomas Usländer holds a degree in Computer Science from the University of Karlsruhe, Germany, and a PhD in Engineering of the Karlsruhe Institute of Technology (KIT), Germany. He is head of the department “Information Management and Production Control” and spokesperson of the business unit “Automation and Digitalization” at Fraunhofer IOSB. His research interests include the analysis and design of open, secure and dependable service architectures for the Industrial Internet of Things and Industrie 4.0. Thomas Usländer is the project manager of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING).

Jürgen Beyerer

Jürgen Beyerer has been a full professor for informatics at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology KIT since March 2004 and director of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB in Ettlingen, Karlsruhe, Ilmenau, Görlitz, Lemgo, Oberkochen and Rostock. Research interests include automated visual inspection, signal and image processing, variable image acquisition and processing, active vision, metrology, information theory, fusion of data and information from heterogeneous sources, system theory, autonomous systems and automation. Jürgen Beyerer is the chair of the scientific board of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING).

References

1. Agile Alliance. 2001. Manifesto for agile software development. www.agilemanifesto.org.

2. Amershi, Saleema, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi and Thomas Zimmermann. 2019. Software engineering for machine learning: A case study. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, pp. 291–300. doi:10.1109/ICSE-SEIP.2019.00042.

3. Arnold, Stuart. 2000. Systems engineering: From process towards profession. In: INCOSE International Symposium, volume 10. Wiley Online Library, pp. 796–803.

4. Adadi, Amina and Mohammed Berrada. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6: 52138–52160. doi:10.1109/ACCESS.2018.2870052.

5. Booton, Richard C. and Simon Ramo. 1984. The development of systems engineering. IEEE Transactions on Aerospace and Electronic Systems 4: 306–310. doi:10.1109/TAES.1984.4502055.

6. Chen, Ricky T. Q., Yulia Rubanova, Jesse Bettencourt and David K. Duvenaud. 2018. Neural ordinary differential equations. In: Advances in Neural Information Processing Systems, volume 31.

7. Ehrenfeucht, Andrzej, David Haussler, Michael Kearns and Leslie Valiant. 1989. A general lower bound on the number of examples needed for learning. Information and Computation 82(3): 247–261. doi:10.1016/0890-5401(89)90002-3.

8. Greenwood, Ernest. 1957. Attributes of a profession. Social Work: 45–55. doi:10.1177/0145482X6005400504.

9. Hofstede, Geert. 1991. Cultures and Organizations: Software of the Mind. McGraw-Hill Book Company.

10. Höfer, Sebastian, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Melissa Mozifian, Florian Golemo, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, et al. 2021. Sim2real in robotics and automation: Applications and challenges. IEEE Transactions on Automation Science and Engineering 18(2): 398–400. doi:10.1109/TASE.2021.3064065.

11. Ibne Hossain, Niamat Ullah, Raed M. Jaradat, Michael A. Hamilton, Charles B. Keating and Simon R. Goerger. 2020. A historical perspective on development of systems engineering discipline: a review and analysis. Journal of Systems Science and Systems Engineering 29(1): 1–35. doi:10.21079/11681/40259.

12. ISO/IEC 22989: Information technology – Artificial intelligence – Artificial intelligence concepts and terminology (under development). ISO/IEC JTC 1/SC 42 Artificial Intelligence, 2022.

13. ISO/IEC 23053: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) (under development). ISO/IEC JTC 1/SC 42 Artificial Intelligence, 2022.

14. ISO/IEC 42001.2: Information technology – Artificial intelligence – Management system (under development). ISO/IEC JTC 1/SC 42 Artificial Intelligence, 2022.

15. Loui, Michael C. 1995. Computer science is a new engineering discipline. ACM Computing Surveys 27(1): 31–32. doi:10.1145/214037.214049.

16. Maxwell, James Clerk. 1873. A Treatise on Electricity and Magnetism, volume 1. Clarendon Press.

17. Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys 54(6): 1–35. doi:10.1145/3457607.

18. Morari, Manfred and Jay H. Lee. 1999. Model predictive control: past, present and future. Computers & Chemical Engineering 23(4-5): 667–682. doi:10.1016/S0098-1354(98)00301-9.

19. Nagel, Laurence W. 1975. SPICE2: A computer program to simulate semiconductor circuits. Technical Report ERL-M520. University of California, Berkeley.

20. PAISE® – The process model for AI systems engineering. Whitepaper of the Competence Center for KI-Engineering CC-KING, 2022.

21. Sculley, David, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo and Dan Dennison. 2015. Hidden technical debt in machine learning systems. In: Advances in Neural Information Processing Systems, volume 28.

22. Shaw, Mary. 1990. Prospects for an engineering discipline of software. IEEE Software 7(6): 15–24. doi:10.1002/0471028959.sof259.

23. Sillitto, Hillary, James Martin, Dorothy McKinney, Regina Griego, Dov Dori, Daniel Krob, Patrick Godfrey, Eileen Arnold and Scott Jackson. 2019. Systems engineering and system definitions. INCOSE – International Council on Systems Engineering.

24. Stengel, Robert F. 1994. Optimal Control and Estimation. Courier Corporation.

25. Studer, Stefan, Thanh Binh Bui, Christian Drescher, Alexander Hanuschkin, Ludwig Winkler, Steven Peters and Klaus-Robert Müller. 2021. Towards CRISP-ML(Q): a machine learning process model with quality assurance methodology. Machine Learning and Knowledge Extraction 3(2): 392–413. doi:10.3390/make3020020.

26. Sutton, Richard S. and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. doi:10.1109/TNN.1998.712192.

27. Van Dyk, David A. and Xiao-Li Meng. 2001. The art of data augmentation. Journal of Computational and Graphical Statistics 10(1): 1–50. doi:10.1198/10618600152418584.

28. von Rueden, Laura, Sebastian Mayer, Katharina Beckh, Bogdan Georgiev, Sven Giesselbach, Raoul Heese, Birgit Kirsch, Michal Walczak, Julius Pfrommer, Annika Pick, et al. 2021. Informed machine learning – a taxonomy and survey of integrating prior knowledge into learning systems. IEEE Transactions on Knowledge & Data Engineering: 1–1. doi:10.1109/TKDE.2021.3079836.

29. Wirth, Niklaus. 2008. A brief history of software engineering. IEEE Annals of the History of Computing 30(3): 32–39. doi:10.1109/MAHC.2008.33.

30. Zhang, Kun, Bernhard Schölkopf, Krikamol Muandet and Zhikun Wang. 2013. Domain adaptation under target and conditional shift. In: International Conference on Machine Learning. PMLR, pp. 819–827.

Received: 2022-06-26
Accepted: 2022-08-05
Published Online: 2022-09-03
Published in Print: 2022-09-27

© 2022 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
