Category Archives: Blogs & Papers

The Temporal Dynamics of Intention: Integrating Libet, OODA, and FRAM

David Slater


Abstract


This work proposes a new functional model of volition that integrates empirical timing data from Libet’s experiments, the operational logic of the OODA decision cycle, and the systemic architecture of the Functional Resonance Analysis Method (FRAM). Rather than treating the brain as a collection of isolated centres responsible for discrete cognitive or emotional roles, the model conceptualises intention and action as emergent properties of dynamically synchronised neural assemblies distributed across the cortex and subcortex. These assemblies interact through rhythmic oscillatory mechanisms, forming transient system-wide avalanches that activate learned and innate behavioural pathways. The resulting framework offers a mechanistic explanation for the temporal evolution of conscious intention, veto control, action execution, and feedback learning. By grounding cognition in dynamic coupling rather than localisation, the model provides a basis for simulation, clinical insight, and the design of aligned human–AI interaction systems. Using the FRAM-built system model, the natural variability of the functions can be examined systematically to determine its effects on system behaviour and performance. For example, what difference would variability in the reticular gating function make to the overall cognition process? This offers a real prospect of exploring neurodiversity scientifically.


Keywords: Volition; neural synchronisation; OODA loop; Functional Resonance Analysis Method (FRAM); predictive processing; agency; decision-making; neurodynamics.

Download the PDF

Teaching FRAM: The Evolution of Understanding Complex Systems

INTRODUCTION — FROM CURIOSITY TO COMPLEXITY

When we first encounter the world, we do so like a child taking its first steps — seeing, touching, sensing, and asking the simplest of questions: What is it? How does it work? Why does it do that? These are the same questions that drive all human understanding, from early wonder at how toys move to the most sophisticated explorations of how societies and technologies function.


At first, the WHAT is tangible. A child learns that blocks fit together, that pushing a ball makes it roll, that pressing a lever releases a spring. The HOW emerges through play — through experimenting with things that can be touched and seen. We learn by building models: bricks, Lego, Meccano — small, hands-on systems that reveal cause and effect. These are our first experiments in reasoning about function.
As machines appeared, that same curiosity evolved into engineering. Early engineers were pragmatic thinkers, focused on keeping machines running. They needed to know how mechanisms worked, not necessarily WHY. The goal was to make systems reliable — to maintain function when parts failed, and to restore it when they broke. Analytical tools such as Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) emerged from this mechanical mindset. They decomposed systems into components and traced how failures propagated to effects.


But humans were never components. When people entered the system—as operators, decision-makers, and designers—the simple model of cause and effect began to fracture. Unlike a valve or a gear, a person’s performance can vary with context, fatigue, or ambiguity. This variability could not be diagrammed in logic trees. Written procedures tried to codify human work, but they captured only the what and how, never the why. Once human and social factors entered the picture, systems became complex, not merely complicated.

Download the PDF

    Modelling Contemporary Complex Systems: From Structure to Possibility

    Abstract


    This piece explores the challenges of modelling contemporary complex systems, which are characterized by nonlinearity, feedback, adaptation, and emergence. It distinguishes complex systems from complicated ones using the Cynefin framework and highlights the limitations of traditional modeling approaches that rely on predefined structures and stable boundaries. A structural-semantic classification of modeling methods is proposed, emphasizing semantic substrate, structural commitment, and representational ontology. The Functional Resonance Analysis Method (FRAM) is introduced as a metamodel that focuses on functional dependencies and variability, enabling sensemaking under uncertainty. FRAM’s application in digital twinning is discussed, showcasing its ability to dynamically adapt to real-world system behaviour. The document concludes by advocating for diverse modelling methodologies to address the complexity of modern systems, with FRAM playing a pivotal role in modelling emergent and unpredictable behaviours.

    From Complicated to Complex Systems


    Across engineering, safety, healthcare, infrastructure, finance, and AI-enabled socio-technical domains, there is growing recognition that many systems of contemporary concern are complex rather than merely complicated. This distinction, articulated clearly in the Cynefin framework (Figure 1), is not merely semantic but foundational (Snowden & Boone, 2007). Complicated systems may involve many parts, yet their behaviour remains largely decomposable, analysable, and predictable given sufficient expertise. Complex systems, by contrast, are characterised by nonlinearity, feedback, adaptation, and emergence; their behaviour cannot be reliably inferred from the properties of individual components alone (Cilliers, 1998; Mitchell, 2009).

    Download the PDF

    Is AI Intelligent?

    David Slater


    Wrong Question?

    Contemporary discussion of artificial intelligence is dominated by questions about intelligence: whether current systems are intelligent, whether they approach general intelligence, or whether scaling alone might eventually produce it (Russell & Norvig, 2021). These questions, however, rest on an assumption that is rarely examined—that intelligence is the primary phenomenon of interest, and that consciousness, if it appears at all, comes later as an emergent by-product. This paper argues that this assumption is likely inverted. Evidence from evolutionary biology, neuroscience, and dynamical systems theory suggests that consciousness emerges before intelligence, and that intelligence can only arise once a system already possesses a unified internal orientation to the world (Panksepp, 1998; Pessoa, 2017).


    How did it Evolve?


    To make this claim precise, evolution must be understood in a broader sense than its familiar biological formulation. In non-equilibrium thermodynamics, driven open systems can spontaneously self-organise into stable dynamical regimes—often described as dissipative structures—that persist by channelling energy flows and exporting entropy (Prigogine, 1977; Nicolis & Prigogine, 1989). Under sustained driving, some regimes prove more stable than others, introducing a form of differential persistence that does not depend on genes, replication, or selection in the biological sense (England, 2015). Biological evolution by natural selection can therefore be understood as a powerful special case of a more general phenomenon: the selection of stable dynamical regimes under constraint (Kauffman, 1993).


    As such systems accumulate constraints—feedback loops, memory, boundary conditions, and separations of timescale—their attractor landscapes become richer and more structured (Mitchell, 2009). At this point, new functional properties can arise. One of the earliest and most consequential is awareness: the capacity for internal dynamics to covary reliably with environmental regularities in ways that stabilise behaviour. When this internal orientation becomes globally integrated into a single, metastable dynamic that continuously binds perception, salience, value, and action readiness, a system crosses a functional threshold that is usefully described as consciousness (Tononi et al., 2016; Mashour et al., 2018).


    What is Consciousness?


    Consciousness is defined here operationally rather than phenomenologically. The argument does not depend on resolving the “hard problem” of subjective experience, nor does it require commitment to any particular theory of qualia. Instead, consciousness is treated as a system-level dynamical property: the maintenance of a unified internal state that orients the system to the world under uncertainty. This definition is consistent with integrative and dynamical accounts in contemporary neuroscience, although it remains theoretically contested, particularly with respect to minimal …

    Download the PDF

    From Mind Maps to Neurons: The Evolution of FRAM Towards Quantitative Predictor-Corrector Systems

    Abstract

    This article traces the evolution of the Functional Resonance Analysis Method (FRAM) from its early role as a qualitative “mind map” of sociotechnical variability, through its analogy with neuronal and cognitive architectures, to its current development as a quantitative predictor–corrector framework. We demonstrate how FRAM can operationalise John Boyd’s OODA loop in the context of aircraft landing, modelling Observe, Orient, Decide, and Act not as abstract stages but as interdependent functions with measurable properties. Predictor–corrector dynamics, residuals between predicted and observed values, and doctrinal gates are encoded directly in metadata, enabling decision to be treated as a quantifiable process rather than a black box. Extending Llinás’s framework for situation control, the Orient phase is decomposed into functions that incorporate memory, doctrine, and cognitive filters alongside sensor fusion. Results show that FRAM can generate traceable time series of OODA activity, enforce stabilisation barriers, and reveal how decision restores congruity under stress. The approach demonstrates both the potential and the limitations of quantification, offering a credible pathway from metaphor to model in the analysis of decision-making within complex sociotechnical systems.


    Key words – Functional Resonance Analysis Method (FRAM); OODA loop; predictor–corrector; decision-making; situation awareness; aircraft landing; sociotechnical systems; variability; metadata modelling; resilience engineering.

    Download the PDF

    The Full Set – From Linear Risk to Emergent Safety Approaches in System Safety Analysis

    David Slater


    ABSTRACT


    Safety science has undergone a steady evolution from the analysis of mechanical failure to the modelling of emergent behaviour in complex socio-technical systems. Early quantitative methods such as Fault Tree Analysis (FTA) and Probabilistic Risk Assessment (PRA) established the foundations of analytical rigour, but their deterministic assumptions limited their capacity to explain human and organisational performance. The subsequent development of Task Analysis, Human Reliability Analysis (HRA), and the Human Factors Analysis and Classification System (HFACS) extended the scope to human variability but retained a linear, reductionist logic.


    The systems-thinking movement, beginning with Reason’s Swiss Cheese Model, Rasmussen’s AcciMap, and Leveson’s system-theoretic STAMP framework, introduced the ideas of feedback, hierarchy, and constraint. Hollnagel’s Functional Resonance Analysis Method (FRAM) completed this conceptual progression by modelling how variable functional interactions produce emergent outcomes. Together, these methods trace the transition from failure analysis to resilience analysis—from explaining what went wrong to understanding why things usually go right.


    In modern safety assessment, static or purely qualitative tools such as Bow-Ties, risk matrices, and LOPA are no longer sufficient. The integration of the quantitative precision of FTA and HRA, the systemic structure of STAMP, and the dynamic variability modelling of FRAM—augmented by metadata and AI reasoning—offers a unified, predictive framework. This convergence of control logic and functional resonance defines the next stage of system safety science.


    Keywords
    System Safety; Fault Tree Analysis (FTA); STAMP; STPA; FRAM; Functional Resonance; Resilience Engineering; Human Reliability; Probabilistic Risk Assessment (PRA); Safety-II; Socio-Technical Systems; AI-Assisted Safety Modelling; Large Language Models (LLMs)

    Download the PDF

    From Cortical Columns to Cognitive Circuits: Using FRAM to Model Recursive Reasoning in the Brain


    David Slater

    Abstract


    This paper presents an integrated account of how the Functional Resonance Analysis Method
    (FRAM) can be applied to the cortical microcircuit as a means of visualising and understanding
    recursive reasoning. By mapping biological processes of prediction and error correction onto a
    function-based systems model, the study demonstrates that FRAM provides a coherent
    framework for representing distributed, self-correcting cognition. The cortical column is treated
    not as a static computational unit, but as a dynamic predictive engine—one that embodies the
    same iterative logic found in complex adaptive systems. The result is both a biological and
    analytical insight: recursion, not scale, is what enables deep reasoning. Through successive
    modelling, validation, and refinement, the project culminates in a fully functional FRAM model
    (.xfmv) that faithfully captures the cyclical flow of excitation, comparison, modulation, and
    learning found in cortical circuits.


    Key words:
    Cortical columns; predictive coding; active inference; recursive reasoning; FRAM; functional
    resonance; neural architecture; perception; error correction; hierarchical processing; cortical
    microcircuit; biological systems modelling; emergent cognition; complex adaptive systems

    Download the PDF

    Rethinking the Role of Non-Compliance in Complex Operational Systems


    Safety-critical industries traditionally operate on the assumption that procedural compliance
    ensures safe performance and that unsafe outcomes emerge primarily when individuals deviate from approved instructions, standards, or regulatory boundaries. This logic—compliance
    equals safety; deviation equals risk—underpins investigation frameworks, accountability
    structures, enforcement models, and training philosophies in domains such as aviation,
    healthcare, nuclear energy, rail, maritime operations, and chemical process industries.
    However, field investigations and observational studies increasingly demonstrate that real-
    world work rarely matches formal expectations. Variability in context, operational conditions,
    system states, timing, resource availability, and environmental constraints means that
    procedures describe how work is imagined, not how it is performed. Operators routinely adapt,
    modify, or bypass procedural steps to maintain operational continuity and preserve safety. In a
    recently published thesis, Ankersø and Nielsen refer to this form of deliberate, safety-oriented
    deviation as Selective Intentional Non-Compliance (SINC): purposeful departures from
    procedure undertaken to maintain functional performance under conditions where strict
    execution is insufficient or unsafe.

    This paper examines the tension between compliance-based safety models and the role of
    adaptive performance in preventing hazardous outcomes. Using the conceptual framework
    illustrated in Figure 1, we propose the identification of a previously unacknowledged region: the
    SINC-Avoid zone, the operational space where strict procedural compliance can contribute to
    hazard escalation while adaptive deviation preserves system safety. Recognising, training for,
    and governing this capability is essential if safety management systems are to remain effective
    within complex, variable, and dynamic environments.


    Keywords: compliance; non-compliance; resilience engineering; Work-as-Imagined; Work-as-
    Done; just culture; SINC; SINC-Avoid.

    Download the PDF

    Introducing Hierarchical Grouping in the FRAM Model Visualizer Sandbox

    The Functional Resonance Analysis Method (FRAM) has long empowered safety analysts, system engineers, and organizational researchers to capture the complex interplay of activities that constitute socio-technical processes. Yet, as FRAM models grow to encompass dozens or even hundreds of interlinked functions, users often face a dilemma: how to preserve the rich detail necessary for deep analysis without drowning in a sea of hexagons. The latest sandbox release of the FRAM Model Visualizer (FMV) addresses this challenge head-on by introducing a versatile “Grouping” facility that seamlessly bridges high-level overviews and granular drilldowns within a single canvas.


    At its core is a new “grouping” feature that enables you to bundle a set of lower-level “child” functions under a single “parent” node. Imagine you are modeling the tyre-change pit stop of a Formula 1 car. Traditionally, you might represent the entire tyre removal as one function—“Remove Tyre”—linked to its predecessor and successor activities. However, reality is never so simple. Beneath that umbrella task lies a precise sequence of micro-actions: lifting the chassis, positioning the pneumatic wrench, engaging the wheel nut, and finally loosening it. By modeling each of those steps individually—with their own Inputs, Outputs, Preconditions, Resources, Controls, and Timing—you gain insight into where delays or resource shortages could ripple through the system.
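    The parent/child decomposition described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the FMV's actual data model: the names (`groups`, `visible_functions`, `expanded`) are invented for the example.

```python
# Hypothetical sketch of parent/child grouping. The names used here
# (groups, visible_functions, expanded) are illustrative only; the FMV
# stores grouping in its own schema.

groups = {
    "Remove Tyre": ["Lift chassis", "Position wrench",
                    "Engage wheel nut", "Loosen wheel nut"],
}

def visible_functions(all_functions, groups, expanded):
    """Collapse child functions into their parent unless the parent is expanded."""
    hidden = {child for parent, kids in groups.items()
              if parent not in expanded for child in kids}
    return [f for f in all_functions if f not in hidden]

all_fns = ["Lift chassis", "Position wrench", "Engage wheel nut",
           "Loosen wheel nut", "Remove Tyre", "Fit new tyre"]

# High-level overview: only the parent and ungrouped functions are shown.
overview = visible_functions(all_fns, groups, expanded=set())
# Drill-down: expanding "Remove Tyre" reveals all four micro-actions.
detail = visible_functions(all_fns, groups, expanded={"Remove Tyre"})
```

    The same pattern scales to the healthcare and logistics examples below: the analyst toggles membership of `expanded` rather than maintaining two separate models.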


    The benefits of hierarchical grouping extend far beyond Formula 1 metaphors. In large healthcare networks, for instance, you might group all sub-functions of “Administer Medication”—from verifying patient identity and checking dosage to preparing the injection and disposing of sharps—under a single parent node. Logistics teams can collapse entire fleets of transport tasks into one macro-function, drilling down only when a delay demands it. Even in software development, you can bundle a suite of code-review activities or automated tests beneath a singular “Quality Assurance” label.


    By enabling multi-scale modeling within a single, interactive environment, the FMV sandbox’s grouping facility solves a perennial pain point: balancing the need for detail with the clarity of a simplified overview. Users no longer choose between a sprawling tangle of functions or a shallow diagram that glosses over crucial steps. Instead, they gain dynamic control over their model’s granularity, effortlessly shifting focus as analysis demands. In doing so, the FRAM Model Visualizer reaffirms its position as the premier tool for capturing, simulating, and communicating the nuanced choreography of complex systems.

    Click to download PDF

    Modelling the Clayton Tunnel Disaster

    David Slater


    That August morning in 1861 began like so many others on the London–Brighton line: express trains threading their way into the mouth of Clayton Tunnel, controlled by a pair of semaphore arms and the telegraph wires linking two signal cabins. Shortly after 8:30 am, Signalman Killick in the southern box levered his home signal to clear and admitted the approaching train.


    Seconds later, the automatic treadle—hidden beneath the sleepers—jammed. Almost at once, an alarm bell clanged in the cabin, announcing that the signal had not returned to danger.


    Barely three minutes after the first service entered—well short of the five‐minute timetable interval—the whistle of the second express pierced the morning air. Alerted by an alarm, Killick recognized the failure: the mechanical backstop he trusted had betrayed him. He sprinted from the box, red flag in hand, and dashed onto the ballast beside the tunnel mouth, in time to see the second train speeding past. The driver caught that frantic flag warning, braked hard, and ground to a halt deep within the tunnel, his wheels skidding to a standstill on the rails. Obedient to the emergency halt, he then reversed the train slowly toward the portal to comply with the flagged stop signal and find out why.


    Back in the box, Killick’s telegraph needle flickered with a “Train Out” indication. Believing this belated message referred to the halted second express, he concluded that the home signal, stuck at Clear, was now appropriate and that he need not signal stop. Thus, the third express thundered in at full speed. Deep in the darkness, the reversing train and the on-rushing locomotive met in a violent collision. In minutes, the wreckage would claim twenty-three lives and maim nearly a hundred and eighty more.


    That day, the alarm bell—intended as the fallback for a failed treadle—proved too late to prevent catastrophe. A single device’s malfunction, a compressed timetable, and a mis‐read telegraph message had resonated into tragedy, teaching the railway world that no one safeguard, however loud its warning, could be allowed to stand alone.

    Download the PDF

    Quantifying human factors in complex sociotechnical systems using the FRAM

    FRAM Functional Resonance

    How do they make them work?

    David Slater and Rees Hill


    ABSTRACT


    The Functional Resonance Analysis Method is being used more widely, and not just as a visualisation to aid understanding of how teams in complex sociotechnical systems work together and adapt to the challenges that can arise. A series of studies has now attempted to use the visualisation as a legitimate complex-system metamodel, producing quantitative estimates of system behaviours and functional effectiveness. A key aim of the original method was additionally to recognise and track the effects of real-world variability in the adequacy, reliability, and individual behaviours of component functions. The method also enables the same scrutiny of human-contributed functions, which is allowing more insight into the importance of human adaptability in making systems work in practice.


    This is redolent historically of the work of Human Factors specialists tasked with producing Human Error data for engineering systems’ probabilistic logic tree safety studies. This paper sets out to trace the influence of Human factors thinking in the development of the FRAM approach and to propose a way to produce credible human factors insights through the quantitative system modelling perspectives offered by FRAM.


    To implement this in FRAM analyses, we propose extending the metadata approach and utilising more sophisticated algorithms in the equations specified. For example,

    • We define a common probability of effectiveness to model the expected performance of functions in specific interactions.
    • We define Limits of Tolerability – the maximum levels of variability in Aspect effectiveness tolerated before successful execution fails or backup functions are invoked.
    • We show how a Human Function can adapt to variability in ways not normally found (unless designed in) in technological functions.
    • We monitor and learn trends and patterns in variabilities so as to respond intelligently and pre-emptively.
    • We borrow a concept from Bow-Ties – Layers of Protection Analysis (LOPA) – by identifying the checks and balances as “Barriers” with Probabilities of Failure on Demand (PFD). (ref)
    • We leverage concepts from distributed computing, particularly algorithms inspired by the Byzantine Generals Problem.
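    A minimal sketch of the first, second, and LOPA-style points might look as follows. The names and threshold values are invented for illustration; they are not part of the FRAM standard or of any published metadata schema.

```python
from functools import reduce

# Hypothetical illustration of the proposed metadata extensions: the names
# (effectiveness, tolerability limit, PFD) follow the bullet list above but
# the values and function names are invented.

def aspect_within_tolerance(effectiveness: float, limit: float) -> bool:
    """An Aspect supports successful execution only while its effectiveness
    stays at or above its Limit of Tolerability; below that, backup
    functions would be invoked."""
    return effectiveness >= limit

def combined_pfd(barrier_pfds: list[float]) -> float:
    """LOPA-style combination: independent Barriers multiply their
    Probabilities of Failure on Demand."""
    return reduce(lambda acc, p: acc * p, barrier_pfds, 1.0)

# Example: three Aspects of one function, with assumed effectiveness values.
aspects = {"Input": 0.95, "Control": 0.80, "Resource": 0.60}
limits = {"Input": 0.90, "Control": 0.75, "Resource": 0.70}

out_of_tolerance = [a for a, e in aspects.items()
                    if not aspect_within_tolerance(e, limits[a])]

# Two independent Barriers with assumed PFDs of 0.1 and 0.05.
residual_risk = combined_pfd([0.1, 0.05])
```

    Even this toy version shows the intent of the extension: variability becomes a number that can be compared against a limit, and the protective value of Barriers becomes a calculable residual.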

    Towards Explainable Anomaly Detection in Safety-critical Systems: Employing FRAM and SpecTRM in International Space Station Telemetry


    Shota Iino1, Hideki Nomoto1, Takashi Fukui2, Sayaka Ishizawa2, Yohei Yagisawa2,
    Takayuki Hirose1, and Yasutaka Michiura1

    Japan Manned Space Systems Corporation, Tokyo, 100-0004, Japan
    iino.shota@jamss.co.jp
    nomoto.hideki@jamss.co.jp
    hirose.takayuki@jamss.co.jp
    michiura.yasutaka@jamss.co.jp
    JAPAN NUS Co., Ltd., Tokyo 160-0023, Japan
    fukui-t@janus.co.jp
    ishizawa-syk@janus.co.jp
    yagisawa-yhi@janus.co.jp


    ABSTRACT


    Ensuring the reliability and safety of space missions necessitates advanced anomaly detection systems capable of not only identifying deviations but also providing clear, understandable insights into their causes. This paper introduces a novel methodology for the detection of systemic anomalies in the telemetry data of the International Space Station (ISS), leveraging the synergistic application of the Functional Resonance Analysis Method (FRAM) and the Specification Tools and Requirements Methodology – Requirements Language (SpecTRM-RL). Integrated with a machine-learning-based normal-behaviour prediction model, this approach significantly enhances the explainability of anomaly detection mechanisms.

    The methodology is verified and validated through its application to the thermal control system within the ISS’s Japanese Experimental Module (JEM), illustrating its capacity to augment diagnostic capabilities and assist flight controllers and specialists in preserving the ISS’s functional integrity. The findings underscore the importance of explainability in the machine-learning-based anomaly detection of safety-critical systems and suggest a promising avenue for future explorations aimed at bolstering space system health management through improved explainability and operational resilience.

    Click to download PDF

    How System Analysts Can Utilise LLMs to Generate Initial System Models for the FMV


    David Slater


    Abstract

    Large language models are transforming the way we capture and explore complex workflows by automating the creation of FRAM models that until now demanded painstaking manual effort. Instead of laboriously crafting XML by hand, an analyst can simply describe a process in natural language and receive back a ready-to-load .xfmv file for the FRAM Model Visualiser. Under the surface, the model parses the narrative to identify every discrete activity—or “function”—traces how the outcome of each activity feeds into the next, and writes out the corresponding function and coupling elements in the FMV’s XML schema. The result is an interactive diagram of functions and couplings that can be immediately inspected or simulated.


    In this article, we delve into this method and walk through two concrete examples—making a cup of tea and preparing a ferry for departure—complete with illustrations drawn from the actual .xfmv files.


    Introduction


    Large language models (LLMs) are revolutionizing the way we formalize and visualize complex work systems. By encoding the Functional Resonance Analysis Method (FRAM) into a simple code template that utilises the FRAM Model Visualiser’s .xfmv XML schema, an LLM can transform a textual process description into a standard FMV model. The prompt for the LLM should first specify how the LLM should perform the FRAM analysis.


    The prompt used here :

    • Describes the system to be studied and asks for an overview of what the system is, how it is supposed to work, what it produces, and any issues known to be a problem or for which improvements have been made. The LLM clarifies which actors, tools, documents, and physical environment constitute the scope.
    • Asks for the key functions involved, and the tasks they must fulfil to operate the system successfully, to be listed. Every discrete activity—what FRAM calls a “function”—is listed, each with a unique numeric IDNr, a clear IDName, a FunctionType (foreground, background, variable), and optional display coordinates (x, y) for layout within the Visualiser.
    • Invites the LLM to use a hierarchical task analysis format to determine the sequence of, and couplings between, these functions. The LLM performs a lightweight Hierarchical Task Analysis, ordering the functions sequentially and identifying the hand-offs. Each hand-off becomes an Output in the upstream function and a matching Input in the downstream function, sharing the same label (the IDName of the coupling).
    • Instructs the LLM to use a Python template (from the GitHub website) to build the XML tree.
      The final output is a .xfmv file ready for drop-in import to the FRAM Model Visualiser, where dynamic playback or FMI interpretation can begin immediately.

    Click to download PDF

    What is an Ontology for FRAM, and Why Does It Matter?


    An ontology for the Functional Resonance Analysis Method (FRAM) would serve as a structured representation of the key concepts, relationships, and rules that underpin this powerful methodology for analyzing and modeling complex adaptive systems. It would be a critical tool for creating a shared understanding of this unique approach to analyzing and managing complexity in everyday systems. In FRAM, the ontology should go beyond being a shared vocabulary; it can become a framework for mapping the intricate web of interactions within systems, enabling us to grasp their emergent behaviour and variability.


    FRAM is built on the premise that complex systems are defined by their dynamic interactions, inherent variability, and emergent outcomes. Hollnagel maintains that a FRAM-built model can be manipulated formally and correctly as a production system, defined as:


    “A production system (or production rule system) is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behaviour, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world.”

    It has recently been suggested that a FRAM-built model could also be described as a collection of linked interdependent Turing machines. With this in mind, a FRAM ontology should formalize these elements, offering a way to describe the system in terms of functions, the couplings between those functions, and the variability that naturally arises within and across them. This structure provides a foundation for exploring how individual elements of a system combine to produce outcomes—sometimes unexpected, sometimes resilient, and sometimes catastrophic.


    At its heart, a FRAM ontology is a layered framework. At the macro-level, it captures the overarching dynamics of the system, focusing on emergent behaviours that arise from the interplay of functions. At the meso-level, it identifies clusters of interconnected functions and couplings that reveal critical pathways and interactions. At the micro-level, it dives into the specific properties (metadata) and internal behaviour of individual functions, detailing their inputs, outputs, and dependencies. This hierarchy allows for zooming in and out, offering both a granular and holistic perspective of the system.
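    The micro-level of such an ontology can be made concrete with a small sketch. The class names (`FramFunction`, `Coupling`) and the healthcare example are invented for illustration; a full ontology would also formalise the meso- and macro-level relations.

```python
from dataclasses import dataclass, field

# Hypothetical micro-level sketch: class and field names are assumptions
# made for illustration, not part of any published FRAM ontology.

@dataclass
class Coupling:
    label: str        # shared name of the upstream Output / downstream Input
    upstream: str
    downstream: str

@dataclass
class FramFunction:
    name: str
    metadata: dict = field(default_factory=dict)  # micro-level properties

functions = [FramFunction("Check dosage"), FramFunction("Prepare injection")]
couplings = [Coupling("Verified dose", "Check dosage", "Prepare injection")]

# A first meso-level query: which functions depend directly on a given one.
def downstream_of(name, couplings):
    return [c.downstream for c in couplings if c.upstream == name]
```

    Queries like `downstream_of` are where the meso-level clusters and critical pathways described above would start to emerge from the formalised elements.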

    Download the PDF

    A Standardised Reporting System for FRAM: Towards Wider Adoption and Practical Use


    David Slater


    The Functional Resonance Analysis Method (FRAM) (Hollnagel, 2012) has become an influential framework for modelling complex socio-technical systems. Its ability to account for variability, emergent properties, and non-linear interactions makes it a powerful alternative to linear causal analysis in safety and systems engineering. However, despite its conceptual strength and flexibility, the method remains underutilized in practice. This paper outlines the core barriers to its broader adoption and proposes the development of a standardized reporting tool to support practitioners in applying FRAM more consistently and effectively.


    From Theory to Practice: The Persistent Gap


    FRAM is unusual among systems analysis tools in that it accommodates both qualitative and quantitative paradigms. As Sujan et al. (2023) explain, the method can be employed from a realist perspective, producing computational models capable of simulating and evaluating functional variability with some degree of objectivity. Alternatively, it can be used phenomenologically, allowing analysts to craft reflexive, narrative-based accounts of how system functions interact to produce outcomes. These dual interpretations make FRAM remarkably versatile—but they may also contribute to its uneven application.
    This epistemological ambivalence plays into what Underwood and Waterson (2013) identified as a research–practice gap in systemic accident analysis (SAA). Despite evidence of the advantages offered by systemic approaches like FRAM, they are rarely used routinely in operational settings. As the authors note, “evidence within the scientific literature indicates that systemic analysis models and methods are not being widely used in practice.” This implies not only limited awareness but also a lack of support tools that bridge the conceptual elegance of FRAM with the practical constraints of real-world application.

    Click to download PDF

    The Turing Analogy in FRAM

    FRAM Functional Resonance

    If it quacks like a duck and walks like a duck, it’s probably a ———–?

    Abstract

    This note suggests that the Turing Machine analogy can be a valuable conceptual tool for understanding FRAM functions as active, dynamic entities within complex systems. However, its limitations, particularly regarding human variability and adaptability, caution against over-reliance on formalism. By integrating insights from cognitive systems engineering, the analogy can be expanded to better address the dual nature of socio-technical systems—leveraging both human adaptability and machine precision.
    This dual perspective ensures that FRAM remains a robust framework for designing systems that are not only deterministic but also resilient, capable of navigating the unpredictability of real-world interactions.


    A Turing Machine

    is a theoretical model of computation invented by Alan Turing in 1936. (1) It serves as a fundamental concept in computer science and mathematics for understanding what can be computed and how computation works. While it is a simplified abstraction, it has proven to be incredibly powerful and forms the basis for modern computing theory.


    The analogy of a Turing Machine in describing FRAM functions offers a thought-provoking lens through which to explore the dynamics of socio-technical systems. Aligning the structure of a FRAM function with the formalism of automata, particularly Turing Machines, draws intriguing parallels that illuminate the computational underpinnings of system behaviour. Both frameworks share foundational elements—inputs, outputs, states, transitions, and rules—which make this comparison particularly compelling.

    Click to download PDF

    What is a Function in FRAM?

    FRAM Functional Resonance

    DAVID SLATER


    FRAM METHODOLOGY DEVELOPMENT GROUP

    FRAMILY 2025

    OUTLINE


    Hollnagel revolutionised our ability to probe behaviours in complex sociotechnical systems with three key developments.

    Introduced a metamodeling (abstraction) approach to model systems as FUNCTIONS (not detailed physical components).


    Recognised that the system behaves as an INTERACTING INTERDEPENDENT ASSEMBLY of these functions, not as a rigid array of isolated components.

    Pointed out that in the real world VARIABILITY in these interactions can lead to unexpected consequences. His FRAM methodology now allows quantitative prediction of this variability of functional outputs leading to emergent system behaviours.

    Click to download PDF

    Tutorial – Track 2: Intro to FMV use of metadata and advanced features

    FRAM Functional Resonance

    Presenter: Dr. Ing. Niklas Grabbe

    Postdoctoral Researcher at TUM, Chair of Ergonomics

    n.grabbe@tum.de

    FRAMily 2025 – Delft

    Tutorial – Track 2

    12.05.2025


    Agenda


    13:30 – 13:45 Get to know & expectations
    13:45 – 14:45 Intro to FMV use of metadata and advanced functions
    14:45 – 15:15 Coffee & Tea
    15:15 – 16:30 Guided FRAM modelling exercise (applying metadata)
    16:30 – 17:00 Outlook and Q&A

    Click to download PDF

    An “intelligent” FMV (FRAM Model Visualiser)

    FRAM Functional Resonance

    For estimating probabilities of outcomes in complex systems. | David Slater and Rees Hill



    David Slater – dslater@cambrensis.org
    Rees Hill – rees.hill@zerprize.co.nz


    ABSTRACT


    The Functional Resonance Analysis Method (FRAM) has emerged as a valuable tool for modeling and understanding the dynamic behaviour of complex socio-technical systems. While traditionally used as a qualitative method, recent advancements in the FRAM Model Visualizer (FMV) have introduced quantitative capabilities, enabling the systematic analysis of functional interactions and variability within a probabilistic framework. This paper explores the potential of FRAM to bridge the gap between human factors specialists, who prioritize qualitative insights, and engineers, who demand numerical rigour for system reliability and safety predictions.
    Drawing parallels between FRAM functions and neural processes, we develop a “Neuron FRAM” model that integrates probabilistic aspects of function coupling, inspired by the McCulloch-Pitts neuron model. By representing FRAM functions as computational units capable of transmitting, coupling, and adapting probabilistic metadata, this approach facilitates predictive modeling of emergent system behavior under variable conditions. Demonstrations include the prediction of success and failure probabilities for safety-critical systems, showcasing its practical relevance in real-world applications such as healthcare, aviation, and AI systems.
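    The McCulloch-Pitts idea referenced above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the weighted-sum rule, the weights, and the threshold are not taken from the paper, whose actual coupling algorithms should be consulted directly.

```python
# Illustrative McCulloch-Pitts-style coupling for a "Neuron FRAM" function:
# each upstream coupling carries a success probability, and the function
# "fires" (produces a reliable output) only if the weighted evidence from
# its couplings clears a threshold. Weights and threshold are invented.

def neuron_fram_output(couplings, weights, threshold):
    """couplings: success probabilities of upstream outputs feeding this function."""
    activation = sum(p * w for p, w in zip(couplings, weights))
    return activation >= threshold

# Three upstream couplings of varying reliability (e.g. Input, Resource,
# Control), with Input and Control weighted most heavily.
upstream = [0.95, 0.80, 0.99]
weights = [0.4, 0.2, 0.4]
print(neuron_fram_output(upstream, weights, threshold=0.9))   # True
```

    In a full model, each function's output probability would in turn become an input coupling for downstream functions, which is how variability propagates and can resonate through the network.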


    The proposed framework highlights FRAM’s versatility as a system modelling methodology capable of quantifying emergent properties while maintaining its core focus on functional interactions and variability. This integration offers an innovative pathway to enhance the resilience, transparency, and reliability of complex systems, paving the way for broader adoption in domains that require both qualitative and quantitative insights.
    Key words – Safety, Risk, Bow Ties, LOPA, FRAM, AI
    INTRODUCTION
    The Functional Resonance Analysis Method is being increasingly utilised by a wide variety of users for a range of applications. It is becoming of interest to safety professionals who have traditionally relied on methods such as Root Cause Analysis, Failure Modes and Effects Analysis, and Bow Ties to try to understand how modern engineering systems will behave in practice. But for this group, the ability to quantify potential failure frequencies is a major requirement. Traditionally this was done by a full Probabilistic (Quantitative) Risk Analysis (PRA/QRA) of the system, but in today’s cost-conscious environment the resources needed are generally too scarce and expensive for routine applications, so alternatives such as Layers of Protection Analysis (LOPA) and semi-quantitative assessments such as Risk Matrices or Heat Maps have to suffice. It is because none of these alternatives feels rigorous enough that confidence in their predictions is not high among the professionals using them. It is for this reason that the FRAM approach now seems worth exploring.


    To most users, the FRAM approach relies on producing a “picture” (a metamodel) of the complex system involved, sufficiently rigorous to trace and predict the effects of the interdependencies between the functions interacting in the system to produce the outcomes, expected and unexpected. To engineers this is more like a traditional HAZOP of Process and Instrumentation Diagrams (P&IDs), tracing the systemic behaviour under variable real-life conditions of operation. Both find the FRAM approach valid and very helpful for conducting (“as done”) safety “audits” of (“as imagined”) designs and procedures.
    So for the mainly “human factors” users quantification is unnecessary and irrelevant, while for engineers the lack of quantification disqualifies FRAM as a “serious” approach worth looking at. But recently the systematic rigour of modelling the details of the functional interactions that the FRAM method enables has been formalised to include the quantitative tracing of the properties and behaviour of individual functions as a process unfolds or develops. This feature is now incorporated in the software (the FRAM Model Visualiser, FMV) employed by most FRAM users.


    While this feature is proving invaluable to academic research groups, particularly in healthcare, aviation, and AI applications such as autonomous vehicles and robot surgery, the main body of users is as yet unaware of it, or sees no need for it, and is discouraged by the need to understand the full details of yet another acronymed safety modelling method (lost in the alphabet soup).


    But to any user of the FRAM approach, the ability to predict system behaviour under ideal or varying conditions is its main attraction. So far, the ability to communicate this has rested mainly on conveying a visual understanding of what the FRAM models imply. Having a more quantitative, consistent measure of these behaviours has obvious appeal, if only as relative ratings for scrutiny and comparison (“what if?”).


    To achieve this acceptance, the basis of the quantification needs to be self-explanatory and acceptable as valid by a wide range of users, from Human factors specialists to engineers.

    Click to download PDF

    Understanding the Challenger disaster – a systemic approach

    FRAM Functional Resonance

    a FRAM-STPA approach | David Slater and James Pomeroy

    David Slater, Cardiff University, dslater@cambrensis.org


    James Pomeroy, Cranfield Safety and Accident Investigation Centre, steadypom@gmail.com


    ABSTRACT
    The Challenger disaster remains a critical study in the consequences of organizational “culture” on safety, with previous analyses often focusing on singular causes like “normalization of deviance.” This paper seeks to provide a more nuanced understanding through a dual application of two systemic analysis methods: System-Theoretic Process Analysis (STPA) and Functional Resonance Analysis Method (FRAM). The analysis re-examines the Challenger disaster by mapping the hierarchical structure of NASA and its contractors, highlighting decision-making processes at macro, meso, and micro levels. STPA reveals specific Unsafe Control Actions (UCAs) and control loop deficiencies, exposing gaps in NASA’s risk management and communication. Simultaneously, FRAM models trace critical functional variability within NASA’s organizational levels. The combined approach uncovers how political and budgetary constraints, normalized risk-taking, and diluted engineering feedback cumulatively degraded decision-making integrity, ultimately contributing to the Challenger’s tragic launch decision. Lessons drawn emphasize the need for resilient structures, adaptable feedback mechanisms, and a culture that values caution alongside achievement. This analysis underscores the potential of integrating STPA and FRAM in complex systems to facilitate successful operations, to enhance safety, and to anticipate organizational issues.


    Key Words- Accidents, Organisations, Systemic analysis, STPA, FRAM


    INTRODUCTION


    Pomeroy (1) notes that studying safety and risk often means examining disasters through the lens of theories, which can narrow our understanding of their complexity. In this view, well-known catastrophes are often paired with catchy, simplified explanations. For example, Chernobyl’s disaster is frequently attributed to a lack of “safety culture,”(1), Aberfan is described as a “man-made disaster” caused by a “failure of foresight,”(2), and the Challenger explosion is linked to an organization that became “normalized to deviance.” (3)


    He adds that although such theories provide useful frameworks, they risk oversimplifying disasters into neat abstractions that fail to capture the full context, conditions, and personalities involved. This narrowed focus can limit our perspective, framing our understanding of events through others’ interpretations and potentially stifling deeper insights.

    Download the PDF

    The Future Availability of the FRAM Model Visualiser (FMV) Software

    FRAM Functional Resonance

    Effective Date: June 1st, 2025


    The FRAM Model Visualiser (FMV), originally developed by Rees Hill with the endorsement of Erik Hollnagel, has served the FRAM community for over a decade as a freely available, validated tool for building FRAM models. This contribution has been continuously developed and supported pro bono for ten years, reflecting a commitment to the growth and application of FRAM methodologies.
    However, increasing demands on time and resources now make it necessary to change how FMV is shared and supported. Despite previous calls for assistance—most notably at the recent Delft meeting—support has been minimal. The sustainability of FMV requires a new approach.


    Key Changes :

    1. Community Version
      The community version of FMV will remain available via GitHub.
      We strongly encourage funded academic and commercial FRAM courses to make a modest contribution or sponsorship in recognition of the resources required to maintain this version.
      All requests for modifications, technical support, or updates will now be addressed through consultancy on a case-by-case basis.
    2. Sandbox Version
      General access to the sandbox version will be discontinued.
      In its place, tailored versions will be developed for specific applications.
      These will be made available under contract, with optional deployment on clients’ own servers.
      Ongoing support

    Download the PDF

    Using FRAM to model and improve AI-human interactions in legal contract checking

    FRAM Functional Resonance

    David Slater, Cardiff University, UK, dslater@cambrensis.org and
    John Kunzler, Marsh Specialty UK, Marsh Ltd., John.Kunzler@marsh.com

    Abstract

    The integration of Artificial Intelligence (AI) into legal contract review processes promises faster, more consistent detection of errors and risks, yet introduces new complexities that traditional workflow models cannot adequately capture. This report applies the Functional Resonance Analysis Method (FRAM) to reframe AI-assisted contract review as a dynamic socio-technical system characterized by interdependent human and AI functions, each subject to performance variability. Drawing on real-world case studies, scenario simulations, and system modeling, the report demonstrates how minor fluctuations in AI reliability, human judgement, and task conditions can interact to produce resonant error patterns. By constructing a detailed FRAM model of contract review workflows, we identify critical pathways where variability amplifies risk and propose resilient system designs that maintain human interpretive authority. The findings underscore that optimizing AI-human collaboration requires not only better tools, but systemic redesign grounded in complexity science principles. FRAM provides a structured, predictive approach to making AI integration in legal practice safer, smarter, and more accountable.

    Keywords

    Functional Resonance Analysis Method (FRAM)
    Legal Technology
    Artificial Intelligence in Law
    Human-AI Collaboration
    Contract Review Reliability
    Socio-Technical Systems
    Risk Modeling and Resilience Engineering


    Introduction


    The review of legal contracts for risks, errors, inconsistencies, and omissions has long been a critical pillar of professional legal services. Traditionally reliant on meticulous human expertise, this painstaking work demands high levels of concentration, deep domain knowledge, and contextual understanding of client needs and regulatory frameworks. Although legal contract review can be conducted by many businesses themselves, law firms generate value from the level of assurance they bring to such tasks by providing high reliability, and heavily insured and guaranteed recourse in the event of an error. Over recent years, however, the rise of Artificial Intelligence (AI) systems, particularly those based on large language models (LLMs), has begun to reshape this landscape. LegalTech companies, internal law firm innovations, and broad technology integrations are increasingly supplementing human effort with machine-based assistance, promising faster, cheaper, and in some cases more consistent contract review.

    Download the PDF

    Semantical Challenges in FRAM While Developing the FRAMifier

    FRAM Functional Resonance

    REVIEW OF “SEMANTICAL CHALLENGES IN FRAM WHILE DEVELOPING THE FRAMIFIER” BY BOTS & ADRIAENSEN (2025)

    Reviewer: D. Slater

    Summary

    The paper “Semantical Challenges in FRAM While Developing the FRAMifier” presents a novel contribution to the FRAM (Functional Resonance Analysis Method) community by proposing the FRAMifier—an open-source browser-based modeling tool aimed at enforcing syntactic and semantic consistency in FRAM models. The authors identify four core challenges in current FRAM practice: abstraction hierarchy logic, computation of aspects, scope of aspects, and temporal representation. They describe the FRAMifier’s internal design philosophy, which prioritizes formal rules, expression-based computation, and interface affordances to support syntactically and semantically valid model construction.

    After Joel Thurlby

    Contribution and Strengths


    The authors are to be commended for their clear identification of real-world modeling difficulties encountered by practitioners of FRAM, particularly those involving large or complex system representations. Their discussion is framed around pragmatic concerns, including the “spaghetti effect” in unstructured models, ambiguous coupling semantics, and the challenge of capturing temporal dependencies.
    One of the strongest contributions of the paper is its practical framing of design decisions in the FRAMifier as semantic commitments, open to community interrogation and revision. By doing so, the authors foster a constructive dialogue about tool support and methodological clarity in FRAM applications.
    Noteworthy technical features of the FRAMifier include:


    A hierarchical function decomposition framework, visually and structurally compatible with Patriarca et al.’s abstraction-based FRAM extensions.

    A rule-based logic mechanism allowing modelers to define expressions for function activation, both at the level of functions and their aspects.

    Explicit support for time logic using operators like after, until, and symbolic references such as now and last, enabling conditional activation based on simulated time cycles.

    These design elements offer the potential to make FRAM modeling more structured and computationally tractable, particularly in pedagogical settings or simulation environments.

    Click to download PDF

    The FRAM Function as a Turing Machine

    FRAM Functional Resonance

    The ironies of automation | David Slater

    Which of these functions should be performed by human operators and which by machine elements? (Fitts)
    The Functional Resonance Analysis Method (FRAM) (Hollnagel, 2012) offers a unique lens to examine the complexity and variability of socio-technical systems, making it a powerful tool to analyze both human and machine interactions within such systems. Among its many insights, the description of a FRAM function shares intriguing parallels with the formal definition of an automaton, particularly the classical Turing machine (Turing, 1936). This analogy bridges concepts from systems engineering and computational theory to deepen our understanding of system behaviour and emergent outcomes.
    An automaton can be formally described as a quintuple (Hollnagel, 2024):
    A = (I, O, S, λ, δ)
    where:

    • I is the set of inputs,
    • O is the set of outputs,
    • S is the set of internal states,
    • λ: S × I → S is the set of rules governing state transitions, and
    • δ: S × I → O is the set of rules for determining the output based on the current state and input.
      This foundational structure has long been used to describe computational systems, such as finite automata and Turing machines, which operate by transitioning between discrete states in response to inputs, producing outputs based on predefined rules. Interestingly, this description can also encapsulate the essence of a FRAM function.
      In FRAM, a function is defined by six aspects: Input, Output, Precondition, Resource, Time, and Control. These aspects form a network of interdependencies that govern the variability and interactions within a system. Conceptually, this aligns closely with the automaton structure. The Input in FRAM corresponds to the automaton’s set of inputs (I), while the Output parallels the automaton’s outputs (O). The states (S) of the automaton can be likened to the function’s endogenous processing of the inputs to produce the outputs in the operational context, shaped by Precondition, Resource, Time, and Control (information transmitted in the metadata of the functions and processed by the algorithms used). The state transitions (λ) this produces and the outputs derived (δ) mirror the dynamic coupling and emergent behaviour captured in FRAM models.
      This analogy is particularly compelling because it frames a FRAM function as not merely a static representation, but an active element in the computational fabric of a system. Each function, influenced by variability, transitions dynamically between states, producing outcomes contingent on its interactions with other functions and the broader system context. This perspective elevates the role of a FRAM function to that of an automaton-like entity, capable of representing and interpreting the complexity inherent in socio-technical systems.
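    As a hedged illustration of the quintuple above, the sketch below treats a FRAM function as a small automaton whose state absorbs the operational context. The function, its states, and its rules are invented for the example and are not drawn from any published FRAM model:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# A minimal automaton A = (I, O, S, lambda, delta), used as a stand-in for
# a FRAM function: `transitions` plays the role of lambda (S x I -> S) and
# `outputs` the role of delta (S x I -> O). The state stands in for the
# context shaped by Precondition, Resource, Time, and Control.
@dataclass
class FramFunctionAutomaton:
    state: str
    transitions: Dict[Tuple[str, str], str]   # lambda: S x I -> S
    outputs: Dict[Tuple[str, str], str]       # delta:  S x I -> O

    def step(self, symbol: str) -> str:
        key = (self.state, symbol)
        out = self.outputs.get(key, "no_output")
        self.state = self.transitions.get(key, self.state)
        return out

# A hypothetical "check document" function whose output varies when
# a resource shortfall shifts it into a degraded state.
checker = FramFunctionAutomaton(
    state="nominal",
    transitions={
        ("nominal", "resource_shortfall"): "degraded",
        ("degraded", "resource_restored"): "nominal",
    },
    outputs={
        ("nominal", "document_in"): "document_checked",
        ("degraded", "document_in"): "document_checked_variable",
    },
)

print(checker.step("document_in"))          # document_checked
print(checker.step("resource_shortfall"))   # no_output (state -> degraded)
print(checker.step("document_in"))          # document_checked_variable
```

    The point of the sketch is the one made in the text: the same input yields different outputs depending on the state the function has been driven into by its couplings, which is the automaton-theoretic reading of functional variability.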

    Download the PDF

    Perception, human error, and safety?

    A constructed reality | David Slater

    ABSTRACT

    Perception is not an objective recording of the world but an active construction, shaped by the brain’s sensory gating, reticular activation, and predictive coding. These mechanisms filter,
    prioritize, and interpret sensory input, transforming fragmented data into a coherent experience. However, because this process is individualized, shaped by cognitive biases, neurobiology, and past experiences, perception of error is also subjective. What one person detects as a critical mistake may be overlooked by another, highlighting the variability in how individuals process discrepancies between expectation and reality.

    This subjectivity has profound implications for human error and safety. Errors are not absolute but perceptual mismatches, influenced by an individual’s sensory thresholds, attentional biases, and predictive assumptions. In high-risk environments, failure to recognize the variability in error perception can lead to communication breakdowns, inconsistent risk assessments, and ineffective safety interventions. Human factors engineering must accommodate perceptual diversity, ensuring that systems are designed with redundancy, adaptability, and cognitive diversity in mind. Safety strategies must move beyond rigid protocols and instead embrace flexible, user-centered approaches that account for differences in attention, expectation, and sensory processing.

    Understanding perception as a constructed reality rather than a fixed truth allows us to reframe human error—not as failure, but as a natural consequence of subjective information processing.
    By designing systems that align with the way humans actually perceive, predict, and correct for mismatches, we can create safer, more resilient working environments that reduce the impact of perceptual variability and enhance collective problem-solving, decision-making, and risk
    management.


    Download the PDF


    A FRAM function in the Unified Foundational Ontology

    As the Patriarca paper (1) suggests, the integration of the Functional Resonance Analysis Method (FRAM) with the Unified Foundational Ontology (UFO) could be a significant step forward in formalizing the conceptual underpinnings of FRAM for safety modeling in complex socio-technical systems. FRAM has long been recognized for its ability to analyze systemic behaviour through a focus on functional interactions and variability. However, its flexibility and reliance on analyst interpretation often lead to inconsistencies and subjectivity in its application. This note supports an ontological foundation for FRAM, using UFO to address these challenges and advance FRAM’s utility.

    At its core, FRAM is a method designed to represent how systems perform under varying conditions. It emphasizes emergent properties and variability, acknowledging that system behaviours arise from the dynamic interplay of functions rather than linear cause-and-effect chains. Central to FRAM is the concept of functions—activities or processes—and their interdependencies, which are depicted through inputs, outputs, preconditions, and controls. These functions serve as the building blocks of FRAM models, which aim to identify and understand potential resonances—unexpected amplifications of variability—that may disrupt system performance.

    Click here to download PDF

    Cambrensian “Intelligent” FMV (FRAM Model Visualiser)

    for estimating probabilities of outcomes in complex systems. | David Slater and Rees Hill

    David Slater – dslater@cambrensis.org
    Rees Hill – rees.hill@zerprize.co.nz


    ABSTRACT

    The Functional Resonance Analysis Method (FRAM) has emerged as a valuable tool for modeling and understanding the dynamic behaviour of complex socio-technical systems. While traditionally used as a qualitative method, recent advancements in the FRAM Model Visualizer (FMV) have introduced quantitative capabilities, enabling the systematic analysis of functional interactions and variability within a probabilistic framework. This paper explores the potential of FRAM to bridge the gap between human factors specialists, who prioritize qualitative insights, and engineers, who demand numerical rigour for system reliability and safety predictions.

    Click to download PDF