Category Archives: Blogs & Papers

Perception, human error, and safety?

A constructed reality | David Slater

ABSTRACT

Perception is not an objective recording of the world but an active construction, shaped by the brain’s sensory gating, reticular activation, and predictive coding. These mechanisms filter, prioritize, and interpret sensory input, transforming fragmented data into a coherent experience. However, because this process is individualized, shaped by cognitive biases, neurobiology, and past experiences, perception of error is also subjective. What one person detects as a critical mistake may be overlooked by another, highlighting the variability in how individuals process discrepancies between expectation and reality.

This subjectivity has profound implications for human error and safety. Errors are not absolute but perceptual mismatches, influenced by an individual’s sensory thresholds, attentional biases, and predictive assumptions. In high-risk environments, failure to recognize the variability in error perception can lead to communication breakdowns, inconsistent risk assessments, and ineffective safety interventions. Human factors engineering must accommodate perceptual diversity, ensuring that systems are designed with redundancy, adaptability, and cognitive diversity in mind. Safety strategies must move beyond rigid protocols and instead embrace flexible, user-centered approaches that account for differences in attention, expectation, and sensory processing.

Understanding perception as a constructed reality rather than a fixed truth allows us to reframe human error—not as failure, but as a natural consequence of subjective information processing. By designing systems that align with the way humans actually perceive, predict, and correct for mismatches, we can create safer, more resilient working environments that reduce the impact of perceptual variability and enhance collective problem-solving, decision-making, and risk management.


Download the PDF

The Turing Analogy in FRAM

If it quacks like a duck and walks like a duck, it’s probably a ———–?

Abstract

This note suggests that the Turing Machine analogy can be a valuable conceptual tool for understanding FRAM functions as active, dynamic entities within complex systems. However, its limitations, particularly regarding human variability and adaptability, caution against over-reliance on formalism. By integrating insights from cognitive systems engineering, the analogy can be expanded to better address the dual nature of socio-technical systems—leveraging both human adaptability and machine precision.
This dual perspective ensures that FRAM remains a robust framework for designing systems that are not only deterministic but also resilient, capable of navigating the unpredictability of real-world interactions.

A Turing Machine

is a theoretical model of computation invented by Alan Turing in 1936. (1) It serves as a fundamental concept in computer science and mathematics for understanding what can be computed and how computation works. While it is a simplified abstraction, it has proven to be incredibly powerful and forms the basis for modern computing theory.
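The tape-head-and-transition-table model described above can be sketched in a few lines of Python. This is an illustrative simulator only (the function and program names are ours, not from the note); the example program flips every bit on the tape and then halts.

```python
# Minimal Turing machine sketch: a tape, a head, a state, and a
# transition table mapping (state, symbol) -> (new_state, write, move).

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Execute a transition table until the machine enters 'halt'."""
    cells = dict(enumerate(tape))  # sparse tape indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write           # write the new symbol
        head += move                  # move the head (-1 left, +1 right, 0 stay)
    return "".join(cells[i] for i in sorted(cells)).rstrip(blank)

# Example program: invert each bit, halt on reaching a blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "1011"))  # -> 0100
```

Despite its simplicity, this table-driven loop is the whole model: any algorithm can, in principle, be encoded as such a transition table, which is why the Turing Machine anchors computability theory.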

Click to download PDF

A FRAM function in the Unified Foundational Ontology

As the Patriarca paper (1) suggests, the integration of the Functional Resonance Analysis Method (FRAM) with the Unified Foundational Ontology (UFO) could be a significant step forward in formalizing the conceptual underpinnings of the FRAM for safety modeling in complex socio-technical systems. FRAM has long been recognized for its ability to analyze systemic behaviour through a focus on functional interactions and variability. However, its flexibility and reliance on analyst interpretation have often led to inconsistencies and subjectivity in its application. This note supports an ontological foundation for FRAM, using UFO to address these challenges and advance FRAM’s utility.

At its core, FRAM is a method designed to represent how systems perform under varying conditions. It emphasizes emergent properties and variability, acknowledging that system behaviours arise from the dynamic interplay of functions rather than linear cause-and-effect chains. Central to FRAM is the concept of functions—activities or processes—and their interdependencies, which are depicted through inputs, outputs, preconditions, and controls. These functions serve as the building blocks of FRAM models, which aim to identify and understand potential resonances—unexpected amplifications of variability—that may disrupt system performance.
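The function-and-aspects structure described above can be made concrete with a small data-structure sketch. This is a hypothetical illustration, not the FMV's actual schema: the class and field names are ours, covering FRAM's six aspects (input, output, precondition, resource, control, time), and the coupling check simply matches one function's outputs to another's inputs.

```python
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    """Illustrative FRAM function with its six aspects."""
    name: str
    inputs: list = field(default_factory=list)         # what the function transforms
    outputs: list = field(default_factory=list)        # what it produces
    preconditions: list = field(default_factory=list)  # conditions that must hold first
    resources: list = field(default_factory=list)      # what it consumes or needs
    controls: list = field(default_factory=list)       # what supervises or constrains it
    time: list = field(default_factory=list)           # temporal constraints

def couplings(upstream, downstream):
    """Return the output->input couplings linking two functions."""
    return [o for o in upstream.outputs if o in downstream.inputs]

dispatch = FramFunction("Dispatch work order",
                        inputs=["maintenance request"],
                        outputs=["work order"],
                        controls=["dispatch procedure"])
repair = FramFunction("Perform repair",
                      inputs=["work order"],
                      outputs=["repaired equipment"],
                      resources=["technician"])

print(couplings(dispatch, repair))  # -> ['work order']
```

A FRAM model is, in this view, a set of such functions plus the couplings among them; resonance analysis then asks how variability in one function's output propagates along those couplings.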

Click here to download PDF

Cambrensian “Intelligent” FMV (FRAM Model Visualiser)

for estimating probabilities of outcomes in complex systems. | David Slater and Rees Hill

David Slater – dslater@cambrensis.org
Rees Hill – rees.hill@zerprize.co.nz


ABSTRACT

The Functional Resonance Analysis Method (FRAM) has emerged as a valuable tool for modeling and understanding the dynamic behaviour of complex socio-technical systems. While traditionally used as a qualitative method, recent advancements in the FRAM Model Visualiser (FMV) have introduced quantitative capabilities, enabling the systematic analysis of functional interactions and variability within a probabilistic framework. This paper explores the potential of FRAM to bridge the gap between human factors specialists, who prioritize qualitative insights, and engineers, who demand numerical rigour for system reliability and safety predictions.
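One way to picture a probabilistic treatment of functional variability is a Monte Carlo sketch like the one below. This is emphatically not the FMV's actual algorithm, just a toy model under stated assumptions: each function in a chain produces an acceptable output with some probability, and an upstream variation scales down the downstream function's chances (a crude stand-in for functional resonance); repeated sampling then estimates the probability of an acceptable end-to-end outcome.

```python
import random

def sample_chain(p_ok_by_function, degradation=0.5):
    """Sample one pass through a chain of functions.

    If an upstream output varies, the downstream function's probability
    of an acceptable output is scaled by `degradation` (toy resonance model).
    """
    upstream_ok = True
    for p_ok in p_ok_by_function:
        p = p_ok if upstream_ok else p_ok * degradation
        upstream_ok = random.random() < p
    return upstream_ok

def estimate_success(p_ok_by_function, trials=100_000, seed=1):
    """Monte Carlo estimate of the end-to-end success probability."""
    random.seed(seed)
    hits = sum(sample_chain(p_ok_by_function) for _ in range(trials))
    return hits / trials

# Three functions in series, each nominally 95% reliable.
print(round(estimate_success([0.95, 0.95, 0.95]), 3))
```

Even this toy model shows the quantitative payoff: the chain's overall success probability (about 0.91 here) is higher than naive independent multiplication would suggest, because a degraded intermediate output can still be recovered downstream.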

Click to download PDF