This page provides definitions of the essential FRAM terminology.
Although the method has reached a stable version, the definitions are subject to continuous improvement; new terms may therefore be added and existing definitions may change without notice.
Users are consequently advised to check the Glossary regularly for updates.
Latest update, January 30, 2016
Activity: the description of the work that is done. The origin is in French ergonomics in the 1950s, where a distinction was made between task (tâche) and activity (activité). The task corresponds to the formal description of work (checking out a customer at a supermarket, taking a blood sample in a hospital, starting up a coal-fired power plant). The activity corresponds to the description of how the task is carried out in practice. Task and activity thus correspond to work-as-imagined (WAI) and work-as-done (WAD). An activity comprises a number of functions, usually described in the order in which they were carried out (relative to a time-line).
Aspect (of a FRAM function): Each FRAM function is described by six aspects, namely: Input, Output, Precondition, Resource, Control, and Time. An aspect should be described if it is deemed reasonable by the analysis team, and if there is sufficient information or experience to do it. But it is not necessary to describe all six aspects for all functions; indeed, it may sometimes be either impossible or unreasonable to do so. For foreground functions, it is necessary to describe at least Input and Output. For background functions that represent a source of something, it may be sufficient to describe the Output. Similarly, for background functions that represent a sink, it may be sufficient to describe the Input. Note, however, that if only Inputs and Outputs are defined, the FRAM model regresses to a simple flowchart or network. (The term ‘aspect’ has become the established name; alternatives are ‘characteristics,’ ‘dependencies,’ or even – affectionately – ‘thingies.’)
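To make the six aspects concrete, the following is a minimal sketch (in Python; the class and the example values are invented for illustration and are not part of the FRAM or of the FMV) of how a function and its aspects could be held in code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FramFunction:
    name: str                                                # verb phrase, e.g. "Check out customer"
    inputs: List[str] = field(default_factory=list)          # what activates or is transformed
    outputs: List[str] = field(default_factory=list)         # what the function produces
    preconditions: List[str] = field(default_factory=list)   # states that must be true beforehand
    resources: List[str] = field(default_factory=list)       # what is needed or consumed
    controls: List[str] = field(default_factory=list)        # what supervises or regulates
    times: List[str] = field(default_factory=list)           # temporal constraints

# A foreground function should have at least Input and Output described.
check_out = FramFunction(
    name="Check out customer",
    inputs=["Customer at till"],
    outputs=["Receipt issued"],
    resources=["Cashier"],
)
```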
Approximate adjustments: When working conditions are underspecified or when time or resources are limited, it is necessary to adjust performance to match the conditions. This is a main reason for performance variability. But the very conditions that make performance adjustments necessary also mean that the adjustments will be approximate rather than perfect. The approximations are, however, under most conditions good enough to ensure successful performance.
Background function: Foreground and background refer to the relative importance of a function in the model. It has been common practice in human reliability assessment to invoke the presence of so-called Performance Shaping Factors (PSF) to describe the conditions that influence the events being studied. Instead of doing that, the FRAM represents the PSFs as background functions. The background functions denote that which affects the foreground functions being studied. In other words, the background functions constitute the context or working environment. These background functions should clearly be described in exactly the same way as other functions, although possibly with less detail. A designated background function may become a foreground function if the focus of the analysis changes.
A background function can have one or more Outputs, but cannot have any other aspects defined. The FMV will automatically categorise a function as a background function if it only has Output(s). But as soon as another aspect is added, the status changes from a background function to a foreground function. Background functions are typically used to denote the sources for Input, Precondition, Resource, Control, and/or Time aspects of downstream functions. A background function may therefore be seen as a placeholder for an upstream function that does not need to be specified further. Since background functions do not need to be described in more detail, they provide a convenient way of limiting the expansion of a FRAM model.
A drain is a special type of background function.
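As an illustration of the Output-only rule described above, the sketch below (Python, with a plain dictionary per function; the names and values are invented) categorises a function as background when it has one or more Outputs and no other aspects defined. This is not the FMV's own logic, and the drain case (Input only, see Drain) is left out for simplicity.

```python
def is_background(fn: dict) -> bool:
    # Background if it has Output(s) and no other aspects are defined.
    other_aspects = ("inputs", "preconditions", "resources", "controls", "times")
    return bool(fn.get("outputs")) and not any(fn.get(a) for a in other_aspects)

staffing = {"name": "Provide staffing plan", "outputs": ["Staffing plan"]}
treating = {"name": "Treat patient", "inputs": ["Patient admitted"],
            "outputs": ["Patient treated"], "resources": ["Staffing plan"]}

print(is_background(staffing))  # True: Output only, a source for downstream functions
print(is_background(treating))  # False: adding other aspects makes it a foreground function
```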
Bimodal principle: Technological components and systems function in a bimodal manner. Strictly speaking this means that for every element e of a system, the element being anything from a component to the system itself, the element will either function or it will not. In the latter case the element is said to have failed. The bimodal principle does, however, not apply to humans and organisations. Humans and organisations are instead multi-modal, in the sense that their performance is variable – sometimes better and sometimes worse but never failing completely. A human ‘component’ cannot stop functioning and be replaced in the same way a technological component can.
The principle of bimodal functioning may admittedly become blurred in the case of systems with a large number of components, and/or systems that depend on software. In such systems there may be intermittent functions, sudden freezes of performance, and/or slow drift, e.g., in sensor measurements. However, the principle of bimodal functioning is true even in these cases, since the components are bimodal. It is just that the systems are intractable and that the ability to describe and understand adequately what is going on is therefore limited.
Boundary: An important question for any analysis is how far it should go, or rather when it should stop (the stop rule). This is the issue of when a FRAM model is (reasonably) complete. An analysis may often go beyond the boundaries of the system as initially defined. This can be because the description of the aspects of a function makes it necessary to include additional functions in the model, or because some background functions may be found to vary and thereby affect designated foreground functions, in which case they should be treated as foreground functions and the boundary extended correspondingly. The semi-explicit stop rule of the FRAM is that the analysis should continue until there is no unexplained (or unexplainable) variability of functions – which is the same as saying that the analysis has reached a set of functions which can be assumed to be stable rather than variable. But the boundary is relative rather than absolute, and refers to functional characteristics rather than physical characteristics.
Calculated instantiation: In a calculated instantiation the FRAM model is interpreted as a network or a graph, where the functions are the vertices (nodes, points) and the potential couplings are the edges (arcs, lines). Because the functions are assumed to be carried out in a known and predetermined sequence, the outcome can be calculated from the instantiation. This calculation must, of course, respect that the uncertainty or unpredictability in the functions is not simply probabilistic, but may require other techniques, such as Bayesian networks, fuzzy sets, rule interpretation, etc. Regardless of which solution is chosen, the basic assumption is still that the edges (and vertices) are both well defined and static.
Calculated instantiations provide a solution to the practical demands for quantitative results. But interpreting a FRAM model as a graph or network makes it difficult to consider alternative instantiations (such as alternative sequences of functions) from the same model. This can be done by using emergent instantiations.
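As a hedged illustration of the idea, the sketch below (in Python, with invented functions and delay values) treats a calculated instantiation as a static directed graph and computes an outcome by walking the functions in a fixed, predetermined order. A real calculation could instead use Bayesian networks, fuzzy sets, or rule interpretation, as noted above.

```python
# Functions are vertices, potential couplings are edges; the evaluation
# order is fixed in advance, as assumed for a calculated instantiation.
couplings = {                     # upstream -> list of downstream functions
    "Prepare order": ["Pick goods"],
    "Pick goods": ["Ship goods"],
    "Ship goods": [],
}

# Hypothetical per-function delays (in minutes), propagated along the edges.
own_delay = {"Prepare order": 0, "Pick goods": 30, "Ship goods": 15}

accumulated = {}
for fn in ["Prepare order", "Pick goods", "Ship goods"]:   # predetermined sequence
    upstream = [u for u, downs in couplings.items() if fn in downs]
    inherited = max((accumulated[u] for u in upstream), default=0)
    accumulated[fn] = inherited + own_delay[fn]

print(accumulated["Ship goods"])   # 45: the outcome follows directly from the static graph
```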
Control (as aspect): Control, or control input, is that which supervises or regulates a function so that it results in the desired Output. Control can be a plan, a schedule, a procedure, a set of guidelines or instructions, a program (an algorithm), a ‘measure and correct’ functionality, etc. A different type of Control is social control and/or expectations. Social control can be external, e.g., the expectations of others, for instance the company or management, or internal, such as when we plan to do some work and make clear to ourselves when and how to do it; internal social control is thus a kind of self-regulation. External social control will typically be assigned to a background function. The description of a Control should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
Coupling: In the book ‘Normal accidents’, Perrow (1984) proposed that systems in general could be described by two dimensions called coupling and interactiveness. Coupling describes the degree to which subsystems, functions, and components are connected or depend upon each other; the degree of coupling can range from loose to tight. The FRAM makes a distinction between the potential couplings that are defined by a FRAM model, and the actual couplings that can realistically be assumed to exist for a given set of conditions (an instantiation). While the actual couplings always will be a subset of the potential couplings, they may be different from the couplings that were intended by the system design.
Downstream functions: A FRAM model describes the functions and their potential couplings, rather than how the functions are organised for specific conditions. It is therefore not possible to say with certainty whether one function always will be carried out prior to or after another function. In an instantiation of the model, detailed information about a specific situation or scenario is used to create an instance or a specific example of the model. This establishes a temporal organisation of the functions as they are likely to unfold (or become activated) in the scenario, and it is only possible to consider functions in their temporal and causal relations when an instantiation has been produced. Functions that – in the instantiation – happen after other functions, and which therefore may be affected by them, are called downstream functions. The notion of downstream function is relative rather than absolute.
Drain: A drain is a function that receives one or more Inputs; it represents a further process or continuation of the event or activity. A drain is a placeholder for downstream functions that are not included in the analysis; these can, of course, be developed in more detail if the analysis so requires.
Elaborate description of Output variability: The elaborate description of the possible variability of the Output from a function comprises ten dimensions (a short sketch in code follows the list):
The Output can vary with respect to timing (onset, when the Output is produced)
The Output can vary with respect to duration (how long it continues)
The Output can vary with respect to distance / length
The Output can vary with respect to direction
The Output can vary with respect to magnitude
The Output can vary with respect to speed
The Output can vary with respect to force / power / pressure
The Output can vary with respect to object
The Output can vary with respect to quantity / volume
The Output can vary with respect to sequence
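For illustration only, the ten dimensions can be captured as an enumeration, so that the variability of an Output can be recorded dimension by dimension; the class name is an assumption made here, not an official FRAM term.

```python
from enum import Enum

class OutputVariabilityDimension(Enum):
    TIMING = "timing (onset)"
    DURATION = "duration"
    DISTANCE_LENGTH = "distance / length"
    DIRECTION = "direction"
    MAGNITUDE = "magnitude"
    SPEED = "speed"
    FORCE_POWER_PRESSURE = "force / power / pressure"
    OBJECT = "object"
    QUANTITY_VOLUME = "quantity / volume"
    SEQUENCE = "sequence"
```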
Emergent instantiation: An emergent instantiation is the outcome of a stepwise evaluation of upstream-downstream couplings. The outcome is called emergent because different conditions may produce different instantiations, both in terms of the final outcome and in terms of the order (sequence) in which functions are triggered. The basic principle of an emergent instantiation is the following (a short sketch in code is given below).
In each step every function will be evaluated.
The evaluation will determine whether the values of the Outputs from upstream functions match the triggering conditions of the function.
If so, it will also be determined how the values of the Outputs from immediate upstream functions affect the variability of the function.
The result of the evaluation will be updated values for the function’s Output (or Outputs) – either a new value or the same value refreshed. This value will be used during the next step. The stepwise evaluation stops when a specific criterion (the stop rule) has been fulfilled.
Emergent instantiations will be made by means of the FRAM Model Interpreter (under development).
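Pending the FRAM Model Interpreter, the stepwise principle can be sketched as follows. This is a minimal illustration with invented functions and triggering conditions, not the FMI itself; the order in which functions fire emerges from the matching of Outputs and conditions rather than from a predetermined sequence.

```python
model = {
    "Receive order": {"needs": [],                 "output": "order received"},
    "Pick goods":    {"needs": ["order received"], "output": "goods picked"},
    "Ship goods":    {"needs": ["goods picked"],   "output": "goods shipped"},
}

produced = set()   # Output values available from earlier steps
fired = []         # (step, function) pairs, in the order they were triggered
done = set()

for step in range(10):                      # stop rule: a bounded number of steps
    available = set(produced)               # only Outputs from previous steps count
    changed = False
    for name, fn in model.items():
        if name in done:
            continue
        # does the function's triggering condition match the upstream Outputs?
        if all(need in available for need in fn["needs"]):
            produced.add(fn["output"])      # updated Output value for the next step
            fired.append((step, name))
            done.add(name)
            changed = True
    if not changed:                         # alternative stop rule: nothing changed
        break

print(fired)  # [(0, 'Receive order'), (1, 'Pick goods'), (2, 'Ship goods')]
```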
Equivalence of successes and failures: Whenever something is done, the intention is always to do something right and never to do something wrong. For each action, the choice of what to do is determined by many different things, including competence, understanding of the situation, experience, habit, demands, available resources, and expectations about how the situation may develop – not least about what others may do. If the expected outcome is obtained, the next action is taken, and so on. But if the outcome is unexpected, then the preceding action is re-evaluated and classified as wrong rather than right, as an error or as a mistake, using the common but fallacious post hoc ergo propter hoc argument. With hindsight, it is pointed out what should have been done, if only people had made the necessary effort at the time. The whole argument is, however, unreasonable because the action was chosen based on the expected rather than the actual outcome. Failures and successes are equivalent in the sense that we can only say whether the preceding action was right or wrong after the outcome is known. That changes the judgement of the action, but not the action itself.
Execution conditions: See Resource.
Foreground functions: Foreground and background refer to the relative importance of a function in the model. The foreground functions denote that which is being analysed or assessed, i.e., the focus of the investigation. A designated foreground function may become a background function if the focus of the analysis changes.
Function: In the FRAM, a function represents the means that are necessary to achieve a goal. More generally, a function refers to the activities – or set of activities – that are required to produce a certain outcome. A function describes what people – individually or collectively – have to do in order to achieve a specific aim. A function can also refer to what an organisation does: for example, the function of an emergency room is to treat incoming patients. A function can finally refer to what a technological system does either by itself (an automated function) or in collaboration with one or more humans (an interactive function or coagency). The description of a Function should be a verb, if it is a single word, or begin with a verb if it is a short sentence.
Functional resonance: Functional resonance is defined as the detectable signal that emerges from the unintended interaction of the everyday variability of multiple signals. The signals are usually subliminal, both the ‘target’ signal and the combination of the remaining signals that constitutes the noise. But the variability of the signals is subject to certain regularities that are characteristic for different types of functions, hence not random or stochastic. Since the resonance effects are a consequence of the ways in which the system functions, the phenomenon is called functional resonance rather than stochastic resonance.
Hexagons (FRAM): In the usual graphical rendering of a FRAM model, or of an instantiation, a function is represented by a hexagon. The reason is obviously that a hexagon has six corners corresponding to the six aspects. If the number of aspects is ever changed (reduced or increased), the graphical rendering will have to change accordingly. (See also snowflake.)
Input (as aspect): The Input to a function is traditionally defined as that which is used or transformed by the function to produce the Output. The Input can represent matter, energy, or information. This definition corresponds to the normal use of the term in flowcharts, Process and Instrumentation Diagrams, process maps, logical circuits, etc. There is, however, another meaning that is just as important for the FRAM, namely the Input as that which activates or starts a function. The Input in this sense may be a clearance or an instruction to begin doing something, which in turn requires that the input is detected and recognised by the function. While this nominally can be seen as being data, it is more important that the Input serves as a signal that a function can begin. Technically speaking, the Input represents a change in the state of the environment, just as if the Input were matter or energy. The description of an Input should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
Instantiation: An instantiation of a FRAM model is a ‘map’ of how a set of functions are mutually coupled under given conditions (favourable or unfavourable) or for a given time-frame. The set may include all the functions in a model or a subset thereof. The couplings that are represented by a specific instantiation can be seen as representing the order or sequence in which the functions were triggered for the given conditions. The sequence may include parallel ‘paths’ and ‘iterations’ (cf. the Record and Playback options of the FMV). If the model is used to represent an event that has taken place, such as in an accident investigation, the instantiation will typically cover the time-frame of the event and represent the couplings that existed at the time. For a risk assessment, it is more appropriate to work with a set of instantiations, where each instantiation represents the couplings between upstream and downstream functions at a given time or for given conditions. The instantiation may be represented graphically, although this is not necessary. (See also Performance Variability.)
There are two different forms of instantiations, called calculated instantiations and emergent instantiations, respectively.
Model: A FRAM model describes a system’s functions (the union of the sets of foreground functions and background functions). The potential couplings among functions are defined by how the aspects of the functions are described. The FRAM model, however, does not describe the actual couplings that may exist under given conditions (see: Instantiation). To represent the result of performance variability of a function – or of several functions – therefore requires an instantiation (q.v.). A graphical representation of a FRAM model will be a set of hexagons, where each hexagon stands for a function, but without any lines or connections among functions. There is therefore no default orientation of the graphical representation.
Because the couplings in a FRAM model are potential rather than actual, a FRAM model is not a process model or even a net (graph). An instantiation of a FRAM model may be isofunctional to a process model or a net.
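As a sketch of how potential couplings can be read off a model, the code below (with invented functions) treats every case where the Output of one function appears among the aspects of another function as a potential coupling. The matching-by-name rule is an assumption made for illustration; the FMV's own logic may differ.

```python
model = [
    {"name": "Plan shift", "outputs": ["Staffing plan"], "inputs": [],
     "preconditions": [], "resources": [], "controls": [], "times": []},
    {"name": "Treat patient", "outputs": ["Patient treated"], "inputs": ["Patient admitted"],
     "preconditions": [], "resources": ["Staffing plan"], "controls": [], "times": []},
]

potential_couplings = []
for up in model:
    for down in model:
        if up is down:
            continue
        for aspect in ("inputs", "preconditions", "resources", "controls", "times"):
            for value in up["outputs"]:
                if value in down[aspect]:
                    potential_couplings.append((up["name"], value, aspect, down["name"]))

print(potential_couplings)
# [('Plan shift', 'Staffing plan', 'resources', 'Treat patient')]
```

An instantiation would then select, for given conditions, which of these potential couplings are actual.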
Output (as aspect): The Output from a function is the result of what the function does, for instance by processing the Input. The Output can therefore represent matter, energy, or information – the latter as a command issued or the outcome of a decision. The Output can be seen as representing a change of state – of the system or of one or more output parameters. The Output can also represent the signal that starts a downstream function. The description of an Output should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
Output variability: The variability in how a function is carried out may show itself in the variability of the Output. Since the Output may be used as Input, Precondition, Resource, Time, or Control by one or more downstream functions, the variability of the Output may affect how the downstream functions are carried out and thereby the overall level of performance of the model. (Strictly speaking, the performance of the instantiation of the model.)
The variability of the Output can be described in either a simple or elaborate form.
Performance variability: The study of risk and accidents has traditionally focused on how failures or malfunctions of components or elements (technological, human, organisational) could happen and how the effects could propagate through the system. This can be called a bimodal view of functions and performance. The FRAM is based on the principle of equivalence of successes and failures and the principle of approximate adjustments. Performance is therefore in practice always variable. The performance variability of upstream functions may affect the performance variability of downstream functions, and thereby lead to non-linear effects (functional resonance).
Precondition (as aspect): In many cases a function cannot begin before one or more Preconditions have been established. These Preconditions can be understood as system states that must be true, or as conditions that must be verified before a function is carried out. Although a Precondition is a state that must be true before a function is carried out, it does not itself constitute a signal that starts the function. An Input, on the other hand, can activate a function. This simple rule can be used to determine whether something should be described as an Input or as a Precondition. The description of a Precondition should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
Resonance: In physical systems, classical (or mechanical) resonance refers to the phenomenon that a system can oscillate with larger amplitude at some frequencies than at others. These are known as the system’s resonant (or resonance) frequencies. At these frequencies even small external forces that are applied repeatedly can produce large amplitude oscillations, which may seriously damage or even destroy the system.
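As a standard textbook illustration (not part of the FRAM), the steady-state amplitude of a damped oscillator with mass $m$, natural frequency $\omega_0$ and damping ratio $\zeta$, driven by a force $F_0 \cos(\omega t)$, is

\[ A(\omega) = \frac{F_0 / m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + \left(2 \zeta \omega_0 \omega\right)^2}} \]

For light damping the amplitude peaks sharply when $\omega$ is close to $\omega_0$, which is the resonance described above.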
Resource (as aspect): A Resource is something that is needed or consumed while a function is carried out. A Resource can represent matter, energy, information, competence, software, tools, manpower, etc. Time can, in principle, also be considered as a Resource, but since Time has a special status it is treated as a separate aspect. Since some Resources are consumed while the function is carried out and others are not, it is useful to distinguish between Resources on the one hand and Execution Conditions on the other. The difference is that while a Resource is consumed by a function, so that there will be less of it as time goes by, an Execution Condition is not consumed but only needs to be available or exist while a function is active. The difference between a Precondition and an Execution Condition is that the former is only required before the function starts, but not while it is carried out. The description of a Resource (or an Execution Condition) should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
Simple description of Output variability: The simple description of the possible variability of the Output from a function comprises two dimensions (a short sketch in code follows the list):
The Output can vary with respect to time, being either too early, on time, too late, or omitted (not occurring at all).
The Output can vary with regard to precision, being either precise, imprecise, or acceptable.
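Purely for illustration, the two dimensions of the simple description can be written as enumerations; the class names are assumptions made here, not official FRAM terms.

```python
from enum import Enum

class OutputTiming(Enum):
    TOO_EARLY = "too early"
    ON_TIME = "on time"
    TOO_LATE = "too late"
    OMITTED = "omitted (not occurring at all)"

class OutputPrecision(Enum):
    PRECISE = "precise"
    ACCEPTABLE = "acceptable"
    IMPRECISE = "imprecise"
```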
Snowflakes: ‘Snowflakes’ is a nickname for FRAM hexagons. The name is based on the similarity between the graphical representations. The idea of a non-linear development was also initially communicated by using an avalanche as an analogy. Today, the ‘snowflake’ reference is, however, considered obsolete and potentially misleading, and should therefore be avoided.
Stochastic resonance: Stochastic resonance can be defined as the enhanced sensitivity of a device to a weak signal that occurs when random noise is added to the mix. The outcome of stochastic resonance is non-linear, which simply means that the output is not directly proportional to the input. The outcome can also occur – or emerge – instantaneously, unlike classical resonance, which must build up over time.
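The following toy simulation (in Python with NumPy; the signal, threshold, and noise levels are all invented for illustration and have nothing to do with the FRAM itself) shows the effect: a weak, sub-threshold sine signal is invisible to a simple threshold detector on its own, is picked up best at a moderate noise level, and is swamped again when the noise grows too large.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
signal = 0.4 * np.sin(t)      # weak signal, below the detection threshold of 1.0
threshold = 1.0

for noise_level in (0.0, 0.5, 5.0):
    scores = []
    for _ in range(50):       # average over repeated noisy trials
        noisy = signal + rng.normal(0.0, noise_level, size=t.size)
        output = (noisy > threshold).astype(float)
        # correlation between the detector output and the hidden signal
        scores.append(np.corrcoef(output, signal)[0, 1] if output.std() > 0 else 0.0)
    print(noise_level, round(float(np.mean(scores)), 3))
# Typical result: ~0.0 without noise, a clearly positive value at moderate
# noise, and a much smaller value when the noise dominates.
```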
System: Systems are usually defined with reference to their structure, i.e., in terms of their parts and how they are connected or put together. Common definitions emphasise both that the system is a whole, and that it is composed of independent parts or objects that are interrelated in one way or another. Definitions of this type make it natural to rely on the principle of decomposition to understand how a system functions, and to explain the overall functioning in terms of the functioning of the components or parts – keeping in mind, of course, that the whole is larger than the sum of the parts. It is, however, entirely possible to define a system in a different way, namely in terms of how it functions rather than in terms of what the components are and how they are put together. From this perspective, a system is a set of coupled or mutually dependent functions. This means that the characteristic performance of the system – of the set of functions – cannot be understood unless it includes a description of all the functions, i.e., the set as a whole. The delimitation of the system is thus not based on its structure or on relations among components (the system architecture). An organisation, for instance, should not be characterised by what it is but by what it does. Neither should it be characterised by the people who are in a given place (on the organisation chart or in reality) but by the functions they perform. One consequence of a functional perspective is that the distinction between a system and its environment, and thereby also the system boundary, becomes less important, cf. the distinction between foreground and background functions.
Task: the formal description of work as it is expected to be carried out. According to the Shorter Oxford Dictionary, a task is “any piece of work that has to be done,” which is generally taken to mean one or more functions or activities that must be carried out to achieve a specific goal. The origin is in French ergonomics in the 1950s, where a distinction was made between task (tâche) and activity (activité). The task corresponds to the formal description of work (checking out a customer at a supermarket, taking a blood sample in a hospital, starting up a coal-fired power plant). The activity corresponds to the description of how the task is carried out in practice. Task and activity thus correspond to work-as-imagined (WAI) and work-as-done (WAD). A task is usually described in a specific order or organisation as the result of a task analysis, which is the study of what people, individually and collectively, are required to do to achieve a specific goal, or, simply put, as who does what and why. A task usually comprises a number of steps and/or sub-tasks, hence a number of functions.
Time (as aspect): The Time aspect of a function represents the various ways in which Time can affect how a function is carried out. Time, or rather temporal relations, can be seen as a form of Control. One example of that is when Time represents the sequencing conditions. A function may, for instance, have to be carried out (or be completed) before another function, after another function, or overlapping with – parallel to – another function. Time may also relate to a function alone, seen in relation to either clock time or elapsed time. Time can also be seen as representing a Resource, such as when something must be completed before a certain point in time, or within a certain duration (as when a report must be produced in less than a week). Time can, of course, also be seen as a Precondition, e.g., that a function must not begin before midnight, or that it must not begin before another function has been completed. Yet rather than having Time as a part of any of these three aspects – or indeed of the four, since it could conceivably also be considered as an Input – it seems reasonable to recognise its special status by having it as an aspect in its own right. The description of a Time aspect should be a noun, if it is a single word, or begin with a noun if it is a short sentence.
The description of a Time aspect could also be one of the commonly used descriptors of a temporal relationship [Allen, J. F. (1983). Maintaining knowledge about temporal intervals, Communications of the ACM, 26(11), 832-843].
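For reference, Allen (1983) defines thirteen interval relations; listed here as an enumeration purely for convenience, they could serve as descriptors for a Time aspect (e.g., "Function A before Function B").

```python
from enum import Enum

class AllenRelation(Enum):
    BEFORE = "before"
    MEETS = "meets"
    OVERLAPS = "overlaps"
    STARTS = "starts"
    DURING = "during"
    FINISHES = "finishes"
    EQUALS = "equals"
    AFTER = "after"                  # inverse of BEFORE
    MET_BY = "met by"                # inverse of MEETS
    OVERLAPPED_BY = "overlapped by"  # inverse of OVERLAPS
    STARTED_BY = "started by"        # inverse of STARTS
    CONTAINS = "contains"            # inverse of DURING
    FINISHED_BY = "finished by"      # inverse of FINISHES
```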
Upstream functions: An upstream function is defined in a similar way as a downstream function (q.v.) as a function that – in a given instantiation – happens before other functions, and which therefore may affect them. The notion of upstream function is relative rather than absolute.
Variability: See Performance Variability.