Theory


FRAM Methodology Developments

A number of centres are using the FRAM methodology and developing ideas that not only extend it to a wider range of applications, but also validate it and broaden its acceptance among a wider community of both academic and practising proponents.


A major attraction of the approach is that it offers a way of modelling complex systems and systems of systems which does not rely on “levels of abstraction” and does not get bogged down in the physical details of components, but requires only careful clarification of what those components are supposed to accomplish.

One of the key features of the methodology is the teasing out of the actual variability found in practice (as-is – Safety-II) in the various functional interactions. Currently this variability is assigned from experience, aspect by aspect, in order to track the emergent behaviours that can result in unfortunate consequences (resonances) as well as in the successful completion of tasks.
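
As a purely illustrative sketch of what such aspect-by-aspect assignment might look like in code (Python), the fragment below records assumed timing and precision variability against the six standard FRAM aspects. The function names, couplings and variability values are hypothetical placeholders, not taken from any published model.

from dataclasses import dataclass, field

# The six aspects of a FRAM function are standard; the timing/precision
# vocabulary follows common FRAM practice, but the data structure itself
# is only an assumption for this example.
ASPECTS = ("Input", "Output", "Precondition", "Resource", "Time", "Control")

@dataclass
class Function:
    name: str
    kind: str                                        # "Human", "Technological" or "Organisational"
    couplings: dict = field(default_factory=dict)    # aspect -> name of the upstream function it couples to
    variability: dict = field(default_factory=dict)  # aspect -> {"timing": ..., "precision": ...}

    def assign_variability(self, aspect, timing, precision):
        # Record, from experience, how this aspect is expected to vary in practice.
        assert aspect in ASPECTS, f"unknown aspect: {aspect}"
        self.variability[aspect] = {"timing": timing, "precision": precision}

# Hypothetical example: a human function whose Input tends to arrive late and
# imprecisely -- a potential source of functional resonance downstream.
brief_crew = Function("Brief crew", kind="Human")
brief_crew.couplings["Input"] = "Prepare flight plan"
brief_crew.assign_variability("Input", timing="too late", precision="imprecise")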


Several groups are attempting to develop ways to predict the results of these interdependent variations. These include assigning typical variabilities to the different types of function (Human, Technological, or Organisational) and propagating the consequences automatically (Rees et al.). Others are using a Monte Carlo approach to sum the probable effects of the component functions in a particular instantiation (Patriarca et al.). A number of people are interested in applying more formal methods to test FRAM instantiation models for validity, completeness, etc. (Jin Tian et al., van Kleef, Jeronymo). Other groups are working on ways to quantify the predicted outcomes of the modelled systems using Bayesian Belief Nets and similar techniques (Slater et al.).
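
To make the Monte Carlo idea concrete, the following Python sketch samples an assumed output-variability distribution for each function in an instantiation and estimates how often the combined variability exceeds an arbitrary threshold. The function names, distributions, scoring weights and threshold are all invented for illustration and do not reproduce Patriarca et al.'s actual method.

import random

# Illustrative weights for output variability, loosely following the FRAM
# phenotypes of timing and precision (the numbers are placeholders only).
TIMING = {"on time": 0, "too early": 1, "too late": 2, "not at all": 4}
PRECISION = {"precise": 0, "acceptable": 1, "imprecise": 3}

# Assumed probability distributions over each function's output variability,
# as they might be assigned from experience (all figures are invented).
functions = {
    "Prepare flight plan": {   # Organisational
        "timing": [("on time", 0.8), ("too late", 0.2)],
        "precision": [("precise", 0.7), ("acceptable", 0.3)],
    },
    "Update weather data": {   # Technological
        "timing": [("on time", 0.95), ("not at all", 0.05)],
        "precision": [("precise", 0.9), ("imprecise", 0.1)],
    },
    "Brief crew": {            # Human
        "timing": [("on time", 0.7), ("too late", 0.3)],
        "precision": [("acceptable", 0.6), ("imprecise", 0.4)],
    },
}

def sample(dist):
    # Draw one outcome from a [(value, probability), ...] distribution.
    r, cum = random.random(), 0.0
    for value, p in dist:
        cum += p
        if r < cum:
            return value
    return dist[-1][0]

def run_instantiation():
    # One trial: sample every function's output variability and sum a crude
    # "combined variability" score for the whole instantiation.
    return sum(
        TIMING[sample(spec["timing"])] + PRECISION[sample(spec["precision"])]
        for spec in functions.values()
    )

trials = [run_instantiation() for _ in range(10_000)]
THRESHOLD = 5  # arbitrary level above which the combined variability is 'of concern'
print(f"P(score > {THRESHOLD}) ~= {sum(s > THRESHOLD for s in trials) / len(trials):.3f}")

In a real study the crude scoring would be replaced by the model's own aggregation rules and the distributions would be elicited during the analysis, but the loop structure of sampling each function, combining the results and repeating many times is the essence of the Monte Carlo approach described above.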


But it needs to be emphasised that a FRAM model of a system is merely a systematic description of key functions. It is not a network, and it is not a flow model. Even individual task instantiations, with their interdependent links, are really emergent configurations, intimately dependent on previous instantiations and on the evolving aspects of other functions.


So it is vital to establish a consistent, consensus benchmark of what lies at the core of the approach, to prevent it from deteriorating into a plethora of divergent usages and terminology. We hope to achieve this through a formal endorsement of validity from a standards panel chaired by Erik.