Category: wicked problems

  • Improvising Design For Wicked Problems 

    Why is design improvisational?  We talk about design as if it were fixed: as if there were a clear set of rules for design – one best way to design everything. We celebrate designers who produce especially elegant or usable artifacts as if they were possessed of supernatural powers. On this view, design is just the application of “best practice” principles to a specific situation: we observe how the users of a designed artifact or system work, then design the artifact or system accordingly. Why does that approach fail so often?

    The issue with design rules (and patterns) is that they only deal with well-defined problems.

    Designers acquire a repertoire of design-elements-that-work: patterns that fit specific circumstances and uses. Experienced designers build up a deep understanding, over time, of which problem-elements each of these patterns resolves. So they can assess a situation, recognize familiar problem-elements, then fit these with design patterns that will work in those circumstances. The problem comes when a designer faces a novel or unusual situation that they have not encountered before. Novice designers encounter this situation a great deal, but even experienced designers must deal with emergent design in a novel context. In these circumstances, designers iterate their design, as shown in Figure 1. They identify (often partial) problems, ideate/conceive relevant solutions, give those solutions form with a prototype, then evaluate the prototype in context. This often reveals emergent user needs or problems, which are explored in the next iteration.

    The stages of iterative design: identify problem, ideate solutions, prototype designed solution, evaluate designed solution in context, explore remaining user needs.
    Figure 1. Iterative Design

    An important aspect of iterative design is that iterations can occur within cycles. As designers succeed or fail at successive designs, they accumulate experiential knowledge that allows them to assess new situations quickly and to understand which design elements will work or fail in that situation, looping back to remediate the design as they spot logical flaws and gaps. The problem with this is that (as the Princess said) you have to kiss an awful lot of frogs to get a Prince. An awful lot of people end up with really bad designs because their designer did not recognize elements of the situation well enough to understand which pattern-elements to implement. If you are really unlucky, you will also end up with one of those designers who feel it is their mission in life to prevent the end-user “mucking about with” their design. If you are lucky, your designer will recognize that it is your design, not theirs, and will design artifacts and systems in ways that allow people to adapt and improvise how they are used.

    This means that design-goals are constantly changing between iterations, as shown in Figure 2. The designer starts by designing for the subset of goals they understand. As they explore and test the design with users, they become aware of new requirements and modify the subset of goals they are designing for. As part of this process, they also discard any requirements that are no longer associated with perceived user needs.

    The parabola of process steps introduced by goal emergence in design
    Figure 2. Goal-emergence in design

    Improvisation takes a multitude of forms. It might be that a user wishes to customize the color of their screen (because the designer thought that a good interface should look like a play-school). This may not do much for the function of your work-system, but it does mean that your disposition towards work is a heck of a lot sunnier as you use it. Or it might be that the information system you use expects you to enter data on one step of your work before another. You might be able to enter data into a separate screen for each step, reordering the steps as you wish. More usually, you have to enter fake data into the first step, then go back later to change it once you have the real data. This is because IT systems designers treat software design as a well-structured problem. A well-structured problem is one that contains the solution within its definition. Defining the problem as a tic-tac-toe game application means that you have a set of rules for how the game is played that completely define how the application should work. This is fine if everything goes to plan, but a huge pain for users when it does not. The only discretion left to the user is how to format the results in a printed report, which is not much comfort if your whole transaction failed because you were prevented from going back to change one of the inputs. This is not rocket science – developers need to design systems that let users work autonomously.
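
    To see what “contains the solution within its definition” means in practice, here is a minimal, hypothetical sketch (in Python; the board representation and function names are my own, not from the original post) of the tic-tac-toe example. The rules of the game completely determine what the code must do, which is exactly what makes it a well-structured problem – and exactly what most business situations are not.

        # Tic-tac-toe is well-structured: the rules below fully define correct behavior.
        # (Illustrative sketch only - board layout and names are invented for this example.)
        WIN_LINES = [
            (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
            (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
            (0, 4, 8), (2, 4, 6),              # diagonals
        ]

        def winner(board):
            """Return 'X' or 'O' if a player has three in a row, else None.
            `board` is a list of 9 cells containing 'X', 'O', or ' '."""
            for a, b, c in WIN_LINES:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        # The rules leave no room for negotiation or interpretation:
        assert winner(['X', 'X', 'X',
                       'O', 'O', ' ',
                       ' ', ' ', ' ']) == 'X'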

    Business applications tend to present wicked problems [2]. A wicked problem cannot be defined objectively, for all the reasons identified in Figure 3. Solving a wicked problem needs business users and stakeholders to agree on what problems they face, their priorities in resolving these, and what their change-goals are.

    Diagram lists the constraints on Design Posed by Wicked Problems:
Every problem is unique;
Problems have multiple causes & are interrelated;
No clear problem definition; multiple, subjective problems compete for attention;
Problems span organizational, functional, and management boundaries;
Multiple stakeholders have different perspectives & conflicting agendas;
Problem-definition depends on framing, which constrains & directs possible solutions;
Problems have no clear structure or logic to judge when a solution is reached;
Change-agents are liable for consequences of actions; can make the situation worse as well as better;
No opportunity to learn by trial-and-error; every solution attempt changes the problem-situation;
No way to test a solution; need to wait and see knock-on impacts of change.
    Figure 3. Constraints on Design Posed by Wicked Problems (Rittel & Webber, 1973) [2]

    A wicked problem can be understood as a web of interrelated problems. It is not always clear what the consequences of solving any part of this mess will be. Some of the problems may have “obvious” solutions. But implementing these solutions may make other, related problems worse or better. This is why iterative design is central to resolving wicked problems. In general, stakeholders don’t understand what they need until they see it. So solutions must be designed flexibly, to allow changes to be implemented as the consequences are realized and to permit adaptation-in-use by stakeholders and users. People are infinitely improvisational. They develop work-arounds and strategies to manage poor design. But, as Norman [1] observes, why should users have to develop work-arounds for poor design?

    What is it about the design process that leads us to such constraining IT systems, interfaces, and work procedures – procedures that are based on the system design, rather than system designs that are based on flexible work-procedures?

    • It is difficult to manage design in situations where design goals evolve as you understand more about the context in which a design will be used. You are trying to hit a moving target.
    • Organizational change problems associated with situated design (see the footnote below) are wicked problems. The design constraints that result from the social and political nature of wicked problems (shown in Figure 3) create a more complex and iterative process than the design of well-defined technology, which can employ a more reductionist approach.

    Other papers and posts on this site explore findings from my research studies and reflections from my own experience in design. Elsewhere, I discuss some key underlying principles of design and explore how the design process works in practice (rather than how we manage it now, which is pretty much based on unsubstantiated models of how humans think, dating from the 1950s). We need to develop approaches that allow us to incorporate multiple perspectives of “the problem” and that manage the evolution of design goals and requirements. Improvisationally.


    Footnote [1]: Situated design is design that considers the situation or context in defining how a design works. If we consider organizational problems to be wicked problems, the majority of organizationally-situated design will deal with problems that are subjective and contentious, that involve multiple perspectives, and that are systemic in their impacts.

    References

    [1] Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books, New York.
    [2] Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169.

  • Systemic Analysis

    In systemic thinking, we often debate whether to start with the “big picture” and then decompose the problem situation into sub-problems (the traditional, decompositional approach to requirements analysis), or whether to start with the detailed problems that stakeholders articulate and analyze how these are related (the bottom-up approach). To me, as a systemic analyst, these have always seemed to be two halves of the same approach, so it was interesting to remember my own confusion in the early days of my career, when this was not so obvious … 🙂

    Systemic analysis can be related to the learning-cycle. There are many variants of this. Dewey introduced the world to the idea that we learn about how the world works through interactions with that world (experiential learning):

    ‘An experience is always what it is because of a transaction taking place between the individual and what, at the time, constitutes the environment’ (Dewey, 1938, p. 43)

    Lewin suggested a four-stage model of experiential learning that cycles between the concrete and the abstract. This later became known as the “hermeneutic circle”: an individual cycles around these stages of learning, pondering the meaning of what they experience in each cycle, then using this meaning to construct an increasingly complex mental model of the world.

    Stages of the hermeneutic circle of learning: concrete experience; observations and reflections; the formation of abstract concepts and generalizations; testing implications of concepts in new situations.

    To engage in systemic thinking, we need approaches to analysis that permit us to cycle around these stages multiple times – especially when we encounter new information. In the pages that follow, I explore some methods and techniques used to support cyclical, systemic analysis.

  • Wicked Problems

    Organizational Problems are Wicked Problems

    A wicked problem is one that is just too complex and messy (comprising multiple problem-elements) to be easily defined. Because it can’t be defined, it can’t be resolved using regular analysis methods, such as those used to generate IT system requirements. Different stakeholders will define the problem in different ways, depending on the parts they have encountered in their work. The emergence of multiple problem-definitions as the problem is explored distinguishes “wicked” problems from the “tame” problems that organizational analysts and IT systems developers typically deal with. While tame problems can be defined in terms of goals and rules, and relate to a clear scope of action, wicked problems consist of many interrelated problems, each with its own organizational scope and goals. As a result, wicked problems have vague, emergent goals and boundaries. Ways of framing wicked problems are negotiated among stakeholders who hold radically different views of the organization (Rittel & Webber, 1973).

    “It comes as no particular surprise to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”
    Kaplan, Abraham (1964) “The Age of the Symbol—A Philosophy of Library Education,” The Library Quarterly: Information, Community, Policy, Vol. 34, No. 4 (Oct., 1964), pp. 295-304.

    These types of problem are also known as systemic problems because we use systems thinking (a.k.a. systemic analysis) to resolve them. Systemic analysis methods use a “divide-and-conquer” approach to exploring problems. The sub-problems prioritized by various stakeholders are explored and debated across the wider group of change managers. Goals and potential solutions emerge as “the problem” is framed and re-framed in multiple ways over time, and across stakeholders. This process results in organizational learning, as stakeholders acquire an improved understanding of others’ perspectives across organizational functions and boundaries. Systemic analysis also allows change managers to explore the “knock-on” impacts of change, allowing them to appreciate conflicts and tradeoffs between perspectives and to predict the impact of changes to one area of the organization on other areas and functions.

    What are Wicked Problems?

    Wicked problems present as tangles of interrelated problems, or “messes” (Ackoff, 1974). Because these problems are so messy, they are defined by various stakeholders in multiple ways, depending on the parts that they perceive — which in turn depends on where they are in the organization, their experience and their disciplinary background.

    If you try to model a complex problem-situation, you will rapidly discover that any “system” of work consists of subsystems, the definition and scope of which depends on where the definer stands in the organization. To act upon a wicked problem, you need to understand the multiplicity of perspectives that various stakeholders take. Often, a single person will hold multiple perspectives depending on the role they are playing at any point in time. For example, I trained as an engineer, I was introduced to systemic analysis during my education, and I adopted a social science perspective as an academic. So I can happily (and obliviously) define any situation in three different ways, depending on which “hat” I am wearing when I do so!

    Wicked problems are so-called because they are not “well-structured” – that is, amenable to analytical methods of problem-solving. This means that analysts often experience difficulty in defining the problem that needs solving or selecting an appropriate technique to model the problem.

    “Successful problem solving requires finding the right solution to the right problem. We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.”
    Russell Ackoff (1974) Redesigning the Future. Wiley.

    Wicked Problems Require Systemic Analysis

    As a result, Wicked Problems have a number of characteristics not found in the sorts of problems for which professional analysts and change-agents are typically trained. They rely more on problem-negotiation than on analysis, cannot be resolved by simple trial and error (every solution attempt changes the situation), and need to be investigated rather than analyzed. Any analysis imposes a model or structure that includes some aspects of the situation and excludes others, creating an expectation that the elements found will be related in specific ways:

    “… it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”
    Maslow, Abraham Harold (1966). The Psychology of Science: A Reconnaissance. Harper & Row.

    The bottom line is that, while most analysis approaches focus on the form of the solution, wicked-problem analysis needs to investigate the nature and scope of the problem. Successful resolution of wicked problems requires appreciative design techniques (Vickers, 1968), where the definition of a solution emerges in tandem with the definition of the problem. Analysts must become enculturated in the problem-situation to understand the stakeholder perspectives that drive various definitions of wicked problems. They need to be familiar with systemic analysis of problems. Plus, they need to be good facilitators, capable of negotiating solutions across multiple stakeholders with multiple viewpoints and priorities.

    References

    Mitroff, I.I., Kilmann, R.H. (2021). Wicked Messes: The Ultimate Challenge to Reality. In: The Psychodynamics of Enlightened Leadership. Management, Change, Strategy and Positive Leadership. Springer, Cham. https://doi.org/10.1007/978-3-030-71764-3_3

    Pickering, Andrew (1995) The Mangle of Practice. University of Chicago Press, Chicago IL.

    Rittel, H. W. J. (1972). Second Generation Design Methods. Design Methods Group 5th Anniversary Report: 5-10. DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H. W. J. and M. M. Webber (1973). “Dilemmas in a General Theory of Planning.” Policy Sciences 4, pp. 155-169.

    Vickers G. (1968) Value Systems and Social Process. Tavistock, London UK.

  • Soft Systems Analysis – Insights and Supplementary Tools

    Insights on Soft Systems Analysis

    SOFT SYSTEMS are purposeful systems of human activity that represent how various business processes, groups, or functions work, are organized, and interact. The key idea, as presented by the originator of the approach, Peter Checkland, is to generate multiple views of a situation of interest that reflect the multiplicity of perspectives on what people are trying to achieve. We employ a “divide-and-conquer” approach, modeling each purpose of the larger system separately and exploring its internal logic, problems with how or what activities are performed, and measures or processes for monitoring the success of this purpose. This provides us with a set of “sub-system” models of human activity that can be compared with the real world to define desirable and feasible change, as shown in Figure 1. But it also provides insights into a larger set of activities that can be integrated into a model of the “big picture” system of work (or play) that we are attempting to improve.

    The Process of Soft Systems Methodology (Checkland, 1999)
    Figure 1. The Process of Soft Systems Methodology (Checkland, 1999)

    By focusing on the method – the generation of Root Definitions and Conceptual Models to be compared with real-world activity – it is easy to miss the small miracle that defining multiple models of relevant purposeful activity systems enables. System requirements methods are almost universally reductionist in nature, striving to merge every struggling purpose of what people do into a “sub-goal” or subsystem of a unifying system goal. SSM, on the other hand, legitimizes diversity of goals. It surfaces all the emergent purposes of work that – in future years, when using traditional requirements analysis methods – would pop their heads above the surface as “bug reports” or “supplementary requirements.” Requirements are multivocal. They fulfill multiple objectives because people work to rich understandings of work outcomes, not the over-simplistic definition of work outputs that traditional IT requirements analysis produces.

    For example, when I was investigating the requirements for improvements to a UK Charity’s regional store management system, an output that repeatedly reared its head was that the weekly store report should be compiled by 11 am on Friday. This made little sense, until one of the middle managers let slip that the senior Regional Manager played golf on Friday afternoons, so he needed the report an hour before lunch, so he could address any issues and also be familiar with the performance figures before he left for his game. Odd though this seemed, it was an important deadline and needed to be included in the system requirements. I later discovered that the Friday golf game was where the Regional Managers caught up with each other, and with the Chief Financial Officer, to strategize and report on their region’s performance. Understanding this outcome provided context and meaning for the IT system output that the report should be available by 11am on Fridays.

    It is also important to note that individuals develop rich understandings of their work that provide them with multiple purposes for the activities they perform, even on an individual basis. We are all products of experiential learning, which produces multiple ways of framing any situation. Sometimes one frame is salient, sometimes another, depending on circumstances. As an instructor, sometimes my effort is focused on grading my students’ work, normally to a deadline, so I can report on their progress. But at other times, my effort is focused on providing detailed feedback to my students, so they can learn from the work that they did and improve their professional and analytical skills. This is the same activity: grading assignments. But I have (at least!) two purposes in performing that activity. I would judge success differently, depending on which perspective I take at any point in time. The reporting-on-progress perspective always becomes especially salient in the hours leading up to my grade-entry cut-off deadline!

    The ability to generate multiple purposes for a system of work provides a unique opportunity for brainstorming, generating perspectives that might otherwise be missed early in requirements analysis. For example, a rather tongue-in-cheek set of alternative purposes, many of which are complementary, is shown in Figure 2. While many of these may be informal purposes, accidental purposes, or purposes secondary to the main goal (whatever that may be), they are purposes of the system of activity that makes up a prison. If we are trying to define requirements for an IT system to support this activity, we need to understand all of these purposes (even if only to control and monitor the less desirable aspects of prison life).

    Multiple root-definitions of a prison system
    Figure 2. A Set of Alternative/Complementary Purposes For A Prison System

    Using Multiple Purposes To Model the Big Picture

    A system of work may be viewed as the combined, purposeful work of multiple individuals who perform parts of the work to achieve a “big picture” goal for an organization. By interviewing those individuals and understanding the multiple purposes they strive to achieve with their work, it is possible to assemble a systemic map of purposeful activity for the whole unit, be it a functional department, a specialist workgroup, or a targeted business unit serving a specific market segment. Figure 3 shows the general structure of a Purposeful Human Activity System Model. This structure may apply at the detailed level, to a subsystem purpose, or it may be used to assemble and integrate (co-relate) subsystems into a big-picture model of a business unit’s purposeful system. In other words, each of the model components (numbered 1 to 7 in Figure 3) might be a single activity in a Conceptual Model of a single purposeful subsystem of activity, or it might represent a Conceptual Model in its entirety, when these are integrated into a big-picture model of (for example) a business department.

    The general structure of a model of a Purposeful Human Activity System, showing interconnected elements, each of which is defined separately
    Figure 3. The General Structure of a Purposeful Human Activity System Model

    This sort of “drill down” and “drill up” modeling allows us to engage in the “zoom-in” and “zoom out” analysis that is required for systems thinking. An example is presented in my analysis of a Market Research field research department (to be added soon!).
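
    One way to make the drill-down/drill-up idea concrete is to treat the model as a composite structure: each component is either a single activity or a nested activity-system model in its own right. The sketch below (in Python) is my own illustration of that structure, not a tool from SSM itself; the class names and the grading example are hypothetical.

        from dataclasses import dataclass, field
        from typing import List, Union

        @dataclass
        class Activity:
            """A single purposeful activity within a Conceptual Model."""
            name: str

        @dataclass
        class ActivitySystem:
            """A purposeful human-activity system whose components may be single
            activities or whole Conceptual Models in their own right (drill-down)."""
            purpose: str
            components: List[Union["Activity", "ActivitySystem"]] = field(default_factory=list)

            def activities(self):
                """Drill down: flatten the model into its individual activities."""
                for part in self.components:
                    if isinstance(part, Activity):
                        yield part
                    else:
                        yield from part.activities()

        # Hypothetical example: a department model assembled from two subsystem models
        # that share an activity but serve different purposes.
        grading = ActivitySystem("report student progress",
                                 [Activity("grade assignments"), Activity("enter grades")])
        feedback = ActivitySystem("develop student skills",
                                  [Activity("grade assignments"), Activity("write detailed feedback")])
        department = ActivitySystem("teach the course", [grading, feedback])

        print([a.name for a in department.activities()])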

  • Boundary-Spanning Design

    Boundary-Spanning Design

    We talk about organizational design and change problems as if they were given — as if there were only one problem that could be defined for any situation and only one best way to design a solution. This is far from true. Design is like completing a jigsaw puzzle without the picture on the box. We find a bit of sky and then realize it is, in fact, part of a swimming pool.

    Organizational change, design, and problem-solving depend on pattern recognition. When organizations assemble a team of managers or designers to represent different business groups, each person brings the assumptions of their group culture and “best practices” with them. They are expected to collaborate as if they totally understand every single part of every business practice involved. But there are multiple, interrelated problems involved in any situation and different stakeholders will perceive different problems depending on their background and experience. The key skill is to recognize those problems and tease them apart, dealing with each one separately.

    Organizational problems – whether operational or strategic – typically span organizational boundaries, so the design of business processes and enterprise systems is complicated. Boundary-spanning systems of work need systemic methods and solutions. The point is to understand how collaboration works when people lack a shared context or understanding — and to use design approaches that support collaborative problem investigation, to increase the degree of shared understanding as the basis for consensus and action in organizational change. To enable collaborative visions, people need some point of intersection. In typical collaborations – for example a design group working on change requirements – the “shared vision” looks something like Figure 1.

    Venn diagram, showing intersubjective frames, intersections of understanding between 2 stakeholders, and distributed cognition as the union of all frames

    Figure 1. Differences Between Individual, Shared, and Distributed Understanding In Boundary-Spanning Groups

    The only really shared part of the group vision is the shaded area in the center. The rest is a mixture of partly-understood agreements and consensus that mean different things to different people, depending on their work background, their life experience, their education, and the language they have learned to use. For example, accountants use the word “process” totally differently from engineers; psychologists use it to define a different concept from either group. When group members perceive that others are not buying in to the “obvious” consequences of a shared agreement, they think this is political behavior — when in fact it most likely results from differences in how the situation and group agreements are interpreted.
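
    To make the distinction in Figure 1 concrete, the sketch below (in Python) treats each stakeholder’s frame as a set of assumptions. The stakeholders and frame elements are invented for illustration: the truly shared understanding is the intersection of all frames, pairwise overlaps are partial agreements between two people, and distributed cognition is the union of everything the group knows.

        from itertools import combinations

        # Hypothetical stakeholder "frames": the assumptions each person brings.
        frames = {
            "accountant":   {"process = cost allocation", "monthly reporting", "audit trail"},
            "engineer":     {"process = production workflow", "monthly reporting", "tolerances"},
            "psychologist": {"process = cognition over time", "monthly reporting"},
        }

        shared = set.intersection(*frames.values())   # what everyone means in the same way
        distributed = set.union(*frames.values())     # everything the group collectively knows

        # Pairwise overlaps: partial agreements between two stakeholders.
        overlaps = {(a, b): frames[a] & frames[b] for a, b in combinations(frames, 2)}

        print(shared)       # small: only "monthly reporting" is truly shared
        print(distributed)  # large: most knowledge is held by only one person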

    Boundary-spanning collaboration is about maximizing the intersections of understanding, using techniques such as developing shared representations and prototypes to test and explore what group members mean by the requirements they suggest. It involves developing group relationships that allow group members to delegate areas of the design to someone they deem knowledgeable and trustworthy. It uses methods to “surface” assumptions and to expose differences in framing, in non-confrontational ways. But most of all, it involves processes to explore group definitions of the change problem, in tandem with emerging solutions. We have understood this for a long time: Enid Mumford, writing in the 1970s and 80s, discussed the importance of design approaches that involve those who work in the situation, and the need to balance job design and satisfaction with the “bottom line interests” of IT system design (Mumford and Weir 1979; Mumford 2003) – also see Porra & Hirschheim (2007). This theme has been echoed by a succession of design process researchers: Horst Rittel (Rittel 1972; Rittel and Webber 1973), Peter Checkland (1999), and Stanford’s Design Thinking initiative (although “design thinking” tends to be co-opted to focus on “creativity” in interface design, rather than the integrative design approach that may have been envisioned).

    The problem is that most collaboration methods for the design of organizational and IT-related change – whether focused on enterprise systems design, business process redesign, cross-functional problem-solving, or IT support for business innovation – employ a decompositional approach. Decomposition fails dramatically because of distributed sensemaking. Group members cannot just share what they know about the problem, because each of them is sensitized by their background and experience to see a different problem (or at least, different aspects of the problem), based on where they are in the organization. Goals for change evolve as stakeholders piece together what they collectively know about the problem-situation — a process akin to assembling a jigsaw-puzzle. (Productive) conflict and explicit boundary negotiation are avoided because group members lack a common language for collaboration, so misunderstandings are ascribed to political game-playing. We need design and problem-solving approaches that support the distributed knowledge processes underpinning creativity and innovation — approaches that recognize and embrace problem emergence, boundary-negotiation, and the development of shared understanding. This is the core of my work on improvising design.

    References

    Balcaitis, Ramunas 2019. What is Design Thinking? Empathize@IT https://empathizeit.com/what-is-design-thinking/. Accessed 8-15-2023.

    Checkland, P. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective (2nd ed.). Chichester, UK: John Wiley & Sons.
    [Original edition of this book published in 1981].

    Mumford, E. 2003. Redesigning Human Systems. Hershey, PA: Idea Group.
    Mumford, E. and Weir, M. 1979. Computer Systems in Work Design: The ETHICS Method. New York, NY: John Wiley.

    Porra, J., & Hirschheim, R. 2007. A lifetime of theory and action on the ethical use of computers: A dialogue with Enid Mumford. Journal of the Association for Information Systems, 8(9), 3.

    Roumani, Nadia 2022. Integrative Design: A Practice to Tackle Complex Challenges, Stanford d.school on Medium, Accessed 8-15-2023.

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H.W.J. and Webber, M.M. 1973. “Dilemmas in a General Theory of Planning,” Policy Sciences (4), pp. 155-169.

  • Wicked Problems Explained

    Wicked Problems, Explained

    Wicked problems are problems that defy definition. They are too complicated for any one person to understand in their entirety; they span organizational and group boundaries, and call upon knowledge from multiple, specialized areas of work. We only ever understand a small part of what people in other groups and departments do. Solving a wicked problem is like piecing together a jigsaw puzzle — without the picture on the box.

    When faced with uncertainty, we retreat to what we know – knowledge that is based on our own experience. In our day-to-day practice, we develop a repertoire of solutions-that-work: patterns and decision-criteria that fit the problems we face. We know (from research studies) that people try to fit partial solutions, based on this experiential knowledge, to the problems they perceive. When that does not work, they ask colleagues and trusted contacts for a solution. When that does not work, they re-define the problem to fit the partial solutions available to them.

    That is why change management groups get bogged down in political disputes. Managers from one function don’t realize that people from different areas of the business understand their words and ideas differently than they intended. They make assumptions about how these other areas work, based on what they know from their own area. When we extend our personal solutions to fit how people work in other business groups and functions, they fail miserably because, in essence, everyone is solving a different problem.

    In 1973, Horst Rittel and Melvin Webber wrote a paper explaining why organizational planning and change-management problems are not amenable to logical analysis techniques. They contrasted “tame” problems, which could be expressed as a set of rules or constraints, with “wicked” problems, which could not be defined in terms of an algorithm. They defined ten characteristics of wicked problems, shown in Figure 1.

    Ten aspects of wicked problems, discussed in text following
    Figure 1. Ten Characteristics of Wicked Problems (Rittel & Webber, 1973)

    1. There is no definitive formulation of a wicked problem.
    Wicked problems cannot be defined and have no clear boundary. They consist of multiple problem-elements, some of which appear more important to some stakeholders than others, depending on their experience and where they are in the organization. Rittel (1972) argues that we cannot use conventional problem analysis methods, as our understanding of the problem-situation and potential solutions emerge in tandem, through mutual exploration and “argumentation” concerning the nature of the problem and appropriate solutions.

    2. Every wicked problem is essentially unique.

    “… by ‘essentially unique’ we mean that, despite long lists of similarities between a current problem and a previous one, there always might be an additional distinguishing property that is of overriding importance. Part of the art of dealing with wicked problems is the art of not knowing too early which type of solution to apply.” (Rittel & Webber, 1973)

    In software design, the idea of not converging on a solution too early is known as avoiding premature design. In management science, the term complexification is used to explain how shared mental models, or frames, become more complicated as groups of problem-solvers develop a shared language and pool perspectives on a problem-situation. The iterative processes of perspective-taking (from others in the group) and perspective-making (to improve shared understanding) underpin effective argumentation for wicked problems (Boland and Tenkasi 1995).

    3. Wicked problems have no stopping rule.
    Because Wicked Problems have no definitive formulation, we cannot define criteria to evaluate when we have a good-enough solution. Problem-definitions are negotiated across those involved in the situation to be changed. They incorporate multiple perspectives, some of which become more salient as specific aspects of the situation or its solution are explored, or as various stakeholders attempt to incorporate their own perspectives and agendas for change (Davidson 2002). We don’t know when we are done with this process – we can only judge individually whether all the elements we wanted to see in a solution have been included.

    4. Solutions are subjectively good-enough, rather than optimal.
    As there is no clear set of criteria for a solution, we cannot evaluate when we have reached a solution that would “work.” Stakeholder assessments of proposed solutions are subjective and negotiated, defined around fit with the framing perspective they adopt, and expressed as ‘good enough,’ rather than right or wrong.

    5. There are no criteria to judge if all solutions have been identified.
    Lacking a definitive problem formulation and boundary, we cannot define any rules or logic to judge whether we have included all the important aspects of the problem in our analysis. Implementing a solution will generate waves of consequences, which may have repercussions that outweigh the intended benefits. We may end up making the situation worse than when we started.

    “The full consequences cannot be appraised until the waves of repercussions have completely run out, and we have no way of tracing all the waves through all the affected lives ahead of time or within a limited time span.” (Rittel & Webber, 1973)

    However, we can explore the knock-on effects of various solution elements by employing a systemic analysis that allows us to analyze the likely impacts of change in conjunction with our emerging understanding of the problem.

    6. Every solution to a wicked problem is a ‘one-shot operation’; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
    Every attempt at a solution changes things, in ways that are irreversible. Once we make changes to the problem-situation, we face a different set of problems. So we cannot plan an incremental set of changes, then reverse course if something does not work. We can attempt to predict the knock-on effects of changes using a systemic analysis, but some solutions may introduce unforeseen consequences, requiring us to start again with a fresh analysis.

    7. Wicked problems do not have an agreed scope, or set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan for a solution.
    Wicked problems tend to span organizational, functional, and management boundaries. So nobody fully understands what the problem is – just the parts that they can see from where they stand. Wicked-problem exploration is like assembling a jigsaw puzzle – you get glimpses of a face, or a building, but are never quite sure what the big picture looks like. So solutions are partial and negotiated around “fit” with the emerging problem, rather than any objective definition of scope or legitimacy. We can agree a set of elements we’d like to incorporate into a plan of change, but we don’t know if we understood all the requirements, or if we missed something big. All we can do is work from collective representations of the problem and debate the expected consequences of solution elements, using representations and methods to explore interactions between the parts.

    8. Every wicked problem can be considered to be a symptom of another problem.
    Wicked problems can be conceived of as “messes” (Ackoff 1974) or “soft systems” of human activity (Checkland 1999). Because wicked problems are so complex, incorporating multiple chains of cause-and-effect (many of which may have multiple causes) into a coherent representation of the problem-situation is difficult. Problem-components are interrelated: some problems may be causes or symptoms of others, some problems have multiple causes, and some share a common cause. To untangle these problems, the relationships between them – and the consequential knock-on effects of changing part of the system of related elements – need to be modeled and appreciated. This requires systemic analysis methods.

    9. A wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
    As discussed above, we need to assess and argue the value of a solution across multiple stakeholders who have differing perspectives on the problem and competing agendas for change. Their perspectives are likely to depend on where they are in the organization, their disciplinary background and education, and their prior experience. Their priorities for change are likely to differ widely according to their group or personal interests and values, and their sensitization to various types of organizational problem and solution. It is unlikely that everyone will define either problems or solutions in the same way. The process of argumentation used to explore problems must therefore focus on building consensus, as the way in which the problem is defined tends to direct the type of solution considered. For example, if we define the problem as one of disorganized work processes, an appropriate solution might be to implement a team-coordination system, whereas if we define the problem as a lack of relevant information, the solution would be more likely to focus on an information repository.

    10. We need to involve participants in the situation – and to listen to what they say.

    “Planners are liable for the consequences of the actions they generate; the effects can matter a great deal to those people that are touched by those actions.” (Rittel & Webber, 1973)

    The consequences generated by changes last for a long time. Some of these may be predictable, but because problems are interrelated, some changes may introduce unforeseen consequences. The lives and work of people involved in the problem-situation are irreversibly changed, and these changes will influence how they work, and what they are able to do (or not do) in the future. The aim is to improve some aspects of the organizational situation, but changes may have unintended consequences that need remediation.

    Conclusion: Wicked Problems Need Exploration, Appreciation, and Systemic Analysis

    Solving Wicked Problems requires appreciation of the problem-situation, accompanied by systemic analysis. Horst Rittel (1972), who originated the term, suggested that we use a process of argumentation to design solutions: “a counterplay of raising issues and dealing with them, which in turn raises new issues, and so on, and so on.” He saw the goal of argumentation as piecing together a big picture from the fragmented viewpoints and problem-definitions held by change-agents, stakeholders, and those people who work in the problem-situation.

    The main thing to note about wicked problems is that they can be defined in multiple ways, which means that various stakeholders will define them differently, depending on their experience and their position in the organization. That means that wicked problems are not goal-oriented, nor are they simple to define. Instead, we need to employ a systemic problem analysis approach to resolve them – one where stakeholders can explore and negotiate the boundary and the elements included in the problem to be resolved.

    References

    Boland, R.J. and Tenkasi, R.V. 1995. “Perspective Making and Perspective Taking in Communities of Knowing,” Organization Science (6:4), pp. 350-372

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H.W.J. and Webber, M.M. 1973. “Dilemmas in a General Theory of Planning,” Policy Sciences (4), pp. 155-169.

  • Analyzing Problems Systemically

    Problem Cause & Effect Analysis

    Designs often fail to be used as intended because problems are over-simplified. Problem root-causes and their effects are treated as both clearly defined (obvious) and linear (i.e. a single cause leads to a single effect). This is only true for well-structured problems – for example, a set of rules to play a simple game like tic-tac-toe. In real life, we face wicked problems: problems that consist of a tangle of other problems, that are defined differently by different people, and that evolve over time as you learn more about the situation. So we get poor solutions that don’t solve the problems we actually face, especially in the design of technology and organizational change.

    Systemic cause & effect analysis explores the complexity of a situation, reflecting the multiple perspectives we encounter and reducing the time needed to learn how problems “work” – and therefore the time needed to understand the problem-situation.

    Modeling Problem Cause & Effect, Systemically

    Systemic analysis means thinking through the system of influences that affect a problem. The point is to draw out all the problems that are communicated to you, either directly in comments made by people or indirectly through your analysis of a problem situation, to determine what causes these problems and which of them you can affect. To achieve a systemic problem analysis, we draw problem diagrams that work from cause → effect (i.e. problem 1 causes problem 2). But instead of limiting the analysis to a single cause or effect, we try to include as many factors as we can. Then we validate these models with the people involved in the situation, asking what elements or connections we have missed.

    It helps to start in the middle. People in any situation will tell you about symptoms, not problems, because they don’t stop to reflect on the real causes of the things that bug them. So draw these out, then ask them “why does this happen,” and in turn, “why does that happen,” and so on … Keep working backwards until you get to human nature, or “it’s just like that.” The levels of analysis shown in Figure 1 provide a guide to the scope of these analyses – they are there to help you think about whether you have dug deeply enough into the problem situation. If they don’t help, or if your problem doesn’t fall into such layers, ignore them. We stop when we get to problems such as “it’s human nature,” or “that’s the company culture.” These are things we cannot control, but it’s important to model them so we understand the constraints on solving the problem.

    Typical "levels" of cause-and-effect analysis: 
* business and environment (context) problems
* stakeholder need problems
* symptoms of problems
* consequences of problems
* elements of a solution (including changes to how people work)
    Figure 1: Structuring A Cause & Effect Analysis

    After working backwards, we work forwards until we start seeing the need: fragments of a solution that might comprise part of how to solve our problem. These will often be suggested by the stakeholders – and although it is important to note them, stakeholders may have a partial and uninformed perspective on how to solve the problem, as they only see those parts of the situation that they encounter daily. In the end, it is by integrating multiple points of view that solutions start to emerge.

    A simple example of a problem cause and effect analysis is shown in Figure 2. It demonstrates how the problems involved in managing downtown parking are related – solving one problem will not resolve all the problems that need to be solved for parking management to work. Instead, we need to understand how problems are related, so we can identify clusters of problems and their symptoms to undermine with solutions. The analyst needs to realize that many problems are just beyond their pay-grade (or are just due to human nature). You need to be able to draw a boundary on any problem analysis, where problems inside the boundary can be attacked – and “external” problems act as known constraints. This is shown by the red line in Figure 2.

    Figure 2: Simple Cause & Effect Analysis

    When modeling problems, bear in mind:
    • Effect-problems may be causes of other problems.
    • By linking problems together, you see which problems are related and where your boundary of action can be.
    • Think backwards, as well as forwards, to brainstorm causes of observed problems.
    • Clusters of problems tend to suggest forms of solution: cause → (effect=symptom/cause) → (effect=symptom/cause) → solution requirements.
    • You need to understand the boundary of what can be affected by your (design) solution and what factors are outside of this boundary.
    • Factors outside of the boundary-line act as constraints on your design, so it is important to note these.
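
    The guidance above can be made concrete by treating the analysis as a small directed graph, where nodes are problem statements and edges run from cause to effect. The sketch below (in Python) is a hypothetical illustration – the problem statements and the boundary are invented – showing how linking problems lets you trace backwards from a symptom to its upstream causes, and how problems outside the boundary of action are kept as constraints rather than targets.

        # Hypothetical cause -> effect model of a problem-situation.
        causes = {
            "staff have no time to record orders":  ["orders are captured incorrectly"],
            "orders are captured incorrectly":      ["invoices are disputed"],
            "invoices are disputed":                ["cash flow is unpredictable"],
            "company culture discourages hiring":   ["staff have no time to record orders"],
        }
        outside_boundary = {"company culture discourages hiring"}   # constraint, not solvable here

        def upstream(problem, graph):
            """Work backwards: all problems that directly or indirectly cause `problem`."""
            found = set()
            frontier = [problem]
            while frontier:
                current = frontier.pop()
                for cause, effects in graph.items():
                    if current in effects and cause not in found:
                        found.add(cause)
                        frontier.append(cause)
            return found

        symptom = "cash flow is unpredictable"
        actionable = upstream(symptom, causes) - outside_boundary
        print(actionable)   # the upstream problems inside the boundary that are worth attacking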

    Issues With Problem Analysis

    1) One of the main issues with problem analysis is that problems are never simple. They don’t tend to be related in straight lines.

    Issues with problem determination #1: problems don't tend to be related in straight lines
    Figure 3. Problem-Elements are Modeled As A Single Chain of Cause & Effect (Straight-Line Modeling)

    2) This is exacerbated by people telling us about symptoms (“my disk drive is full”), rather than problems (“I need a way to share data with someone else → so we are sharing files via my local disk drive → so my local disk drive is full, as it was not provided for this purpose”).

    Figure 4. Problem-Elements are Modeled Around Symptoms, So They Don’t Explore Root Causes

    3) Problems are over-simplified, as problem-analysts are trained to focus on specifics rather than to think systemically (even the problem in Figure 2 was simplified so it would fit into one diagram). Even when you bully them into reflecting a wider scope of analysis, they will artificially constrain this around the problems they understand best.

    Figure 5. Problem Chains of Cause & Effect Fail To Explore Relationships Between Problem-Clusters and Elements

    4) “Lower-level” problems are related to “higher-level” problems in ways that create reinforcing problem-cycles (vicious circles of causality). You need to take one of these problems out of the loop, so the vicious circle is disrupted, before you can solve the rest of the problems. For example, by defining a problem with how work gets done as the consequence of a company culture of not employing enough people, you are admitting defeat before you even try to solve the problem: you are reinforcing the vicious circle of causality. If, however, you define the cause as too few people to do the work required, you have two potential solutions: (i) employ more people, or (ii) reduce the work required. Either of these will break the vicious circle you encountered, without a political battle over company culture(!).

    Figure 6. Lower-Level Problem-Elements are Related to Higher-Level Problem Causes, Leading to Vicious Circles of Causality That Must Be Broken To Produce A Solution
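
    Continuing the same hypothetical cause → effect representation, a vicious circle shows up as a cycle in the graph. A simple depth-first check (sketched below in Python, with invented problem statements) can flag one such loop, so you know which links must be broken before the remaining problems can be solved.

        # A vicious circle is a cycle in the cause -> effect graph (hypothetical data).
        loop = {
            "not enough people to do the work":       ["work is completed late"],
            "work is completed late":                 ["managers firefight instead of planning"],
            "managers firefight instead of planning": ["hiring requests are never prepared"],
            "hiring requests are never prepared":     ["not enough people to do the work"],
        }

        def find_cycle(graph):
            """Return one cycle (as a list of problems) if the graph contains one, else None."""
            def visit(node, path, seen):
                if node in path:                      # back to a problem already on this path
                    return path[path.index(node):] + [node]
                if node in seen:
                    return None
                seen.add(node)
                for nxt in graph.get(node, []):
                    cycle = visit(nxt, path + [node], seen)
                    if cycle:
                        return cycle
                return None

            seen = set()
            for start in graph:
                cycle = visit(start, [], seen)
                if cycle:
                    return cycle
            return None

        print(find_cycle(loop))   # removing any one link in this loop breaks the vicious circle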

    5) Finally, many problems are just beyond the problem-solver’s pay-grade (for example, those that are due to human nature).
    You need to be able to draw a boundary on any problem analysis model, to distinguish problems that are inside the boundary of influence – which can be dealt with – and “external” problems that act as known constraints on the system of work inside the boundary.

    Systemic Problem-Analysis is Complex: Real-Life Examples

    As you work, you will find that problems expand – different people add different perspectives and suddenly, your “simple problem” covers five sheets of paper! Done properly, a problem cause-and-effect analysis can be huge – mine often require piecing together many sheets of paper, as shown in Figure 7. The critical thing is to use the analysis to explore areas of problems, especially where multiple people’s areas of responsibility overlap. Then you can define related “clusters” of problems to resolve, also shown in Figure 7.

    Figure 7: Exploring The Problem Situation and Identifying Clusters of Related Problems To Resolve

    We can then dig deeper into just one cluster of problems, uncovering the details of why and how things happen – and defining a boundary for what problems can be tackled:

    Figure 8: Digging Deeper and Defining The Boundary of What Problems Can Be Tackled

    We can also use the “big picture” problem analysis to identify patterns, such as vicious circles of causality that need to be broken before a problem can be resolved. Figure 9 shows a real-life analysis, derived from a management workshop.

    Figure 9: Identifying Problem-solving Constraints and Vicious Circles of Causality

    In Figure 9, we can see three separate clusters of problems, each of which relates to a different aspect of change that can be considered (and resolved) separately – even though some of the issues are related.

    • There is a vicious circle of problems, bounded in orange, which starts with the lack of ownership for order-capture, cycles around three separate branches of the same problem-cluster, then loops back through a time-constraint to reinforce the lack of ownership.
    • There is a cluster of problems, bounded in blue, which reflects limitations of the Marketing process – and the siloism of the Marketing function.
    • The cluster of problems bounded in green reflects limitations of the bid preparation and response process (the original focus of this analysis). Its scope covers only about one-third of the issues that need to be tackled for bid response to be successful in winning new business.

    It takes quite a lot of modeling, validation with stakeholders, and arrangement of problem-elements to get to this type of “tidy” model. But the understanding that results from this analysis means that obtaining buy-in for change becomes much easier.

    Summary

    The point of drawing a cause-and-effect diagram is:
    (a) to distinguish between cause and effect, so that time and effort are not wasted in solving problems which are just symptoms of others;
    (b) to understand (as opposed to just listing!) the problems of the situation you are analyzing;
    (c) to understand which problems are within your scope of analysis and which problems are “somebody else’s problem”;
    (d) to draw a system boundary on the problem diagram – a line around those problems that are within your power to influence, excluding those which are beyond your power to influence.

    Don’t over-simplify your analysis or accept “obvious” suggestions for root-causes. People often take a limited view of why things happen in the way that they do, suggesting “symptoms” of problems as the root cause, rather than bending their brains around why things really happen in a certain way. It is better to model everything that stakeholders suggest – or you (the analyst) observe – then split your analysis into clusters of problem-elements to be dealt with separately.

  • The Problem of The Problem

    The problem of “the problem.”  

    Designers are taught a repertoire of designs-that-work: patterns that fit specific circumstances and uses. Experienced designers build up a deep understanding, over time, of which problem-elements each of these patterns resolves. So they can assess a situation, recognize familiar problem-elements, then fit these with design patterns that will work in those circumstances. The problem comes when a designer faces a novel or unusual situation that they have not encountered before. Novice designers encounter this situation a great deal, but even experienced designers must deal with emergent design in a novel context. In these circumstances, designers iterate their design, as shown in Figure 1. They identify (often partial) problems, ideate/conceive relevant solutions, give those solutions form with a prototype, then evaluate the prototype in context. This often reveals emergent user needs or problems, which are explored in the next iteration.

    The stages of iterative design: identify problem, ideate solutions, prototype designed solution, evaluate designed solution in context, explore remaining user needs.
    Figure 1. Iterative Design

    An important aspect of iterative design is that iterations can occur within cycles. As designers succeed or fail at successive designs, they accumulate experiential knowledge that allows them to assess new situations quickly and to understand which design elements will work or fail in that situation, looping back to remediate the design as they spot logical flaws and gaps. The problem with this is that (as the Princess said) you have to kiss an awful lot of frogs to get a Prince. An awful lot of people end up with really bad designs because their designer did not recognize elements of the situation well enough to understand which pattern-elements to implement. If you are really unlucky, you will also end up with one of those designers who feel it is their mission in life to prevent the end-user “mucking about with” their design. If you are lucky, your designer will recognize that it is your design, not theirs, and will design artifacts and systems in ways that allow people to adapt and improvise how they are used.

    Design as improvisation

    Improvisation takes a multitude of forms. It might be that a user wishes to customize the color of their screen (because the designer thought that a good interface should look like a play-school). This may not do much for the function of your work-system, but it does mean that your disposition towards work is a heck of a lot sunnier as you use it. Or it might be that the information system you use expects you to enter data on one step of your work before another. You might be able to enter data into a separate screen for each step, reordering the steps as you wish. More usually, you have to enter fake data into the first step, then go back later to change it once you have the real data. This is because IT systems designers treat software design as a well-structured problem. A well-structured problem is one that contains the solution within its definition. Defining the problem as a tic-tac-toe game application means that you have a set of rules for how the game is played that completely define how the application should work. This is fine if everything goes to plan, but a huge pain for users when it does not. The only discretion left to the user is how to format the results in a printed report, which is not much comfort if your whole transaction failed because you were prevented from going back to change one of the inputs. This is not rocket science – developers need to design systems that let users work autonomously.

    But business applications are not well-structured. They represent wicked problems. A wicked problem cannot be defined objectively, for all the reasons identified in Figure 2. Solving a wicked problem requires business users and stakeholders to agree on what problems they face, their priorities in resolving these, and what their change-goals are.

    Diagram lists the constraints on Design Posed by Wicked Problems
    Figure 2. Constraints on Design Posed by Wicked Problems (Rittel & Webber, 1973)

    A wicked problem can be viewed as a web of interrelated problems. It is not always clear what the consequences of solving any part of this mess will be. Some of the problems may have “obvious” solutions, but implementing these solutions may make other, related problems better or worse. This is why iterative design is central to resolving wicked problems. In general, stakeholders don’t understand what they need until they see it. So solutions must be designed flexibly, so that changes can be implemented as the consequences are realized and to permit adaptation-in-use by stakeholders and users. People are infinitely improvisational. They develop work-arounds and strategies to manage poor design. But, as Norman (2013) observes, why should users have to develop work-arounds for poor design? What is it about the design process that leads us to such constraining IT systems, interfaces, and work procedures that are based on the system design, rather than system designs that are based on flexible work-procedures?

    This website reflects findings from my research studies and reflections from my own experience in design, to discuss some key underlying principles of design, to explore how the design process works in practice (rather than how we manage it now, which is based on unsupported theoretical models), and to present a way of managing design differently. Improvisationally.

    References

    Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books, New York.

    Rittel, H. W. J. (1972). “Second Generation Design Methods.” Interview in Design Methods Group 5th Anniversary Report, DMG Occasional Paper 1, pp. 5-10. Reprinted in N. Cross (Ed.), Developments in Design Methodology, J. Wiley & Sons, Chichester, 1984, pp. 317-327.

    Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169.

  • Managing Organizational Knowledge


    This thread of my work explores the forms of knowledge shared across organizational boundaries, the mechanisms for sharing knowledge that are employed, and how human sensemaking is mediated by processual, technical, and informational artifacts. My work draws on theories of distributed cognition, contextual emergence, and sociomateriality. Hayden White observes that human sensemaking relies on subjective forms of narrative for meaning. Much of this work explored how to enable a “conversation with the situation” that introduces reflexive breakdowns into the situated narrativizing and framing in which humans routinely engage. This results in different types of support, focusing on the different forms of knowledge that are required for decision-making and the degree to which such knowledge can be shared.

    In virtual organizations and distributed project groups, non-human objects increasingly mediate human relationships, as they displace humans as collaboration-partners in distributed knowledge networks. We may be able to identify forms of metaknowledge that work across domain boundaries by identifying mediating object roles – e.g. categorization schemes, instrumentation, databases, and routinized practices that embed frameworks for analysis or participation. My analysis has revealed different forms of group memory management in use, depending on the organizational scope of projects and the locus of control in the global network. Organizational knowledge – about how to work, how to frame organizational goals and outcomes, and how to organize work effectively – is mediated by technical objects, creating assemblages of social and technical systems of work that guide the emergence of new business practices. The distributed scope of organizational locales creates four categories of knowledge that are acquired in different ways, summarized in Figure 1.

    2 x 2 matrix showing 4 forms of knowledge - these are described in the following text

    Figure 1. Forms of Knowledge (Gasson & Shelfer, 2006)

    Codifiable knowledge is the simplest to define, as this knowledge is routine and programmable. It equates to explicit knowledge, in that we know that we know it – and we can articulate what we know, so it can be stored for others to access and use. Typical examples are organization charts, or the rules, standards, and forms used in business processes.

    Transferable knowledge is articulable, but it is also situational – it is related to the context in which it is applied. For example, an IT systems developer might design software differently for a general-purpose website, whose users are relatively unknown, than for a small local application to be used by 4-5 people working together to perform specific business calculations as part of their shared work. Knowing when to apply different design techniques depends on the designer’s experience of working in various business environments and is generally acquired through some sort of apprenticeship process, where they learn from someone who has more experience of that environment.

    Discoverable knowledge is less straightforward. It combines tacit knowledge (Polanyi, 1961), which is process- or skills-based, with implicit knowledge that people fail to recollect consciously, or perceive explicitly (Schacter, 1992). As such knowledge is inarticulable, its possessors must recall it inferentially, by relating reported case studies to their own experience, or through pattern recognition related to data-analysis findings. An effective way of surfacing such knowledge is to discuss historical data or case studies to explore what is known collectively about various situations. This is similar to the argumentation method proposed by Rittel (1972) in his discussion of “second generation design.”

    Hidden knowledge is the most difficult type to surface. It is not the sort of knowledge that you are going to recognize unless you stop to reflect on what went wrong in your decision-making, or on how an action was performed. For example, an IT Manager commented to me that the business process he had selected for a new initiative in organizational change was not as “stand-alone” as he had expected. He stopped to think, then commented that “in fact, I couldn’t have chosen a worse process to start with – it was related to every single business process we have.” Then he paused, and added, “but actually, you could say that about all of our business processes. It seems there is no such thing as a stand-alone process!” This category of knowledge is surfaced through breakdowns (Heidegger, 1962), where the “autopilot” of everyday action is disrupted by the realization that one’s usual recipe-for-success in such circumstances is not working. At that point, the tool or process we were about to use goes from being ready-to-hand, ready for automatic use, to being present-at-hand, needing reflection in order to work out how to use the tool, or how to behave in those circumstances (Winograd & Flores, 1986). During breakdowns, we need to stop and think, revising our mental model of how the world works to come up with a new way of behaving that is a better fit to the situation. Again, Rittel’s (1972) argumentation approach would be helpful here, as people pool and debate what they have learned from a failure, collectively.

    The ways in which we learn, then, are dictated by the scope of access that we have to our colleagues. The more distributed people are, the more knowledge is mediated across formal technology channels, as distinct from being acquired through face-to-face conversations. This remoteness means that we are more reliant on formal knowledge that is codifiable, or discoverable from formal sources of information. When people are co-located, they can spend time learning from what others do, or from how a mistake or failure happened. The key take-away is that we need multiple ways of configuring and using technology platforms, so that all types of knowledge can be supported. We cannot design one-size-fits-all information and communication technology systems.

    Selected Bibliography:

    Khazraee, E.K. & Gasson, S. (2015) ‘Epistemic Objects and Embeddedness: Knowledge Construction and Narratives in Research Networks of Practice’ The Information Society, 31(2), forthcoming, Jan. 2015.

    Gasson, S. (2015) “Knowledge Mediation and Boundary-Spanning In Global IS Change Projects.” Proceedings of Hawaii Intl. Conference on System Sciences (HICSS-48), Jan. 5-8, 2015. Knowledge Flows, Transfer, Sharing and Exchange minitrack, Knowledge Systems.

    Gasson, S. (2012) ‘The Sociomateriality Of Boundary-Spanning Enterprise IS Design’, in Joey F. George (Ed.), Proceedings of the International Conference on Information Systems, ICIS 2012, Orlando, USA, December 16-19, 2012. Association for Information Systems, ISBN 978-0-615-71843-9. http://aisel.aisnet.org/icis2012/proceedings/SocialImpacts/8/

    Gasson, S. (2011) ‘The Role of Negotiation Objects in Managing Meaning Across e-Collaboration Systems.’ OCIS Division, Academy of Management Annual Meeting, San Antonio, August 11-16, 2011.

    Gasson, S. and Elrod, E.M. (2006) ‘Distributed Knowledge Coordination Across Virtual Organization Boundaries’, in Proceedings of ICIS ’06, Milwaukee, WI, paper KM-01. [Winner of ICIS Best paper in track award].

    Gasson, S. and Shelfer, K.M. (2006) ‘IT-Based Knowledge Management To Support Organizational Learning: Visa Application Screening At The INS’, Information, Technology & People, 20 (4), pp. 376-399. Winner of 2008 Emerald Literati outstanding paper award.

    DeLuca, D., Gasson, S., and Kock, N. (2006) ‘Adaptations That Virtual Teams Make So That Complex Tasks Can Be Performed Using Simple e-Collaboration Technologies’ International Journal of e-Collaboration, 2 (3), pp. 65-91

    References

    Heidegger, M. (1962). Being and Time. New York, NY: Harper & Row.

    Polanyi, M. (1961). “Knowing and Being,” Mind, 70, pp. 458-470.

    Rittel, H.W.J. (1972). “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Schacter, D. L. (1992). Implicit knowledge: new perspectives on unconscious processes. Proceedings of the National Academy of Sciences, 89(23), 11113-11117.

    Winograd, T. and Flores, F. (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex Publishing Corporation.

  • Soft Systems Methodology (SSM) – A Summary

    SSM As an Iterative Inquiring/Learning Cycle of Change

    SSM is based upon a simple concept: that we separate and analyze systems of purposeful human activity (what people do), rather than data-processing or IT systems (how IT components should function to support what people do). SSM suits analyses that seek to understand the connections, conflicts, and discrepancies between elements of a situation, rather than attempting to subsume all elements into a single perspective. It is systemic, rather than systems-focused, exploring relationships between participants and between perspectives (Checkland, 2000a).

    Separating Out Purposeful System Perspectives

    SSM relies on the explicit declaration of the worldviews which make each purposeful system model meaningful to participants. In defining purposeful models, SSM avoids what Russell Ackoff (1974) called “messes”: the typical situation where none of the requirements for change are understood or implemented fully, because the relationship between requirements is systemic (each requirement has “knock-on” implications for other requirements). Unraveling systemic messes requires a divide-and-conquer approach that separates out the requirements for each purpose that the system-of-work exists to achieve, as shown in Figure 4. We delay this stage until we have a reasonable understanding of the problem situation: what various actors do, why they do it, and how they interact to achieve their purposes.

    SSM transformation process overview, showing key elements: input, transformation, output, worldview, and success criteria
    Figure 4. SSM Transformation Process Overview

    Generating one perspective on the work-system at a time allows us to analyze what work-activities need to be done to achieve a single purpose — and how actors’ work-outcomes are evaluated — in isolation from all the other purposes, before merging perspectives back into a (better understood) integrated system-of-human-activity so we may determine priorities for change. This divide-and-conquer strategy employs the CATWOE framework to define each perspective (or purposeful system):

    Client – Who is the victim or beneficiary of these changes?
    Actors – Who performs the work for this transformation?
    Transformation – How is an entity, the input to the transforming process, changed into a different state or form to become the output of the process?
    Weltanschauung [1] – What is the perspective that makes the transformation significant to participants (the purpose)?
    Owner – Who has the power to say whether the system will be implemented or not?
    Environment – What needs to be known about the environmental conditions that the system operates under?

    Table 1. The CATWOE Framework
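
    To show how these elements hang together, here is a minimal sketch (in Python) of one way a CATWOE definition might be captured as a structured record and rendered as a rough root definition. This is purely illustrative: the class, the rendering sentence, and the parking example are my own hypothetical constructions, not part of Checkland’s notation.

        # Illustrative only: capture one purposeful-system perspective as a record,
        # so that competing perspectives can be written down and compared.
        from dataclasses import dataclass

        @dataclass
        class Catwoe:
            client: str          # victim or beneficiary of the change
            actors: str          # who performs the work of the transformation
            transformation: str  # how the input is changed into the output
            weltanschauung: str  # worldview that makes the transformation meaningful
            owner: str           # who can sanction or stop the system
            environment: str     # conditions the system must take as given

            def root_definition(self) -> str:
                """Render a rough root-definition sentence from the elements."""
                return (f"A system owned by {self.owner}, in which {self.actors} "
                        f"{self.transformation} for {self.client}, because "
                        f"{self.weltanschauung}, given {self.environment}.")

        # Hypothetical perspective: drivers' need to park easily (one of the two
        # competing perspectives discussed below).
        parking = Catwoe(
            client="drivers visiting the congested area",
            actors="the city parking office",
            transformation="turn unmanaged kerb space into clearly signed short-stay parking",
            weltanschauung="easy parking keeps local businesses viable",
            owner="the city council",
            environment="limited street width and existing traffic regulations",
        )
        print(parking.root_definition())

    Writing the pedestrian-safety perspective as a second record of the same shape would make the conflict between the two explicit, which is the point of keeping perspectives separate.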

    Complex transformations should be separated into distinct purposes, each with a single set of actors. If two different sets of actors are involved, it is usually an indication that you have conflated two purposes – you will tie yourself in knots attempting to model both points of view at the same time. You can happily repeat the exercise using another perspective, to see if you can balance the needs of two purposes with the same system, but conflating perspectives often leads to change requirements being overlooked. Sometimes it may not be possible to balance two perspectives – for example, how do you integrate the needs of drivers to park easily with the needs of pedestrians to be kept safe in congested areas? If you cannot integrate competing perspectives, you must take a decision about whose perspective will have priority – this is where your client’s objectives come into play. In every change, there are winners and losers; your job is to make it explicit who wins and who loses, rather than to make this decision for the client (!).

    Defining and Prioritizing Requirements For Change

    Once each transformation is defined, it can be explored in terms of the potential sequences of activity needed to achieve that purpose. Participants and other stakeholders are involved in developing conceptual models that reflect “ideal world” sub-systems of activity. Not all of these activities may be implemented – or they may not be implemented in the way originally envisaged. Subsequent changes to the real-world system of work are evaluated by managers and the client of the change initiative, to determine the order and priorities for change. But it is important to understand what work needs to be done, so you can consider the constraints on achieving the desired outcomes should your client choose not to implement some process-changes. Actions to improve the problem-situation are based on finding accommodations that balance the interests of competing perspectives. This involves defining versions of the situation which stakeholders with competing interests can live with (Checkland, 2000b).
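
    Where it helps to make those activity sequences concrete, a conceptual model can be treated informally as a small dependency graph of activities. The sketch below is purely illustrative: the activities, and the idea of deriving an ordering automatically, are my own hypothetical additions rather than part of SSM’s modeling notation.

        # Illustrative only: activities for one purposeful system, each mapped to the
        # activities whose outputs it logically depends on.
        from graphlib import TopologicalSorter  # Python 3.9+

        model = {
            "survey kerb space": set(),
            "agree signage standards": set(),
            "designate short-stay bays": {"survey kerb space", "agree signage standards"},
            "monitor bay occupancy": {"designate short-stay bays"},
            "take control action on signage and bays": {"monitor bay occupancy"},
        }

        # One possible real-world sequence of activities consistent with the dependencies.
        print(list(TopologicalSorter(model).static_order()))

    The ordering is only a candidate; which activities are actually implemented, and in what order, remains a judgement for the client and participants, as described above.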

    In much of his later work, Checkland addresses the difficulty of converting a model that represents a system of human activity into a set of requirements for an IT-based information system by distinguishing between the supporting system (IT) and the supported system (human activity).  The supported system represents the problem domain, the amalgam of purposeful systems of human activity that underpins the problem situation. The supporting system represents the solution domain, that combination of people, processes, and technology that provides a solution to problems identified by participants in the problem situation.

     
    Supported system of human activity vs. supporting system of IT
    Figure 5. The supporting system of IT vs. the supported system of human activity

    Reconciliation between the two system views depends on translating requirements for change to the human-activity system into IT system requirements. If these changes are undertaken incrementally, with a focus on a single purposeful system perspective at a time, this is generally fairly simple to accomplish – it aligns well with the business-process (re)design approach used by most organizations in driving IT change.

    The Contribution of SSM

    Soft Systems Methodology (SSM) was devised by Peter Checkland and elaborated in collaboration with others at Lancaster University in the UK (Winter, Brown, & Checkland, 1995; Checkland & Scholes, 1999; Checkland & Holwell, 1998; Checkland & Poulter, 2006). Its contribution is to separate the analysis of the “soft system” of human-activity from the “hard system” of IT and engineering logic that serves the needs of the human-activity system.

    Checkland distinguishes between the hard, engineering school of thought and soft systems thinking. The development of systems concepts and systems thinking methods to solve ill-structured, “soft” problems is Checkland’s (1979) contribution to the field of systems theory. Soft Systems Methodology (SSM) provides a method for the participatory design of human-centered information systems. It also provides an excellent approach to surfacing multiple perspectives in decision-making, complex problem-analysis, and socially-situated research. The analysis tools suggested by the method – which is really a family of methods, rather than a single method in the sense of modeling techniques – permit change-analysts, consultants, and researchers to exercise reflexivity and to explore alternatives. In his valedictory lecture at Lancaster University, Peter Checkland listed four “big thoughts” that underlie soft systems thinking (Checkland, 2000a):

    1. Treat linked activities as a purposeful system.

    2. Declare worldviews which make purposeful models meaningful (there will be many!).

    3. Enact SSM as a learning system, finding accommodations enabling action to be taken.

    4. Use models of activity systems as a base for work on information systems.

    Systems thinking attempts to understand problems systemically. Problems are ultimately subjective: we select things to include and things to exclude from our problem analysis (the “system boundary”). But real-world problems are wicked problems, consisting of interrelated sub-problems that cannot be disentangled — and therefore cannot be defined objectively (Rittel, 1972; Rittel & Webber, 1973). The best we can do is to define problems that are related to the various purposes that participants pursue, in performing their work. By solving one problem, we often make another problem worse, or complicate matters in some way. Systemic thinking attempts to understand the interrelatedness of problems and goals by separating them out.

    In understanding different sets of activities, and the problems pertaining to those activities, as conceptually-separated models, we also come to understand the complexity of the whole “system” of work and the interrelatedness of things – at least, to some extent. It is important to understand that, given the evolving nature of organizational work, a great deal of the value of this approach lies in the collective learning achieved by involving actors in the situation in analyzing changes, and that this approach to inquiry is, in principle, never-ending. It is best conducted with and by problem-situation participants (Checkland, 2000b).

    SSM is an approach to the investigation of organizational problems that may – or may not – require computer-based system support for their solution. In this sense, SSM could be described as an approach to generating early requirements for change analysis, rather than as a systems design approach. The original, seven-stage method of SSM (Checkland, 1979) has been replaced by more nuanced approaches to soft systems analysis (note the loss of the word “method”). Checkland now considers the approach more of a mindset, or way of thinking, than a strict set of steps that should be followed (Checkland & Holwell, 1998). However, without the roadmap provided by the seven stages of analysis, it can be difficult to understand where to start. For this reason, the discussion on this site employs the terminology and approach of the seven-stage method originally proposed, while attempting to iron out some of the complexities and confusions in the original (Checkland, 1979).

    This website attempts to explain some of the elements of SSM for educational purposes. It is not intended as a comprehensive source of information about SSM, but as a reflection of my own tussles with the approach. As a result, it may well subvert some of Checkland’s original intentions, in my attempt to make the subject accessible to students and other inquiring minds…


    Note [1]: Weltanschauung is a German word, used in philosophical works to represent a particular philosophy or view of life, especially when analyzing the worldview of an individual or group. Checkland (1979) adopted the term in its philosophical sense when he was making the argument that the purposes for which a system of human-activity (or work) exists are situated within a specific sociocultural framework of rules and expectations that govern who-does-what and how. The idea is similar to Anselm Strauss’ concept of social worlds, where participants draw on experience-based interpretations, culturally-situated, relational identities, and shared interpretations of organizational structures that allow them to respond automatically to new phenomena and events (Strauss, 1978). The term “worldview” may be substituted, with the understanding that this reflects the broader use of the term to indicate socioculturally-situated interpretations, not simply an individual’s point of view.

    References

    Ackoff, R. L. (1974). Redesigning the Future. New York: Wiley.

    Checkland, P. (1979)  Systems Thinking, Systems Practice. John Wiley and Sons Ltd. Chichester UK. Latest edition includes a 30-year retrospective. ISBN: 0-471-98606-2.

    Checkland P., Casar A. (1986) Vickers’ concept of an appreciative system: A systemic account. Journal of Applied Systems Analysis 13: 3-17

    Checkland, P., Holwell, S.E. (1998) Information, Systems and Information Systems. John Wiley and Sons Ltd. Chichester UK. ISBN: 0-471-95820-4.

    Checkland, P. & Scholes, J. (1999) Soft Systems Methodology in Action. John Wiley and Sons Ltd. Chichester UK. ISBN: 0-471-98605-4.

    Checkland, P. (2000a) New maps of knowledge. Some animadversions (friendly) on: science (reductionist), social science (hermeneutic), research (unmanageable) and universities (unmanaged). Systems Research and Behavioral Science, 17(S1), pp. S59-S75. http://www3.interscience.wiley.com/journal/75502924/abstract

    Checkland, P. (2000b). Soft systems methodology: a thirty year retrospective. Systems Research and Behavioral Science, 17: S11-S58.

    Checkland, P., Poulter, J. (2006) Learning for Action: A Short Definitive Account of Soft Systems Methodology and its Use, for Practitioners, Teachers and Students. John Wiley and Sons Ltd. Chichester UK. ISBN: 0-470-02554-9.

    Churchman, C.W. (1971) The Design of Inquiring Systems, Basic Concepts of Systems and Organizations, Basic Books, New York.

    Rittel, H. W. J. (1972). Second Generation Design Methods. Design Methods Group 5th Anniversary Report: 5-10. DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H. W. J. and M. M. Webber (1973). “Dilemmas in a General Theory of Planning.” Policy Sciences 4, pp. 155-169.

    Vickers G. (1968) Value Systems and Social Process. Tavistock, London UK.

    Winter, M., Brown, D. H., & Checkland, P. B. (1995). A role for soft systems methodology in information systems development. European Journal of Information Systems, 4(3), 130-142.

    von Bertalanffy, L. (1968) General System Theory: Foundations, Development, Applications. New York: George Braziller, revised edition 1976.