Why is design improvisational? We talk about design as if it were fixed: as if there were a clear set of rules for design – one best way to design everything. We celebrate designers who produce especially elegant or usable artifacts as if they were possessed of supernatural powers. Yet design is just the application of “best practice” principles to a specific situation. We can observe how the users of a designed artifact or system work, then design the artifact or system accordingly. Why does that approach fail so often?
The issue with design rules (and patterns) is that they only deal with well-defined problems.
Designers acquire a repertoire of design-elements-that-work: patterns that fit specific circumstances and uses. Over time, experienced designers build up a deep understanding of which problem-elements each of these patterns resolves. So they can assess a situation, recognize familiar problem-elements, then fit these with design patterns that will work in those circumstances. The problem comes when a designer faces a novel or unusual situation that they have not encountered before. Novice designers encounter this situation a great deal, but even experienced designers must deal with emergent design in a novel context. In these circumstances, designers iterate their design, as shown in Figure 1. They identify (often partial) problems, ideate/conceive relevant solutions, give those solutions form with a prototype, then evaluate the prototype in context. This often reveals emergent user needs or problems, which are explored in the next iteration.
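As a rough illustration of this cycle (my sketch, not a model from the text), the loop below treats each evaluation as surfacing emergent needs that seed the next iteration; the data and the one-need-per-cycle assumption are hypothetical toys.

```python
# A minimal, hypothetical sketch of the iterate loop described above:
# identify (partial) problems, ideate/prototype, evaluate in context,
# and let emergent needs drive the next cycle. Toy data, not a real model.

def iterate_design(known_needs, hidden_needs, max_cycles=5):
    design = set()
    problems = set(known_needs)                    # only a partial view at first
    for cycle in range(1, max_cycles + 1):
        design |= problems                         # ideate + prototype for the problems we see
        remaining = sorted(hidden_needs - design)  # evaluation in context surfaces new needs
        emergent = set(remaining[:1])              # toy assumption: one need surfaces per cycle
        print(f"cycle {cycle}: design={sorted(design)}, emergent={sorted(emergent)}")
        if not emergent:                           # nothing new surfaced: design stabilizes
            break
        problems = emergent                        # emergent needs drive the next iteration
    return design

iterate_design(known_needs={"enter data"},
               hidden_needs={"enter data", "reorder steps", "correct earlier inputs"})
```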
An important aspect of iterative design is that iterations can occur within cycles. As designers succeed or fail at successive designs, they accumulate experiential knowledge that allows them to assess new situations quickly and to understand which design elements will work or fail in that situation, looping back to remediate the design as they spot logical flaws and gaps. The problem with this is that (as the Princess said) you have to kiss an awful lot of frogs to get a Prince. An awful lot of people end up with really bad designs, because their designer did not recognize elements of the situation well enough to understand which pattern-elements to implement. If you are really unlucky, you will also end up with one of those designers who feel it is their mission in life to prevent the end-user “mucking about with” their design. If you are lucky, your designer will recognize that it is your design, not theirs, and will design artifacts and systems in ways that allow people to adapt and improvise how they are used.
This means that design goals are constantly changing between iterations, as shown in Figure 2. The designer starts by designing for the subset of goals they understand. As they explore and test the design with users, they become aware of new requirements and modify the subset of goals they are designing for. As part of this process, they also discard any requirements that are no longer associated with perceived user needs.
Improvisation takes a multitude of forms. It might be that a user wishes to customize the color of their screen (because the designer thought that a good interface should look like a play-school). This may not do much for the function of your work-system, but it does mean that your disposition towards work is a heck of a lot sunnier as you use it. Or it might be that the information system which you use expects you to enter data on one step of your work before another. You might be able to enter data into a separate screen for each step, reordering the steps as you wish. More usually, you have to enter fake data into the first step, then go back later to change this, once you have the real data. This is because IT systems designers treat software design as a well-structured problem. A well-structured problem is one that contains the solution within its definition. Defining the problem as a tic-tac-toe game application means that you have a set of rules for how the game is played which absolutely define how it should work. This is fine if everything goes to plan, but a huge pain for users when it does not. The only discretion left to the user is how to format the results in a printed report, which is not much comfort if your whole transaction failed because you were prevented from going back to change one of the inputs. This is not rocket science – developers need to design systems that let users work autonomously.
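To make the contrast concrete, here is a minimal sketch (my example, not the author’s) of why tic-tac-toe is well-structured: the rules are contained in the problem definition, so the code follows directly from them and leaves the user no discretion. The list-of-nine-cells board encoding is an arbitrary illustrative choice.

```python
# A minimal sketch of a well-structured problem: tic-tac-toe's rules fully
# determine the program's behavior. The board encoding is an assumption.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """board: list of 9 cells, each 'X', 'O', or None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None  # no winner yet - the rules leave nothing to negotiate

print(winner(["X", "X", "X", "O", "O", None, None, None, None]))  # -> X
```

A wicked organizational problem offers no such fixed rule set: the “win condition” itself is contested among stakeholders.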
Business applications tend to present wicked problems [2]. A wicked problem cannot be defined objectively, for all the reasons identified in Figure 3. Solving a wicked problem requires business users and stakeholders to agree on what problems they face, their priorities in resolving these, and what their change-goals are.
A wicked problem can be understood as a web of interrelated problems. It is not always clear what the consequences of solving any part of this mess will be. Some of the problems may have “obvious” solutions, but implementing those solutions may make other, related problems better or worse. This is why iterative design is central to resolving wicked problems. In general, stakeholders don’t understand what they need until they see it. So solutions must be designed flexibly, so that changes can be implemented as their consequences are realized and so that stakeholders and users can adapt the solution in use. People are infinitely improvisational. They develop work-arounds and strategies to manage poor design. But, as Norman [1] observes, why should users have to develop work-arounds for poor design?
What is it about the design process that leads us to constraining IT systems, interfaces, and work procedures based on the system design, rather than to system designs based on flexible work procedures?
It is difficult to manage design in situations where design goals evolve as you understand more about the context in which a design will be used. You are trying to hit a moving target.
Organizational change problems associated with situated design (see footnote 1) are wicked problems. The design constraints that result from the social and political nature of wicked problems (shown in Figure 3) create a more complex and iterative process than the design of well-defined technology, which employs a more reductionist approach.
Other papers and posts on this site explore findings from my research studies and reflections from my own experience in design. Elsewhere, I discuss some key underlying principles of design, to explore how the design process works in practice (rather than how we manage it now, which is pretty much based on unsubstantiated models of how humans think, from the 1950s). We need to develop approaches that allow us to incorporate multiple perspectives of “the problem” and that manage the evolution of design goals and requirements. Improvisationally.
Footnote 1: Situated design is design that considers the situation or context in defining how a design works. If we consider organizational problems to be wicked problems, the majority of organizationally-situated design will deal with problems that are subjective and contentious, involve multiple perspectives, and are systemic in their impacts.
References
[1] Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books.
[2] Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169.
In systemic thinking, we often debate whether to start with the “big picture” and then decompose the problem situation into sub-problems (the traditional, decompositional approach to requirements analysis), or whether to start with the detailed problems that stakeholders articulate and analyze how these are related (the bottom-up approach). As a systemic analyst, I have always seen these as two halves of the same approach, so it was interesting to remember my own confusion in the early days of my career, when this was not so obvious … 🙂
Systemic analysis can be related to the learning-cycle. There are many variants of this. Dewey introduced the world to the idea that we learn about how the world works through interactions with that world (experiential learning):
‘An experience is always what it is because of a transaction taking place between the individual and what, at the time, constitutes the environment’ (Dewey, 1938, p. 43)
Lewin suggested a four-stage model of experiential learning that cycles between the concrete and the abstract – this later became known as the “hermeneutic circle,” as an individual cycles around these stages of learning, pondering the meaning of what they experience in each cycle, then using this meaning to construct an increasingly complex mental model of the world.
To engage in systemic thinking, we need approaches to analysis that permit us to cycle around these stages multiple times – especially when we encounter new information. In the pages that follow, I explore some methods and techniques used to support cyclical, systemic analysis.
Susan Gasson, College of Computing & Informatics, Drexel University
Design is too often viewed as a single stage in IS development, where we give form to an IT architecture. I present a history of IS design approaches over time, explaining how design goals become subverted by technocentric approaches and how to retrieve them for human-centered design, to provide an “information architecture” that is meaningful to the system’s users.
Please cite this paper as: Gasson, S. (2024) ‘A Brief History Of Information System Design.’ Working Paper. Available from https://www.improvisingdesign.com/bhod/ Last updated 08/17/2023.
ABSTRACT
Information system design is often viewed as a stage in the system development life-cycle, concerned with the detailed “laying out” of system software – more akin to technical drawing than to design in an architectural sense. The intent of this paper is to retrieve the notion of design and to view design as an holistic activity, where form is conceptualized for a whole set of information system elements that together make up a meaningful set of features, supported by an information architecture for the system’s users. Every information system design process is unique, because every information system is embedded much more firmly in an organizational context and culture than a physical artifact is. To manage this uniqueness, we need a more complex understanding of what design involves than that communicated by most IS texts.
The paper presents a review of design theories, derived from IS literatures and from other relevant literatures, such as organizational management and social cognition. Friedman and Cornford (1990) identify three phases of computer system development: (i) dominated by hardware constraints, (ii) dominated by software constraints, and (iii) dominated by user relations constraints. The evolution of conceptualizations of design is presented from these perspectives, then a fourth evolutionary stage is discussed: IS design dominated by business process constraints. This fourth perspective moves the design of information systems on from the limited perspectives offered by viewing an information system as synonymous with a computer system, and resolves many of the theoretical conceptualization issues implicit in recent IS design writings.
In conclusion, it is argued that current models of design focus on design closure and so de-legitimize the essential activities of investigating, negotiating and formulating requirements for an effective design. IS design faces five “problems” that need to be resolved: employing an effective model of design with which to manage the labor process, defining the role of the information system, bounding the organizational locus of the system problem, understanding the cultural, social and business context of which the IS will be a part, and managing collaboration between cross-functional stakeholders. A dual-cycle model of design is proposed: one that focuses on “opening up” the design problem as much as on design closure. An understanding of this dialectic has significant implications for both the research and practice of design; these are presented at the end of the paper.
KEYWORDS: Information System Design, System Development Methods, Co-design of Business and IT Systems
1. THE DESIGN OF ORGANIZATIONAL INFORMATION SYSTEMS
1.1 What is Design?
Business organizations are increasingly moving their focus ‘upstream’ in the traditional, waterfall model of the system development life-cycle. Recent trends in information system implementation – standardization around a small number of hardware and software environments, the adoption of internet communication infrastructures, object-oriented and component-based software design, outsourcing and the use of customized software packages – have standardized and simplified the design and implementation of technical systems. Organizational information systems are no longer viewed as technical systems, but as organizational systems of human activity – business processes, information analysis and dissemination – that are supported by technology. Firms can therefore focus more on the strategic and organizational aspects of information systems, implementing cross-functional information systems that affect stakeholders from many different organizational and knowledge domains.
Yet existing approaches to information system design derive from a time when technical complexity was the core problem, so they are intended to bound and reduce the organizational ‘problem’ so that a technical system of hardware and software may be constructed. Existing design approaches treat complex organizational information systems as synonymous with information technology. They are based on models of individual, rather than group, problem-solving and cognition. We have few methods to enable stakeholders from multiple knowledge domains to participate in information system design. What methods exist are ad hoc and not based upon any coherent theoretical understanding of how collaborative design works. We have no models upon which to base future management approaches and methods for information system design ‘upstream’ of the traditional, technical system development life-cycle (in waterfall terms).
Winograd & Flores (1986) define design as “the interface between understanding and creation”. Unsurprisingly, given the difficulty of studying such a complex process, there are few models of design which are based upon empirical work, rather than theoretical conjecture or controlled experiments. Most models are also rooted in an individual perspective of design, rather than those group processes which occur in most IS design contexts.
As theories of design activity have evolved, so the definition of the term “design” itself has changed. In the information systems literature, design was initially viewed as the decompositional processes required to convert a structured IT system definition into a physical system of hardware and software. In the introduction to Winograd (1996), the author states:
“Design is also an ambiguous word. Among its many meanings, there runs a common thread, linking the intent and activities of a designer to the results that are produced when a designed object is experienced in practice. Although there is a huge diversity among the design disciplines, we can find common concerns and principles that are applicable to the design of any object, whether it is a poster, a household appliance, or a housing development.” (Winograd, 1996, page v).
Given these commonalities, we have to question why the design of an organizational information system is so much more problematic than the design of a physical artifact, such as a house. In the field of architecture, design has well-established principles and procedures, with established computer-based tools to support them. Yet information system design is often viewed as a single stage in a “structured” system development life-cycle, concerned with the detailed “laying out” of system software – more akin to technical drawing than to design in an architectural sense. The intent of this paper is to retrieve the notion of design and to view design as an holistic activity. For the purposes of this discussion, design is viewed as the process of conceptualizing, abstracting and implementing an organizational information system, rather than as a specific stage in the information system development life-cycle. Design is not viewed as giving form to system software, but as giving form to a whole set of information system elements, some of which are physical and some abstract in nature. The abstract elements may lead to such deliverables as a particular approach to the management of organizational change, the physical information system’s suitability for a particular group of users, or its ability to provide a set of flexible organizational outcomes for a range of different stakeholder groups. Every information system design process is unique, because every information system is embedded much more firmly in an organizational context and culture than physical artifacts are. Abstraction and generalization are therefore much more complex than those required for a universal artifact that can be employed in many different contexts.
This paper draws on several literatures to derive an understanding of what design “in the round” means. Abstractions of design from the literatures on organizational theory, architectural and engineering design, human-computer interaction, computer-supported cooperative work, management information systems and social cognition are synthesized here, to present an holistic conceptualization of design as organizational problem-solving, individual and group activity, and management-oriented process models.
Friedman and Cornford (1990) identify three historical phases of computer system development and a putative fourth phase: 1) System development dominated by hardware constraints (mainly cost and reliability of hardware). 2) System development dominated by software constraints: developer productivity, expertise and team project management issues (dominated by meeting deadlines and project budgets). 3) System development dominated by user relations constraints: inadequate perception of user needs by developers and lack of prioritization of user needs. 4) (Predicted) Organization environment constraints.
This paper reexamines these phases from the perspective of their impact on design methods and paradigms. The fourth phase is redefined in the light of technical developments, which have provided ubiquitous computing platforms and simplified the development of both intra- and inter-organizational information systems. Instead, it is argued that the fourth-phase constraints concern business and IT system alignment. This is signified by the poor involvement of business managers and other non-IS people, leading to a poor understanding of boundary-spanning business strategy and application domain issues.
1.2 Employing IS Design Methods
A brief summary of how methods relate to the evolving focus of IS design is provided in Table 1.
Table 1. Evolving Focus of IS Design Methods
Approach: Code-and-fix (pre-1970s)
Description: Write code, test whether it works, fix problems in the code. Requirements and design are worked out as you code. Many web development projects use code-and-fix – inappropriately.
Issues Addressed: Lack of standardization in development methods.
New Issues Introduced: The code quickly becomes messy; the product often does not meet the requirements; and it is expensive to fix the code once established.

Approach: Waterfall model – a.k.a. structured development (1970s onward)
Description: System development proceeds through a sequence of stages: requirements analysis, design, build (code & test), deliver, maintain. System requirements are defined, in collaboration with the client and users, at the start of the project. There is little feedback between stages – requirements, design, evaluation criteria, etc. are frozen and the system is built and evaluated against an unchangeable specification.
Issues Addressed: No systematic approach to understanding requirements, or to ensuring that you have a complete set.
New Issues Introduced: The requirements need to be understood in detail before system development starts. Assumes that the client and users understand their requirements for support and are capable of articulating these perfectly. Users normally get involved only at the start and end of development, so poor user support is not spotted until it is too late to do anything about it. Large amounts of documentation are involved. This slows development, leads to out-of-date documentation (things change), and produces systems that deliver the agreed functions but are useless for their intended purpose.

Approach: Incremental development – a.k.a. phased rollout (mid-1970s onward)
Description: A modified waterfall model that incorporates a predetermined number of system design and build iterations, to develop incremental sets of functional requirements. System requirements are defined at the beginning, but may be modified at specific points (the start of project phases), after user evaluation of the previous phase deliverables.
Issues Addressed: Fixes the “frozen requirements” problem, as it introduces iterations during which both developers and users get to evaluate each version of the system, so they understand the implications of requirements before it is too late to change things.
New Issues Introduced: Has the benefit that system requirements can be changed as both the project team and users understand these better (within the overall project scope). Does not always involve users meaningfully – phases can be used for developers to explore requirements. Changes to requirements need to be evaluated and controlled, or “scope creep” sets in (the project cost and duration keep increasing until they become untenable). This approach needs a strong change-control process, where the Project Manager negotiates additional budget and duration if additional functionality is added, or trades one area of system scope for another, with the client.

Approach: Prototyping
Description: Involves cycles of development where each increment is built to satisfy a specific set of user requirements, evaluated for fit with user needs, then modified or its scope evolved in the next increment.
Issues Addressed: Introduces a specific user-deliverable focus. Also introduces the idea of rapid turnaround (shorter development cycles, based on incremental subsets of the system scope).
New Issues Introduced: Prototyping approaches almost always start at the wrong point in the cycle (design, rather than requirements analysis, because the tendency is for developers to build the parts they understand). So requirements follow design and evaluation of the system, rather than driving system design. Early design decisions (e.g. the choice of an implementation platform, the size of storage buffers and databases, etc.) often constrain how later user requirements can be implemented. Can be costly, as prototypes need to be thrown away and the design started over.

Approach: Agile development (late 1990s onward)
Description: User requirements drive cycles of incremental development. Each increment is built to satisfy a specific set of user feature-requests (a.k.a. use-cases). Each increment is evaluated for fit with operational user needs, then scope and requirements are reviewed before the next increment.
Issues Addressed: Increasingly formalized methods took too much time and did not satisfy user needs. Developers met to swap notes on workarounds and defined a minimalist approach around: i) rapid development cycles; ii) regular user evaluation of system prototypes; iii) client-driven prioritization of development goals.
New Issues Introduced: This approach can be perceived as a bottomless money pit (especially by accountants). Because it blurs the line between system development and maintenance, there is no point at which the system can be considered stable (unless the number of cycles or the functionality is constrained by pre-defining project duration or scope – which defeats the point). Because of this, agile development is not a good choice for enterprise systems or systems to be integrated with other systems – although it may be used to explore user requirements for subsystems that are later enhanced to be cross-functional and scalable using an RUP approach.

Approach: eXtreme development (late 1990s onward)
Description: eXtreme programming was developed in the late 1990s and predates the agile manifesto. Now known as eXtreme development, it provides a method for adaptive systems development that combines pair programming and user stories with team coordination (the planning game), collective ownership of code, and test-driven development. Many of these ideas have been absorbed into other agile methods.
Issues Addressed: User requirements drive rapid, focused cycles of development, aimed at producing a system prototype. Each increment is built to satisfy a specific set of user stories (scenarios of work-task sequences), elicited during development in face-to-face exploration of how people work. This approach is more user-facing and participatory than most agile approaches.
New Issues Introduced: This is very narrowly focused and seldom produces a system that can be used operationally, unless the number and variety of users is very small (e.g. a subset of the Marketing group may want to set up a customer database for high-value promotions). The system produced is designed around functionality, not technical stability or compatibility, so it may need to be redesigned once the user requirements have been determined. As with the prototyping approach from which it developed, system prototypes may be thrown away, so a lot of time may be wasted.

Approach: Scaled Agile (2000s onward)
Description: An adaptive version of Agile Development that focuses on corporate systems scalability, compatibility, and maintainability. Takes a “minimum viable product” approach to design.
Issues Addressed: Moves away from the user- or human-centered interests of prior agile approaches, focusing more on optimization of systems, databases, and applications across platforms and business units.
New Issues Introduced: The “minimum viable product” approach means that user requirements are often not satisfied unless they are key to business strategy. This becomes a firefighting approach, where developers don’t understand what users need and systems are updated only when things go wrong.
* This is why object classes and inheritance are central, as telecoms networks are hierarchical in the way that data is distributed between devices. One telecoms device will act as a router, sending instructions to multiple lower-level devices that control different network routes. The Object-Oriented modeling approach emerged through the work of Rumbaugh (1991), Booch (1991), and Jacobson (1992), who developed O-O principles (object-classes and inheritance, the association of data-structure/object definitions with specific processes/methods, and the relative independence of objects and their methods from other objects, coordinated by messages rather than sequential, monolithic code) for business applications. Use-cases were an afterthought, added when Ivar Jacobson defined his variant of O-O design around subsets of functionality associated with different user Roles, Products, and Tasks. This was formalized by Rational Software in 1997, to improve quality and predictability by adding performance testing, UI design, data engineering, and project management controls. The resulting approach became the Rational Unified Process (RUP).
1.3 Relating IS Design Methods To Models of Problem-Solving
Design is most frequently equated with theories of decision-making and problem-solving in organizations. The following sections analyze how our understanding of problem-solving in design has developed, and trace the impact of evolving problem-solving models on design approaches for organizational information systems. This evolution is shown in Figure 1. It is argued in this paper that, as the scope of organizational impact encompassed by information systems increased over time, so did the importance of integrating the contextual and social requirements needed for the system to operate.
The paper is structured as follows. Theories of design, as they relate to organizational information systems, are presented from the evolutionary perspective of the five theories of decision-making. But theories of design do not follow a linear development of thought: they are interrelated and emergent. Relevant design theories are arranged according to the identified research threads, to reflect progressive and sometimes parallel developments in how design is perceived by the organizational and IS literatures. The use of Friedman and Cornford’s (1990) “constraints” view of computer-system development permits a reasonable (if post-justified) perspective on why various theories were adopted by the IS community (both academic and practical) at a particular stage in the development of how IS were perceived. Section 2 deals with the evolution of early design theories: the attempt to apply, and modify, rational, “information processing” models to the development of systems limited by hardware constraints. Section 3 discusses the adoption of hierarchical decomposition and “structured” approaches to design, as a way of dealing with software constraints. Section 4 presents the incorporation of theories of social construction, emancipation and “human-centered” design as a way of dealing with user-relations constraints. Section 5 presents the extension of design theory to encompass boundary-spanning information systems and discusses this extension as a response to strategic business coordination constraints. Section 6 summarizes the evolution of design theories, discusses some lacunae in current understandings of how design works and presents a dual-cycle model of design, to resolve some of the implications for design research and practice.
2. THE EVOLUTION OF INDIVIDUAL DESIGN APPROACHES
2.1 From Rational Decision-Making To Bounded Rationality
The concept of “rational” decision-making developed from Taylor’s (1911) “scientific management” principles and Weber’s (1922) “rationalization” of the social world. Both of these theories were concerned with optimization and quantifiable interpretations of natural phenomena, including human behavior. Simon’s (1945) book Administrative Behavior formalized rational decision-making into a linear, staged process model of intelligence-gathering, evaluation of alternative courses of action, and choice. Early information system (IS) design theory stems from this perception of human behavior in organizations as rational decision-making. Human beings are seen as objective information processors, who make decisions rationally by weighing the consequences of adopting each alternative course of action. Each stage uses the outputs of the previous stage (hence the waterfall model of Royce, 1970). The information processing model of problem-solving (shown in Figure 2) assumes that all information pertaining to design requirements is available to the designer and that such information can be easily assimilated (Mayer, 1989).
In the information processing model, design involves moving from a statement of the problem in the world to an internal encoding of the problem in memory, by mentally encoding the given state, goal states, and legal operators for a problem – i.e. by defining the problem mathematically. Solution involves filling in the gap between the given and goal states, devising and executing a plan for operating on the representation of the problem – i.e. by making a rational choice between alternative courses of action. Two major advances in design theory modified, but did not replace, the notion of rational design. The first was Alexander’s (1964) proposal that there is a structural correspondence between patterns embedded within a problem and the form of a designed solution to solve the problem. Alexander proposed the process of hierarchical decomposition – breaking down the overall design problem into a series of smaller problems (patterns) which can be solved independently of each other – as a way to accomplish complex problem-solving where all variables cannot be grasped at once. The structure of an appropriate solution may then be determined mathematically, by analyzing interactions between the variables associated with the design problem (Alexander, 1964). As Lawson (1990) notes, this presupposes that the designer is capable of defining all the solution requirements in advance of designing the solution, that all requirements assume equal importance for the solution, and that all requirements and the interactions between them affect the form of the proposed solution equally. But there is a deeper consideration. Do external “patterns” exist, or do we simply impose them subjectively on external phenomena? Alexander himself (1966) criticizes the human tendency to artificially impose patterns on external elements that may constitute a design (in his example, he discusses architectural considerations affecting town planning). Yet there is a contradiction in Alexander’s position: even his recent work assumes that patterns are somehow inherent in external “entities” and therefore that his method of pattern-matching may be employed to define objects for object-oriented system design (Alexander, 1999).
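To make the decomposition-by-interaction-analysis idea concrete, here is a rough sketch under simplifying assumptions (my illustration, not Alexander’s actual procedure, which partitioned the interaction graph far more carefully): design variables that interact are grouped into sub-problems that can be worked on relatively independently. The variable names are invented.

```python
# A toy sketch of decomposition by interaction analysis: hypothetical design
# variables and pairwise interactions; each connected component of the
# interaction graph is treated as one independently-solvable sub-problem.

interactions = {
    ("lighting", "window_size"), ("window_size", "heat_loss"),
    ("heat_loss", "insulation"), ("circulation", "door_position"),
    ("door_position", "privacy"),
}

def sub_problems(pairs):
    groups = []                                   # disjoint sets of design variables
    for a, b in pairs:
        hits = [g for g in groups if a in g or b in g]
        merged = set().union({a, b}, *hits)       # merge any groups touched by this pair
        groups = [g for g in groups if g not in hits] + [merged]
    return groups

for group in sorted(sub_problems(interactions), key=min):
    print(sorted(group))
# ['circulation', 'door_position', 'privacy']
# ['heat_loss', 'insulation', 'lighting', 'window_size']
```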
The second advance was Simon’s (1960) introduction of the principle of bounded rationality. Human beings have cognitive limitations which constrain the amount of information they can absorb, and they have access to incomplete information about alternative courses of action. These limitations lead to high levels of uncertainty, to which humans respond by developing a simplified model of the real situation: they reduce, constrain and bound the problem until it becomes sufficiently well-defined to be resolved. Then they evaluate alternative solutions sequentially until an alternative is discovered which satisfies an implicit set of criteria for a satisfactory solution. The solution reached by this process of bounded rationality is not optimal but satisficing, in that it satisfies a minimal, rather than optimal, set of solution criteria. This theory only appeared to apply to a subset of relatively well-defined design problems. Simon (1973, 1981) distinguished between well-structured and ill-structured problems. Well-structured problems may be resolved through the application of hierarchical decomposition techniques. But ill-structured problems (such as the design of a computer system) need to be structured before they can be analyzed. Individuals structure such problems by decomposing them into sub-problems: these are synthesized unconsciously so that the original, ill-structured problem “soon converts itself through evocation from memory into a well-structured problem” (Simon, 1973). The significance of this is worth noting: the process of problem structuring requires additional information, retrieved from long-term memory. This demonstrates a gradual realization that inductive reasoning (generalization from evidence) is significant in design. The “rational” model assumes deductive reasoning (logical inference about particulars that follows from general or universal premises). Even Alexander (1964) did not view the design process as entirely rational. In fact, he observes:
“Enormous resistance to the idea of systematic processes of design is coming from people who recognize correctly the importance of intuition, but then make a fetish of it which excludes the possibility of asking reasonable questions.” (Alexander, 1964, page 9).
By the time of Simon’s (1973) work, what Alexander refers to as “intuition” had become the application of inductive reasoning. However, a realization of the significance of inductive reasoning appears to lead to the notion that design is wholly “creative” in nature and therefore uncontrollable. Inductive reasoning involves conclusions drawn from particular cases in the individual’s experience (inference from the particular to the general), which is the antithesis of deductive reasoning, such as that involved in hierarchical decomposition. Thus, we come to the period of IS design dominated by pressures to manage the labor process.
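The satisficing search described above can be contrasted with the “rational” optimizing model in code. This is my sketch, not Simon’s formulation; the design options, scoring rule and thresholds are invented for illustration.

```python
# A minimal sketch contrasting the rational-choice model (score everything,
# pick the best) with Simon's satisficing (accept the first alternative that
# meets a minimal set of criteria). Options and criteria are hypothetical.

def optimize(alternatives, score):
    """Rational choice: evaluate every alternative and select the optimum."""
    return max(alternatives, key=score)

def satisfice(alternatives, good_enough):
    """Bounded rationality: evaluate sequentially, stop at the first
    alternative that satisfies the minimal criteria (may not be optimal)."""
    for alt in alternatives:
        if good_enough(alt):
            return alt
    return None  # the bounded search found nothing acceptable

options = [
    {"name": "A", "cost": 90, "usability": 0.6},
    {"name": "B", "cost": 60, "usability": 0.7},
    {"name": "C", "cost": 40, "usability": 0.9},
]

print(satisfice(options, lambda o: o["cost"] <= 100 and o["usability"] >= 0.5)["name"])  # A
print(optimize(options, lambda o: o["usability"] - o["cost"] / 100)["name"])             # C
```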
2.2 From Structured Decomposition To Opportunistic Design
2.2.1 Design As Hierarchical Decomposition
The transition between constraint-phases was driven by the need to collaborate in the production of systems software. While IT systems remained relatively simple, a single designer could build them. Once IT systems were used to support multiple organizational activities, it became necessary to employ teams of software designers. The evolution of design methods was driven by the need for collaboration and communication between individuals. Design approaches needed to develop a “common language”. This was provided by the concept of structured design, based on Alexander’s (1964) hierarchical decomposition. Lawson (1990) presents a typical hierarchical decomposition model from the architecture field (Alexander’s original work was in architecture), shown in Figure 3.
Although the application of hierarchical decomposition appears first in the architectural design literature, this model was soon applied to IT system design. No single author can be credited with the development of the structured systems development life-cycle model that now underlies most IT systems development (the “waterfall” model), but the most influential presentation of this model is in Royce (1970). The model is so attractive because it seems to prescribe a way to control the labor process. By breaking the design process into stages (which are reproduced at multiple levels of decomposition), managers can (i) standardize work processes and (ii) divide design work between different people. This type of fragmentation in design-work distances people from the object of their work (leading to lower morale and productivity) and leads to uninformed decisions about design alternatives and form (Corbett et al., 1991).
The model is linear – while still popular in the IS field because of its focus on clearly-defined delivery milestones, this type of model has been rejected by many areas of creative design, such as architecture, as being unrepresentative of ‘real-world’ design processes (Lawson, 1990). McCracken and Jackson (1982) voiced the first dissent with structured, hierarchical decomposition in the IS literature, when they argued that this approach did not support the actual processes of design. They observed that IS professionals circumvented the method, then post-rationalized their designs by producing structured design documentation that made it look as if they had followed the method. But they concluded that the structured approach should be used anyway, because of the benefits for controlled and standardized process management: “System requirements cannot ever be stated fully in advance, not even in principle, because the user doesn’t know them in advance … system development methodology must take into account that the user, and his or her needs and environment, change during the process.” (McCracken & Jackson 1982, p. 31).
This paper argues for the emergence of system requirements through the process of design. But it is not until Boehm (1988) that structured approaches to design are seriously considered harmful to information system design, because they ignore the human-activity and task requirements of the information system: “Document-driven standards have pushed many projects to write elaborate specifications of poorly understood user interfaces and decision support functions, followed by the design and development of large quantities of unusable code.” (Boehm 1988, p. 63).
If we assess the empirical literature, studies of information system design tend to embody the assumptions of their contemporary theoretical literature. Earlier studies (e.g. Jeffries et al., 1981; Vitalari & Dickson, 1983) argue that failure is due to a lack of methodological consistency in applying structured decomposition. Later studies, with a wider scope of IS design in its organizational context (e.g. Jenkins et al., 1984; Curtis et al., 1988; Hornby et al., 1992; Davidson, 1993), argue that methodologies do not represent a ‘theory-in-use’, but a ‘theory-of-action’ (Argyris and Schön, 1978): they represent a rule-based interpretation of what should be done, rather than what people actually do. As disciplinary knowledge accumulates over time, the role of inductive reasoning increases in importance, and there is a growing appreciation of the role that tacit knowledge plays in design. The notion of “pattern-matching” evolves from Alexander’s (1964) early, positivist concept of deductive pattern-matching to an inductive concept of convergence, involving the progressive fit of partial problem-definitions to partial elements of known solutions to such problem-patterns (Turner, 1987). Alexander himself critiqued his early notion of design as deductive variance-reduction. His recent work (e.g. Alexander, 1999) demonstrates a rich appreciation of design as inductive pattern-matching. In this sense, an individual design process as problem-solution convergence becomes very similar to the notion of design emergence discussed below.
2.2.2 Design As Experiential Learning
As information systems designers became more ambitious in the scope of organizational support that they attempted, information systems became more complex and the focus of design shifted to solving organizational and informational problems, rather than data processing. An evolution in thinking about organizational problems is demonstrated by Rittel (1972; Rittel and Webber, 1973) and Ackoff (1974). Ackoff (1974) described organizational problems as “messes”, arguing that organizational problem selection and formulation are highly subjective: “Successful problem solving requires finding the right solution to the right problem. We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.” (Ackoff, 1974, page 8). Horst Rittel suggested that organizational problems are “wicked” problems (Rittel, 1972; Rittel and Webber, 1973). A wicked problem has the following characteristics:
a) it is unique;
b) it has no definitive formulation or boundary;
c) there are no tests of solution correctness, as there are only ‘better’ or ‘worse’ (as distinct from right or wrong) solutions;
d) there are many, often incompatible, potential solutions;
e) the problem is interrelated with many other problems: it can be seen as a symptom of another problem and its solution will formulate further problems.
Wicked problems (and messes) differ from Simon’s (1973) ill-structured problems in one important respect. Ill-structured problems may be structured by the application of suitable decompositional analysis techniques: they may be analyzed (even if not rationally, in a way that may be justified on rational grounds). But wicked problems cannot even be formulated for analysis, because of their complexity and interrelatedness (Rittel & Webber, 1973). Rittel (1972) argued that such problems cannot be defined objectively, but are framed (selected, artificially bounded, and defined subjectively and implicitly). Even once a wicked problem has been subjectively defined, the designer has no objective criteria for judging whether it has been solved (in computing terms, there is no ‘stopping rule’). Rittel advocated ‘second-generation design methods’ to replace the rational, decompositional model of design. These methods should include “designing as an argumentative process”, which Rittel saw as “a counterplay of raising issues and dealing with them, which in turn raises new issues and so on”.

The realization that complex system design required experiential learning (Lewin, 1951) coincided with a demand-driven approach to IS design. As information systems expanded their scope, so information system users exerted their power. The rise of evolutionary prototyping was driven by their demands for more usable systems, but also by the fit with experiential learning. But it is a pragmatic fit, rather than an explanatory fit – the prototyping approach supports the need for experiential learning, but it does not explain the processes or behaviors of the people engaged in design. A convincing explanation is provided by Turner’s (1987) suggestion that design problems and solutions converge together. Information system design can be conceptualized as the progressive ‘fitting’ of the framework of system requirements that represent the problem with known solutions, based upon the designer’s previous experience of problems of a particular type (Turner, 1987). Turner observed various strategies employed by computer science and other students when resolving a semi-structured design task and concluded that goal definitions evolve with the design. Where designers’ own experience failed to provide a solution, they widened the search space to call on the experience of colleagues. Turner argued that “requirements and solutions migrate together towards convergence” and that the process of designing information systems is subjective as well as emergent: “Design appears to be more ad hoc and intuitive than the literature would lead us to believe, solutions and problems are interrelated and the generation of solutions is an integral part of problem definition. Problems do not have only one solution; there may be many. Consequently, design completeness and closure cannot be well-defined. There are two categories of design factors: subjective and objective. Objective factors follow from the subjective concepts on which designers model the system. The difficulty in the past is that we have not acknowledged, explicitly, the presence of subjective factors, with the result that, in many cases, objective factors appear to be arbitrary.” (Turner, 1987).
We are then faced with the problem of how designers determine the “subjective concepts” on which they model their systems. In an empirical study of architects, Darke (1978) discovered a tendency to structure design problems by exploring aspects of possible solutions, and showed how designers tended to latch onto a relatively simple idea very early in the design process (for example, “we assumed a terrace would be the best way of doing it”). This idea, or ‘primary generator’, was used to narrow down the range of possible solutions; the designer was able to rapidly construct and analyze a mental archetype of the building scheme, which was then used as the basis for further requirements search. Darke’s (1978) model of the design process is shown in Figure 4.
Darke’s (1978) architectural design model finds a parallel in the IS literature, in a protocol analysis study of information system design dialogues between designer and user (Malhotra et al., 1980). They highlighted the core role played by cognitive breakdowns (Winograd and Flores, 1982, after Heidegger, 1960) in making the implicit become explicit. They concluded that internal (mental) models held by designers often relied on assumed and implied, rather than explicit, requirements. These assumptions only surfaced when an implicitly-held requirement conflicted with an explicit user requirement, in dialog with system users. Designers often examined partially proposed design elements to test for violation of an unstated goal, and attempted to fit alternative solutions to subsets of the requirements, based on prior experience. Design goals evolve with the learning that accrues from the process of design. Turner (1987) concluded that, in practice, only some ambiguities of design requirements and goals will be resolved, and the central issue becomes one of discrimination between the significant and the insignificant. Strategies for such discrimination have been linked with “opportunism” in studies of software design (Guindon, 1990a, 1990b; Khushalani et al., 1994).
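As a rough sketch of the problem-solution convergence discussed in this subsection (my toy illustration, not Turner’s model; the repertoires and requirement names are invented), requirements are progressively fitted to known solution patterns, the search space is widened when the designer’s own experience fails, and each fitted pattern can surface further requirements:

```python
# A toy sketch of problem-solution convergence: partial requirements are fitted
# against a repertoire of known solution patterns; an unmatched requirement
# widens the search space (colleagues' experience); fitted patterns suggest
# further requirements. All names and repertoires are hypothetical.

own_repertoire = {
    "capture orders":    {"suggests": ["validate customer"]},
    "validate customer": {"suggests": []},
}
colleagues_repertoire = {
    "schedule delivery": {"suggests": ["track shipment"]},
    "track shipment":    {"suggests": []},
}

def converge(requirements, max_rounds=10):
    solution, search_space = {}, dict(own_repertoire)
    pending = list(requirements)
    for _ in range(max_rounds):
        if not pending:
            break                                       # problem and solution have converged
        req = pending.pop(0)
        if req not in search_space:                     # own experience fails to provide a fit:
            search_space.update(colleagues_repertoire)  # widen the search space
        pattern = search_space.get(req)
        if pattern is None:
            continue                                    # still unresolved - left ambiguous
        solution[req] = pattern                         # fit a known pattern to this requirement
        pending += [r for r in pattern["suggests"]      # the fitted pattern reveals
                    if r not in solution and r not in pending]  # further requirements
    return solution

print(sorted(converge(["capture orders", "schedule delivery"])))
# ['capture orders', 'schedule delivery', 'track shipment', 'validate customer']
```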
2.2.3 Opportunistic Design
Over time, it became clear from field studies of IS designers in context that designers did not employ a strictly hierarchical decomposition (top-down, breadth-first) approach. Given the widespread use of prescriptive “structured methods”, which were based on a breadth-first, then depth-wise approach to requirements decomposition, this deviation was labeled “opportunistic design” (Guindon, 1990a). Ball & Ormerod (1995) compare opportunistic design with the more structured problem-solving approaches observed in earlier studies of software design. They conclude that much of the structure observed in the early studies of design arose from the more structured nature of the problems set for subjects in experimental situations. These ideas are synthesized in the diagrammatic model given in Figure 5.
The design process is both iterative and recursive, redefining parts of the problem as well as partial solutions. In practice, only some ambiguities of design requirements and goals will be resolved, and the central issue becomes one of discrimination between the significant and the insignificant (Turner, 1987). As organizational problem-situations became increasingly complex, it became increasingly difficult for the designer to delineate a suitable boundary and formulation for the design problem they were attempting to resolve (Rittel & Webber, 1973).
As designers became aware that they faced evolving requirements (the scope and objectives of which emerged as the problem became better understood through analysis), it appears that designers abandoned an exclusively decompositional approach to design, for one where their understanding of both problem and solution converged through a process of pattern-matching, looking for successive fit between the two (Maher & Poon, 1996). Turner’s (1987) convergence findings and Malhotra et al.’s (1980) conclusions about breakdowns would lead one to conclude that this type of problem-solving, especially in design, is emergent and contingent upon interaction with multiple perspectives of the problem-situation.
Curtis et al. (1988) conclude that “developing large software systems must be treated, at least in part, as a learning, communication and negotiation process.” Designers have to integrate knowledge from several domains before they can function well. Curtis et al. identify the importance of designers with a high level of application domain knowledge: in their studies, these individuals were regarded by team members as “exceptional designers”, who were adept at identifying unstated requirements, constraints, or exception conditions, and who possessed exceptional communication skills. Exceptional designers spent a great deal of their time communicating their vision of the system to other team members, and identified with the performance of their projects to the point where they suffered exceptional personal stress as a result. They dominated the team design process, often in the form of small coalitions, which “co-opted the design process”. While these individuals were important for the depth of a design study, teams were important for exploring design decisions in breadth (ibid.).
This supports a perspective found in the literature on design framing: that decompositional approaches to system requirements analysis are of most use when designers are inexperienced, or when the design problem is unusually difficult to define (Jeffries et al., 1981; Turner, 1987). But the literature does not tell us whether the methodology is useful for supporting design activity in these situations, or whether its function is to provide psychological support in conditions of high uncertainty, as suggested by Reynolds & Wastell (1996). Guindon (1990b) argues that information system design involves the integration of multiple knowledge domains: the application domain, software system architecture, computer science, software design methods, etc. Each of these domains represents a problem-space in which a more or less guided search takes place (depending upon which solution paths look most promising and the previous experience of the designer in this domain). The IS development process should encompass the discovery of new knowledge, in particular the discovery of unstated goals and evaluation criteria.
3. FROM PROBLEM-SOLVING TO PROBLEM-FRAMING
3.1 Human-Centered Design
The concept of the wicked (or socially constructed, multi-perspective) problem is reflected in the literature on participatory design, although this literature was also driven by an interest in worker emancipation and “human-centered” design. Research evidence indicated that the traditional approach to the development of new technology resulted in technological systems which were associated with a high degree of stress and low motivation among their users (Corbett, 1987; Gill, 1991; Scarbrough & Corbett, 1991; Zuboff, 1988). The human-centered approach to the design of technology arose as a reaction to this evidence. Gill (1991) defines human-centeredness as “a new technological tradition which places human need, skill, creativity and potentiality at the center of the activities of technological systems.” Bjorn-Andersen (1988) criticized the narrow definition of human-computer interaction used by ergonomics and systems design research, which takes technology as its starting point, with the words: “it is essential that we see our field of investigation in a broader context. A ‘human’ is more than eye and finger movements”. There is a wide body of literature on the development and application of human-centered technology. Some of the main ideas of this literature are:
The human-centered approach rejects the idea of the “one best way” of doing things (Taylor, 1947): that there is one culture or one way in which science and technology may be most effectively applied (Gill, 1991).
Technology is shaped by, and shapes in turn, social expectations: the form of technology is derived from the effect of these social expectations upon the design process (MacKenzie and Wajcman, 1985). This social constructivist approach reveals the social interior of technological design: technology no longer stands as an independent variable, but an outcome which is the result of socially-constrained choices made by designers.
The human-centered approach is opposed to the traditional, technically-oriented approach, which prioritizes machines and technically-mediated communications over humans and their communicative collaboration (Gill, 1991). While technically-oriented design traditions see humans as a source of error, the human-centered design approach sees humans as a source of error-correction (Rosenbrock, 1981).
That human-centered production should concern itself with the joint questions of “What can be produced?” and “What should be produced?” The first is about what is technically feasible, the second about what is socially desirable (Gill, 1991).
That objective and subjective knowledge cannot exist independently of each other: while technologists attempt to encode the explicit, rule-based knowledge needed to perform a task, this knowledge is useless without the “corona” of tacit and skill-based knowledge which surrounds the explicit core and through which explicit knowledge is filtered (Rosenbrock, 1988). Cooley (1987) raises the issue that modern technology is designed to separate “planning” tasks from “doing” tasks (for example, in modern Computer-Integrated Manufacturing). This results in deskilled human technology users, who are less equipped for exception-handling as a result, and poorer work outcomes, as those who plan are uninformed by seeing the results of their plans and those who “do” are unable to affect the way in which work tasks are approached.
A common theme in the human-centered literature is that it is the process of technology design which determines the effect of that technology upon its human users. This is best illustrated by considering recent developments in the approach to technological determinism. Technology may be argued to determine work design (Braverman, 1974), or to be neutral in its impact, with the relationship between technology and work design being mediated by managerial intentions and values (Buchanan and Boddy, 1983), by managerial strategic choice (Child, 1972) or by organizational politics (Mumford & Pettigrew, 1975; Child, 1984). However, the forms of available technology have an independent influence on the range of social choices available (Wilkinson, 1983; Scarbrough & Corbett, 1991). An analysis of technology as an unexplored entity which simply embodies the intentions and interests of particular groups ignores the technological decision-making which precedes the managerial decision-making process: the processes of design.

This socio-technical perspective is most apparent in the literature on prototyping and participatory design. This area of work explicitly attempts to deal with the “multiple worlds” espoused by various organizational actors (Checkland, 1981). Evolutionary methodologies permit users to incorporate desired ways of working into the design of the information system (Eason, 1982; Floyd, 1987). IS stakeholders are placed in a situation where they can negotiate their requirements of an IS around a design exemplar – a prototype IT system, or a prototype work-system. But the attempt to balance the two domains tends to focus more on one domain than the other. Whilst, for example, Mumford’s work on ETHICS (Mumford, 1983; Mumford and Weir, 1979) attempts the joint satisfaction of both social and technical interests, it deals almost exclusively with the design of work systems. Technology is viewed as infinitely configurable to suit the organization of workgroups, with no account taken of constraints imposed by either technology design or its implementation. More recent work (Butler and Fitzgerald, 2001; Lehaney et al., 1999) examines the ways in which user participation in decisions concerning the use of information technologies affects the outcome, but focuses on participation in business process redefinition. While this is essential, it is not sufficient.
We have discussed how goals may be subverted by the technical systems design and implementation processes that follow business process redefinition.
Muller et al. (1993) list a variety of methods for participatory design, classified by the position of the activity in the development cycle and by “who participates with whom in what”. The latter axis ranges from “designers participate in users’ worlds” to “users directly participate in design activities”. For participatory design to be participatory, user-worlds must be effectively represented in the design. But, as discussed above, there is a wide disparity in user “worlds”. Participatory development has more potential to be politically disruptive and contentious than traditional (non-participatory) forms of system development, because it involves a wide variety of interests, with differing objectives and perspectives on how organizational work and responsibilities should change (Howcroft and Wilson, 2003; Winograd, 1996). This situation is therefore managed carefully in practice. System stakeholders are selected for participation on the basis of political affiliations and compliance, rather than for their understanding of organizational systems support and information requirements. This constrains user choice and significantly affects the potential to achieve a human-centered system design (Howcroft and Wilson, 2003). Users often have little choice about whether to participate. Even when trained in system development methods, users and other non-technical stakeholders often cannot participate on an equal basis with IT professionals (Howcroft and Wilson, 2003; Kirsch and Beath, 1996). User views are often inadequately represented because of cost constraints, or a lack of appreciation of the significance of users’ perspectives (Cavaye, 1995). Howcroft and Wilson (2003) argue that user choice is significantly constrained by organizational managers, who predetermine boundaries for the scope of the new system and who select who will participate in systems development and to what extent.

Because of its reliance on the production of technical system prototypes, the participatory approach is therefore technology-focused.
IT professionals frame user perceptions of how a technology can be employed (Markus and Bjorn-Andersen, 1987). They are able to constrain the choices of non-technical stakeholders, by the ways in which alternatives are presented and implemented in the system prototypes. User worldviews may easily be relegated to “interface” considerations by technical system designers, even when the explicit focus of the method is on joint system definition (Gasson, 1999). The use of participatory design may become a power struggle between, on the one hand, “rational”, technical system designers and, on the other hand, “irrational” user-representatives who are unable to articulate system requirements in technical terms (Gasson, 1999; Nelson, 1993). The concept of empowering workers raises hackles: this is seen as “social engineering” that compares unfavorably (in scientific, rationalist discourse) with “software engineering”. Designers who engage in such “irrational” behavior are assumed to have a subversive and counterproductive agenda (Nelson, 1993). Thus, participatory design may often be subsumed to the less intrusive (and much less confrontational) path of producing user-centered design “methods” that can be partially used, in ways chosen and controlled by technical designers.

Interaction Design

Interaction design is a recent development arising from work in Human-Computer Interaction (HCI). It considers a much deeper set of concepts than the traditional HCI interests of user-interface affordance and usability. Interaction design examines the ways in which people will work with a technical artifact and designs the artifact to reflect these specific purposes and uses (Preece et al., 2002). Winograd (1994) defines interaction design as follows:
“My own perspective is that we need to develop a language of software interaction – a way of framing problems and making distinctions that can orient the designer. … There is an emerging body of concepts and distinctions that can be used to transcend the specifics of any interface and reveal the space of possibilities in which it represents one point.” (Winograd, 1994).
So interaction design has the potential to consider a space of possibilities, but in general appears to be constrained to specific interactions with a predetermined technology by the tradition of HCI discourse. Interaction design, as defined by Cooper (1999), who claims to have invented the approach, is “goal-directed design”: “Interaction designers focus directly on the way users see and interact with software-based products.” (Cooper, 1999).

Interaction design from this perspective is product and development driven: this approach defines what software system products should be built and how they should behave in a particular context (Cooper, 1999). But goal-directed approaches are only appropriate when the problem is relatively well-defined (Checkland, 1981; Checkland and Holwell, 1998). Most organizationally-situated design goals are emergent. A similar, goal-driven approach is taken by Preece et al. (2002), who emphasize “the interactive aspects of a product” (page 11). Although they extend the goal-driven concept with rich discussions of use, their perspective is also essentially driven by the notion that design is centered around conceptualization of a computer-based product with an individual user. Inquiry into the socio-cultural worlds of its use and into negotiated collaboration between interested stakeholders is secondary.
3.2 Agile Software Development
Formal methods are increasingly being abandoned in favor of rapid methods with shorter lifecycles and a lower administrative overhead (Barry and Lang, 2003; Beynon-Davies and Holmes, 1998). But rapid methods do not appear to deal well with user requirements and may lead to a more techno-centric focus than with traditional methods (Beynon-Davies and Holmes, 1998). There is a temptation with rapid approaches for system developers to revert to the code-and-fix approach that characterized software development before the advent of formal methods (Boehm et al., 1984; Fowler, 2003). “Agile” software development was conceived in response to a perceived need to balance technical system design interests with an understanding of user requirements. Uniquely, this is a practitioner-initiated approach to human-centeredness in IS design. Highsmith’s (2000) Adaptive Software Development and Beck’s (1999) eXtreme Programming are both examples of agile software development: practitioner-instigated approaches that combine a minimalist form of system design (i.e. informal methods and short lifecycles) with a user-centered approach. The Agile Manifesto (Fowler and Highsmith, 2001) argues for the following points:
Individuals and interactions are valued over processes and tools.
Working software is valued over comprehensive documentation.
Customer collaboration is valued over contract negotiation.
Responding to change is valued over following a plan.
These points reflect many of the conclusions of the literature discussion above, particularly in their focus on goal emergence. The ways in which goals are inquired into, agreed and made explicit are critical to achieving a human-centered outcome. Agile software development emphasizes an adaptive approach to defining system goals and requirements as the design proceeds. This is an implicit recognition of the difficulties of understanding the needs of multiple user worlds in advance of the system design. System goals and requirements are adapted to the designer’s (and other stakeholders’) increasing understanding of the role that the system will play in organizational work. In Adaptive Software Development, Highsmith (2000) rejects what he terms “monumental software development”, in favor of “fitting the process to the ecosystem”. At the heart of the approach are three overlapping phases: speculation, collaboration, and learning. He argues that systems design should respond to the contingencies of the local context, rather than fitting the problem analysis to the framework underlying a formal analysis method. Although Highsmith does not prescribe specific methods, he does emphasize teamwork and the involvement of system users in all aspects of system definition and design. However, although Highsmith’s work has been influential in forming popular perceptions of how to manage system design, it does not offer a method for performing design. One of the most popular methods for agile software development is eXtreme Programming (Beck, 1999). This approach is based partly on the concept of scenario analysis (Carroll and Rosson, 1992) – a concept that is familiar to HCI researchers but novel to many technical system designers.
The eXtreme Programming approach emphasizes a specific way of eliciting requirements from system users, in an informal and iterative process. Technical systems developers work in pairs with selected users to generate short scenarios, which are coded into a system prototype. One developer codes, while the other checks the code for authenticity and correctness (these roles are swapped frequently). The user is invited back to validate the prototype against the scenario and to generate additional scenarios, based on the shortcomings or omissions in the original scenario that they recognize after having used the prototype.

In its focus on emergence and “the people factor”, agile software development may be considered human-centered in its intent. However, its ultimate emphasis on the practice and profession of producing software systems, without explicit validation of system goals and organizational roles by non-technical stakeholders, renders it vulnerable to deadline-driven expediency (Nelson, 2002). Agile approaches provide a worthwhile attempt to deal with problems of implicit knowledge, evolutionary learning (by users) of what technology has to offer for their work, and misunderstandings between technical designers and users, as technologists gradually enter the lifeworld of the user. But these approaches are based on the development of software, rather than organizational systems. They involve a very small selection of “representative” users, there is no attempt to understand or investigate the wider, socio-technical system of work, and little attention is paid to the selection of appropriate system users for scenario generation. Additionally, this method suffers from a common problem of evolutionary prototyping: the approach starts with the specific intention of building a technical system, not with the intention of bringing about organizational and technical change. As Butler and Fitzgerald (2001) remark, stakeholders must be involved in the definition of organizational and process change, before their involvement in IT systems development can be considered anything other than token.
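As a concrete illustration of this practice, the following sketch (a hypothetical example, not drawn from Beck, 1999) shows how a short user scenario might be captured as an executable test that the developer pair writes alongside the prototype, so that the user can later validate the behavior against the scenario. The class and method names are invented for illustration.

    import unittest

    # Hypothetical prototype fragment produced by a developer pair for one
    # user scenario: an order can be recorded before the delivery address is
    # known, and completed later.
    class OrderSystem:
        def __init__(self):
            self.orders = {}

        def record_order(self, order_id, customer, items):
            self.orders[order_id] = {"customer": customer, "items": items, "address": None}

        def add_address(self, order_id, address):
            self.orders[order_id]["address"] = address

        def is_complete(self, order_id):
            return self.orders[order_id]["address"] is not None

    # The scenario expressed as a test that the user can read and validate.
    class TestRecordOrderScenario(unittest.TestCase):
        def test_order_completed_after_initial_entry(self):
            system = OrderSystem()
            system.record_order("A-17", "J. Smith", ["widget"])
            self.assertFalse(system.is_complete("A-17"))   # partial data is accepted
            system.add_address("A-17", "12 High Street")
            self.assertTrue(system.is_complete("A-17"))    # scenario is satisfied

    if __name__ == "__main__":
        unittest.main()

When the user finds that the scenario was incomplete or mistaken, a new scenario and a corresponding test are added in the next iteration.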
3.3 Evolutionary Models of Design
Recent management concern has centered on more human-centered and business-oriented approaches to IS development (Hirschheim & Klein, 1994). However, IS development projects are concerned with process management issues at a macro level, rather than an individual level; an attempt to encompass both macro processes and human and organizational concerns can be seen in the spiral model of software development presented by Boehm (1988), shown in Figure 6. The spiral model is an attempt to manage design emergence, uncertainty and risk in ISD project management. As such, it anticipates many of the situated action issues discussed above. In this model, the radial dimension represents the cumulative cost of development to date, while the angular dimension represents the progress made in completing each cycle of the spiral.
An underlying concept of this model is that each cycle involves a progression that addresses the same sequence of steps, for each portion of the product and for each of its levels of elaboration. However, there are three main problems with this model in guiding the management of IS development:
The model represents a macro level representation of development: it does not address outputs from effective design processes and it does not represent the behavioral issues which managers face in real-life IS development.
The skills required at different stages of the cycle are unrealistic: it is not feasible to expect the same group of developers to possess (or to acquire) skills in both detailed technical design and risk analysis.
The model cannot be said to represent IS development practice, even at a macro level: Boehm (1988) admits that it is not based on empirical observations, nor has it been tested experimentally.
Despite these criticisms, the model is a real advancement in theoretical thinking about IS development practice. It embodies an iterative process and encompasses human and organizational concerns through the inclusion of evolutionary prototyping as an essential component of organizational risk management. However, the four evolutionary stages of the model – determine objectives, evaluate alternatives, develop product and plan next phase – may be too akin to the “rational” model of decision-making (Simon, 1960), criticized in the previous section, to be of help in managing real-life processes. What is needed are models which encompass both macro business processes and human and organizational concerns but rely less on managing the predictability and rationality of process outcomes. There is a basic conflict here: professional management is concerned primarily with the reduction of risk through an emphasis on predictability and an assertion of rationality, while effective design requires the “control and combination of rational and imaginative thought” (Lawson, 1990). It might be that the two goals are incompatible: that different models are required for the control of macro (project) processes and the support of micro (design) processes. However, the two are closely interlinked and any approach to IS development which adopts a single perspective will not succeed.
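To make the macro-level sequence concrete, the sketch below expresses the four evolutionary stages as a single risk-driven loop. This is a deliberate simplification for illustration only, not Boehm’s (1988) own formulation: all of the functions, the risk values and the stopping rule are invented placeholders.

    # Simplified, illustrative sketch of the spiral model's four stages as a
    # risk-driven loop. All functions and values are placeholders.

    def determine_objectives(previous):
        # Stage 1: objectives, alternatives and constraints for this cycle.
        return previous + ["refined objective"]

    def evaluate_alternatives(objectives):
        # Stage 2: evaluate alternatives; identify and resolve risks,
        # e.g. by prototyping the riskiest elements first.
        remaining_risk = max(0.0, 1.0 - 0.3 * len(objectives))
        return "chosen alternative", remaining_risk

    def develop_and_verify(alternative):
        # Stage 3: develop and verify the next level of the product.
        return "product based on " + alternative, 10.0   # (artifact, cost of this cycle)

    def plan_next_phase(product):
        # Stage 4: plan the next cycle, reviewed with stakeholders.
        return "plan following " + product

    def spiral(acceptable_risk=0.2):
        objectives, cumulative_cost = [], 0.0   # cumulative cost = the radial dimension
        while True:
            objectives = determine_objectives(objectives)
            alternative, risk = evaluate_alternatives(objectives)
            product, cost = develop_and_verify(alternative)
            cumulative_cost += cost
            plan_next_phase(product)
            if risk <= acceptable_risk:          # stop when residual risk is judged acceptable
                return product, cumulative_cost

    print(spiral())

Even in this toy form, the stopping rule depends on a risk assessment that the same development team is assumed to be able to perform, which echoes the second criticism listed above.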
The need for both deductive and inductive reasoning in design has to do with how human beings cope with design requirements that are either undiscovered or tacit in nature. So it would follow that the role of tacit knowledge and inductive reasoning in design is greater for problems that are not well-structured. Schön (1983) describes design as “art”. A design problem can only be approached via “reflection-in-action”: purposeful action which calls on tacit knowledge for its execution. The concept is best described in Schön’s (1983) own words: “Even when he [the professional practitioner] makes conscious use of research-based theories and techniques, he is dependent on tacit recognitions, judgments and skillful performances.” (Schön, 1983, page 50).

Expertise in professional skills such as design can only be accomplished by learning-through-doing (Schön, 1983). The role of learning-through-doing was also highlighted by Rosenbrock (1988), in discussing the exigencies of engineering design, and by Argyris (1987), who called for a new way of approaching information systems design. But if we see learning as central to design, the hierarchical decomposition approach becomes unusable. Empirical studies observe “opportunistic” design strategies (Jeffries et al., 1981; Guindon, 1990a, 1990b; Khushalani et al., 1994), defined as (various types of) deviation from hierarchical decomposition. Visser and Hoc (1990) argue that many of the early studies into design processes conflate prescription and description: they ignore what the activity of design is really like, to focus on what it should be like. In addition, early studies often presented subjects with a unitary, relatively well-defined and well-structured problem to solve. Hierarchical decomposition is well suited to this type of problem, and “opportunistic” deviations from a decompositional strategy must then be considered unproductive. But when designers are faced with ill-bounded and ill-structured problems, decompositional strategies fail.
3.4 Empirical Studies Of Human-Centered and Evolutionary Design Processes
The “user-centered” model of evolutionary prototyping supposedly focuses on the design of IT systems to support the needs of human beings in the role of system user. Unfortunately, human-centered and user-centered approaches do not lead to the same outcome. With human-centered design approaches, the analysis starts with an exploration of the worker’s (or collectivity’s) work processes and information needs. It explores how processes could be redesigned, to remove historical and technological system constraints on how people work. It investigates workflows between people, stages of information processing and sharing of information across processes, to produce a coherent understanding of how work processes can be simplified and coordinated more effectively, and how their information needs can be supplied in a timely and human-centered manner. Only then does it define the system of IT required to provide these aspects of support. The most important aspect of human-centered design, however, is the precedence given to human decision-making over IT-enabled decision analysis. An explicit part of human-centered design is the analysis of key decision-points, involving debate around who should make decisions, on the basis of what information, and the degree to which decision-support can and should be automated. Human beings are viewed as sources of error-correction and local knowledge. Human decision-making is privileged over machine decision-making as humans adapt to changing circumstances and are more capable than IT systems of making decisions under conditions of uncertainty. Decisions relegated to the IT system are routine, programmable decisions involving repetitive, complex data processing. User-centered design, on the other hand, starts with an IT system concept. It defines work-processes around a predefined notion of information classes and objects – and the relationships between these. It assumes that all work, especially decision-analysis, is best done by the IT system, as human beings are perceived as subjective. Even when “user-centered” approaches deliberately set out to model human business and work-processes, these models are always flawed because they represent such processes as interactions with an IT system, rather than in the situated context of workflows and human interactions (Gasson, 2003).
4. DESIGN DOMINATED BY BUSINESS PROCESS CONSTRAINTS
Section 4 (under development) presents the extension of design theory to encompass situated, collaborative, group action and discusses this extension as a response to strategic business coordination constraints. Business processes are viewed as cross-functional collaboration between different work-groups, whether internal to, or external to the organization. This fourth perspective moves the design of information systems on from the limited perspectives offered by viewing an information system as synonymous with a computer system and resolves many of the theoretical conceptualization issues implicit in recent IS design writings.
5. COLLABORATIVE DESIGN
Section 5 deals with the extension of design theory to collaborative, group action.
5.1 Intersubjectivity and Shared Understanding
Weick (1979) argues that shared cognition emerges through the process by which a group develops collectively structured behavior, and that this process is inconsistent with achieving intersubjectivity. He describes four phases of this process, shown in Figure 7.
Initially, groups form among people who are pursuing diverse ends. As a structure begins to form, group members reciprocate behavior which is valued by other group members while still pursuing individual goals, and thus converge on common means: common group process rather than common goals. Once the group members converge on interlocked behavior, a shift occurs from diverse ends to common ends. Initially, the common end is to perpetuate the group’s collective structure, which has been instrumental in aiding individuals to get what they want; other common ends follow from this recognition of mutual interest. Finally, the group is enabled to pursue diverse means, often because the division of labor between group members permits individuals to pursue ends in ways which fit with their own specialization, but also because the stability engendered by a durable collective structure enables individuals to pursue elements of the problem-situation which appear unpredictable and disorderly in comparison to the stable environment produced by the group. There may be pressures to reassert individuality following the subsumption of individuals’ interests to those of the group.
Design depends upon intersubjectivity for effective communication between team members to take place (Flor and Hutchins, 1991; Hutchins, 1990, 1991, 1995; Orlikowski & Gash, 1994; Star, 1989). Technical system designers “successful in sharing plans and goals, create an environment in which efficient communication can occur” (Flor and Hutchins, 1991). Orlikowski & Gash (1994), in a hermeneutic analysis of different interest groups’ assumptions, knowledge and expectations of a new groupware technology, refer to intersubjectively-held mental models as “shared technological frames”: “Because technologies are social artefacts, their material form and function will embody their sponsors’ and developers’ objectives, values, interest and knowledge regarding that technology” (Orlikowski & Gash, 1994, page 179).
However, most work on shared technological frames assumes too great an extent of intersubjectivity, applying shared frames to areas of collective work that are highly unlikely to be framed in the same way by different actors, even when they belong to the same workgroup. Figure 8 illustrates the extent of intersubjectivity (shared meanings) between organizational actors, reflecting the degree to which cognitive frames are likely to be shared.
5.2 Distributed Cognition In Design
5.2.1 How Understanding is Distributed
An explanation of how the division of labor identified by Weick may be enabled lies in the notion of distributed cognition (Norman, 1991; Hutchins, 1991). Distributed cognition involves a model of the task or problem in hand which is “stretched over”, rather than shared between, members of a collaborative group (Star, 1989). Distributed cognition enables members of design teams and other workgroups to coordinate their actions without having to understand every facet of the work of other individuals in the group: “Distributed cognition is the process whereby individuals who act autonomously within a decision domain make interpretations of their situation and exchange them with others with whom they have interdependencies so that each may act with an understanding of their own situation and that of others.” (Boland et al., 1994, page 457).
Distributed cognition may be coordinated using “boundary objects” which represent the current state of the outcome of group activity (Norman, 1991; Star, 1989). Boundary objects which aid distributed cognition include external representations of a design (e.g. a diagrammatic model) and design specifications. Thus, intersubjective understandings of the design problem and solution are not required. An effective collaborative group can function well when group members share very little common understanding of the problem in hand, if they have effective coordination mechanisms (Star, 1989).

Individual group members may have very different models of the organizational “world” and different design goals. The literature on collaborative group work normally assumes intersubjectivity – a common vision shared by all group members. Distributed cognition theorists would argue that a group of designers do not need to understand all the elements of the design problem. They just need to achieve sufficient overlap between their different perspectives and understandings of the design problem for the group to coordinate their design activity.
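One way to picture this coordination mechanism is sketched below: a design specification acts as a boundary object that each participant reads and updates only in part. The example is purely illustrative; the class, its fields and the participants are invented, not drawn from Star (1989) or Boland et al. (1994).

    # Illustrative sketch: a design specification as a "boundary object".
    # Each participant works with a partial view of the specification;
    # no participant holds, or needs, the whole design.

    class DesignSpecification:
        def __init__(self):
            self.sections = {}   # e.g. "workflow", "data model", "interface"

        def update(self, section, content, author):
            # A participant externalizes their current understanding of one section.
            self.sections[section] = {"content": content, "author": author}

        def view(self, sections_of_interest):
            # A partial view: only the sections this participant depends upon.
            return {s: self.sections.get(s) for s in sections_of_interest}

    spec = DesignSpecification()
    spec.update("workflow", "orders are checked before dispatch", author="analyst")
    spec.update("interface", "one screen per work step", author="interface designer")

    # The data modeler coordinates through the specification, seeing the
    # workflow decision without sharing the analyst's full understanding of it.
    print(spec.view(["workflow", "data model"]))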
Although some writers have proposed that group coordination may be aided by the use of “boundary objects” (Hutchins, 1991; Norman, 1991; Star, 1989), we know little of the mechanisms by which effective distributed cognition is achieved and maintained. Lave (1991) argues that the process of socially shared cognition should not be seen as ending in the internalization of knowledge by individuals, but as a process of becoming a member of a “community of sustained practice”. Such communities reflect the sociocultural practices of the group in its work-context, from the perspective of cultural cohesion, rather than rationality. These practices reflect ways of accommodating the real-life personas that the group needs to encompass, in coordinating work effectively.
5.2.2 Tacit Knowledge About The Application Domain
Argyris and Schön (1978) had previously compared the espoused theory that a person holds to explain how they perform a task with the theory-in-use evident in what they actually do to perform the task. Espoused theories tended to conform to explicit organizational procedures and rules, while theories-in-use tended to be derived from implicit understanding of a task’s requirements, which were difficult or impossible to articulate. Schön’s (1983) work developed this concept with a focus on design (among other work-activities). He argued that design depended upon continual interaction with the problem-context, followed by reflection upon that interaction. Learning through doing is key to this perspective.
5.2.3 Social Construction In Design
A further thread in the individual perspective of design is provided by Mackenzie and Wajcman (1985), Bijker et al. (1987) and Latour (1987, 1991), building on the social construction theories of Berger and Luckman (1966). These authors argue that technology is socially constructed and that features that enforced a particular behavior or control mechanism were embedded in the form of technology designed for a specific context. Mackenzie and Wajcman (1985) argue that this mechanism is unconscious: the form of new technology is constrained by the form of technological exemplars encountered by the designer. But Latour (1987) argues that the design of technology embeds the explicit intentions of the stakeholders whose interests the designer serves. Scarbrough and Corbett (1991) conclude that, while technology does serve the interests of dominant stakeholders, this is because of a cyclical influence. Technology is shaped by social constructions inherent in the context of design, as shown in Figure 9.
This model of design influences is significant because of the dependence that it demonstrates between the cultural context and the design of technology. Taken with Latour’s (1987) work on actor-network theory, which demonstrated a network of causality between the dominant interests and technology, this model provides a convincing argument for how the cultural management of meaning (Smircich and Morgan, 1982) operates in design. Perceptions of the meaning of technology to the organization (for example, its role and value) influence the design of technical systems, which in turn reinforces the cultural ideology of the organization, which in turn shapes and manages meanings of technology within the organization … and so on. Design is thus both formed by and forms the social context in which it takes place.
5.3 Design As Situated Action
5.3.1 Contextually-Situated Action
Suchman (1987) likened design and planning to steering a boat: while the overall goal may be fixed, the path to achieve that goal is affected by the local contingencies (the waves and currents that are encountered on the way to the goal). Current design practice is constrained by a view of information systems as rule-based information-processing systems, where human work disappears from view. She argues that design can only succeed if the process permits goals to change and contingent processes to emerge. This is reminiscent of Mintzberg’s (1985) arguments concerning strategic planning. As any plan is executed, new contingencies arise that cause some parts of the previous strategy to be discarded and new components to be added. This often leads to a change in the detailed goals of the plan. The consequence of applying a situated action model of human problem-solving to design is shown in Figure 10.
Design is thus no longer “guided” by goals, but is a relatively unpredictable process of seeking out short-term and partial goals. As postulated in Gasson (1998), design is conducted by taking “good enough”, partial goal subsets and working with these until situational contingencies, or a conflict of explicitly stated requirements with an individual’s implicit model of the way that the “real world” works, causes a partial redefinition of the design goals. The latter type of conflict, referred to by Winograd and Flores (1986, after Heidegger, 1962) as a cognitive breakdown, underlies the groundbreaking study of design by Malhotra et al. (1980), discussed above.

The emergent, situated model of design is a significant development in how design is perceived. The majority of extant design theories are goal-oriented – including many of the more recent “soft” approaches. For example, Checkland’s (1981) Soft Systems Methodology is based on the notion that stakeholders in the new information system being designed are capable of defining consensus outcomes that they wish the new system to achieve. But consensus may mean that perspectives are treated as unitary in nature, reproducing a primary constraint of traditional approaches to IS design. Burrell (1983) criticizes the Soft Systems approach of Checkland (1981) for privileging the management interest through the modeling of consensus, which is unrealistic in a political context where management interests dominate. In contrast, in the situated model, goals constantly evolve with an understanding of the design, and the actual path of design is much more complex (and longer) than that perceived by actors external to the design process, who only see the start and end points of the design. This model may explain why timescales always ‘slip’ in IS development projects – a common comment from those not involved in such projects is “why did it take you so long?”.

A critical process of design must therefore be the management of external perceptions of the design process, particularly those of the “global network” (Law & Callon, 1992) – the informal network of influential decision-makers affected by, and indirectly attached to, a design project.
5.3.2 Socially Situated Action In Communities of Practice
An important development of situated action theories arose from arguments that individual action is situated within one or more communities of practice (Brown and Duguid, 1991; Lave 1991; Lave and Wenger, 1991). Brown and Duguid (1991) argue that learning takes place within a sociocultural context: a set of rules, norms and expectations that are constructed by members of the local work community, through their interactions. Lave and Wenger (1991) demonstrate that tasks and artifacts cannot be abstracted independently of their sociocultural context. For example, Brazilian street children who cannot perform mathematical calculations in a classroom context are perfectly capable of performing the same calculations when transacting a sale. Understanding is situated in the context of practice. Remove the understanding of the context in which the task will be performed and you remove the ability to understand and abstract the task. Lave and Wenger also argue that membership of a community of practice depends upon the implicit learning and adoption of the sociocultural norms of that group. This results in a great deal of organizational practice which is not rational, but historical in nature. Information system designers need to understand the sociocultural practices underlying the day to day practice of work-tasks, to be able to design effective support for those practices.
5.3.3 Organizationally-Situated Processes
The core role played by information system designers in mediating social and political concerns, and their unpreparedness for this task, was demonstrated by Boland and Day (1989). An IS designer was shadowed and then interviewed about her work. She expressed her concern at having to make decisions about social and political issues which she saw as outside an appropriate scope of work for IS design. Such issues were often dealt with at an implicit level: the designer was not aware of making such decisions until much later.
6. SUMMARY AND DISCUSSION
Section 6 summarizes the evolution of design theories, discusses some lacunae in current understandings of how design works and presents a dual-cycle model of design, based on an empirical field study of situated design, to resolve some of the major deficiencies in current information system design theory.
6.1 The Adoption Of Successive Theories of Design
As might be expected, studies in the information system design literature follow the paradigmatic assumptions of the period during which they were conducted. Early studies assume an individual, rational, “information processing” view of human cognition (Mayer, 1989) and focus on the design process as unitary, structured problem-solving. Studies in the 1970s and early 1980s follow Simon (1973) in their focus on structuring a unitary problem (e.g. Gane and Sarsen, 1979). Most studies equate the technical information system with an organizational information system and so are surveys of what IT system development methodologies [sic] are in use. Even after ground-breaking theory in socio-technical systems (Emery and Trist, 1960) had been applied to information system design by Enid Mumford (Mumford, 1983; Mumford and Sackman, 1985), empirical studies continued to focus uncritically on applying structured IT system development methods (e.g. Sumner and Sitek, 1986; Necco, 1989).
This emphasis is followed by the early “psychology of programming” interest group literature (e.g. Hoc et al., 1980). Early empirical studies in this area found a distinct difference between the problem decomposition strategies employed by novice vs. experienced programmers (Adelson and Soloway, 1985; Jeffries et al., 1981; Kant and Newell, 1984). Novice programmers tended to employ a top-down, depth-first decomposition strategy (i.e. hierarchical decomposition, as shown in Figures 2 and 3 above). Experienced programmers employed a top-down, breadth-first strategy, developing an integrated approach to the synthesis of solutions to partial problem-statements in an ad hoc manner. They conclude that (experienced) design is therefore opportunistic in nature. Most of these studies employed a very structured design problem, such as designing a program to control the priorities of an elevator. They also often used students as proxy subjects for “novice” and “experienced” programmers, which limits the validity of their findings.
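The difference between the two strategies can be made concrete with a small sketch. Given a hierarchical decomposition of a design problem into sub-problems, a depth-first strategy elaborates one branch fully before moving to the next, while a breadth-first strategy elaborates every sub-problem at one level before descending. The toy decomposition below is invented for illustration and is not taken from the studies cited.

    # Illustrative decomposition tree for an elevator-control design problem.
    # Each node is (sub-problem, list of child sub-problems).
    problem = ("control elevator", [
        ("handle requests", [("queue floor calls", []), ("prioritize calls", [])]),
        ("move car",        [("accelerate and decelerate", []), ("stop at floor", [])]),
    ])

    def depth_first(node):
        # Elaborate one sub-problem completely before starting the next.
        name, children = node
        order = [name]
        for child in children:
            order.extend(depth_first(child))
        return order

    def breadth_first(root):
        # Elaborate all sub-problems at each level before going deeper.
        order, frontier = [], [root]
        while frontier:
            next_frontier = []
            for name, children in frontier:
                order.append(name)
                next_frontier.extend(children)
            frontier = next_frontier
        return order

    print(depth_first(problem))     # branch by branch
    print(breadth_first(problem))   # level by level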
Thus it can be seen that an evolution in perspective has taken place: an “information system” is now viewed as a work-system. Support for the IT system user’s decision-processes and a focus on the user-interface with the IT system characterize this period of IS design literature. This period contains a great many studies that focus on the application of prescriptive methods for ensuring usable systems, in terms of task-fit and IT system interface design. “Human-centeredness” and emancipation become popular threads in the IS and IR-related organizational change literature (for example: Corbett, 1987; Gill, 1991; Scarbrough & Corbett, 1991; Zuboff, 1988). Prototyping becomes a popular method for developing IT systems (e.g. Boehm, 1984; Floyd, 1984, 1987). But prototyping does not necessarily focus on work-task support.
Floyd (1984) contrasts evolutionary prototyping, an approach that involves IT system users in evaluating and providing input to the design of an evolving system, with experimental prototyping, the production of prototypes for feasibility testing of a technical concept. But the perspective of design is seriously constrained by a focus on the “user” of a technical system, rather than on the combined social, work-process, business strategy and technical goals of an organizational information system.

Social construction is brought into the explicit processes of design by the “soft” systems perspective (Checkland, 1981). Churchman’s (1971) work on inquiring systems drew attention to the interconnectedness of system elements and requirements, and the need for purposive inquiry into the problem situation to define design goals. Checkland (1981; Checkland & Scholes, 1990) built on this concept to provide an holistic theory of information system design that has four main properties:
An organizational information system is a system of human-activity supported by an IT system. The task of information system design is to investigate the problem-situation concerning a particular human-activity system and to determine appropriate interventions, only some of which may involve IT system design.
Any human-activity system involves multiple sub-systems of tasks, performed for multiple purposes. A major priority for information system design is for the designer, participants and other stakeholders in the human-activity system to understand and to separate conceptually the purposive systems that constitute the whole.
Different information system stakeholders have different worldviews that cause them to interpret the meaning and purpose of human-activity in different ways. To succeed in implementing an information system that will benefit the majority of stakeholders, the design process must focus on the collective negotiation of requirements for action.
Selection of appropriate scope(s) for organizational intervention(s), such as the design of a new information system (work-task changes plus IT system support), should be made explicit to all system stakeholders and subject to consensus agreement.
Criticisms may be leveled at the detailed mechanisms proposed by Checkland and other soft systems authors: for example, Burrell (1983) criticized Checkland’s (1981) notion of consensus, arguing that the facilitated workshops proposed for this purpose would always privilege the management interest over other interests. But Checkland’s work has had a significant influence on both theories and practice and is responsible for changing the dominant paradigm of information system design. The negotiation and incorporation of a consensus set of system requirements, derived from multiple stakeholder worldviews, is a very long way from Alexander’s (1964) argument that the structure of a system solution is embedded in the system problem and exists independently of the designer. Checkland’s work has given a deeper meaning to Rittel’s (1972) argument that design is cognitively-constructed and so should be derived through “argumentation”.

This perspective can be contrasted with the hard systems approach, which sees system properties as being objective, rather than emergent, with communication and control being human interactions with the material (computer-based) ‘system’, rather than properties of the system itself. While soft systems approaches to IS design see IT as the “serving system” to a “served system” of purposeful human-activity (Winter, Brown and Checkland, 1995), hard systems approaches see IT as the target object system. However, the view is still static: the soft systems literature views design as being a process of negotiating a consensus on organizational system definitions, which embody structure and persistence. It may also be argued that the whole thrust of the ‘problem’ investigation literature in the field of IS is aimed at structuring problems and constructing structured data (Preston, 1991).
An alternative model rejects organizational structure as the basis for design (Truex & Klein, 1991): organizations are seen as emergent and dynamic, with design defined as situated, evolutionary learning.

More recently, information system design is viewed as socially-embedded. The work of Winograd and Flores (1986) argued that design includes “the generation of new possibilities” in an organizational change context. They provided significant insights into the nature of cognition in design and its social context. Brown and Duguid (1991) take this perspective further with a discussion of design as supporting communities of practice. They view design as socially-situated and emergent: for successful design, the IS designer should be a peripheral participant in the community of practice which the information system is intended to support.
6.2 Limitations Of Current Design Practice Arising From The Various Literatures On Design
It would appear, from the review presented here, that design theories are primarily concerned with problem closure. While this may have been appropriate at a time when information technology designers were concerned with relatively well-defined, unitary problems, it is no longer appropriate for groups of designers engaged in the exploration and definition, as well as the solution, of “wicked” problems relating to organizational information systems. The problem of “the problem” dominates design theories, and yet design models are concerned more with solution definition than with problem investigation. Given the concerns expressed above, coupled with the limitations of human cognition, it would appear that evolutionary models of the design process are more appropriate for relatively well-defined problems that need a complex technical solution (and so are analyzable in Simon’s sense), than for complex organizational problems which require formulation, inquiry, reformulation, negotiation, and argumentation before any action can be taken (Rittel, 1972).
Thus we end with five areas of concern that limit current conceptualizations of design. As with any “wicked” problem, these five areas may be conceptually separated, yet are interrelated.
1. The labor process problem: While the traditional model provides a clear basis for managing the labor process in IS development, it artificially separates the conceptual and social processes of organizational IS development which are referred to here as design processes. Design activity cannot be separated into a single stage of the system development lifecycle, as in the traditional model: requirements specification, design and technical system implementation are intertwined (Bansler and Bødker, 1993) and so require support and legitimacy at all stages of the system development life-cycle. Radical redesign of a technical system may occur even at the system implementation stage, when problems are encountered during interactive user testing; such redesign is often referred to euphemistically as ‘system maintenance’ (Lientz and Swanson, 1980).
2. The design process-model problem: The way in which design is managed is based upon a decompositional, breadth-first exploration of the design problem, where all requirements for a solution are defined before problem decomposition is attempted. But empirical studies of individual design strategy show that design strategies are “opportunistic” in nature, adopting depth-first, iterative, recursive or ‘inside-out’ approaches (Ball and Ormerod, 1995). Turner (1987) argues that “requirements and solutions migrate together towards convergence”. Designers fit known solutions to parts of the problem, or reframe the problem to fit known solutions (Malhotra et al., 1980; Guindon, 1990; Turner, 1987).
3. The bounding problem: The traditional model presupposes a design problem which is unitary in nature, which exists independently of the designer’s frame of reference and which is capable of analysis under conditions of “bounded rationality” (Simon, 1973), where the designer bounds the problem until it is amenable to structured analysis. But the design of complex organizational information systems centers upon the investigation of socially-constructed, “wicked” problems (Rittel and Webber, 1973), which are associated with interrelated, organizational systems of activity. Such problems cannot be “stated” or “solved” in the sense of definitive rules or requirements for a solution (Moran and Carroll, 1996): they are socially-constructed and subjective (Schön, 1983; Galliers and Swan, 1997) and each problem is interrelated with – and thus cannot be defined separately from – multiple, other organizational problems (Rittel and Webber, 1973).
4. The collaboration problem: The traditional model of IS design is based upon an individual, cognitive model of problem-solving and so excludes many necessary social processes required for collective investigation and negotiation of design attributes. Empirical studies indicate the centrality of communication, shared learning and project co-ordination, but such processes are often deemed illegitimate by managers guided by traditional, individual models of design (Curtis et al., 1988; Walz et al., 1993). Existing approaches resolve this problem by assuming that a unitary, intersubjective model of the designed system can be negotiated by design team members. As we have argued above, this may not be feasible in most organizational information systems supporting complex human work-systems, as these problems are “wicked” problems (Rittel, 1972), that are socially constituted, represent multiple work-goals and are highly interrelated.
5. The context problem: The traditional model ignores the context of design, as situated in a socially-constituted organizational culture. The form taken by a design involves both technical and social issues, for example, designers may debate the form of a technical artifact in terms of whether users should be prevented by its design from amateur repairs, or whether its design should reflect users’ desires for conspicuous consumption (Callon, 1991). Design is also political: an information system may change the nature of work and the basis of power, for different stakeholder groups within an organization (Wilkinson, 1983). Design processes are viewed as irretrievably interrelated with context: design activity is “situated” in knowledge and assumptions about organizational contexts (Gasser, 1986; Suchman, 1987; Lave, 1991; Lave and Wenger, 1991). Legitimate system “solutions” to political, situated problems are negotiated, rather than defined and are emergent, rather than explicitly stated (Boland and Tenkasi, 1995).
The five problems of design result in a separation of degree, rather than concept, between the design of a physical artifact and the design of an information system. Current models of design focus on design closure and so delegitimize the essential activities of investigating, negotiating and formulating design problems. We need to focus on “opening up” the design problem, to legitimize the modes of inquiry required for effective design of complex, situated information systems. An understanding of this dialectic has significant implications for both the research and practices of design. The situated nature of design requires design models to be constructed through sharing simulated design contexts, rather than through the medium of abstract representational models; this is ill-supported by traditional methods for design. Such constructions cannot be shared intersubjectively, but rather are distributed between collaborating design-group members.
Additionally, the contextual constraints upon IS design are considered to have significant implications for design and constitute a critical area of activity which should be managed proactively, particularly where influential organizational decision-makers are involved as stakeholders in a design initiative. These findings have implications for co-operative learning, knowledge management and organizational innovation. If organizational problem-investigation processes are seen as involving distributed knowledge, then the focus of organizational learning and innovation shifts from sharing intersubjective organizational knowledge (achieving a “common vision”) to collaborating in constructing distributed organizational knowledge which is emergent, political and incomplete.
6.3 A Dual-Cycle Model of Situated Information System Design
This section concludes the discussion by presenting a dual-cycle model of design, based on this review of current knowledge, to resolve some of the major deficiencies in current information system design theory.
Resolution Of The Five Problems: What can be highlighted here is the central role played by a periodic reopening of the design problem-definition in achieving shared understanding.
Findings From Previous Studies:
Effective, shared understanding of design needs results from repeated revisiting of design problem definitions by stakeholders. This is driven through the introduction of a new ‘primary generator’ idea. Too early a closure of the problem is counter-productive.
It is not feasible for each member of a collaborative design team to understand all of the design rationale for a complex organizational information system. Readiness for problem closure (and solution specification) is gauged by the extent to which the team share an understanding of organizational goals and outcomes, not by the extent to which they share an understanding of the designed system.
The effectiveness of a design is constrained by the need to manage external perceptions and expectations of design outcomes. Successful expectation-management is key to successful evolution of the design, as stakeholder understanding of the design problem improves.
A shared understanding of organizational goals was found to be more critical to group perceptions of design completeness than a shared understanding of design outcomes: on this basis, an initial, dual-cycle model of collaborative design processes was proposed (Gasson, 1998a). The model is shown in Figure 11.
The model shows progress and iteration through the following process of design:

Opening-up activity requires detailed investigation of how the organization works (not just how individual domains work). The team needs to understand the target system in terms of work-processes, information use and interactions with other organizational systems of work. In doing so, team members construct process goals (how they want to affect the organization), which enable them to move effectively into the second loop of operation.

Progress (delegation) is only possible once team members trust each other enough to hand off parts of the design; this happens only when process goals are shared, at which point the team moves on to the closing-down loop.

A collective breakdown occurs when product goals conflict with what has been designed. The team then needs to reframe its process goals collectively, in terms of an expanded understanding of the role of the designed process in the larger organizational process; this requires a return to the original loop of operation (opening up), to define new goals and boundaries.

Once new process goals are agreed, the team can move back to the closing-down loop, to divide the labor of designing the details of the new process and to determine collective action to implement it. A minimal sketch of this cycling is given below.
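As an illustration only (the transition conditions and the stopping rule are invented placeholders, not part of the model as published), the cycling between the two loops can be sketched as a simple two-state loop:

    # Minimal sketch of the dual-cycle model as a two-state loop.
    # The transitions follow the description above; the conditions that
    # trigger them are illustrative placeholders.

    def dual_cycle(max_steps=8):
        state = "opening up"
        for step in range(1, max_steps + 1):
            if state == "opening up":
                # Investigate how the organization works and construct shared
                # process goals (how the team wants to affect the organization).
                process_goals_shared = True            # placeholder outcome
                if process_goals_shared:
                    state = "closing down"             # trust permits delegation
            else:  # closing down
                # Divide the labor and design the detail of the new process.
                breakdown = (step == 3)                # placeholder: product goals conflict
                design_agreed = (step == 6)            # placeholder stopping condition
                if breakdown:
                    state = "opening up"               # reframe process goals collectively
                elif design_agreed:
                    print("step", step, ": design agreed and actions determined")
                    return
            print("step", step, ":", state)

    dual_cycle()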
7. IMPLICATIONS FOR RESEARCH AND PRACTICE
The dialectic expressed by the dual-cycle model is critical to our understanding of what design is. It is counterproductive to artificially separate problem articulation from solution formulation. Problems and solutions converge, at an individual level, at a group level and in the politically-negotiated organizational processes that surround design activity. Current models of design focus on design closure and so delegitimize the essential activities of investigating, negotiating and formulating design problems. A dual-cycle model of design is proposed: one that focuses on “opening up” the design problem, as much as on design closure. An understanding of this dialectic has significant implications for both research and practice of design, discussed below. The implications for research are that this type of model needs to be investigated in practice and its contingencies understood. How well does this type of “dual-cycle” model fit with the activities required for effective design? What elements drive this type of model, how do we ensure an effective cycling between inquiry and closure, and how do we recognize design stopping points? The implications for practice are that we need to understand the contingencies of this type of model for IS design process management. How do we assess progress, for such an iterative model? How do we plan IS design and development projects, in a way that ensures agreement from project sponsors and a definition of interim deliverables? These questions remain to be answered.
References
Ackoff, R.L. (1974) Redesigning The Future, Wiley
Adelson, B. and Soloway, E. (1985) ‘The role of domain experience in software design’, IEEE Transactions In Software Engineering, Vol. 11, pp 1351-1360
Alexander, C. (1966) ‘A City Is Not A Tree’, Design, No. 206, February 1966, pp. 46-55
Alexander, C. (1999) ‘The origins of pattern theory: The future of the theory, and the generation of a living world’, IEEE Software Sept-Oct. 1999, pp 71-82.
Argyris, C. & Schön, D. (1978) Organizational Learning: A Theory Of Action Perspective, Addison-Wesley, Reading, Mass.
Argyris, C. (1987) ‘Inner Contradictions In Management Information Systems’, in R. Galliers, Information Analysis, Selected Readings, Addison-Wesley
Ball, L.J. & Ormerod, T.C. (1995) ‘Structured and opportunistic processing in design: a critical discussion’, Int. Journal of Human-Computer Interaction, Vol. 43, pp 131-151
Bansler, J.P. & Bødker, K. (1993) ‘A Reappraisal of structured analysis: design in an organizational context’, ACM Transactions on Information Systems, Vol. 11, No. 2, pp. 165-193
Barry, C. and Lang, M. (2003) “A comparison of ‘traditional’ and multimedia information systems development practices,” Information and Software Technology (45:4), 217-227.
Berger, P.L. and Luckman, T. (1966) The Social Construction Of Reality: A Treatise In The Sociology of Knowledge, Doubleday & Company Inc., Garden City, N.Y.
Bertalanffy, L. von (1968) General System Theory, Braziller, UK
Beynon-Davies, P. and Holmes, S. (1998) “Integrating rapid application development and participatory design,” Software, IEE Proceedings (145:4), 105-112.
Bijker, W.E., Hughes, T.P. and Pinch, T.J. (Eds.) (1987) The Social Construction of Technological Systems, New Directions in the Sociology and History of Technology, MIT Press, Cambridge, MA
Boehm, B.W., Gray, T.E. & Seewalt, T. (1984) ‘Prototyping Versus Specifying: A Multiproject Experiment’, IEEE Transactions on Software Engineering, Vol. SE-10, No. 3, pp 290-302
Boehm, B.W. (1988) ‘A Spiral Model Of Software Development And Enhancement’, IEEE Computer Journal, May 1988
Boland, R. and Day, W.F. (1989) ‘The experience of systems design: a hermeneutic of organisational action’, Scandinavian Journal of Management, Vol. 5, No. 2, pp 87-104
Boland, R J, Tenkasi, R. V., Te’eni, D. (1994) Designing Information Technology to Support Distributed Cognition, Organization Science, Vol 5, No 3, pp 456-475
Boland, R J and Tenkasi, R V (1995) Perspective Making and Perspective Taking in Communities of Knowing, Organization Science, Vol 6, No 4, pp 350-372
Booch, G. (1991) Object-Oriented Design with Applications, Benjamin-Cummings Publishing Co. ISBN 0-8053-0091-0
Brown, J.S. & Duguid, P. (1991) “Organizational Learning and Communities of Practice: Toward a Unified View of Working, Learning, and Innovation”, Organization Science, Vol.2, No. 1, pp. 40-57.
Burrell, G. (1983) ‘Systems Thinking, Systems Practice: A Review’, Journal Of Applied Systems Analysis, 10.
Butler, T. and Fitzgerald, B. (2001) “The relationship between user participation and the management of change surrounding the development of information systems: A European perspective,” Journal of End User Computing (13:1), 12-25.
Callon, M. (1987) ‘Society in the making: The study of technology as a tool for sociological analysis’, in W.E. Bijker, T.P. Hughes, and T.J. Pinch (Eds.) The Social Construction of Technological Systems, New Directions in the Sociology and History of Technology, MIT Press, Cambridge, MA
Callon, M. (1991) ‘Techno-Economic Networks and Irreversibility’, in J. Law (Ed.) A Sociology of Monsters. Essays on Power, Technology and Domination, Routledge, London, UK
Carroll, J.M. and Rosson, M.B. (1992) “Getting Around The Task-Artifact Cycle: How To Make Claims and Design By Scenario,” ACM Transactions on Information Systems (10:2), 181-212.
Cavaye, A.L.M. (1995) “User Participation In System Development Revisited,” Information & Management (28:5), 311-323.
Checkland, P. (1981) Systems Thinking, Systems Practice, John Wiley & Sons, Chichester.
Checkland, P. and Holwell, S. (1998) Information, Systems and Information Systems: Making Sense of the Field, John Wiley & Sons, Chichester, UK.
Checkland, P. & Scholes, J. (1990) Soft Systems Methodology In Action, John Wiley & Sons, Chichester
Churchman, C.W. (1971) The Design of Inquiring Systems: Basic Concepts of Systems and Organization, Basic Books, New York, NY
Cohen, M.D., J.G. March and J.P. Olsen (1972) “A Garbage-Can Model of Organizational Choice”, Administrative Science Quarterly, Vol. 17, 1-25
Cooley, M (1990) Architect or Bee? The Human / Technology Relationship, 2nd Edition. Southend Books, UK
Cooper, A. (1999) The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How To Restore The Sanity. Sams Publishing.
Corbett, J.M., Rasmussen, L.B. & Rauner, F. (1991) Crossing the Border: The social and engineering design of computer integrated manufacturing systems, Springer-Verlag, London
Darke, J. (1979) ‘The Primary Generator And The Design Process’, Design Studies Vol. 1, No 1. Reprinted in N. Cross (Ed.) Developments In Design Methodology (1984), J. Wiley & Sons, Chichester, pp 175-188
Emery, F.E. & Trist, E.L. (1960) ‘Socio-Technical Systems’ in C.W. Churchman & M. Verhulst (Eds.) Management Science Models and Techniques, Vol. 2, Pergamon Press, Oxford, UK
Floyd, C. (1984) ‘A Systematic Look At Prototyping’, in R.Budde, K.Kuhlenkamp, L.Mathiassen, & H.Zullighoven (Eds.) Approaches To Prototyping, Springer-Verlag Books
Floyd, C. (1987) ‘Outline of a Paradigm Change in Software Engineering’ in G. Bjerknes, P. Ehn & M. Kyng (Eds.) Computers and Democracy: A Scandinavian Challenge, Avebury Gower Publishing Company, Aldershot, UK
Fowler, M. (2003) “The New Methodology,” online at http://www.martinfowler.com
Fowler, M. and Highsmith, J. (2001) “The Agile Manifesto,” Software Development Magazine.
Friedman, A.L. (1990) ‘ Four Phases of Information Technology – Implications for Forecasting IT Work’, Futures, Guildford, Vol. 22, No 8., pp. 787-800.
Gane, C. & Sarson, T. (1979) Structured Systems Analysis: Tools and Techniques, Prentice-Hall, New Jersey
Gasson, S. (1998) ‘Framing Design: A Social Process View of Information System Development’, in Proceedings of ICIS ’98, Helsinki, Finland, December 1998, pp. 224-236.
Gasson, S. (1999) “The Reality of User-Centered Design,” Journal of End User Computing (11:4), 3-13.
Gasson, Susan (2003) “Human-Centered vs. User-Centered Approaches to Information System Design,” Journal of Information Technology Theory and Application (JITTA): Vol. 5: Iss. 2, Article 5. Available at http://aisel.aisnet.org/jitta/vol5/iss2/5/
Guindon, R. (1990a) ‘Designing the design process: Exploiting opportunistic thoughts’, Human-Computer Interaction, No. 5
Guindon, R. (1990b) ‘Knowledge Exploited By Experts During Software System Design’, International Journal of Man-Machine Studies, Vol. 33, October 1990, pp 279-304
Heidegger, M. (1962) Being and Time, Harper & Row, New York
Highsmith, J. (2000) Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, New York, NY: Dorset House Publishing.
Hoc, J.M., Green, T.R.G., Samurçay, R. and Gilmore, D.J. (1990) [Eds.] Psychology of Programming, Academic Press, London, UK
Howcroft, D. and Wilson, M. (2003) “Participation: ‘Bounded freedom’ or hidden constraints on user involvement,” New Technology, Work, and Employment. (18:1), 2-19.
Hutchins, E. (1991), ‘The Social Organization of Distributed Cognition’, in Lauren B. Resnick, John M. Levine, and Stephanie Teasley (eds.) Perspectives on Socially Shared Cognition, Washington, DC: American Psychological Association. pp. 283-307.
Jacobson, I. 1992 Object-Oriented Software Engineering: A Use Case Driven Approach. Addison-Wesley. ISBN 0-201-54435-0
Jeffries, R., Turner, A.A., Polson, P.G. & Atwood, M.E. (1981) ‘The Processes Involved In Designing Software’, in J.R. Anderson (ed.) Cognitive Skills And Their Acquisition, Lawrence Erlbaum Associates, Hillsdale, New Jersey
Kant, E. and Newell, A. (1984) ‘Problem solving techniques for the design of algorithms’, Information Processing and Management, Vol. 28, pp 97-118
Kirsch, L.J. and Beath, C.M. (1996) “The enactments and consequences of token shared and compliant participation in information systems development,” Accounting Management and Information Technology (6:4), 221-254.
Land, F. and Hirschheim, R. (1983) ‘Participative Systems Design: Rationale, Tools and Techniques’, Journal Of Applied Systems Analysis, Vol. 10.
Lanzara, G.F. (1983) ‘The Design Process: Frames, Metaphors And Games’, in U. Briefs, C. Ciborra, L. Schneider (eds.) Systems Design For, With and By The Users, North-Holland Publishing Company
Latour, B. (1987) Science in Action, Harvard University Press, Cambridge, MA
Latour, B. (1991) ‘Technology is society made durable’, in J. Law (Ed.) A Sociology of Monsters. Essays on Power, Technology and Domination, Routledge, London, UK
Laukkanen, M. (1994) ‘Comparative Cause Mapping of Organizational Cognitions’, Organization Science, Vol. 5, No. 3, reprinted in J.R. Meindl, C. Stubbart, J.F. Porac (Eds.) Cognition Within and Between Organizations, Sage Publications, California, 1996
Lave, J. (1991) ‘Situating Learning In Communities of Practice’ in L.B. Resnick, J.M. Levine, S.D. Teasley (Eds.) Perspectives on Socially Shared Cognition, American Psychological Association, Washington DC, pp 63-82
Lave, J. & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, Cambridge, UK
Law, J. & Callon, M. (1992) ‘The Life and Death of an Aircraft: A Network Analysis of Technical Change’ in W.E. Bijker and J.Law (Eds.) Shaping Technology/Building Society: Studies In Sociotechnical Change, MIT Press, Cambridge, MA
Lehaney, B., Clarke, S., Kimberlee, V., and Spencer-Matthews, S. (1999) “The Human Side of Information Development: A Case of an Intervention at a British Visitor Attraction.,” Journal of End User Computing (11:4), 33-39.
Lewin, Kurt (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). Harper & Row, New York.
Mackenzie, D.A. & Wajcman, J. (1985) ‘Introduction’ to Mackenzie, D.A. & Wajcman, J. (Eds.) The Social Shaping Of Technology, Open University Press, Milton Keynes
Maher, M.L. and Poon, J. 1996. “Modelling Design Exploration as Co-Evolution,” Microcomputers in Civil Engineering (11:3), pp. 195-210.
Malhotra, A., Thomas, J., Carroll, J., and Miller, L. 1980. “Cognitive Processes in Design,” International Journal of Man-Machine Studies (12), pp. 119-140.
Markus, M.L. and Bjorn-Andersen, N. (1987) “Power over users: its exercise by system professionals,” Communications of the ACM June 1987 (30:6).
Mathiassen, L. & Stage, J. (1992) ‘The Principle of Limited Reductionism In Software Design’, Information Technology & People, Vol. 6, No. 2, pp 171-185
Mayer, R.E. (1989) ‘Human Nonadversary Problem-Solving’ in K.J. Gilhooley (Ed.) Human and Machine Problem-Solving, Plenum Press, New York
McCracken, D. D. & M. A. Jackson, (1982). Life Cycle Concept Considered Harmful. ACM SIGSOFT, Software Engineering Notes. 7(2):29-32
Mintzberg, H. & Waters, J.H. (1985) ‘Of Strategies, Deliberate and Emergent’, Strategic Management Journal, Vol. 6, pp 257-72
Mumford, E. (1983) Designing Participatively, Manchester Business School, UK
Mumford, E. & Sackman, H. (1975) Human Choice and Computers, North-Holland Publishing Company
Mumford, E. and Weir, M. (1979) Computer Systems in Work Design: the ETHICS Method, New York NY: John Wiley.
Necco, C.R. (1989) ‘Evaluating Methods of Systems Development: A Management Survey’, Journal of Information Systems Management, Vol. 6, Issue 1, pp 8-16
Nelson, D. (1993) “Aspects of Participatory Design – Commentary on Muller et al. 1993,” Communications of the ACM (34:10), 17-18.
Nelson, E. (2002) “Extreme Programming vs. Interaction Design,” FTP Online Magazine, January 15.
Preece, J., Rogers, Y., and Sharp, H. (2002) Interaction Design: Beyond Human-Computer Interaction, New York, NY: Wiley.
Rittel, H.W.J. (1972) ‘Second Generation Design Methods’ Reprinted in N. Cross (Ed.) Developments In Design Methodology (1984), J. Wiley & Sons, Chichester, pp 317-327
Rittel, H.W.J. & Webber, M.M. (1973) Dilemmas in a General Theory of Planning, Policy Sciences, Vol. 4, pp 155-169
Rosenbrock, H.H. (1988) ‘Engineering As An Art’, AI & Society, Vol. 2, No. 4, pp 315-320
Royce, W. W., (1970) ‘Managing the Development of Large Software Systems: Concepts and Techniques’, in Proceedings of WESCON August 1970, pp 1-9. Reprinted in: Proceedings ICSE 9, Computer Society Press, 1987, pp. 328-338.
Rumbaugh, J. (1991) Object-Oriented Modeling and Design, Prentice Hall
Scarbrough, H. and Corbett, J.M. (1991) Technology and Organisation: Power, Meaning and Design, Routledge.
Schön, D.A. (1983) The Reflective Practitioner: How Professionals Think In Action, Basic Books, NY
Simon, H.A. (1957) Models of Man: Social and Rational, John Wiley, New York
Simon, H.A. (1960) The New Science of Management Decision, Harper & Row, New York
Simon, H.A. (1973) ‘The Structure of Ill-Structured Problems’, Artificial Intelligence, No. 4, pp 145-180
Simon, H.A. (1981) Sciences of The Artificial, Second Edition, MIT Press, Cambridge, MA
Smircich, L. & Morgan, G. (1982) ‘Leadership: The management of meaning’, Journal of Applied Behavioural Science, Vol. 18, No. 3, pp 257-273
Star, S. L. (1989), ‘The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed ProblemSolving’, in L. Gasser and M. N. Huhns (eds.) Distributed Artificial Intelligence, Vol. II. San Mateo, CA: Morgan Kaufmann Publishers Inc., pp. 37-54.
Suchman, L. (1987) Plans And Situated Action, Cambridge University Press, Cambridge, UK
Sumner, M. & Sitek, J. (1986) ‘Are Structured Methods for Systems Analysis and Design Being Used?’, Journal of Systems Management, June 1986, Vol. 37, Issue 6, pp 18-23
Taylor, F.W. (1911) Principles Of Scientific Management, Harper, New York
Visser, W. & Hoc J-M. (1990) ‘Expert Software Design Strategies’ in J.M. Hoc, T.R.G. Green, R. Samurçay, D.J. Gilmore (Eds.) Psychology of Programming, Academic Press, London, UK
Weber, M. (1922), The Theory of Social and Economic Organization, translated by A.M. Henderson and T. Parsons (Eds.), Oxford University Press, New York NY, [This English translation published 1947.]
Winograd, T. & Flores, F. (1986) Understanding Computers And Cognition, Ablex Corporation, Norwood, New Jersey
Winograd, T.A. (1994) “Designing a language for interactions,” interactions (1:2), 7-9.
Winograd, T. (1996) “Introduction,” In: T. Winograd (ed.) Bringing Design To Software, New York NY: Addison-Wesley Publishing.
Winter, M.C., Brown, D.H. & Checkland, P.B. (1995) ‘A Role For Soft Systems in Information Systems Development’, European Journal of Information Systems, Vol. 4, pp 130-142
In the last few years, the terms human-centered and user-centered have become synonymous in HCI and IT design, with a focus on disciplines such as “user experience” and “interaction design.” Here I will argue that neither discipline really deals with the core issues of human-centered design.
Human-centeredness in design involves designing technology artifacts, applications, and platforms that provide a “support system” to people performing specific work or play activities as individuals, or collaborating around a set of (more or less) well-defined aims – often messily and exploratively. Asking people to describe their requirements for technology to support their activity doesn’t work, because nobody really stops to think about how they work, or what they do to achieve a goal. When they are forced to do so, they will describe how work should be done – the formal system of procedures and rules – rather than how it is done – the informal, socially-situated system that makes work activities fit with their environment and the objectives that people have.
People are seldom alone in what they do, even when engaging in individual activity. They socialize with other people and exchange ideas, they seek advice on how to proceed, and they collaborate to achieve shared – or similar – goals. When confronted with a novel problem, most people turn to a “small world” network of trusted social contacts for input – people who share their values and perspectives – rather than conducting a wider search that includes subject experts and knowledge resources (Chatman, 1991). Even when working alone, we are never truly alone. We are thrown into a working environment that existed before we joined – a self-contained world of work and social activity that we can only understand through participation (Weick, 2004). Professionalism and practice in one organization are completely different to the practices and standards applied in another.
When we try to understand the “user” of a software application or system, we often fail miserably because we only see the formal work activities that they perform. We miss the web of activities that their formal activity is a part of – the multiple other human-activity systems they interact with, to get things done. User-experience design is reductionist in its focus on interaction design. It takes a human being, rich in purpose and understanding, and reduces them to the role of artifact user. Not only that: by implication, it makes them the user of a pre-defined artifact, whose purpose is understood but whose mechanisms of interaction remain to be fully defined. By focusing on conceptual models of use, user scripts, and activity/task frameworks for work-analysis (e.g. Sharp, Preece, and Rogers, 2019), it isolates the user from the social context of work, describing activities in terms of fixed procedures and embedding assumptions about how and why the artifact will be used. It loses the joyful multivocality of the human-centered approach to design. Instead of understanding that thrownness is a temporary state, where there is a choice between reacting and being proactive, user-centered design embeds reaction as a paradigm. It separates tasks from workflows, making each interaction an end in itself and enforcing the approach to design that led Lucy Suchman to write her famous treatise on situated design (Suchman, 1987, 2007). There is no linked flow of work processes, where the human being knows that (for example) they have already photocopied the report covers (onto special cardstock) and the early chapters, so now have only to copy later chapters. There is the dumb lack-of-saved-status machine, which jams halfway, then asks the user to reload the report pages in their original order, starting with the covers, which need the user to load special cardstock into the paper feeder. Which they already did.
We can support this world by understanding the various purposes of human activity and designing technology to assist in those purposes (Checkland and Winter, 2000). Human-centered design differs from user-centeredness by being systemic and multi-vocal: it is aware of the multiple networks of activity in which a human technology user engages, simultaneously. Unlike user-centered design, which focuses on a single, definable work-goal, human-centered design appreciates the multiple goals that people pursue simultaneously, for different purposes. Human-centered design appreciates the social and organizational context of work, employing analytical approaches and methods that explore the complexity of the activities that we do – and the social networks we inhabit to do them.
Designing for humans rather than users is a choice:
Human-centered design explores the multiple, purposeful systems of human-activity that are required to achieve even simple work (or play) goals.
It treats the participants in a human activity system as autonomous individuals, not agents to be modeled, controlled, and curtailed. Human-centered design respects and supports the local, tacit, and implicit knowledge required to act skillfully in work that is often not recognized as knowledge work.
It recognizes that a social system of information exchange exists, of which the designed technology artifact or software is only a part, and that humans need to exercise a deliberative choice about what to record and why. Any computer-based system of data is part of a wider, human-network-based system of information.
Because it appreciates work as part of a wider social system, human-centered design involves a conscious decision to support the informal communications and activities that keep the system of work connected and informed – for example, water-cooler conversations or phone calls. These informal channels produce more knowledgeable participants in the system of work, rather than resulting in recorded data records or written resources. They are often omitted from – or worse, designed out of – the formal system of “user experience design.”
Above all, it acknowledges that knowledge, understanding, and the meanings that we ascribe to work are emergent. We understand how to do things by doing them, then reflecting on what we did and how – after which we have a better understanding of how to do them next time. Designing any particular set of procedures into a computer-based system is not only a waste of time, but may be counterproductive, as we constantly improvise and improve on how we did things previously (learning-by-doing). Human-centered systems design allows the human to be in control of their work, rather than the IT system.
So no – “user experience design” and “interaction design” do not support human-centeredness in work (or play). They seek to humanize the artificial processes imposed by transaction-based systems by associating these with perspectives that acknowledge the psychology of human activity, learning, and interactions with technology. But they don’t even scratch the surface of understanding situated, systemic activity. For that, you need to employ methods that complicate your perspective, such as Soft Systems Analysis (Checkland, 2000; Checkland and Poulter, 2006) – and to take human-centeredness seriously.
To conclude, user-centered design – as the term is employed in HCI and UX – is not the same as human-centered design. User-centered design is aimed at mitigating and improving the experience of using a system of technology that was designed for purposes other than those the user prioritizes – to make money, to “engage” users on the website so they return (and spend more money), and to publicize the firm’s products and services. In contrast, human-centered design is an approach that starts with user values, priorities, and purposes. It seeks to afford uses of the system that fit how the user would like to access the features that they value and expect. It designs the flow of use-interactions around the expected user flow of work (or play), allowing the user to configure this flow how they want. It does not make you do illogical or stupid things, like reloading all the sheets in a photocopier feed in their original order, even when the copy failed on the last-page-but-one. It does not make you enter the same information repeatedly, because the designer was too unimaginative to anticipate that a user might want to change some of the options they had selected earlier (e.g. when booking an airline ticket). And it doesn’t make you go through seven layers of a menu to reach the one page you need.
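To make the contrast concrete, here is a minimal, hypothetical sketch (in Python) of a wizard-style flow that keeps what the user has already entered, so that going back to change one earlier option does not force re-entry of everything else. The class name, the steps, and the dependency rules are invented purely for illustration; they are not taken from any real booking system.

```python
# Hypothetical sketch, not any real product's API: a wizard-style flow that keeps
# every answer the user has already given, so revising one earlier step does not
# force re-entry of the rest.

class BookingFlow:
    STEPS = ["dates", "passengers", "seats", "extras", "payment"]   # invented example steps

    def __init__(self):
        self.answers = {}                       # step name -> whatever the user entered

    def record(self, step, value):
        """Save the user's input for one step; other answers are untouched."""
        self.answers[step] = value

    def revise(self, step, value):
        """Change an earlier answer; return only the steps that genuinely need revisiting."""
        self.answers[step] = value
        return [s for s in self.STEPS if s in self.answers and self._depends_on(s, step)]

    def _depends_on(self, later_step, changed_step):
        # Invented dependency rule: seats depend on passengers, payment on everything else.
        deps = {"seats": {"passengers"}, "payment": set(self.STEPS) - {"payment"}}
        return changed_step in deps.get(later_step, set())


flow = BookingFlow()
flow.record("dates", "2024-07-01 to 2024-07-08")
flow.record("passengers", 2)
flow.record("seats", ["14A", "14B"])
print(flow.revise("passengers", 3))   # ['seats'] -- the dates survive; only the seat choice needs review
```

The point is not the code but the design choice it illustrates: state lives with the task, so the user, not the system, decides what to revisit.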
Human-centered design is performed by people who talk to users, learn to think like users, and walk alongside them in their work. These designers not only prototype and evaluate their designs, but also listen to the feedback they are given. They value user input and see it as critical to their portfolio of design experience. In the design literature of the 1980s there was a lot of discussion of how user representatives would “go native” when participating in design projects, learning to think like designers and subsuming the interests of their fellow users in the process. In the 2020s, we need to see more IT designers going native, learning to think like users, reworking IT system designs to support how users work, and valuing the aspects of system design that users value. That is human-centered design.
References
Chatman, E.A. 1991 “Life in a Small World: Applicability of Gratification Theory to Information-Seeking Behavior,” Journal of the American Society for Information Science (42:6), pp. 438–449.
Checkland, P. 2000 “Soft systems methodology: a thirty year retrospective,” Systems Research and Behavioral Science (17), pp. S11-S58.
Checkland, P., and Poulter, J. 2006. Learning For Action: A Short Definitive Account of Soft Systems Methodology, and its use for Practitioners, Teachers and Students, Chichester: John Wiley and Sons Ltd.
Checkland, P., and Winter, M.C. 2000 “The relevance of soft systems thinking,” Human Resource Development International (3:3), pp. 411-417.
Sharp, H., Preece, J., and Rogers, Y. 2019. Interaction Design: Beyond Human-Computer Interaction, 5th Edition, Wiley, UK.
Suchman, L. 1987. Plans And Situated Action, Cambridge, UK: Cambridge University Press.
Suchman, L. 2007. Human-Machine Reconfigurations: Plans and Situated Actions, Cambridge, UK: Cambridge University Press.
Weick, K.E. 2004. “Designing For Thrownness,” in: R.J. Boland and F. Collopy (eds.), Managing as Designing, Stanford CA: Stanford University Press, pp. 74-78.
Selected Papers:
Gasson, S. (2008) ‘A Framework For The Co-Design of Business and IT Systems,’ Proceedings of Hawaii Intl. Conference on System Sciences (HICSS-41), 7-10 Jan. 2008. Knowledge Management for Creativity and Innovation minitrack, p348. http://doi.ieeecomputersociety.org/10.1109/HICSS.2008.20.
Modern organizations are complex, and the sorts of problems that remain to be resolved using process redesign, IT systems design, or the combination of both that we call the co-design of business & IT systems can’t be defined around a simple set of issues. There are multiple managers and groups of stakeholders, with competing goals for change. Some of these overlap, some complement each other, and some are in conflict with those of other stakeholders. Even a group of people who work together will have differing requirements and goals, depending on their experience, their professional background, and their position in the organization. People understand the parts of the organization they have experience of. Those who have worked in multiple groups will have a much wider – and more complex – view of what needs to change than those who have worked in the same role for years.
There’s a management consultancy joke that says if you get five stakeholders around a table, you’ll have at least fifteen different goals.
The co-design of business & IT systems is like piecing together a jigsaw puzzle without the picture. You get an edge here and there, part of a building outline, or a connecting feature, but mainly you are assembling bits and pieces that are tacked together in whatever way makes sense at the time. Most IT analysts fudge this by merging stakeholder requirements for change under a single, vague business goal. But this doesn’t prevent the shift in focus between multiple objectives that stakeholders prioritize, as these become salient to the current area of design. Change analysts have to understand multiple business domains, as stakeholders’ requirements indicate different types of solution and the analyst attempts to integrate these around a coherent business vision. Even business managers don’t really understand their processes – and know very little of the processes with which their area of responsibility interfaces. Conflicts, priorities, and omissions in change objectives are seldom realized as the logical analysis methods used for IT requirements don’t provide ways to map out the full scope of change – the big picture.
We lack ways to represent the big picture of how the organization “works” in ways that would allow business managers to understand the implications of changing things. Business analysts, change managers, and IT systems analysts are in a no-win situation. They are expected to understand myriad interpretations of the business strategy, reconcile conflicting viewpoints on how business processes work, and somehow define a coherent set of change objectives that pleases everyone. All while the stakeholders they need to satisfy each understand only a fraction of what the business does.
The co-design of complex organizational processes & IT systems
In today’s complex organizations, very few design goals are understood to the point that they can just be stated and agreed across stakeholders. Design-goals are constantly changing between iterations, as shown in Figure 1. The designer starts by designing for the subset of goals they understand. As they explore and test the design with users, they become aware of new requirements and so modify the subset of goals they are designing for. As part of this process, they also discard any requirements that are no longer associated with perceived user needs.
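A toy sketch of this goal drift, in Python, with entirely invented goal sets: each iteration extends the set of goals with newly-discovered needs and prunes requirements that no longer map to a perceived user need.

```python
# Toy sketch with invented goals: design goals drift between iterations as user
# feedback surfaces new requirements and invalidates old ones.

def iterate_design(current_goals, discovered, no_longer_needed):
    """One iteration: add newly-perceived user needs, drop requirements nobody needs."""
    return (current_goals | discovered) - no_longer_needed

goals = {"record booking", "print itinerary"}            # the subset understood at the start

# Iteration 1: testing with users reveals a new need; printing turns out not to matter.
goals = iterate_design(goals, {"amend booking after payment"}, {"print itinerary"})

# Iteration 2: another need emerges from watching real work.
goals = iterate_design(goals, {"share itinerary with a colleague"}, set())

print(sorted(goals))
# ['amend booking after payment', 'record booking', 'share itinerary with a colleague']
```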
Goal-based design is a myth
Organizational change requires repeated cycles of appreciation, enculturation, inquiry, learning, change, and evaluation – until the design is good enough. Not perfect – and certainly not optimal – good enough is good enough [2]. We talk about design as if it were fixed: as if there were one best way to design everything. We celebrate designers who produce especially elegant or usable artifacts as if they were possessed of supernatural powers. Yet design should be easy. It is the application of “best practice” principles to a specific situation. We can observe how the users of a designed artifact or system work, then design the artifact or system accordingly.
A few years ago, I published an academic paper – which proved to be one of my most-read articles – on user-centered vs. human-centered design. In that paper, I compared the typical analytic methods and tools for user-centered design to an idea of human-centered design that came out of the field of industrial engineering. Having seen the recent explosion of “user-centered” design fields such as User Experience design, I feel even more strongly that human-centered design is a discipline that has not yet fulfilled its potential to change the way in which we design technology systems for both work and play.
Human-centered design ideas come out of an emancipatory labor movement – originally in the UK – that looked at the constraints imposed by technology on work and focused on the impact of design on the quality of working life. This “socio-technical” approach to design (Emery & Trist, 1960) originated in studies of industrial processes conducted by researchers from The Tavistock Institute of Human Relations in London, often embedded in the rapid societal and technical change of post-World War II Britain. A research team led by Eric Trist, Ken Bamforth, and Fred Emery studied the organization of coal-mining teams for various types of mine and coal-seam environment, concluding that the design of working arrangements and the use of technology needed to be balanced with the conditions in each type of working environment. They noted the tension between the need for miners to self-organize into collaborative groups that increased productivity by allowing miners autonomy in selecting their team role, and management directives which constrained group autonomy because this empowered the miners – and allowed them to negotiate the higher rates paid for skilled labor (Trist et al., 1963). They coined the term “sociotechnical” to define an approach to the design of working arrangements that balanced the socially-situated needs of human workers with the use of machinery to automate repetitive and dangerous work.
The ideas behind socio-technical design really took off in the 1980s, with the explosion of affordable office technologies and personal computing. Some notable thinkers in this aspect of design include:
Mike Cooley (Architect or Bee?, 2016), who explained how technology design choices exerted control over the labor force at the expense of social good. A key element of his argument was to explain how the combination of conceptual design ability with the practical ability to understand the context of practice across multiple domains – common in the Renaissance – has given way to a “deep dive” specialization in one area or another. This separation of “planning” from “doing” leads to design problems, as designers cannot envision the context in which their design will be used and so make stupid mistakes. It also excludes consideration of social good when making design choices. Technology decisions are made on the basis of manufacturing cost rather than long-term environmental impact.
Ken Eason, who argued in his early work (e.g. Eason, 1982) that designers’ choice of design approach affected system usability: a technology-led approach leads to ‘fire fighting’ when negative organizational effects become apparent; and user involvement in design tends to fail as users take longer to understand new technology than developers, so design is complete by the time they are able to make a contribution. He proposed an evolutionary design process that builds slowly from small-scale systems to large ones, retaining the flexibility to change and adapt to emerging user needs, promoting user learning via system prototypes and training, and involving users in system evaluation. His later work discusses how the typical “closed system” approach to IT design (goal-oriented and focused on predefined requirements) constrains the “open system” approach to design needed to balance the emergent needs of human users with technology goals, and also cater for the evolving system requirements engendered by changing global business environments (Eason, 2009).
Howard Rosenbrock (1981, 1988) was a visionary engineering theorist who not only developed innovative approaches to algorithm design for control engineering, but also saw engineering as an “art” (Rosenbrock, 1988) that needed to balance the design of technology with the social needs, personal experience, and judgment of human beings. The opening to his 1981 paper, Engineers And The Work That People Do, contains the most chilling description of a work environment that I have ever read:
The plant was almost completely automatic. Parts of the glass envelope, for example, were sealed together without any human intervention. Here and there, however, were tasks which the designer had failed to automate, and workers were employed, mostly women and mostly middle-aged. One picked up each glass envelope as it arrived, inspected it for flaws, and replaced it if it was satisfactory, once every four-and one-half seconds. Another picked out a short length of aluminum wire from a box with tweezers, holding it by one end. Then she inserted it delicately inside a coil which would vaporize it to produce the reflector, repeating this again every four-and-one-half seconds. Because of the noise, and the isolation of the work places, and the concentration demanded by some of them, conversation was hardly possible.
This description still sends shivers down my spine. Not just because of the working conditions, but because of the casual way in which Rosenbrock mentions that the few manual work-processes on the light-bulb factory floor are not automated only because they were too complex or expensive to automate. Human beings were used for repetitive, demeaning jobs, in an environment that made it too difficult to socialize with others, simply because that was cheaper or easier than designing an automated solution.
Participative Design
Obviously, any blog post cannot capture the whole of the socio-technical movement, with all the complexities that the various studies introduced. Here, I have tried to outline the tip of the iceberg, explaining the motivations that led to the HCI, CSCW, and agile design fields that influence contemporary design. But I cannot leave this discussion before mentioning the key influence of Enid Mumford. Professor Mumford was critical in promoting the importance of user participation in design (Mumford, 1983). She even conducted studies to demonstrate how users “went native” when participating in technology design, as technology-design skills were considered so glamorous and career-enhancing (Mumford & Sackman, 1975). She devised a method – the ETHICS approach – that illustrated how to analyze requirements in ways that both balanced the technical and the social aspects of design and managed the inevitable subversion of social (work-system) design by considerations of technical expediency and optimization (Mumford & Weir, 1979; Mumford, 1995).
So how do we design human-centered systems that support workers in the work they need to do, while allowing them autonomy in the way that they do this work? The process devised over many years is to use socio-technical systems design.
As shown in Figure 1, above, socio-technical design balances the needs of a “supported system” of people doing work (a.k.a. the social system) with a “supporting system” of information and communication technology (a.k.a. the technical system). It is important to start with the social system: the people who do the work are unfailingly the people who understand best what it requires, in the way of information and computing support. It is also important to see the drivers of design as the need to balance changes to the two systems, so the IT system supports the system of work (and not vice-versa). I refer to this principle as the co-design of business-process and IT systems. This concept was inspired by the Soft Systems Methodology approach of Peter Checkland (1981). Checkland argues that designed IT systems often solve the wrong problem, because designers fail to appreciate that the point of design is to support purposeful systems of human activity, rather than pursuing the separate aims of a technology architecture, data structures, and information systems (Checkland, 1981; Winter, Brown, & Checkland, 1995). Socio-technical systems design balances the needs of the systems of purposeful human-activity (work or play) in which various people engage, and the supporting system of information technology and user experience design that makes those activities possible.
References
Checkland, P. (1981) Systems Thinking, Systems Practice, John Wiley & Sons, Chichester.
Eason, K. D. (1982). The Process Of Introducing Information Technology. Behaviour and Information Technology, 1(2), April-June 1982. Reprinted as Eason, K.D. (1984) “Managing Technological Change,” in Rob Paton, Suzanne Brown, Jake Chapman, Mike Floyd and John Hamwee (Eds.) Organizations: Cases, Issues, Concepts. The Open University, Milton Keynes, UK.
Emery, F. E., & Trist, E. L. (1960). Socio-Technical Systems. In C. W. Churchman & M. Verhulst (Eds.), Management Science Models and Techniques (Vol. 2). Oxford UK: Pergamon Press.
Mumford, E. & Sackman, H. (1975) Human Choice and Computers, North-Holland Publishing Company.
Mumford, E. & Weir, M. (1979), “Computer Systems in Work Design: the ETHICS Method”, John Wiley, New York
Mumford, E. (1983) Designing Participatively: A Participative Approach to Computer Systems Design. Manchester Business School, Manchester, UK.
Mumford, E. (1995) Effective Systems Design and Requirements Analysis: The ETHICS Approach. Macmillan, Basingstoke, UK
A Framework For Behavioral Studies of Social Cognition In Information Systems
Susan Gasson
Drexel University, USA
Please cite this paper as:
Gasson, S. (2004) ‘A Framework For Behavioral Studies of Social Cognition In Information Systems,’ Proceedings of ISOneWorld Conference, Las Vegas NV, 14-16 April 2004. Available from https://www.improvisingdesign.com/soc-cog/
Abstract
This paper examines framing processes in organizational information system definition, acquisition and use. Three theoretical lenses of social cognition are required to understand how individuals and groups frame IS problems and solutions. These are: (i) socially-situated cognition, (ii) socially-shared cognition, and (iii) distributed cognition. These three perspectives are often conflated in studies that examine mental models or framing in an IS context. The separation of analytical “levels” reveals different interiors of the “black box” of organizational IS design and adaptation, which are not well understood. In particular, this methodological framework highlights different assumptions concerning whether mental models are static or dynamic, and whether cognition is a property of individuals, groups, or technological systems.
Keywords: Social Cognition, Technological Frames, Mental Models, IS Design, Sensemaking, Improvisation
Introduction
The study of socially-situated cognition is becoming increasingly common in the information systems (IS) literature. An interest in theoretical concepts such as organizational sensemaking (Weick, 2001), situated action (Lave and Wenger, 1991, Suchman, 1987, 1998), technological frames (Orlikowski and Gash, 1994), organizational improvisation (Orlikowski and Hofman, 1997, Weick, 1998), technology adaptation (Majchrzak et al., 2000, Orlikowski et al., 1995), emergent knowledge processes (Markus et al., 2002), and distributed cognition (Hutchins, 1991) reflects an attempt to understand the ways in which aspects of individual and group understanding inform the definition, design, acquisition, use and adaptation of technological systems that are situated within a specific social and organizational-work context. But much of this work is ad hoc and fragmented, with little understanding of the traditions that underlie these theoretical concepts and the relationship between them.
This paper is structured as follows. Section 2 presents a structured discussion of different theoretical perspectives that are incorporated into situated (or contextual) analyses of social cognition. The situated perspective is privileged here because contextual studies of social cognition are emerging as an important development in the organizational and IS literatures (Orlikowski and Gash, 1994, Porac, 1996, Resnick, 1991, Winograd and Flores, 1986). In section 3, a methodological framework is suggested for studies of social cognition in IS, accompanied by a discussion of how these concepts may be operationalized. Finally, there is a discussion of how this framework may be applied in IS research studies.
Theoretical Perspectives on Framing
The study of the processes by which human beings individually and collectively interpret, bound and make sense of phenomena and social interactions in the external world originated in the fields of cognitive and social psychology. Human beings are thought to act according to internal, cognitive structures that represent or symbolize external reality, constituting a language of thought (Fodor, 1975). These structures are variously referred to as schemas (Bartlett, 1932, Neisser, 1976), personal constructs (Kelly, 1955), scripts (Schank and Abelson, 1977) or mental models (Gentner and Stevens, 1983, Johnson-Laird, 1983). Earlier notions of cognitive processing emphasized information processing over the construction of meaning; the importance of both context and meaning became de-emphasized as a result (Bruner, 1990). More recently, human cognition has been viewed as a process that is situated within a socio-cultural context (Porac, 1996, Suchman, 1987, 1993). Thus, meaning “derives from an interpretation that is rooted in a situation” (Winograd and Flores, 1986, page 111). Mental models become more complex, abstract and organized with experience: this is pertinent in the IS profession, where experiential knowledge is valued because it brings an increased capacity for abstraction (Vitalari and Dickson, 1983).
These cognitive structure concepts from the psychology literature converge, and are extended to organizational research, in the notion of a “frame” (Goffman, 1974, Tannen, 1993). Framing operates at the intersection of a psychological-cognitive and a social-behavioral approach to human interaction (Ensink and Sauer, 2003). In framing a problem-situation, an individual both structures and bounds those elements of the situation that they consider relevant, just as a film-director frames a scene.
Framing As Socially-Situated Cognition
Underlying any study of social interaction is the understanding that individuals inhabit a socially constructed world and through their actions, reproduce and give meaning to that world (Berger and Luckman, 1966, Kelly, 1955). Individuals operate within distinct “social worlds” (Strauss, 1978, 1983) or “communities of practice” (Brown and Duguid, 1991, Lave and Wenger, 1991): local workgroups possessing their own social norms, social expectations and specific genres of communication. But people are also members of multiple social worlds, as their work and personal experience intersects with the knowledge and interests of different groups (Strauss, 1983, Vickers, 1974). Thus, organizational problems and meanings are not consensual but emerge through interactions between the various social worlds to which decision-makers belong. People behave according to “structures of expectation” (Tannen, 1993) that guide how they predict and interpret the behavior of others. Such structures are partly culturally-predetermined and partly based on prior experience of similar situations (Boland and Tenkasi, 1995, Minsky, 1975, Schank and Abelson, 1977, Tannen, 1993).
Communications are framed both within a specific, situational context and from an individual perspective (Ensink and Sauer, 2003, Tannen, 1993). Individuals provide conversational cues, on the basis of which hearers are able to place the communication within a specific context. But an individual cannot contribute to a discourse without displaying their view on the subject matter. Thus, individual frames are not static, but subjected to change during communicative and social interaction (Boland and Tenkasi, 1995, Ensink and Sauer, 2003, Eysenck and Keane, 1990). Suchman (1987, 1998) demonstrates how shared definitions of technology and work spaces are produced and reproduced through interactions between technology, people and potential work-spaces. Managers and workers make sense of their organizational environment and innovate through improvisation, to determine what works in practice and how it may be changed (Middleton, 1998, Orlikowski and Hofman, 1997, Weick, 1998, Zack, 2000). Organizational processes may no longer be viewed as static, but as “emergent knowledge processes” (Markus et al., 2002). Knowledge and meaning therefore derive from situated, shared experience, interpreted through continual adaptation and improvisation (Markus et al., 2002, Middleton, 1998, Weick, 1998, Zack, 2000).
The core problem, in determining how people frame a specific situation, is that of making evidence of internal, cognitive framing structures visible, for analysis. Bruner’s (1990) use of storytelling as a way to elicit implicit perspectives is well-established (Boje, 1991, Gershon and Page, 2001, Mitroff and Kilmann, 1975). However, it must also be recognized that people invent or post-rationalize narratives, as a way of making sense of uncomfortable or inappropriate behavior and situations (Angus, 2001). Boland and Tenkasi (1995) suggest that narrative be combined with techniques to stimulate reflexivity (Schutz, 1967) and also suggest the use of cognitive maps (Axelrod, 1976, Eden et al., 1983) to elicit implicit reasoning. Most studies of situated framing employ a discourse analysis of interview data, observation, or technology interactions. Rommes (2002), in a “thinking aloud” study of Internet interactions, found that the way in which first-time users interpreted the city metaphor in their use of a digital city internet resource was very different to the way in which technical designers framed the ‘city’ metaphor. Jacobs (2001, 2002) employed discourse analysis and a co-term analysis of survey data, to compare framing constructs held by members of different professional groups. He concluded that the similar life-experience of members of specific groups led them to frame the role of information technology in similar ways. Prasad (1993) interviewed and observed members of diverse occupations, in a computerization project at a large health-services organization. He concluded that the way in which technology was interpreted resulted from sociocultural influences, such as membership of a specific professional group, combined with the ways in which their local managers and opinion-makers ascribed meaning to the technology. For example, managers who advocated use of the new information system by ascribing human qualities to it, such as smartness or knowledgeability, raised expectations of how the technology would behave that were widely adopted by those who worked for them and which were often at odds with their experience. From these studies, it can be seen that meaning and expectations are affected both by life-experience (derived through membership of a specific social or work-group) and also by interactions with other individuals who work in the same context.
Socially-Shared Cognition
Groups of people who regularly work together on shared tasks have been observed to develop a repertoire of shared frames. Shared frames provide cognitive “shortcuts” that permit a group to share common interpretations of the organization without the need for complex explanations (Boland and Tenkasi, 1995, Brown and Duguid, 1991, Fiol, 1994, Lave and Wenger, 1991). The development of a community of professional practice, such as a design group, is contingent on the development of shared (or intersubjectively acknowledged) meanings and language (Lave, 1991, Prus, 1991). The use of specific language reinforces the extent of shared understanding within a work-group and allows them to reconcile competing or complementary perspectives (Lanzara, 1983, Prus, 1991, Winograd and Flores, 1986). For example, IT developers share a vocabulary that is often unintelligible to other workers, but which allows them to communicate and coordinate work, using shorthand terms such as “this is a blue screen error”. IS design depends upon intersubjectivity for effective communication between team members to take place. Technical system designers, “successful in sharing plans and goals, create an environment in which efficient communication can occur” (Flor and Hutchins, 1991, page 54). This type of perspective-sharing requires not only shared knowledge, but also a shared system of sociocultural norms and values. Organizational framing is embedded within a local system of shared, socio-cultural values that make sense of “how we do things here” (Brown and Duguid, 1991, Lave and Wenger, 1991, MacLachlan and Reid, 1994).
“Knowledge and understanding (in both the cognitive and linguistic senses) do not result from formal operations on mental representations of an objectively existing world. Rather, they arise from the individual’s committed participation in mutually oriented patterns of behavior that are embedded in a socially shared background of concerns, actions, and beliefs.” (Winograd and Flores, 1986, page 78)
Orlikowski and Gash (1994) studied “technological frames”: those aspects of shared cognitive structures that relate to the assumptions, expectations and knowledge that people use to understand technology in organizations. By identifying various domains associated with shared framing perspectives, Orlikowski and Gash were able to identify differences between the technological frames held by technologists vs. those held by users of the technology. However, in their study, Orlikowski and Gash argued that members of two identifiable stakeholder groups (technologists and technology-users) possessed shared frames as they performed similar work, possessed similar backgrounds and worked within a cohesive organizational culture. This is not true in all cases: a general assumption that individual frames can be analyzed as representative of a specific interest group is highly dangerous. We cannot assume shared frames just because group members share a similar culture (Krauss and Fussell, 1991). We also cannot assume the existence of a shared culture among design group members: recently formed groups, or groups with new members have diverse systems of value, behavior and expectation (Lave and Wenger, 1991, Moreland et al., 1996).
An analysis of the degree of congruence[1] between different group frames may allow us to understand why negotiations between different groups, or decisions taken by representatives from specific organizational groups, result in a specific outcome, which may in turn help us to predict or to manage such outcomes. But defining shared content depends upon the way in which the framing concept is itself defined: we need to examine what is shared, to understand the degree of frame congruence (Cannon-Bowers and Salas, 2001). Cannon-Bowers and Salas (2001) suggest that what is shared in studies of shared cognition falls into four categories: (i) task-specific knowledge, relating to the specific, collective task in hand; (ii) task-related knowledge, experiential knowledge from similar tasks, of how to perform the work-processes that are required; (iii) knowledge of teammates, i.e. who knows what; and (iv) attitudes and beliefs that guide compatible interpretations of the environment. In the Orlikowski and Gash (1994) study, the assumption of shared frames refers only to congruence in the fourth category, attitudes and beliefs that guide compatible interpretations of the environment. Davidson (2002) extended the technological frames concept by analyzing the process of frame sharing and the dominance of different frame domains within a collaborative group over time. She discovered that the adoption of a specific, shared frame domain provided a conceptual boundary, or filter, to group discourse. Different frame domains became salient to the group at different points in the process, resulting in the adoption of a different strategy towards the IS design. Changes in the salient frame domain appeared to be triggered or accompanied by the adoption of a new group metaphor for the rationale behind the current design strategy. At times when the business value of IT frame-domain dominated group discourse, this led to a radical reconsideration of project requirements. At times when the IT delivery strategy frame-domain dominated group discourse, the group reverted to a more conservative definition of requirements, consistent with the perceived need to deliver a known product. This use of the term ‘frame domain’ thus relates to an intersection of the task-related, experiential-knowledge category and of the attitudes and beliefs category defined above (Cannon-Bowers and Salas, 2001).
So the development of shared frames may lead to more coherent group action, and the adoption of a new framing metaphor may reflect a shift in the dominant framing domain that triggers a change in group strategy. But to analyze shared frames, we must be satisfied that frame congruence exists within a group, before we can analyze congruence across different groups. To do this, we need to develop some nomothetic dimensions of the framing domain: a reduced set of dimensions that are generalizable to other contexts. There are few studies that examine shared framing in any detail and none were identified that examine all four of the categories of knowledge suggested by Cannon-Bowers and Salas (2001). Such studies are highly complex, requiring detailed analysis over multiple data samples.
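As a rough sketch of what such an analysis might involve (in Python, with invented frame contents), each person’s frame can be treated as sets of elements in the four Cannon-Bowers and Salas (2001) categories, with congruence per category measured as simple set overlap. Real studies elicit these elements through interviews, repertory grids, or cognitive maps; the overlap measure used here is only one plausible operationalization.

```python
# Illustrative sketch: per-category frame congruence as set overlap (Jaccard index).
# The categories follow Cannon-Bowers and Salas (2001); the frame contents are invented.

CATEGORIES = ["task_specific", "task_related", "who_knows_what", "attitudes_beliefs"]

def congruence(frame_a, frame_b):
    """Overlap between two individual frames per category (1.0 = identical, 0.0 = disjoint)."""
    scores = {}
    for cat in CATEGORIES:
        a, b = frame_a.get(cat, set()), frame_b.get(cat, set())
        union = a | b
        scores[cat] = len(a & b) / len(union) if union else 1.0
    return scores

developer = {
    "task_specific":     {"build knowledge-base prototype"},
    "task_related":      {"iterative prototyping", "database design"},
    "who_knows_what":    {"Ann: database", "Raj: user community"},
    "attitudes_beliefs": {"new technology drives collaboration"},
}
psychologist = {
    "task_specific":     {"understand why users would collaborate"},
    "task_related":      {"field observation", "interviewing"},
    "who_knows_what":    {"Ann: database", "Raj: user community"},
    "attitudes_beliefs": {"work practice drives collaboration"},
}

print(congruence(developer, psychologist))
# High overlap on who-knows-what, none on task framing: enough to coordinate day-to-day,
# but the design problem itself is framed in incommensurate ways.
```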
Distributed Cognition
Star (1989) argues that the development of distributed systems should use a social metaphor, rather than a psychological one, where systems are tested for their ability to meet community goals. A social perspective requires the incorporation of differing viewpoints for decision-making. This accords with the position of many authors working on the problem of how to reflect the diversity of organizational needs in IS design (for example, Checkland, 1981, Checkland and Holwell, 1998, Eden, 1998, Eden et al., 1983, Weick, 1987, Weick, 2001). Weick (1987) discusses how teams performing collaborative tasks require a requisite variety of perspectives, to detect all of the significant environmental factors affecting collective decisions. But this is balanced by the need for a homogeneity of culture, within which team members can trust and interpret information from other team members. A wide spread of experience can be expected to cause problems of group cohesion and productivity (Krasner et al., 1987, Orlikowski and Gash, 1994). Thus, an IS design that spans multiple organizational groups or knowledge domains involves distributed cognition. Understanding within the design team is distributed: each individual can comprehend only a part of how the target system of human activities operates (Hutchins, 1991). The implications of distributed cognition are shown in Figure 1. The intersection of frames represents the degree of shared knowledge possessed by group members. This is relatively small when compared to the union, which represents the total knowledge of the group. A distributed cognition perspective assumes that “heedful interrelating” between members of a cooperative workgroup is required for effective collaboration (Weick and Roberts, 1993). Heedful interrelating is accomplished by mobilizing the shared understanding between individuals – the intersection between two segments of the diagram in Figure 1.
Figure 1: The Problem Of Distributed Cognition In Collaborative Work
Individual group members need to have some interdependency, or overlap, with other individuals in their framing of what needs to be done and why, to be able to coordinate action. But the distributed cognition perspective takes the position that there is a lack of overall congruence between how individuals frame organizational work. There is often an implicit model of a “collective mind” (Weick and Roberts, 1993) in much of the work on distributed cognition. But understanding is not so much shared between, as “stretched over” members of a cooperative group (Star, 1989). For example, a pilot may not understand how a navigational bearing was derived, but he shares sufficient knowledge-overlap with his navigator to be able to implement that bearing, as a change in direction. The concept of distributed cognition provides an alternative to the assumption of shared knowledge discussed above:
“Distributed cognition is the process whereby individuals who act autonomously within a decision domain make interpretations of their situation and exchange them with others with whom they have interdependencies so that each may act with an understanding of their own situation and that of others.” (Boland et al., 1994, page 457).
So where does group knowledge reside? In operationalizing the concept of distributed knowledge, we note that interactions between individuals in collaborative work are mediated by “boundary objects” (Star, 1989). Boundary objects are physical artifacts, such as maps and diagrammatic models, that provide a representation of the superset of domain knowledge across various actors in cooperative work. Individuals are able to collaborate with others by ascribing a shared meaning to a boundary object. But boundary objects provide a sufficiently vague (global) representation of domain knowledge that they can be adapted to individual, local needs and constraints. For example, the schematic map of the New York subway system does not represent a detailed model of the locations and distances between stations. But it is sufficiently vague that it can be used to coordinate knowledge about what to do (“how do I get from here to there?”), ease of task (“do I have to change trains to get there?”), and travel costs (“which stations are in which travel zone?”). So it can coordinate collaboration between New York subway train drivers, platform guards, experienced travelers, tourists, ticket sales staff and ticket collectors, even when those individuals do not share the same knowledge about the elements that comprise the New York subway system. These physical representations or external products of human interaction often contain a shared understanding that is not possessed individually by the people who produced them (Hutchins, 1991, Star, 1989, Weick and Roberts, 1993). Thus, “shared” understanding is often not explicit, but communicated through representations of work and its context that represent implicit and partial “maps” of what needs to be achieved (Hutchins, 1991, Norman, 1991, Schmidt, 1997, Star, 1989, 1998). If we examine external representations produced through collaborative work, we may be able to understand the sum of the group knowledge: the union of individual design-frames, as distinct from the intersections that represent shared frames. But we have to understand that these representations are also incomplete, as they have to be sufficiently vague to represent different things to different people. So the resulting knowledge is nomothetic (reduced and generalizable), rather than idiographic (specific to an individual knowledge-domain and context).
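A minimal sketch of this distinction, using invented knowledge elements: the intersection of individual frames (what every member shares) is typically far smaller than their union (the group’s collective knowledge), and coordination rests on pairwise overlaps rather than on a single collective mind.

```python
# Minimal sketch with invented knowledge elements: shared knowledge is the intersection
# of individual frames; the group's total knowledge is their union.

frames = {
    "pilot":      {"aircraft handling", "fly heading 270", "radio procedure"},
    "navigator":  {"chart reading", "derive bearing", "fly heading 270"},
    "controller": {"radio procedure", "traffic separation", "fly heading 270"},
}

shared = set.intersection(*frames.values())   # what every member knows
total = set().union(*frames.values())         # what the group knows collectively

print(shared)                                 # small: {'fly heading 270'}
print(len(total), "items in total")           # much larger than the shared core

# Pairwise overlaps let two members coordinate without full shared knowledge:
print(frames["pilot"] & frames["navigator"])  # {'fly heading 270'}
```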
A distributed cognition perspective allows us to conceptualize a theory of design that permits agreement and negotiated outcomes while recognizing that each individual group member’s design understanding may be incomplete, emergent and not congruent with the understanding of others. Established workgroups develop an understanding of who knows what, which allows them to operate with heedfulness to others’ tasks and to the division of collective work (Moreland et al., 1996). But local (domain-specific) knowledge is embedded in practice, rather than being capable of articulation (Fiol, 1994, Lave and Wenger, 1991). Members of a boundary-spanning design group may not realize that they hold distributed knowledge, or that they differ over locally-defined framing perspectives, and so may perceive misunderstandings as the consequence of political differences. For example, Gasson (1999) discussed how an IS design group that involved both technical developers and organizational psychologists interpreted their inability to cooperate as “personality problems”, yet this stemmed in large part from the different framing filters that they imposed on the design problem. While the technical developers framed the design problem as experimenting with new technology to support user collaboration in constructing a knowledge-base, the psychologists framed it as understanding how, where and why users would wish to collaborate and what role technology could play in this process. The two frame-domains were fundamentally incommensurate and the group lacked a mechanism for reconciling their different framing perspectives. In traditional work groups there are experts on whom the group may rely for guidance, whereas in workgroups where knowledge is distributed across work-related domains, perceptions of expertise are subjective and negotiated: there is a “symmetry of ignorance” (Rittel, 1972). This is borne out by Faraj and Sproull’s (2000) study of software development teams, which indicated that the effective management of distributed cognition is significant in ensuring team effectiveness. While the possession of expertise did not directly affect team performance, the coordination of expertise was seen as critical to team success, and social integration was considered more important than having an expert on the team (Faraj and Sproull, 2000). Thus, a shared understanding of who-knows-what is often more important to a collaborative design group than a shared understanding of the design itself.
The Analysis of Framing In Studies of Social Cognition
MacLachlan and Reid (1994) note that studies of cognitive framing can take a static perspective, analyzing a “snapshot” of framing perspectives adopted by subjects around specific issues, or a dynamic perspective, where influences on the evolution of specific perspectives are assessed over time. The majority of research studies appear to conceptualize cognitive framing as static. Tan (1999; Tan and Hunter, 2002) and Daniels et al. (2002) suggest that a repertory grid technique may be used for the assessment of individual framing perspectives. Several authors (e.g. Bougon and Komocar, 1990, Daniels and Johnson, 2002, Eden, 1998, Weick and Bougon, 1986) have used cognitive mapping (Axelrod, 1976) to elicit or compare causal models of individual and/or group belief-structures. Orlikowski and Gash (1994) coined the term “technological frames” to describe how individuals understand and interpret the role of technology in their work and organizational life. They used a qualitative analysis of themes in interview data to determine the extent of congruence between technological frames held by technology users vs. technology developers. These studies draw conclusions that avoid the question of how framing perspectives evolve through interaction with contextual phenomena and with other people, even over a short period of time (Boland and Tenkasi, 1995). In contrast, studies that investigate framing evolution require more complex methods and a longer duration. Urquhart (1999) used a discourse analysis of interview data, combined with videotapes of discussions between users and technical requirements analysts, to discover how their framing perspectives evolved through interaction. Davidson (2002) qualitatively analyzed both interview data and observational meeting data over a period of two and a half years, to understand differences between individual frames and the changing nature of the shared technological frames held by an IS development project group. Gasson (1998) used a combination of discourse analysis and Soft Systems Methodology to elicit and analyze explicit and implicit frames, in an 18-month study of a group of managers engaged in the co-design of business and IT systems. Barr et al. (1992) constructed cognitive cause maps from 50 letters to shareholders published by two companies over a 25-year period, to understand how managers’ framing perspectives were affected by developments in their company environment.
When analyzing framing perspectives, it is important to understand two problems. The first is that we, as researchers, are interpreting constructs that reside in the heads of others. Thus we encounter the intersubjectivity problem. Intersubjectivity requires a “leap of consciousness” (Schutz, 1967). This leap is developed further in Heidegger’s (1962) hermeneutic phenomenology, which takes the position that it is the interpretation of common experience that leads to an intersubjective understanding of another’s intention. As researchers, we are unlikely to possess such common experience unless we participate in the activities that form the subject of our subjects’ cognitive frames. It could be argued that it is only through “talking aloud” observation, participant observation or action research that we might understand the cognitive frames of our subjects. The second problem relates to the implicit nature of knowledge that resides “in the head”. Much of what we know is know-how, rather than know-what: skills-based or experiential knowledge that is difficult or impossible for us to articulate (Garud, 1997, Lave and Wenger, 1991, Schön, 1983). We understand such knowledge through interactions with others and with the context in which we work (Boland and Tenkasi, 1995, Schön, 1983). It is often not possible to articulate such knowledge, either in a work situation or in an interview situation. So eliciting framing perspectives is problematic, as subjects themselves may not be aware of them.
The identification of metaphors used in discourse may resolve these problems (Kendall and Kendall, 1993, Walsham, 1993). Metaphors play a central role in the analysis of organizational sensemaking, as they associate the properties of familiar concepts or subjects with a relatively unknown subject (Grant and Oswick, 1986). Just as Weick (1979) discusses cognitive maps as a belief-structure through which we filter external evidence, Morgan (1986) argues that the use of metaphor implies a way of thinking and seeing that forms our understanding of the external world. So the use of common metaphors may imply the existence of a shared belief structure. For example, a group of American IS developers may use metaphors derived from baseball, such as hitting a home run or covering first base, to indicate a shared pride or anxiety. British IS developers use metaphors derived from cricket, such as hit for six or a sticky wicket, for the same purpose. But metaphors present only a part of the complex and dynamic cognitive constructs – referred to here as mental models or “frames” – that underlie individual and shared sensemaking (Klimoski and Mohammed, 1994, Oswick et al., 2002).
Metaphors represent an acknowledged similarity between one concept and another. We also need to develop ways of surfacing implicit knowledge, to understand fully how actors in a specific situation frame that situation. Some possible approaches are:
(a) Interpreting actor behavior in observational and action research studies;
(b) Using interactive methods that have been developed to surface implicit knowledge, such as Soft Systems Methodology (Checkland, 1981, Checkland and Holwell, 1998), the analysis of organizational “stories” (Gershon and Page, 2001, Mitroff and Kilmann, 1975), and cognitive mapping (Axelrod, 1976, Eden, 1998);
(c) Employing a qualitative approach that focuses on a hermeneutic and multi-faceted analysis of subjects’ discourse (Klimoski and Mohammed, 1994, MacLachlan and Reid, 1994, Oswick et al., 2002, Tannen, 1993). For example, a subject’s statement that they seek a document management tool might conflict with their expressed goal of tracking development activity-completion, indicating that they frame their problem as one of progress-management or worker-commitment, rather than framing the problem as related to the use of specific documents.
Employing the lens of socially-situated cognition allows us to examine the ways in which internal human knowledge structures shape how people interpret events in a particular way, or sensitize them to specific events and phenomena over others (MacLachlan and Reid, 1994, Winograd and Flores, 1986). An IS design can be seen as the result of negotiation between multiple, socially-situated “worlds” that represent reality in different ways to different people. The resulting IS reflects intersections between an overlapping set of individual and group perspectives that shift and evolve as the design proceeds. Problem contents and boundaries are subjective, multiple and competing: “relevant” organizational problems are determined through argumentation and negotiation (Boland and Tenkasi, 1995, Rittel, 1972). Taking a framing perspective on socially-situated cognition allows us to conceptualize how similarities and differences in individual perspectives and understandings guide collective action.
A Framework For Social Cognition in Information Systems
To operationalize these levels of analysis, it is necessary to understand the focus of each type of analysis and the assumptions underlying that focus. Table 1 summarizes the dominant perspectives on socially-situated cognition for each of the three theoretical lenses discussed above, showing how each perspective operationalizes the “framing” concept in a different way.
Table 1. A Framework Of Analytical Perspectives On Socially-Situated Cognition in IS

Level: Socially-Situated Cognition

Nature of concept: Static framing: a snapshot of idiographic (locally-specific) framing perspectives adopted by individuals.
Focus and assumptions: Defines individual frame domains and content to understand differences between individuals. Assumes that the snapshot represents ongoing framing perspectives.
Exemplars: Tan (1999); Rommes (2002); Jacobs (2002).

Nature of concept: Dynamic framing: a comparative analysis of framing perspectives over time.
Focus and assumptions: Analyzes changes in, and/or influences on, individual frame domains and content.
Exemplars: Urquhart (1999); King (1997).

Level: Socially-shared Cognition

Nature of concept: Static frame comparison: a nomothetic (generalizable) analysis of shared framing perspectives within an interest group, or between groups.
Focus and assumptions: Analyzes congruence in frame domains and content across members of a specific group, or assumes congruence within a group in order to analyze congruence between groups. Assumes that a snapshot represents beliefs, attitudes and knowledge generally held by subjects. Also assumes that frames can be reduced to a few nomothetic concepts.
Exemplars: Orlikowski and Gash (1994); Sahay et al. (1994); Barrett (1999); Gallivan (2001).

Nature of concept: Dynamic frame comparison: a comparative analysis of collective frames over time.
Focus and assumptions: Assumes frame congruence between members of a work or interest group, in order to analyze changes in dominant or shared frames over time.
Exemplars: Davidson (2002); Gasson (1998); McLoughlin et al. (2000).

Level: Distributed Cognition

Nature of concept: Static comparison of frame congruence and differences: an analysis of the ways in which work is coordinated across different knowledge or work domains.
Focus and assumptions: Focuses on the locally-constructed nature of knowledge and belief structures. This type of analysis therefore tends to privilege idiographic (specific) aspects of framing over nomothetic (generalizable) aspects.
Exemplars: Ciborra and Andreu (2000); Orlikowski (2002); Carlile (2002).

Nature of concept: Dynamic analysis of frame intersections and union: an analysis of interactions that permit “heedful interrelating” between collaborative group members.
Focus and assumptions: Analyzes distributed group work through the analogy of a “collective” mind. Examines the coordination of diverse framing perspectives, usually privileging nomothetic aspects.
Exemplars: Weick and Roberts (1993); Gasson (2004); Hutchins and Klausen (1998).

Nature of concept: Transactive frame mediation: an analysis of how an external (technology-mediated) group “memory” or “knowledge base” may be constructed and used.
Focus and assumptions: Analyzes mediated “workspaces” or boundary objects as a resource for distributed knowledge management in collaborative work. Assumes that collective, or coordinating, knowledge may be represented in external artifacts.
Exemplars: Star (1989); Zhang and Norman (1994); Perry et al. (1999); Suchman (1998); Hollan et al. (2002).
Conclusion: Application Of The Framework
The literature review and the framework presented above summarize different analytical perspectives on socially-situated cognition, by operationalizing the different approaches to “frame” analysis that are found in the IS and related literatures. It can be seen that the focus and underlying assumptions of each approach are very different. Each approach is intended to achieve a different end and so each suffers from the limitations of its specific set of assumptions about the nature of the data, or the ways in which the data can be analyzed. This is not to suggest that such analyses are valueless. But the different perspectives on socially-situated cognition represented here are often conflated. This leads to muddled analyses of “technological frames” (or a similar construct), with no clear objective or analytical model underlying the production of research evidence.
The framework presented here may help to clarify the selection of a specific approach to the analysis of socially-situated cognition. Specifically, it differentiates between different modes of knowing that require different methods of investigation. The study of cognitive frames is a relatively recent departure for IS researchers, and many concepts from the psychology and organizational literatures have become conflated in the process of translation. This framework identifies three aspects of social cognition that are relevant to the current state of IS research:
Socially-situated cognition, which relates to an individual perspective that is situated in a socio-technical context;
Socially-shared cognition, which relates to a group perspective that filters and guides shared interpretations of collective goals, contextual events and other phenomena;
Distributed cognition, which relates to a “shared memory” or group consciousness that is not possessed in common, but stretched across the members of a collaborative group.
Each of these aspects of framing in social cognition may be analyzed as a static construct, taking a “snapshot” of individual or group frames to understand differences or congruence between various perspectives, or as a dynamic construct, tracing the evolution of framing perspectives over time. Additionally, the distributed cognition aspect of framing is associated with a transactive memory construct, concerned with the ways in which technology might mediate group knowledge resources to support collaborative work.
Of course, the perspectives presented above are not mutually exclusive. But it is important to have a clear notion of what each analytical perspective achieves and to understand its limitations. The framework presented here depicts the different aspects of individual, group and inter-group frames dealt with by each analytical perspective. It is hoped that this will provide a mechanism to make the analysis of, and explanations from, studies of social cognition more open, explicit and rigorous.
References
Angus, J. (2001) To Tell the Truth. Knowledge Management Magazine, May.
Axelrod, R. (1976) The cognitive mapping approach to decision making. In Structure of Decision (Ed, Axelrod, R.) Princeton University Press, Princeton, NJ, pp. 221-250.
Barr, P. S., Stimpert, J. L. and Huff, A. S. (1992) Cognitive Change, Strategic Action, and Organizational Renewal. Strategic Management Journal,13, 15-36.
Barrett, M. I. (1999) Challenges of EDI adoption for electronic trading in the London Insurance Market. European Journal of Information Systems,8 (1), 1-15.
Bartlett, F. (1932) Remembering: A Study In Experimental And Social Psychology, Cambridge University Press, London, UK.
Berger, P. L. and Luckman, T. (1966) The Social Construction Of Reality: A Treatise In The Sociology of Knowledge, Doubleday & Company Inc., Garden City N.Y.
Boje, D. M. (1991) The Storytelling Organization: A Study of Story Performance. Administrative Science Quarterly,36 (1), 106-126.
Boland, R. J. and Tenkasi, R. V. (1995) Perspective Making and Perspective Taking in Communities of Knowing. Organization Science,6 (4), 350-372.
Boland, R. J., Tenkasi, R. V. and Te’eni, D. (1994) Designing Information Technology to Support Distributed Cognition. Organization Science,5 (3), 456-475.
Bougon, M. G. and Komocar, J. M. (1990) Directing Strategic Change: A Dynamic Wholistic Approach,. In Mapping Strategic Thought (Ed, Huff, A. S.) Wiley and Sons, pp. 135-163.
Brown, J. S. and Duguid, P. (1991) Organizational Learning and Communities of Practice: Toward a Unified View of Working, Learning, and Innovation. Organization Science,2 (1), 40-57.
Bruner, J. (1990) Acts of Meaning, Harvard University Press, Cambridge MA.
Cannon-Bowers, J. A. and Salas, E. (2001) Reflections on shared cognition. Journal of Organizational Behavior,22 195-202.
Carlile, P. R. (2002) A Pragmatic View of Knowledge and Boundaries. Organization Science,13 (4), 442-455.
Checkland, P. (1981) Systems Thinking Systems Practice, John Wiley & Sons, Chichester UK.
Checkland, P. and Holwell, S. (1998) Information, Systems and Information Systems: Making Sense of the Field, John Wiley & Sons, Chichester UK.
Ciborra, C. U. and Andreu, R. (2000) “Knowledge Across Boundaries: Managing Knowledge In Distributed Organizations,” Working Paper #93, London School of Economics, ISSN 1472-9601.
Daniels, K. and Johnson, G. (2002) On trees and triviality traps: Locating the debate on the contribution of cognitive mapping to organizational research. Organization Studies,23 (1), 73-81.
Daniels, K., Johnson, G. and Chernatony, L. d. (2002) Task and institutional influences on managers’ mental models of competition. Organization Studies,23 (1), 31-62.
Davidson, E. J. (2002) Technology Frames and Framing: A Socio-Cognitive Investigation of Requirements Determination. MIS Quarterly,26 (4), 329-358.
Eden, C. (1998) Cognitive mapping. European Journal of Operational Research,36, 1-13.
Eden, C., Jones, S. and Sims, D. (1983) Messing About In Problems, Pergamon Press, Oxford.
Ensink, T. and Sauer, C. (2003) Introduction. In Framing And Perspectivising In Discourse (Eds, Ensink, T. and Sauer, C.) University of Groningen, Germany.
Eysenck, M. W. and Keane, M. T. (1990) Cognitive Psychology, Erlbaum Press, Hove, East Sussex UK.
Faraj, S. and Sproull, L. (2000) Coordinating Expertise in Software Development Teams. Management Science,46 (12), 1554-1568.
Fiol, C. M. (1994) Consensus, Diversity and Learning In Organizations. Organization Science,5 (3), 403-420.
Flor, N. V. and Hutchins, E. L. (1991) Analyzing distributed cognition in software teams: a case study of team programming during perfective software maintenance. Proceedings of the Empirical Studies of Programmers – Fourth Workshop Norwood NJ: Ablex, 36-59.
Fodor, J. (1975) The Language of Thought., Harvard University Press, Cambridge MA.
Gallivan, M. J. (2001) Meaning to change: How diverse stakeholders interpret organizational communication about change initiatives. IEEE Transactions on Professional Communication,44 (4), 243-266.
Garud, R. (1997) On the distinction between know-how, know-why and know-what in technological systems. In Advances In Strategic Management, Vol. 14 (Eds, Huff, A. S. and Walsh, J. P.) JAI Press Inc., Greenwich, Connecticut, pp. 81-101.
Gasson, S. (1998) Framing Design: A Social Process View of Information System Development. Proceedings of The Nineteenth International Conference on Information Systems (ICIS 98), Helsinki, Finland.
Gasson, S. (1999) The Reality of User-Centered Design. Journal of End User Computing,11 (4), 3-13.
Gasson, S. (2004) The Management of Distributed Organizational Knowledge. Proceedings of the Thirty-Seventh Hawaii International Conference on Systems Sciences (HICSS-37), Manua, Hawaii.
Gentner, D. and Stevens, A. L. (1983) Mental Models, Erlbaum, Hillsdale N.J.
Gershon, N. and Page, W. (2001) What storytelling can do for information visualization. Communications of the ACM,44 (3), 31-37.
Goffman, E. (1974) Frame Analysis, Harper and Row, New York, NY.
Grant, D. and Oswick, C. (1986) Introduction: Getting the Measure of Metaphors. In Metaphor and Organizations (Eds, Grant, D. and Oswick, C.) Sage, London.
Heidegger, M. (1962) Being and Time, Harper & Row, New York NY.
Hollan, J., Hutchins, E. and Kirsh, D. (2002) Distributed cognition: toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction (TOCHI),7 (2), 174-196.
Hutchins, E. (1991) The Social Organization of Distributed Cognition. In Perspectives on Socially Shared Cognition (Eds, Resnick, L. B., Levine, J. M. and Teasley, S. D.) American Psychological Association, Washington DC, pp. 283-307.
Hutchins, E. and Klausen, T. (1998) Distributed cognition in an airline cockpit. In Cognition and Communication at Work (Eds, Engestrom, Y. and Middleton, D.) Cambridge University Press, New York, pp. 15-34.
Jacobs, N. (2001) Information technology and interests in scholarly communication: a discourse analysis. Journal of the American Society for Information Science and Technology,52 (13), 1122 – 1133.
Jacobs, N. (2002) Co-term network analysis as a means of describing the information landscapes of knowledge communities across sectors. Journal of Documentation,58 (5), 548-562.
Johnson-Laird, P. N. (1983) Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, Harvard University Press, Cambridge, MA.
Kelly, G. A. (1955) The Psychology Of Personal Constructs., W.W. Norton & Company Inc., New York, NY.
Kendall, J. E. and Kendall, K. E. (1993) Metaphors and methodologies: Living beyond the systems machine. MIS Quarterly,17 (2), 149-171.
King, S. (1997) Tool support for systems emergence: A multimedia CASE tool. Information and Software Technology,39 (5), 323-330.
Klimoski, R. and Mohammed, S. (1994) Team Mental Model: Construct Or Metaphor? Journal of Management,20 (2), 403-437.
Krasner, H., Curtis, B. and Iscoe, N. (1987) Communication breakdowns and boundary spanning activities on large software projects. In Empirical Studies of Programmers: Second Workshop (Eds, Olson, G. M., Sheppard, S. and Soloway, E.) Ablex, New Jersey, pp. 65-82.
Krauss, R. M. and Fussell, S. R. (1991) Constructing shared communicative environments. In Perspectives on socially shared cognition (Eds, Resnick, L. B., Levine, J. M. and Teasley, S. D.) American Psychological Association, Washington, DC, pp. 172-200.
Lanzara, G. F. (1983) The Design Process: Frames Metaphors And Games. In Systems Design For With and By The Users (Eds, Briefs, U., Ciborra, C. and Schneider, L.) North-Holland Publishing Company, Amsterdam.
Lave, J. (1991) Situating Learning In Communities of Practice. In Perspectives on Socially Shared Cognition (Eds, Resnick, L. B., Levine, J. M. and Teasley, S. D.) American Psychological Association, Washington DC, pp. 63-82.
Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, Cambridge UK.
MacLachlan, G. and Reid, I. (1994) Framing and Interpretation, Melbourne University Press, Melbourne, Australia.
Majchrzak, A., Rice, R. E., Malhotra, A., King, N. and Ba, S. (2000) Technology Adaptation: The Case of a Computer-Supported Inter-Organizational Virtual Team. MIS Quarterly,24 (4), 569-600.
Markus, M. L., Majchrzak, A. and Gasser, L. (2002) A Design Theory For Systems That Support Emergent Knowledge Processes. MIS Quarterly,26 (3), 179-212.
McLoughlin, I., Badham, R. and Couchman, P. (2000) Rethinking political process in technological change: Socio-technical configurations and frames. Technology Analysis & Strategic Management,12 (1), 17-37.
Middleton, D. (1998) Talking work: Argument, common knowledge and improvisation in teamwork. In Cognition and Communication at Work (Eds, Engestrom, Y. and Middleton, D.) Cambridge University Press, New York, pp. 233-256.
Minsky, M. (1975) A Framework For Representing Knowledge. In The Psychology of Computer Vision (Ed, Winston, P.) McGraw Hill, New York, pp. 211-277.
Mitroff, I. I. and Kilmann, R. H. (1975) Stories Managers Tell: A New Tool For Organisational Problem Solving. Management Review,64 (7), 18-28.
Moreland, R. L., Argote, L. and Krishnan, R. (1996) Socially shared cognition at work: Transactive memory and group performance. In What’s Social About Social Cognition? Research on Socially Shared Cognition in Small Groups (Eds, Nye, J. and Brower, A.) Sage, Thousand Oaks CA.
Morgan, G. (1986) Images of Organization, Sage, Newbury Park CA.
Neisser, U. (1976) Cognition and Reality, W.H. Freeman, San Francisco CA.
Norman, D. A. (1991) Cognitive Artifacts. In Designing Interaction: Psychology At The Human-Computer Interface (Ed, Carroll, J. M.) Cambridge University Press, UK.
Orlikowski, W. J. (2002) Knowing in Practice: Enabling a Collective Capability in Distributed Organizing. Organization Science,13 (3), 249-273.
Orlikowski, W. J. and Gash, D. C. (1994) Technological Frames: Making Sense of Information Technology in Organizations. ACM Transactions on Information Systems,12 (2), 174-207.
Orlikowski, W. J. and Hofman, D. (1997) An Improvisational Model of Change Management: The Case of Groupware Technologies. Sloan Management Review,38 (2), 11-22.
Orlikowski, W. J., Yates, J., Okamura, K. and Fujimoto, M. (1995) Shaping Electronic Communication – the Metastructuring of Technology in the Context of Use. Organization Science,6 (4), 423-444.
Oswick, C., Keenoy, T. and Grant, D. (2002) Metaphor And Analogical Reasoning In Organization Theory: Beyond Orthodoxy. Academy of Management Review,27 (2), 294-303.
Perry, M., Fruchter, R. and Rosenberg, D. (1999) Co-ordinating distributed knowledge: an investigation into the use of an organisational memory. Cognition, Technology and Work,1 142-152.
Porac, J., Meindl, J. and Stubbart, C. (1996) Introduction. In Cognition Within and Between Organizations (Eds, Meindl, J., Stubbart, C. and Porac, J.) Sage, Thousand Oaks, CA, pp. ix-xxiii.
Prasad, P. (1993) Symbolic processes in the implementation of technological change: A symbolic interactionist study of work computerization. Academy of Management Journal,36 (6), 1400-1429.
Prus, R. C. (1991) Symbolic interaction and ethnographic research : intersubjectivity and the study of human lived experience, State University of New York Press, Albany.
Resnick, L. B. (1991) Shared Cognition: Thinking As Social Practice. In Perspectives on Socially Shared Cognition (Eds, Resnick, L. B., Levine, J. M. and Teasley, S. D.) American Psychological Association, Washington DC, pp. 1-20.
Rittel, H. W. J. (1972) Second Generation Design Methods. Interview in: Design Methods Group 5th Anniversary Report; reprinted in N. Cross (Ed.), Developments in Design Methodology, J. Wiley & Sons, Chichester, 1984, pp. 317-327.
Rommes, E. (2002) Worlds apart: Exclusion-processes in DDS. Proceedings of the Digital Cities 2: Computational and Sociological Approaches, Second Kyoto Workshop on Digital Cities, Kyoto, Japan, 219-232.
Sahay, S., Palit, M. and Robey, D. (1994) A relativist approach to studying the social construction of information technology. European Journal of Information Systems,3 (4), 248-258.
Schank, R. C. and Abelson, R. P. (1977) Scripts, plans, goals, and understanding: An inquiry into human knowledge structures, Lawrence Erlbaum Associates, Hillsdale, NJ.
Schmidt, K. (1997) Of Maps and Scripts. Proceedings of GROUP 97 ACM SIG: Distributed Group Work, University of Phoenix Arizona, 138-147.
Schön, D. A. (1983) The Reflective Practitioner: How Professionals Think In Action, Basic Books, New York NY.
Schutz, A. (1967) The phenomenology of the social world, Northwestern University Press, Evanston, IL.
Star, S. L. (1989) The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving. In Distributed Artificial Intelligence, Vol. II (Eds, Gasser, L. and Huhns, M. N.) Morgan Kaufmann Publishers Inc., San Mateo CA, pp. 37-54.
Star, S. L. (1998) Working together: Symbolic interactionism, activity theory and distributed artificial intelligence. In Cognition and Communication at Work (Eds, Engestrom, Y. and Middleton, D.) Cambridge University Press, New York, pp. 296-318.
Strauss, A. L. (1978) A Social World Perspective. In Studies In Symbolic Interaction, Vol. 1 (Ed, Denzin, N. K.) Jai Press Inc., Greenwich, Connecticut, pp. 119-128.
Strauss, A. L. (1983) Continual Permutations of Action, Aldine de Gruyter, New York.
Suchman, L. (1987) Plans And Situated Action, Cambridge University Press, Cambridge MA.
Suchman, L. (1993) Response to Vera and Simon’s Situated Action: A Symbolic Interpretation. Cognitive Science,17 (1), 71-76.
Suchman, L. (1998) Constituting shared workspaces. In Cognition and Communication at Work (Eds, Engestrom, Y. and Middleton, D.) Cambridge University Press, New York, pp. 350.
Tan, F. B. (1999) Exploring Business-IT Alignment Using the Repertory Grid. Proceedings of the 10th Australasian Conference on Information Systems, Wellington, New Zealand, 931-943.
Tan, F. B. and Hunter, M. G. (2002) The Repertory Grid Technique: A Method For The Study of Cognition In Organizations. MIS Quarterly,26 (1), 39-57.
Tannen, D. (1993) What’s In A Frame? In Framing in Discourse (Ed, Tannen, D.) Oxford University Press, Oxford, UK.
Urquhart, C. (1999) Themes in early requirements gathering: The case of the analyst the client and the student assistance scheme. Information Technology and People,12 (1), 44-70.
Vickers, G. (1974), parts of which are reprinted in Checkland, P. (1985) “From Optimizing To Learning: A Development of Systems Thinking For The 1990s”, Journal of the Operational Research Society, 36 (9), pp. 757-767.
Vitalari, N. P. and Dickson, G. W. (1983) Problem-Solving for Effective Systems Analysis: An Experimental Exploration. Communications of the ACM,26 (11), 252-260.
Walsham, G. (1993) Reading the organization: metaphors in information management. Journal of Information Systems,3, 33-46.
Weick, K. E. (1979) The Social Psychology of Organizing, Addison-Wesley, Reading MA.
Weick, K. E. (1987) Organizational Culture As A Source Of High Reliability. California Management Review,29 (2), 112-127.
Weick, K. E. (1998) Improvisation as a Mindset for Organizational Analysis. Organization Science,9 (5), 543-555.
Weick, K. E. (2001) Making Sense of the Organization, Blackwell Scientific, Malden MA.
Weick, K. E. and Bougon, M. (1986) Organizations as cognitive maps: Charting ways to success and failure. In The Thinking Organization (Eds, H. P. Sims, J., Gioia, D. A. and Associates) Jossey-Bass, San Francisco, CA, pp. 102-135.
Weick, K. E. and Roberts, K. H. (1993) Collective Mind In Organizations: Heedful Interrelating on Flight Decks. Administrative Science Quarterly,38.
Winograd, T. and Flores, F. (1986) Understanding Computers And Cognition, Ablex Corporation, Norwood New Jersey.
Zack, M. H. (2000) Jazz improvisation and organizing: Once more from the top. Organization Science,11 (2), 227-234.
Zhang, J. and Norman, D. A. (1994) Representations in Distributed Cognitive Tasks. Cognitive Science,18, 87-122.
Notes:
[1] Frame congruence does not imply that frames are identical, but that they are related in structure (possessing common categories of frames) and content (with similar values in the common categories) (Orlikowski and Gash, 1994).
Brown and Duguid’s (2001) concept of a “network of practice” has been niggling away at my consciousness. The idea is that a collection of people are enabled to understand each other’s work because of commonalities in practice, but not to the extent that a Community of Practice creates shared ways of framing and performing work:
“we include under the rubric … groups whose members, to the extent that they have common practices, are able to read and understand one another’s work. Disciplinary networks of practice cut across heterogeneous organizations, including, for example, universities, think tanks, or research labs. Professions make up yet other such networks of practice, where again similar practitioners, by virtue of their practice, are able to share professional knowledge through conferences, workshops, newsletters, listservs, Web pages and the like. … different networks of practice cut horizontally across vertically integrated organizations and extend far beyond the boundaries of the latter. Along these networks, knowledge can flow.” (Brown and Duguid 2001, p. 206)
So networks of practice create closer bonds than organizational membership does, spanning organizational boundaries. If the type of intersubjectivity that derives from shared practice (i.e. what Polanyi calls tacit knowledge) does not underpin a network of practice, what does? The first claim certainly rings true, given the observation that IT professionals identify more with the interests of their profession than with their organization (Chou and Pearson 2012). Which brings me to the second property of networks of practice:
“it is important to note that networks of practice may also inhibit the flow of knowledge. As Lynn et al (1996) show, professional networks will occasionally work to resist the spread of ideas felt to be inimical to the interests of the network’s members.” (Brown and Duguid 2001, p. 207).
So how do networks of practice share knowledge? Brown and Duguid have an explanation:
“We have used the notion of networks of practice to explain leakiness. This is not, we have suggested, simply an inherent property of some kinds of knowledge. It does not result from making knowledge explicit and so tradable. It is, rather, a function of the common underlying practice, which creates social-epistemic bonds. Where practice doesn’t prepare the ground, knowledge is unlikely to flow.” (Brown and Duguid 2001, p. 207)
But this is not very satisfying when members of the network are not co-located. Surely “common underlying practice” includes some form of shared framing as the basis of those social-epistemic bonds? I thought back to the work of Howard Rosenbrock (1981), who argues that IT professionals’ paradigm of designing systems so that users are interchangeable results in deskilled, repetitive, and unfulfilling jobs for those who use these systems. He explains:
“The paradigm is transmitted from one generation to another, not by explicit teaching but by shared problem-solving. Young engineers take part in design exercises, and later in real design projects as members of a team. In doing so, they learn to see the world in a special way: the way in fact which makes it amenable to the professional techniques which they have available.” (Rosenbrock 1981, p. 6)
So we have design methods as a form of performativity, embedding ways of framing job design, as well as creating a shared design practice that ignores users’ psychological and motivational needs. But surely IT professionals are continually learning, acquiring new skills and approaches to system design? It would appear not:
“The fact that most IS professionals learn the bulk of their technical skills during college or immediately afterward encourages recruiters to focus on technical skills for new hires. IS professionals generally learn non-technical skills in the workplace.” (Lee et al. 2001, p.28).
All is not lost. Lee et al. (2001) go on to observe:
“IS professionals generally learn non-technical skills in the workplace. And because these non-technical skills are so valuable in the long term, new hires need to possess the aptitude to learn these skills. This may help explain why recruiters prefer graduates who took more MIS classes than those who concentrated strictly on computer science courses.” (Lee et al. 2001, p.28).
How can we remedy the perspective that leads to such impoverished outcomes? As Rosenbrock observes, IT systems can be seen as a replacement for human ingenuity and skill, or as a way of supporting these. We have a choice to automate or to informate work (Zuboff 1988). We also have two chances to undermine the automation-on-rails approach taught in so many methods classes. First, we can teach IS methods more thoughtfully to those who return for ongoing education, in Masters degrees and the like. Second, back to the network of practice idea: IT professionals have a network of practice with really strong bonds. We can mobilize that network, on LinkedIn and elsewhere, to ensure that IT professionals are aware of the kinds of skill- and knowledge-preserving approaches to organizational system design that we would want to see used in our own organizations.
References
Brown, J.S. and Duguid, P. 2001. “Knowledge and Organization: A Social-Practice Perspective,” Organization Science (12:2), pp. 198-213.
Chou, S.Y. and Pearson, J.M. 2012. “Organizational Citizenship Behaviour in IT Professionals: An Expectancy Theory Approach,” Management Research Review (35:12), pp. 1170-1186.
Lee, S., Yen, D., Havelka, D., and Koh, S. 2001. “Evolution of IS Professionals’ Competency: An Exploratory Study,” The Journal of Computer Information Systems (41:4), pp. 21-30.
Rosenbrock, H.H. 1981. “Engineers and the Work That People Do,” IEEE Control Systems Magazine (1:3), pp. 4-8.
Zuboff, S. 1988. In the Age of the Smart Machine. New York NY: Basic Books.
I manage the website for an Animal Rescue shelter. I have been struggling with the design of the site for some time now, as I have some users who are still using IE6 under windows XP (on an SVGA screen), some who want to view the site on their mobile phones, and some who have really wide displays and think my two column design looks outdated (it does). While looking for a solution, I came across the concept of responsive web design. Because the reference I just provided is stuffed with code snippets (and I personally think it is obscure), I will point you instead to some really great examples that demonstrate how a website design can be responsive.
There is a neat concept at play in most of these designs, where a webpage layout is segmented into multi-device layout patterns that simply “flow” differently, depending on the screen size of the device the user views the site on. But screen size is not the only consideration – images have to be resized to scale with the device, and the performance of the device must be considered (it is painful to load a large, graphics-intensive page on a slooow tablet!). I was also musing that – most relevantly to this course – site menus and navigation toolbar interfaces have to be designed so that they will work on any device or layout. Which is harder than you’d think, simply because of the layout conventions that we use on a typical web-page.
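To make the “flow differently by screen size” idea concrete, here is a minimal sketch of breakpoint logic in TypeScript, using the standard window.matchMedia browser API. The breakpoint widths and the layout-* class names are my own illustrative choices, not taken from any of the sources above; in practice most of this would live in CSS media queries, but spelling it out as script makes the decision points explicit.

```typescript
// Hypothetical breakpoints and class names, chosen only for illustration.
type LayoutMode = "single-column" | "two-column" | "wide";

function currentLayout(): LayoutMode {
  // matchMedia evaluates a CSS media query against the current viewport.
  if (window.matchMedia("(max-width: 600px)").matches) return "single-column";
  if (window.matchMedia("(max-width: 1200px)").matches) return "two-column";
  return "wide";
}

function applyLayout(): void {
  // Swap a class on <body>; the stylesheet decides how each mode flows,
  // which image variants are loaded, and how the navigation menu is rendered.
  document.body.className = `layout-${currentLayout()}`;
}

// Re-evaluate on load and whenever the viewport changes (resize or rotation).
window.addEventListener("load", applyLayout);
window.addEventListener("resize", applyLayout);
```

The same pattern extends to serving smaller image variants for narrow viewports, which is one way to address the performance worry mentioned above.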
Off to experiment with scripts and pageflow layouts …
A recent emphasis on sociomateriality appears to have entered the IS literature because of discussions by Orlikowski (2010) and the excellent empirical study by Volkoff et al. (2007). Now that people have been sensitized to the literature on material practice, actor-network theory is classified as “tired and uninformative” [1]. Which leads me to wonder just how many IS academics have actually read the actor-network theorists? Or pondered how this applies to technology design?
Long before people started discussing socio-material “assemblages,” Bruno Latour (1987) and John Law (1987) were discussing how technology develops by means of “heterogeneous networks” of material and human actants, the combination of which directs the trajectory of technology design and form. Latour (1999) suggests that he should recall the term “actor-network,” as it is too easily confused with the world-wide web. Yet actor-networking – in the sense of a web of connectivity made up of heterogeneous interactions between diverse individuals, between virtually-mediated groups, and between individuals and material forms of embedded intentionality – is exactly what is going on in today’s organizations.
In addition, Michel Callon’s (1986) work shows how “problematizing” a situation in ways that align the interests of others leads to their enrolment in a network of support for a specific technological frame. Once support has been enrolled, such networks produce irreversibility, which makes changes to the accepted form of a technology solution incredibly difficult. So we have paradigms that are embedded in a specific design. Akrich coined the term “script” to define the performativity of technology, and the term was adopted by the other leading actor-network theorists [2]. This thread of work articulates, in great depth, the ways in which technology design directs its users (and maintainers) into a set of roles and worldviews that are difficult to escape. We must “de-script” technology to repurpose it for other networks and other applications – which is much more difficult than one would suppose, given the embedded social worlds that are carried across networks of practice with the use of common technologies (Akrich 1992). So what does actor-network theory give us? It provides a conceptual and practical approach to understanding and modeling why design takes specific forms – and what needs to be “undone” for a design to be conceived differently than in the past [3]. It provides a rationale for understanding technology as a network actor in its own right, influencing behavior and constraining discovery. The assumptional frameworks for action embedded in – for example – a software book-pricing application will direct the evaluation of price alternatives in ways that reflect the model of decision-making adopted by the software’s author. This results in the type of stupid automaticity that recently saw an Amazon book priced at $23,698,655.93 (plus $3.99 shipping). The cause of this pricing glitch was traced back to an actor-network of two competing sellers, unknowingly connected via their use of the same automated pricing software [4].
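As a back-of-the-envelope illustration of that pricing glitch (the multipliers and starting price below are hypothetical, only roughly in the spirit of the incident Eisen recounts), the sketch shows how two repricing rules keyed to each other form a tiny actor-network whose combined behavior neither seller intended:

```typescript
// Two automated repricers, each setting its price as a fixed multiple of the
// competitor's current listing. Multipliers and starting price are invented
// for illustration; the point is simply that their product exceeds 1.
let priceA = 35.0; // seller A's listed price (dollars)
let priceB = 35.0; // seller B's listed price (dollars)

for (let cycle = 1; cycle <= 60; cycle++) {
  priceA = priceB * 1.27;  // A lists above B, hoping to profit on the margin
  priceB = priceA * 0.998; // B undercuts A by a sliver to win the sale
}

// Each round trip multiplies both prices by 1.27 * 0.998 ≈ 1.267, so after
// sixty cycles the listings have climbed into the tens of millions of dollars.
console.log(priceA.toFixed(2), priceB.toFixed(2));
```

No individual rule here is irrational; the runaway behavior is a property of the network of rules, which is exactly the kind of material agency that actor-network theory asks us to take seriously.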
Finally, I want to observe that a lot of the recent “materiality of practice” literature has identified new phenomena and new mechanisms of actor-networks. For example, Knorr Cetina (1999) has sensitized us to how epistemology is embedded in socio-technical assemblages, Rheinberger (1997) has demonstrated how some technical objects are associated with emergence while others enforce standardization, and Henderson (1999) demonstrates how the use of specific representations can conscript others around an organizational power-base. But I would argue that these effects can be understood by using Actor-Network Theory as one’s underpinning epistemology – and that exploring actor-network interactions continues to reveal ever newer mechanisms that are relevant to how we work today. I would strongly recommend Bruno Latour’s latest book, Reassembling The Social.
Notes:
[1] I have to declare an interest here – this comment was contained in a review of one of my papers … 🙂
[2] As Latour (1992) argues: “Following Madeleine Akrich’s lead (Akrich 1992), we will speak only in terms of scripts or scenes or scenarios … played by human or nonhuman actants, which may be either figurative or nonfigurative.”
[3] One of my favorite papers on the topic of irreversibility in design is ‘How The Refrigerator Got Its Hum,’ by Ruth Cowan (1995). Another good read is the introduction to the same book by MacKenzie and Wajcman (1999).
[4] The amusing outcome is recounted by Michael Eisen, at http://www.michaeleisen.org/blog/?p=358
References:
Akrich, M. 1992. “The De-Scription of Technical Objects.” W.E. Bijker, J. Law, eds. Shaping Technology/Building Society: Studies in Sociotechnical Change. MIT Press, Cambridge, MA, 205-224.
Callon, M. 1986. “Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay.” J. Law, ed. Power, Action, and Belief: A New Sociology of Knowledge? Sociological Review Monograph 32. Routledge and Kegan Paul, London, 196-233.
Cowan, R.S. 1995. “How the Refrigerator Got its Hum.” D. MacKenzie, J. Wajcman, eds. The Social Shaping of Technology. Open University Press, Buckingham UK, 281-300.
Henderson, K. 1999. On Line and on Paper: Visual Representations, Visual Culture, and Computer Graphics in Design Engineering. MIT Press, Cambridge, MA.
Knorr Cetina, K.D. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Harvard University Press, Cambridge, MA.
Latour, B. 1987. Science in Action. Harvard University Press, Cambridge MA.
Latour, B. 1992. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” W.E. Bijker, J. Law, eds. Shaping Technology/Building Society: Studies in Sociotechnical Change. MIT Press, Cambridge MA.
Latour, B. 1999. “On Recalling ANT.” J. Law, J. Hassard, eds. Actor Network Theory and After. Blackwell, Oxford, UK, 15-25.
Law, J. 1987. “Technology and Heterogeneous Engineering – The Case of Portuguese Expansion.” W.E. Bijker, T.P. Hughes, T.J. Pinch, eds. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. MIT Press, Cambridge MA.
MacKenzie, D.A., J. Wajcman. 1999. “Introductory Essay.” D.A. MacKenzie, J. Wajcman, eds. The Social Shaping of Technology, 2nd ed. Open University Press, Milton Keynes UK, 3-27.
Orlikowski, W. 2010. “The sociomateriality of organisational life: considering technology in management research.” Cambridge Journal of Economics 34(1), 125-141.
Rheinberger, H.-J. 1997. “Experimental Systems and Epistemic Things.” Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube. Stanford University Press, Stanford CA, 24-37.
Volkoff, O., D.M. Strong, M.B. Elmes. 2007. “Technological Embeddedness and Organizational Change.” Organization Science 18(5), 832-848.