
General Systems Theory

© 1993, David S. Walonick, Ph.D.

General systems theory was originally proposed by biologist Ludwig von Bertalanffy in 1928. Since Descartes, the “scientific method” had progressed under two related assumptions. A system could be broken down into its individual components so that each component could be analyzed as an independent entity, and the components could be added in a linear fashion to describe the totality of the system. Von Bertalanffy proposed that both assumptions were wrong. On the contrary, a system is characterized by the interactions of its components and the nonlinearity of those interactions. In 1951, von Bertalanffy extended systems theory to include biological systems and three years later, it was popularized by Lotfi Zadeh, an electrical engineer at Columbia University. (McNeill and Freiberger, p.22)

One common element of all systems is described by Kuhn. Knowing one part of a system enables us to know something about another part. The information content of a “piece of information” is proportional to the amount of information that can be inferred from it (A. Kuhn, 1974).

Systems can be either controlled (cybernetic) or uncontrolled. In controlled systems information is sensed, and changes are effected in response to the information. Kuhn refers to this as the detector, selector, and effector functions of the system. The detector is concerned with the communication of information between systems. The selector is defined by the rules that the system uses to make decisions, and the effector is the means by which transactions are made between systems. Communication and transaction are the only intersystem interactions. Communication is the exchange of information, while transaction involves the exchange of matter-energy. All organizational and social interactions involve communication and/or transaction.
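
Kuhn’s three functions can be made concrete with a thermostat, a minimal controlled system. The short BASIC sketch below (in the same spirit as the program presented later in this paper; the temperature readings and the 68-to-72 degree target range are invented purely for illustration) labels each step with Kuhn’s terms:

DIM TEMP(5)
TEMP(1) = 66: TEMP(2) = 69: TEMP(3) = 74: TEMP(4) = 71: TEMP(5) = 67
FOR I = 1 TO 5
T = TEMP(I)                                       'Detector: sense information from the environment
IF T < 68 THEN ACTION$ = "turn furnace on"        'Selector: apply the decision rules
IF T > 72 THEN ACTION$ = "turn furnace off"
IF T >= 68 AND T <= 72 THEN ACTION$ = "do nothing"
PRINT "Reading ="; T; "  ->  "; ACTION$           'Effector: act back on the environment
NEXT I
END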

Kuhn’s model stresses that the role of decision is to move a system towards equilibrium. Communication and transaction provide the vehicle for a system to achieve equilibrium. “Culture is communicated, learned patterns… and society is a collectivity of people having a common body and process of culture.” (p. 154, 156) A subculture can be defined only relative to the current focus of attention. When society is viewed as a system, culture is seen as a pattern in the system. Social analysis is the study of “communicated, learned patterns common to relatively large groups (of people).” (p. 157)

The study of systems can follow two general approaches. A cross-sectional approach deals with the interaction between two systems, while a developmental approach deals with the changes in a system over time.

There are three general approaches for evaluating subsystems. A holist approach is to examine the system as a complete functioning unit. A reductionist approach looks downward and examines the subsystems within the system. The functionalist approach looks upward from the system to examine the role it plays in the larger system. All three approaches recognize the existence of subsystems operating within a larger system.

Descartes and Locke both believed that words were composed of smaller building blocks. Both thought that one could strip away all terms of ambiguity and be left with the clarity of comprehension. Kuhn argues for clear definitions in science. The criterion that Kuhn (1974) uses to evaluate system terminology is that it provides “analytic usefulness and consistency with other terms”.

Kuhn’s terminology is interlocking and mutually consistent. The following table summarizes his basic system definitions:

element: any identifiable entity

pattern: any relationship of two or more elements

object: a pattern as it exists at a given moment in time

event: a change in a pattern over time

system: any pattern whose elements are related in a sufficiently regular way to justify attention

acting system: a pattern where two or more elements interact

component: any interacting element in an acting system

interaction: a situation where a change in one component induces a change in another component

mutual interaction: a situation where a change in one component induces a change in another component, which then induces a change in the original component

pattern system: a pattern where two or more elements are interdependent

interdependent: a situation where a change in an element induces a change in another element

Systems can be identified by their structure. A real system is any system of matter and/or energy. An abstract or analytic system is a pattern system whose elements consist of signs and/or concepts. Unlike the real system, which can only exchange information, abstract systems are information. A nonsystem is one or more elements that show no pattern of change. Since change is measured relative to a reference, something can be viewed as both a system and a nonsystem depending on the researcher’s purpose.

A system variable is any element in an acting system that can take on at least two different states. Some system variables are dichotomous, and can be one of two values–the rat lives, or the rat dies. System variables can also be continuous. The condition of a variable in a system is known as the system state. The boundaries of a system are defined by the set of its interacting components. Kuhn recognizes that it is the investigator, not nature, that bounds the particular system being investigated. (A. Kuhn, 1974)

A controlled (cybernetic) system maintains at least one system variable within some specified range, or if the variable goes outside the range, the system moves to bring the variable back into the range. This control is internal to the system. The field of cybernetics is the discipline of maintaining order in systems.

A system’s input is defined as the movement of information or matter-energy from the environment into the system. Output is the movement of information or matter-energy from the system to the environment. Both input and output involve crossing the boundaries that define the system.

When all forces in a system are balanced to the point where no change is occurring, the system is said to be in a state of static equilibrium. Dynamic (steady state) equilibrium exists when the system components are in a state of change, but at least one variable stays within a specified range. Homeostasis is the condition of dynamic equilibrium between at least two system variables. Kuhn (1974) states that all systems tend toward equilibrium, and that a prerequisite for the continuance of a system is its ability to maintain a steady state or steadily oscillating state.

Negative equilibrating feedback operates within a system to restore a variable to an initial value. It is also known as deviation-correcting feedback. Positive equilibrating feedback operates within a system to drive a variable further from its initial value. It is also known as deviation-amplifying feedback. Equilibrium in a system can be achieved either through negative or positive feedback. In negative feedback, the system operates to maintain its present state. In positive feedback, equilibrium is achieved when the variable being amplified reaches a maximum asymptotic limit. Systems operate through differentiation and coordination among their components. “Characteristic of organization, whether of a living organism or a society, are notions like those of wholeness, growth, differentiation, hierarchical order, dominance, control, and competition.” (von Bertalanffy, 1968)
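
The two feedback modes can be sketched numerically. In the following BASIC fragment (all of the constants are arbitrary illustrations), a deviation-corrected variable is pushed back toward its initial value of 100, while a deviation-amplified variable grows until it approaches an asymptotic ceiling of 1000:

V1 = 130                                  'negative feedback: variable displaced from its initial value of 100
V2 = 1                                    'positive feedback: variable starting near zero
FOR N = 1 TO 20
V1 = V1 - .5 * (V1 - 100)                 'deviation-correcting: remove half of the deviation each step
V2 = V2 + .5 * V2 * (1 - V2 / 1000)       'deviation-amplifying: grow, but saturate at the ceiling
PRINT N, V1, V2
NEXT N
END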

A closed system is one where interactions occur only among the system components and not with the environment. An open system is one that receives input from the environment and/or releases output to the environment. The basic characteristic of an open system is the dynamic interaction of its components, while the basis of a cybernetic model is the feedback cycle. Open systems can tend toward higher levels of organization (negative entropy), while closed systems can only maintain or decrease in organization.

A system parameter is any trait of a system that is relevant to an investigation, but that does not change during the duration of study. An environmental parameter is any trait of a system’s environment that is relevant to an investigation, but that does not change during the duration of study.

Systems theory provides an internally consistent framework for classifying and evaluating the world. There are clearly many useful definitions and concepts in systems theory. In many situations it provides a scholarly method of evaluating a situation. An even more important characteristic, however, is that it provides a universal approach to all sciences. As von Bertalanffy (1968, p. 33) points out, “there are many instances where identical principles were discovered several times because the workers in one field were unaware that the theoretical structure required was already well developed in some other field. General systems theory will go a long way towards avoiding such unnecessary duplication of labor.”

Organizational development makes extensive use of general systems theory. Originally, organizational theory stressed the technical requirements of the work activities going on in the organizations. In the 1970’s, the rise of systems theory forced scientists to view organizations as open systems that interacted with their environment. Although there is now a consensus on the importance of the environment, there is still much disagreement about which features of the environment are most important.

Meyer and Scott (1983) identified three dominant models for analyzing the relationship between organizations and the environment. The organization-set model (often called resource-dependency theory) focuses on the resource needs and dependencies of an organization. The organizational population model looks at the collection of organizations that make similar demands from the environment and it stresses the competition created by limited environmental resources. The interorganizational field model looks at the relations of organizations to other organizations, usually within a localized geographic area.

Five major themes of organizational change were examined by Goodman (1982):

1) Intervention methods represent alternative approaches to organizational change at the individual, group, and organizational levels. Most studies attempt to ascertain the effectiveness of these approaches by using survey feedback. Some utilize long-term longitudinal approaches to examine the impact of intervention methods. The cataloging of intervention methods is still the dominant way of thinking about planned change.

2) Large-scale multiple system intervention methods have been gaining in popularity since the late seventies. The interest in the quality of working life (QWL) is primarily responsible for this popularity. This approach places strong emphasis on designing innovative techniques that serve as a catalyst for change. Its most important application is that it stresses the relationships between the individual, company, community, state, national, and international systems.

3) Assessment of change is a major theme that has emerged as a result of the large-scale multiple system intervention methods. These include models of assessment, instruments for measuring organizational change, the development of time-series models, and an overall increase in the use of multivariate analysis for the testing and evaluation of change.

4) The examination of failures provides us with valuable information about organizational change. It forces us to focus on the theoretical constructs of change. By comparing successful and unsuccessful attempts at implementing change, we can evaluate the effectiveness of various techniques.

5) The level of theorizing about organizational change has seen significant improvements in recent years. Of particular importance is broad-systems orientation. These theories propose a model of organizational change that examines inputs, transformational processes, and outputs. Inputs refer to the environmental resources. Transformation refers to the tasks, and the formal and informal system (organizational) components. Outputs include changes in both the individual and organization. The advantage of this approach is that it forces us to look at the broad spectrum of variables that need to be incorporated into the model.

Organizational and social systems must change in order to remain healthy. Both are open systems, and are sensitive to environmental changes. A change in the environment can have a profound impact on an open system. The overall health of an organization is strongly linked with its ability to anticipate and adapt to environmental change. Furthermore, the health of the environment is related to the matter-energy transactions taking place in the social and organizational systems. A bilateral relationship exists between the environment and the components of all subsystems operating within the environment.

Planned organizational or social change is an attempt to solve a problem or to catalyze a vision. A change is introduced into an organization or social system with the specific intent of affecting other system variables. Knowledge of the nonlinear relationships between variables gives planners the potential to effect large changes in a desired variable with relatively small changes in another. Systems theory forces planners to broaden their perspective, and to consider how their decisions will affect the other components of the system and the environment.

Chaos Theory

Chaos is the science of the global nature of systems. In a 1980 lecture, cosmologist Stephen Hawking pointed out that we already know the physical laws that govern our everyday experience. (Gleick, 1987) That being the case, we must now extend systems theory to include the phenomena that lie outside of our normal perceptual limits of experience.

Traditional predictive mathematical models have incorporated error into the model to explain seemingly random fluctuations. Chaos theory is an attempt to explain and model the seemingly random components of a system. It recognizes that systems are sensitive to initial conditions, so that seemingly small changes can produce large changes in the system.

Meteorologist Edward Lorenz (1963) used a computer to simulate weather patterns in 1960. While inputting initial starting conditions to the computer, he inadvertently rounded one of the numbers to three, instead of six, decimal places. The small difference (.506 instead of .506127) produced rapidly divergent simulations of weather patterns. Small differences in initial conditions produced widely different results. Some meteorologists believed that Lorenz’s discovery meant that weather control was just around the corner. Small nonnatural changes could be used to manipulate large weather patterns. Lorenz, however, believed that this was the reason for the failure of long-term forecasts. Any uncontrolled system variable could thwart efforts to control the overall state of the system.
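
The same behavior can be reproduced with the logistic difference equation discussed later in this paper. The short BASIC sketch below (the growth factor of 3.9 and the two starting values are chosen only for illustration) iterates the equation from .506 and from .506127 side by side; within a few dozen iterations the rounded and unrounded starting values produce completely different trajectories:

R = 3.9                                   'an arbitrary growth factor in the chaotic regime
X1 = .506                                 'rounded starting value
X2 = .506127                              'unrounded starting value
FOR N = 1 TO 50
X1 = R * X1 * (1 - X1)
X2 = R * X2 * (1 - X2)
PRINT N, X1, X2, ABS(X1 - X2)             'the difference grows until the trajectories are unrelated
NEXT N
END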

Chaotic systems depend on the nonlinear nature of their components. Differential equations are used to describe the changes in a system over time. Chaotic systems can have both stable and unstable components.

Mathematician Stephen Smale (1980) began his work with dynamic systems in the mid 1950’s. Originally Smale proposed that stable chaos could not exist, but soon changed his theory. He proposed the concept of phase space, where if the system changed, a trajectory could be drawn on paper to represent the changing state of the system. Phase space contains the complete knowledge of a system. Each point in phase space represents the state of a dynamic system at an instant in time.

The problem with phase space is that it requires a dimension for each variable being studied. Modern computer graphing techniques do a good job of graphing three variables and representing them in two dimensions (on paper). However, even our greatest minds have difficulty conceptualizing phase space for four or five dimensions. It is interesting to note that computers have no difficulty performing calculations for any number of dimensions. The problem is in our ability to visually represent this space, not our ability to compute its characteristics.

Three dimensional plots of chaotic behavior can be very complex and difficult to interpret. The Poincaré map was developed as a way of understanding three dimensional systems by taking a series of two dimensional “slices” relative to a line through the origin (Gleick, 1987, p. 143). The slices are overlaid on top of each other to create the final map. Distinct patterns can emerge by combining the Poincaré sections.
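
As a rough illustration, the BASIC sketch below integrates the three Lorenz weather-model equations with small Euler steps and prints a point each time the trajectory crosses the plane z = 27 from below. The equations and parameter values are the standard Lorenz system; the step size, run length, starting point, and choice of slicing plane are arbitrary. Plotting the printed (x, y) pairs would give one Poincaré section of the attractor.

X = 1: Y = 1: Z = 20                      'arbitrary starting point
DT = .002                                 'Euler integration step
FOR N = 1 TO 200000
ZOLD = Z
DX = 10 * (Y - X)                         'the standard Lorenz equations
DY = X * (28 - Z) - Y
DZ = X * Y - (8 / 3) * Z
X = X + DX * DT
Y = Y + DY * DT
Z = Z + DZ * DT
IF ZOLD < 27 AND Z >= 27 THEN PRINT X, Y  'one point of the two-dimensional slice
NEXT N
END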

The Poincaré map is a dimensional compression technique whereby three dimensions are displayed in two dimensional space. Unlike a photograph, which implies the third dimension through perspective, the Poincaré map involves the third dimension in its creation. It is interesting to speculate on the nature of the patterns revealed by Poincaré maps. The map itself is created by using a line drawn through the origin as a reference for defining the y-axis of the map. Different maps are produced for each of the infinite selections of lines through the origin. Patterns appear and disappear depending on the selection of the reference line. One interpretation might be that our concept of “order” is incorrect. We generally conceive of “order” as an absolute (i.e., the quest for the “true” nature of things). Poincaré maps imply that order is not an absolute, but rather, something that can only be understood relative to an observer. An observer using one reference line might see order, while another observer using a different reference line might see chaos, or a completely different pattern. In other words, the nature of a system is a matter of perception and/or beliefs.

At the same time that Lorenz was experimenting with weather forecasting models, ecologists were beginning to model population growth using a logistic difference equation. For many initial starting parameters, the equation shows the traditional growth model–a population grows, exceeds its optimal steady state level, and then experiences oscillations of diminishing magnitude as the system approaches equilibrium. Some starting values, however, produce oscillations that do not diminish over time. At first, scientists did not recognize the stable chaos they were observing. They assumed that the fluctuations were just oscillations around an equilibrium. The equilibrium was the important point. Mathematician James Yorke believes that physicists had “learned not to see chaos…through the process of learning to solve differential equations, most scientists have lost sight of the fact that most differential equations cannot be solved”. (Gleick, 1987, p.67)

Biologist Robert May (1976) at Princeton studied the simple logistic difference equation. He noticed that as the growth factor increased beyond the value of three, equilibrium would never be reached: the population first settles into repeating cycles of period two, four, eight, and so on, and as the growth factor increases further the system enters a chaotic state. Furthermore, if a system displays a regular cycle of three, then the system will also display regular cycles of all other lengths.

The author wrote the following simple BASIC program to verify May’s findings:

INPUT "Enter the growth factor: "; R
INPUT "Enter the initial population size: "; X
AGAIN:
XNEXT = R * X * (1 - X) 'Logistic difference equation
PRINT XNEXT
ITERATIONS = ITERATIONS + 1
IF ABS(XNEXT - X) < .0000001 THEN GOTO FINISH 'Stop when successive values converge
X = XNEXT
GOTO AGAIN
FINISH:
PRINT "Iterations to achieve stability = "; ITERATIONS
END

Running this program clearly demonstrates May’s findings. When a value less than three is entered for the growth factor, the program achieves convergence. However, when a value of three or more is entered, the program never achieves stability. The computed value either alternates among two or more values in a repeating cycle or, for larger growth factors, varies with apparent randomness (stable chaos).

While examining line noise in IBM communication systems, Benoit Mandelbrot (1977) discovered that the apparently random noise bursts were actually following a regular pattern (the Cantor mathematical set). By examining the noise using various time periods, Mandelbrot was able to model the noise. German mathematician Georg Cantor (1845-1918) had discovered these sets nearly one hundred years before, while demonstrating that there are many different infinities. Cantor demonstrated a one-to-one correspondence between the space defined by a cube and the space of the universe. Both contained an infinite number of points (McNeill and Freiberger, 1993).

Mandelbrot also hypothesized the Noah and Joseph Effects. The Noah Effect states that change happens in discrete jumps. The Joseph Effect states that some things tend to persist. These two effects push the world in different directions (Gleick, 1987, p. 92-94).

Mandelbrot has pointed out that chaos theory models a rough, pitted world. Mountains are not seen as cones and lightning doesn’t travel in a straight line. In Mandelbrot’s most famous experiment, he asks the question “how long is a coastline?”. Common sense would dictate that the distance is a definite number; however, it turns out that it depends on the observer’s measuring technique. As the observer uses a smaller and smaller measurement tool, the estimate of the coastline becomes increasingly large. In fact, Mandelbrot argues that the actual length is infinite (at least until the measuring tool is at the atomic level). Furthermore, Mandelbrot proposed that the concept of dimension itself can only be stated relative to an observer. He proposed the word fractal as a way of visualizing infinity on the dimension of roughness. Fractal implies a quality of self-similarity.
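
A worked example makes the coastline argument concrete. Suppose the coastline is modeled as a Koch curve, a standard self-similar fractal: every time the ruler is shrunk to one-third of its previous length, each straight segment turns out to contain four smaller segments, so the measured length grows by a factor of 4/3. The BASIC sketch below (the 100-unit starting length and the twelve refinements are arbitrary) prints the measured length at each ruler size; the length grows without bound as the ruler shrinks, which is exactly Mandelbrot’s point:

RULER = 100                               'length of the measuring stick
TOTAL = 100                               'measured length of the coastline
FOR N = 0 TO 12
PRINT "Ruler size ="; RULER; "  measured length ="; TOTAL
RULER = RULER / 3                         'use a measuring stick one-third as long
TOTAL = TOTAL * 4 / 3                     'each segment now resolves into four smaller ones
NEXT N
END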

Columbia University geologist Christopher Scholz (1982) began to apply Mandelbrot’s findings to the study of earthquakes. Fractal geometry provided a new way of viewing the fissures and bumpiness of the Earth’s surface. At the same time, biologists began to realize that fractal type geometry was operating throughout the body. Some argue that fractal scaling is universal to morphogenesis.

Turbulence has been a problem in the application of fluid dynamics. Sometimes turbulence is desirable. For example, a jet engine depends on the turbulence of burning fuel for its propulsion. Other times, turbulence can have disastrous effects, such as the loss of lift created by turbulent air-flow over the wing of an airplane. Turbulence is chaos on all scales. It is dissipative (i.e., it drains energy) and unstable.

Closer examination of turbulence, however, reveals that energy is not dissipated evenly throughout the system. Areas of calm remain regardless of the observer’s scale. While studying turbulence, physicist David Ruelle (1971, 1980) coined the term strange attractor to describe the tendency of systems to move toward a fixed point, or to oscillate in a limited repeating cycle. A pendulum is a good example of a fixed point attractor. It moves closer to its steady state over time, as it gives up energy to air friction. Strange attractors imply that nature is constrained. The shape of chaos unfolds relative to the properties of the attractor. An interesting property of the strange attractor is that initial conditions make little difference. As long as the starting points lie somewhere near the attractor, the system will rapidly converge upon the strange attractor. (Gleick, 1987)
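
The pendulum example can be reduced to a few lines. In the BASIC fragment below (the damping constant, step size, and the two starting angles are arbitrary), a damped pendulum is started from two quite different initial conditions; both runs spiral in toward the same fixed point of zero angle and zero velocity, which is the attractor:

FOR K = 1 TO 2
IF K = 1 THEN A = 2 ELSE A = -1           'two different starting angles (radians)
V = 0                                     'starting angular velocity
DT = .05
FOR N = 1 TO 800
ACC = -SIN(A) - .3 * V                    'restoring force plus damping (arbitrary constants)
V = V + ACC * DT
A = A + V * DT
NEXT N
PRINT "Run"; K; ": final angle ="; A; "  final velocity ="; V
NEXT K
END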

Cornell physicist Mitchell Feigenbaum (1978, 1979, 1981) examined simple nonlinear systems and described how these systems could often exist in two stable states. Intransitive systems have two stable states. After one of the states is achieved, the system will remain in that state until given a “kick” from the environment. A pendulum clock is an example: it has two steady states–the swinging state and the at-rest state. In the swinging state, energy is continually added to the system through the wind-up springs, and the clock keeps ticking. If, however, we momentarily stop the pendulum from swinging, it will continue to remain at rest when we release it. In the almost intransitive system, the system can change stable states without a push from the environment. At the present time, there are no explanations for almost intransitive systems. The study of fractal basin boundaries is an attempt to understand why a system chooses one steady state over another.

One of the most important discoveries from chaos theory is that a relatively small, but well-timed or well-placed jolt to a system can throw the entire system into a state of chaos. One group of scientists (Guevara, Glass, and Schrier, 1981) has experimented with cardiac fibrillation, showing that the heart displays the same chaotic characteristics as other nonlinear systems. Some physiologists are now looking at diseases as breakdowns in the normal oscillator cycles of the body. Physicist James Lovelock (1979) proposed the Gaia hypothesis, where life itself creates the conditions for life, and is maintained by a self-sustaining process of dynamic feedback. Von Bertalanffy (1968) believes that life can exist only in an open system, and that feedback is the mechanism that provides an explanation for a wide range of physiological and biological processes. Erwin Schrödinger, one of the major pioneers of quantum physics, believed that life operates as an aperiodic crystal (different than the periodic crystals of the elements). Physicist Joseph Ford said that “evolution is chaos with feedback.” (Gleick, 1987, p. 314)

Seventeenth-century Dutch physicist Christiaan Huygens was the first to discover entrainment or mode-locking (Gleick, 1987, p. 292-293). He noticed that several pendulum clocks in his laboratory were all operating in unison. Knowing that the timing of the clocks could not be that precise, he correctly hypothesized that the clocks became synchronized with each other through minute vibrations in the building. Examples of frequency locking abound in both the physical and biological sciences. Planetary systems, electronics, and the human body all show examples of entrainment.

Simple systems can behave in complex ways; complex behavior does not necessarily imply complex causes, and very different systems can behave in remarkably similar ways. In Thriving on Chaos (Harper Perennial, 1987), Tom Peters’ main hypothesis is that all institutions are operating in a chaotic environment, and that “no firm can take anything in its market for granted.” (p.13) Because of the interactions of many economic forces and the rapidity of change, institutions must constantly reassess their vision and adapt to abrupt changes in the environment.

Organizations and social systems operating within a chaotic environment are being continually challenged to maintain their purpose and structure. The paradox, however, is that larger and more established structures are usually less able to change. The inertia resulting from their size (e.g., number of people) makes it difficult to introduce planned organizational or social change. Large institutions generally encompass well-established patterns. The stability of these structures makes them less able to adapt to environmental and internal system changes. All other things being equal, small structures can adapt to change more efficiently than larger ones.

Chaos theory is beginning to teach us much about the nature of change in our organizations and social institutions. Nonlinear relationships among system components are a pathway to the introduction of institutional change. The challenge comes in the discovery of those relationships and the understanding of the dynamics of these systems. The planning of change involves the application of this knowledge.

Fuzzy Logic

At the heart of fuzzy logic is the question of how we categorize things. Cantor (1845-1918) examined the way that we categorize things into sets. He called the entire set the universe of discourse. Of course, the definition of the universe depends on what is being studied–its definition is relative. For example, if we study a dog, the universe of discourse might be all dogs, all mammals, or all living creatures. The important point is that the universe contains variability. The complement of a set is all that does not belong to the set. Cantor’s studies of the relationships of sets led to precise definitions for intersections and unions. The problem with Cantor’s set theory had to do with the difficulty in defining the boundaries of a set. These boundaries were often vague, lacking in precision.

American philosopher Charles Peirce (1934, 1935) disagreed with Cantor’s method of classifying everything as either in the set or not in the set. He believed that all things existed on a continuum. Whether an object belonged to a set or not depended on where it fell on the continuum. At some points on the continuum, it is clearly part of the set. At other points, a vagueness exists making it difficult to determine membership. Bertrand Russell (1945) proposed that this vagueness was a function of language, not reality.

In 1920, Polish mathematician Jan Lukasiewicz proposed the idea that the simple dichotomy of true or false might also contain a third logical value of possible. Once that assumption was made, Lukasiewicz (1970) asserted that any number of middle values were equally possible. Instead of simply true or false, a numerical value could be used to represent the degree of truthfulness.

Cornell mathematician Max Black (1937) proposed that vagueness is a matter of probability based on the distribution of human belief. If, for example, 60% of the population believe that something is true, then it is true to a .6 extent. The degree of truthfulness is .6. Berkeley electrical engineer Lotfi Zadeh (1969) built on Black’s work and proposed the idea that set membership could be graded. Some items could belong completely to a set, while others could be expressed as a partial membership. The key to “fuzzy” membership is that judgment and context are used to assign values to membership. Zadeh points out that people have a remarkable ability to quantify set membership. People can easily assign a number between zero and one to represent the truthfulness of a statement. In spite of this, some logicians do not believe in the concept of a partial truth. They state that “truth” is an absolute, without the degrees implied by fuzzy logic.
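
A small example shows what graded membership looks like in practice. The BASIC sketch below defines an invented membership function for the fuzzy set “tall” (fully outside the set below 60 inches, fully inside above 76 inches, and graded linearly in between; both breakpoints and the heights tested are arbitrary) and prints the degree of membership for several heights:

DIM H(5)
H(1) = 58: H(2) = 64: H(3) = 68: H(4) = 72: H(5) = 78
FOR I = 1 TO 5
M = (H(I) - 60) / 16                      'linear grade between the two breakpoints
IF M < 0 THEN M = 0                       'completely outside the set
IF M > 1 THEN M = 1                       'completely inside the set
PRINT "Height ="; H(I); "  membership in TALL ="; M
NEXT I
END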

One counter-intuitive assertion proposed by Zadeh is that “as complexity rises, precise statements lose meaning and meaningful statements lose precision”. (McNeill and Freiberger, 1993, p. 43) The so-called “Law of Incompatibility” places limits on our ability to perform analysis of complex systems.

Zadeh was the recipient of much criticism over his fuzzy logic theories. The most prominent argument was that set membership was subjective. There was no way to objectively determine membership values, and therefore, fuzzy logic could not be counted on to yield accurate results. Others argued that fuzzy logic was a manifestation of unprecedented permissiveness in society. Rudolf Kalman, a former student of Zadeh’s, argued that things appeared fuzzy only until we understood them. William Kahan pointed out that fuzzification leads one to entertain illogical thoughts that are not verifiable through logic. He called it the “cocaine of science”. (McNeill and Freiberger, 1993, p. 46-48)

One of the problems with accepting fuzzy logic lies in its name. The word “fuzzy” implies a negative uncertainty that is mutually exclusive with the word logic. Fuzzy became equated with sloppy, and American industry ignored it. Since the early Greek cultures, we have assumed that things fall into dichotomous classes. Aristotle believed math provided the ultimate approach to logic. The classic problem with dichotomous thinking is evoked by the question “How many grains of sand constitute a heap?” If we keep adding more and more grains of sand to create a pile, at some point we’ll call it a heap. If we remove a grain, is it still a heap? The boundaries of the word “heap” are fuzzy (i.e., not well defined).

Aristotle originally proposed the first rules of logic. The Law of Contradiction states that A cannot be both B and not-B. The same thing cannot belong to a set and the complement of the set–opposites do not overlap. The Law of Bivalence states that A must be either B or not-B. In other words, something must be either true or not true. Both laws were accepted and became the foundation of logic for the next two thousand years. The great philosophers such as Descartes and Locke embraced the idea that every proposition was either true or false. This paradigm fostered our current way of thinking.

Fuzzification and probability are very similar to each other; however, probabilities change with increasing information, whereas fuzziness remains the same. Fuzzification deals in degrees of truth, whereas probability has to do with the likelihood of something.

The problem with the acceptance of fuzzy logic is that it feels natural for us to round things off (into categories). Rounding creates clear delineations between classes. It enables us to categorize things more easily.

Berkeley linguist George Lakoff (1987) worked with Zadeh to describe hedges, the words that we use to modify (fuzzy) sets. The terms fall into various categories, such as:

All purpose modifiers (very, quite, extremely)

Truth-values (quite true, mostly false)

Probabilities (likely, not very likely)

Quantifiers (most, several, few)

Possibilities (nearly impossible, quite possible)

Some words (e.g., more or less) perform dilation and expand the set. Others (e.g., very) perform concentration and narrow the set. Hedges are vague, since they have no exact definition, but they do reflect human thought. Multiple experiments (Simpson, 1944; Hakel, 1968; Hoyt, 1972) have confirmed that people order these words in the same way. Words like always, very often, almost never, and never have a shared (but not exact) meaning.
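
The two operations are usually given simple arithmetic definitions attributed to Zadeh: concentration (“very”) squares each membership grade, and dilation (“more or less”) takes its square root. The BASIC sketch below applies both to a handful of membership grades (the grades themselves are arbitrary examples):

DIM M(4)
M(1) = .1: M(2) = .5: M(3) = .8: M(4) = 1
FOR I = 1 TO 4
PRINT "grade ="; M(I); "  very ="; M(I) ^ 2; "  more or less ="; SQR(M(I))
NEXT I
END

Concentration pushes every partial grade closer to zero, narrowing the set, while dilation pulls partial grades closer to one, expanding it.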

Psychologist Eleanor Rosch (1975) also examined how words were related to the fuzzy logic concepts. She found that certain words (prototypes) were better examples of a class than other words, and that the ranking of these words matched our intuitive understanding. Her research led to a three-tier classification of categories as superordinate (abstract categories), basic (concrete images), and subordinate (subcategories). Rosch has proposed that classes exist “to provide maximum information with the least cognitive effort”. (McNeill and Freiberger, 1993, p.89)

Summary

The Whorfian hypothesis states that linguistic patterns determine how an individual perceives and thinks about the world. This relativistic view is consistent with general systems theory. Our culture and experience define our understanding of all systems. The fact that systems theory recognizes the relativity of perception may, in itself, serve to expand our understanding of our role in the universe. It provides a framework for us to examine and understand our environment.

A systems approach provides a common method for the study of societal and organizational patterns. It offers a well-defined vocabulary to maximize communication across disciplines. Rather than being an end in itself, systems theory is a way of looking at things. It is an internally consistent method of scholarly inquiry that can be applied to all areas of social science.

Thomas Kuhn, in The Structure of Scientific Revolutions (1970), questioned the classic view of scientific knowledge. He challenged the historical notion that scientific truths were accumulated gradually over time. Kuhn maintained that knowledge increases to the limits of the current paradigm, and then gets replaced by a new paradigm. The paradigm shift that occurs reshapes scientific thinking until replaced by another new paradigm. Examples of paradigm shifts are abundant in history, but the most prominent feature is the enormous resistance of the scientific community to entertaining new ideas. Scientists who have proposed new paradigms have been the subject of intense professional criticism. Physicist Max Planck summed up the scientific community’s resistance when he said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it” (McNeill and Freiberger, 1993, p. 60). Systems theory is the emerging paradigm.

References

Black, M. 1937. “Vagueness: An exercise in logical analysis.” Philosophy of Science 4:427-455.

Feigenbaum, M. 1978. “Quantitative universality for a class of nonlinear transformations.” Journal of Statistical Physics 19:25-52.

Feigenbaum, M. 1979. “The universal metric properties of nonlinear transformations.” Journal of Statistical Physics 21:669-706.

Feigenbaum, M. 1981. “Universal behavior in nonlinear systems.” Los Alamos Science 1:4-27.

Guevara, M., L. Glass, and A. Schrier. 1981. “Phase locking, period-doubling bifurcations, and irregular dynamics in periodically stimulated cardiac cells.” Science 214:1350.

Gleick, J. 1987. Chaos: Making a New Science. New York: Viking Penguin.

Goodman, P., et al. 1982. Change in Organizations. San Francisco: Jossey-Bass.

Hakel, M. 1968. “How often is often.” American Psychologist 23:533-534.

Hoyt, J. 1972. Do Quantifying Adjectives Mean the Same Thing to All People? Minneapolis, MN: University of Minnesota Agricultural Extension Service.

Kuhn, A. 1974. The Logic of Social Systems. San Francisco: Jossey-Bass.

Kuhn, T. 1970. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lakoff, G. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago, University of Chicago Press.

Lorenz, E. 1966. “Large-scale motions of the atmosphere: Circulation.” In Advances in Earth Sciences. ed. P. Hurley. Cambridge, MA: The M.I.T. Press.

Lovelock, J. 1979. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press.

Lukasiewicz, J., ed. L. Borkowski. 1970. Selected Works. London: North Holland.

Mandelbrot, B. 1977. The Fractal Geometry of Nature. New York: Freeman.

May, R. 1976. “Simple mathematical models with very complicated dynamics.” Nature 261:459-467.

McNeill, D., and P. Freiberger 1993. Fuzzy Logic. New York: Simon & Schuster.

Meyer, J., and W. Scott 1983. Organizational Environments: Ritual and Rationality. Beverly Hills: Sage.

Peters, T. 1987. Thriving on Chaos. New York: Harper Perennial.

Peirce, C., eds. C. Hartshorne and P. Weiss 1934, 1935. Collected Papers of Charles Sanders Peirce. Vol. 5 and 6. Cambridge, MA: Harvard University Press.

Rosch, E., ed. T. Moore. 1973. “On the internal structure of perceptual and semantic categories.” Cognitive Development and the Acquisition of Language. New York: Academic Press. p. 27-48.

Ruelle, D. 1980. “Strange attractors.” Mathematical Intelligencer 2:126-137.

Ruelle, D., and F. Takens 1971. “On the nature of turbulence.” Communications in Mathematical Physics 20:167-192.

Russell, B. 1945. A History of Western Philosophy. New York: Simon and Schuster.

Scholz C. 1982. “Scaling laws for large earthquakes.” Bulletin of the Seismological Society of America 72:1-14.

Simpson, R. 1944. “The specific meaning of certain terms indicating differing degrees of frequency.” The Quarterly Journal of Speech 30:328-330.

Smale, S. 1980. “How I got started in dynamical systems.” In The Mathematics of Time: Essays on Dynamical Systems, Economic Processes, and Related Topics. Smale, Yorke, Guckenheimer, Abraham, May, Feigenbaum. New York: Springer-Verlag. p. 147-151.

Stewart, I. 1989. Does God Play Dice? Cambridge, MA: Blackwell.

von Bertalanffy, L. 1968. General System Theory: Foundations, Developments, Applications. New York: Braziller.

Zadeh, L. 1969. “Biological application of the theory of fuzzy sets and systems.” The Proceedings of an International Symposium on Biocybernetics of the Central Nervous System Boston: Little Brown. p. 199-206.