Persistence, Parallelism, and RISC: What Smart, Enterprising
People and Organizations Can Learn from the Architecture of Dumb Machines
Owen Ambur, University of Maryland University College, July 8, 1998
Let's face it. As Gilder (1989) has said, "Computers are dumb." Not only
are they not God, they are neither smart nor brave nor daring. Literally,
all they can do is manipulate ones and zeros. Barring electrical or mechanical
failure, they do exactly as they're told. That's it. That's all they do.
And that certainly doesn't seem like much. However, they can do it very,
very fast and that is their essential genius. For better or worse, it is
often observed that life tends to imitate art. In that vein, smart, enterprising
organizations would do well to learn from the architecture of those "dumb"
computing machines. Indeed, such lessons are among those that enterprises
whose livelihood depends upon the quality, quantity, and availability of
information dare not fail to learn.
In positing a framework for enterprise architecture, Zachman (1996)
highlighted the problem and tradeoff involved in designing and building
complex systems:
... there are simply too many details and relationships to consider
simultaneously. However, at the same time, isolating single variables and
making decisions out of context results in sub-optimization with
all its attendant costs and dissipation of energy.(1)
To avoid sub-optimization and needless dissipation of energy in computer
supported systems, Zachman (1998) asserts:
At the point in time when the Enterprise recognizes that the computer
is not simply a productivity enhancement tool but the "system" IS the Enterprise,
the "Owner," "Designer" and "Builder" will have to sit together, have a
meeting of the minds and decide what the Enterprise is and is to be, how
to "architect" it in that context and then how to transform that architecture
into reality ...
It is somewhat ironic that "sitting together" and having a "meeting of
the minds" might be considered to be requirements for designing an information
architecture and building a complex system to support daring action. Traditional
group-think processes are a prescription for mindless compromise and inaction.(2)
While acknowledging that individual genius is not enough, particularly
when paradigm shifts are required, Schrage (1995) argues that meetings
are not necessarily a requirement for collaboration (p. 40) and that collaboration
can be based upon the "patterns and symbols people create." (p. 34)(3)
Indeed, the reality is that people collaborate only to the degree
that they effectively share patterns and symbols that are meaningful to
each other. Whether they meet or not is irrelevant, except to the degree
that non-verbal, emotional cues are helpful in establishing shared understanding.(4)
Nor for the sake of collaboration per se is it necessary that people agree
or even that they be agreeable in the sense of being polite.(5)
In arguing for a new, process-centric approach, Hammer (1996) notes:
... conflicts ... have always existed in business organizations, but
were hidden by the traditional organization chart's false simplicity. By
unmasking them the process-centered organization compels recognition of
the fact that concern with both process and people, customers and costs,
and short- and long-term consequences inevitably provokes controversy among
well-intentioned people. (p. 137)
In fact, by definition, disagreement or at least lack of agreement
is a prerequisite for the establishment of shared new meaning. Moreover,
the quest for knowledge is unending. As Jonathan Miller says, "All work
in art and science is the extending of unfinished business." (In Schrage,
p. 42) The fourth of Deming's famous Fourteen Points for Total Quality
Management (TQM) is that change must be continuous and all-encompassing.
(Gabor, 1990) As is often said, "TQM is a journey not a destination"
and it is characterized by continuous improvement.
What is required for continuous improvement and for efficient and effective
enterprise action is coalescence around explicit understandings
- not around computer screens or water coolers, much less meeting
tables or battle fields. In the past, it may have been accurate to say
that "only the strong survive." Strength sprang from numbers and it was
vital to mass the troops on the field of battle. However, clearly, in the
cyber age strength depends not upon brute force but upon ability to share
information quickly, efficiently, and accurately. Moreover, in a highly
competitive environment, survival depends upon initiative, if not daring.(6)
What is needed for daring action is shared insight, if
not necessarily genius, together with the fortitude to step out front of
the pack before wider consensus arrives in the marketplace. Such
insight must exist within the enterprise - regardless of how "the
enterprise" is defined - and it must be widely and readily shared. Concerning
the ease, breadth, and decentralized nature with which information must
be shared, Hough (1998) observes:
The need for interoperability is growing exponentially at the enterprise
level. Until now, the main emphasis has been only for specific instances
of need - mostly at the departmental level... But distributed computing
is now upon us!
Zachman (1998) asserts, "Architecture IS a survival issue because of complexity
and the high rates of change and [that] many Enterprises are failing for
lack of it." With reference to information sharing, he suggests:
No longer is all the information (and therefore all the power) concentrated
in a very few people at the very top of an Enterprise. Now everybody has
access to the same information at the same time. A "Powershift" has taken
place. The power shifts outboard within the Enterprise ... even beyond
the Enterprise as the customers have access to the same information as
the Enterprise.
If only that were really true, there is no telling what wondrous advances
in efficiency, effectiveness, and discovery might occur. Of course, to
an ever increasing degree it is becoming true, particularly in enterprises
that depend upon market forces for survival. Sadly, however, it remains
far from true in many organizations, particularly in governmental agencies,
whose livelihood is several steps removed from exposure to the direct exchange
of value with customers. And in many organizations systems architectural
planning efforts focus on technology rather than information,
as if the shape of the table were more important than the understandings
to be conveyed over it.(7)
Several laws have been enacted in efforts to force Federal agencies
to do a better job of creating, managing, and providing access to Government
information. Some of those include the Federal Records Act, Freedom of
Information Act, Electronic Freedom of Information Act Amendments, Chief
Financial Officers Act, and Information Technology Management Reform Act.
(See Ambur, 1998) However, notwithstanding the Vice President's admonitions
about a "Government that costs less and works better," too many agencies
still find too many excuses not to capture and manage information efficiently
and effectively. Often they argue that the necessary tools and procedures
simply don't fit their "culture".(8)
If a computer evinced such an attitude and operated that way, who would
buy it? No one, of course. The market for hardware and software simply
would not tolerate such behavior. While people have a right to be treated
with dignity and respect as human beings, rather than as mere cogs in the
wheels of a machine, is there any reason to think that the organizations
and associations they form for the purpose of doing business
should not operate efficiently and effectively?
To explain why dumb things happen to smart companies, Stewart (1998)
poses a common scenario and proffers a strategic approach: "You've hired
the smartest people and you're spending tons on R&D and customer service,
yet you keep blowing it. Time to look at how you manage brainpower." In
support of customer-focused, process-oriented organizations, Hammer (1996)
notes: "No system that depends upon segregating wisdom and decision-making
into a managerial class can possibly offer the speed and agility customers
demand." (p. 157) Addressing network architectural issues, Lucky (1997)
observes:
The issue of the location of intelligence has a lot to do with ownership,
control, and innovation... Centralization may ... be the optimum design...
But in real life there are a couple of difficulties. First, there is the
assumption that you know in advance what will be needed, or are able to
respond quickly enough to changes in technology and applications. The other
flaw is the disproportionate power and flexibility of the periphery. There
are a lot more "other people" ... Therefore, more money and more innovation
exist at the periphery... that power can be unleashed, so long as it isn't
inhibited by inherent limitations...
Referencing the failures of government intelligence operations, Steele
(1998) goes so far as to suggest: "In an age characterized by distributed
information, where the majority of the expertise is in the private sector,
the concept of 'central intelligence' is an oxymoron, and its attendant
concentration on secrets is an obstacle to both national defense, and global
peace."
In the competitive milieu of the private marketplace, companies that
are bureaucratic, fat, slovenly, secretive, and centrally controlled are
not likely long for this world. Why should it be any different for governmental
agencies that owe their existence not only to the taxes paid by companies
but especially by individual citizens who are expected to work hard and
well to earn a decent livelihood for their families?(9)
If we can agree with the Vice President that Governmental agencies should
be slimmer and more potent - more enterprising and perhaps more daring
- is it also possible that we might agree that agencies and organizations
of all kinds might take a lesson or two from the structure and functions
of the computer? Certainly, computers are not gods, but they do some things
very, very well. A pair of particularly pertinent principles are persistence
and parallelism, and dare we suggest that RISC ought also to be brought into
play.
Persistence in pursuit of a pervasively popular cause is widely recognized
as a commendable trait. It is one of ten great moral virtues cited by Bennett
(1996). Persistence in the cause of the mundane is less roundly celebrated.
However, persistence in the form of "memory" is critical to the success
of the computer and so too is "institutional memory" to the success of
an enterprise.
Thompson and Strickland (1995) define "core competence" as "something
a firm does especially well in comparison to rival[s] ..." and that it
is "... a basis for competitive advantage because it represents specialized
expertise that rivals don't have and can't readily match." Implicit is
the notion that the knowledge that underpins the competency can be maintained,
i.e., that it will persist in the organization. Moreover, as Thompson and
Strickland highlight, "Strategic unity and coordination across ... functional
areas add power to the business strategy." (p. 42) They also point out,
"When it is difficult or impossible to outstrategize rivals ... the other
main avenue to industry leadership is to outexecute them ..." (p. 243)
Certainly, that means the organization must be able to effectively maintain
and efficiently share its corporate knowledge base.(10)
Hammer (1996) asserts:
The truth is that superior people cannot compensate for the deficiencies
of an inferior process. (p. 102) ... One of the most crucial things ...
is [to] provide a channel of communication so that people can share their
expertise and learn from one another... Clearly, modern communication technology
is the glue that holds ... organizations together. (p. 124)
Among the traits Thompson and Strickland identify for core competencies
are that they "... rarely consist of narrow skills or the work efforts
of a single department" and the "... selected bases of competence need
to be broad enough to respond to an unknown future." (p. 244) In computer
programming parlance, "unknown futures" are equivalent to a "conditional
branch" that is dependent upon values that cannot be reliably predicted
or determined in advance. Nutt (1998) notes: "... as the business world
gets more complex, there's a not-so-fine line between being decisive and
being blind..." Yet, as a requisite for enterprise and business success,
Hammer says: "True innovation entails anticipating the opportunities for
meeting latent need, for solving problems that customers may not even recognize
that they have." (p. 99)
Making the case for organizations built around processes comprising
complete value chains leading to customers, Hammer cites features common
to all of them:
- Although their forms are many, centers of excellence are alike in their
task and function - to "leverage our talent"...
- They are ... linked electronically to form a "global community" that shares
information... (p. 125)
Hammer also addresses the negative attributes that seem to be universal
to virtually all contemporary and industrial-era organizations:
Despite their many differences, there are great similarities across
most contemporary corporate cultures. Certain themes resonate almost everywhere:
avoiding blame and responsibility, treating coworkers as competitors, feeling
entitled, and not feeling intense and committed ... Most everyone ... workers
and managers alike, found life in the industrial era corporation stifling
and disheartening. Inventiveness was frustrated by protocols and work rules.
Ambition expressed itself more in politics than productivity. Craftsmanship
was a thing of the past, and creativity a thing of the future - for after-hours.
(pp. 153-155)
Thompson and Strickland point out that "... in traditional functionally
organized structures, pieces of strategically relevant activities are often
scattered across many departments... [and] parceling strategy-critical
work across many specialized departments contributes to an obsession with
activity ... rather than results ... [One of the keys] in weaving support
activities into the organization design is to establish reporting and coordinating
arrangements that ... contain the costs of support activities and minimize
the time and energy internal units have to spend doing business with
each other." (p. 247, emphasis added)
On the other hand, the theory of the firm holds that companies form
when the cost of transactions becomes too high without them.(11)
Thus, the trick is to find the happy medium between too much "process"
and too little... between too much versus too little structure, hierarchy,
and formality... between too much internal business that
does not involve the customer directly versus too little intra-organizational
communication to meet the customer's needs effectively. Thompson and Strickland
conclude, "Delayered corporate hierarchies and rapid diffusion of information
technologies make greater empowerment feasible," thereby helping to minimize
needless "business" among internal units versus transactions with external
customers. (p. 249)
It is the virtual organization of the corporate
knowledge that truly matters. More than the structure of the corporate
hierarchy, it is the organization's institutional memory that defines its
soul and, at least in private enterprise, determines its life span as well
as its quality of life. How an organization fosters and uses its knowledge
is key. Consider the comments of these notable, quotable scholars:
- Hammer says: "What, after all, is a company? Management turns over,
employees come and go, products have ever shorter lifetimes. At the end
of the day a company is the processes through which it creates value...
Companies are what they do - or can do - best." (p. 197)
- Moore (1996) points out: "... one of the central tasks of management is
to create networks of competencies and relationships ..." (p. 82)
- Steele cautions: "Lest we become too complacent about connectivity as 'virtual'
strategy, let us paraphrase the earlier observation of the (then) Commandant
of the Marine Corps: 'Connectivity without content is noise; content without
connectivity is irrelevant.'"
- Schrage comments: "... expert knowledge ... is less a matter of what each
individual knows than of their joint ability to produce the right information
when and where it's needed."(12) (p. 42)
- Peters (1992) forcefully declares: "Organizations are pure information
processing machines - nothing less, nothing more: Organizational structures,
including hierarchies, capture, massage, and channel information - period...
It's about centralization and about decentralization - simultaneously ...
centralized from an information technology standpoint... On the other hand,
the objective of the centralization is to foster more and more decentralization."
(pp. 110 & 126)(13)
In organizations and humans as well as in computers, knowledge is either
volatile or nonvolatile, and nonvolatile memory is either alterable or
unalterable. In a volatile memory, information decays naturally or is lost
when the power is switched off. In nonvolatile memory, information remains
without deterioration until deliberately changed. Nonerasable memory cannot
be altered, except by destroying the storage unit. (Stallings, p. 103)
Volatile memory is functionally equivalent to human consciousness and short-term
memory. Nonvolatile memory is equivalent to long-term memory in humans.
With training and experience, the information it contains can and inevitably
does change, but on the whole, it is relatively stable so long as the circuits
are alive. Nonerasable memory is similar in some respects to instincts
and reflexes, although reflexes are basically "hardwired" into the nervous
system.
Stepping up a level from the individual to the organization, institutional
long-term memory requires documentation and record-keeping, because individuals
come and go.(14) In addition, "power" in
the form of guidance and training must be applied not only to maintain
skill levels but especially to acquire the new institutional memories required
to respond to changing external realities. Whereas individuals in "civilized"
societies are considered to have the right to survive regardless of whether
they learn and produce or not, organizations and particularly profit-making
businesses are afforded no such privilege in a market economy. Thus, organizations
must continually be "refreshing" their institutional memories in order
to survive and prosper.
In computers, the information stored in random access memory (RAM) can
be retrieved for use in any order, because each block of information is
given a unique identifier (memory address). Through the wonders of nature,
human memory is RAM. However, due to the prevalence of hierarchies and
the use of paper, institutional memory seldom is. Read-only memory (ROM)
is nonerasable and cannot be altered except by destruction. Theoretically,
institutional memory need not be ROM but high rates of business failures
attest that it often is, as enterprises prove incapable of overwriting
outdated, excessively persistent corporate memory. Dynamic RAM uses capacitors,
meaning the information it contains is readily altered to reflect "current"
information. (Pun unintended but acknowledged.) However, like human and
institutional memory, DRAM decays unless periodically refreshed. Moreover,
due to capacity limits, it must rely upon rapid and accurate communications
with other components more capable of compiling and conserving the quantities
of content coveted for corporate conquest.
Static RAM (SRAM) is implemented using "flip-flops," which are circuits
or devices capable of sustaining either one of two stable states at any
time. SRAM holds information as long as power is supplied to it, without
refreshment. It might be considered to be equivalent to principles, on
which humans, particularly politicians, for example, have been known to
flip-flop. The principles of ordinary citizens have been known to change
when refreshments of the alcoholic nature are applied, and politicians
seem quite prone to refreshments that fall into the category of campaign
contributions. However, SRAM is clearly not equivalent to integrity, personal
or institutional, which is defined by "what you do when no one is looking"
(i.e., no external power is supplied). At the institutional level, SRAM
is similar to corporate culture. At the personal level, integrity is ROM.
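The contrast between DRAM and SRAM can be pictured with a small Python sketch
(a loose software analogy, not real hardware behavior; the class and variable
names are invented for the illustration): a DRAM-like cell loses its contents
unless it is periodically refreshed, while an SRAM-like cell simply holds its
state for as long as it exists, i.e., for as long as power is applied.

# Toy illustration only: a "leaky" DRAM-like cell versus a stable SRAM-like cell.

class DramCell:
    def __init__(self, value, charge=3):
        self.value = value
        self.charge = charge              # decays a little on every clock tick

    def tick(self):
        self.charge -= 1
        if self.charge <= 0:
            self.value = None             # the information decays away

    def refresh(self):
        if self.value is not None:
            self.charge = 3               # rewriting the cell restores the charge


class SramCell:
    def __init__(self, value):
        self.value = value                # stable while powered; no refresh needed


cell = DramCell("institutional memory")
for t in range(6):
    if t % 2 == 0:
        cell.refresh()                    # periodic refresh keeps the content alive
    cell.tick()
print(cell.value)                         # survives only because it was refreshed

The same pattern holds at the institutional level: knowledge that is not
periodically revisited and rewritten simply decays away.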
Speaking of personal traits that are universally critical to organizational
success, Hammer asserts: "Fundamentally, all professionals require the
same set of attitudes, regardless of their field. The first of these is
self-motivation and discipline... Intensity, seriousness of purpose, sincerity,
self-reliance: These may be classical virtues, but they are also critical
requirements for our new, decidedly nonclassical context." (p. 56) Taking
the concept of universal principles further, Hammer also notes:
Most business units, bestowing on themselves the debased adjective
"unique," will almost always claim that they deserve to go their own way
because they are different from all their peers. Wise corporate leaders
will listen to such pleas but recognize the self-interest in them and decide
in the enterprise's interest as a whole... The real motivation for wedding
two business units should be to find ways of integrating them so that each
performs better. Therefore, a multiunit enterprise's real value lies in
its opportunities to manage processes across many units... Standardized
process design and centralized process management can eliminate much of
the overhead associated with traditional decentralization, ensuring a degree
of consistency and uniformity previously achievable only by physical centralization.
(pp. 189 &190)
What could be more important to the success of an organization than to
standardize the means by which its members process, share, update, and
maintain their corporate knowledge? What could be more critical than memory
- in institutions, individuals, or computers? Indeed, memory is a basic
element of every computer, and Stallings notes that the "contemporary
memory hierarchy" includes the following components: registers, cache,
main memory, disk cache, magnetic disk, optical disk, and magnetic tape.
(pp. 27 & 104)
Registers are small, very fast memory that is physically located on
the CPU chip itself. They contain the data upon which the CPU actually
operates. Instructions and data are generally exchanged between the registers
and main memory. However, to improve performance, instructions and data
that are likely to be needed by the CPU may also be stored temporarily
in cache memory, which is smaller and faster than main memory. Generally
speaking, main memory exchanges instructions and data with magnetic disks
("hard drives" and "floppies"). Again, to improve performance, a virtual
cache may be implemented on magnetic disk to avoid having to search for
randomly stored instructions and data that are frequently used or for which
there is a high probability of use in the active program. Magnetic disks
are the primary "work horse" media for nonvolatile (persistent) information.
Magnetic tapes are generally used to back up data in the event of hard-disk
failure. Rapid and significant improvements in optical disk storage technology
are increasing its use for large volumes of information that needs to be
maintained in a stable state over longer periods of time.
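The logic of the hierarchy - consult the smallest, fastest store first and fall
back to larger, slower ones only on a miss - can be sketched in a few lines of
Python. This is a toy illustration; the level names, the record key, and the
copy-upward policy are simplifications invented for the example.

# Check each level in order of speed; on a miss, fetch from the slow,
# persistent store and keep copies closer to the processor for next time.

registers = {}            # tiny, fastest (left empty in this toy)
cache = {}                # small, fast
main_memory = {}          # larger, slower
disk = {"record_1042": "corporate knowledge"}   # huge, slowest, persistent

def read(address):
    for level in (registers, cache, main_memory):
        if address in level:
            return level[address]
    value = disk[address]          # last resort: the slow, persistent store
    main_memory[address] = value   # keep a copy closer to the processor
    cache[address] = value         # so the next reference is much faster
    return value

print(read("record_1042"))   # slow the first time ...
print(read("record_1042"))   # ... served from the cache thereafter

The first reference pays the full price of going to disk; later references are
served from the faster levels - which is precisely why frequently used corporate
knowledge belongs in readily addressable form rather than buried in an archive.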
With advances in microprocessor technology, Stallings asserts that computing
power has become virtually "free" but that other critical components have
not kept up.(15) (p. 36) "Nowhere," Stallings
notes, "is the problem created by such mismatches more critical than in
the interface between processor and main memory ... the speed with which
data can be transferred between main memory and the processor has lagged
badly." (pp 38 & 39) Such is also the case with access to and sharing
of corporate knowledge.
Beyond further layering of the memory hierarchy, one of the means by
which computer architects have endeavored to solve the problem is through
the use of "interrupts" to improve the efficiency of utilization of the
central processing unit (CPU). As Stallings notes, "With interrupts, the
processor can be engaged in executing other instructions while an [input/output]
operation is in progress." (p. 57) Classes of interrupts include:
- Program - Generated by a condition that occurs as a result of execution
of an instruction.
- Timer - Generated by a timer within the processor, which allows the system
to perform certain functions on a regular basis.
- Input/Output (I/O) - Generated by an input/output controller to signal
completion of a task and thus readiness for another task, or to indicate
a variety of error conditions that require further attention.
- Hardware Failure - Generated by such things as power failure or a memory
error.
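In software terms, servicing an interrupt amounts to suspending the current
work, looking up the handler for the class of event, and running it. A minimal
Python sketch follows (the handler names and the event format are invented for
the example; real interrupt handling is performed by the hardware and the
operating system):

# Map each interrupt class listed above to a handler and dispatch on it.

def handle_program(event):    print("trap: bad instruction -", event)
def handle_timer(event):      print("timer: run periodic bookkeeping")
def handle_io(event):         print("I/O controller reports:", event)
def handle_hw_failure(event): print("hardware fault! saving state:", event)

HANDLERS = {
    "program":  handle_program,
    "timer":    handle_timer,
    "io":       handle_io,
    "hardware": handle_hw_failure,
}

def interrupt(kind, event=None):
    """Suspend 'normal' work, run the matching handler, then resume."""
    HANDLERS[kind](event)

interrupt("timer")
interrupt("io", "transfer complete; device ready for the next task")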
While interrupts are widely used, they entail two drawbacks: 1) the I/O
transfer rate is limited by the speed with which the CPU can test and service
a device, and 2) the CPU is tied up managing the transfer. To avoid those
problems, another method of improving processing efficiency is Direct Memory
Access (DMA).(16) DMA modules relieve the
CPU of responsibility for controlling the transfer of information between
main memory and input/output devices. The transfer occurs directly between
memory and the device via the DMA module, thereby freeing the CPU to do
other, less routine, higher-value work. (Stallings, p. 65 & 199-201)
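A loose software analogy may help here (this is not real DMA, which is a
hardware mechanism; the thread, the names, and the workload are invented for
the example): hand a bulk transfer to a helper and let the main flow of the
program keep working until it is notified that the transfer is complete.

import threading

source = list(range(1_000_000))
destination = []

def dma_transfer(done_event):
    destination.extend(source)      # the bulk copy proceeds on its own
    done_event.set()                # signal completion, much like an interrupt

done = threading.Event()
threading.Thread(target=dma_transfer, args=(done,)).start()

higher_value_work = sum(i * i for i in range(10_000))   # the "CPU" stays busy
done.wait()                                             # then collects the result
print(higher_value_work, len(destination))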
With respect to organizational success, Peters has noted, "It's about
centralization and about decentralization - simultaneously," and that is
a fairly apt description of parallelism in computers. The central
processing unit is the engine that makes the machine "run," but what does
"central" mean when there are many CPUs involved in different aspects
of the very same process?
Stallings notes that the computer has traditionally been viewed as a sequential
machine, but that has never been entirely accurate. (p. 569) For a number
of years, however, there has been interest in developing systems with massive
numbers of processors working in parallel - much like a large "enterprise"
in which many people, offices, and divisions are all working together in
pursuit of a common objective. (p. 597) Stallings classifies multi-processor
systems into four categories:
- Loosely Coupled - A collection of relatively autonomous systems in which
each CPU has its own main memory and I/O channels. Such systems are often
called "multicomputers".
- Functionally Specialized - A group of specialized processors is controlled
by a master, general-purpose CPU and provides services to it.
- Tightly Coupled - A set of processors share main memory and are under the
control of an integrated operating system.
- Parallel Processing - Tightly coupled multiprocessors that can cooperatively
work on one task or job in parallel. (pp. 569-570)
Stallings identifies the key characteristics that all multiprocessing systems
have in common, as follows:
- Two or more similar general-purpose processors of comparable capacity.
- Shared access to a "global" memory, although some "local" (private) memory
may also be used.
- Shared access to I/O devices.
- Control by an integrated operating system that provides interaction between
processors and their programs at the job, task, file, and data element
levels. (p. 570)
The latter characteristic highlights the distinction between true multiprocessors
and loosely coupled multiprocessing systems. With loosely coupled systems,
the normal unit of transfer is a complete file, whereas multiprocessors
can interact at the data element level with a great deal of cooperation
between processes. In human terms, multiprocessors are a bit like the long-married
couple who can complete each other's sentences. On the other hand, most
folks and certainly most contemporary organizations need their agreements
specified in writing. Their transactions are conducted at "arm's length,"
supported by the formal exchange of documents, e.g., the offer and acceptance
of contracts.
More and more organizations are building closer and stronger alliances
with their suppliers, becoming virtual vertical enterprises. In highly
competitive markets survival may offer no other choices. However, in any
enterprise in any market and any organization in any endeavor, at the very
least the corporate knowledge contained in its documents and files should
be well managed and readily shared. By definition, organizations are at
least loosely coupled processing systems. For some organizations, particularly
those of a social nature, loose coupling may be exactly what is warranted,
but to the degree that business and economic values are involved, tightly
coupled, parallel processing may be essential.
For multiprocessing to occur, each CPU must be self-contained, with
a control unit, arithmetic and logic unit (ALU), registers, and perhaps
a cache memory. However, each CPU shares access to main memory and I/O
devices through interconnections. In some configurations the processors
may share some information directly with each other, but most communications
are conducted by leaving messages and status information in main memory.
The memory is often organized so that separate blocks can be accessed simultaneously.
As Stallings (pp. 570-573) notes, the organization of multiprocessing systems
can be classified as:
- Time Shared or Common Bus - This approach has the virtue of being simple,
flexible, and reliable, but since all communications must pass over the
bus, it is inadequate for high-speed performance. To alleviate that problem,
each CPU may have its own cache memory, but procedures must then be implemented
to ensure that changes to the information in the cache are synchronized
to the main memory.
- Multiport Memory - Each CPU and I/O module may be given direct, independent
access to the main memory, with logic for resolving conflicts among simultaneous
access requests. One means of resolving such conflicts is to permanently
assign priorities among ports to the memory. While this approach is more
complex than having a common bus, it should provide better performance.
- Central Control Unit - With this approach separate data streams are funneled
back and forth among independent CPU, memory, and I/O modules. The controller
buffers requests and performs arbitration and timing functions. This approach
provides the flexibility and simplicity of interfacing advantages of the
common bus approach, but the control unit may be quite complex and can
become a bottleneck in performance.
In conjunction with any of these hardware approaches, it falls to the operating
system to make the multiple processors act in a seamless fashion as one,
by scheduling the execution of programs and the allocation of resources.
Stallings (p. 574) outlines seven functions performed by a multiprocessor
operating system:
- resource allocation and management
- table and data protection
- prevention of system deadlock
- abnormal termination
- I/O load balancing
- processor load balancing
- reconfiguration.
Of these functions, Stallings notes that only the last three are unique
or substantially different for multiprocessing systems versus uniprocessors.
With reference to load balancing, he points out that there are two dimensions
to the scheduling function: whether processors are dedicated to specific
processes and how the processes are scheduled. Processes can be queued
up either in a separate queue for each processor or in a common queue for
all of them, in which case different portions of a job may be handled on
different processors.
Regardless of whether dedicated or common queues are used, some means
of assigning processes to processors is needed. Two approaches have been
used: master/slave and peer. With the former approach, the operating system
always runs on a particular processor and it is responsible for allocating
and scheduling jobs on the other processors. While that approach is simple
and straightforward, the master can become a performance bottleneck and
failure of the master brings down the whole system. With the peer approach,
the operating system can run on any processor, and each processor performs
self-scheduling from the pool of available processes. Such an approach
can avoid the risk of bottleneck and failures associated with the master,
but it requires a more complex operating system that can keep track of
the assignment of processes. (Stallings, p. 574)
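A small Python sketch of the peer approach with a common queue may make the
point concrete (the worker count and the toy workload are invented for the
example): every worker schedules itself by pulling the next job from one shared
pool, so no single master becomes a bottleneck or a single point of failure.

import queue, threading

work = queue.Queue()
for job in range(20):
    work.put(job)                       # the common pool of available processes

results = []
lock = threading.Lock()

def peer_worker():
    while True:
        try:
            job = work.get_nowait()     # self-scheduling from the common queue
        except queue.Empty:
            return                      # nothing left to do
        with lock:
            results.append(job * job)   # stand-in for real processing

workers = [threading.Thread(target=peer_worker) for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(sorted(results))

Even in this toy, the cost of the peer approach is visible: the shared queue and
the lock are the extra bookkeeping that keeps self-scheduling peers from
colliding.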
As mentioned, each CPU in a multiprocessor system may have its own cache
memory, and such "local" memory is required in order to achieve reasonable
performance. However, the problem of "cache coherence" results. That is,
multiple copies of the same data from main memory may exist in various
cache memories, and if the processors are allowed to update that data in
their cache, inconsistencies can occur. To avoid that problem, software
or hardware solutions are possible but the "MESI" protocol is most widely
used.
The simplest software approach is to prevent any shared data from being
cached, but that is not efficient since sharing data is of the essence
of multiprocessing. More efficient approaches rely upon analyzing programs
to determine periods when it is safe to cache data that will not be needed
by other processes.
Hardware solutions fall into two categories - directory protocols and
"snoopy" protocols. Directory protocols track the use of blocks of data,
usually by maintaining a directory in main memory indicating the location
of each cached block. Every local action taken on data in cache memory
must be reported to the central controller. The downside is that the controller
can become a bottleneck and the need to report back every change results
in overhead communications. However, Stallings (p. 580) reports that directory
schemes are effective in large-scale systems that involve multiple buses
or other complex interconnections.
Snoopy protocols distribute the responsibility for cache coherence among
the controllers in the cache of each of the multiprocessors. Specifically,
the cache controller must recognize when it controls a block of data that
is shared by others, and when the block is updated locally, the change
must be announced to the other caches. Each cache controller "snoops" on
the network and reacts appropriately when such a notification is broadcast.
Such an approach is well-suited to systems that rely upon a common bus.
However, one of the reasons for employing local cache is to avoid the need
for bus access, and communications overhead associated with the need to
broadcast and snoop for change notifications may cancel out the gain from
using cache memory. (Stallings, p. 580)
Two snoopy protocol approaches have been investigated - write/invalidate
and write/update (or write/broadcast). With the write/invalidate approach,
a data block may be read by multiple processors at once but only one may
write to it at any time. With write/update there may be multiple writers
as well as readers, but updates (writes) must be distributed to all of
the others. Stallings (p. 581) reports that the write/invalidate approach
is most commonly used in commercial systems. Using two extra bits (status
bits), the protocol calls for every cache line to be marked as modified,
exclusive, shared, or invalid. The protocol is identified by its acronym
- MESI - and the four possible states can be summarized as follows:
- Modified - The line in the cache has been modified from the original block
in main memory and is available in its current form only in this cache
memory.
- Exclusive - The line in the cache has not been modified from the original
in main memory and the only copy is in this cache memory.
- Shared - The line in the cache is the same as the original in main memory
and may be present in other cache memories.
- Invalid - The line in the cache does not contain valid data.
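A drastically simplified Python sketch of those states as a transition table
follows (only a few representative events are shown, and the function names are
invented for the example; a real cache controller handles many more cases, and
does so in hardware):

MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

def on_local_write(state):
    # Writing locally makes the line Modified; under write/invalidate,
    # the other caches are told to invalidate their copies.
    return MODIFIED

def on_remote_write_observed(state):
    # Snooping a write by another cache invalidates our copy.
    return INVALID

def on_remote_read_observed(state):
    # If another cache reads the line, our copy becomes Shared.
    return SHARED if state in (MODIFIED, EXCLUSIVE) else state

line = EXCLUSIVE                       # we loaded the only copy
line = on_remote_read_observed(line)   # someone else reads it  -> Shared
line = on_local_write(line)            # we update it           -> Modified
line = on_remote_write_observed(line)  # someone else writes it -> Invalid
print(line)                            # 'I'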
Snoopy protocols are a bit (if not a byte) like the organizational philosophy
of "management by walking around" - except they are much more explicit
about what the "controllers" are looking for as they "walk" and they are
much more precise about how new information should be "flagged" to the
attention of cohorts. That is, snoopy protocols support tightly coupled
multiprocessing, using a common operating system. They may also enable
multiple processors to work simultaneously on different aspects of the
very same program, job or task - which is known as parallel processing.
Stallings (p. 598) notes that the term "parallel processing" is normally
applied to high-level parallelism among multiple processors, as opposed
to low-level parallelism in single-processor machines. Examples of low-level
parallelism include:
- Instruction Pipelining - Multi-part instructions are divided into sequential
stages and multiple instructions are executed at once, each at different
stages.
- Multiple Processor Functional Units - Multiple arithmetic and logic units
(ALUs) are available within a single CPU so that multiple instructions
can be executed at once, all at the same stage.
- Separate Specialized Processors - Functions such as I/O are off-loaded
to specialized processors in order to free the CPU to execute more complex
and generalized tasks.
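The first of these, instruction pipelining, can be visualized with a short
Python sketch (the stage names and instruction labels are invented for the
example): on every cycle each instruction in flight advances one stage, so
several instructions overlap even though each still passes through every stage
in order.

STAGES = ["fetch", "decode", "execute"]
instructions = ["i1", "i2", "i3", "i4"]

# On each clock cycle, every instruction that has entered the pipeline
# advances one stage, so up to len(STAGES) instructions are in flight at once.
for cycle in range(len(instructions) + len(STAGES) - 1):
    in_flight = []
    for slot, stage in enumerate(STAGES):
        idx = cycle - slot
        if 0 <= idx < len(instructions):
            in_flight.append(f"{instructions[idx]}:{stage}")
    print(f"cycle {cycle}: " + ", ".join(in_flight))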
Encompassing both high- and low-level parallelism, Flynn proposed the following
taxonomy for parallel processing computer systems:
- Single Instruction Single Data (SISD) Stream - A single instruction stream
is executed by a single processor operating on data in a single memory.
Some parallelism can be achieved using the low-level techniques listed
above.
- Single Instruction Multiple Data (SIMD) Stream - A single instruction stream
controls the simultaneous execution of a number of operations by multiple
processors using data in multiple memories.
- Multiple Instruction Single Data (MISD) Stream - This structure has never
been implemented but, theoretically speaking, a sequence of data could
be transmitted to multiple processors, each of which might execute a different
sequence of instructions.
- Multiple Instruction Multiple Data (MIMD) Stream - Multiple processors
simultaneously execute multiple instructions on multiple data sets. (Stallings,
pp. 598 & 599)
In an MIMD organization, each processor must be able to execute all of the necessary
operations to act on the data. If the processors share a common memory,
they are called "multiprocessors". If each processor has its own memory,
communications must be provided via fixed paths or message-switching mechanisms.
Such systems are called "multicomputers". Stallings (p. 599) notes that
practical parallel processing systems may be organized as SIMD or as MIMD
multiprocessors or multicomputers.
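The SIMD and MIMD categories can be loosely illustrated in ordinary Python (a
software analogy only, not real parallel hardware; the data and the task
functions are invented for the example): the first applies one operation
uniformly across many data elements, while the second runs several independent
instruction streams concurrently, each doing its own work.

from concurrent.futures import ThreadPoolExecutor

data = [1, 2, 3, 4, 5, 6, 7, 8]

# SIMD in spirit: one instruction ("multiply by two") applied across many data.
simd_like = [x * 2 for x in data]

# MIMD in spirit: several independent instruction streams, run concurrently.
def summarize(numbers): return sum(numbers)
def biggest(numbers):   return max(numbers)
def spread(numbers):    return max(numbers) - min(numbers)

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fn, data) for fn in (summarize, biggest, spread)]
    mimd_like = [f.result() for f in futures]

print(simd_like, mimd_like)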
With reference to high- versus low-level parallelism in professional
organizations, Hammer (1996) highlights an important distinction that parallels
Stallings' explanation of parallelism in computers:
To be a professional a person needs education as well as training.
The presumption that workers do only simple jobs allows organizations to
view them simply as empty vessels into which the instructions for performing
tasks can be poured. A professional, by contrast, doesn't work according
to explicit instructions. Directed toward a goal and provided with significant
latitude, the professional must be a problem-solver - able to cope with
unanticipated and unusual situations without running to management for
guidance. This requires a reservoir of knowledge, a grounding in the discipline
that underlies the job as well as an appreciation for how this knowledge
can be applied to different situations. (pp. 47 & 48)
In other words, professionals must assimilate a complex set of "instructions"
and be able to execute them independently and in parallel, just as multiprocessors
and multicomputers do. Moreover, the consideration of instruction complexity
in professional organizations is paralleled in the structure of computers
as well. In computers the issue is framed in terms of complex instruction
set computing (CISC) versus reduced instruction set computing (RISC). As
with professional organizations, the trend has been toward more and more
complex instructions in computers.
However, that trend may be changing. The cost of hardware has fallen
relative to the cost of software, and complex instructions have contributed
to the existence and persistence of software bugs over lengthy periods
of time.(17) Thus, the obverse proposition
has attracted attention - to make the architecture of computers simpler,
rather than more complex. Similar efforts have occurred in professional
organizations as well, notably in the Federal government. Indeed, such
efforts are required by law, as set forth in the Paperwork Reduction Act.
With respect to computers, research has shown that attempting to make
the instruction set reflect as closely as possible all of the requirements
of the high-level programming language is not the most effective design.
Instead, the requirements can best be met by optimizing performance on
the most time-consuming features. Tanenbaum, for example,
found that 98 percent of the procedures executed by computers required
fewer than 6 instructions, and 92 percent required fewer than 6 data elements.
Relatively few "words" of instruction and variable elements of data are
required to perform the bulk of all operations. (Stallings, p. 433)
Rather than attempting to address all possible contingencies via complex
and highly detailed instructions, it is more important to ensure that the
processors have fast and ready access to the relatively few instructions
and data elements needed to do their specific jobs. Thus, RISC architectures
have focused on three elements:
- Use of a large number of registers - which are small, very fast units of
memory that are internal to the CPU - to optimize reference to the operands
(the data upon which action will be taken).
- Careful attention to the design of the instruction "pipelines", i.e., the
sequences in which operations in instruction sequences are broken down
and delivered to the CPU for execution.
- A simplified (reduced) set of instructions. (Stallings, pp. 433-434)
Building on those three architectural elements, RISC systems are characterized
by four operational features:
- One Instruction Per Cycle - Only one instruction is fetched during each
machine cycle, with the cycle defined to encompass the time it takes to
fetch two operands from registers, perform an ALU operation, and store
the results back to a register.
- Register-to-Register Operations - Main memory should only be accessed when
necessary to load instructions or data that are not available locally in
registers, or to store data that may be required by other processors.
- Simple Addressing Modes - Since nearly all required references are to local
registers, the addressing information needed to access them is greatly
simplified. When more complex addresses are needed, they can be synthesized
from simple ones.
- Simple Instruction Formats - Only one or a few formats are generally used,
and the structure of each instruction is fixed so that they can be used
without decoding. (Stallings, pp. 443-444)
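A toy register machine in that spirit can be written in a few lines of Python
(the instruction set, register names, and memory contents are invented for the
example and are far simpler than any real processor's): a handful of registers,
register-to-register arithmetic, and explicit LOAD and STORE as the only
operations that touch memory.

memory = {"price": 40, "tax": 2}
registers = {"r0": 0, "r1": 0, "r2": 0}

def run(program):
    for op, *args in program:
        if op == "LOAD":                  # memory -> register
            reg, addr = args
            registers[reg] = memory[addr]
        elif op == "ADD":                 # arithmetic is register-to-register only
            dst, a, b = args
            registers[dst] = registers[a] + registers[b]
        elif op == "STORE":               # register -> memory
            reg, addr = args
            memory[addr] = registers[reg]

run([
    ("LOAD",  "r0", "price"),
    ("LOAD",  "r1", "tax"),
    ("ADD",   "r2", "r0", "r1"),
    ("STORE", "r2", "total"),
])
print(memory["total"])   # 42

Even this toy shows the RISC bargain: each instruction does very little, but
because each one is simple, a great many of them can be executed exceedingly
rapidly.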
The issue of simple versus complex instruction sets is not clear cut or
fully resolved, nor is the issue of general versus highly tailored processors.
For example, Siewiorek, Bell, and Newell (1982) commented: "There is remarkably
little shaping of computer structure to fit the function to be performed.
At the root of this lies the general-purpose nature of computers, in which
all the functional specialization occurs at the time of programming and
not at the time of design." (In Stallings, pp. 6 & 7) However, Gilder
(1989) proffers:
No longer is it taken for granted that the dominant computers of the
future will be general-purpose processors. They may well resemble more
closely the application-specific devices of the past. Forty years after
the heyday of analog, analog concepts return on every side: in signal processors,
in multipliers, in artificial intelligence chips, in multiprocessor machines,
in parallel database servers, none of which will know and do everything,
like some Fifth Generation son of God, but all of which will perform intelligently
in their particular roles. (p. 260)
No doubt, there will be a melding of CISC and RISC computing as well as
continuing proliferation of both general-purpose and highly tailored processors.
After all, it is results that count - in computers and enterprises. Achieving
high performance is more important than the artificial architectural constructs
employed in any particular system at any point in time. Hammer drives home
that point by contrasting the old hierarchical organizational paradigm
with the new, process-centric, results-oriented, information-dependent
enterprise:
- In a process-centered environment, you are paid for the results that you
produce... In traditional pay systems, people are paid for seniority, for
showing up, for following the rules, for being pleasant to the boss, or
perhaps even for performing and completing assigned tasks. But they aren't
paid for producing results, which is ultimately the only thing that
really matters. (p. 57) ...
- When performance is measured objectively and you have direct responsibility,
you can't pass the buck or duck the blame. For better or worse, you and
your work can be seen by all. There is no place to hide. (p. 67)
- [The] idea that "the process is the product" is reminiscent in some ways
of Marshall McLuhan's famous pronouncement that "the medium is the message."
McLuhan meant that the electronic media were fundamentally transforming
society not only by shaping the content of the information that
people received, but also by changing the way in which they perceived
and used it. (p. 192)
Some people may disagree with Hammer's focus on process, much less McLuhan's
notion that the "medium is the message". Certainly, many managers and supervisors
are still more interested in giving orders, expecting results, and not
being troubled with the details of how to produce them. However, those
folks might do well to consider Stallings' description of the "control unit,"
which is one of three components of every CPU(18):
The control unit is the engine that runs the entire computer. It does
this based only on knowing the instructions to be executed and the nature
of the results ... It never gets to see the data being processed or the
actual results produced. And it controls everything with a few control
signals to points within the CPU and a few control signals to the system
bus. (p. 514)
More than "control" in the traditional sense, control units perform a coordinative
and scheduling function. In particular, with reference to TQM and process-orientation,
they do not impose themselves as a bottleneck between main memory, specialized
processors, functional units, and I/O modules leading to and from customers
and suppliers.
Many people and organizations may dispute Zachman's assertion that the
computer system is the enterprise. However, fewer would contest
the fact that information is key. Thus, persistence, parallelism,
and RISC are concepts that smart people and organizations would do well
to incorporate into their operations:
Persistence in the form of memory is the vault of knowledge upon which
success is built. The alternative is continuously reinventing the primordial
wheel on a Sisyphean assembly line. Corporate memory should be dynamic
and readily addressable, devoid to the greatest degree possible of needless,
artificial, and outmoded hierarchical constructs.
Parallelism is the simple acknowledgment that "two heads are better than
one" and that life and logic need not always be constrained by one-track
mindedness. Not always is it necessary for the runner to wait for the baton
to begin to run his or her segment of the race.
RISC is the recognition that while life and labor may be complex, even
the longest journey begins with a single step. Logic and leadership observe
simple and basic principles. Seldom is it possible in one great stride
to leap from start to finish, and it may very well be better to take very
many small steps exceedingly rapidly.
Hough (1998) concludes: "Business success in the next century will be
measured by a company's culture, its business style and its business processes
- not the specific products it makes. Those who are the most successful
will be those who are the quickest to respond to the whims of the customer
and the shifts of global demand."
And Simon (1998) says: "The biggest challenge in delivering enterprisewide
information is bridging the gap between those who create information ...
and ... those who 'consume' it. Not only are there technological barriers,
there are cultural barriers. As with all changes, people are resisting...
When they share 'their' data, they risk losing control of it."
Notwithstanding such fears and regardless of how information is sliced,
diced, and served up, quick and ready access to it is of the essence not
only to dumb computers but also to smart people and organizations.(19)
Organizations that fail to effectively steward and efficiently share the
seeds of information that constitute the core of their capabilities are
liable to find themselves out on a limb - perhaps an unconditional branch
to nowhere. If they're lucky, they may have time to scramble to a safer
roost before it is sawed off.(20) If not,
oh, well... At least they won't be the first enterprise to fail to soar
for lack of persistence, parallelism, and RISC.
References
Ambur, O. (1996, May 9). Critical Success Factors for a Discussion Database
in a Large, Geographically Dispersed Organization. Available at: http://www.erols.com/ambur/Discuss.html
Ambur, O. (1996, September). Metadata or Malfeasance: Which Will It
Be? Available at: http://computer.org/conferen/proceed/meta97/papers/oambur/malfea1.html
Ambur, O. (1997, May 29). Automated Forms: Putting the Customer First
Through Intelligent Object-Oriented Chunking of Information and Technology.
Available at: http://www.erols.com/ambur/Eforms.html
Ambur, O. (1998). Some Provisions of Law Relating to Access to Public
Information. Available at: http://www.fws.gov/laws/infolaw.html
Bennett, W.J. (1996). The Book of Virtues. New York, NY: Touchstone.
Frank, R.H., and Cook, P.J. (1995). The Winner-Take-All Society:
Why the Few at the Top Get So Much More Than the Rest of Us. New York,
NY: Penguin Books.
Gabor, A. (1990). The Man Who Discovered Quality. New York, NY:
Penguin Books. p. 18.
Gilder, G. (1989). Microcosm: The Quantum Revolution in Economics
and Technology. New York, NY: Touchstone.
Hammer, M. (1996). Beyond Reengineering: How the Process-Centered
Organization is Changing Our Work and Our Lives. New York, NY: HarperCollins
Publishers, Inc.
Hough, D.A. (1998, May/June). "Business Without Barriers." Document
Management. pp. 18 & 19.
Lucky, R.W. (1997, November). "When is Dumb Smart?" IEEE Spectrum. Available
at: http://ursula.manymedia.com/david/press/lucky.html (1998, January 16)
Mancini, J.F. (1998, July) "There's No Business Like Show Business."
AIIM's inform magazine. p. 8. AIIM's home page is at http://www.aiim.org
Moore, J.F. (1996). The Death of Competition: Leadership and Strategy
in the Age of Business Ecosystems. New York, NY: HarperCollins Publishers,
Inc.
Negroponte, N. (1995). Being Digital. New York, NY: Alfred A.
Knopf, Inc.
Nutt, P.C. (1998) Why do smart companies do such dumb things? Available
at: http://www.fastcompany.com/online/11/smartdumb.html
Pearlstein, S. (1998, June 29). "Reinventing Xerox Corp." The Washington
Post. pp. A1, A8 & A9.
Peters, T. (1992) Liberation Management: Necessary Disorganization
for the Nanosecond Nineties. New York, NY: Alfred A. Knopf, Inc.
Raines' Rules. (1997). Abbreviated version available at: http://www.fws.gov/laws/itmra.html
Rifkin, J. (1996). The End of Work: The Decline of the Global Labor
Force and the Dawn of the Post-Market Era. New York, NY: Tarcher/Putnam.
Samuelson, R.J. (1998, June 24). "The Trouble With Japan." The Washington
Post. p. A17.
Schrage, M. (1995). No More Teams! Mastering the Dynamics of Creative
Collaboration. New York, NY: Currency Doubleday.
Siewiorek, D., Bell, C., and Newell, A. (1982). Computer Structures:
Principles and Examples. New York, NY: McGraw-Hill. Quoted in Stallings,
W. (1996). pp. 6 & 7.
Simon, J. (1998, May/June). "The Soul of a New Corporation." Document
Management. pp. 16 & 17.
Smart, T. (1998, July 2). "A Work Force With Surprising Staying Power."
The Washington Post. pp. E1 & E2.
Stallings, W. (1996). Computer Organization and Architecture: Designing
for Performance. Upper Saddle River, NJ: Prentice Hall.
Steele, R.D. (work in progress). Smart People, Dumb Nations: Harnessing
the Distributed Intelligence of the Whole Earth Through the Internet.
Abstract available at: http://www.cs.su.oz.au/~bob/Inet95/Abstracts/010.html
Steele, R.D. (1998). "Virtual Intelligence: Conflict Avoidance and Resolution
Through Information Peacekeeping." Available at: http://www.oss.net/VIRTUAL/.
See also: http://www.oss.net/
Stewart, T.A. (1998). "Why Dumb Things Happen to Smart Companies." Available
at: http://www.controlfida.com/News/articoli/WhyDumbHappen.htm
Sullivan, C. (1998, July) "AIIM '98 Report: Industry Experts Evaluate
the Show in Anaheim." AIIM's inform magazine. p. 15.
Thompson, A.A., and Strickland, A.J. (1995). Strategic Management:
Concepts and Cases. Chicago, IL: Irwin.
Webster's New Collegiate Dictionary. (1975). Springfield, MA: Merriam-Webster.
Zachman, J.A. (1996). "The Framework for Enterprise Architecture: Background,
Description and Utility". La Canada, CA: Zachman International.
Zachman, J.A. (1998). "Enterprise Architecture: Looking Back and Looking
Ahead". (Advance draft for publication in May edition of DataBase Newsletter).
La Canada, CA: Zachman International.
End Notes
1. For a discussion of complexity with respect to
the design of databases, see Ambur (1997) - particularly the section entitled
"Reverse Engineer People and Processes or Data and Databases?" The author
argues that the database approach to designing applications is doomed to
failure in large-scale enterprises.
2. Samuelson (1998) notes:
The trouble with Japan is that it is run by the Japanese. Their consensus-seeking
culture seems capable of great changes only after calamity shows that change
is inevitable... Consensus thinking is reinforced by webs of self-interest
between government bureaucrats and protected private constituencies...
Japan is a more centralized society than the United States. More power
flows from national government and business; there are fewer enclaves of
dissent and experimentation... None of this matters much when things are
going well. Indeed, it can help a country pursue a desirable goal. But
it is a huge disadvantage when change becomes imperative and the consensus
is blind.
3. The original title of Schrage's book when it was
first published in 1989 was Shared Minds: The New Technologies of Collaboration.
4. For a discussion of the merits of using "groupware"
as an alternative to traditional meetings, see Ambur, 1996.
5. Schrage quotes Francis Crick, who co-discovered
the double helix, as saying, "Politeness is the poison of all good collaboration
in science." Schrage suggests that "good manners" should not be allowed
to "get in the way of a good argument." (p. 35) Hammer concurs:
Teams are breeding grounds for conflict. Even people who share a common
goal will have different views on how to achieve it. The more responsibility
that people have for achieving a goal and the more respect that people
have for their own views, the more likely and intense such conflict will
be. It is when you don't care what happens that you don't expend your energy
arguing about it. (p. 136)
Nutt (1998), who advertises himself as an advisor to business leaders,
notes:
The worst way to reach a decision is to impose your ideas on the organization....
The typical problem isn't just that decisions lack merit. It's that staffers
resent these heavy-handed tactics and thus resist or undermine bosses who
resort to them.
Hammer argues that dissent should be accepted and yoked:
The culture of a process-centered organization must ... encourage people
to accept the inevitability of tension and even conflict... not ... the
old political infighting and back-stabbing, the turf protection and empire
building... rather ... the conflict that inevitably arises when independent
people must work together to achieve multiple objectives in an environment
of flux, ambiguity, and scarce resources... it is possible to fashion various
mechanisms for coping with conflict. But better yet would be for the organization
to fashion a culture that appreciates the creative power of conflict and
seeks to harness it... A tolerance for risk is another aspect of the process-centered
organization that runs counter to traditional corporate cultures... people
... go to great lengths to avoid admitting the existence of problems or
taking responsibility for them. (pp. 164 & 165)
6. Fortunately, in a civilized society the physical
survival of individuals does not depend upon survival of the particular
enterprise with which they may be associated at any particular time. While
the relative wealth of any individual is linked to the success of his or
her enterprise(s), the wealth of the society as a whole depends not only
upon the creation of successful enterprises but equally upon the "creative
destruction" of those that are inefficient. Thus, paradoxically, the greatest
good for the greatest number quite literally depends upon the relative
insecurity of all - a fact that should not be lost in considering the antitrust
actions being brought against Microsoft and Intel, for example.
7. That is not to suggest that the shape of the table
may not be important in terms of symbolism, efficiency, and group dynamics.
However, neither the furniture nor the equipment or media by which shared
understandings are created should be the focal point. As a medium
for records, paper, for example, is passé. While it remains a perfectly
good medium for the display of information in intimate settings, it is
a lousy medium for managing and sharing knowledge of record throughout
an enterprise.
8. Based upon a survey of attendees at the AIIM Show,
Mancini (1998) reports that changing company cultures is among the top
three issues that are driving IT users crazy. The other two are "choosing
the right technology" and "training users in new processes."
9. Government agencies are being urged to consider
outsourcing activities that are not inherently governmental in nature.
Indeed, with reference to information technology investments, Raines' Rules
(1997) specify that agencies should use commercial off-the-shelf software
(COTS) and not undertake any developmental activities that can be conducted
by the private sector. Moreover, legislation has been proposed that would
go even further toward privatizing activities currently conducted by public
employees.
10. "Knowledge management" (KM) is a relatively
new buzz word in the IT industry. However, like the term "groupware," its
meaning is fuzzy. Everyone agrees it is important but no one is exactly
sure what it means. Sullivan (1998) reports the following definitions by
three IT consulting groups:
- Gartner Group - A discipline that promotes an integrated approach to identifying,
capturing, evaluating, retrieving, and sharing all of an enterprise's information
assets. These assets may include databases, documents, policies, and procedures,
and previously uncaptured tacit expertise and experience in individual workers.
- Delphi Consulting - Knowledge is the information resident in people's minds
that is used for making decisions in unknown contexts. KM, in turn, refers
to the practices and technologies that facilitate the efficient creation
and exchange of knowledge on an organization-wide level to enhance the
quality of decision-making.
- CAP Ventures - "KM" encompasses management strategies, methods, and technology
for leveraging intellectual capital and know-how to achieve gains in human
performance and competitiveness.
11. Hough (1998) notes:
... wasteful and redundant processes ... can [account for] as much
as 99% of the total time to make a product... Since most of the time (70%)
is spent on inter-enterprise (going from "outside the box" to "inside the
box"), so is most of the cost. If 99% of the 70% is waste, the best results
have to come from outside the box. Taking the easy route and focusing only
within is not the answer.
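To make the arithmetic concrete - a back-of-the-envelope reading of Hough's
figures rather than a calculation he presents - if roughly 70 percent of total
cycle time is inter-enterprise and 99 percent of that is waste, then
0.99 x 0.70 = 0.693; that is, something on the order of 69 percent of the total
elapsed time is inter-enterprise waste. That is the sense in which the largest
potential gains lie outside the box.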
12. The utility principle of marketing holds that products
and services are valueless unless they are delivered where, when, and in the
form desired by the customer.
13. Negroponte (1995) notes:
... every communication and decision need not go back to a central
authority for permission ... [decentralization] is viewed more and more
as a viable way to manage organizations and governments. A highly intercommunicating
decentralized structure shows far more resilience and likelihood of survival.
It is certainly more sustainable and likely to evolve over time.
14. According to statistics compiled by the Bureau
of Labor Statistics (BLS), the median length of time that workers have
been employed at their current workplace has actually inched up in recent
years - contrary to the perception of loss of job security and increased
mobility. Nevertheless, in 1996 the median job tenure was only 3.8 years,
up from 3.5 years in 1983. However, among older men aged 55 to 64 - who
have disproportionately accounted for institutional memory in the past
- job tenure has indeed declined from 15.3 to 10.5 years, a fairly dramatic
drop. Tenure has also declined significantly for men aged 45 to 54 - from
12.8 to 10.1 years. Conversely, for older women the trend is up slightly
- from 9.8 to 10 years for those aged 55 to 64. (Smart, 1998)
In today's economy and society, 10 years may seem like a long time for
corporate memory to persist and even 3.8 years may exceed the life cycle
of many information products. However, any and all self-respecting enterprises
would wish to persist and prosper for a much longer period. In order to
secure reasonable assurance of doing so, they cannot afford to rely wholly
or perhaps even primarily upon mobile, "carbon-based" human memory units
as the repository for their critical, core information assets. While human
memory clearly constitutes the soul of the institution, just as clearly
it should not be the sole source for crucial corporate knowledge.
15. Even as the cost of computer processing power
has fallen rapidly, Rifkin (1996) argues that human labor has been devalued
to the vanishing point by advances in technology. Frank and Cook (1995)
highlight the problem of greater concentration of wealth among the upper
economic classes and suggest that a progressive tax on consumption is needed
to achieve a more desirable distribution of income while helping to "steer
our most talented citizens to more productive tasks." (p. 231)
16. Coincidentally, DMA is also the acronym for
the Document Management Alliance, an industry group organized under the
auspices of the Association for Information and Image Management (AIIM)
in pursuit of standards for interoperability among electronic document
management systems (EDMSs). Information on the DMA is available at http://www.aiim.org/standards.
17. Hough (1998) laments: "... we have become trapped
in the bureaucracy of our own application code... Have a problem? Solve
it with code - more and more code. We are drowning in code."
18. Besides the control unit, the other two components
of the Central Processing Unit (CPU) are the Arithmetic and Logic Unit
(ALU) and registers.
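By way of illustration only - a minimal, hypothetical sketch in Python, not
drawn from any particular hardware reference - the division of labor among those
three components can be pictured as follows: the control unit is the loop that
fetches and decodes each instruction, the registers are the small set of named
storage cells it reads and writes, and the ALU is the function that actually
performs the arithmetic and logic. The instruction names below are invented for
the example.

    # Toy register machine: the run() loop plays the role of the control unit,
    # the 'registers' dictionary stands in for the register file, and alu()
    # performs the arithmetic and logic operations.

    def alu(op, a, b):
        """Arithmetic and Logic Unit: performs the requested operation."""
        if op == "ADD":
            return a + b
        if op == "SUB":
            return a - b
        if op == "AND":
            return a & b
        raise ValueError("unknown ALU operation: " + op)

    def run(program):
        """Control unit: fetch, decode, and dispatch until the program ends."""
        registers = {"R0": 0, "R1": 0, "R2": 0, "R3": 0}  # register file
        pc = 0                                            # program counter
        while pc < len(program):
            op, dst, src1, src2 = program[pc]             # fetch and decode
            if op == "LOADI":                             # load an immediate value
                registers[dst] = src1
            else:                                         # hand the work to the ALU
                registers[dst] = alu(op, registers[src1], registers[src2])
            pc += 1                                       # advance to the next instruction
        return registers

    if __name__ == "__main__":
        demo = [
            ("LOADI", "R1", 2, None),
            ("LOADI", "R2", 3, None),
            ("ADD", "R0", "R1", "R2"),
        ]
        print(run(demo))  # {'R0': 5, 'R1': 2, 'R2': 3, 'R3': 0}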
19. For a discussion of the importance of document
metadata to government agencies, see Ambur (1996, September).
20. The appendix contains a case study quoting some
of the key points from an article by Pearlstein (1998) reporting on the
success of Xerox Corporation in reinventing itself and using information
technology as an invaluable aid in doing so.
The following passages are excerpted from an article by Steven Pearlstein
entitled "Reinventing Xerox Corp: Success Mirrors Efforts Across Corporate
America," which appeared in the June 29, 1998, edition of The Washington
Post:
... the secrets of ... America's success [include] doing a lot of what
you do best and outsourcing the rest, empowering employees and holding
them accountable, reengineering processes for cost and quality ...
But if Xerox is any indication, what gives American companies an advantage
in global markets these days is that they have figured out ways to hard-wire
the process of adaptation and reinvention into the corporate culture and
to take what began as a one-time revolution and make it ongoing.
... the American genius is for trial-and-error management ...
It's not that American managers, all of a sudden, became brilliant ...
it's that [they] found ways of releasing the creative energies of everyone
in the company and harnessing that over a sustained period.
By dispersing its operations around the globe, Xerox has learned to
take advantage of what every region has to offer - be it talent, technology,
lower cost or access to local markets and capital.
... the task of "thinking global and acting local" has become a full-time
preoccupation.
... "best practices" developed at one plant are quickly disseminated
to every other facility. The factory walls are plastered with the same
printouts on sales and production... And they are spit out from computers
that, increasingly, are tied into a worldwide network using common standards
and software.
Many of [the company's] suppliers are required to have quality programs
certified by Xerox inspectors. And they are linked into Xerox's central
nervous system through phone lines and computers that exchange information
on production volumes, billing and design changes.
It is through its network of alliances with key suppliers ... that Xerox
and other successful American companies have been able to achieve most
of the economies of scale without buying into all of the diseconomies that
would come from trying to do everything in-house - the old model of the
vertically integrated corporation.
With people relying increasingly on computers, printers and faxes to
generate, transfer, produce and store most documents, the old light and
lens copier looked to be going the way of carbon paper.
Computers, hand-held scanners and bar-codes on every part have banished
all paperwork on the [production] line, along with any need to maintain
more than a day or two's worth of parts inventory.
What distinguishes the old Xerox from the new Xerox is that these days
[they] bet the company every time [they] bring out a major new product
line...
Winning those bets ... boils down to a handful of critical skills.