Persistence, Parallelism, and RISC: What Smart, Enterprising People and Organizations Can Learn from the Architecture of Dumb Machines

Owen Ambur, University of Maryland University College, July 8, 1998


Let's face it. As Gilder (1989) has said, "Computers are dumb." Not only are they not God, but they're not smart, nor are they brave or daring. Literally, all they can do is manipulate ones and zeros. Barring electrical or mechanical failure, they do exactly as they're told. That's it. That's all they do. And that certainly doesn't seem like much. However, they can do it very, very fast, and that is their essential genius. For better or worse, it is often observed that life tends to imitate art. In that vein, smart, enterprising organizations would do well to learn from the architecture of those "dumb" computing machines. Indeed, such lessons are among those that enterprises whose livelihood depends upon the quality, quantity, and availability of information dare not fail to learn.

In positing a framework for enterprise architecture, Zachman (1996) highlighted the problem and tradeoff involved in designing and building complex systems:

To avoid sub-optimization and needless dissipation of energy in computer-supported systems, Zachman (1998) asserts:

It is somewhat ironic that "sitting together" and having a "meeting of the minds" might be considered to be requirements for designing an information architecture and building a complex system to support daring action. Traditional group-think processes are a prescription for mindless compromise and inaction.(2) While acknowledging that individual genius is not enough, particularly when paradigm shifts are required, Schrage (1995) argues that meetings are not necessarily a requirement for collaboration (p. 40) and that collaboration can be based upon the "patterns and symbols people create." (p. 34)(3)

Indeed, the reality is that people collaborate only to the degree that they effectively share patterns and symbols that are meaningful to each other. Whether they meet or not is irrelevant, except to the degree that non-verbal, emotional cues are helpful in establishing shared understanding.(4) Nor, for the sake of collaboration per se, is it necessary that people agree, or even that they be agreeable in the sense of being polite.(5) In arguing for a new, process-centric approach, Hammer (1996) notes:

In fact, by definition, disagreement or at least a lack of agreement is a prerequisite for the establishment of shared new meaning. Moreover, the quest for knowledge is unending. As Jonathan Miller says, "All work in art and science is the extending of unfinished business." (In Schrage, p. 42) The fourth of Deming's famous Fourteen Points for Total Quality Management (TQM) is that change must be continuous and all-encompassing. (Gabor, 1990) As is often said, "TQM is a journey, not a destination," and it is characterized by continuous improvement.

What is required for continuous improvement and for efficient and effective enterprise action is coalescence around explicit understandings - not around computer screens or water coolers, much less meeting tables or battlefields. In the past, it may have been accurate to say that "only the strong survive." Strength sprang from numbers, and it was vital to mass the troops on the field of battle. Clearly, however, in the cyber age strength depends not upon brute force but upon the ability to share information quickly, efficiently, and accurately. Moreover, in a highly competitive environment, survival depends upon initiative, if not daring.(6)

What is needed for daring action is shared insight, if not necessarily genius, together with the fortitude to step out in front of the pack before a wider consensus emerges in the marketplace. Such insight must exist within the enterprise - regardless of how "the enterprise" is defined - and it must be widely and readily shared. Concerning the ease, breadth, and decentralized manner in which information must be shared, Hough (1998) observes:

Zachman (1998) asserts, "Architecture IS a survival issue because of complexity and the high rates of change and [that] many Enterprises are failing for lack of it." With reference to information sharing, he suggests:

If only that were really true, there is no telling what wondrous advances in efficiency, effectiveness, and discovery might occur. Of course, to an ever-increasing degree it is becoming true, particularly in enterprises that depend upon market forces for survival. Sadly, however, it remains far from true in many organizations, particularly in governmental agencies, whose livelihood is several steps removed from exposure to the direct exchange of value with customers. And in many organizations, systems architecture planning efforts focus on technology rather than information, as if the shape of the table were more important than the understandings to be conveyed over it.(7)

Several laws have been enacted in efforts to force Federal agencies to do a better job of creating, managing, and providing access to Government information. Among them are the Federal Records Act, the Freedom of Information Act, the Electronic Freedom of Information Act Amendments, the Chief Financial Officers Act, and the Information Technology Management Reform Act. (See Ambur, 1998) However, notwithstanding the Vice President's admonitions about a "Government that costs less and works better," too many agencies still find too many excuses not to capture and manage information efficiently and effectively. Often they argue that the necessary tools and procedures simply don't fit their "culture."(8)

If a computer evinced such an attitude and operated that way, who would buy it? No one, of course. The market for hardware and software simply would not tolerate such behavior. While people have a right to be treated with dignity and respect as human beings, rather than as mere cogs in the wheels of a machine, is there any reason to think that the organizations and associations they form for the purpose of doing business should not operate efficiently and effectively?

To explain why dumb things happen to smart companies, Stewart (1998) poses a common scenario and proffers a strategic approach: "You've hired the smartest people and you're spending tons on R&D and customer service, yet you keep blowing it. Time to look at how you manage brainpower." In support of customer-focused, process-oriented organizations, Hammer (1996) notes: "No system that depends upon segregating wisdom and decision-making into a managerial class can possibly offer the speed and agility customers demand." (p. 157) Addressing network architectural issues, Lucky (1997) observes:

Referencing the failures of government intelligence operations, Steele (1998) goes so far as to suggest: "In an age characterized by distributed information, where the majority of the expertise is in the private sector, the concept of 'central intelligence' is an oxymoron, and its attendant concentration on secrets is an obstacle to both national defense, and global peace."

In the competitive milieu of the private marketplace, companies that are bureaucratic, fat, slovenly, secretive, and centrally controlled are not likely to be long for this world. Why should it be any different for governmental agencies that owe their existence not only to the taxes paid by companies but especially to those paid by individual citizens who are expected to work hard and well to earn a decent livelihood for their families?(9) If we can agree with the Vice President that Governmental agencies should be slimmer and more potent - more enterprising and perhaps more daring - is it also possible that we might agree that agencies and organizations of all kinds might take a lesson or two from the structure and functions of the computer? Certainly, computers are not gods, but they do some things very, very well. A pair of particularly pertinent principles are persistence and parallelism, and dare we suggest that RISC ought also to be brought into play.

Persistence in pursuit of a pervasively popular cause is widely recognized as a commendable trait. It is one of ten great moral virtues cited by Bennett (1996). Persistence in the cause of the mundane is less roundly celebrated. However, persistence in the form of "memory" is critical to the success of the computer and so too is "institutional memory" to the success of an enterprise.

Thompson and Strickland (1995) define "core competence" as "something a firm does especially well in comparison to rival[s] ..." and note that it is "... a basis for competitive advantage because it represents specialized expertise that rivals don't have and can't readily match." Implicit is the notion that the knowledge that underpins the competency can be maintained, i.e., that it will persist in the organization. Moreover, as Thompson and Strickland highlight, "Strategic unity and coordination across ... functional areas add power to the business strategy." (p. 42) They also point out, "When it is difficult or impossible to outstrategize rivals ... the other main avenue to industry leadership is to outexecute them ..." (p. 243) Certainly, that means the organization must be able to effectively maintain and efficiently share its corporate knowledge base.(10) Hammer (1996) asserts:

Among the traits Thompson and Strickland identify for core competencies are that they "... rarely consist of narrow skills or the work efforts of a single department" and that the "... selected bases of competence need to be broad enough to respond to an unknown future." (p. 244) In computer programming parlance, "unknown futures" are equivalent to a "conditional branch" that depends upon values that cannot be reliably predicted or determined in advance. Nutt (1998) notes: "... as the business world gets more complex, there's a not-so-fine line between being decisive and being blind ..." Yet, as a requisite for enterprise and business success, Hammer says: "True innovation entails anticipating the opportunities for meeting latent need, for solving problems that customers may not even recognize that they have." (p. 99)
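To make the programming analogy concrete, here is a toy sketch in Python (illustrative only; the variable names are invented for this example): the branch the program takes depends on a value that cannot be known until run time, just as the enterprise's best course depends on futures it cannot reliably predict.

    # A toy "conditional branch": the path taken depends on a value
    # that is not known until run time.
    import random

    market_demand = random.choice(["rising", "falling"])   # the unknown future
    if market_demand == "rising":
        plan = "scale up production"
    else:
        plan = "cut costs and retool"
    print(market_demand, "->", plan)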

Making the case for organizations built around processes comprising complete value chains leading to customers, Hammer cites features common to all of them:

Hammer also addresses the negative attributes that seem to be universal to virtually all contemporary and industrial-era organizations:

Thompson and Strickland point out that "... in traditional functionally organized structures, pieces of strategically relevant activities are often scattered across many departments... [and] parceling strategy-critical work across many specialized departments contributes to an obsession with activity ... rather than results ... [One of the keys] in weaving support activities into the organization design is to establish reporting and coordinating arrangements that ... contain the costs of support activities and minimize the time and energy internal units have to spend doing business with each other." (p. 247, emphasis added)

On the other hand, the theory of the firm holds that companies form when the cost of conducting transactions through the open market - that is, without firms - becomes too high.(11) Thus, the trick is to find the happy medium between too much "process" and too little... between too much versus too little structure, hierarchy, and formality... between too much internal business that does not involve the customer directly versus too little intra-organizational communication to meet the customer's needs effectively. Thompson and Strickland conclude, "Delayered corporate hierarchies and rapid diffusion of information technologies make greater empowerment feasible," thereby helping to minimize needless "business" among internal units versus transactions with external customers. (p. 249)

It is the virtual organization of the corporate knowledge that truly matters. More than the structure of the corporate hierarchy, it is the organization's institutional memory that defines its soul and, at least in private enterprise, determines its life span as well as its quality of life. How an organization fosters and uses its knowledge is key. Consider the comments of these notable, quotable scholars:

In organizations and humans as well as in computers, knowledge is either volatile or nonvolatile, and nonvolatile memory is either alterable or unalterable. In a volatile memory, information decays naturally or is lost when the power is switched off. In nonvolatile memory, information remains without deterioration until deliberately changed. Nonerasable memory cannot be altered, except by destroying the storage unit. (Stallings, p. 103) Volatile memory is functionally equivalent to human consciousness and short-term memory. Nonvolatile memory is equivalent to long-term memory in humans. With training and experience, the information it contains can and inevitably does change, but on the whole, it is relatively stable so long as the circuits are alive. Nonerasable memory is similar in some respects to instincts and reflexes, although reflexes are basically "hardwired" into the nervous system.
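As a rough sketch of the distinction in code (illustrative only; the file name institutional_memory.json and the helper functions are invented for this example), "volatile" knowledge lives in an in-process variable and vanishes when the program exits, while "nonvolatile" knowledge is written to disk and survives a restart:

    import json
    from pathlib import Path

    STORE = Path("institutional_memory.json")   # hypothetical file name

    volatile_memory = {"current_task": "draft the architecture memo"}  # lost at exit

    def remember(key, value):
        """Persist a fact so that it outlives this process (nonvolatile memory)."""
        facts = json.loads(STORE.read_text()) if STORE.exists() else {}
        facts[key] = value
        STORE.write_text(json.dumps(facts, indent=2))

    def recall(key, default=None):
        """Retrieve a persisted fact after a "power cycle" (process restart)."""
        if not STORE.exists():
            return default
        return json.loads(STORE.read_text()).get(key, default)

    remember("records_policy", "capture decisions at the point of creation")
    print(recall("records_policy"))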

Stepping up a level from the individual to the organization, institutional long-term memory requires documentation and record-keeping, because individuals come and go.(14) In addition, "power" in the form of guidance and training must be applied not only to maintain skill levels but especially to acquire the new institutional memories required to respond to changing external realities. Whereas individuals in "civilized" societies are considered to have the right to survive regardless of whether they learn and produce or not, organizations and particularly profit-making businesses are afforded no such privilege in a market economy. Thus, organizations must continually be "refreshing" their institutional memories in order to survive and prosper.

In computers, the information stored in random access memory (RAM) can be retrieved for use in any order, because each block of information is given a unique identifier (memory address). Through the wonders of nature, human memory is RAM. However, due to the prevalence of hierarchies and the use of paper, institutional memory seldom is. Read-only memory (ROM) is nonerasable and cannot be altered except by destruction. Theoretically, institutional memory need not be ROM but high rates of business failures attest that it often is, as enterprises prove incapable of overwriting outdated, excessively persistent corporate memory. Dynamic RAM uses capacitors, meaning the information it contains is readily altered to reflect "current" information. (Pun unintended but acknowledged.) However, like human and institutional memory, DRAM decays unless periodically refreshed. Moreover, due to capacity limits, it must rely upon rapid and accurate communications with other components more capable of compiling and conserving the quantities of content coveted for corporate conquest.

Static RAM (SRAM) is implemented using "flip-flops," which are circuits or devices capable of sustaining either one of two stable states at any time. SRAM holds information as long as power is supplied to it, without refreshment. It might be considered to be equivalent to principles, on which humans, particularly politicians, for example, have been known to flip-flop. The principles of ordinary citizens have been known to change when refreshments of the alcoholic nature are applied, and politicians seem quite prone to refreshments that fall into the category of campaign contributions. However, SRAM is clearly not equivalent to integrity, personal or institutional, which is defined by "what you do when no one is looking" (i.e., no external power is supplied). At the institutional level, SRAM is similar to corporate culture. At the personal level, integrity is ROM.

Speaking of personal traits that are universally critical to organizational success, Hammer asserts: "Fundamentally, all professionals require the same set of attitudes, regardless of their field. The first of these is self-motivation and discipline... Intensity, seriousness of purpose, sincerity, self-reliance: These may be classical virtues, but they are also critical requirements for our new, decidedly nonclassical context." (p. 56) Taking the concept of universal principles further, Hammer also notes:

What could be more important to the success of an organization than to standardize the means by which its members process, share, update, and maintain their corporate knowledge? What could be more critical than memory - in institutions, individuals, or computers? Indeed, memory is a basic element of every computer, and Stallings notes that the "contemporary memory hierarchy" includes the following components: registers, cache, main memory, disk cache, magnetic disk, optical disk, and magnetic tape. (pp. 27 & 104)

Registers are small, very fast memory physically located on the CPU chip itself. They contain the data upon which the CPU actually operates. Instructions and data are generally exchanged between the registers and main memory. However, to improve performance, instructions and data that are likely to be needed by the CPU may also be stored temporarily in cache memory, which is smaller and faster than main memory. Generally speaking, main memory exchanges instructions and data with magnetic disks ("hard drives" and "floppies"). Again, to improve performance, a disk cache - a portion of main memory set aside to buffer disk contents - may be used to avoid repeatedly searching the disk for instructions and data that are frequently used or for which there is a high probability of use in the active program. Magnetic disks are the primary "work horse" media for nonvolatile (persistent) information. Magnetic tapes are generally used to back up data in the event of hard-disk failure. Rapid and significant improvements in optical disk storage technology are increasing its use for large volumes of information that need to be maintained in a stable state over longer periods of time.
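The performance rationale for a cache can be sketched in a few lines of Python (a minimal illustration, not drawn from Stallings; the delay and capacity figures are arbitrary): a small, fast store in front of a slow "main memory" pays off whenever the same items are requested repeatedly.

    from collections import OrderedDict
    import time

    def slow_main_memory_read(address):
        """Stand-in for a slow fetch from main memory or disk (simulated delay)."""
        time.sleep(0.01)
        return f"data@{address}"

    class LRUCache:
        """A tiny least-recently-used cache: keep the hot items close at hand."""
        def __init__(self, capacity=4):
            self.capacity = capacity
            self._items = OrderedDict()

        def read(self, address):
            if address in self._items:              # cache hit: fast path
                self._items.move_to_end(address)
                return self._items[address]
            value = slow_main_memory_read(address)  # cache miss: slow path
            self._items[address] = value
            if len(self._items) > self.capacity:    # evict the least recently used item
                self._items.popitem(last=False)
            return value

    cache = LRUCache()
    for addr in [1, 2, 1, 3, 1, 2]:                 # repeated addresses are served from cache
        cache.read(addr)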

With advances in microprocessor technology, Stallings asserts that computing power has become virtually "free" but that other critical components have not kept up.(15) (p. 36) "Nowhere," Stallings notes, "is the problem created by such mismatches more critical than in the interface between processor and main memory ... the speed with which data can be transferred between main memory and the processor has lagged badly." (pp. 38 & 39) Such is also the case with access to and sharing of corporate knowledge.

Beyond further layering of the memory hierarchy, one of the means by which computer architects have endeavored to solve the problem is through the use of "interrupts" to improve the efficiency of utilization of the central processing unit (CPU). As Stallings notes, "With interrupts, the processor can be engaged in executing other instructions while an [input/output] operation is in progress." (p. 57) Classes of interrupts include:

While interrupts are widely used, they entail two drawbacks: 1) the I/O transfer rate is limited by the speed with which the CPU can test and service a device, and 2) the CPU is tied up managing the transfer. To avoid those problems, another method of improving processing efficiency is Direct Memory Access (DMA).(16) DMA modules relieve the CPU of responsibility for controlling the transfer of information between main memory and input/output devices. The transfer occurs directly between memory and the device via the DMA module, thereby freeing the CPU to do other, less routine, higher-value work. (Stallings, pp. 65 & 199-201)
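The division of labor can be sketched with a background thread standing in for the DMA module (an analogy only, not a hardware model; the buffer sizes are arbitrary): the transfer proceeds on its own while the "CPU" keeps doing other work and receives a single notification when the block has been moved.

    import threading

    main_memory = bytearray(1024)

    def dma_transfer(device_buffer, memory, done_event):
        """Move a block from the "device" into "main memory," then signal completion."""
        memory[:len(device_buffer)] = device_buffer   # the transfer itself
        done_event.set()                              # one notification at completion

    done = threading.Event()
    device_buffer = bytes(range(256)) * 4             # 1,024 bytes from the "device"
    threading.Thread(target=dma_transfer,
                     args=(device_buffer, main_memory, done)).start()

    useful_work = sum(i * i for i in range(100_000))  # the CPU is not tied up polling
    done.wait()                                       # notified only when the block is in place
    print("transfer complete; other work done meanwhile:", useful_work)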

With respect to organizational success, Peters has noted, "It's about centralization and about decentralization - simultaneously," and that is a fairly apt description of parallelism in computers. The central processing unit is the engine that makes the machine "run," but what does "central" mean when there are many CPUs involved in different aspects of the very same process?

Stallings notes that the computer has traditionally been viewed as a sequential machine, but that has never been entirely accurate. (p. 569) For a number of years, however, there has been interest in developing systems with massive numbers of processors working in parallel - much like a large "enterprise" in which many people, offices, and divisions are all working together in pursuit of a common objective. (p. 597) Stallings classifies multi-processor systems into four categories:

Stallings identifies the key characteristics that all multiprocessing systems have in common, as follows:

The latter characteristic highlights the distinction between true multiprocessors and loosely coupled multiprocessing systems. With loosely coupled systems, the normal unit of transfer is a complete file, whereas multiprocessors can interact at the data-element level with a great deal of cooperation between processes. In human terms, multiprocessors are a bit like the long-married couple who can complete each other's sentences. On the other hand, most folks and certainly most contemporary organizations need their agreements specified in writing. Their transactions are conducted at "arm's length," supported by the formal exchange of documents, e.g., the offer and acceptance of contracts.

More and more organizations are building closer and stronger alliances with their suppliers, becoming virtual vertical enterprises. In highly competitive markets, survival may leave no other choice. However, in any enterprise in any market and any organization in any endeavor, at the very least the corporate knowledge contained in its documents and files should be well managed and readily shared. By definition, organizations are at least loosely coupled processing systems. For some organizations, particularly those of a social nature, loose coupling may be exactly what is warranted, but to the degree that business and economic values are involved, tightly coupled, parallel processing may be essential.

For multiprocessing to occur, each CPU must be self-contained, with a control unit, arithmetic and logic unit (ALU), registers, and perhaps a cache memory. However, each CPU shares access to main memory and I/O devices through interconnections. In some configurations the processors may share some information directly with each other, but most communications are conducted by leaving messages and status information in main memory. The memory is often organized so that separate blocks can be accessed simultaneously. As Stallings (pp. 570-573) notes, the organization of multiprocessing systems can be classified as:

In conjunction with any of these hardware approaches, it falls to the operating system to make the multiple processors act in a seamless fashion as one, by scheduling the execution of programs and the allocation of resources. Stallings (p. 574) outlines seven functions performed by a multiprocessor operating system:

Of these functions, Stallings notes that only the last three are unique or substantially different for multiprocessing systems versus uniprocessors. With reference to load balancing, he points out that there are two dimensions to the scheduling function: whether processors are dedicated to specific processes and how the processes are scheduled. Processes can be queued up either in a separate queue for each processor or in a common queue for all of them, in which case different portions of a job may be handled on different processors.

Regardless of whether dedicated or common queues are used, some means of assigning processes to processors is needed. Two approaches have been used: master/slave and peer. With the former approach, the operating system always runs on a particular processor and it is responsible for allocating and scheduling jobs on the other processors. While that approach is simple and straightforward, the master can become a performance bottleneck and failure of the master brings down the whole system. With the peer approach, the operating system can run on any processor, and each processor performs self-scheduling from the pool of available processes. Such an approach can avoid the risk of bottleneck and failures associated with the master, but it requires a more complex operating system that can keep track of the assignment of processes. (Stallings, p. 574)
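The common-queue, peer arrangement can be sketched as follows (a minimal illustration using threads as stand-ins for processors; the job counts are arbitrary): no master hands out work, and each worker simply pulls its next job from the shared pool the moment it becomes free.

    import queue
    import threading

    work_pool = queue.Queue()                  # the common queue of ready processes
    for job in range(12):
        work_pool.put(job)

    def peer_worker():
        """Each worker self-schedules from the shared pool; none is the master."""
        while True:
            try:
                job = work_pool.get_nowait()
            except queue.Empty:
                return                         # nothing left to do
            _ = job * job                      # stand-in for real work
            work_pool.task_done()

    workers = [threading.Thread(target=peer_worker) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()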

As mentioned, each CPU in a multiprocessor system may have its own cache memory, and such "local" memory is required in order to achieve reasonable performance. However, the problem of "cache coherence" results. That is, multiple copies of the same data from main memory may exist in various cache memories, and if the processors are allowed to update that data in their cache, inconsistencies can occur. To avoid that problem, software or hardware solutions are possible but the "MESI" protocol is most widely used.

The simplest software approach is to prevent any shared data from being cached, but that is not efficient since sharing data is of the essence of multiprocessing. More efficient approaches rely upon analyzing programs to determine periods when it is safe to cache data that will not be needed by other processes.

Hardware solutions fall into two categories - directory protocols and "snoopy" protocols. Directory protocols track the use of blocks of data, usually by maintaining a directory in main memory indicating the location of each cached block. Every local action taken on data in cache memory must be reported to the central controller. The downside is that the controller can become a bottleneck and that reporting every change back to it adds communications overhead. However, Stallings (p. 580) reports that directory schemes are effective in large-scale systems that involve multiple buses or other complex interconnections.

Snoopy protocols distribute the responsibility for cache coherence among the cache controllers of the individual processors. Specifically, a cache controller must recognize when it holds a block of data that is shared by others, and when the block is updated locally, the change must be announced to the other caches. Each cache controller "snoops" on the network and reacts appropriately when such a notification is broadcast. Such an approach is well suited to systems that rely upon a common bus. However, one of the reasons for employing local cache is to avoid the need for bus access, and the communications overhead associated with broadcasting and snooping for change notifications may cancel out the gain from using cache memory. (Stallings, p. 580)

Two snoopy protocol approaches have been investigated - write/invalidate and write/update (or write/broadcast). With the write/invalidate approach, a data block may be read by multiple processors at once, but only one may write to it at any time. With write/update there may be multiple writers as well as readers, but updates (writes) must be distributed to all of the others. Stallings (p. 581) reports that the write/invalidate approach is most commonly used in commercial systems. Using two extra status bits, the protocol calls for every cache line to be marked as modified, exclusive, shared, or invalid. The protocol is identified by its acronym - MESI - and the four possible states can be summarized as follows:
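As a rough illustration only (not drawn from Stallings; real controllers also snoop the bus and handle write-backs, which this toy model omits), the four states and two representative transitions for a single cache line might be modeled like this:

    from enum import Enum

    class MESI(Enum):
        MODIFIED = "M"   # dirty; this cache holds the only valid copy
        EXCLUSIVE = "E"  # clean; no other cache holds the line
        SHARED = "S"     # clean; other caches may also hold the line
        INVALID = "I"    # the line holds no valid data

    def on_local_write(_state):
        # Under write/invalidate, a local write leaves the line MODIFIED here
        # and obliges the controller to tell other caches to invalidate theirs.
        return MESI.MODIFIED

    def on_remote_write_observed(_state):
        # Snooping another processor's write invalidates the local copy
        # (a MODIFIED line would first be written back, omitted here).
        return MESI.INVALID

    line = MESI.SHARED
    line = on_local_write(line)             # S -> M; others are told to invalidate
    line = on_remote_write_observed(line)   # M -> I once another cache takes over
    print(line)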

Snoopy protocols are a bit (if not a byte) like the organizational philosophy of "management by walking around" - except they are much more explicit about what the "controllers" are looking for as they "walk" and they are much more precise about how new information should be "flagged" to the attention of cohorts. That is, snoopy protocols support tightly coupled multiprocessing, using a common operating system. They may also enable multiple processors to work simultaneously on different aspects of the very same program, job or task - which is known as parallel processing.

Stallings (p. 598) notes that the term "parallel processing" is normally applied to high-level parallelism among multiple processors, as opposed to low-level parallelism in single-processor machines. Examples of low-level parallelism include:

Encompassing both high- and low-level parallelism, Flynn proposed the following taxonomy of parallel processing computer systems:

In an MIMD organization, each processor must be able to execute all of the necessary operations to act on the data. If the processors share a common memory, they are called "multiprocessors." If each processor has its own memory, communications must be provided via fixed paths or message-switching mechanisms; such systems are called "multicomputers." Stallings (p. 599) notes that practical parallel processing systems may be organized as SIMD machines or as MIMD multiprocessors or multicomputers.
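The flavor of the distinction can be sketched in a few lines (an analogy, not a hardware simulation; the worker functions are invented for this example): the "SIMD" half applies one operation uniformly across a whole data set, while the "MIMD" half runs separate instruction streams on separate data in separate processes.

    from multiprocessing import Pool

    data = list(range(8))

    # SIMD flavor: a single instruction ("double it") applied to multiple data elements.
    simd_result = [x * 2 for x in data]

    # MIMD flavor: different instruction streams operating on different data.
    def stream_a(x):
        return x + 100

    def stream_b(x):
        return x ** 2

    if __name__ == "__main__":
        with Pool(2) as pool:
            r_a = pool.apply_async(stream_a, (data[0],))
            r_b = pool.apply_async(stream_b, (data[1],))
            print(simd_result, (r_a.get(), r_b.get()))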

With reference to high- versus low-level parallelism in professional organizations, Hammer (1996) highlights an important distinction that parallels Stallings' explanation of parallelism in computers:

In other words, professionals must assimilate a complex set of "instructions" and be able to execute them independently and in parallel, just as multiprocessors and multicomputers do. Moreover, the question of instruction complexity in professional organizations is paralleled in the structure of computers as well. In computers the issue is framed in terms of complex instruction set computing (CISC) versus reduced instruction set computing (RISC). As with professional organizations, the trend has been toward more and more complex instructions in computers.

However, that trend may be changing. The cost of hardware has fallen relative to the cost of software, and complex instructions have contributed to the existence and persistence of software bugs over lengthy periods of time.(17) Thus, the obverse proposition has attracted attention - to make the architecture of computers simpler, rather than more complex. Similar efforts have occurred in professional organizations as well, notably in the Federal government. Indeed, such efforts are required by law, as set forth in the Paperwork Reduction Act.

With respect to computers, research has shown that attempting to make the instruction set reflect as closely as possible all of the requirements of the high-level programming language is not the most effective design. Instead, the requirements can best be met by optimizing performance on the most time-consuming features. Tanenbaum, for example, found that 98 percent of dynamically called procedures are passed fewer than six arguments, and that 92 percent of them use fewer than six local scalar variables. Relatively few "words" of instruction and variable elements of data are required to perform the bulk of all operations. (Stallings, p. 433)
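The spirit of the finding can be conveyed with a toy machine (purely illustrative; the instruction names are invented): three simple, register-oriented instructions - in which only LOAD and STORE touch memory - suffice to express a useful computation, echoing the point that a handful of simple operations covers the bulk of the work.

    def run(program, memory):
        """Interpret a tiny load/store instruction set over a few fast registers."""
        regs = [0, 0]                      # a small set of registers
        for op, a, b in program:
            if op == "LOAD":               # register <- memory cell
                regs[a] = memory[b]
            elif op == "STORE":            # memory cell <- register
                memory[b] = regs[a]
            elif op == "ADD":              # register <- register + register
                regs[a] += regs[b]
        return memory

    program = [
        ("LOAD", 0, 0), ("LOAD", 1, 1), ("ADD", 0, 1),   # r0 = mem[0] + mem[1]
        ("LOAD", 1, 2), ("ADD", 0, 1),                   # ... + mem[2]
        ("LOAD", 1, 3), ("ADD", 0, 1),                   # ... + mem[3]
        ("STORE", 0, 4),                                 # mem[4] = the sum
    ]
    memory = [3, 1, 4, 1, 0]
    print(run(program, memory)[4])         # prints 9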

Rather than attempting to address all possible contingencies via complex and highly detailed instructions, it is more important to ensure that the processors have fast and ready access to the relatively few instructions and data elements needed to do their specific jobs. Thus, RISC architectures have focused on three elements:

Building on those three architectural elements, RISC systems are characterized by four operational features:

The issue of simple versus complex instruction sets is not clear cut or fully resolved, nor is the issue of general versus highly tailored processors. For example, Siewiorek, Bell, and Newell (1982) commented: "There is remarkably little shaping of computer structure to fit the function to be performed. At the root of this lies the general-purpose nature of computers, in which all the functional specialization occurs at the time of programming and not at the time of design." (In Stallings, pp. 6 & 7) However, Gilder (1989) proffers:

No doubt, there will be a melding of CISC and RISC computing, as well as a continuing proliferation of both general-purpose and highly tailored processors. After all, it is results that count - in computers and enterprises. Achieving high performance is more important than the artificial architectural constructs employed in any particular system at any point in time. Hammer drives home that point by contrasting the old hierarchical organizational paradigm with the new, process-centric, results-oriented, information-dependent enterprise:

Some people may disagree with Hammer's focus on process, much less McLuhan's notion that "the medium is the message." Certainly, many managers and supervisors are still more interested in giving orders, expecting results, and not being troubled with the details of how to produce them. However, those folks might do well to consider Stallings' description of the "control unit," which is one of three components of every CPU(18):

More than "control" in the traditional sense, control units perform a coordinative and scheduling function. In particular, with reference to TQM and process-orientation, they do not impose themselves as a bottleneck between main memory, specialized processors, functional units, and I/O modules leading to and from customers and suppliers.

Many people and organizations may dispute Zachman's assertion that the computer system is the enterprise. However, fewer would contest the fact that information is key. Thus, persistence, parallelism, and RISC are concepts that smart people and organizations would do well to incorporate into their operations:

Persistence in the form of memory is the vault of knowledge upon which success is built. The alternative is continuously reinventing the primordial wheel on a Sisyphean assembly line. Corporate memory should be dynamic and readily addressable, devoid to the greatest degree possible of needless, artificial, and outmoded hierarchical constructs.

Parallelism is the simple acknowledgment that "two heads are better than one" and that life and logic need not always be constrained by one-track mindedness. Not always is it necessary for the runner to wait for the baton to begin to run his or her segment of the race.

RISC is the recognition that while life and labor may be complex, even the longest journey begins with a single step. Logic and leadership observe simple and basic principles. Seldom is it possible in one great stride to leap from start to finish, and it may very well be better to take very many small steps exceedingly rapidly.

Hough (1998) concludes: "Business success in the next century will be measured by a company's culture, its business style and its business processes - not the specific products it makes. Those who are the most successful will be those who are the quickest to respond to the whims of the customer and the shifts of global demand."

And Simon (1998) says: "The biggest challenge in delivering enterprisewide information is bridging the gap between those who create information ... and ... those who 'consume' it. Not only are there technological barriers, there are cultural barriers. As with all changes, people are resisting... When they share 'their' data, they risk losing control of it."

Notwithstanding such fears and regardless of how information is sliced, diced, and served up, quick and ready access to it is of the essence not only to dumb computers but also to smart people and organizations.(19) Organizations that fail to effectively steward and efficiently share the seeds of information that constitute the core of their capabilities are liable to find themselves out on a limb - perhaps an unconditional branch to nowhere. If they're lucky, they may have time to scramble to a safer roost before it is sawed off.(20) If not, oh, well... At least they won't be the first enterprise to fail to soar for lack of persistence, parallelism, and RISC.


References

Ambur, O. (1996, May 9). Critical Success Factors for a Discussion Database in a Large, Geographically Dispersed Organization. Available at: http://www.erols.com/ambur/Discuss.html

Ambur, O. (1996, September). Metadata or Malfeasance: Which Will It Be? Available at: http://computer.org/conferen/proceed/meta97/papers/oambur/malfea1.html

Ambur, O. (1997, May 29). Automated Forms: Putting the Customer First Through Intelligent Object-Oriented Chunking of Information and Technology. Available at: http://www.erols.com/ambur/Eforms.html

Ambur, O. (1998). Some Provisions of Law Relating to Access to Public Information. Available at: http://www.fws.gov/laws/infolaw.html

Bennett, W.J. (1996). The Book of Virtues. New York, NY: Touchstone.

Frank, R.H., and Cook, P.J. (1995). The Winner-Take-All Society: Why the Few at the Top Get So Much More Than the Rest of Us. New York, NY: Penguin Books.

Gabor, A. (1990). The Man Who Discovered Quality. New York, NY: Penguin Books. p. 18.

Gilder, G. (1989). Microcosm: The Quantum Revolution in Economics and Technology. New York, NY: Touchstone.

Hammer, M. (1996). Beyond Reengineering: How the Process-Centered Organization is Changing Our Work and Our Lives. New York, NY: HarperCollins Publishers, Inc.

Hough, D.A. (1998, May/June). "Business Without Barriers." Document Management. pp. 18 & 19.

Lucky, R.W. (1997, November). "When is Dumb Smart?" IEEE Spectrum. Available at: http://ursula.manymedia.com/david/press/lucky.html (1998, January 16)

Mancini, J.F. (1998, July). "There's No Business Like Show Business." AIIM's inform magazine. p. 8. AIIM's home page is at http://www.aiim.org

Moore, J.F. (1996). The Death of Competition: Leadership and Strategy in the Age of Business Ecosystems. New York, NY: HarperCollins Publishers, Inc.

Negroponte, N. (1995). Being Digital. New York, NY: Alfred A. Knopf, Inc.

Nutt, P.C. (1998). Why do smart companies do such dumb things? Available at: http://www.fastcompany.com/online/11/smartdumb.html

Pearlstein, S. (1998, June 29). "Reinventing Xerox Corp." The Washington Post. pp. A1, A8 & A9.

Peters, T. (1992). Liberation Management: Necessary Disorganization for the Nanosecond Nineties. New York, NY: Alfred A. Knopf, Inc.

Raines' Rules. (1997). Abbreviated version available at: http://www.fws.gov/laws/itmra.html

Rifkin, J. (1996). The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era. New York, NY: Tarcher/Putnam.

Samuelson, R.J. (1998, June 24). "The Trouble With Japan." The Washington Post. p. A17.

Schrage, M. (1995). No More Teams! Mastering the Dynamics of Creative Collaboration. New York, NY: Currency Doubleday.

Siewiorek, D., Bell, C., and Newell, A. (1982). Computer Structures: Principles and Examples. New York, NY: McGraw-Hill. Quoted in Stallings, W. (1996). pp. 6 & 7.

Simon, J. (1998, May/June). "The Soul of a New Corporation." Document Management. pp. 16 & 17.

Smart, T. (1998, July 2). "A Work Force With Surprising Staying Power." The Washington Post. pp. E1 & E2.

Stallings, W. (1996). Computer Organization and Architecture: Designing for Performance. Upper Saddle River, NJ: Prentice Hall.

Steele, R.D. (work in progress). Smart People, Dumb Nations: Harnessing the Distributed Intelligence of the Whole Earth Through the Internet. Abstract available at: http://www.cs.su.oz.au/~bob/Inet95/Abstracts/010.html

Steele, R.D. (1998). "Virtual Intelligence: Conflict Avoidance and Resolution Through Information Peacekeeping." Available at: http://www.oss.net/VIRTUAL/. See also: http://www.oss.net/

Stewart, T.A. (1998). "Why Dumb Things Happen to Smart Companies." Available at: http://www.controlfida.com/News/articoli/WhyDumbHappen.htm

Sullivan, C. (1998, July). "AIIM '98 Report: Industry Experts Evaluate the Show in Anaheim." AIIM's inform magazine. p. 15.

Thompson, A.A., and Strickland, A.J. (1995). Strategic Management: Concepts and Cases. Chicago, IL: Irwin.

Webster's New Collegiate Dictionary. (1975). Springfield, MA: Merriam-Webster.

Zachman, J.A. (1996). "The Framework for Enterprise Architecture: Background, Description and Utility". La Canada, CA: Zachman International.

Zachman, J.A. (1998). "Enterprise Architecture: Looking Back and Looking Ahead". (Advance draft for publication in May edition of DataBase Newsletter). La Canada, CA: Zachman International.


End Notes

1. For a discussion of complexity with respect to the design of databases, see Ambur (1997) - particularly the section entitled "Reverse Engineer People and Processes or Data and Databases?" The author argues that the database approach to designing applications is doomed to failure in large-scale enterprises.

2. Samuelson (1998) notes:

3. The original title of Schrage's book when it was first published in 1989 was Shared Minds: The New Technologies of Collaboration.

4. For a discussion of the merits of using "groupware" as an alternative to traditional meetings, see Ambur, 1996.

5. Schrage quotes Francis Crick, who co-discovered the double helix, as saying, "Politeness is the poison of all good collaboration in science." Schrage suggests that "good manners" should not be allowed to "get in the way of a good argument." (p. 35) Hammer concurs:

Nutt (1998), who advertises himself as an advisor to business leaders, notes:

Hammer argues that dissent should be accepted and yoked:

6. Fortunately, in a civilized society the physical survival of individuals does not depend upon survival of the particular enterprise with which they may be associated at any particular time. While the relative wealth of any individual is linked to the success of his or her enterprise(s), the wealth of the society as a whole depends not only upon the creation of successful enterprises but equally upon the "creative destruction" of those that are inefficient. Thus, paradoxically, the greatest good for the greatest number quite literally depends upon the relative insecurity of all - a fact that should not be lost in considering the antitrust actions being brought against Microsoft and Intel, for example.

7. That is not to suggest that the shape of the table may not be important in terms of symbolism, efficiency, and group dynamics. However, neither the furniture nor the equipment or media by which shared understandings are created should be the focal point. As a medium for records, paper, for example, is passe. While it remains a perfectly good medium for the display of information in intimate settings, it is a lousy medium for managing and sharing knowledge of record throughout an enterprise.

8. Based upon a survey of attendees at the AIIM Show, Mancini (1998) reports that changing company cultures is among the top three issues that are driving IT users crazy. The other two are "choosing the right technology" and "training users in new processes."

9. Government agencies are being urged to consider outsourcing activities that are not inherently governmental in nature. Indeed, with reference to information technology investments, Raines' Rules (1997) specify that agencies should use commercial off-the-shelf software (COTS) and not undertake any developmental activities that can be conducted by the private sector. Moreover, legislation has been proposed that would go even further toward privatizing activities currently conducted by public employees.

10. "Knowledge management" (KM) is a relatively new buzz word in the IT industry. However, like the term "groupware," its meaning is fuzzy. Everyone agrees it is important but no one is exactly sure what it means. Sullivan (1998) reports the following definitions by three IT consulting groups:

11. Hough (1998) notes:

12. The utility principle of marketing holds that products and services are valueless unless they are delivered where, when, and in the form desired by the customer.

13. Negroponte (1995) notes:

14. According to statistics compiled by the Bureau of Labor Statistics (BLS), the median length of time that workers have been employed at their current workplace has actually inched up in recent years - contrary to the perception of loss of job security and increased mobility. Nevertheless, in 1996 the median job tenure was only 3.8 years, up from 3.5 years in 1983. However, among older men aged 55 to 64 - who have disproportionately accounted for institutional memory in the past - job tenure has indeed declined from 15.3 to 10.5 years, a fairly dramatic drop. Tenure has also declined significantly for men aged 45 to 54 - from 12.8 to 10.1 years. Conversely, for older women the trend is up slightly - from 9.8 to 10 years for those aged 55 to 64. (Smart, 1998)

In today's economy and society, 10 years may seem like a long time for corporate memory to persist and even 3.8 years may exceed the life cycle of many information products. However, any and all self-respecting enterprises would wish to persist and prosper for a much longer period. In order to secure reasonable assurance of doing so, they cannot afford to rely wholly or perhaps even primarily upon mobile, "carbon-based" human memory units as the repository for their critical, core information assets. While human memory clearly constitutes the soul of the institution, just as clearly it should not be the sole source for crucial corporate knowledge.

15. Even as the cost of computer processing power has fallen rapidly, Rifkin (1996) argues that human labor has been devalued to the vanishing point by advances in technology. Frank and Cook (1995) highlight the problem of greater concentration of wealth among the upper economic classes and suggest that a progressive tax on consumption is needed to achieve a more desirable distribution of income while helping to "steer our most talented citizens to more productive tasks." (p. 231)

16. Coincidentally, DMA is also the acronym for the Document Management Alliance, an industry group organized under the auspices of the Association for Information and Image Management (AIIM) in pursuit of standards for interoperability among electronic document management systems (EDMSs). Information on the DMA is available at http://www.aiim.org/standards.

17. Hough (1998) laments: "... we have become trapped in the bureaucracy of our own application code... Have a problem? Solve it with code - more and more code. We are drowning in code."

18. Besides the control unit, the other two components of the Central Processing Unit (CPU) are the Arithmetic and Logic Unit (ALU) and registers.

19. For a discussion of the importance of document metadata to government agencies, see Ambur (1996, September).

20. The appendix contains a case study quoting some of the key points from an article by Pearlstein (1998) reporting on the success of Xerox Corporation in reinventing itself and using information technology as an invaluable aid in doing so.


Appendix

The following passages are excerpted from an article by Steven Pearlstein entitled "Reinventing Xerox Corp: Success Mirrors Efforts Across Corporate America," which appeared in the June 29, 1998, edition of The Washington Post: