Monitoring Contractor Performance

Owen Ambur

University of Maryland University College, April 27, 1998


The degree to which contractor performance can be effectively monitored depends in large measure upon the adequacy of project planning. For, as Berra (chesco.com) cautions, "You got to be careful if you don't know where you're going, because you might not get there." In other words, the objectives of the project must be clearly stated and, hopefully, well aligned with the business information requirements of the organization.

The technical requirements should be narrowly specified to meet the needs of the customers, i.e., those who are called upon to use the system to carry out the business requirements. As required by Raines' Rules (Raines), small, interoperable, standards-compliant, market-leading commercial off-the-shelf (COTS) software components should be used. Each component should be tested by actual users as soon as possible, allowing no more than 6 months for each deliverable, preferably much less.

The contractor should be expected to meet minimum professional requirements and should be able to produce evidence of having successfully delivered similar applications, including statements of recommendation from previous customers. It is desirable that the contractor have achieved at least the "repeatable" level on the Capability Maturity Model (Software Engineering Institute). The contractor should be willing to guarantee the interoperability of the components of the project as well as the quality and timeliness of the work of its employees in carrying out the integration and installation processes. The contractor should agree, and be held accountable, to provide comprehensive documentation of the project that is found to be acceptable by both the managers and the users of the system. The amount of post-acceptance technical support that may be required should be agreed upon in advance, and the contractor should provide evidence of its capability to provide such support at reasonable cost.

In tandem with a capable contractor, the skills of the Project Leader and his or her technical representative are critical to success. The Contracting Officer and the Contracting Officer's Technical Representative (COTR) should be well trained to carry out their roles in monitoring contractor performance. Measurable performance objectives and timelines should be established. The Project Leader should ensure that measurable outcome objectives are established, the COTR should ensure that those outcome objectives are stepped down to measurable output objectives, and the Contracting Officer should ensure that the contract provisions are clear and enforceable.

Frame (p. 237) suggests there are two principal aspects to contract monitoring. The first "entails regular reviews of progress" and the second "entails looking at whether the contractor is achieving predetermined milestones effectively." On their face, these appear to be opposite sides of the same coin, a distinction without a difference. Frame distinguishes them in terms of regular periods of time versus the schedule of deliverables, which may be irregular. However, there is no point in meeting if the meetings bear no relationship to the project schedule. Nor, on the other hand, should information system project deliverables be scheduled so infrequently as to result in lengthy periods between reviews of incremental components.

Indeed, the COTR should facilitate routine, ongoing communications among the contractor, the Project Leader, and prospective users of the system. The COTR may propose a schedule for periodic status-report/reality-check meetings, but the Project Leader should take responsibility for driving the scheduling of such meetings and ensuring the participation of the appropriate contacts, both from the contractor and among the prospective user community. Each meeting should be conducted in accord with a structured agenda expressly designed to address issues pertinent to the next deliverable as well as adherence to the overall schedule. Indeed, to the degree that the review criteria can be operationalized in the system or its prototype and then subjected to hands-on testing, there may be no need or point in having a contemporaneous meeting just to talk.

Time is not free. Meetings take time. Moreover, actions may actually be delayed in anticipation of discussions at the next meeting. As Calderia (1997) says, "You get what you measure..." To the extent that meetings measure anything, it is talk, but talk is merely a means and not an end unto itself. To that degree, after an initial introductory, get-acquainted session, the need for additional "live" meetings may be taken as evidence of communications failures. It is not hard to imagine successful contractual arrangements that entail no face-to-face meetings at all, even for introductory purposes. Indeed, many meetings are in fact scheduled due to perceived, actual, or feared output failures. If meetings can help to head off such failures, they are probably time and money well spent. However, that does not mean that they are either necessary or the best means of achieving the objective. At worst, they distract attention from the actually desired outcomes.

For example, no frequency of meetings, nor any amount of time spent in them, is a substitute for requirements clearly specified in writing in the statement of work. Moreover, as Schrage (1990) says, "if a [meeting] can't generate a document worth distributing, perhaps it's not a meeting worth holding." Nor, once the development process is underway, are meetings any substitute for hands-on usability testing of each component on a schedule so compressed as to render the need for additional meetings moot. Functionally speaking, meetings are equivalent to talking about doing the work rather than actually doing the work. Insofar as the contractor monitoring process is concerned, care should be taken to ensure that meetings are not allowed to become an amorphous, feel-good alternative to the hard-nosed, practical, performance-oriented testing that is required to assure system quality.

In its best practices guide, the Office of Federal Procurement Policy (OFPP, 1996/97) describes the Quality Assurance Plan (QAP) and its relationship to contractor surveillance.

OFPP also asserts: "The QAP should focus on the quality, quantity, and timeliness etc. of the product to be delivered by the contractor, and not on the steps required or procedures used to provide the product or service." In other words, it is the outputs, or deliverables, that are of concern to the customer. How they are produced should be left up to the contractor.

However, beyond the deliverables specified in the performance work statement (PWS), the QAP should address configuration and change-order management, for which the Project Leader should be held accountable. The COTR should advise the Project Leader and be responsible for technical and cost consultations with the contractor. The Project Leader should be held accountable for clear and complete documentation of user needs, and the COTR should be accountable for clear, effective, and complete translation of those needs into technical specifications.

Frame (p. 238) points out that pressure often arises to change a project's original scope, and it is common for disputes to arise over whether specific changes were authorized. Regardless of who is responsible for paying for them, the customer or the contractor, changes generally do increase costs. Frame emphasizes that such costs are not just monetary but may also be institutional, in terms of the confusion and loss of confidence associated with "abandoning old commitments and making new plans."

Indeed, change is inevitable, and Clemmer (p. 185) suggests that "change management" is a contradiction in terms; one can only prepare to deal with change, not manage it. Toward that end, it is important that the prospective users be represented in the contract monitoring process through a User Acceptance Team (UAT). Members of the UAT should be adequately trained on each component as soon as it is ready for prototyping and/or acceptance, and other users should be trained immediately prior to rollout of each component.

Traditionally, as described by Frame (p. 238), user acceptance has been considered to follow completion of the project. However, that approach has led to many failures. As Frame puts it, "Problems often occur at the customer acceptance stage... Customers may complain that the deliverable does not satisfy all of their needs and wants, as captured in the statement of work." (p. 239) To avoid such problems, users should be involved at every step of the project, from conception to burial. In particular, users should be involved in monitoring and assessing the quality and acceptability of the work of contractors.

Information system users may have little to say about who is hired and fired within their own organization, since that is still presumed in most organizations to be the province of upper-level managers. On the other hand, there is no reason that users should not be intimately involved in determining the quality and acceptability of each component of the information systems delivered by contractors. After all, it is they who will be left to work the systems long after the contractors have departed. The simple fact is that no Project Leader or COTR can be as fully prepared as Clemmer suggests to deal with change as it affects the users themselves; Matsushita (1994) makes much the same point.

Deller (1998) points out that some organizations use integrated project teams (IPTs) that are "charged with managing application development while assuring the efficiency of existing ones." He notes that IPTs are a good approach in many instances, but in order to obtain the highest level of objectivity, agencies have generally turned to contractors to conduct independent verification and validation (IV&V). However, the question is whether that is the best approach, particularly if the actual success of the project depends significantly upon the subjective judgment of its users.

To the degree that proven COTS components are used, user acceptance problems can be minimized. Special care should be taken to apply user scrutiny to customizations applied by the contractor. Each departure from COTS should be treated in effect as a change order, specifically approved in advance by the COTR, if not the Project Leader, and field tested by users on a component-by-component basis in the routine course of the project development process. Customizations should be monitored on a weekly, if not daily, basis by at least one person who will actually be using the customized features.

OFPP also suggests a range of acceptable surveillance methods, including 100-percent inspection and reliance on customer complaints.

Notwithstanding OFPP's assertions, it might be argued, in effect, that 100-percent inspection is the only true alternative in the real world. The only real issue is at what point the "inspection" will occur -- before or after the project has been "accepted" and the contractor has been paid and released from further obligation. In other words, operationally speaking, from the perspective of the user -- who is by definition the "customer" -- there is no difference between the 100-percent-inspection method and the customer-complaint method. Sooner or later, the cows will come home to roost, as Yogi (Berra) might say.

While the Project Leader and COTR should be held accountable for ensuring that the functional and technical output requirements are met by each component, user satisfaction should be taken as the most important measure of project outcome, a point that both Deller and Clemmer (p. 229) emphasize. Measuring user satisfaction should therefore be considered an inherent part of the contractor monitoring process, rather than a post-acceptance activity.

In that regard, Petrillo (1998) acknowledges that it is only common sense to begin to consider such qualitative information, but he argues that past-performance indicators have been oversold as a means of discriminating among contractors for source selection -- particularly for the bulk of the contractors who fall in the middle of the standard distribution. At the same time, he highlights that government policy-makers are worried about grade inflation (i.e., that all contractors will receive the highest rankings), while contractors fear being unjustly downgraded. (See also Brewin et al.)

As Richter (1996) asserts, "ANY contract performance is to one degree or another subjective." Thus, in fairness to contractors as well as to users, members of the UAT should be given a structured survey based upon the functional and technical requirements, to be used in assessing each component in near-real time as it is developed. The structure of the survey lends a measure of certainty for the benefit of both parties. Beyond the issues of certainty and fairness, still more important is the focus that a structured process brings to bear on the desired outputs and outcomes, which should be clearly reflected in the questions asked in the survey. Without such structure -- cutting directly to the core of the issues to be addressed -- output and outcome measures risk being fuzzy and inefficient, if not completely ineffective in achieving the objectivity required to facilitate responsible decision-making.
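
By way of illustration only, the following sketch (in Python, with made-up requirement identifiers, question wording, and a 1-to-5 rating scale that are not drawn from any of the sources cited here) shows one way such a structured survey might be represented, so that each question traces back to a specific requirement in the statement of work and each component receives a score in near-real time:

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class SurveyQuestion:
        requirement_id: str    # identifier from the statement of work (illustrative)
        text: str              # question put to UAT members
        ratings: list[int]     # 1-5 ratings collected from UAT members

    def component_score(questions: list[SurveyQuestion]) -> float:
        """Average of the mean rating for each requirement-linked question."""
        return mean(mean(q.ratings) for q in questions)

    # Hypothetical questions for one component, traced to two requirements.
    questions = [
        SurveyQuestion("FR-12", "Search returns results quickly enough to do my job", [4, 5, 3]),
        SurveyQuestion("FR-13", "Results can be saved to a case file", [2, 3, 3]),
    ]

    print(f"Component score: {component_score(questions):.2f} of 5")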

Based upon the experience of the UAT, the surveys should be refined for completion by a wider sample of the users at a specified time following rollout of each component. To the greatest degree possible, the process for completing and submitting the surveys should be automated, through the use of forms automation technology. Brewin et al. (1998) report that the Defense Information Systems Agency (DISA), for example, has developed an online evaluation tool and has made it available to other Defense Department agencies. (For a generic discussion of E-forms, see Ambur, 1997.) The performance objectives of each component of the project should be rendered in terms of user satisfaction measures in intelligent, electronic forms (E-forms).
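
As a further illustration, and again only as a sketch under assumed component names and satisfaction thresholds, the ratings harvested from submitted E-forms might be tallied by component and compared against the agreed user-satisfaction objectives along the following lines:

    from collections import defaultdict
    from statistics import mean

    # Agreed user-satisfaction objectives per component (illustrative thresholds).
    objectives = {"document_search": 4.0, "case_tracking": 3.5}

    # Overall satisfaction ratings (1-5) harvested from submitted electronic forms.
    submissions = [
        ("document_search", 4), ("document_search", 5),
        ("case_tracking", 3), ("case_tracking", 4),
    ]

    scores = defaultdict(list)
    for component, rating in submissions:
        scores[component].append(rating)

    for component, target in objectives.items():
        observed = mean(scores[component])
        verdict = "meets" if observed >= target else "falls short of"
        print(f"{component}: {observed:.2f} {verdict} the objective of {target}")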

In addition to user satisfaction, other measures of contractor performance identified by DTIC include quality of the product or service, cost control, timeliness, and business relations. Such factors are certainly important, but "quality" and "business relations" are both subjective and closely related to customer satisfaction. Cost control and timeliness can be measured more objectively against pre-agreed standards. However, neither is any substitute for customer satisfaction, and negative performance on either of them is likely to be reflected in customer dissatisfaction. Moreover, a project that is within budget and on schedule but which fails to satisfy the customers can only be characterized as a well-managed rush to failure.
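
That reasoning can be made concrete with a small, purely hypothetical example (the threshold and field names below are assumptions, not DTIC or OFPP guidance): customer satisfaction acts as a gate, so that a deliverable within budget and schedule but below the satisfaction threshold is still flagged as a failure.

    # Illustrative evaluation of one deliverable; all figures are assumed.
    evaluation = {
        "customer_satisfaction": 2.8,  # mean rating on a 1-5 scale
        "cost_variance_pct": -2.0,     # negative means under budget
        "schedule_slip_days": 0,
    }

    SATISFACTION_THRESHOLD = 3.5       # assumed acceptance level

    on_budget = evaluation["cost_variance_pct"] <= 0
    on_schedule = evaluation["schedule_slip_days"] <= 0
    users_satisfied = evaluation["customer_satisfaction"] >= SATISFACTION_THRESHOLD

    if users_satisfied:
        print("Users are satisfied; review cost and schedule performance on their own terms.")
    elif on_budget and on_schedule:
        print("On budget and on schedule, but users are not satisfied: a well-managed rush to failure.")
    else:
        print("Users are not satisfied, and cost or schedule problems compound the failure.")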

After the initial baseline acceptance levels have been established for each component, E-forms should also be used to facilitate ad hoc suggestions for continuous improvement, as well as to periodically survey all or selected groups of users for insight into business process and technology improvement opportunities. (For information on how various agencies are using customer satisfaction forms, see: INS, NPR, Navy, Army, Air Force, DISA, DTIC, DSMC, NAVICP and ABA.)

As Clemmer asserts, it may be impossible to manage change. It may also be true, as Petrillo argues, that the benefit of considering past performance has been oversold as a means of discriminating among most vendors. However, by applying E-forms technology to facilitate a continuous user-feedback loop, contractor performance can be more effectively monitored in near-real time, not only for the benefit of current projects but for future awards as well (OFPP, 1992). Rather than being viewed as relatively large, discrete monoliths, projects themselves can be considered more properly as collections of smaller, interoperable components -- each with its own life cycle within the continuous and ever-expanding flow of knowledge.

Moreover, change itself can be transformed from a threat into an unending stream of opportunity -- a stream within which performance measures will be far more accurate and valid, knowledge management will be more effectively fostered, high performers will be justly recognized, the needs of customers will be more fully satisfied, and taxpayers will obtain greater returns on their investment in government services.


References

Air Force, Department of. Monitor Performance. Available at: http://otsservr.ssg.gunter.af.mil/guide/MONITOR.htm

Ambur, O. (1997). Automated Forms: Putting the Customer First Through Intelligent Object-Oriented Chunking of Information and Technology. University of Maryland University College. Available at: http://www.erols.com/ambur/Eforms.html

American Bar Association (ABA). Past Performance Survey. Available at: http://www.abanet.org/contract/pastperf.html

Army, Department of. Construction Contractor Appraisal Support System (CCASS). Summary information available at: http://www.hq.usace.army.mil/cemp/transition/tmp-ccass.htm

Berra, Y. Quotes available at: http://www.chesco.com/~artman/berra.html, http://www.bemorecreative.com/one/64.htm, and http://www.millennial.org/mail/talk/fmf-humor/hyper/0313.html

Brewin, B., Tillett, L.S., and Varon, E. (1998, March 23). "Agencies start to wield past performance club." Federal Computer Week. pp. 14 & 18.

Calderia, E. (1997) Measuring and Rewarding Trade Contractor Performance. NAHB Research Center. Available at: http://www.nahbrc.org/seminars/9703art.htm

Clemmer, J. (1995). Pathways to Performance: A Guide to Transforming Yourself, Your Team, and Your Organization. Rocklin, CA: Prima Publishing. Excerpts available at: http://www.clemmer-group.com/jimbooks.htm

Deller, R. (1998, March 23). "Make GPRA requirements high-level activity." Government Computer News. pp. 33 & 36.

DISA. Form and Instructions: Task Order Evaluation. Available at: http://www.disa.mil/d4/diioss/diiic/icto7.htm

DSMC. Monitoring Contractor Performance of Software Development Efforts: Difficulties and Approaches. ROAR'n database. Available at: http://www.dsmc.dsm.mil/r/port/aa/aco7626a.htm

DTIC. Contractor Performance Report Form, Rating Guidelines, and Instructions. Available at: http://www.dtic.mil/c3i/bprcd/6002a3.htm#TOC

Frame, J.D.  (1994).  The New Project Management.  San Francisco, CA: Jossey-Bass.  pp. 237-238.

General Services Administration (GSA). Performance Pathways. Available at: http://www.itpolicy.gsa.gov/mkm/pathways/pathways.htm

Immigration and Naturalization Service (INS). Information Technology Partnership (ITP). Summary information available at: http://www.itpolicy.gsa.gov/mkm/pathways/ins-itp.htm

Matsushita, K. (1995) In Covey, S. First Things First. New York: Simon & Schuster. p. 207.

National Performance Review (NPR). Form: Performance Evaluation of Contract. Available at: http://www-far.npr.gov/BestP/Appendix2.html

National Performance Review (NPR). Contractor Performance Report, Rating Guidelines, and Contractor Performance Report Instructions. Available at: http://www-far.npr.gov/BestP/Appendix3.html

NAVICP. Strategy Four: Measure Contractor Performance. Available at: http://www.navicp.navy.mil/prodserv/acqstrat/strat4.htm

Navy, Department of. Contractor Performance Assessment Reporting System (CPARS). Summary information available at: http://www.abm.rda.hq.navy.mil/cpars.html

Office of Federal Procurement Policy (OFPP). (1996, April, last updated 1997, March 14) A Guide to Best Practices for Performance Based Service Contracting, Chapter 5, Quality Assurance Plan (QAP) and Surveillance. Office of Management and Budget (OMB), Executive Office of the President. Available at: http://www.arnet.gov/BestP/BestPPBSC.html

Office of Federal Procurement Policy (OFPP). (1992, December 30) Past Performance Information. Policy Letter No. 92-5. Office of Management and Budget (OMB), Executive Office of the President. Available at: http://www-far.npr.gov/References/Policy_Letters/PL92-5.html

Petrillo, J.J. (1998, March 23). "The benefits of past performance have been oversold." Government Computer News. p. 30.

Raines, F. (1996, October 25). Funding Information Systems Investments. Memorandum for Heads of Executive Departments and Agencies. Available at: http://www.itpolicy.gsa.gov/mke/capplan/raines.htm and summarized at: http://www.fws.gov/laws/itmra.html

Richter, P. (1996, August 9). Evaluating Past Performance. Electronic Forum. ARNet Discussion Group. Available at: http://www-far.npr.gov/Discussions/FAR/Source_Selection/0056.html

Schrage, M. (1990). Shared Minds: The New Technologies of Collaboration. New York: Random House. p. 206. For additional information from Schrage and others on the use of IT in lieu of meetings to facilitate collaboration, see Ambur, 1996, "Critical Success Factors for a Collaborative Database in a Large, Geographically Dispersed Organization." Available at: http://www.erols.com/ambur/Discuss.html

Software Engineering Institute. (1997). Summary of Capability Maturity Model (CMM). Available at: http://www.sei.cmu.edu/technology/cmm.html