Vanguard Recovery Assistance v. United States, 101 Fed. Cl. 765 (2011)


    OPINION AND ORDER

    LETTOW, Judge.

    This post-award bid protest is before the court on plaintiff’s motion for judgment upon the administrative record and the government’s and intervening-defendants’ cross-motions for judgment. Plaintiff, Vanguard Recovery Assistance, Joint Venture (“Vanguard”), alleges that the Federal Emergency Management Agency (“FEMA” or the “agency”) improperly evaluated proposals in a multi-award procurement of architect-engineering services to support disaster relief. Pl.’s Mem. in Support of Mot. for Judgment on the Admin. Record (“Pl.’s Mem.”) at 1. Specifically, Vanguard claims that FEMA overlooked information about the incumbents’ performance that was “too close at hand” to ignore, used disparate standards in assessing the proposals, and misapplied the evaluation criteria. Id. at 1-2. Each of the four recipients of the procurement contracts, Fluor Enterprises, Inc. (“Fluor”); Architecture, Engineering, Consulting, Operations and Management Services, Inc. (“AECOM”); Nationwide Infrastructure Support Technical Assistance Consultants, LLC (“NISTAC”); and CH2M Hill-CDM PA TAC Recovery Services (“CCPRS”), has intervened in the protest.

    The case has been complicated by extensive prior protest proceedings, including one agency-level protest and several protests before the Government Accountability Office (“GAO”). Those proceedings resulted first in amendment of the solicitation and then in two separate recommendations by GAO that FEMA take corrective action to remedy defects in the procurement. See Vanguard Recovery Assistance, J.V. v. United States, 99 Fed.Cl. 81, 85-87 (2011) (citing, among other things, Shaw-Parsons Infrastructure Recovery Consultants, LLC; Vanguard Recovery Assistance, J.V., B-401679.4-.7, 2010 CPD ¶ 77, 2010 WL 1180085 (Comp.Gen. Mar. 10, 2010) (“GAO’s First Decision”); Shaw-Parsons Infrastructure Recovery Consultants, LLC; Vanguard Recovery Assistance, J.V., B-401679.8-.10, 2010 CPD ¶ 211, 2010 WL 3677164 (Comp.Gen. Sept. 8, 2010) (“GAO’s Second Decision”)).2 The consequent corrective actions led to revisions in FEMA’s procurement decisions.

    The record before GAO contained conflicting indicia about materials and information available to FEMA regarding the performance of incumbent contractors in predecessor procurements. Those conflicts were reflected in competing declarations provided to GAO by FEMA officials and Vanguard, and the disagreements persisted in the proceedings before the court. Some controversies over the content of the evolving record of the procurement were resolved by the court in its prior decision, see Vanguard, 99 Fed.Cl. at 92-103, but several disputes about the record have spilled over to consideration of the merits. During the pendency of this case before the court, FEMA on several occasions retrieved additional materials from its saved electronic files, and the government recently submitted a third motion for leave to correct the administrative record, see Def.’s Third Mot. Seeking Leave to Correct the Admin. Record (“Def.’s Mot. for Third Correction”), (ECF No. 115), which motion remains unresolved.

    FACTS3

    Vanguard protests FEMA’s award of contracts styled Public Assistance Technical Assistance Contracts or “PA TAC III.” AR 1-1, 3-7.4 The contracts call for the provision of “architect-engineer, consultant, and other professional services in support of the Public Assistance (PA) Program.” AR 2-2. The PA Program provides federal aid to state and local governments, Indian tribes, and certain nonprofit organizations in the wake of man-made or natural disasters. AR 6-28. These entities use the grants both to deal with the immediate aftermath of a disaster (e.g., clearing debris) and then to facilitate long-term recovery (e.g., repairing public infrastructure). Id. One of the main services to be procured under PA TAC III is professional assistance in assessing the damage inflicted by a disaster and estimating the cost of rebuilding. AR 6-34 to -36.

    A. The Two Prior Contracts: PA TAC I and PA TAC II

    PA TAC III is the latest in a series of technical assistance contracts to support the PA Program. See AR 105-5204. The first of these contracts, PA TAC I, was awarded in 2004, and the second, PA TAC II, was awarded in 2006. Def.’s Cross-Mot. at 4. Both were multiple-award contracts, and three firms were awardees in both procurements: Fluor, NISTAC, and Emergency Response Program Management Consultants, which later became AECOM. See AR 105-5205 & n. 1; Def.’s Cross-Mot. at 4.

    One common thread across all three of the PA TAC contracts has been Ms. Lorine Boardwine. She was the Chief of the Public Assistance Technical Assistance Contract Management Branch of the Public Assistance Division of FEMA. See Vanguard, 99 Fed.Cl. at 93 n. 12. She served as the Contracting Officer’s Technical Representative (“COTR”) on both the PA TAC I and PA TAC II contracts. See Dep. of Lorine Boardwine (June 7, 2011) (“Boardwine Dep.”) at 9:9-13; id. at 131:16-17.5 She also was a voting member of the Source Evaluation Board (“SEB”) for the PA TAC II contract. Id. at 86:16-20, 87:12. In the PA TAC III procurement, Ms. Boardwine served in the non-voting role of Technical Advisor to the SEB. AR 136-5694.

    1. FEMA’s failure to prepare performance evaluations during PA TAC I and PA TAC II

    During the PA TAC I contract, FEMA employees were supposed to report to Ms. Boardwine on the quality of the contractors’ work. Boardwine Dep. at 116:1-20. In principle, these employees, called Task Monitors, would fill out Technical Evaluation Worksheets (“TEWs”) assessing each contractor’s performance. Id. In practice, the Task Monitors often neglected this responsibility. Id. at 117:24 to 118:25. Ms. Boardwine estimates that she received approximately one-third of the expected TEWs. Id. at 118:14-17. She brought the low response rate to the attention of her supervisor on at least one occasion, but the agency seems to have taken no action on the matter. See id. at 118:18-25.6

    As meager as the evaluations were under PA TAC I, they far surpassed the information gathered under PA TAC II, the successor contract. Although Ms. Boardwine had worked with the PA TAC II awardees to develop a quality assurance surveillance plan (“QASP”) for the new contract, Boardwine Dep. at 133:18 to 134:4; AR 79-4905 (Decl. of Lorine Boardwine (Jan. 22, 2010) (“First Boardwine Decl.”)), she was unable to persuade FEMA management to incorporate the QASP into the contract, apparently due to the agency’s convoluted approval process. Id. at 138:6 to 141:2. Consequently, there was no system in place for gathering performance evaluations for the contractors’ work under PA TAC II. Id. at 142:23-24; see also AR 105-5218 n. 20 (GAO’s First Decision) (noting that FEMA’s failure to include performance assessments in the contract violated various Federal Acquisition Regulations, citing particularly the First Boardwine Declaration and 48 C.F.R. (“FAR”) §§ 36.604 and 37.601(b)).

    This is not to say that FEMA never received any information on the performance of the PA TAC II contractors. What evaluations it received, however, were informal, sporadic, and undocumented. Boardwine Dep. at 142:24 to 143:8; see also id. at 123:11 to 129:6; AR 105-5217 n. 19 (GAO’s First Decision). As the COTR, Ms. Boardwine was the “single point of contact” for all the Task Monitors. Boardwine Dep. at 16:14-16. From time to time they would call Ms. Boardwine and express their frustration with the work performed by a contractor. Boardwine Dep. at 44:6-8; see also id. at 123:11-19. On the whole, the Task Monitors expressed frustration “with just about every aspect of ... the [contractors’] work.” Id. at 128:14-18; see also id. at 125:12-15. They leveled these sorts of complaints against all three of the PA TAC II contractors. Id. at 147:4-12. The Task Monitors criticized individual employees of contractors for being generally inexperienced, but they rarely substantiated that criticism with examples. Id. at 26:16 to 27:16. Ms. Boardwine would encourage them to document their complaints in writing and cite specific examples so that FEMA could take action. Id. at 143:1-16, 154:8-13. When the Task Monitors did lodge specific complaints, they addressed conduct of particular contractor personnel, e.g., tardiness, intoxication on the job, or lack of professionalism. Id. at 25:7 to 26:13.

    2. GAO’s and Inspector General’s reports concerning PA TAC I and PA TAC II

    Although FEMA had limited insight into the PA TAC contractors’ performance, two federal entities undertook oversight and review. GAO issued two reports that assessed the quality of work performed under PA TAC I and PA TAC II. See U.S. Government Accountability Office, GAO-08-301, Disaster Cost Estimates: FEMA Can Improve Its Learning from Past Experience and Management of Disaster-Related Resources (Feb. 2008) (“First GAO Report”); U.S. Government Accountability Office, GAO-09-129, Disaster Recovery: FEMA’s Public Assistance Grant Program Experienced Challenges with Gulf Coast Rebuilding (Dec. 2008) (“Second GAO Report”). The Office of the Inspector General for Homeland Security also issued a report on the management of the PA TAC contracts. See Department of Homeland Security, Office of Inspector General, OIG-11-02, Improvements Needed in FEMA’s Management of Public Assistance-Technical Assistance Contracts (Oct. 2010) (“OIG Report”).7 Ms. Boardwine was familiar with both GAO reports prior to the PA TAC III solicitation, Boardwine Dep. at 45:4-7, and she was interviewed for the OIG Report, id. at 45:20-25.

    The First GAO Report examined FEMA’s cost estimating work in 83 disasters between 2000 and 2004. First GAO Report at 9.8 Its focus was not limited to the PA Program but encompassed cost estimates generated for the Individual Assistance Program, the Hazard Mitigation Grant Program, mission assignments, and the agency’s overall administrative costs. Id. at 11-14. GAO found that, contrary to the agency’s claims, FEMA failed to generate “accurate” cost estimates — i.e., estimates within 10% of the actual costs — until six to twelve months after the disaster. Id. at 14-15. The delay reportedly hampered decision makers in making timely, informed budget choices. Id. at 14, 16-17. The report identified a number of challenges to generating accurate cost estimates, some outside of the agency’s control and others within its power to address. Id. at 16-25. Notably, the First GAO Report did not discuss the qualifications or abilities of any of the PA TAC I or PA TAC II contractors. In fact, the report never mentioned the word “contractor.”

    The Second GAO Report addressed problems faced by the PA Program which slowed down its efforts to restore the Gulf Coast in the aftermath of Hurricanes Katrina and Rita in 2005. Second GAO Report at 2-4. GAO listed unreliable cost-estimating as one of five main obstacles to smooth day-to-day operation of the PA Program. Id. at 19. It identified two primary causes for the inaccurate cost estimates. First, both the agency’s employees and its contractors were unfamiliar with FEMA’s Cost Estimating Format (“CEF”). A 2007 study found that only 50 percent of PA TAC personnel were trained in using the CEF. Id. at 25. Second, FEMA staff often generated cost estimates very early in the process, before the full extent of the hurricanes’ damage was known. Id. at 26.

    The Second GAO Report also noted professional-staff challenges, which affected not only cost estimating but FEMA’s overall mission. At least half of the contractors lacked PA program experience or adequate training before being deployed to the Gulf Coast. Id. at 37. Agency officials explained that, “as a cost-benefit decision, FEMA does not require its contractors to take PA training prior to a disaster, but the agency typically provides some training on the PA program to staff right before they are deployed to a specific disaster.” Id. The report also found that much of the PA staff — presumably agency employees and contractors alike — was inexperienced, resulting in inaccurate or contradictory decisions. Id. at 38. In particular, some FEMA staff lacked the specialized skills needed to assess damage to certain infrastructure. Id. Although the Second GAO Report discusses PA TAC contractors generally, it does not single out any particular contractor for criticism.

    The report by the Office of the Inspector General for Homeland Security focused on flaws in FEMA’s contract management, including its method of awarding task orders and its failure to evaluate contractor performance. OIG Report at 7, 10-12. The report touched briefly on the quality of the contractors’ work in two respects. First, it found that “contractor staff availability was inadequate in quantity and quality to meet” the needs of the Long-Term Community Recovery (“LTCR”) program. Id. at 17. The report remarked that FEMA had since issued a separate contract for the LTCR program. Id. at 18. As a result, the LTCR program was omitted from the Statement of Work for PA TAC III. Compare AR 6-39, with Pl.’s Mem. Ex. 4, at 000193 (PA TAC II Statement of Work). Second, the OIG Report discussed contractor performance by referencing another document, the Remedial Action Management Plan (“RAMP”) Report issued by FEMA in March 2009. OIG Report at 12. According to the OIG Report, the RAMP Report “cited issues with PA[]TAC contractor performance” and “reported that additional actions were needed to ensure that project worksheets’ estimated cost of projects were accurate.” Id. However, the OIG Report did not provide details on this point, and the RAMP Report itself is not part of the administrative record.

    B. The PA TAC III Solicitation

    On February 19, 2009, FEMA issued Sources Sought Notice (“SSN”) HSFEHQ-09-R-0411 for PA TAC III. AR 1-1, 2-6. The SSN envisioned the award of up to four PA TAC III contracts. AR 2-2. Each contract would be an Indefinite-Delivery, Indefinite-Quantity contract with a minimum guaranteed amount of $500,000. Id. The government would issue Time and Materials and Firm Fixed Price task orders against these contracts to obtain services during disasters. Id. The total expected life-cycle cost of the procurement was estimated at $2 billion. Id.

    Under the terms of the SSN, FEMA would evaluate the proposals against five criteria: (1) “specialized experience and technical competence,” (2) “capacity to accomplish work within required time,” (3) “professional qualifications,” (4) “past performance,” and (5) “location in the general geographical area of the project and knowledge of the locality of the project.” AR 2-3 to -5. The first three factors were to be weighted equally and were to be more important than the other two criteria. AR 2-3. The fifth factor was more important than the fourth. Id.

    The first factor, “specialized experience and technical competence,” had three subfactors: (a) experience in cost estimating; (b) experience in dealing with issues relating to environmental and historical preservation, insurance adjustments, and hazardous waste removal; and (c) experience in staffing. AR 2-3 to -4. For Subfactor 1(a), offerors were instructed to “[g]ive a detailed explanation of the reasons for any variances that exceed plus or minus 10 percent” of the estimated cost. Id. The SSN stated that Subfactor 1(a) was significantly more important than Subfactor 1(b) but did not indicate the relative importance of Subfactor 1(c). AR 2-3. FEMA eventually amended this factor after an agency-level protest described infra.

    Under the second factor, “capacity to accomplish work within required time,” the offerors would be judged on their ability to provide sufficient numbers of technical specialists. AR 2-4. Each firm was required to demonstrate its ability to identify at least 300 people in 37 different professions on 48 hours’ notice. Id. FEMA would evaluate the education and experience of those staff under Factor Three, “professional qualifications.” Id.

    For the fourth factor, “past performance,” the agency would evaluate the firms’ prior success on contracts of similar size, type, and scope “in terms of project management, accuracy of cost estimates, cost control, quality control, completion of projects within budget, and compliance with performance schedules.” AR 2-4. To that end, each offeror was required to provide references for at least five contracts within the past three years. Id. The agency reserved the right “to use information outside of the response in evaluating past performance, including agency knowledge of the firm[’]s performance.” AR 2-5.

    Lastly, under the fifth factor, FEMA would evaluate firms based on “location in the general geographical area of the project and knowledge of the locality of the project.” AR 2-5. The SSN explained that the “[l]ocation of personnel in multiple locations is generally indicative of greater flexibility and experience in a wider variety of work.” Id.

    The SSN stated that FEMA would evaluate the proposals in a two-step process. See AR 2-3. For the first step, the agency required each offeror to submit a Standard Form (“SF”) 330.9 Id. Based on these SF 330s, all but the most technically highly rated firms would be eliminated from the competition. Id. The agency would invite the remaining offerors to Washington, D.C., for “one-on-one due diligence sessions” to assist them in preparing their submissions for the second step. Id. In step two, the firms would make oral presentations before the agency, after which they would be re-evaluated and re-ranked. Id.

    C. Past Performance Questionnaires and the First Evaluation of Offerors’ Past Performance

    FEMA received nine offers in response to the SSN. Seven of those offerors were selected for the shortlist and inclusion in the due diligence sessions and final rankings and ratings, namely, (1) Vanguard, (2) Fluor, (3) AECOM, (4) NISTAC, (5) CCPRS, (6) Shaw-Parsons Infrastructure Recovery Consultants, LLC (“Shaw-Parsons” or “IRC”), and (7) PB Americas, Inc. (“PB”). See Vanguard, 99 Fed.Cl. at 85.

    On April 20, 2009, shortly before the firms gave their step-two oral presentations, FEMA instructed the offerors to have at least five of their clients complete a Past Performance Questionnaire (“PPQ”). See AR 45-4424 to -4425. The form FEMA used for the PPQ asked the clients to rate the firms in four categories: quality of product or service, cost control, timeliness of performance, and business relations. E.g., AR 24-1598 to -1601 (Vanguard’s PPQ from [* * *]). The form also had space for the client to provide narrative comments about the offeror’s performance. Id. FEMA originally gave the offerors a little over one day to have the clients complete and send the PPQs back to the agency. AR 45-4425. This deadline was later extended to three days. AR 45-4423. Presumably because of this extraordinarily short time for submission, two of the offerors — including Fluor — could not obtain five PPQs from their clients. See AR 41-4327. The other firms were able to cause five or six PPQs to be submitted. Id.

    All three incumbents — Fluor, NISTAC, and AECOM — asked Ms. Boardwine to complete a PPQ for their work under PA TAC II. Boardwine Dep. at 91:9-13; see also Def.’s Mot. for Third Correction Ex. A, at 5, 11. However, Ms. Boardwine declined to provide PPQs for any of the incumbents. Boardwine Dep. at 91:14; see also Def.’s Mot. for Third Correction Ex. A, at 4, 9. In a contemporaneous e-mail to one of her colleagues, she explained that “the Contracting Officer has determined that due to my involvement on the PA TAC Acquisition, my response to a [PPQ] for the acquisition is an apparent conflict of interest.” Def.’s Mot. for Third Correction Ex. A, at 1. However, she also noted that “the SEB may consult with me, at its discretion, should it have questions on the current contractors[’] past performance.” Id.; see also id. Ex. A, at 3. In her deposition, Ms. Boardwine elaborated on the decision that she would not provide PPQs for the incumbents:

    [I]n writing the program requirements, and working on writing the sources sought notice, and participating in industry, and noting the needs of the Public Assistance Program, [my supervisors and I felt that] we could unduly ... influence a decision, and that was not my intent. I did not want to write something that could be perceived [as], [“]gosh, she really wanted the incumbents in there, and look at that knock out, stand out, performance assessment that she gave.[”] Or, [“]man, you know, they must have really got on her bad side, because ... she has just crucified them.[”] I wanted to be as objective as possible, and again I thought that I was to[o] vested in the actual procurement itself to provide a response that would not have the appearance of somehow being biased. [So] essentially I ... recuse[d] myself from all three [requests to provide PPQs for the incumbent contractors].

    Boardwine Dep. at 93:18 to 94:10, 94:15.

    In lieu of a PPQ prepared by her, Ms. Boardwine says that she recommended the incumbents seek out the PA TAC II Task Monitors to answer the PPQs. Boardwine Dep. 91:16-24.10 However, none of the monitors submitted PPQs for the incumbents. See AR Tabs 12, 15, 18. Vanguard avers that the incumbents never contacted the Task Monitors regarding the PPQs, Pl.’s Mem. at 16, a claim that is not disputed by the defendant-intervenors, see Def.-Intervenors’ Cross-Mot. at 25. However, two of the offerors, Fluor and CCPRS, obtained very favorable PPQs for their work on PA TAC I. See AR 9-193; AR 15-699 to 703.11

    Six clients submitted PPQs evaluating Vanguard: [* * *] (valued at $[* * *] million or $[* * *] million), compare AR 22-1442, with AR 24-1620, [* * *] ($[* * *] million or $[* * *] million), compare AR 22-1432, with AR 24-1616, [* * *] (over $[* * *] million), AR 24-1598, [* * *] ($[* * *] million), [* * *] ($[* * *] thousand), and [* * *] ($[* * *] thousand or $[* * *] thousand), compare AR 22-1434, with AR 24-1602. Vanguard also listed six reference contracts in its SF 330 to satisfy Factor 4. See AR 22-1468 to -1469. The six clients who completed PPQs for Vanguard did not overlap completely with the six clients listed in its SF 330. Four of them were the same: [* * *], [* * *], [* * *], and [* * *]. AR 22-1468 to -1469. Two listed in the SF 330 were different: [* * *] ($[* * *] million), AR 22-1438, and [* * *] (value unknown). Id.

    The SEB chose not to use the PPQs in the initial evaluation. See, e.g., AR 40-4299; cf. AR 40-4305 (commentary of SEB on re-review). In that first evaluation, FEMA determined that Vanguard had submitted information for five contracts of similar size, type, and scope. AR 40-4299.12 However, the SEB regarded as a weakness the fact that Vanguard’s “sample projects lack specificity and clarity to allow the board to understand the full requirement and extent of the project and how the stated accomplishment impacts the overall job.” Id. It rated Vanguard’s past performance as acceptable. Id.

    D. The Initial Award Decision, the Agency-Level Protest, and FEMA’s Amendment of Subfactor 1(a)

    On June 15, 2009, FEMA announced its decision to award the four available contracts to Fluor, AECOM, NISTAC, and Shaw-Parsons. See AR 105-5206 (First GAO Decision). This decision was protested at the agency level by two of the unsuccessful competitors. Id. In response, on August 13, 2009, FEMA took corrective action by amending Subfactor 1(a). See AR 6A-40.1; see also AR 105-5206 (GAO’s First Decision). The amendment provided more detail regarding how firms would be evaluated under that subfactor. See AR 105-5206 (GAO’s First Decision). Amended Subfactor 1(a) required each offeror “to identify completed projects from the past five (5) years that demonstrate its experience developing reliable cost estimates.” AR 6A-40.2. These examples should “demonstrate [the firm’s] experience, methodology, and tools” for cost estimating. Id. As before, the offerors were asked to explain any estimates that varied by more than 10 percent of the actual costs. Id. The amendment clarified that Subfactor 1(a) was more important than Subfactors 1(b) and 1(c) combined, and that the latter two were of equal importance. Id.

    The seven short-listed firms dutifully complied with FEMA’s instructions, and the SEB evaluated each of their Subfactor 1(a) submissions. In grading the submissions under this subfactor, the SEB looked at both the offerors’ general experience and the quality of past work. See, e.g., AR 35-4094 (“The 12 projects presented, all completed within the last five years, total over $8.71 billion with an overall variance of just over 2% and demonstrate an extensive level of reliability in the firm[’]s cost estimating abilities.” (emphasis added)). The SEB also considered the extent to which each firm’s submission demonstrated its experience in particular areas of importance to the agency. See AR 36-4133 (finding a weakness in AECOM’s proposal because it “did not address its use of forward pricing for multi-year projects”).

    In evaluating Subfactor 1(a) for the three incumbents, the SEB did not consider the quality of their cost-estimating work for PA TAC I or PA TAC II. None of the incumbents cited these contracts in their lists of projects that demonstrated experience in reliable cost estimating. See AR 8-189; AR 11-253; AR 17-774 to -776. In addition, Ms. Boardwine, who served as the COTR for those two contracts, did not discuss the firms’ cost-estimating performance with the SEB. Boardwine Dep. at 85:5-8. The SEB’s members never asked Ms. Boardwine about the quality of the incumbents’ cost-estimating work under these two contracts, and she did not volunteer any information on the matter. Id. at 85:22 to 86:4.

    As a consequence, the SEB evaluated Subfactor 1(a) using only the contents of the offerors’ amended SF 330s. Based on those SF 330s, the board rated AECOM as acceptable and Fluor, NISTAC, CCPRS, and Vanguard as superior for that subfactor. See AR 136-5705. Among those firms with a superior rating, Vanguard was ranked last. Id. The SEB explained this decision by saying that Vanguard’s “discussion on methodology and quality control was very theoretical and did not provide as detailed a description on its actual methodology used to produce reliable cost estimates.” AR 136-5706. Consequently, the Board did not regard Vanguard to be as highly qualified as Fluor, NISTAC, or CCPRS. AR 136-5706 to -5707.

    The SEB re-ranked the various offerors using the revised scores for Subfaetor 1(a). AR 136-5707 to -5708. Based on this new ranking, FEMA awarded contracts to Fluor, AECOM, CCPRS, and NISTAC, effectively replacing Shaw-Parsons with CCPRS. AR 136-5708.

    E. The First GAO Protest

    At that point, Shaw-Parsons and Vanguard each filed a protest before GAO. See AR Tab 54 (Shaw-Parsons’ First GAO Protest (Dec. 4, 2009)); AR Tab 55 (Vanguard’s First GAO Protest (Dec. 7, 2009)). Vanguard alleged that FEMA improperly “failed to consider negative cost estimating accuracy experience and other experience and past performance information regarding the incumbent PATACs under the [Factor 1] Specialized Experience and Technical Competence and [Factor 4] Past Performance ... that w[ere] ‘too close at hand to ignore.’ ” AR 55-4623. Shaw-Parsons, among other things, protested that FEMA had improperly altered its evaluation methodology without allowing offerors an adequate opportunity to modify their proposals. AR 54-4609. PPQs became an issue. FEMA defended its decision to ignore the PPQs on the grounds that (1) the solicitation said that offerors’ past performance would be evaluated in six categories, while the PPQ asked clients to assess the offerors’ past performance using four criteria that differed to some extent from the six criteria; (2) some offerors were unable to get five of the clients to submit PPQs within the time allowed; and (3) some of the PPQs were for contractors’ performance under individual task orders, while others assessed the firms’ overall work on a contract. AR 79-4902.

    GAO decided both of these protests in a decision rendered on March 10, 2010. See AR Tab 105 (GAO’s First Decision). GAO rejected Vanguard’s challenge to the Subfactor 1(a) ratings because in GAO’s view “it confuse[d] the concepts of experience and past performance.” AR 105-5216. GAO explained that experience “focuses on the degree to which an offeror has actually performed similar work,” whereas past performance “focuses on the quality of the work.” Id. GAO took the view that Subfactor 1(a) evaluated only experience; thus, any information concerning the poor quality of the incumbents’ cost estimating could not affect their ratings under this subfactor. Id.

    GAO also dismissed Vanguard’s claim that FEMA should have considered the incumbents’ work under the previous PA TAC contracts in evaluating Factor 4. AR 105-5217 to -5218 (GAO’s First Decision). GAO found no evidence that the SEB possessed knowledge of the incumbents’ performance for those particular contracts. AR 105-5217. GAO observed that “it would be reasonable to expect” that the SEB advisor, Ms. Lorine Boardwine, would know about the incumbents’ past performance since she served as the COTR for the PA TAC II contract. Id. However, GAO explicitly accepted at face-value Ms. Boardwine’s post-award declaration and the corresponding declarations of SEB members that they had no such knowledge. AR 105-5217 & nn. 18-19. Under those assumed circumstances, GAO refused to fault the SEB for ignoring information that was not in its possession. AR 105-5218.

    GAO sustained part of Shaw-Parsons’ protest based on the “too close at hand” doctrine. AR 105-5209 (GAO’s First Decision). FEMA had sought and received PPQs from a number of the offerors’ references. Id. However, it had not considered these PPQs when evaluating past performance, instead relying solely on the offerors’ self-assessment reflected in their SF 330s. AR 105-5210. GAO ruled that “regardless of what discretion the solicitation may have afforded FEMA in seeking out additional information, once it had the PPQs, it could not simply ignore them.” AR 105-5212. It recommended that FEMA reevaluate the short-listed offerors, giving “reasonable consideration” to the PPQs in its possession. AR 105-5218.

    F. FEMA’s Second Evaluation of Offerors’ Past Performance

    After GAO sustained Shaw-Parsons’ protest, the SEB reconvened to reevaluate the offerors’ past performance. AR 136-5708. This time, the SEB considered three criteria: (1) the SF 330 comments provided by the offeror, (2) the PPQ narrative comments provided by clients, and (3) the PPQ adjectival ratings provided by clients. AR 136-5710; see also AR 40-4305 (reflecting the SEB’s evaluation worksheets for the re-review of Vanguard’s past performance information, including PPQs).13 The SEB weighed the information in the SF 330s and PPQs equally. AR 136-5710. In considering the PPQs, the SEB assigned greater importance to the narrative comments than to the adjectival ratings. Id. The agency mapped all of the PPQ comments onto the six past performance areas listed in the SSN: project management, accuracy of cost estimates, cost control, quality control, completion of projects within budget, and compliance with performance schedules. E.g., AR 40-4306. The SEB then provided an assessment of each of these areas, drawing on both the SF 330 and PPQ narrative comments. E.g., AR 40-4308 to -4309.

    FEMA arrived at the last component of the past performance score, PPQ adjectival ratings, by using a mathematical formula. In each PPQ, the client evaluated the offeror based on four criteria: quality of product or service, cost control, timeliness of performance, and business relations. E.g., AR 24-1598. For each of these criteria, the reference could rate the offeror’s performance as superior, acceptable, or unacceptable. E.g., id. FEMA assigned a point value for each rating: 10 for a “superior” rating in a contract of similar size and scope, 7.5 for a “superior” in a less-relevant contract, 5 for an “acceptable,” and 1 for an “unacceptable.” See AR 41-4327; AR 134-5662 (GAO’s Second Decision). FEMA calculated the average value of each offeror’s PPQs, out of a possible 40 points. AR 134-5662. If an offeror had an average of 36 or more points, it was rated superior for this component of the past performance factor. Id.
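
    The arithmetic just described can be illustrated with a short sketch. The point values and the 36-point threshold for a superior rating are drawn from the record cited above (AR 41-4327; AR 134-5662); the function names and the sample ratings are hypothetical and serve only as an illustration, not as a reconstruction of any offeror’s actual scores.

```python
# Illustrative sketch (not part of the record) of the PPQ adjectival-rating
# arithmetic described above. Point values and the 36-point "superior"
# threshold come from AR 41-4327 and AR 134-5662; the function names and the
# sample ratings below are hypothetical.

def score_ppq(ratings, similar_size_and_scope):
    """Score one PPQ: four adjectival ratings, worth up to 40 points."""
    points = []
    for rating in ratings:
        if rating == "superior":
            # Relevance of the referenced contract mattered only for "superior".
            points.append(10.0 if similar_size_and_scope else 7.5)
        elif rating == "acceptable":
            points.append(5.0)
        else:  # "unacceptable"
            points.append(1.0)
    return sum(points)

def overall_adjectival_rating(ppqs):
    """Average the per-PPQ scores; an average of 36 or more is 'superior'."""
    average = sum(score_ppq(r, relevant) for r, relevant in ppqs) / len(ppqs)
    return average, ("superior" if average >= 36 else "acceptable")

# Hypothetical example: two PPQs from contracts of similar size and scope and
# one from a less-relevant contract.
example_ppqs = [
    (["superior", "superior", "superior", "superior"], True),    # 40 points
    (["superior", "superior", "acceptable", "superior"], True),  # 35 points
    (["superior", "superior", "superior", "superior"], False),   # 30 points
]
print(overall_adjectival_rating(example_ppqs))  # (35.0, 'acceptable')
```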

    In Vanguard’s case, the SEB highlighted a number of strengths in the narrative comments of the PPQs but again noted areas where it found Vanguard’s SF 330 to be very general. AR 134-5663 (GAO’s Second Decision). The SEB also remarked that two of Vanguard’s six projects listed on its SF 330 were not “multi-million dollar [contracts] of similar size and scope.” AR 40-4309; see also AR 40-4322. Considering all of this information, the SEB found that for Vanguard both the SF 330 descriptions and the PPQ narrative comments merited a rating of acceptable. AR 40-4309. Vanguard’s score for the adjectival ratings was 34.17, and so the SEB rated it as acceptable for this component. See AR 134-5663 & n. 3 (GAO’s Second Decision) (noting that the SEB miscalculated Vanguard’s score as 34.83). Consequently, Vanguard had an overall rating of acceptable for past performance. AR 40-4309.

    Once again, the agency re-ranked the offerors based on the newly revised ratings. See AR 136-5713. The order of the firms did not change, and FEMA reaffirmed its decision to award contracts to Fluor, AECOM, CCPRS, and NISTAC. Id.

    G. The Second GAO Protest

    After learning the results of FEMA’s reevaluation, Vanguard filed a second protest before GAO. AR Tab 112 (Vanguard’s Second GAO Protest (June 2, 2010)). In this protest, Vanguard alleged that the agency had unreasonably given it an “Acceptable” rating for past performance when its references and SF 330 merited a “Superior.” AR 112-5292. GAO agreed in part, finding that FEMA’s formula for calculating past performance ratings improperly penalized offerors for submitting less-relevant contracts. AR 134-5667 (GAO’s Second Decision). GAO recommended that the agency reevaluate Vanguard’s past performance and make a new source selection determination. AR 134-5674.

    H. FEMA’s Third Evaluation of Vanguard’s Past Performance

    In a third and final evaluation, or fourth, if one counts the change arising from the agency protest, FEMA re-examined only Vanguard’s past performance information. See AR 136-5713. The SEB once again looked at the SF 330 comments, the PPQ comments, and the PPQ adjectival ratings. See AR 40-4323 to -4324. This time, the Board undertook a much more thorough analysis of Vanguard’s SF 330 and the PPQ comments. Compare AR 40-4315 to -4321, with AR 40-4308 to -4309, and AR 40-4299. As in the prior evaluation, the SEB noted that Vanguard had only submitted SF 330 descriptions for four contracts of comparable size and scope, instead of five as requested in the SSN. AR 40-4322. The SEB found fault with many of Vanguard’s SF 330 descriptions. AR 40-4315 to -4321. For each of the six evaluation elements, the SEB criticized Vanguard’s SF 330 narrative as being too general, focusing on actions taken rather than outcomes achieved, or merely demonstrating compliance with contractual requirements. See, e.g., AR 40-4316 (“[T]hese comments [on cost-estimating] were very general and outlined expected actions to be taken on these types of contracts”). See generally AR 40-4315 to -4321. The board concluded that the SF 330 merited a “low acceptable” based on the inadequate descriptions and the fact that Vanguard had included only four contracts of similar size and scope. AR 40-4323.

    The SEB also engaged in a close reading of the PPQ narratives for Vanguard. See AR 40-4315 to -4321, -4323. It accepted most of the clients’ positive comments at face-value, identifying them as strengths or evidence of superior performance. See, e.g., AR 40-4315 (“The SEB identified within the comments of the PPQs, statements from one client, [[* * *]], indicating performance in this area was superior”). In a few instances, the SEB denigrated a client’s positive evaluation. See, e.g., AR 40-4316 (“The SEB noted these comments [by [* * *]] as a strength [but] also noted that the contract ... was not of like size and scope to this procurement.”); AR 40-4317 (“Although the comment [by [* * *]] mentions cost controls, the SEB concluded that it did not indicate a ‘strength’ [because] these identified measures are ... consistent with the SEB[’]s view of acceptable and expected performance levels.”). The Board was also concerned that Vanguard had provided only four PPQs from contracts of similar size and scope, instead of the five requested in the amended solicitation. AR 40-4323. Two of the individual board members had assigned Vanguard an acceptable rating for this component, while the other two gave it a superior. Id. After recapitulating the PPQ comments, the SEB concluded by giving Vanguard a “very borderline [s]uperior” rating. Id.

    Lastly, the SEB recalculated the score for the PPQ adjectival ratings using a methodology recommended by GAO. AR 40-4323; AR 41-4327; cf. AR 134-5663 n. 3, -5667 n. 5 (GAO’s Second Decision). Under the new scheme, Vanguard had an average score of 37.27, warranting a superior rating for this component. AR 41-4327; AR 136-5716. Vanguard’s score was on the low end of the superior spectrum (ranging from 36 to 40), however, so the Board qualified the rating as a “low [s]uperior.” AR 40-4324. Given that Vanguard achieved a superior rating by a narrow margin for both the PPQ comments and the PPQ adjectival ratings, the SEB gave Vanguard a “borderline superior” for the overall PPQ review. Id.

    The SEB arrived at the final past performance rating by considering the firm’s SF 330 rating (“low acceptable”) and its PPQ rating (“borderline superior”). AR 40-4324. Per the SEB’s decision in the second evaluation, each of these factors was given equal importance. See AR 136-5710. The Board concluded that, when taken together, these two factors merited an overall rating of acceptable for the past performance factor. AR 40-4324.

    I. FEMA’s Final Rankings

    After completing the reevaluation of Vanguard’s past performance, the SEB once again re-ranked the offerors. AR 136-5715 to -5716. As in the previous reevaluation, the rankings did not change. See AR 136-5716. The final rankings adopted by the Source Selection Authority showed that the short-listed firms were each competitive (“S” indicates a superior rating and “A” indicates an acceptable rating):

    [Table of the SEB’s final rankings and adjectival ratings omitted.]

    AR 136-5716. The SEB ranked the firms based on their adjectival ratings and the weight given each factor in the SSN. AR 136-5707. Under this system, NISTAC, IRC, and Vanguard were tied for fourth place. Id. The SEB used the ranking for the amended Subfactor 1(a) to break this tie, leaving Vanguard in sixth place. Id.

    In acting on the SEB’s evaluation, the Source Selection Authority (“SSA”) took a new approach, conducting her own ranking of the offerors by the same factors. AR 135-5678 to -5688. Unlike the SEB, the SSA did not use Subfactor 1(a) as a tie-breaker for firms with equal adjectival ratings; rather, she considered the specific benefits offered by each firm and ranked them accordingly. See AR 135-5680 to -5681; see also AR 53-4599 to -4600 (SSA Original Source Selection Decision (Oct. 6, 2009)). The SSA’s final rankings were:

    [Table of the SSA’s final rankings omitted.]

    AR 135-5681 to -5688. Based on these rankings, the SSA determined that the four most highly qualified firms were Fluor, AECOM, CCPRS, and NISTAC. AR 135-5688. Significantly, although Vanguard had received a superior rating under both Factors 1 and 2, the Source Selection Authority did not give it a ranking for those factors. All other offerors with a superior rating for a given factor were listed in the rankings. In addition, NISTAC was ranked under Factor 3 despite, in common with CCPRS, Vanguard, and PB, having received an acceptable rating.

    J. Third GAO Protest

    On November 10, 2010, FEMA informed Vanguard that it had not been awarded a contract. AR 138-5740. Twelve days later, on November 22, 2010, Vanguard filed its third GAO protest. In this protest, Vanguard alleged disparate evaluation standards and other procurement errors which in its view demonstrated that “the evaluation panel and/or its advisors have lost objectivity and have taken improper actions to retain the PA TAC II incumbents and CCPRS in conducting the re-evaluation.” Vanguard’s Third GAO Protest at 1. Vanguard asked GAO to recommend that FEMA replace the evaluation team, reevaluate the proposals of Vanguard and certain other offerors, and ultimately award a contract to Vanguard. Id. at 16. Vanguard supplemented its protest on November 29, 2010, providing more examples of what it described as “disparate treatment” in the evaluation of Vanguard as compared with other firms. See Vanguard’s Third GAO Protest Supplement at 1. Vanguard partially relied upon a declaration of a former FEMA employee, Martin Atman, who described the existence of documents which would support Vanguard’s position but which had not been presented to GAO. Pl.’s Mot. to Supplement the Administrative Record Ex. 13, at 2. Vanguard also requested that GAO order FEMA to produce documents relating to its and other offerors’ oral presentations, as well as documents relating to PA TAC I past performance evaluations. Vanguard’s Third GAO Protest at 15; Vanguard’s Third GAO Protest Supplement at 8-9.

    K. Proceedings in This Court

    Shortly thereafter, on January 12, 2011, Vanguard filed a bid protest in this court.14 Vanguard’s complaint reiterated the themes of Vanguard’s three protests before GAO, claiming that FEMA’s evaluation of the offerors’ reliable cost estimating and past performance was unreasonable, that the final rankings of bidders improperly departed from the requirements of the SSN, and that FEMA failed to evaluate Vanguard’s proposal objectively. See Compl. at 19, 20, 24, 28.

    After the government filed the administrative record with the court, Vanguard moved to supplement the record and to depose Ms. Boardwine. Vanguard, 99 Fed.Cl. at 92-102. The court granted its motion in part and denied it in part. Id. at 103. Specifically, the court made part of the record the First GAO Report, the Second GAO Report, the OIG Report, and “the materials encompassed within the record before GAO in the three protests of the procurement before that entity.” Id. As previously noted, it also permitted the plaintiff to conduct a four-hour deposition of Ms. Boardwine. Id. With those preliminary and preparatory steps accomplished, the parties filed and briefed the competing cross-motions for judgment on the record. In the course of that briefing, the government located additional electronic records related to the procurement, and those records were incorporated into the record. See Order of August 18, 2011, ECF No. 112 (granting the government’s second motion to correct the administrative record).15

    STANDARDS FOR DECISION

    Pursuant to 28 U.S.C. § 1491(b)(4), the court reviews a challenge to an agency’s award of a contract using the standards set out in the Administrative Procedure Act, 5 U.S.C. § 706. These standards permit the court to set aside an agency’s contracting decision if it is “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,” 5 U.S.C. § 706(2)(A), assuming the criteria for equitable relief are satisfied. See PGBA, LLC v. United States, 389 F.3d 1219, 1224-28 (Fed.Cir.2004).

    This standard of review is “highly deferential” to the agency’s procurement decision. Advanced Data Concepts, Inc. v. United States, 216 F.3d 1054, 1058 (Fed.Cir.2000). So long as the court finds a “reasonable basis” for the agency’s decision, it “should stay its hand even though it might, as an original proposition, have reached a different conclusion.” Honeywell, Inc. v. United States, 870 F.2d 644, 648 (Fed.Cir.1989) (quoting M. Steinthal & Co. v. Seamans, 455 F.2d 1289, 1301 (D.C.Cir.1971)). The court must show especially great deference to the agency’s technical evaluations, past performance ratings, and other “minutiae of the procurement process ... which involve discretionary determinations of procurement officials.” E.W. Bliss Co. v. United States, 77 F.3d 445, 449 (Fed.Cir.1996); see also Seaborn Health Care, Inc. v. United States, 101 Fed.Cl. 42, 48 (2011); Pitney Bowes Gov’t Solutions, Inc. v. United States, 94 Fed.Cl. 1, 6-7, 11-14 (2010).

    The court may overturn an award only “if ‘(1) the procurement official’s decision lacked a rational basis; or (2) the procurement procedure involved a violation of regulation or procedure.’ ” Centech Grp., Inc. v. United States, 554 F.3d 1029, 1037 (Fed.Cir.2009) (quoting Impresa Construzioni Geom. Domenico Garufi v. United States, 238 F.3d 1324, 1332 (Fed.Cir.2001)). An agency’s decision lacks a rational basis if the contracting officer “entirely failed to consider an important aspect of the problem, offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise.” Keeton Corrs., Inc. v. United States, 59 Fed.Cl. 753, 755 (2004) (quoting Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43, 103 S.Ct. 2856, 77 L.Ed.2d 443 (1983)) (internal quotation marks omitted).

    Even if a protestor proves errors in the procurement process, it must still demonstrate that it was “significantly prejudiced” by those errors. Bannum, 404 F.3d at 1353. To make this showing in the context of a post-award protest, the protestor “must [establish] that there was a ‘substantial chance’ it would have received the contract award absent the alleged error.” Banknote Corp. of Am. v. United States, 365 F.3d 1345, 1351 (Fed.Cir.2004) (quoting Emery Worldwide Airlines, Inc. v. United States, 264 F.3d 1071, 1086 (Fed.Cir.2001)).

    ANALYSIS

    I. FEMA’S FAILURE TO EVALUATE PERFORMANCE OF INCUMBENT CONTRACTORS

    At the core of this case is FEMA’s decision not to collect contemporaneous evaluations of the incumbent contractors’ performance on the PA TAC II contracts. This failure has blinded the agency to information that would bear on two of the evaluation criteria: Subfactor 1(a) and Factor 4. Contrary to the government’s continued protestations, see Def.’s Cross-Mot. at 25-26, Subfactor 1(a) included both contractors’ experience and past performance in cost-estimating accuracy, see Vanguard, 99 Fed.Cl. at 93-94. It required the agency to consider not only the extent of the offerors’ previous work, but also how well they performed that work. See AR 6A-40.2 (requiring a “detailed explanation” for “any variances” in cost estimates “that exceed plus or minus 10 percent”).

    Although both Subfactor 1(a) and Factor 4 deal with past performance, the two criteria are not coterminous. Subfactor 1(a) limits its focus to cost-estimating reliability and confines its inquiry to the five references provided by the offeror. AR 6A-40.2 (“The firms are required to provide a detailed explanation of the reasons for any variances on the identified completed projects that exceed plus or minus 10 percent.” (emphasis added)). In contrast, Factor 4 covers past performance generally and specifically reserves the agency’s “right to use information outside of the response in evaluating past performance.” AR 2-5 (emphasis added).

    Consequently, the lack of past performance data for PA TAC II poses less of a problem for Subfactor 1(a) than it does for Factor 4. The incumbents were not obligated to cite their work on PA TAC I or PA TAC II for this subfactor, and in fact none of them did.16 Thus, under the terms of the solicitation, the agency was bound to consider each offeror’s cost-estimating reliability only as captured in the five sample contracts it identified. If one of FEMA’s evaluators possessed some personal knowledge pertaining to one of the offeror’s identified contracts, he or she might be obliged to consider it. See International Res. Recovery, Inc. v. United States, 64 Fed.Cl. 150, 163 (2005) (“[S]ome [past performance] information is simply too close at hand to require offerors to shoulder the inequities that spring from an agency’s failure to obtain[,] and consider[,] the information.” (second alteration in original) (quoting International Bus. Sys., Inc., B-275554, 97-1 CPD ¶ 114, at 5, 1997 WL 113958, at *4 (Comp.Gen. Mar. 3, 1997))). Factor 4 is worded much more broadly and does not restrict the agency to the five references submitted by the offerors. AR 2-5. Thus, under the “too close at hand” doctrine, FEMA would be obliged to draw upon internal information that concerned any of the firms’ prior work, even if the offeror did not cite it. See Northeast Military Sales, Inc. v. United States, 100 Fed.Cl. 96, 99 (2011) (applying the “too close at hand” principle when “the [s]olicitation provide[d] that the evaluation shall be ‘based on a consideration of all relevant facts and circumstances.’ ”).

    A. FEMA Was Responsible for Collecting Performance Information During PA TAC II

    FEMA’s failure to conduct performance evaluations violated explicit law. The FAR states in no uncertain terms that the agency shall perform “[p]ast performance evaluations ... for each architect-engineer services contract of $30,000 or more.” FAR § 42.1502(f). These evaluations “shall be prepared ... at the time the work under the contract or order is completed.” FAR § 42.1502(a). Additionally, “interim evaluations shall be prepared as specified by the agencies ... for contracts or orders with a period of performance, including options, exceeding one year.” Id.

    Here, the combined value of the PA TAC II contracts was approximately $2 billion. See AR 105-5218 (GAO’s First Decision). This amount surpassed the $30,000 threshold of FAR § 42.1502(f) by several orders of magnitude, triggering a legal obligation for FEMA to evaluate the contractors once the work was complete. Moreover, because the period of performance for PA TAC II exceeded one year, the agency was obliged to conduct interim evaluations throughout the life of the contract. See FAR § 42.1502(a). FEMA carried out neither of these responsibilities.

    Both GAO and the Office of the Inspector General have criticized FEMA for violating these FAR provisions. See AR 105-5218 n. 20 (GAO’s First Decision) (noting that the agency’s failure to incorporate any performance metric or monitoring contravened FAR § 36.604 and FAR § 37.601(b));17 see also AR 105-5218 (describing the failure to compile performance information as a “troubling” “lack of oversight and accountability”); OIG Report at 10 (“Without performance metrics or evaluations of performance, FEMA was unable to determine whether the PA-TAC contractors performed their responsibilities or if the federal government received a fair return for PA-TAC services valued at more than $188 million.”); id. at 20 (“[T]he task order files did not contain the required quality assurance surveillance plans to be used by the Task Monitors to evaluate the contractor’s performance for timeliness, quality, customer service, and cost.” (emphasis added)). Ms. Boardwine, the former COTR for the PA TAC II contract, acknowledged the agency’s failure to comply with the FAR. AR 79-4905 (First Boardwine Decl.) (“Although the 2006 PA TACs were awarded as performance-based contracts, the Quality Assurance Surveillance Plan (QASP) and applicable FAR clauses were not incorporated into the 2006 contract awards.”).

    B. Absence of PPQs

    In lieu of contemporaneous evaluations of past performance, FEMA could have generated performance evaluations for the PA TAC II incumbents in connection with this procurement. “[N]othing in ... the FAR[ ] prohibits an agency from evaluating a proposal based on past performance information that was not recorded contemporaneously.” Kathpal Techs., Inc., B-291637.2, 2003 CPD ¶ 69, at 4 n. 7, 2003 WL 1918465, at *3 n. 7 (Comp.Gen. Apr. 10, 2003). In respect to this issue, the court has a different and more comprehensive record than that which was before GAO. The most significant addition is derived from the deposition of Ms. Lorine Boardwine, in which she expanded considerably upon the declarations that she provided to GAO. Compare Boardwine Dep. at 91:2 to 94:10, with AR 79-4905 (First Boardwine Decl.), and AR 99-5152 to -5153 (Supplemental Decl. of Lorine Boardwine (Feb. 25, 2010) (“Second Boardwine Decl.”)).

    Notably, FEMA seems to have actively worked to deny itself any information about the performance of the incumbent contractors on the PA TAC II contract. First, it gave offerors just three days to obtain PPQs from their references. AR 45-4423. Then, when the three incumbents asked Ms. Boardwine to complete PPQs for the PA TAC II contract, she advised that she could not do so, Boardwine Dep. at 91:9-14; see also Def.’s Mot. for Third Correction Ex. A, at 4, 14, even though she was FEMA’s most knowledgeable official on the subject of the PA TAC II contract performance, Boardwine Dep. at 15:14-16 (describing herself as the “single point of contact” for all the Task Monitors). She abjured preparation of any PPQs for the incumbents on very questionable grounds. In her deposition, she said that she “was to[o] vested in the actual procurement itself to provide a response that would not have the appearance of somehow being biased.” Id. at 94:8-10. In prior cases, such a rationale has been found to be insufficient to excuse an agency’s obligation to consider an incumbent’s past performance on a predecessor contract. See Seattle Sec. Servs., Inc. v. United States, 45 Fed.Cl. 560, 568 (2000) (holding that the contracting officer should have considered the incumbent’s performance notwithstanding the fact that “she was the [contracting officer] for that [prior] contract and was concerned that it would appear prejudicial if she evaluated plaintiff on that contract”); see also Inlingua Sch. of Languages, B-229784, 88-1 CPD ¶ 340, at 4, 1988 WL 227429, at *3 (Comp.Gen. Apr. 5, 1988) (sustaining a bid protest where the agency refused to consider the incumbent’s performance because it “felt an unfair advantage would have been accorded [the protestor] had [the agency] used its own evaluators as references”).18

    C. Synopsis

    FEMA’s failure to evaluate the incumbents’ performance on PA TAC II was a manifest error. Rather than try to remedy this problem with PPQs from the former PA TAC II COTR, Ms. Boardwine, the agency appears to have taken steps to forestall Ms. Boardwine’s participation, or, at best, to acquiesce in her silence. The question remains whether this violation of law affected the procurement and, if so, in what manner.

    II. THE RANKING OF THE OFFERORS

    A. The SSA’s Overall Ranking

    The SSA accepted the findings of the SEB, but she also conducted a factor-by-factor ranking of the proposals. See AR 135-5681 to -5688. What is striking about her analysis is that, for the three most important factors, she ranked only the top four firms. AR 135-5681 to -5685; see supra, at 778-79 (reproducing the SSA’s final rankings in the form of a chart). As the chart demonstrates, the SSA’s rankings are incomplete. She did not rank Vanguard at all for the first three factors — those that were most significant — despite the fact that Vanguard was rated superior in two of those factors by the SEB. See AR 136-5713. Nor does she indicate where several of Vanguard’s competitors fell in the spectrum of certain ratings. For instance, awardee AECOM was one of only two offerors to receive an acceptable rating for Factor 1, see AR 136-5708, and was ranked in last place for Factor 1(a) by the SEB, see AR 136-5705. Yet the SSA’s chart conveys none of this information, which presumptively was highly relevant to a best-value determination.

    The agency provides no explanation why, for Factors 1 through 3, a firm’s ranking should be relevant if the offeror is one of the top four contenders but irrelevant if it is not. The SSA compounded her error by creating a rank order for the first three factors in which she only considered firms that placed in the top four of her individual rankings for any of the first three factors. AR 135-5684 to -5685. Vanguard was not on this list. Yet one can easily conceive of a scenario in which an offeror might not be among the top four firms for any particular factor but still merit a place in the top four overall. For instance, if Vanguard placed fifth in all three factors with a slim margin between it and the other candidates, and AECOM or CCPRS was a distant last in Factor 1 or 3 (respectively), then Vanguard might well deserve a spot among the top four firms. The SSA’s approach on its face appears to be arbitrary insofar as it disregarded reasonable possible outcomes, but this possibility will be examined in greater depth in assessing prejudice, infra.

    B. Vanguard’s Challenge to Specific Evaluations and Rankings

    In addition to finding fault with the SSA’s incomplete ranking of the offerors, Vanguard also claims that specific ranking decisions were irrational. First, Vanguard contends that it was wrongfully ranked below IRC for Subfactor 1(a) and that the SEB irrationally found that its description of its cost-estimating methodology was “very theoretical.” Pl.’s Mem. at 27-28; Pl.’s Resp. at 11-12; see also AR 136-5706. This technical judgment was “within the broad discretion of the procuring agency,” and consequently the court will not interject its own assessment “in the absence of truly irrational conduct on the part of the agency.” FirstLine Transp. Sec., Inc. v. United States, 100 Fed.Cl. 359, 397 (2011) (citing E.W. Bliss, 77 F.3d at 449). In that respect, the court has reviewed Vanguard’s submissions for Subfactor 1(a), and it has found no grounds for invalidating the SEB’s technical assessment.

    Second, Vanguard criticizes the SSA’s methodology for ranking the offerors regarding Factors 2 and 5. Pl.’s Mem. at 36-37. It claims that the SSA acted irrationally in using the total number of staff employed by each offeror to rank the firms for Factor 2 (“Capacity to Accomplish Work Within Required Time”). Id. at 37; see AR 135-5682 to -5683. Vanguard points out that the agency is not procuring labor indiscriminately, but is looking for professionals from 37 particular disciplines. Pl.’s Mem. at 37; see AR 2-4. Consequently, it argues, the agency should have considered the total staff in those 37 disciplines to rank the firms, or, alternatively, FEMA should have looked to the management components of the offerors’ Past Performance to gauge each offeror’s track record for completing work on time. Pl.’s Mem. at 36-37. Similarly, Vanguard argues that the SSA arbitrarily looked to the number of offices of each offeror to rank them for Factor 5 (“Location in the General Geographical Area of the Project and Knowledge of the Locality of the Project”). Id. at 37 n. 19; see also AR 135-5687 to -5688. Vanguard avers the SSA should have distinguished the firms by looking at past performance instead.

    When comparing one proposal to another under a specific evaluation factor, “an agency may consider all matters that offerors would reasonably have believed to be within the scope of the factor.” NEQ, LLC v. United States, 88 Fed.Cl. 38, 48 (2009) (quoting John Cibinic, Jr. & Ralph C. Nash, Jr., Formation of Government Contracts 830 (3d ed. 1998) (internal quotations omitted)); see also KMS Solutions, LLC, B-405323.2-.3, 2011 CPD ¶ 209, at 9 n. 15, 2011 *784WL 5115073, at *5 n. 15 (Comp.Gen. Oct. 6, 2011) (“A solicitation need not identify every possible consideration under each stated evaluation factor, provided the matters the agency considers are reasonably related to, or encompassed by, the stated criteria.” (citing Avogadro Energy Sys., B-244106, 91-2 CPD ¶ 229, at 4, 1991 WL 182393, at *3 (Comp.Gen. Sept. 9, 1991))). Here, FEMA did not act irrationally by using the staffing capacities and number of offices to rank the offerors for Factors 2 and 5. The SSN did not provide specific criteria for how the firms might be ranked within any particular factor. The SEB and the SSA accordingly could use an evaluative method that was “reasonably related” to the factors themselves.19 A firm with a larger staff may be able to furnish more technical specialists on short notice than one with a smaller staff. Similarly, if a company has more offices than another, it may have a greater geographic dispersion. Cf. NEQ, 88 Fed.Cl. at 48. Granted, these relationships are not certain; in fact, these features may not even be the best indicators of the offerors’ abilities to satisfy the factors in the SSN. Here, however, the court cannot say that the SSA’s methodologies are arbitrary — i.e., that in reaching its conclusion, the agency “entirely failed to consider an important aspect of the problem, offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise.” State Farm, 463 U.S. at 43, 103 S.Ct. 2856.

    III. THE EVALUATION OF VANGUARD’S PAST PERFORMANCE

    A. The Reasonableness of the SEB’s Weighting of SF 330s and PPQs

    Vanguard argues that the agency unreasonably assigned equal weight to the SF 330s and PPQs in evaluating Factor 4. Pl.’s Mem. at 28-30. It claims that it was arbitrary and capricious for FEMA to give equal credence to an offeror’s self-promotion via the SF 330 and a third party’s objective commentary. Id.20 The government counters that the agency had legitimate reasons for being reluctant to rely entirely on the PPQs because the four criteria addressed in the PPQ questionnaire did not correspond to the six criteria described in the SSN. Def.’s Cross-Mot. at 14.

    “[T]he assignment of a past performance rating is reviewed ‘only to ensure that it was reasonable and consistent with the stated evaluation criteria and applicable statutes and regulations, since determining the relative merits of the offerors’ past performance is primarily a matter within the contracting agency’s discretion.’ ” Todd Constr., L.P. v. United States, 88 Fed.Cl. 235, 247 (2009), aff'd, 656 F.3d 1306 (Fed.Cir.2011). The deference owed to the procuring agency extends not just to the actual rating, but also to its “select[ion of] a method for evaluating offerors’ past performance.” Line Gov’t Servs., LLC v. United States, 96 Fed.Cl. 672, 718 (2010) (citing FAR § 15.305(a)(2)(ii)). An agency may consider information obtained from the offeror, third *785parties, government databases, or some combination of these bases. Cf. RISC Mgmt. J.V. v. United States, 69 Fed.Cl. 624, 628 (2006) (“[T]he government would obtain past performance information about an offeror from three sources: the offeror itself, directly from references, and through its own research.”).

    Vanguard does not challenge FEMA’s use of the offerors’ self-assessments per se; rather, it contends that such information cannot reasonably be considered on par with the more objective evaluations obtained from their references via the PPQs. This argument, however, is in tension with the “well-recognized” principle that “an agency’s evaluation of past performance is entitled to great deference.” Al Andalus Gen. Contracts Co. v. United States, 86 Fed.Cl. 252, 264 (2009) (citing Westech Int’l, Inc. v. United States, 79 Fed.Cl. 272, 293 (2007)).

    On the one hand, FEMA has articulated a reason for its hesitancy to depend on the PPQs. See AR 79-4902 (Decl. of Valerie Rhoads, SEB Chairperson (Jan. 22, 2010)) (“The SEB ... found the questionnaires to have inconsistent factors as compared to the factors the SEB evaluated per the Sources Sought Notice.”).21 FEMA’s wariness regarding the PPQs is not entirely misplaced. On the other hand, the misalignment is a result of FEMA’s own action in developing the questionnaire, which in turn reflects its failed efforts to develop and apply agreed-upon past performance criteria. Hr’g Tr. 83:1 to 85:14 (Sept. 13, 2011). Both this court and GAO have sustained protests when the procuring agency relied on past performance questionnaires that were not appropriately aligned to the evaluation criteria. See Serco Inc. v. United States, 81 Fed.Cl. 463, 483 (2008) (sustaining a protest where “the questions employed [in the survey] largely were not designed to produce answers responsive to the past performance evaluation standards”); Cooperativa Muratori Riuniti, B-294980, 2005 CPD ¶ 21, at 7-8, 2005 WL 277303, at *6 (Comp.Gen. Jan. 21, 2005) (same).

    If this court were conducting the source selection, it would give greater credence to the PPQ responses than to the SF 330s. Yet the role of the court is “not to substitute its judgment for that of the agency,” but rather to determine whether the agency had a rational basis for its decision. See Alabama Aircraft Indus., Inc.-Birmingham v. United States, 586 F.3d 1372, 1376 (Fed.Cir.2009) (quoting State Farm, 463 U.S. at 43, 103 S.Ct. 2856). The court agrees with GAO that “FEMA had reasonable concerns regarding the utility of the [PPQ] ratings.” AR 134-5673 (GAO’s Second Decision). Consequently, the court cannot say that the agency’s weighting of the SF 330s and PPQs was arbitrary and capricious.

    B. The Reasonableness of the SEB’s Evaluation

    1. Whether Vanguard submitted five contracts of similar size and scope.

    One of the reasons that Vanguard was given a rating of “low acceptable” for its SF 330 was that it “did not provide the [five] requested contracts of like scope, [size, and] type.” AR 40-4323. Vanguard contends that this finding was irrational because (1) it did submit five similar contracts in its SF 330, and (2) the agency had access to information concerning an additional large contract in the form of a PPQ from [* * *]. Pl.’s Mem. at 31.

    First, Vanguard argues that the SEB mischaracterized its $[* * *] million contract with [* * *], AR 22-1438, -1469, as not being of similar size to the PA TAC III. In doing so, Vanguard fights an uphill battle against the well-established principle that “what does or does not constitute ‘relevant’ past performance falls within the [source selection authority]’s considered discretion.” PlanetSpace, Inc. v. United States, 92 Fed.Cl. 520, *786 539 (2010) (citing FAR § 15.305(a)(2)(ii)); see also Poly-Pacific Techs. Inc., B-295496.3, 2006 CPD ¶ 21, at 3, 2006 WL 133697, at *2 (Comp.Gen. Jan. 18, 2006) (giving “due deference to the agency’s broad discretion to determine whether a particular contract is relevant to the evaluation of past performance”).

    Vanguard’s strongest argument is that the agency’s decision to discount its [* * *] contract was the result of disparate treatment. It claims that FEMA singled out its $[* * *] million contract as being too small, while accepting a $[* * *] million contract for NISTAC at face value. See Pl.’s Resp. at 14. Yet the premises of this argument are not borne out by the record. First, the NISTAC contract in question was for $[* * *] million per contract period. AR 18-786. The contractor had performed the work under three consecutive contracts since 1998, giving a total contract value of $[* * *] million. Id.22 Second, the FEMA evaluators regarded the size of the contract between NISTAC and [* * *] as a source of concern. See AR 38-4239 (“All [references] are multi-million dollar contracts of similar size and scope[.] However, [one] contract was for [$][* * *] maximum limitation per contract period.”). Consequently, FEMA’s designation of the [* * *] as a small contract was not disparate treatment.

    Vanguard notes that the SEB’s initial past performance evaluation indicated it had identified five contracts of similar size and scope, see AR 40-4299, and contends that the agency cannot contradict itself in a subsequent evaluation, see Pl.’s Mem. at 30. Nonetheless, an agency has the right to change its mind in the course of an evaluation if it has good reason. See Fort Carson Support Servs. v. United States, 71 Fed.Cl. 571, 604 (2006) (“[I]t is certainly the [agency]’s prerogative to change its mind — its failure to identify an existing weakness in a proposal does not preclude the [agency] from considering the weakness later.”); Dismas Charities, Inc. v. United States, 61 Fed.Cl. 191, 201-02 (2004) (recognizing that, as a general rule, agencies are permitted to re-score proposals). Here, FEMA had grounds for changing its mind on the size and scope of the [* * *] contract. Vanguard’s Factor 4 submission for that contract said it was “valued at $[* * *] million,” AR 22-1469, implying that it was a large contract. See AR 130-5645 (Supplemental Decl. of Valerie Rhoads) (“The SEB took the statement in the SF 330 that the contracts were of like type, scope and size ... at its word in its original evaluation.”). When the agency considered the [* * *] contract in light of the information contained in other sections of the proposal, see AR 22-1438, the agency realized that it was actually worth only $[* * *] million.23

    Vanguard further argues that, even if it listed only four contracts of similar size in its SF 330, the agency could have looked to its PPQs to find a fifth multimillion dollar contract. Pl.’s Mem. at 31. In fact, FEMA did have in its possession a PPQ for a $[* * *] million contract (with [* * *]) that was not listed in Vanguard’s SF 330. AR 24-1610.24 However, “contracting agencies evaluating one section of a proposal are not obligated to go to unrelated sections of the proposal in search of needed information which the offeror has omitted or failed ade*787quately to present.” Savantage Fin. Servs., Inc., B-299798, 2007 CPD ¶ 214, at 9, 2007 WL 4326742, at *6 (Comp.Gen. Aug. 22, 2007); see also Professional Performance Dev. Grp., Inc., B-311273, 2008 CPD ¶ 101, at 10, 2008 WL 2486460, at *8 (Comp.Gen. June 2, 2008) (“[T]he agency was not required to piece together disparate parts of the firm’s proposal to determine its intent; rather, it was [the offeror]’s responsibility to submit [the necessary information] as required by the RFP.”). Here, the SSN instructed offerors to include in their SF 330s references for at least five contracts of similar size, type, and scope. AR 2-4. Vanguard complied with these instructions, except for one sample contract of smaller value, and the agency acted within its discretion in noting this one listing as a cause for concern.25

    A similar issue arises respecting Vanguard’s PPQ rating. Vanguard received a “borderline [s]uperior” for its PPQs in part because it lacked five PPQs for large contracts. AR 40-4323.26 Vanguard contends that the agency erroneously found that it submitted only four relevant PPQs when the evaluators possessed other contract references from Vanguard’s SF 330. See Pl.’s Mem. at 32. This contention fails for the same reason as Vanguard’s previous argument: it is the responsibility of the offeror to prepare its proposal according to the SSN specifications, and not the obligation of the agency to piece together a nonconforming proposal. Professional Performance, 2008 WL 2486460, at *8. Furthermore, Vanguard’s argument ignores important differences between the information contained in the SF 330 and the PPQs. Although the parties differ on the relative merits of the SF 330s and PPQs, they agree that the two are not fungible. See Pl.’s Mem. at 29 (claiming that PPQs are a superior source); Def.’s Cross-Mot. at 14 (contending that PPQs suffer from various limitations). Because the two are not interchangeable, the agency acted reasonably in declining to substitute Vanguard’s SF 330 references for a missing PPQ.

    2. Whether the SEB reasonably evaluated Vanguard’s SF 330 and PPQs.

    Vanguard also challenges the agency’s evaluations under Factor 4, specifically contending that FEMA (1) unfairly discounted strengths solely because they came from small contracts, (2) irrationally ignored other strengths, and (3) employed a more relaxed standard in evaluating the other offerors. See Pl.’s Mem. at 28-34.

    In its final evaluation of Vanguard’s past performance, the SEB was concerned that “not all areas had a comment indicating a strength ... from a contract ... of like size and scope.” AR 40-4323. Vanguard argues that this statement is “conclusory” and “makes no sense.” Pl.’s Mem. at 33. However, “when evaluating an offeror’s past performance, the [contracting officer] may give unequal weight, or no weight at all, to different contracts when [the contracting officer] views one as more relevant than another.” Seaborn Health Care, 101 Fed.Cl. at 51 (quoting Line Gov’t Servs., 96 Fed.Cl. at 718) (alterations in original) (internal quotation marks omitted). Here, the SSN specified that offerors would be judged based on their “past performance on contracts of similar size, type, and scope.” AR 2-4 (emphasis added). Vanguard was well advised that the agency could regard small contracts as less relevant. FEMA did not act irrationally in choosing to accord less weight to them.

    In its evaluation of Vanguard’s SF 330, the SEB repeatedly found that Vanguard’s comments were general or merely stated standard expectations. AR 40-4315 to -4321. Vanguard claims that the SEB ignored “numerous specific statements of successful outcomes” and employed an irrationally high standard. Pl.’s Mem. at 32. These sorts of technical assessments are precisely the “minutiae of the procurement process” that are *788best left to the procurement authority. See E.W. Bliss Co., 77 F.3d at 449. The court is in no position to displace the agency’s procurement officials in deciding, for instance, whether completing a project on schedule and within budget is a standard expectation or, rather, constitutes more than was contractually required. See AR 40-4318. Moreover, if the SEB did have an exacting rubric for a superior grade, it appears to have applied it uniformly to all applicants. See AR 36-4133 (finding a weakness where AECOM provided “only a general reference to forward pricing without any discussion or demonstration on their utilization”); AR 38-4238 (noting that NISTAC’s SF 330 comments “lack specific detail and information to determine successful outcomes” and are “very general”).

    Lastly, Vanguard argues that its proposal has suffered disparate treatment. It claims that many of its SF 330 comments and PPQ responses are nearly identical to those of its competitors; yet, according to Vanguard, FEMA routinely credited other offerors with strengths while finding Vanguard’s remarks merely acceptable. See Pl.’s Mem. at 32 n. 15, 33, 34 nn. 16 & 17; Pl.’s Resp. at 15 & n. 9, 16. It is, of course, true that “uneven treatment goes against the standard of equality and fair-play that is a necessary underpinning of the federal government’s procurement process and amounts to an abuse of the agency’s discretion.” PGBA, LLC v. United States, 60 Fed.Cl. 196, 207 (2004), aff'd, 389 F.3d 1219 (Fed.Cir.2004); see also TLT Constr. Corp. v. United States, 50 Fed.Cl. 212, 216 (2001) (“A fundamental principle of government procurement is that [contracting officers] treat all offerors equally and consistently apply the evaluation factors listed in the solicitation.” (citing 10 U.S.C. § 2305)). In this instance, however, the bulk of Vanguard’s allegations are based on a misreading of the past performance evaluations. Vanguard avers that the SEB found a strength wherever its evaluation quoted an offeror’s SF 330. See, e.g., Pl.’s Mem. at 32 n. 15 (citing AR 36-4161, -4163; AR 37-4198, -4200; AR 38-4238 to -4239). This was not what happened.27 In actuality, the evaluators specifically stated when a proposal’s feature was valuable enough to warrant a strength; although the SEB cited other features as well, such comments were apparently taken as evidence of acceptable performance but not deemed significant enough to merit a strength. E.g., compare AR 36-4163 (“During the SEB[’]s review of the SF[ ]330 for AECOM, the SEB noted the following strength for Quality Control.”) (emphasis added), with id. (“As noted during the SEB[’]s review of the SF 330, AECOM specifically addressed its ability to complete projects within budget.”).28 Vanguard prem*789ises its argument on the mistaken assumption that the SEB regarded this second category of comments as strengths. Deprived of this assumption, the majority of Vanguard’s claims of disparate treatment are unavailing.

    The remaining examples of allegedly disparate treatment also are not persuasive. Vanguard highlights a number of statements from its SF 330 that are quite similar to comments from other offerors’ PPQs. See, e.g., Pl.’s Mem. at 32 n. 15 (citing AR 36-4161; AR 37-4198). It then claims disparate treatment because the SEB regarded the former as “standard expectations” and considered the latter as strengths. Id. at 32. Yet, for all the offerors (including Vanguard), the agency employed a more generous standard when assessing the PPQs than when evaluating the SF 330s. For example, the SEB gave Vanguard strengths for comments in its PPQs which, if found in an SF 330, would have been deemed overly general or mere compliance with the terms of the contract. See, e.g., AR 40-4313 (finding a strength where PPQ stated Vanguard “consistently meets all major ... milestone[s]”); id. (finding a strength where PPQ noted Vanguard’s “[e]xcellent track record regarding timely completion of their assignments”). Vanguard’s proposal did not suffer disparate treatment in this regard.

    IV. PREJUDICE AND RELIEF

    FEMA’s failure to collect performance information on the PA TAC II contractors was a violation of federal procurement law recognized by GAO, see AR 105-5218 n. 20 (GAO’s First Decision), by the Office of Inspector General for the Department of Homeland Security, see OIG Report at 20, and now by this court. This abdication of responsibility bears on the agency’s ability to assure effectiveness in disaster relief work. The failure by FEMA to collect past performance information about incumbent contractors adversely affected its evaluation of the offerors in the PA TAC III procurement. GAO directed corrective action toward addressing the resulting gap by recommending that the agency consider the PPQs in its possession. AR 105-5218 (GAO’s First Decision). This measure yielded greater insight into the past performance of all the offerors, but it proved inadequate since the most crucial PPQs — those concerning the PA TAC II contract — were never provided. In effect, FEMA disabled its most knowledgeable employee, Ms. Boardwine, from completing PPQs for the incumbents on the PA TAC II contract. See, e.g., Boardwine Dep. at 93:18 to 94:10, 94:15. Moreover, FEMA accorded offerors a very constrained period of time to obtain PPQs from clients. The resulting record does not lead ineluctably to the conclusion drawn by Vanguard, that FEMA demonstrated “deliberate disregard of statutes and regulations,” Pl.’s Mem. at 23, but it does hint at that possibility.

    Once a court has found a procurement error, it must determine whether that mistake prejudiced the protestor. “Not every error by a [contracting officer] justifies overturning an award.” CRAssociates, Inc. v. United States, 95 Fed.Cl. 357, 389 (2010) (alteration in original) (quoting United Int’l Investigative Servs., Inc. v. United States, 42 Fed.Cl. 73, 87 (1998), aff'd, 194 F.3d 1335 (Fed.Cir.1999)). A disappointed bidder must demonstrate that the agency’s error was prejudicial. Impresa Construzioni, 238 F.3d at 1333. To establish prejudice, the protestor “must show that there was a ‘substantial chance’ it would have received the contract award absent the alleged error.” Banknote Corp., 365 F.3d at 1350 (quoting Emery, 264 F.3d at 1086). This “substantial chance” test is “more lenient than showing actual causation, [i.e.], showing that but for the errors [the protestor] would have won the contract.” Bannum, 404 F.3d at 1358. On the other hand, the protestor “must demonstrate more than a ‘mere possibility that the protester would have received the contract but for the error.’ ” Asia Pac. Airlines v. United States, 68 Fed.Cl. 8, 18 (2005) (quoting Data Gen. Corp. v. Johnson, 78 F.3d 1556, 1562 (Fed.Cir.1996)).

    The question, then, is whether Vanguard has shown a substantial chance of winning a PA TAC III contract but for the agency’s several errors in failing to obtain cost-estimating reliability and past performance information regarding the work of its incumbents on the predecessor contract. *790That is, has Vanguard demonstrated that it would have fared differently if FEMA were well-informed about the incumbents’ performance on the PA TAC II contract? The court concludes that Vanguard has not made this showing. The evidence of record indicates that the SSA did not act arbitrarily on the information before her. And, Vanguard has not shown that individual incumbent contractors would have fared poorly on cost-estimating reliability and past performance evaluations. The reports produced by GAO and the departmental Inspector General set out trenchant criticisms of FEMA’s overall performance but fail to point to defects in the past performance of any particular PA TAC II incumbent contractor. In this respect, the court concurs with GAO that “the reports simply do not provide any direct indication that the problems associated with FEMA’s cost estimating issues were due to poor performance by the PA TAC contractors.” AR 105-5217 (GAO’s First Decision). The indirect indicia in that regard, based on inferences drawn from the reports, are not sufficient to take the place of specific evidence.

    In reviewing the record, the court cannot say that Vanguard would have had a substantial likelihood of winning the contract but for FEMA’s errors. While it is evident that the agency violated the law, the consequences of that transgression insofar as this procurement is concerned remain obscured. “In a post-award protest, the protester bears the burden of establishing prejudice,” Ceres Gulf, Inc. v. United States, 94 Fed.Cl. 303, 314 (2010) (citing Bannum, 404 F.3d at 1353), and Vanguard has failed to carry this burden, i.e., to show that it was prejudiced by FEMA’s failure to gather incumbent contractor performance information.

    Similarly, the agency’s weighting, evaluations, and rankings seem to have been somewhat skewed to Vanguard’s detriment. Vanguard is the smallest entity of the seven offerors shortlisted for detailed consideration, at least as judged by personnel and offices. See AR 135-5688; compare AR 135-5683, with AR 19-863, and AR 25-1674. Yet, the court cannot conclude that the overall outcome was arbitrary. The only specifically irrational portion of the SSA’s analysis was nonprejudicial. As discussed supra, the SSA relied on an incomplete ranking of the offerors which omitted any ranking of Vanguard for the three most important factors and omitted ranking several other firms for at least one of those factors. See AR 135-5681 to -5685. Making a best-value determination based upon such an omission can be arbitrary and capricious. However, the court concludes that in this particular case, it did not adversely affect the procurement. Although Vanguard was the only offeror with a superior rating for Factors 1 and 2 that was not ranked by the SSA, the narrative descriptions indicate it would have been placed fifth in each category. See AR 135-5682. For Factor 3, the SSA made critical statements about both CCPRS and Vanguard, but specifically noted that Vanguard’s proposal was only “borderline [a]cceptable.” AR 135-5683 to -5684. Thus it would have been ranked lower than CCPRS. The SSA’s rankings, with the addition of italicized implicit rankings derived by the court solely from her narrative comments, produce the following result:

    [Chart: the SSA’s factor-by-factor rankings of the offerors, with the implicit rankings derived by the court from her narrative comments shown in italics.]

    *791From the information in the record, even if the SSA had completed the chart, she would still have selected the original awardees. CCPRS outranked Vanguard in every category. Fluor and NISTAC exceeded it in every factor save past performance, which was the least important category. AECOM outranked Vanguard in every area except Factor 1. Given the disparities between the awardees’ scores and Vanguard’s, the SSA’s decision itself is sustainable even if her method of reaching her results was not fully satisfactory.

    CONCLUSION

    For the reasons stated, the plaintiffs motion for judgment on the administrative record is DENIED, and the government’s and the defendant-intervenors’ cross-motions for judgment on the administrative record are GRANTED. The clerk shall enter judgment in accord with this disposition.29

    No costs.

    It is so ORDERED.

    . Because this opinion and order might have contained confidential or proprietary information within the meaning of Rule 26(c)(1)(G) of the Rules of the Court of Federal Claims (“RCFC”) and the protective order entered in this case, it was initially filed under seal. The parties were requested to review this decision and to provide proposed redactions of any confidential or proprietary information on or before November 28, 2011. The resulting redactions are shown by asterisks enclosed within brackets, e.g., “[* * *].”

    . GAO’s decisions have been included in the record of the procurement as required by 31 U.S.C. § 3556. See Vanguard, 99 Fed.Cl. at 102. Subsequent citations to those decisions will be to the administrative record, viz., AR 105-5203 to -5218 (GAO’s First Decision) and AR 134-5659 to -5675 (GAO’s Second Decision), respectively.

    . The recitations that follow constitute findings of fact by the court drawn from the administrative record of the procurement and the parties’ evidentiary submissions. See Bannum, Inc. v. United States, 404 F.3d 1346, 1356 (Fed.Cir.2005) (bid protest proceedings “provide for trial on a paper record, allowing fact-finding by the trial court”); Santiago v. United States, 75 Fed.Cl. 649, 653 (2007) (“In accord with RCFC 52.1, the court is required to make factual findings ... ‘from the [administrative] record as if it were conducting a trial on the record.’ ” (quoting Acevedo v. United States, 216 Fed.Appx. 977, 979 (Fed.Cir.2007))).

    . "AR_” refers to the administrative record filed with the court in accord with RCFC 52.1(a). The administrative record has been subdivided into tabs. The first number in a citation to the administrative record refers to a particular tab, and the number after the hyphen refers to the particular page number of the administrative record, e.g., "AR 6-28.” The pages of the administrative record are paginated sequentially without regard to the tabs.

    . In earlier proceedings to settle the administrative record in this case, the court granted Vanguard's motion to supplement the administrative record with a deposition of Ms. Boardwine. See Vanguard, 99 Fed.Cl. at 103. During the prior GAO protest proceedings, Ms. Boardwine had provided declarations that raised questions about the materials available to FEMA at the time the initial procurement award was being considered. See id. at 93-94 & n. 13. Those declarations had played a substantial role in GAO’s decisions to recommend that FEMA take corrective action in this procurement, see id. at 94-96, but her declarations were not amplified during FEMA’s proceedings to take corrective action after GAO’s recommendations. In the circumstances, the court ruled that the government’s continued reliance in this court on Ms. Boardwine's declarations was an "unsatisfactory mode of proceeding.” Id. at 100. Especially because ”[h]er declarations [we]re more significant for what they d[id] not state than what they aver[red] directly," id., a deposition to provide an explanation was essential. Id. at 100-01.

    . Those TEWs that Ms. Boardwine did receive were generally positive. She recalls only one TEW that gave a contractor a poor rating. See Boardwine Dep. at 119:1-3. Even that rating was later expunged from the agency’s records once Ms. Boardwine determined that it did not have a sound basis. Id. at 120:11 to 121:4.

    . The court previously granted Vanguard's motion to supplement the record with the two GAO reports and the report by the Department of Homeland Security's Office of the Inspector General on FEMA's disaster relief programs. See Vanguard, 99 Fed.Cl. at 97-98.

    . The study was limited to non-catastrophic natural disasters for which FEMA had final (or near final) figures on the actual amounts of damages. See First GAO Report at 9. GAO did not consider data from the 264 non-catastrophic disasters between 2000 and 2006 for which final damage totals were not available. Id. Thus, the 83 cases considered by GAO represent approximately 24% of the total pool of non-catastrophic natural disasters for those years. Id.
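    As a check on the percentage reported in this footnote, the figure follows from the counts GAO gave there; the only assumption in this editor-supplied arithmetic is that the 83 studied cases and the 264 excluded disasters together constitute the full pool of non-catastrophic natural disasters for those years:

    \[ \frac{83}{83 + 264} = \frac{83}{347} \approx 0.239 \approx 24\% \]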

    . SF 330 is a form used to provide information in aid of evaluating firms competing for architect-engineer services. See FAR § 36.702(b).

    . The defendant-intervenors dispute whether Ms. Boardwine actually suggested this course of action. Def.-Intervenors' Cross-Mot. at 25 & n. 17. In this respect, Ms. Boardwine’s e-mails to NISTAC and Fluor do not mention the Task Monitors at all. See Def.'s Mot. for Third Correction Ex. A, at 4, 9. However, in her deposition Ms. Boardwine states that the incumbents "called" her, so she may have made this recommendation in a telephonic conversation. See Boardwine Dep. at 91:12.

    . The record contains some confusion on CCPRS’s role in PA TAC I. Ms. Boardwine stated that there were only three contracts awarded for PA TAC I. Boardwine Dep. 63:4-9. The first GAO decision identifies the PA TAC I contractors as Fluor, AECOM, and NISTAC. AR 105-5205 n. 1 (GAO’s First Decision). Even so, CCPRS lists PA TAC I as one of its reference contracts. Its proposal gives no indication that the firm was working as a sub-contractor, see AR 13-672, and in fact, its PPQ for the contract was written by a FEMA PA Coordinator who stated that the client was FEMA, AR 15-699 to -700. Nonetheless, resolution of this inconsistency is not relevant to the case at hand. The parties agree that CCPRS was not an incumbent on PA TAC II, nor does the record suggest otherwise. See CCPRS’s Cross-Mot. at 2; Pl.’s Reply to Cross-Mots. at 19. Consequently, the portions of the plaintiff’s argument aimed at the three PA TAC II incumbents — namely that the SEB ignored negative information pertaining to their work on that contract — do not apply to CCPRS.

    . The initial FEMA evaluation does not indicate which five of the six contracts listed by Vanguard were deemed to be of similar size, type, and scope. See AR 40-4299. This oversight was addressed in subsequent evaluations. AR 40-4309 to -4322.

    . Tab 136 is the third and final Source Selection Board Consensus Report, which describes the decision-making process in the second evaluation. The administrative record filed with the court does not contain a consensus report or Source Selection Decision for the second award.

    . As a result of this filing, GAO dismissed Vanguard's third GAO protest. See Vanguard Recovery Assistance-JV, B-401679.11-.12 (Comp.Gen. Jan. 14, 2011).

    . The government’s third motion to correct the record and CCPRS’ first motion to supplement the administrative record remain pending before the court.

    . Although Fluor did not list either PA TAC contract among its completed projects, see AR 8-189, it did provide a narrative description of its cost-estimates of medical buildings for FEMA, AR 8-191 to -192. Fluor also provided two “success stor[ies] for FEMA,” highlighting its work on the New Orleans Superdome and the South Cameron Memorial Hospital. Id. Similarly, AECOM did not include either PA TAC contract in its Subfactor 1(a) submission, but it did provide a brief vignette of its cost-estimates for schools after Hurricane Katrina. See AR 11-253.

    . FAR § 36.604 (2010) now consists of a cross-reference to FAR § 42.1502(f), with apparently the same requirements.

    . There is a seeming incongruity between the statements made by Ms. Boardwine during the PA TAC III evaluation and the declarations given by her during protest proceedings before GAO. During the source selection, Ms. Boardwine resisted filling out the PPQs for fear of creating an appearance of bias. See Def.'s Mot. for Third Correction Ex. A, at 1 ("Basically, the Contracting Officer has determined that due to my involvement on the PA TAC Acquisition, my response to a [PPQ] for the acquisition is an apparent conflict of interest.”). Afterwards, before the GAO, she emphasized that she was incapable of filling out a PPQ due to a lack of knowledge. See AR 99-5153 (Second Boardwine Decl.) ("I attest that there is no past performance information on PA TAC II ... that I could have made available to the SEB.”); see also AR 105-5217 (GAO’s First Decision). These inconsistencies raise the question of why Ms. Boardwine consulted with the contracting officer about the propriety of filling out the incumbents’ PPQs if she lacked the knowledge to do so.

    . Indeed, the SSN does not indicate whether the offerors would be ranked on a factor-by-factor basis at all. Vanguard appears to believe that this omission prohibits the SSA from ranking the firms by factor. See Pl.’s Mem. at 35-36. To the contrary, however, the silence of the solicitation commends the question of ranking to the reasoned discretion of the SSA. FirstLine, 100 Fed.Cl. at 393-94; cf. D & S Consultants, Inc. v. United States, 101 Fed.Cl. 23, 38 (2011) (“[B]ecause the agency did not commit itself to a particular evaluation methodology in the Solicitation, it was entitled to exercise broad discretion with regard to how to conduct its analysis.”).

    . Vanguard also implies that the agency chose to assign equal weight to the SF 330s and the PPQs with the purpose of undermining its proposal. Pl.’s Mem. at 28 (“Fully aware that Vanguard ... had better PPQ ratings than the awardees who had initially been rated higher in their SF 330s, the SEB unreasonably assigned equal weight to [the two components].”). An inspection of the Factor 4 ratings reveals that this allegation is not proven, although the result is close. Contrary to its claims, Vanguard’s PPQ ratings were not better than AECOM's — both offerors received a superior and an acceptable. AR 136-5712. On the other hand, CCPRS received two acceptables for its PPQ ratings. Nonetheless, CCPRS got a better rating overall because of its superior-rated SF 330. The other contending offerors, NISTAC and Fluor, were not rated higher in their SF 330s than Vanguard. Id.

    . Ms. Rhoads’ declaration is manifestly a post hoc rationalization of the procurement decision reached after the agency protest. Her declaration was, however, put before GAO in the first protest, along with the First Boardwine Declaration and others provided by the additional SEB members. See Vanguard, 99 Fed.Cl. at 94 n. 13. Because those declarations played a role in GAO’s First Decision and the ensuing corrective action by FEMA, the court may now properly take the declarations into account.

    . In oral argument, plaintiff’s counsel noted that a $[* * *] million contract over twelve years would amount to $[* * *] per year. See Hr’g Tr. 49:7-13 (Sept. 13, 2011). However, it is up to the agency to determine whether it is concerned with the aggregate value of the contract or its average annual value. See Seattle Sec. Servs., 45 Fed.Cl. at 570 ("An agency has discretion to determine the scope of the offeror's performance history to be considered provided all proposals are evaluated consistently.” (citing Pacific Ship Repair & Fabrication, Inc., B-279793, 98-2 CPD ¶ 29, 1998 WL 412421 (Comp.Gen. July 23, 1998))); see also Patriot Taxiway Indus., Inc. v. United States, 98 Fed.Cl. 575, 584 (2011) (upholding the agency’s decision to aggregate the values of a contract and its follow-on even though they were performed concurrently for only two weeks).

    . Plaintiff’s briefing repeats this error. See Pl.’s Mem. at 31 (describing the [* * *] contract as "a $[* * *] million contract”).

    . Vanguard’s SF 330 cites another [* * *] contract worth $[* * *] million. AR 22-1446. Vanguard lists this contract as an "example project[] which best illustrate[s] proposed team’s qualifications,” id., but did not reference it in the section on past performance. See AR 22-1468 to -1469.

    . Moreover, the SSN also required the offerors to provide a "narrative discussion” of each reference for Factor 4. AR 2-4. The PPQ provided by the [* * *] does not include a narrative discussion of the contract beyond remarking that Vanguard was “responsive” and "one of our better contractors.” AR 24-1610.

    . Each of Vanguard’s PPQ contracts was easily classified either as large (four contracts valued at $[* * *] million or more) or small (two contracts valued at $[* * *] thousand or less). See AR Tab 24.

    . Vanguard's misunderstanding appears to have several bases. First, the past performance reevaluation consisted of three separate score sheets: "Contractor Strengths" drawn from PPQ comments, "Contractor Weaknesses” also drawn from PPQ comments, and "SEB Overall Assessment” that uses information from both the PPQs and SF 330s. See, e.g., AR 38-4236 to -4238 (NISTAC score sheets). As indicated by the header "Contractor Strengths,” the SEB regarded every PPQ comment quoted under the header as a strength. Vanguard appears to have assumed that every SF 330 comment cited in the "SEB Overall Assessment" was considered a strength as well. Given that the appearance of these sheets is nearly identical, this confusion is understandable.

    Second, in conducting its final re-evaluation of Vanguard’s past performance, the SEB used a different format for the “SEB Overall Assessment” sheet for Vanguard than it did for the other offerors. Compare AR 40-4315 to -4321, with, e.g., AR 38-4238 to -4239. The score sheet for Vanguard was far more detailed and spelled out the SEB's reasoning to a much greater extent. Id. In particular, the agency explicitly stated why it found certain SF 330 comments did not warrant a strength. See, e.g., AR 40-4316 ("The SEB thought that these comments were very general and outlined expected actions to be taken on these types of contracts.”). FEMA did not provide similar explanations for why the other offerors did not receive a strength for their SF 330 comments. See, e.g., AR 38-4238. Vanguard appears to take this silence as evidence of comments that earned competitors a strength. In fact, the agency scored the comments pertinent to the other offerors in the same manner as Vanguard’s: they were sufficient to avoid the opprobrium of a weakness but not meritorious enough to justify a strength.

    . This distinction is even more apparent if one examines the first re-evaluation of Vanguard’s past performance, which uses the same format as the other offerors’ re-evaluations. AR 40-4305 to -4309. There, the SEB repeatedly quoted Vanguard’s SF 330; nonetheless, it only accorded Vanguard a strength in one instance. AR 40-4308.

    . The government’s third motion to correct the record with additions is GRANTED. Defendant-Intervenor CCPRS’ motion to supplement the administrative record is DENIED.