Fort Carson Support Services v. United States, 71 Fed. Cl. 571 (2006)


OPINION AND ORDER

    WOLSKI, Judge.

    This post-award bid protest was before the Court on cross-motions for judgment on the administrative record. Plaintiffs Fort Carson Support Services (“FCSS”) and The Ginn Group, Inc. (“TGG”) separately challenged the award of a contract to intervenor KIRA, Inc. (“KIRA”). For the reasons that follow, the Court DENIES the motions of FCSS and TGG, and GRANTS the cross-motions of the government and KIRA.

    I. BACKGROUND

    A. The Solicitation

    On April 13, 2004, the United States Army issued solicitation number W911RZ-04-R-0002 (“Solicitation”), requesting proposals for a contract to perform base operations and maintenance services for the Directorate of Public Works at Fort Carson, Colorado. See AR Tab 7. This negotiated procurement was for a cost-plus contract, covering a phase-in and one-year base period and nine one-year option periods. See id. § F.2. As was explained in the Solicitation, the award was to be made based on the best value to the government. Offerors were informed:

This decision will not be made by the application of a predefined formula, but by the conduct of a tradeoff process among the factors, subfactors, and elements identified below and by the exercise of the sound business judgment on the part of the Source Selection Authority.... Proposals must conform to solicitation requirements and will be judged by an integrated assessment as being most advantageous to the Government, cost and other factors considered.

    Solicitation § M.1.1, AR Tab 7 (emphasis added).

The Army further explained that “cost is not the primary consideration,” and that it “reserve[d] the right to award to other than the lowest proposed cost or to other than the offeror with the highest quality rating.” Id. § M.1.2. The Solicitation elaborated:

Offerors are advised that primary consideration will be given to the evaluation of the Quality Factor, rather than to the Cost Factor. The Government will determine the best overall value based on an offeror’s proposal, including the Quality and Cost Factors and the risks associated with successful contract performance. The ultimate decision for award will be derived subjectively in the event that the highest quality proposal is not also the lowest cost. In that event, the Source Selection Authority will make a determination, based on the best interests of the Government, whether the differences in quality are sufficient to offset differences in costs.

    Id. (emphasis added).

The two evaluation factors for the source selection decision were “Quality and Cost, with Quality being significantly more important than Cost.” Id. § M.2.1 (emphasis added). The Quality factor was composed of two subfactors, Management and Past Performance and Experience (“PPE”), which were “of approximately equal importance.” Solicitation § M.2.1.1, AR Tab 7. The Management subfactor was made up of five elements: the Management and Technical element, which was “substantially more important than any of the other elements,” id.; the Contract Administration, Human Resources Management, and Quality Control elements, which were “of approximately equal importance,” id.; and the Phase-In Planning element, which was substantially less important than any of the others. Id. The four elements of the PPE subfactor — Performance, Managerial, Business Relations, and Cost Control — were “of approximately equal importance.” Id.

The Solicitation further provided that the Quality factor evaluation would be “a composite evaluation using all of the subfactors and elements associated with this factor,” in which “[t]he Management subfactor and each of its elements will receive an overall adjectival and proposal risk assessment rating,” and “[t]he PPE subfactor and each of its elements will receive a performance risk rating.” Id. § M.2.2. The Solicitation explained, however, that the rating descriptions provided “may not precisely describe the assessment of a specific factor, subfactor, or element. In that case, evaluators will assess the rating with the description most closely describing the actual assessment.” Id. § M.3 (emphasis added). Thus, the source selection decision process was not geared toward the generation of a particular label to place on the quality of each subfactor and element, but employed labels to assist in the actual assessment of the quality of the proposals.

The adjectival ratings for the Management subfactor and its elements ranged from Outstanding, Good, Satisfactory, and Marginal to Unsatisfactory, and each rating was given a full description in the Solicitation. Id. § M.3.1. A “Good” rating, for instance, was merited when a “proposal offers some advantages to the Government as well as fully meeting the requirements of the RFP,” and described an approach that “has a high probability of meeting or exceeding the requirements.” Id. A “Satisfactory” rating “indicated that the proposal offers no particular advantages to the Government but does fully meet the ‘minimum’ requirements of the RFP,” and was associated with an approach having “an acceptable probability of meeting the requirements.” Id. An “Outstanding” rating meant, among other things, that “the proposal offers significant advantages to the Government” and had “an excellent probability of exceeding the requirements,” while a “Marginal” rating meant the finding of “minor omissions or misunderstandings of the requirements of the RFP” and “a low probability of meeting some of the requirements of the RFP.” Id. “Unsatisfactory” was reserved for proposals with “major omissions and/or misunderstandings,” which have “a high probability of not being able to meet a significant number of the requirements of the RFP.” Id.

The risk ratings for the Management subfactor and its components were based on the amount of “disruption of schedule, increase in cost, or degradation of performance” expected, as well as the amounts of “contractor emphasis” and “Government monitoring” that would be required. These ratings ran from best to worst in this order: Minimal, Low, Moderate, Medium-High, and High. Id. § M.3.2.

    Three risk ratings were used for the PPE subfactor, based on the amount of doubt that an “offeror will successfully perform the required effort.” Id. § M.3.3. A “Low” rating meant little doubt, a “Moderate” rating some doubt, and a “High” rating substantial doubt. Id. If an offeror lacked an identifiable performance record, it would instead receive the neutral “Unknown” rating. Id. Regarding PPE, the Solicitation explained that “more similar PPE, in terms of type and scope of work, will be given greater credit than less similar.” Id. § M.2.2.2. The PPE was to be “evaluated by examining both the offeror’s proposal and questionnaires provided to the Contracting Officer by references selected by the offeror.” Id. The Army was also to “consider the breadth and depth of experience reflected in the offeror’s list of work accomplished,” and would consider information obtained from references, from firms or individuals connected to the list of work supplied by the offeror, and from “other relevant sources available to it.” Id.

As was noted above, the Cost factor was significantly less important than the Quality factor. Offerors were told that this factor “will be assessed for completeness, reasonableness, and realism, but will not be assigned an adjectival or risk rating.” Solicitation § M.2.3, AR Tab 7. The reasonableness assessment entailed a comparison of proposed costs to an Independent Government Estimate (IGE). Id. § M.2.3.2. An assessment of realism considered, among other things, whether proposed costs were “consistent with the unique methods of performance and materials described in the offeror’s Management subfactor proposal.” Id. § M.2.3.3. Sounding this same theme, the Solicitation’s discussion of the Management and Technical element of the Management subfactor explained that “[t]he Government will consider whether the proposal provides any innovative ideas that could improve services, enhance customer service, and/or reduce costs.” Id. § M.2.2.1.1. Offerors were specifically warned, albeit parenthetically, of the interplay between the Management subfactor and the cost realism aspect of the cost analysis: “Deficiencies or weaknesses in cost realism may result from comparable deficiencies or weaknesses in the Management subfactor proposal and may, thus, also result in lower adjectival and higher risk rating[s] in the evaluation of that subfactor.” Id. § M.2.3.3.

    The evaluation of the Cost factor involved the consideration of two subfactors, Total Estimated Cost (TEC) and cost realism. The former “includes an assessment of the offeror’s total estimated cost and fee,” an evaluation that “will include fairness and reasonableness of the Total Estimated Cost based on the competition associated with this acquisition.” Id. § M.2.3.5. Concerning cost realism, the Solicitation provided:

Cost realism includes the evaluation of variance, balance and allocation. The Government will develop a Most Probable Cost (MPC) based on the Government’s cost realism analysis. The MPC will be determined by adjusting each offeror’s proposal to reflect any additions or reductions in costs to realistic levels based on the results of the Government’s cost realism analysis. The MPC may differ from the proposed cost and will reflect the Government’s best estimate of the cost that is most likely to result from the offeror’s proposal. An offeror’s eligibility for award may be significantly downgraded if the offeror’s proposed costs and technical approach result in significant cost realism adjustments.

    Id. § M.2.3.6 (emphasis added).

Each of the three aspects of cost realism — variance, balance, and allocation — was to be “evaluated as either satisfactory or substandard.” Id. §§ M.2.3.6.1-M.2.3.6.3. Variance was defined as the difference between a proposal’s TEC, claimed by the offeror, and the MPC, as determined by the Army. Offerors were informed that “consideration [will be] given to the percentage of the variance, the dollar amount of the variance, the anticipated impact on contract performance, and the offeror’s proposed method of performance.” Id. § M.2.3.6.1. Balance was to be based on whether “cost is underestimated in some areas or periods and overstated in others,” id. § M.2.3.6.2, and allocation considered the treatment of direct and indirect costs. Id. § M.2.3.6.3.
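Stated compactly (the Solicitation did not set out the computation as a formula; the following rendering is an editorial gloss, consistent with the variance figures discussed later in this opinion):

\[
\text{variance} = \text{MPC} - \text{TEC}, \qquad \text{variance (percent)} = \frac{\text{MPC} - \text{TEC}}{\text{TEC}} \times 100.
\]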

    B. The Source Selection Plan

The Source Selection Plan (SSP) specified that the Source Selection Authority (SSA), Mr. Steve J. McCoy, would appoint a Source Selection Evaluation Board (SSEB) to assist him in the evaluation of proposals. SSP, AR Tab 34 at 6. The SSEB consisted of a Chairman, a Quality Committee, a Cost and Past Performance Committee, the Contracting Officer, the Contract Attorney, and technical advisors. See Attachment E to SSP, AR Tab 34 at 38; see also AR Tab 31 (SSEB appointment letters). While all members of the SSEB were to read the Solicitation and the SSP, it appears that only the individual members of the Quality Committee were charged with reading and evaluating each proposal to recommend ratings for the Management subfactor and its elements. SSP § III.B(2), AR Tab 34 at 9. These individuals were to provide a written narrative supporting their evaluations. Id. The SSP required “the relevant members of the SSEB” to meet, once these individual evaluations were completed, to “reach a consensus evaluation” for the Management subfactor and its elements. Id. The SSP then provided:

    The SSEB will assign adjectival ratings to the elements, and based upon the relatively equal importance of the elements, assign an overall adjectival rating to the overall subfactor. Ratings must be accompanied by a consistent narrative assessment, inclusive of strengths and weaknesses that describes the basis for the assigned ratings.

    Id. The proposal risks for the Management subfactor and its elements were to be “assess[ed] and assign[ed]” at this same time. SSP § III.B(3), AR Tab 34 at 9. The SSP also required, for this subfactor and elements, that “[a]ll evaluations shall be supported with detailed rationale.” SSP § III.B(4), AR Tab 34 at 9. The SSP used the same definitions of adjectival and risk ratings as were contained in Solicitation section M, which was also included as attachment C to the SSP. Compare SSP § III.B(5)-(6), AR Tab 34 at 10-11 with Solicitation §§ M.3.1-M.3.2, AR Tab 7 at 52-53; see also Attachment C to SSP, AR Tab 34 at 18.

Concerning the Past Performance and Experience evaluation, SSEB members were to “[d]ocument [the] individual assessments, conduct a consensus evaluation and prepare a summary report for the [Contracting Officer].” SSP § III.C(2), AR Tab 34 at 11. The performance risk definitions from the Solicitation were repeated in the SSP. See id. § III.C(3) at 11. The SSEB’s Cost factor evaluation language fairly closely tracked the method explained in the Solicitation. Compare SSP § III.D, AR Tab 34 with Attachment C to id. The SSP provided: “Cost proposal information will be restricted to the SSA, the SSEB Chairman, the member(s) of the Cost team, the Contracting Officer, and the Contract Attorney. This information will be secured separately from Management and Past Performance submissions.” SSP § III.D(5), AR Tab 34 at 13.

The SSP explained that SSEB members were to “[p]articipate in group discussions to arrive at consensus ratings of each proposal,” and emphasized that these ratings “shall not be mathematically computed based on the individual evaluations, but rather shall be established by agreement among the SSEB members.” SSP § II.B(5)(d), AR Tab 34 at 8. The SSEB’s consensus evaluation was to be contained in a Proposal Analysis (PA) Report. See id. § II.B(5)(c) at 8. The PA Report was described as a “formal report that fully documents the results of the [SSEB], using objective appraisals and ratings of the Quality and Cost proposals.” Id. § I.F(8) at 4. The “ratings and appraisals” in the PA Report were required to “be the consensus agreement of all SSEB members.” Id. The SSP further explained that one of the responsibilities of the Chairperson of the SSEB was that this individual:

Prepares and submits a comprehensive PA (Proposal Analysis) Report of the results of the evaluation to include a summary; adjectival ratings of the Management subfactor along with a statement of the specific deficiencies, strengths, and weaknesses with respect to the overall subfactor and each element; the proposal and performance risk assessments; and overall recommendations to the [Contracting Officer]. The report shall be supported by the individual SSEB members’ evaluations and other reports/write-ups on the evaluation process.

Id. § II.B(4)(h) at 7.

As was provided in the Solicitation, the SSA was ultimately responsible for the award decision, as well as for “document[ing] the supporting rationale.” Id. § II.B(1)(f) at 6. According to the SSP, the proposals were to be evaluated by the SSA by assessing the Management subfactor and its five elements, the PPE subfactor and its four elements, and the Cost factor. Id. § IV.A(1). The cost factor assessment was said to involve an evaluation for completeness, reasonableness, and realism. Id. In addition to again omitting any reference to the Total Estimated Cost subfactor from section M of the Solicitation, see Solicitation § M.2.3.5, AR Tab 7 at 51, the SSP contained this erroneous instruction to the SSA: “Within the Quality factor, the Management subfactor is considered substantially more important than the Past Performance and Experience subfactor. Within both subfactors, the elements are of roughly equal importance.” SSP § IV.A(2), AR Tab 34 at 14. The SSP also provided:

    The SSA shall utilize all evaluation reports and analysis in determining the successful proposal pursuant to the basis for award language shown in both FAR Provision 52.215-1 Instruction to Offerors — Competitive and FAR Part 15.305 Proposal Evaluation. In making the award determination, the SSA can draw on his business judgment and personal knowledge.

Id. § IV.B(1) at 14.

    C. The Initial Evaluation

Six proposals were submitted in response to the Solicitation. See AR Tab 76 at 13. After the SSEB performed its individual and consensus evaluations, and briefed the SSA, the decision was made to award the contract to plaintiff Fort Carson Support Services (FCSS), a joint venture comprising Olgoonik Management Services, LLC, and Technical Design, Inc. AR Tab 77 (Aug. 11, 2004 Source Selection Decision Document (SSDD)). Although the Solicitation incorporated by reference the FAR provision expressing the intent to award a contract after discussions, see Solicitation, AR Tab 7 at 35 (referencing 48 C.F.R. § 52.215-1 Alternate I), the Army had reserved the right to award a contract based on initial offers, without discussions. Id. at 38, § L.5.5. The SSEB had given the identical PPE risk rating of “Low” to each offeror for the overall PPE subfactor, as well as for each element of this subfactor. See AR Tab 76 (Aug. 10, 2004 Decision Briefing) at 16, 22, 27, 33, 38, 44. The SSA concurred in this evaluation, and thus decided: “With all offerors essentially equal, this subfactor was not determinative in selecting the best value offeror.” AR Tab 77 (SSDD) at 2.

The initial award thus came down to a consideration of the Management subfactor evaluations and the cost of each proposal. The SSA found that FCSS’s “proposal received significantly higher ratings in the Quality factor, showing greater understanding of requirements, better attention to detail, and more innovations than any of the other proposals.” AR Tab 77 (SSDD) at 5. For the overall Management subfactor, FCSS was assigned a “Good” adjectival rating and a “Low” proposal risk rating. By contrast, intervenor KIRA, to which the contract was ultimately awarded, received a “Marginal” adjectival rating and a “Medium-High” proposal risk rating. Id. at 2-3. These same ratings were also assigned to the other plaintiff, The Ginn Group, and a fourth offeror, Chenega Advanced Solutions & Engineering, LLC. Id. at 3-4. The other two offerors were rated even lower — one received an “Unsatisfactory” and a “High” risk, and the other a “Marginal” and a “High” risk. Id.

These two offerors with the lowest-rated Quality factor also proposed the lowest cost. The TEC of FCSS was fourth lowest, just $[XXXX] higher than The Ginn Group’s over ten years, and its MPC was the lowest of all offerors. Id. at 2-5. The SSA concluded, “[a]ny modestly lower costs possible from any of the other offers are more than offset by the significantly higher ratings [FCSS] achieved in the Quality factor (reflecting its superior presentation of understanding of the requirements and innovative approaches to satisfying them),” and he selected FCSS as the Best Value Offeror. Id. at 5. The SSA found FCSS’s proposal to be “clearly superior to the other proposals,” and thought “the other proposals would require significant improvement to be competitive with” FCSS’s, justifying an award without discussions. Id.

    D. The Initial Protests

    On August 12, 2004, the Contracting Officer and other officials signed and approved the Price Negotiation Memorandum memorializing the decision to award the contract to FCSS. See AR Tab 78. Five days later, the Army Contracting Agency approved the award. AR Tab 79. Pre-award notification letters were sent to the unsuccessful offerors on August 26, 2004. AR Tab 81. On September 1, 2004, the contract was awarded to FCSS. AR Tab 85. That same day, post-award notification letters, which also served as written debriefings, were sent to the unsuccessful offerors. AR Tab 87 (letter to KIRA), Tab 88 (letter to TGG). Each letter briefly informed the unsuccessful offeror of the strengths, significant weaknesses, and deficiencies of that offeror’s proposal and revealed its MPC. The letters also explained that FCSS’s “proposal received significantly higher ratings in the Quality factor, showing greater understanding of the requirements, better attention to detail, and more innovations than any of the other proposals.” AR Tabs 87, 88. Perhaps most significantly, the letters revealed FCSS’s MPC of $204,301,121, and identified FCSS’s proposed cost to be $165,766,025. Id.

After receiving the written debriefing, KIRA, TGG, and a third offeror filed protests of the award with the U.S. Government Accountability Office (GAO). AR Tabs 90, 91, 93. Each of these protests identified the variance between FCSS’s proposed cost and its MPC as a ground for challenging the award. AR Tab 90 (KIRA protest) at 5-9; Tab 91 (TGG protest) at 14-15; Tab 93 (protest of Chenega) at 10, 12. In preparing the government’s defense of the protests, the Contracting Officer discovered some “problems with the information” used in the evaluation process. See AR Tab 95 at 2 (Oct. 1, 2004 Memo to SSA). First, the proposed cost figure for FCSS that was provided to the unsuccessful offerors in the post-award notification letters and that was used in the award — approximately $165.8 million — differed significantly from the figure used in the SSA’s evaluation, which was approximately $178 million. Id.; see AR Tab 76 at 34; Tab 77 at 3; see also Tab 78. This difference was due to FCSS using different numbers in two places in its proposal, which the Contracting Officer decided was “too significant to be considered a minor or clerical error,” but instead was a deficiency precluding award to FCSS. AR Tab 95 at 2. Second, the Contracting Officer realized that regardless of whether the proposed costs were $165.8 million or $178 million, the presentation to the SSA of FCSS’s variance was greatly understated: it was presented “as $10.2 million and 6 percent, as opposed to the actual $26.3 million and 15 percent for the $177 million cost or $38.5 million and 23 percent for the $165 million cost.” Id.; see also AR Tab 76 at 35. And third, the Contracting Officer believed “there were areas in which the documentation of the SSEB’s actions was not as good as it should have been.” AR Tab 95 at 2.
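The understatement can be checked against the rounded figures quoted above, applying the variance formula set out earlier (the intermediate arithmetic is supplied here for clarity and does not itself appear in the record):

\[
\$204.3\text{M} - \$178\text{M} = \$26.3\text{M}, \qquad 26.3/178 \approx 15\%;
\]
\[
\$204.3\text{M} - \$165.8\text{M} = \$38.5\text{M}, \qquad 38.5/165.8 \approx 23\%,
\]

each several times the 6 percent variance that had been presented to the SSA.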

In response to the discrepancy in FCSS’s proposed cost figures, and at the recommendation of the Contracting Officer, the adjectival and risk ratings given to the FCSS proposal for the Management and Technical element and the overall Management subfactor were downgraded from “Good” and “Low” to “Satisfactory” and “Moderate.” AR Tabs 94, 95. Presumably because of the “problems with the information” identified by the Contracting Officer, see AR Tab 95 at 2, on October 5, 2004 the Army took corrective action, staying performance of the contract awarded to FCSS and establishing a competitive range that included FCSS and the three protesting offerors. AR Tab 97; see also id. Tab 105 (Nov. 8, 2004 Prenegotiation Objective Memorandum determining the offerors in the competitive range). The GAO subsequently dismissed the three protests as “academic.” AR Tab 103 (Nov. 4, 2004 GAO decision). Apparently because it recognized that information concerning FCSS’s costs and evaluation was included in the post-award notification letters, the Army considered providing each offeror with a copy of the others’ letters, but FCSS waived the right to see any debriefing documents due to concerns that this “could constitute technical leveling.” See AR Tab 98 (Oct. 7, 2004 letter from FCSS).

    E. Evaluation of the Final Proposal Revisions

    After the SSEB met on several occasions to discuss the strengths, uncertainties, weaknesses, significant weaknesses, and deficiencies previously identified in the evaluations of the various proposals, see AR Tabs 99, 101, 102, letters were sent to each offeror in the competitive range, with a list of the uncertainties, weaknesses, significant weaknesses, and deficiencies of that offeror’s proposal. AR Tabs 115-17. At the bottom of each list, after the identification of “Deficiencies” and set off from the rest of the document by a line running the width of the text, was the identical note for each:

In addition to the above matters, the proposed staffing reflected substantial variances from that which the Government considered necessary. These variances included large numbers of instances of understaffing and a much smaller number of instances of overstaffing.

    Id.

Discussions were separately held between representatives of each offeror and members of the SSEB. See AR Tabs 132, 135, 138 (Army’s discussion minutes). During these discussions, the Contracting Officer fielded questions about the specific items from each list of criticisms provided to that offeror, and also followed a written script so that each offeror was provided the same general description of the need to recognize and consider possible understaffing and to justify any reduction in staff needs due to innovations. Each offeror was told the corresponding categories from the Solicitation’s Performance Work Statement, see AR Tab 13, for which its proposal was understaffed or overstaffed, and this information was repeated in e-mails sent the following week. See AR Tabs 143, 146, 147. In the discussion with FCSS, the Army told FCSS what had already been revealed to its competitors — that the Most Probable Cost estimate for its proposal was $204.3 million. Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 37.

On December 21, 2004, the Army issued Amendment 10 to the Solicitation, which invited each offeror in the competitive range to submit a Final Proposal Revision (FPR). AR Tab 149. In this amendment, the Army repeated that “the Quality factor is significantly more important than the Cost factor” and emphasized: “Especially in light of this, revised proposals must show a clear correlation between methods and means for accomplishing or exceeding the requirements of the solicitation as expressed in the Management subfactor proposal and the costs shown in the Cost factor proposal.” Id. at 2. Offerors were informed that the Army “will re-validate PPE information, including any new information that may be available for activities since the original proposal submission date,” and that offerors would be allowed to add two additional references who may submit PPE questionnaires. Id. The amendment also stressed: “As stated in paragraph M.2.2.1.1, the Government will consider any innovative ideas, but proposals should clearly identify and explain ... any cost savings that rely upon those innovations.” Id. at 2-3 (emphasis added).

Fort Carson Support Services, TGG, and KIRA each timely submitted an FPR. See AR Tab 176. Following guidelines stating that they were to “[r]e-evaluate the proposal based on revisions,” AR Tab 177, the SSEB members proceeded to review each FPR. Caucuses were held to discuss the proposals. See AR Tabs 181 (TGG caucus notes), 185 (KIRA caucus notes), 189 (FCSS caucus notes), 196 (Finalization of Management Subfactor Scoring). Of the three offerors, only KIRA submitted additional PPE information, obtaining questionnaires for two additional contracts as allowed under Amendment 10. See AR Tab 205 (PPE Pre-caucus Evaluator Comments); AR Tab 194 (KIRA PPE Re-validation Documentation). KIRA also submitted information concerning FCSS’s proposed subcontractor, Kellogg Brown & Root (“KBR”), characterizing that firm’s alleged loss of property in Kuwait as “one of the worst contractor performances in recent memory,” and attaching an audit report concerning that KBR contract. AR Tab 179 at 3 (Jan. 26, 2005 letter from KIRA). The Army also searched the Past Performance Information Retrieval System for new evaluations concerning the offerors, and found one report for an on-going KIRA contract, and two new reports for one of the FCSS joint venture partners. See AR Tabs 194, 195 (FCSS PPE Re-validation Documentation), 205.

The Army evaluated the FPRs to determine the Most Probable Cost for each. This involved identifying instances of overstaffing and understaffing, relative to the Independent Government Estimate (“IGE”) for the contract, taking into account labor savings due to proposed innovations. See AR Tabs 197 (IGE), 206 (KIRA MPC adjustments), 207 (TGG MPC adjustments), 208 (FCSS adjustments).

On March 30, 2005, the Chairman of the SSEB submitted the Proposal Analysis to the Contracting Officer, reflecting the consensus judgment of the SSEB. AR Tab 209. The PA Report explained that the time between the last caucus meeting concerning an individual FPR — February 17, 2005 — and the final caucus on March 4, 2005, “was used by SSEB members to re-review the initial proposals as necessary and to reflect on whether they had evaluated all the final proposals in the same manner.” Id. at 3. Concerning PPE, the Chairman noted “that the descriptions of the various performance risk assessments are very broad,” and thus “offerors which have received the same assessment may not be equal in regard to PPE, and, as a result, the supporting documentation may provide additional relevant insight, if necessary.” Id. at 5. The PA Report further explained:

As part of the evaluation process for the final revised proposals, the SSEB reconsidered the PPE information in light of the importance of the PPE Subfactor (relatively equal to the Management Subfactor) and in light of the fact that the Management Subfactor ratings and proposal risk assessments of the offerors were closer than they had been in the initial evaluation round. In this reconsideration, the SSEB found some instances in which it felt that its initial evaluations of PPE had been erroneous and made appropriate adjustments in this round.

    Id. at 5 (emphasis added).

    The PA Report set out the basis of the MPC calculation for each proposal. Starting with the IGE’s 258 full-time equivalent employees, an additional five were added for KIRA to reflect positions proposed under its method of performance that were not included in the IGE. Id. at 6. This number was then reduced by 6.3 man-years, reflecting credit “for proposed innovations that the SSEB deemed justified.” Id. The SSEB found that other reductions for efficiencies proposed by KIRA “were not supported by documentation in the proposal and, thus, were not reflected in the MPC calculation.” Id. The KIRA proposal was determined to be overstaffed in five positions and understaffed in forty-two, for a net shortage of 30.7 man-years. See id.; see also AR Tab 211 at 39. The KIRA proposal’s MPC over ten years was calculated as $209.3 million — a variance of $[XXX], or [XXX] percent, over its TEC. AR Tab 209 at 7. The SSEB determined that TGG was overstaffed in 8.5 positions and short by 33 man-years, for a net shortage of 24.5 productive man-years relative to the IGE. See id.; see also AR Tab 211 at 59. The total MPC for TGG’s proposal was $[XXXX] million, a variance of $[XXX] million, or 7.4 percent, over its TEC. AR Tab 209 at 8. The FCSS proposal was found to be overstaffed by nineteen positions and understaffed by 94.5 man-years’ worth. Id. The proposal “was credited with 3 man-years for proposed innovations and .5 man years for home office effort.” Id. This resulted in a total MPC of $208.6 million, which was $32.7 million — or 15.4 percent — higher than the FCSS proposal’s TEC. Id.
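The net-shortage figures appear to follow from treating net shortage as understaffing less overstaffing less any credited man-years (a decomposition inferred from the figures above; the PA Report does not state it as a formula):

\[
\text{KIRA: } 42 - 5 - 6.3 = 30.7 \text{ man-years}; \qquad \text{TGG: } 33 - 8.5 = 24.5 \text{ man-years}.
\]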

In discussing the variance of the calculated MPC from the proposals’ estimated costs, the Chairman noted that KIRA’s variance for the first option year was just 5.8 percent or $1.1 million. Id. at 7. This variance “was evaluated as not likely to significantly impact contract performance and reasonable in relation to the offeror’s proposed method of performance.” Id. The higher variance of 10.6 percent from the KIRA proposed cost for the full life of the contract “did raise more of a question about impact on contract performance and reasonableness of estimated costs,” but “was still considered satisfactory.” Id. The TGG proposal’s smaller variance of 7.4 percent based on total MPC “was evaluated as not likely to significantly impact contract performance” and was found satisfactory. Id. at 8. The FCSS variance of 15.7 percent for the first option year and 15.4 percent from its total estimated cost “was evaluated as possible to impact contract performance and raised [a] question in relation to the offeror’s proposed method of performance,” resulting in the SSEB’s finding that this variance was “only marginally satisfactory.” Id.

    Concerning the Quality factor, the PA Report contained less information. Despite the statement, quoted above, that the SSEB found that some of “its initial evaluations of PPE had been erroneous,” id. at 5, no risk ratings or adjustments relating to the PPE consensus evaluation are identified. For the Management Subfactor scoring, the briefing charts prepared for the SSA presentation, see AR Tab 211, are referenced as providing the consensus ratings and proposal risk assessments. AR Tab 209 at 3. The Chairman did, however, elaborate on the methodology followed by the SSEB:

As part of the Management Subfactor evaluation, the SSEB considered whether the offerors’ proposed staffing was consistent with their proposed means and methods for performing the contract. This item was a point of emphasis in the discussion sessions with the offerors in December and, afterwards, was specifically re-emphasized in Amendment 10, paragraph 4b. As a result, the SSEB scrutinized this issue much more closely in the second evaluation, and did not, as it may have in the first evaluation, give benefit of doubt to the offerors. Shortfalls and excesses in proposed staffing thus determined were then used in the evaluation of the Cost Factor, as well as, where appropriate, in assessing the rating and proposal risk assessment for the Management Subfactor.

    Id. at 4 (emphasis added).

The briefing charts used in the SSEB’s presentation of its evaluation results to the SSA identified the strengths and weaknesses found for each proposal under the Management Subfactor and each corresponding element, and provided the adjectival and proposal risk ratings. See AR Tab 211 (Mar. 31, 2005 FPR Evaluation Briefing). These charts also explained the PPE evaluations and risk ratings, and provided the Cost factor information. Id. The Management Subfactor adjectival and proposal risk ratings for KIRA improved from the initial “Marginal” and “Medium-High” to “Satisfactory” and “Moderate.” Id. at 32. For the individual elements, the SSEB found several improvements in the KIRA FPR. The ratings in the most important Management and Technical element improved, as did the overall Subfactor scoring, from the initial “Marginal” and “Medium-High” to “Satisfactory” and “Moderate.” AR Tab 211 at 22. The adjectival rating for Human Resources Management improved from “Satisfactory” to “Good,” although the risk rating slipped from “Low” to “Moderate/Low.” Id. at 28. The Phase-In Planning adjectival rating moved from the initial “Good” to “Outstanding.” Id. at 24.

For offeror TGG, the Management Subfactor evaluation also showed an improvement in its FPR. The overall ratings and those for the Management and Technical element — initially “Marginal” with “Medium-High” risk — were raised by the SSEB to “Satisfactory” and “Moderate” risk. Id. at 42, 52. For the Contract Administration and Human Resources Management elements, TGG’s adjectival ratings improved from “Marginal” to “Satisfactory.” Id. at 46, 48. But TGG’s Quality Control element continued to be rated “Unsatisfactory,” with “High” proposal risk. Id. at 50. The SSEB found this aspect of TGG’s proposal to be a deficiency, because it “lacked sufficient detail to be evaluated,” id. at 51, with the result that TGG “would not be eligible for award.” Id. at 52.

The SSEB slightly reduced the overall Management Subfactor ratings of FCSS, dropping its evaluation of proposal risk from “Low” to “Moderate.” AR Tab 211 at 72. The Management and Technical element was downgraded from the initial “Good” and “Low” risk to “Satisfactory” and “Moderate” risk. Id. at 62. On the other hand, the significantly less important Quality Control element ratings were boosted from “Marginal” and “Moderate” risk to “Good” and “Low” risk. Id. at 70.

Concerning Past Performance and Experience, the SSEB determined that neither KIRA nor TGG had any experience with cost contracts, and changed the risk rating for that element from “Low” to the neutral “Unknown” for both. Id. at 35, 55. The “Low” risk assessment for the other elements of PPE, and for PPE overall, was retained for these two offerors. Id. at 35-36, 55-56. Two of the four ratings of the PPE elements for FCSS dropped from “Low” to “Moderate” — Managerial, with the comments that “[r]atings varied” and “lower scores generally came from Plum Island,” id. at 75; and Cost Control, for which the SSEB noted one of the joint venture members “had problems with only cost contract.” Id. The SSEB gave FCSS an overall PPE Subfactor assessment of “Low/Moderate” risk, compared to the initial “Low” assessment. Id. at 76. The briefing charts also contained the same cost information that was included in the PA Report. See AR Tab 211 at 38-40, 58-60, 78-80, 83.

F. The Source Selection Decision

Two weeks after receiving the SSEB briefing, the SSA issued his second SSDD for this procurement. AR Tab 214 (Apr. 14, 2005 SSDD). The SSA stated that he had “fully considered the merits of the proposals” and “the facts and conclusions provided by” the SSEB. Id. at 1. He determined that KIRA’s proposal offered the best value, and documented his rationale. Id. The SSA stated that, “[e]xcept where noted,” he concurred in the SSEB’s evaluations of the offerors. Id. at 2. The SSA explained that KIRA “substantially revised its proposal for this round,” and he assigned it a “Satisfactory” rating with “Moderate” proposal risk for the Management Subfactor and the Management and Technical element, as did the SSEB. Id. The SSA briefly discussed some of the strengths and weaknesses of the KIRA proposal. He found that KIRA’s PPE information “reflected that it has had experience as a prime contractor, although not in projects of the scope or magnitude to [sic] this contract.” Id. The SSA noted that the proposed main subcontractor of KIRA, however, “has had experience in similar projects,” and the PPE performance risk of “Low” was assigned to KIRA across the board. Id.

    For TGG, the SSA concurred in the “Satisfactory” rating and “Moderate” proposal risk given the FPR by the SSEB, id., but also concurred in the deficiency identified by the SSEB:

    In contrast to [TGG]’s Management strengths, the Quality Control plans provided in its proposal, as revised, lacked detail sufficient to reflect an understanding of the requirements. As a result, the SSEB evaluated this element as deficient, with an unsatisfactory rating and a high proposal risk.

    Id. at 3. The performance risk assessment of “Low” was given TGG, for the PPE Subfactor and each element, with the SSA noting the “submitted information reflected experience as a prime contractor, although not of the magnitude of this contract.” Id.

    In discussing the FCSS proposal, the SSA briefly discussed its “significant strengths,” but added that it “had an even more significant weakness of a very large shortage in staffing, evaluated as almost a fourth of the total workforce estimated by the Government as necessary for this contract.” Id. The SSA characterized the FCSS past performance information as “showing experience for each of its two joint venturers as a prime, although of considerably less magnitude and narrower scope than this contract.” Id. (emphasis added).

Discussing his best value determination, the SSA explained that this “included consideration of both the stated weights of the evaluation factors and subfactors and the large range of the various rating definitions.” AR Tab 214 at 3. For the Management Subfactor, the SSA “considered [KIRA]’s proposal to be at the high end of satisfactory.” Id. at 4. Although FCSS’s proposal “was better written than either of the other two, based on its distinctly better maintenance plans,” the SSA agreed with the SSEB that “this advantage was undercut by its significant understaffing, which was almost twice that of [KIRA] and more than twice that of [TGG].” Id. He concluded:

    This understaffing raised [a] question of the credibility to be afforded [FCSS]’s Management proposal; i.e., whether [FCSS] really understood the contract requirements. As a result, I considered [FCSS]’s Management subfactor to be only slightly better than [KIRA]’s, with [TGG] below the other two.

    Id. Concerning PPE, the other subfactor under Quality, the SSA explained:

Looking beyond the ratings to the substantive information, I considered [KIRA] to be well above the other two. This was based on [KIRA]’s experience as a prime and the higher quality of the ratings by its references. Both [TGG] and [FCSS]’s experiences as primes were significantly less relevant than [KIRA]’s, and [FCSS]’s ratings were considerably more mixed than either of the other two offerors.

Id. Regarding the Quality factor, the SSA found KIRA’s proposal to be the best of the three:

Overall, then, for the Quality factor, [KIRA]’s substantially better PPE than [FCSS]’s more than offset [FCSS]’s slightly better Management subfactor proposal. I considered [TGG]’s PPE to be somewhat better than [FCSS]’s, but not enough to offset [FCSS]’s considerably better Management subfactor proposal.

    Id.

Looking at the Cost factor, the SSA noted that although FCSS’s “Total Estimated Cost was about $11 million lower than [KIRA]’s, that difference is mostly attributable to [FCSS]’s understaffing,” and recognized that KIRA’s “MPC was only about $700,000 higher than” FCSS’s. Id. at 4. The SSA believed the “minor difference” in MPC over ten years was “essentially insignificant.” Id. Moving to the staffing shortage, the SSA explained that in the initial evaluation, he “gave [FCSS] a substantial credit in manpower based on its assertion of savings anticipated from proposed innovative methods of performance,” but this still left “a significant manpower shortage, which was pointed out to [FCSS] in discussions.” Id. He noted that “[i]n the second round of evaluation, the SSEB reassessed [FCSS]’s essentially still undocumented claims of innovation savings and gave them a minimal credit.” Id. The SSA acknowledged that KIRA “also had a significant staffing issue,” based on undocumented alleged efficiencies that progressively grow in the option years. Id. But the KIRA variance was “substantially less” than FCSS’s, and KIRA’s “proposal for the first full contract year showed reasonable staffing,” which the SSA thought were “major distinctions” between the two offerors. Id. at 5. The SSA added that he and the SSEB had “concern about the credibility of [FCSS]’s technical approach,” as FCSS’s “large staffing shortfall existed throughout its proposal.” Id.

    The SSA thus determined that KIRA’s proposal offered the best value to the government. This decision was confirmed in a Post-Price Negotiation Objective Memorandum signed April 14, 2005, and approved April 21, 2005. AR Tab 223. Pre-award notifications were sent to FCSS and TGG on April 25, 2005. AR Tabs 225, 227. The contract award was executed on June 14, 2005, AR Tab 245; see also id. Tab 244 (June 10, 2005 Congressional Notification), and the post-award notices were sent the next day. AR Tabs 248, 251. On June 16, 2005, the Army held a post-award debriefing for FCSS. See AR Tab 255. Five days later, TGG received its post-award debriefing. See AR Tab 259.

    G. Procedural History

    The post-award bid protest of FCSS was filed in this Court on June 20, 2005, and the protest of TGG was filed on July 1, 2005. A protective order was issued on June 21, 2005. The cases were consolidated on July 11, 2005.

The government filed the Administrative Record, consisting of fourteen volumes, on July 18, 2005. On several occasions during the pendency of this case, the government and both plaintiffs have moved for leave to file additional documents as part of the Administrative Record, to add recently discovered documents, transcriptions of briefings, or clearer copies of documents previously submitted. By order dated August 31, 2005, the Court granted the plaintiffs’ motions for leave to file affidavits for the purpose of attempting to demonstrate irreparable injury and the propriety of injunctive relief. Order (Aug. 31, 2005) at 1 (citing Gulf Group, Inc. v. United States, 61 Fed.Cl. 338, 349 (2004)). That order also granted a motion of FCSS requiring the government to supplement the record with additional materials. The government had represented that the Administrative Record was limited to materials considered by the contracting officer, and this formulation suggested that materials used by the SSA and the SSEB were omitted. The government was reminded that the record must reflect “all of the information considered by the agency officials involved in the evaluation,” as well as audio tapes or transcripts of debriefings. Id. at 2 (citing Gulf Group, 61 Fed.Cl. at 347, and App. C to the Rules of the United States Court of Federal Claims ¶ 22(j), (n), (r)).

    To allow the matter to be set for a normal briefing schedule, the government initially extended the contract of the incumbent contractor through the end of 2005, see Order (June 21, 2005) at 1, and then further extended that contract to June 30, 2006. Tr. at 325. Oral argument was held on March 14, 2006. The Court issued an order on June 1, 2006, granting the government’s and KIRA’s motions for judgment on the administrative record, and denying the plaintiffs’ motions for judgment on the administrative record. This opinion explains the reasoning behind that order.

    II. DISCUSSION

    A. Standard of Review

    Post-award bid protests are heard by this Court under the Tucker Act, as amended by the Administrative Dispute Resolution Act of 1996 (“ADRA”), Pub.L. No. 104-320, §§ 12(a)-(b), 110 Stat. 3870, 3874 (1996). 28 U.S.C. § 1491(b)(1) (2000). This provision requires our Court to follow Administrative Procedure Act (“APA”) standards of review. 28 U.S.C. § 1491(b)(4). Those standards, incorporated by reference, provide that a:

    reviewing court shall ... (2) hold unlawful and set aside agency action, findings, and conclusions found to be — [¶] (A) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; [¶] (B) contrary to constitutional right, power, privilege, or immunity; [¶] (C) in excess of statutory jurisdiction, authority, or limitations, or short of statutory right; [¶] (D) without observance of procedure required by law; [¶] (E) unsupported by substantial evidence in a case subject to sections 556 and 557 of this title or otherwise reviewed on the record of an agency hearing provided by statute; or [¶] (F) unwarranted by the facts to the extent that the facts are subject to trial de novo by the reviewing court. In making the foregoing determinations, the court shall review the whole record or those parts of it cited by a party, and due account shall be taken of the rule of prejudicial error.

    5 U.S.C. § 706 (2000).

Based on an apparent misreading of the legislative history, see Gulf Group, Inc. v. United States, 61 Fed.Cl. 338, 350 n. 25 (2004), the Supreme Court had determined, before the 1996 enactment of the ADRA, that the de novo review standard of 5 U.S.C. § 706(2)(F) does not usually apply in review of informal agency decisions — decisions, that is, such as procurement awards. See Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402, 415, 91 S.Ct. 814, 28 L.Ed.2d 136 (1971). Instead, courts in those cases are supposed to apply the standard of 5 U.S.C. § 706(2)(A): whether the agency’s acts were “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” See Overton Park, 401 U.S. at 416, 91 S.Ct. 814 (citation omitted). The “focal point for judicial review should be the administrative record already in existence, not some new record made initially in the reviewing court.” Camp v. Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 36 L.Ed.2d 106 (1973). This applies even where, as here, the matter under review was not the product of a formal hearing. See Florida Power & Light Co. v. Lorion, 470 U.S. 729, 744, 105 S.Ct. 1598, 84 L.Ed.2d 643 (1985).

A motion under RCFC 56.1 for judgment on the administrative record differs from motions for summary judgment under RCFC 56, as the existence of genuine issues of material fact does not preclude judgment on the administrative record. See Bannum, Inc. v. United States, 404 F.3d 1346, 1355-56 (Fed.Cir.2005). Rather, a motion for judgment on the administrative record examines whether the administrative body, given all the disputed and undisputed facts appearing in the record, acted in a manner that was arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law. See Arch Chemicals, Inc. v. United States, 64 Fed.Cl. 380, 388 (2005). If arbitrary action is found as a matter of law, the Court will then decide the factual question of whether the action was prejudicial to the bid protester. See Bannum, 404 F.3d at 1351-54. Under this “arbitrary and capricious” standard, the Court considers “whether the decision was based on a consideration of the relevant factors and whether there has been a clear error of judgment” by the agency. Overton Park, 401 U.S. at 416, 91 S.Ct. 814. Although “searching and careful, the ultimate standard of review is a narrow one. The court is not empowered to substitute its own judgment for that of the agency.” Id. The Court will instead look to see if an agency has “examine[d] the relevant data and articulate[d] a satisfactory explanation for its action,” Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43, 103 S.Ct. 2856, 77 L.Ed.2d 443 (1983), and “may not supply a reasoned basis for the agency’s action that the agency itself has not given.” Bowman Transp., Inc. v. Ark.-Best Freight Sys., Inc., 419 U.S. 281, 285-86, 95 S.Ct. 438, 42 L.Ed.2d 447 (1974). The Court must determine whether “the procurement official’s decision lacked a rational basis,” Impresa Construzioni Geom. Domenico Garufi v. United States, 238 F.3d 1324, 1332 (Fed.Cir.2001) (adopting APA standards developed by the D.C. Circuit); see also Delta Data Sys. Corp. v. Webster, 744 F.2d 197, 204 (D.C.Cir.1984).

Because of the deference courts give to discretionary procurement decisions, “the disappointed bidder bears a ‘heavy burden’ of showing that the award decision ‘had no rational basis.’ ” Impresa Construzioni, 238 F.3d at 1333 (quoting Saratoga Dev. Corp. v. United States, 21 F.3d 445, 446 (D.C.Cir.1994)). In particular, the evaluation of proposals for their technical excellence or quality is a process that often requires the special expertise of procurement officials, and thus reviewing courts give the greatest deference possible to these determinations. See E.W. Bliss Co. v. United States, 77 F.3d 445, 449 (Fed.Cir.1996); Arch Chems., 64 Fed.Cl. at 400; Gulf Group, 61 Fed.Cl. at 351; Overstreet Elec. Co. v. United States, 59 Fed.Cl. 99, 102, 108, 117 (2003). Challenges concerning “the minutiae of the procurement process in such matters as technical ratings ... involve discretionary determinations of procurement officials that a court will not second guess.” E.W. Bliss, 77 F.3d at 449. “Naked claims” of disagreement with evaluations, “no matter how vigorous, fall far short of meeting the heavy burden of demonstrating that the findings in question were the product of an irrational process and hence were arbitrary and capricious.” Banknote Corp. of Am. v. United States, 56 Fed.Cl. 377, 384 (2003).

The presence (by the government or intervenor) or absence (by the protester) of any rational basis for the agency decision must be demonstrated by a preponderance of the evidence. See Gulf Group, 61 Fed.Cl. at 351; Overstreet, 59 Fed.Cl. at 117; Info. Tech. & Appl’ns Corp. v. U.S., 51 Fed.Cl. 340, 346 (2001) (citing GraphicData LLC v. United States, 37 Fed.Cl. 771, 779 (1997)), aff’d, 316 F.3d 1312 (Fed.Cir.2003).

An additional ground for a contract award to be set aside is when the protester can show that “the procurement procedure involved a violation of regulation or procedure.” Impresa Construzioni, 238 F.3d at 1332. This showing must be of a “clear and prejudicial violation of applicable statutes or regulations.” Id. at 1333 (quoting Kentron Haw., Ltd. v. Warner, 480 F.2d 1166, 1169 (D.C.Cir.1973)).

    B. Fort Carson Support Services’ Protest

As was recounted above, FCSS was initially awarded the contract at issue after the first round of evaluations. See AR Tab 77. At that time, the PPE ratings of all offerors were considered equal, and the SSA focused on the Management subfactor evaluation and cost. Id. at 2. No other offeror equaled FCSS’s rating of “Good” for the subfactor, or its “Low” proposal risk rating, and its MPC was the lowest of all offerors. See id. at 2-5. The low MPC resulted from the Army’s decision to adjust FCSS’s estimated costs to reflect only a thirty-nine employee shortfall, although the FCSS proposal was short a net seventy-eight full-time equivalents (FTEs) relative to the IGE. AR Tab 76 at 34; see also AR Tab 67 (comparing to IGE the staffing proposed by FCSS).

In its final proposal, FCSS made modest changes to its staffing levels. Even though the base period for the contract was shortened from one year to four months, the Most Probable Cost of the FCSS proposal, as calculated by the Army, increased by $4.3 million over the prior determination. The bulk of this increase was due to the treatment of “Other Direct Costs,” a category of costs that had been completely removed for all offerors in the first round. The MPC and variance calculations the second time around were based on a net understaffing of 72.5 FTEs, compared to the thirty-nine FTE figure used in the initial evaluation. See AR Tab 211 at 79. Two additional PPE evaluations of one of the partners in the FCSS joint venture were considered in the final evaluations. See AR Tab 205. The SSEB decided these contained “good ratings” but were of “low relevance.” AR Tab 211 at 74. But FCSS’s PPE risk rating fell from “Low” to “Moderate” for two of the four elements, and was no longer described as “Low,” but rather as “Low/Moderate,” for the PPE subfactor overall. See AR Tab 214 at 4; AR Tab 211 at 75-76. In the best value trade-off, KIRA was awarded the contract over FCSS, on the strength of its edge in the PPE subfactor, and in light of the larger variance between FCSS’s proposed and probable costs. AR Tab 214 at 4-5.

Fort Carson Support Services challenges the award to KIRA on several grounds. First, it contends that under the Solicitation criteria, its proposal was the highest rated as well as lowest priced and thus it should have been awarded the contract. Second, it argues that the Army failed to articulate a rational basis for preferring the KIRA proposal. Third, it disputes the Army’s evaluations of its proposed staffing levels, its past performance and experience, and the cost of its proposal. And finally, it claims that the Army failed to engage in meaningful discussions concerning its proposal.

    1. Was the FCSS Proposal, as Evaluated, Entitled to the Contract Award?

The first argument of FCSS is that a best value trade-off was neither necessary nor permitted, as it claims to have received the highest score for Quality, and it proposed the lowest price. FCSS Mot. at 24-31. This sort of argument could, perhaps, have been avoided had the Army followed a source selection plan in which an overall rating was applied to the Quality factor. But instead, the Quality evaluation was “a composite evaluation using all of the subfactors and elements associated with this factor” that “evaluates the extent to which an offeror’s proposal demonstrates a sound, practical methodology for meeting or exceeding the overall requirements associated with the PWS.” Solicitation § M.2.2, AR Tab 7 at 49. Unlike the two subfactors and their elements, the evaluation of the Quality factor did not result in a particular rating, label, or category. This methodology has thus enabled FCSS to argue that the point of the evaluation process was to slap one-word labels on the subfactors for each proposal, which can then simply be compared by the SSA to determine the highest quality proposal.

    Thus, FCSS focuses on the one-word ratings generated by the evaluation process. In the second evaluation, FCSS’s RFP received an overall adjectival rating of “Good” and proposal risk rating of “Moderate” for the Management subfactor; KIRA and TGG’s RFPs each received the adjectival rating of “Satisfactory” and risk rating of “Moderate.” AR Tab 214 at 2-3. For the PPE subfactor, FCSS received an overall performance risk rating of “Low/Moderate,” while KIRA and TGG each received a “Low” rating. Id. at 4. But since the Solicitation provides for just three distinct performance risk ratings22— “Low” when there is “little doubt” of successful performance, “Moderate” when there is “some doubt,” and “High” when there is “substantial doubt,” AR Tab 7 at 53 — FCSS argues that this hybrid rating is improper and should be considered a “Low.” FCSS Mot. at 26.23 If FCSS’s PPE rating were “Low,” the argument goes, then all three offerors were tied for the PPE subfactor, and FCSS would have the highest Quality rating by virtue of its Management subfactor rating of “Good.”

    The “Low/Moderate” performance risk rating was the result of the SSEB’s determination that two PPE elements warranted a “Low,” while the other two elements merited just a “Moderate.” See AR Tab 211 at 75-76. Although the Solicitation instructed that its rating descriptions “may not precisely describe the assessment of a specific ... element” and “[i]n that case, evaluators will assess the rating with the description most closely describing the actual assessment,” AR Tab 7 at 52, the SSEB avoided deciding whether the overall performance risk was closer to “some doubt” or “little doubt,” and attempted to split the difference. Fort Carson Support Services would place emphasis on the words “description most closely describing,” in essence arguing that the SSEB and SSA were required to generate a single label for their risk rating of this subfactor. But the last three words, “the actual assessment,” appear to be the more important. In the course of the evaluation process, the Management subfactor and elements would “receive an overall adjectival and proposal risk assessment rating,” and the PPE subfactor and elements would “receive a performance risk rating.” Solicitation § M.2.2, AR Tab 7 at 49. But these ratings are not themselves the actual evaluation or assessment, but merely tools employed in the assessment. If the purpose of the evaluation were to generate a label used in comparing proposals, there would be no need for the SSA to utilize a “tradeoff process among the factors, subfactors, and elements,” Id. § M.1.1, AR Tab 7 at 48 (emphasis added), to determine the best value; there would be no need to delve below the subfactor scores.

    But in the evaluation process established for this procurement, the elements matter. The Source Selection Plan required that “[t]he SSA shall utilize all evaluation reports and analysis in determining the successful proposal,” SSP § IV.B(1), AR Tab 34 at 14, not that he shall use just the ratings, let alone just the ratings of the subfactors. In this context, neither the SSEB nor the SSA can be faulted for failing to pigeonhole the FCSS PPE as presenting either a “Low” or “Moderate” risk, any more than a university would be remiss in not identifying a student receiving two As and two Bs as either an A or a B student. Someone evaluating the student’s performance would see that the record is evenly split between As and Bs, just as the SSA would see that the equally weighted elements of the PPE were evenly split between “Low” and “Moderate” risk. But even if FCSS is correct that its PPE performance risk rating should be considered a “Low” for purposes of the Source Selection Decision, this does not compel the conclusion that its proposal was the highest quality of the three. Once again, the fixation on the labels obscures the real evaluation process.

    As the SSEB Chairman noted in his PA Report:
    [T]he descriptions of the various performance risk assessments are very broad. As a result, offerors which have received the same assessment may not be equal in regard to PPE, and, as a result, the supporting documentation may provide additional relevant insight, if necessary.

    AR Tab 209 at 5. The SSA explained that his “determination of the best value awardee included consideration of both the stated weights of the evaluation factors and subfactors and the large range of the various rating definitions.” AR Tab 214 at 3 (emphasis added). In other words, proposals of varying quality might be described as offering “no particular advantages to the Government but ... fully meet[ing] the ‘minimum’ requirements of the RFP,” or as involving an approach with “an acceptable probability of meeting the requirements,” to take the “Satisfactory” adjectival rating as an example. See Solicitation § M.3.1, AR Tab 7 at 52. And the difference between some of these proposals, and the “Good” ones for which it can be said that “some advantages to the Government” are offered, or that they embody an approach with “a high probability of meeting or exceeding the requirements,” might be small. See id. This seems particularly true for the performance risk ratings, such as the “Low” risk resulting when “little doubt exists that the offeror will successfully perform the required effort.” See Solicitation § M.3.3, AR Tab 7 at 53.

    Thus, the evaluation process cannot simply be boiled down to whether two proposals received the same rating for one subfactor, and one was a grade higher in one of the two ratings used for another subfactor. To be sure, it would be a different matter if the SSA had determined that the FCSS proposal was a full category better than KIRA’s in the adjectival and proposal risk ratings for the Management subfactor and in the overall performance risk rating. Under such circumstances, it would be arbitrary and inconsistent to judge the KIRA proposal to be the highest quality. But those circumstances are not presented here. In the Source Selection Decision, to look just at the labels for the moment, it was determined that KIRA’s evaluated PPE risk of “Low” exceeded the value of FCSS’s risk of “Low/Moderate” (or “Low,” if you will) by more than the FCSS Management subfactor evaluation of “Good” with “Moderate” risk exceeded the KIRA evaluation of “Satisfactory” with “Moderate” risk. Is this, as FCSS argues, necessarily incorrect?

    To return to the university example, the use of broad rating categories, without any greater refinements, is like a grading system that uses solely the letter grades A, B, C, D, and F. If an A were earned by a final exam score of 93 or higher, and a B by a score from 86 to 92, then the difference between an A and a B could be smaller than the difference between two Bs — say, if the corresponding scores were 93, 92, and 86. A person who reviews the underlying exam scores will evaluate academic performance differently than a person who lacks access to the raw scores.
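
    Stated numerically: the one-point gap between the 93 and the 92 straddles the A/B line, while the six-point gap between the 92 and the 86 lies entirely within the B range (93 − 92 = 1; 92 − 86 = 6). The letter grades alone would report the smaller difference as a full category and the larger difference as no difference at all.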

    Or, to put it another way, since the evaluation process for this procurement did not involve mathematical scoring, consider the use of plus/minus in grading. Without plus/minus, a high B and a low B are seen as the same by one perusing a transcript, and would both appear to be equally below an A. In an evaluation calculating grade-point averages, the A merits a 4.0 and the B a 3.0. But if the grades are further refined, using one common approach to pluses and minuses that adds or subtracts three-tenths of a point, it is clear that some Bs may be closer to As than to other Bs: a B+ receives a value of 3.3, which is only four-tenths of a point below the value of an A− (3.7), but is six-tenths of a point higher than a B− (2.7).
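
    Under that convention, the relevant values are A = 4.0, A− = 3.7, B+ = 3.3, and B− = 2.7, so the gap across the letter boundary (3.7 − 3.3 = 0.4) is smaller than the gap within the B range (3.3 − 2.7 = 0.6).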

    To translate the example to our case, it is not inconsistent with the Solicitation evaluation criteria for the SSA to decide that a KIRA A+ compared to an FCSS A− outweighs the difference between an FCSS A− and a KIRA B+. Of course, the evaluation process did not use rating distinctions similar to a plus or minus, nor is there a way of mathematically comparing a “Good” adjectival rating with a “Low” performance risk rating. But the evaluation did involve the SSA using more than the one-word labels — the equivalent of letter grades — to determine which proposal merited the highest Quality factor assessment. He may not have been given pluses and minuses, but he did not need them, as he had the benefit of the equivalent of the exam scores (or, perhaps more analogously, term papers with the professor’s comments) to use in assessing the proposals.

    Concerning past performance, the SSA specifically stated that he was “[l]ooking beyond the [PPE subfactor] ratings to the substantive information” in determining that KIRA’s PPE was “well above the other two.” AR Tab 214 at 4. He based this determination on KIRA’s “experience as a prime and the higher quality of the ratings by its references.” Id. The SSA noted that FCSS “submitted information showing experience for each of its two joint venturers as a prime, although of considerably less magnitude and narrower scope than this contract.” Id. at 3. He found that FCSS’s and TGG’s “experiences as primes were significantly less relevant than [KIRA’s], and [FCSS’s] ratings were considerably more mixed than either of the other two offerors,” and concluded that KIRA had a “substantially better PPE than” FCSS. Id. at 4.

    Likewise, the SSA looked beyond the adjectival ratings to determine that KIRA’s Management proposal was “at the high end of satisfactory,” and that FCSS’s Management proposal was “only slightly better than [KIRA]’s.” AR Tab 214 at 4. Moreover, the SSA determined that although FCSS’s Management proposal was “better written” than the other two offerors’ because of FCSS’s “distinctly better maintenance plans,” “this advantage was undercut by its significant understaffing,” which “raised [the] question of the credibility to be afforded [FCSS’s] Management proposal ... i.e., whether [FCSS] really understood the contract requirements.” AR Tab 214 at 4. The net result of the SSA’s analysis concerning the Quality factor was that “[KIRA]’s substantially better PPE than [FCSS]’s more than offset [FCSS]’s slightly better Management subfactor proposal.” Id. The Court does not find this conclusion to be inconsistent with the Solicitation’s evaluation criteria. As was explained above, the rating categories themselves are broad enough so that the difference between a low “Good” and a high end “Satisfactory” may well be exceeded by the difference between two Past Performance and Experience ratings of “Low” risk. The SSA “articulate[d] a satisfactory explanation for” his decision, Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856, and the Court accordingly rejects the argument that FCSS received the best Quality factor evaluation of the three offerors.

    2. Did the Army Adequately Articulate a Basis for the Award to KIRA?

    In its first ground for protest, discussed above, FCSS erred in positing an evaluation process in which the ultimate decisionmaker could not see through the various labels attached to aspects of proposals, but instead was forced to consider all ratings with the same label to be equal. As part of its argument for its second ground for protest, FCSS errs in the opposite direction, by placing undue attention on the raw number of strengths and weaknesses identified for each proposal. In arguing that it was irrational for the Army to have preferred the KIRA proposal to its own, FCSS relies on the SSEB’s caucus determinations that KIRA’s final proposal contained seventeen strengths, one uncertainty, and nineteen weaknesses (including two significant weaknesses), see AR Tab 203, while its own final proposal contained eleven strengths, three uncertainties, and two weaknesses (including one significant weakness). See AR Tab 204; FCSS Mot. at 31-33.

    When the SSEB briefed the SSA on the results of its FPR evaluations, the strength and weakness gaps between the two offerors narrowed. Considering the strengths and weaknesses the SSEB thought were notable enough to be included in the presentation, KIRA had thirteen strengths to FCSS’s twelve, and five weaknesses to FCSS’s two (including one significant weakness for each). See AR Tab 211 at 23-33, 63-73. In the PA Report, the Chairman of the SSEB explained that “these categories [of strengths and weaknesses] are all very broad and that, as a result, the listings within them are not of equal value or importance,” and that “no attempt has been, or should be, made to establish ratings or risk assessments based solely on the numbers listed in each such category.” AR Tab 209 at 3. After citing these statements of the Chairman, FCSS nonetheless argues that “the total numbers must have significance, otherwise the entire evaluation process ... would be meaningless.” FCSS Mot. at 33.

    But the reason that strengths and weaknesses are identified, and must be documented in an evaluation, see 48 C.F.R. § 15.305(a), is to help ensure that the evaluations are thoughtful. When evaluators focus on the strengths, weaknesses, significant weaknesses, and deficiencies of proposals, they are better able to classify and articulate the quality of a proposal. Documentation of this adds transparency to the process, in two ways: it provides evidence that each proposal was carefully considered, and provides a reviewing court with some decisions that are verifiable (in circumstances when the description of a strength or weakness may be flatly contradicted by the portion of the proposal it references). The identification of strengths or weaknesses, however, does not convert the evaluators’ subjective judgment into some objective fact that may be disproved in court, nor does it result in a product that can be mechanically summed or subtracted. To be sure, an agency could probably adopt an evaluation process in which certain strengths and weaknesses are given positive or negative value in advance, and the evaluators merely check off the number of positive or negative features present in a proposal, and total these up to reach their conclusion. This was not the process adopted here.

    The numbers that FCSS points to simply cannot be used in the way it suggests. There is no reason why the “total numbers must have significance,” as FCSS argues, any more than the total number of words used to describe an evaluation is intrinsically significant. And in any event, even if one could view all weaknesses and all strengths to be of equal value, the mere numbers do nothing to advance FCSS’s case. These strengths and weaknesses all relate to the Management subfactor. See AR Tabs 203, 204, 211. But no one denies that FCSS had the highest rated offer under the subfactor — an adjectival rating of “Good” with “Moderate” risk, compared to the adjectival ratings of “Satisfactory” and “Moderate” risk for the other two offerors. See AR Tab 214 at 3; see also AR Tab 211 at 32, 52, 72. The fact that FCSS had the highest rating for the Management subfactor does not render the award decision irrational, as the PPE subfactor and its elements carried the same weight as the Management subfactor and its components. See AR Tab 214 at 1; AR Tab 7 at 48.

    The SSA’s decision was not based on which offeror had the best Management subfactor rating, but considered the overall Quality factor, taking into account the subfactors and their elements, relative to the less important Cost factor. AR Tab 214 at 4-5. Concerning this decision, FCSS argues that the Army failed to adequately articulate the basis for its best value tradeoff. FCSS Mot. at 35-39; FCSS Reply at 36-37. Portions of the post-award debriefing by the Contracting Officer are quoted to demonstrate the lack of an adequate articulation of the tradeoff decision. See FCSS Mot. at 35-38. But while such debriefings might, on occasion, yield useful party admissions, any new explanations of the evaluation process made post-award are the sort of “post hoc rationalizations” that courts reject when offered by the government in bid protest cases. See Orion Int’l Techs. v. United States, 60 Fed.Cl. 338, 343 (2004) (quoting Motor Vehicle Mfrs. Ass’n, 463 U.S. at 50, 103 S.Ct. 2856). The place to look for the required rationale is the Source Selection Decision Document.

    In the SSDD, the Source Selection Authority provides a summary of the proposal evaluations based on the relevant factors identified in the Solicitation, and a supporting narrative, see AR Tab 214, as the Federal Acquisition Regulations (FAR) require. See 48 C.F.R. § 15.305(a)(3). The SSA’s rationale for his business judgment and tradeoff decision is also supplied. After explaining that KIRA’s “substantially better PPE than [FCSS]’s more than offset [FCSS]’s slightly better Management subfactor proposal,” the SSA turned to an analysis of the Cost factor. AR Tab 214 at 4. He noted that the $11 million difference between the proposed costs of KIRA and FCSS was “mostly attributable to [FCSS]’s understaffing.” Id. He looked at the Most Probable Cost for the two proposals, as mandated by the FAR, see 48 C.F.R. § 15.404-1(d)(2)(i), and determined that KIRA’s $700,000 higher MPC was a “minor difference over ten years and in relation to the overall cost of over $200 million is essentially insignificant, certainly not enough to offset [KIRA]’s advantage in the Quality factor.” AR Tab 214 at 4.

    Fort Carson Support Services argues that the Army failed to “articulate what it gained in Quality by awarding to KIRA.” FCSS Mot. at 35; see also id. at 39. But the SSA plainly explained that FCSS’s “overall manpower shortage ... was considered detrimental,” AR Tab 214 at 4, and that he found KIRA’s proposal preferable, despite KIRA’s “significant staffing issue,” because KIRA’s “proposal for the first full contract year showed reasonable staffing, although with some shortfalls, thus allowing a conclusion that its technical approach was credible.” Id. at 4-5. The SSA’s confidence in the staffing level proposed by KIRA for the first full year of performance, and KIRA’s “substantially” smaller variance between proposed and probable costs, relative to FCSS, were the benefits received by the Army in return for the higher price. This explanation satisfies the FAR, as “documentation need not quantify the tradeoffs that led to the decision.” 48 C.F.R. § 15.308 (2005). The record contains an adequate, rational basis for the award to KIRA, and this ground for protest is accordingly denied.24

    3. The Army’s Evaluation of the FCSS Proposal

    After placing undue emphasis on the labels used in the evaluation process, and then on the raw numbers of strengths and weaknesses identified by evaluators, FCSS in its third ground for protest finally gets around to issues that are central to a court’s review of procurement decisions. Fort Carson Support Services argues that the Army’s evaluation of its proposal, in several respects, was arbitrary and capricious, and unsupported by the record. In particular, it challenges the Army’s determinations relating to its staffing levels, FCSS Mot. at 39-53; the Army’s PPE evaluation, id. at 53-60; and the Army’s Cost factor analysis. Id. at 60-68.

    Review concerning these issues entails identifying the judgments made by the relevant officials and verifying that the relevant information was considered, the relevant factors were employed, and a satisfactory explanation was articulated. See, e.g., Overton Park, 401 U.S. at 416, 91 S.Ct. 814; Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856. In this regard, the Court rejects the argument made by KIRA that the Army’s failure to follow the Source Selection Plan would not be a valid basis for a bid protest. See KIRA Br. at 30-31. To the contrary, the SSP provides operating rules for the evaluators, to make the process more transparent and less ad hoc. While the Army adopted this plan and, as an agency, is free to change it, the subordinate officials whose activities are controlled by an SSP do not have the power to deviate from it. Unless an element of the SSP is expressly waived by an official with the power to do so, via a valid, articulated reason, the failure to follow that element is, by its very nature, an arbitrary act.

    If FCSS can prevail on any of these issues despite the great deference courts must give to agency evaluators of past performance or quality, see E.W. Bliss Co., 77 F.3d at 449; Gulf Group, 61 Fed.Cl. at 351, then the Court must determine whether the arbitrary action was prejudicial to FCSS. Bannum, 404 F.3d at 1351-54. On the factual question of prejudice, the method of evaluation employed by the Army might be problematic. Under a method that relies on numerical scores, the prejudice determination is as simple as comparing the points affected by an arbitrary decision to the spread between the protester and winning bidder. See, e.g., Beta Analytics Int’l, Inc. v. United States, 67 Fed.Cl. 384, 407-08 (2005). The narrative approach followed here, however, makes the government vulnerable to a finding of prejudice whenever a decision that appeared to have influenced the ultimate decision of best value is found arbitrary — that is, is found to rely on false statements, is contradicted by the evidence, violates procurement rules, etc. But the Army accepted this risk in employing this particular evaluation process.

    a. Staffing level evaluation.

    First, FCSS argues that the Army incorrectly evaluated its staffing levels and failed to properly credit it for proposed innovations. In making this argument, FCSS charges the government with failure to provide a reason for “reducing” the size of the credit for staff positions saved due to innovations that its proposal received in the initial evaluation. FCSS Mot. at 49. It is true that the Army did not specifically identify positions that it no longer felt could be credited against FCSS’s net staffing shortfall, in performing the evaluation of the final proposals. But this is because the initial decision to credit FCSS with several dozen FTEs was not itself a reasoned decision.

    In the first round of evaluations, the IGE prepared by the Army determined that 263 employees were needed to perform the contract, spread throughout various classifications. See AR Tab 52(1). The Army then compared the number of employees proposed by the offerors, in each classification, to the IGE to calculate understaffing and overstaffing. See AR Tabs 65 (KIRA), 66 (TGG), 67 (FCSS). It was determined that the initial proposal of FCSS was understaffed, relative to the IGE, by seventy-eight positions. AR Tab 67 at 2.

    When this information was originally presented to the SSA by the SSEB, the number of positions proposed by FCSS appears to have been inaccurately identified as 182, as opposed to the 185 used in the IGE comparison. Compare AR Tab 69 at 36 (July 26, 2004 SSA Briefing) with AR Tab 67 at 2. The net understaffing, identified as a “variance,” consequently rose to eighty-one positions. See AR Tab 69 at 36. But the number of additional employees used in the MPC calculation was forty-seven, which purportedly resulted in an increase of $1,645,000 for the base year. Id. Since the FCSS proposal that was understaffed by a net eighty-one positions was adjusted only for a shortfall of forty-seven, the SSEB appeared to be implicitly crediting FCSS with thirty-four FTEs.

    Two weeks later, the SSA was briefed again by the SSEB, this time resulting in the decision to award a contract to FCSS without discussions. See AR Tab 76. In these briefing materials, FCSS was said to have proposed 185 employees, consistent with the number used in the IGE comparison. Id. at 34. The difference between this staffing level and the IGE, or “variance,” was then calculated to be seventy-eight — the difference between the 263 FTEs determined in the IGE and the number proposed by FCSS. Id. But the SSEB also calculated that 224 employees were needed for FCSS to perform the contract, apparently as part of the Army’s MPC calculation. Id. The resulting variance was thus just thirty-nine FTEs. This thirty-nine employee shortfall was then used to calculate the MPC adjustment to the FCSS proposed base period cost, which included the “[a]ddition of 39 employees @ $2,763,359.” Id.; see also AR Tab 68 at 1.
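
    Reduced to figures, the two briefings ran as follows: in the first, 263 FTEs (IGE) − 182 FTEs (the misstated proposal figure) = 81 FTEs of understaffing, against an MPC adjustment of 47 FTEs, for an implicit innovations credit of 81 − 47 = 34 FTEs; in the second, 263 − 185 = 78 FTEs of understaffing, against an adjustment of 39 FTEs, for an implicit credit of 78 − 39 = 39 FTEs.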

    The briefing materials do not explain how the FTE shortfall was reduced from seventy-eight to thirty-nine — or, in other words, how a thirty-nine employee credit was derived for the FCSS proposal.25 The SSEB did state that “[t]he offeror is somewhat low in manpower compared to the IGE, but the reduction is spread throughout the functional areas. With all the other innovations in this proposal, this shortage may be overcome resulting in a savings to the Government.” AR Tab 76 at 30.26 A number of “innovative ideas” and efficiencies are identified in the briefing materials, see id. at 30-31, but these are not associated with any particular staffing savings. No PA Report was prepared by the Chairman at that stage, to explain these decisions. See AR Tab 209 at 1. In the initial SSDD, the SSA noted that FCSS “presented several innovative approaches to reduce costs and improve efficiencies,” and identified two of these. AR Tab 77 at 3. The SSA reiterated that FCSS’s “[o]verall staffing also seemed somewhat low,” but explained that “this weakness was discounted somewhat by the fact that the possible shortage was spread throughout the functional areas, and the SSEB felt that it could be at least partially offset by the innovative approaches proposed.” Id.

    In the Price Negotiation Memorandum memorializing the decision to award the contract to FCSS, the Army explained the MPC methodology used in the initial evaluation: “The process tended to normalize the proposals to the IGE [for labor costs] except in the case where [FCSS] proposed efficiencies with convincing discussion which will lead to labor reductions and cost savings.” AR Tab 78 at 4. Discussing FCSS’s proposal, this memorandum reiterated nearly verbatim the language from the SSDD concerning the innovations, efficiencies, and the staffing shortfall, explaining that the latter “may be offset by the innovations proposed.” Id. at 7. The FCSS staffing credit is discussed under the “Cost” subheading: “The Management Committee determined [FCSS] would have to increase their staffing by an overall 39 FTEs based on the proposed efficiencies, in lieu of the full 78 variance compared to the IGE.” Id. No further explanation is given, and there appears to be no documentation of this decision of the SSEB.27 Six months later, during the SSEB’s caucus discussing the FCSS final proposal, when asked how the innovations credit number was initially determined, the Contracting Officer stated, “the labor savings connected with PDA’s comprised the biggest portion of the credit.” AR Tab 189 at 8.

    After offerors in the competitive range were warned that “proposed staffing reflected substantial variances” that “included large numbers of instances of understaffing and a much smaller number of instances of overstaffing,” AR Tabs 115-17, and informed through Amendment 10 that “the Government will consider any innovative ideas, but proposals should clearly identify and explain ... any cost savings that rely upon those innovations,” AR Tab 149, FCSS submitted its FPR. In this final offer, FCSS did add additional language relating to proposed innovations. A paragraph, for instance, was added to the Executive Summary concerning the “one man and a truck” concept, and the use of communications and management tools that are “productive of cost savings,” although the number of staff hours saved is still deemed “countless.” AR Vol. VIII at revised 3-4. A few sentences are added to the “Work Reception & Dispatch Desk” (“WR & DD”) discussion concerning these same innovations. Id. at revised 24. A couple of lines are added concerning the use of MAXIMO Mobile Suite and Nextel TeleNavtrack, see id. at rev. 38, 38a, 39, and a new chart identifies the various phases in which these technologies are to be deployed. Id. at rev. 38b. None of these changes appear to explain any corresponding cost savings or labor force reductions.

    The FPR also adds a sentence that explains that FCSS “will significantly increase grounds maintenance efficiency and lower costs by preparing the battlefield/grounds prior to each season’s workload, and by working with tenant units to coordinate efforts.” AR Vol. VIII at rev. D-4. Corresponding to PWS C.5.4.2.1.1, regarding grounds mowing and trimming, FCSS elaborates that it can “accomplish the requirements with significantly fewer FTEs than are currently being used” by removing rock piles and rotting landscaping timber, “thoroughly prepar[ing] working areas annually,” and conducting “time and motion studies.” Id. at rev. D-9. Thirty-two pages of spread sheets are added to Appendix C to Section II of the RFP, which FCSS explained as “its workload takeoff and estimating process, figures, assumptions and full-time-equivalent (FTE) staffing computations.” App. to FCSS Mot., Ex. C at 5 (describing AR Vol. VIII App. C at rev. 13-44). And FCSS explains in greater detail the process it and its subcontractor followed in estimating the number of employees needed. AR Vol. VI Tab 6 at rev. 1-6.

    After reviewing the new material in the FCSS proposal, the evaluator responsible for assessing the staffing levels believed that FCSS “did not show their manpower savings from innovative approaches like [KIRA].” AR Tab 189 at 1. In its evaluation of the final proposals, the Army credited FCSS with three FTEs for innovations — two for carpenter positions, and one for a plumber. See AR Tab 208 at 2-3. A credit of one-half of an FTE was also given to FCSS for the unexplained “home office effort.” AR Tab 211 at 79. KIRA received credit for 6.3 FTEs due to innovations, see AR Tab 211 at 39, although the MPC calculation breakdown shows innovation credits totaling 6.6 FTEs — two for grounds maintenance laborers, one-half for a truck driver, two for carpenters, three-tenths each for an electrician and for a stationary engineer, and 1.5 for the category comprising plumber, steamfitter and pipefitter positions. See AR Tab 206 at 1-2; see also Tab 197 (chart showing MPC adjustment for KIRA as 6.6 FTEs for innovations). According to the minutes from the SSEB’s final caucus concerning the Management subfactor, “[t]he evaluator reviewing the staffing stated that the credits against the IGE for innovations equate[d] to fewer for each offeror than originally expected.” AR Tab 196 at 2. The minutes then note the decision to give FCSS a credit of three FTEs, but give a slightly different total for KIRA of 6.5 FTEs. Id.
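
    The itemized credits in the MPC breakdown do sum to the 6.6 figure: 2 (grounds maintenance laborers) + 0.5 (truck driver) + 2 (carpenters) + 0.3 (electrician) + 0.3 (stationary engineer) + 1.5 (plumber, steamfitter, and pipefitter) = 6.6 FTEs. The 6.3 and 6.5 totals appearing elsewhere in the record thus appear to be minor discrepancies of reporting rather than of the underlying itemization.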

    In the PA Report, the SSEB Chairman explains that “[t]he labor portion of the Most Probable Cost (MPC) was calculated for each offeror based on the analysis of its proposed manning levels in comparison to the IGE and included recognition of any savings that the SSEB considered justified in the proposal as resulting from proposed innovations.” AR Tab 209 at 5. He described the adjustment for KIRA as requiring the addition of five FTEs above the IGE to reflect extra positions needed due to KIRA’s “proposed methodology of performance,” and identifies the innovations credit received by KIRA as “6.3 man-years.” Id. at 6. The Chairman also notes that KIRA’s claim of staff savings due to efficiencies in option years two through nine were rejected by the SSEB as “not supported by documentation in the proposal.” Id. Had this claim been accepted, the additional credits would have grown to 25.7 FTEs by option year 7. See id. Regarding FCSS, the Chairman confirmed that it “was credited with 3 man-years for proposed innovations and .5 man-years for home office effort.” Id. at 8.

    In the second round of evaluations, the IGE was based on the determination that employees totaling 258 FTEs were needed. See AR Tab 172(1) at 3; AR Tab 197. Due to the innovations and home office credit of 3.5 FTEs, and based on its proposed workforce totaling 182 man-years, the net understaffing of FCSS was calculated to be a variance of 72.5 FTEs. AR Tab 211 at 79. The KIRA figure was a shortfall of 30.7 FTEs. Id. at 39. The Source Selection Authority concurred with this analysis. AR Tab 214 at 2. In the SSDD, he discussed the history of the determination of FCSS’s innovations credit:

    In the first evaluation, both the SSEB and I gave [FCSS] a substantial credit in manpower based on its assertion of savings anticipated from proposed innovative methods of performance. However, even so, [FCSS] still had a significant manpower shortage, which was pointed out to it in discussions. Further, both in discussions and in later amendment to the Request for Proposals, all of the offerors were strongly cautioned that they needed either to clearly document any claimed savings from innovations or to raise their staffing levels in the areas the Government pointed out as low. In the second round of evaluation, the SSEB reassessed [FCSS]’s essentially still undocumented claims of innovation savings and gave them a minimal credit.

    AR Tab 214 at 4.
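
    The arithmetic of the second-round variance follows directly from these figures: 258 FTEs (revised IGE) − 182 FTEs (proposed) = 76 FTEs, which, less the 3.5 FTEs credited for innovations and home office effort, yields the net understaffing of 72.5 FTEs. See AR Tab 172(1) at 3; AR Tab 211 at 79.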

    The Court concludes that the record of the procurement decision demonstrates that the Army’s evaluation of staffing levels was not arbitrary. In the process of compiling the IGE for the work to be performed under the Solicitation, the Army initially concluded that employees totaling 263 FTEs were needed. See AR Tab 52(1). It later revised this figure to 258 FTEs. See AR Tab 172(1). In reviewing the manpower proposed by the offerors relative to the level used in the IGE, the Army credited FCSS and KIRA for staffing positions saved due to innovative approaches to the work, and specifically identified the positions credited. AR Tabs 206, 208. These credit determinations were adopted by the SSEB after consensus meetings, and were used by the SSA in making his best value determination. See AR Tab 209 at 6-8; AR Tab 211 at 39, 79; AR Tab 214 at 4-5.28

    The staffing level computations are attacked by FCSS in a number of ways. As was noted above, its primary argument appears to be that the Army did not explain why its innovation credit figure was reduced from the initial evaluation. This argument would have some force, were the initial decision itself properly documented and articulated. Had the SSEB initially determined, or relied upon a document which determined, that FCSS’s proposed method would need X fewer employees in one classification, Y fewer in another, Z fewer in a third, etc., then an evaluation of the final proposal that dropped out all discussion of these staff savings could well reflect an arbitrary action. Unfortunately for FCSS, the situation presented in the record is the reverse: it was the second decision that was supported by documentation, not the initial evaluation. And the second staffing and innovation determinations were the ones that influenced the final decision.

    In making these determinations, the Army considered the relevant information — it made an IGE, compared the staff levels to those proposed by the offerors, and evaluated the offerors’ proposals to see whether credit for innovations should be given. The Army articulated a reason for the credits being the size they were — it felt that FCSS demonstrated the need for fewer FTEs for the positions of carpenter and plumber, see AR Tab 208 at 2-3, and that KIRA showed it could make do with fewer FTEs for the positions of grounds maintenance laborer, truck driver, carpenter, electrician, stationary engineer, plumber, steamfitter and pipefitter. See AR Tab 206 at 1-2. Fort Carson Support Services argues that the record should show the qualifications of the people making these decisions, precisely how they made them, and which proposed innovations were responsible for which credits. See FCSS Reply at 17-18. But such a level of detail is not needed to demonstrate that the decision was the product of the relevant information being consulted. While minutiae — such as a description of each individual judgment made by each evaluator concerning the relationship between each position for which an offeror was understaffed and the innovations that offeror proposed — could aid a court in second-guessing the technical judgments of evaluators, this is not the proper role of courts. See, e.g., E.W. Bliss, 77 F.3d at 449. Nor would it be proper, under the APA-style review employed in bid protests, for a court to reduce its deference to Executive branch technical judgments because of the alleged inexperience of evaluators.

    Fort Carson Support Services takes issue with the SSA’s conclusion that its innovation savings were “essentially still undocumented,” AR Tab 214 at 4, and points to the revisions in its proposal that discuss innovations. FCSS Reply at 18. But the language identified in the FPR does not attempt to quantify the results of any savings due to the innovations, and the spread sheet purporting to calculate the number of FTEs needed is just a spread sheet. See AR Vol. VIII App. C at rev. 13-44. The spread sheet may well demonstrate that FCSS and its subcontractor systematically and carefully calculated the number of employees they felt were needed to do the work, but when this number differed from the Army’s experience, no principle requires the Army to accept the former.

    But FCSS argues that if the Army relies on its experience, the result is necessarily irrational when a cost-plus contract is at stake. The Army penalized FCSS for believing it could do the work with fewer employees than were traditionally employed, and FCSS claims the alternative was to pad the proposed staffing numbers. Essentially, FCSS argues that the SSEB’s evaluation of the staffing variance was unreasonable because it would have been “more reasonable to posit that adding manpower just to bulk up its number ... is less rational than maintaining its lower staffing ... with the resultant lower costs.” FCSS Mot. at 50 (emphasis added). The standard for reviewing an agency’s decision in this Court, however, is not whether there exists a hypothetically “more reasonable” decision, but whether the decision actually made is reasonable in light of the record before the Court. See Camp v. Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 36 L.Ed.2d 106 (1973); Impresa Construzioni, 238 F.3d at 1332 (adopting APA standards developed by the D.C. Circuit); see also Delta Data Sys. Corp. v. Webster, 744 F.2d 197, 204 (D.C.Cir.1984). The record shows that the Army was not convinced that the methods of performance explained by FCSS warranted a credit for more than 3.5 FTEs. See, e.g., AR Tabs 196, 208, 209, 211. From the perspective of FCSS, perhaps it would not have been rational for it to propose positions it felt it did not need, even in light of the warning about understaffing that all offerors were given, and the identification of the specific work classifications for which its proposal was short. See AR Tabs 117, 147. But just because an agency disagrees with even a reasoned judgment of an offeror does not mean the agency is acting arbitrarily.29 Nor is it unreasonable for an agency to insist that the burden should fall on an offeror to prove to the agency’s satisfaction that a proposed method of performance will, in fact, require fewer staff. This process may not be ideal, and may put offerors in the difficult position of having to show that positions they cannot even imagine needing are not in actuality needed — but this does not make the process irrational.

    As was discussed above, Amendment 10 to the Solicitation informed offerors that “the Government will consider any innovative ideas, but proposals should clearly identify and explain ... any cost savings that rely upon those innovations.” AR Tab 149. A second round of scoring in a negotiated procurement is not per se unlawful, see Dismas Charities, Inc. v. United States, 61 Fed.Cl. 191, 204 (2004), and offerors were on notice that the Army intended to make the award after discussions. AR Tab 7 at 35 (referencing 48 C.F.R. § 52.215-1 Alternate I).30 Absent some irregularity in the re-evaluation that demonstrates a bad faith intent to alter the result, the mere fact that FCSS received a less favorable review does not render the review arbitrary. See Dismas, 61 Fed.Cl. at 202.

    Fort Carson Support Services also argues that it “was not required to show manpower savings, just cost savings.” FCSS Reply at 14. But in the discussions, the Army clearly explained, “you have to establish in your proposals that the numbers and types of people that you propose correlate accurately to the method and means you propose for performing this Contract.” Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 38. Moreover, the Solicitation explained that the Army would “evaluate the offeror’s methodology for managing the manpower ... required to perform all tasks described in the PWS and its application of personnel ... for implementation of the technical approach for performing the tasks described in the PWS.” Solicitation § M.2.2.1.1, AR Tab 7 at 49. And while FCSS may feel that it adequately demonstrated how it estimated the staffing level needs, by providing the aforementioned workload spread sheet, or that it did a better job than KIRA in demonstrating its manpower requirement, these are the sorts of judgments that receive the greatest deference possible. See, e.g., E.W. Bliss, 77 F.3d at 449. The argument by FCSS that it was irrational for KIRA to have received more innovation FTEs than FCSS did, in light of the Army’s rejection of KIRA’s claimed efficiencies for the out years, see FCSS Mot. at 34, is also unavailing. The Army’s decision that certain staffing savings were not documented by KIRA, see AR Tab 214 at 4, has nothing to do with its determination that others were documented and, if anything, helps demonstrate that the decision was a reasoned one. The Army’s evaluation of the staffing levels and innovations proposed was not arbitrary, capricious, or unlawful.31

    b. Past Performance and Experience evaluation.

    The next objection raised by FCSS concerns the Army’s evaluation of its PPE information. In the initial review of proposals, all offerors were assigned a “Low” risk rating for the PPE subfactor and each element of PPE. See AR Tab 76 at 16, 22, 27, 33, 38, 44, 46; AR Tab 77 at 2. After the Army “re-validate[d] PPE information,” as was announced in Amendment 10, see AR Tab 149 at 2, it determined that FCSS should receive only a “Moderate” risk rating for the Managerial and Cost Control elements of PPE, AR Tab 211 at 75, and an overall PPE risk rating of “Low/Moderate.” Id. at 76. Fort Carson Support Services argues that the worsening of its risk rating in this area was arbitrary and capricious, since the second evaluation was based on the same information as the first one. FCSS Mot. at 56. It also repeats its argument that the use of the hybrid “Low/Moderate” rating was improper. Id.32 But “[w]hen the Court considers a bid protest challenge to the past performance evaluation conducted in the course of a negotiated procurement, ‘the greatest deference possible is given to the agency.’ ” University Research Co. v. United States, 65 Fed.Cl. 500, 505 (2005) (quoting Gulf Group, 61 Fed.Cl. at 351). Evaluation of past performance is “within the discretion of the contracting agency and will not be disturbed unless it is unreasonable or inconsistent with the terms of the solicitation or applicable statutes or regulations.” Consolidated Eng’g Servs. v. United States, 64 Fed.Cl. 617, 637 (2005).

    Although FCSS argues that “re-validat[ing],” per Amendment 10, is different than reevaluating, see FCSS Mot. at 56 n. 54, the Army did not need to inform offerors of its intention to re-evaluate any portion of the proposals. Unless such an action was forbidden by law, by the Solicitation, by the FAR, or by the SSP, the Army was free to review proposals as many times as it felt it needed, so long as it treated all offerors fairly and equally. See Dismas, 61 Fed.Cl. at 202-03; cf. Beta Analytics Int’l, Inc. v. United States, 67 Fed.Cl. 384, 407 (2005) (holding evaluation arbitrary when only one offeror received a second evaluation). Moreover, the Court finds that this language placed offerors on notice that PPE was going to be reevaluated, for it explained “re-validat[ion]” as “including any new information that may be available for activities since the original proposal submission date.” AR Tab 149 at 2. If old, previously available information were not to be used in this process, there would have been no need for this explanation.

    Once again, of course, if an initial evaluation of a proposal is properly documented and articulated, and a subsequent evaluation of the same proposal results in a different assessment, the lack of an explanation for the agency’s change of mind could well result in a finding that the second decision was arbitrary. Here, as with the staffing evaluation, there is a problem. The record contains the proper documentation of the individual SSEB member’s pre-caucus evaluation of FCSS’s PPE information, see AR Tab 62, as is called for in the SSP. See AR Tab 34 at 11. This appears to reflect a “Low” risk rating for the Performance and Business Relations elements, and a “Moderate” risk rating for the Managerial and Cost Control elements as well as for PPE overall. AR Tab 62.33 But “the result of the caucus on the PPE in the original evaluation was not documented.” AR Tab 209 at 4. As the Chairman of the SSEB subsequently related:

    In the caucus, the determination was that the information provided by the offeror and obtained by the Government supported the LOW assessment. The caucus determined that the somewhat derogatory reference concerning its Plum Island contract was offset by the labor union issues that existed before its start of work on the contract. Also, evaluation of the offeror’s ability to manage this contract was enhanced by the experience of its proposed subcontractor.

    Id.

    So the questions to be answered are how and why the assessment of FCSS’s PPE changed after the second evaluation, with the risk ratings for the Managerial and Cost Control elements reverting back to the individual evaluator’s assessment of “Moderate” risk, and the overall risk slipping to “Low/Moderate.” The record shows that a search of a government database yielded two new evaluation reports for contracts performed by one of the FCSS joint venture partners. AR Tab 195. The individual evaluator assessing PPE information documented her findings. AR Tab 205. The two new FCSS reports “reflected strong performance through ratings of excellent, good, and outstanding,” although one’s “relevancy to the solicited procurement is questionable except for issues relating to management, customer service, cost control, etc. of which the evaluator rated the contractor highly.” Id. at 2. The evaluator concluded that the new reports “seem to support the Government’s original PPE risk assessment of LOW.” Id. The evaluator also stated that despite “current investigations related to the proposed subcontractor” for FCSS, there was “no basis for downgrading the subcontractor’s PPE.” Id.

    The Chairman of the SSEB then explained in the PA Report that “the SSEB reconsidered the PPE information in light of the importance of the PPE Subfactor ... and in light of the fact that the Management Subfactor ratings and proposal risk assessments of the offerors were closer than they had been in the initial evaluation round.” AR Tab 209 at 5. Without elaboration, he added that “the SSEB found some instances in which it felt that its initial evaluations of PPE had been erroneous and made appropriate adjustments in this round.” Id. The documentation of the SSEB’s PPE determinations is restricted to the slides used in the presentation to the SSA. Concerning the “Moderate” performance risk assessment for FCSS under the Managerial element, the SSEB noted: “Ratings varied; lower scores generally came from Plum Island.” AR Tab 211 at 75. For the Cost Control element assessment of “Moderate” risk, the SSEB stated: “JV Member had problems with only cost contract” and “Sub has extensive experience and generally good ratings.” Id. No similar notes accompany the “Low/Moderate” overall PPE subfactor risk assessment. See id. at 76.

    The ultimate decisionmaker in the process, the SSA, explained the PPE evaluation of FCSS in the SSDD:

    In the PPE subfactor, [FCSS] submitted information showing experience for each of its two joint venturers as a prime, although of considerably less magnitude and narrower scope than this contract. [FCSS] did not submit any PPE information for the joint venture or for its proposed main subcontractor. Considering both the information submitted by the offeror and that elicited by the SSEB from other sources, [FCSS]’s performance risk for the PPE subfactor was rated as low-to-moderate, with two elements moderate and two low.

    AR Tab 214 at 3. He later explained that “[l]ooking beyond the ratings to the substantive information,” he found KIRA “well above” FCSS and TGG in the PPE subfactor assessment, as the other two offerors’ “experiences as primes were significantly less relevant than [KIRA]’s, and [FCSS]’s ratings were considerably more mixed than either of the other two offerors.” Id. at 4. In light of this record, can a reviewing court find the assessment of FCSS’s PPE to be arbitrary?

    The Court concludes not, particularly given the “greatest deference possible” that must be afforded to the agency’s judgments in this area. See University Research Co., 65 Fed.Cl. at 505. The individual evaluator who considered PPE initially felt that FCSS should receive a “Moderate” performance risk rating for two elements and for the subfactor overall. AR Tab 62. After the SSEB caucus decided, without documentation, to provide FCSS with the same “Low” performance risk rating across-the-board as was given to all offerors, the SSA chose to focus solely on the Management subfactor and Cost factor in the initial evaluation. See AR Tab 77 at 2. After the offerors in the competitive range were given the opportunity to submit additional PPE information, the SSEB re-validated the information for all three offerors in the second round. See AR Tabs 193-95, 205. And in making the source selection decision, the SSA explained that he considered not just the SSEB’s risk ratings but also the PPE information underlying them. AR Tab 214 at 4.

    The Court notes that nothing contained in the underlying information and analysis contradicts the SSA’s conclusions concerning PPE. He found FCSS’s experience “of considerably less magnitude and narrower scope” than the contract at issue. Id. at 3. These four contracts ranged from $500,000 to $8 million, AR Tab 211 at 74; see also AR Tabs 60, 61, which are indeed much smaller than the procurement at issue, for which the IGE for the first full year was over $19.4 million. See AR Tab 208 at 4. By contrast, the KIRA experience was evaluated based on a $27 million contract, a $32 million contract, and contracts of its proposed subcontractor that ranged from $117 million to $801.8 million. AR Tab 211 at 34; see also AR Tabs 54-56. The TGG contracts evaluated ranged from $100,000 to $25.4 million in size. AR Tab 211 at 54. In re-evaluating the PPE subfactor, the SSA followed the Solicitation, which provided: “The Government will also consider the breadth and depth of experience reflected in the offeror’s list of work accomplished .... more similar PPE, in terms of type and scope of work, will be given greater credit than less similar.” Solicitation § M.2.2.2, AR Tab 7 at 50.

    Moreover, the SSA’s decision that FCSS’s ratings “were considerably more mixed than either of the other two offerors,” AR Tab 214 at 4, is not contradicted by the record. A review of the ratings provided in response to the PPE questionnaires (which contained twenty-nine items to be rated) shows that KIRA received ninety-eight “Exceptional” ratings and thirty-six “Very Good” ratings, see AR Tab 56; TGG received fifty-four “Exceptional” ratings, nine “Very Good” ratings, and twenty-one “Satisfactory” ratings, see AR Tab 59; and FCSS received nineteen “Exceptional” ratings, fifty-three “Very Good” ratings, twenty-five “Satisfactory” ratings, and four “Marginal” ratings. See AR Tab 60.34 Although the Court hastens to reiterate that the raw numbers might in themselves mean little, they certainly are consistent with the SSA’s judgment that FCSS’s ratings “were considerably more mixed.” And while FCSS criticizes the post hoc explanation of the Contracting Officer that he “probably gave [FCSS] more benefit of the doubt initially than [he] should have,” FCSS Mot. at 57 (quoting from Orange (FCSS) Post-Award Tr. (June 16, 2005) at 72), arguing that the record contains no evidence of the existence of any doubt in the initial PPE evaluation, the record also contains nothing to indicate that FCSS’s experience had previously been found to be of a similar scope and magnitude to the contract at issue, or its PPE ratings to be as good as KIRA’s. The record shows that the relevant PPE information was considered, and the SSA articulated a satisfactory explanation for his action. Nothing has been identified in the record demonstrating that the Army treated FCSS differently or unfairly in evaluating its PPE information in the second round. This aspect of the evaluation was not arbitrary.35

    c. Cost factor evaluation.

    Fort Carson Support Services also disputes the manner in which the Cost factor was used by the Army in the evaluation. See FCSS Mot. at 60-68; FCSS Reply at 30-35. This would not seem to be a particularly fruitful avenue for protest, given that the evaluation methodology of the Solicitation stated that the other factor, Quality, was “significantly more important than Cost.” Solicitation § M.2.1, AR Tab 7 at 48. But FCSS argues that the Solicitation’s evaluation criteria were violated by the Army’s use of the Cost factor and its components.

    The Cost factor included two subfactors, Total Estimated Cost, and Cost Realism, which were “of approximately equal importance.” Solicitation § M.2.1.1, AR Tab 7 at 48; see also id. §§ M.2.3.5, M.2.3.6, AR Tab 7 at 51. The TEC subfactor evaluation included “reasonableness,” which meant determining if proposed costs were “unreasonably high or low in relation to the offeror’s technical and management approaches and in comparison with” the IGE. Id. § M.2.3.2, AR Tab 7 at 51. The Cost Realism subfactor involved the evaluation of balance, allocation, and variance. It is the latter element — the difference between the TEC and the Most Probable Cost for a proposal — upon which FCSS primarily focuses in this aspect of its protest.

    One of FCSS’s arguments is that the Army’s cost variance evaluation was flawed in part because it was based on “a flawed assessment of FCSS’s staffing,” in which “the Army failed to justify giving FCSS only 3.5 FTEs credit for its innovations, which were the same innovations that the Army had determined were worth approximately 34 FTEs in the first evaluation.” FCSS Mot. at 63. The Court rejects this argument, for the reasons explained above concerning the staffing level evaluation. The record demonstrates that the Army followed a reasoned process, and considered the relevant information, to determine the number of innovation credits to allow each offeror. Because FCSS was given just 3 FTEs due to innovations, and 0.5 FTEs for “home office effort,” it was short 72.5 FTEs relative to the IGE. AR Tab 211 at 79. The adjustment for the labor shortfall proposed by FCSS accounts for the entire amount of the increase from its TEC to its MPC — or, in other words, its variance. See AR Tab 208 at 4.36 Because the staffing determination was reasonable, it was also reasonable for the Army to have calculated the variance to be $32.7 million over the full potential life of the contract.

    Fort Carson Support Services’ other arguments relating to the Cost factor charge the Army with violating the terms of the Solicitation. The use of the adverb “marginally” to modify the “satisfactory” evaluation of FCSS’s cost variance is challenged, see FCSS Mot. at 65. But nothing in the Solicitation or the SSP forbids the use of such descriptors. See AR Tabs 7, 34. While the Solicitation states that “[v]ariance will be evaluated as either satisfactory or substandard,” Solicitation § M.2.3.6.1, AR Tab 7 at 51, variance is a part of the realism analysis, of which the Solicitation also states:

    If an offeror’s cost is evaluated as unrealistically low or high compared to the anticipated costs of performance; and/or the offeror fails to recognize and/or mitigate technical, schedule, and cost risks; and/or the offeror fails to adequately explain the proposed costs, the Government will reflect the inherent findings, risks and associated cost impacts in the Cost Realism Analysis.

    Id. § M.2.3.3, AR Tab 7 at 51. Thus, the evaluation was designed to be more than the mere generation of the labels “satisfactory or substandard,” and the use of a modifier does not conflict with the Solicitation. The argument is also pressed that variance is just one of three elements of one of two subfactors of Cost, and thus was given a more prominent role in the evaluation than its one-sixth share of the process would warrant. FCSS Mot. at 66-68. But the Solicitation clearly informed prospective offerors that the source selection “decision will not be made by the application of a predefined formula, but by the conduct of a tradeoff process among the factors, subfactors, and elements identified” in the Solicitation. Solicitation § M.1.1, AR Tab 7 at 48. It was certainly proper for the Army to consider FCSS’s cost variance element evaluation in making the best value tradeoff.

    Moreover, offerors were also warned that “[a]n offeror’s eligibility for award may be significantly downgraded if the offeror’s proposed costs and technical approach result in significant cost realism adjustments.” Id. § M.2.3.6, AR Tab 7 at 51 (emphasis added). And the cost variance evaluation specifically involved “consideration given to the percentage of the variance, the dollar amount of the variance, the anticipated impact on contract performance, and the offeror’s proposed method of performance.” Id. § M.2.3.6.1, AR Tab 7 at 51. The variance of FCSS was calculated to be $32.7 million over the life of the contract, and 15.7% of the FCSS proposal’s probable cost — much higher than the other two offerors. See AR Tab 211 at 83. Under the Solicitation’s evaluation methodology, it was hardly arbitrary for the evaluation of this element to have been considered when comparing the proposals.37

    4. Did the Army Conduct Meaningful Discussions with FCSS?

    The final challenge brought by FCSS to the Army’s award of the contract to KIRA concerns alleged violations of an applicable regulation. Fort Carson Support Services argues that the Army failed to engage in meaningful discussions with it, in violation of 48 C.F.R. § 15.306. See FCSS Mot. at 68-84. This provision of the FAR concerns discussions with offerors in the competitive range, and requires that, “[a]t a minimum, the contracting officer must ... indicate to, or discuss with, each offeror still being considered for award, deficiencies, significant weaknesses, and adverse past performance information to which the offeror has not yet had an opportunity to respond.” Discussions are meaningful if “they generally lead offerors into the areas of their proposals requiring amplification or correction, which means that discussions should be as specific as practical considerations permit.” WorldTravelService v. United States, 49 Fed.Cl. 431, 439 (2001) (quoting Advanced Data Concepts, Inc. v. United States, 43 Fed.Cl. 410, 422 (1999), aff'd, 216 F.3d 1054 (Fed.Cir.2000)).

    a. Staffing levels.

    Two aspects of the discussions concerning its staffing levels are raised by FCSS. It concedes that “during discussions, the Army told FCSS of the areas in which it considered FCSS’s staffing to be either high or low,” and that “the Army told FCSS that it was only identifying those areas where the under- or over-staffing was significant.” FCSS Mot. at 70. Indeed, all offerors were told in writing that their “proposed staffing reflected substantial variances from that which the Government considered necessary,” which “included large numbers of instances of understaffing.” AR Tab 117 at 2; see also AR Tabs 115, 116. As a follow-up to the discussion, each offeror was sent an email confirming the PWS categories for which its proposal was understaffed or overstaffed. AR Tabs 143, 146, 147. And during the discussion with FCSS, the Army revealed the Most Probable Cost figure for the FCSS proposal and explained that the “gross difference” between this number and FCSS’s proposed cost reflected the net amount of understaffing. Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 37.

    But FCSS argues that the Army did not more precisely spell out the understaffing amounts, and that “the Army never gave FCSS any reason to believe that its staffing overall constituted a significant weakness or even a weakness or problem.” FCSS Mot. at 70. It frames this argument in light of the fact that it was initially awarded the contract, and thus had no reason to believe the Army found serious problems with its staffing approach. Id. at 70-71. But the Army clearly conveyed to FCSS that it was understaffed for eight of the seventeen work categories specified in the PWS, see AR Tab 13, and overstaffed in just two. AR Tab 147. And the Army informed FCSS that the MPC calculated in the initial evaluation was a figure exceeding FCSS’s proposed costs by more than $26.3 million, which was due entirely to net understaffing. See Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 37; AR Tab 76 at 34. The offeror without question “was placed on notice of significant identified management problems. That renders the discussions meaningful under the FAR.” Cubic Defense Sys., Inc. v. United States, 45 Fed.Cl. 450, 471 (1999) (internal quote omitted). The Army specifically identified areas of understaffing and conveyed the seriousness of the issue, satisfying its obligation under the FAR, as it was “not required to recommend a particular level of staffing.” WorldTravelService, 49 Fed.Cl. at 440 (internal quotation omitted).38

    The other staffing issue concerns FCSS’s Work Reception and Dispatch Desk plan. See FCSS Mot. at 71-83. The only aspect of this plan identified as a weakness during discussions concerned the person to whom calls were to be sent after the desk closed at eleven p.m. See AR Tab 117 at 2 (weakness described as “[p]ossible gap in coverage if plant operator is occupied with plant duties”); Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 18-19. But in the FPR evaluation, the fact that the WR & DD was to be staffed after 3:30 p.m. was identified as a weakness. See FCSS Mot. at 77; AR Tab 204 (comment number 3650). The only change in this aspect of the plan was to increase by one hour, until midnight, the staffing of the desk. The offeror accuses the Army of a “bait-and-switch” in identifying one weakness to be addressed, but ultimately finding another. FCSS Mot. at 71.

    It appears that the SSEB initially found the concept of extended hours for the WR & DD to be a strength, see AR 76 at 31 (listing among strengths: “Good concept on work reception desk manned from 0700 to 2300 hrs”) but then made a complete U-turn and found it to be a weakness. See AR Tab 211 at 63 (listing as a weakness: “Manning of work reception desk until midnight — extra cost.”). One doubts that the marginal addition of the eleventh hour was the Army’s concern. But in any event, the Court does not find the Army’s failure to raise this aspect of the proposal during discussions to be a violation of 48 C.F.R. § 15.306(d)(3). As FCSS’s own brief shows, the Army addressed the staffing of FCSS’s WR & DD plan during discussions. FCSS Mot. at 75-76, 78-83; Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 18-20, 29-35, 69-70. This sufficed to “generally lead” FCSS “into the areas of [its] proposal ] requiring amplification or correction.” WorldTravelService, 49 Fed.Cl. at 439. Moreover, the FAR provision requires a discussion of “significant weaknesses,” not all weaknesses. 48 C.F.R. § 15.306(d)(3) (2005). And even after the second evaluation, the WR & DD plan was considered merely a weakness, not a significant one. AR Tab 211 at 63.

    But more to the point, nothing in the record demonstrates that the extended hours aspect of the WR & DD plan was considered by the Army to be a weakness before the FPRs were reviewed, so discussions concerning weaknesses could not have included it.39 And it is certainly the Army’s prerogative to change its mind — its failure to identify an existing weakness in a proposal does not preclude the Army from considering the weakness later. Even if the failure to discuss this aspect of the WR & DD plan were a violation of the FAR provision, the Court does not find it to be prejudicial. This particular weakness was not even mentioned in the Source Selection Decision Document, see AR Tab 214, and related to an element of the Management subfactor for which the “significant weakness” of understaffing clearly predominated. See AR Tab 211 at 63; AR Tab 214 at 3.

    b. Adverse past performance information.

    The Army’s discussions with FCSS are also faulted for failing to cover certain past performance information that was considered by the Army. FCSS Mot. at 83-84; FCSS Reply at 82-86. In this regard, it should be noted that the FAR does not require that discussions address all adverse past performance information, but only “adverse past performance information to which the offeror has not yet had an opportunity to respond.” 48 C.F.R. § 15.306(d)(3) (2005) (emphasis added). When an offeror had the opportunity to respond to a past performance rating outside of the procurement process in which the information is being considered, the regulation does not apply. This was made clear by the government’s explanation of the nearly-identical formulation (“to which an offeror has not had a prior opportunity to respond”) when this language was introduced as a part of 48 C.F.R. § 15.306(b)(1)(i):

    We did not revise the rule to permit offerors to address past performance information to which they have already had an opportunity to respond because the solicitation provides offerors with the opportunity to address problems encountered on previous contracts and related corrective actions. In addition, FAR Subpart 42.15, Contractor performance information, already contains formal rebuttal procedures.

    62 Fed.Reg. 51,224, 51,228 (Sept. 30, 1997).

    i) Plum Island

    The first past performance information that FCSS argues should have been discussed pertains to the operations and maintenance support contract performed by Olgoonik, one of the FCSS joint venture partners, at the Plum Island Animal Disease Center. Fort Carson Support Services asked an official at the U.S. Department of Agriculture’s Agricultural Research Service, which ran the Center at the time of the contract, to complete the past performance questionnaire for this procurement, and listed the Plum Island contract as relevant experience. See AR Tab 60; AR Vol. VI, Tab 2 at 12. Olgoonik’s work on the contract had ended on December 31, 2003, and the questionnaire was completed on May 27, 2004. See AR Tab 60.

    The respondent for the Plum Island contract provided a rating for twenty-seven of the twenty-nine questions for which a rating was requested. For three questions, he rated performance to be “Very Good,” and for twenty, he rated the performance “Satisfactory.” Id. In response to two questions, the respondent circled both “Satisfactory” and “Marginal,”40 and for two others, he gave the outright rating of “Marginal.” Id. One of these blended ratings concerned how the contractor responded to change requests with technical/cost proposals, and was accompanied by the comment that Olgoonik “had trouble with keeping costs in line with proposals.” Id. (question twelve). The other blended rating contained the comment, “[d]ifficulty in providing accurate cost and fund expenditure reports.” Id. (question six). One of the “Marginal” ratings concerned cost control, for which the respondent wrote: “Contractor had difficulty controlling cost. A large contributing factor was the labor issues.” Id. (question twenty-five). The respondent added a final comment to the questionnaire, explaining that “[t]he contract suffered from the ongoing labor issues and political interference.... Contractor tried very hard to do a good job under very difficult circumstances.” Id. (question thirty).

    The SSEB member responsible for the initial PPE evaluation recommended that FCSS receive a “Moderate” performance risk rating for PPE overall and for the elements Managerial and Cost Control. See AR Tab 62. The Plum Island ratings were mentioned as a factor in these determinations that “some” doubt existed concerning successful performance. Id. Although it went undocumented, see AR Tab 209 at 4, the SSEB’s initial consensus was that FCSS should nonetheless receive a “Low” performance risk rating for the PPE subfactor and each element. See AR Tab 76 at 33. The SSEB Chairman later explained that “[t]he caucus determined that the somewhat derogatory reference concerning its Plum Island contract was offset by the labor union issues that existed before its start of work on the contract.” AR Tab 209 at 4. The question here presented is whether this “somewhat derogatory reference” constituted adverse past performance information that the Army was required to discuss with FCSS, as per 48 C.F.R. § 15.306(d)(3).

    To answer the question, we must first determine whether the information was “adverse.” For the PPE questionnaire, the worst rating that could be provided was “Unsatisfactory.” See AR Tab 19 at 3. The rating of “Marginal” was defined as follows:

    Performance does/did not meet some contractual requirements. The contractual performance of the element or sub-element being assessed reflects a serious problem for which the contractor has not yet identified corrective actions. The contractor’s proposed actions appear only marginally effective or were not fully implemented.

    Id. Does the failure to meet “some” requirements, or “a serious problem” in performance amount to an adverse rating? There does not appear to be any case law on point. One opinion of our Court held that evaluations needed to be “deemed ‘poor’ or otherwise deficient” to be adverse for the purposes of this FAR provision. Bannum, Inc. v. United States, 60 Fed.Cl. 718, 729 (2004). A GAO decision suggests that a “marginal” rating would be adverse past performance information. See NMS Management, Inc., B-286335, 2000 CPD ¶ 197, 2000 WL 1775243, at *3 (Comp.Gen. Nov. 24, 2000). Another suggests that adverse means an evaluation that “reflected a finding of a history of performance problems.” See Standard Communications, Inc., B-296972, 2005 CPD ¶ 200, 2005 WL 3078884, at *6 (Comp.Gen. Nov. 1, 2005).

    The Court considers this a close question. But it appears that the SSEB’s PPE evaluator was influenced by the Plum Island ratings in determining that “some” doubt of successful performance existed. See AR Tab 62. And the record does not contain any document preexisting the discussions that reflects a consensus opinion that the “Marginal” ratings were mitigated by inherited labor conditions. In light of the above, and the characterization of the evaluation as “somewhat derogatory” — and considering that the definition of “Marginal” includes serious performance problems and performance failures — the Court concludes that the Plum Island evaluation contained adverse past performance information.

    But in order for FCSS to have been entitled to notice of this information in the discussions with the Army, FCSS must show that at the time of the procurement process, it had “not yet had an opportunity to respond” to this information. 48 C.F.R. § 15.306(d)(3) (2005). The information pertained to a contract which had expired four months before FCSS asked the respondent to complete the questionnaire. See AR Tab 60. The Plum Island contract was with the federal government, so the FAR provisions entitling the contractor to review of its performance evaluation applied. See 48 C.F.R. §§ 42.1501-03 (2005). This information was submitted at FCSS’s request, and FCSS could have provided an explanation of the Plum Island performance in its proposal. But most importantly, no information in the record shows that Olgoonik had “not yet had an opportunity to respond” to its Plum Island performance evaluation. Fort Carson Support Services argues the contrary position, that nothing in the record establishes that Olgoonik knew of or had the opportunity to respond to this information, and contends that “[t]he burden of compliance with FAR 15.306(d)(3) is not FCSS’s, it is the Army’s.” FCSS Reply at 85-86.

    But while the FAR provision indeed imposes a requirement on the Army, FCSS must show a “clear and prejudicial violation” of the regulation to prevail. See Impresa Construzioni, 238 F.3d at 1333. This Court has held that, in bid protests, “the record may be supplemented with (and through discovery a party may seek) relevant information that by its very nature would not be found in an agency record.” Orion Int’l Techs. v. United States, 60 Fed.Cl. 338, 343 (2004); see also Beta Analytics Int’l, Inc. v. United States, 61 Fed.Cl. 223, 225 (2004). Evidence of the lack of an opportunity to respond to past performance information fits this description, and if FCSS possessed any relevant information on this point, FCSS could have supplemented the record with it — in the form of a declaration from the relevant Olgoonik officer, or the like. Absent such information, FCSS has failed to demonstrate a “clear and prejudicial violation” of the regulation in question.

    ii) Kellogg Brown & Root

    The other past performance information which FCSS contends should have been a topic of discussion relates to the work of its proposed subcontractor, KBR. On January 26, 2005 — five days after the FPRs were filed, and six weeks after the discussions were conducted — KIRA, acting through its counsel, sent a letter to the Contracting Officer. AR Tab 179.41 KIRA declared that its purpose in writing was “to provide information regarding deficiencies on the part of Kellogg, Brown & Root (‘KBR’) in managing and accounting for Government property.” Id. (Jan. 26, 2005 ltr. at 1). The letter “call[ed] to [the Army’s] attention an audit report issued by the Office of Inspector General of the Coalition Provisional Authority” that was dated October 25, 2004, and was attached to the letter. Id.42 KIRA characterized the information in the report as reflecting “one of the worst contractor performances in recent memory,” and stated that “KBR’s inability to properly manage Government property calls into doubt ‘the [JV’s] likelihood of success in meeting the requirements stated in [the RFP].’” Id. (ltr. at 2) (quoting Solicitation § M.2.2.2) (insertions in original). KIRA closed the letter with the “hope” that the Army would “find this information helpful in evaluating the JV’s proposal.” Id. (ltr. at 3).

    Without question, this past performance information must be considered “adverse.” According to KIRA, it concerned “deficiencies” and “one of the worst contractor performances in recent memory.” AR Tab 179 at 1, 2. The Chairman of the SSEB described KIRA’s attachment as “a derogatory report.” AR Tab 209 at 4. He also stated that “the report did not indicate an opportunity for the subcontractor to respond to its findings.” Id.43 Even though the report was released in October, 2004, KIRA waited until late January, 2005 — after discussions had already occurred and the FCSS final proposal was submitted — before sending a copy to the Army to be “helpful in evaluating” the FCSS proposal. See AR Tab 179. The timing of the document, then, seems calculated to prevent FCSS from having an opportunity to respond to the information.

    KIRA’s submission of the audit report had its intended effect, as the Chairman acknowledged that “[t]he SSEB considered that information” in evaluating FCSS’s past performance. AR Tab 209 at 4. Under these circumstances, then, adverse past performance information concerning FCSS was used in the procurement process, and FCSS had neither an opportunity to respond to the information nor any discussion of this information with the Army. This is a clear violation of 48 C.F.R. § 15.306(d)(3). But a violation must be not only clear, but prejudicial, if it is to be the basis for overturning a procurement decision. Impresa Construzioni, 238 F.3d at 1333. The record establishes that this violation was not prejudicial to FCSS’s chance to receive the contract award.

    The Chairman explained that the SSEB considered the KBR audit report, “but decided that it would not adversely affect [FCSS]’s performance risk assessment.” AR Tab 209 at 4. He stated that the “primary reason” for this decision “was that the SSEB felt that it was not reasonable to use work performed in a foreign, chaotic, wartime situation as a basis for assessing ability to perform in a stateside, garrison context such as under this contract.” Id. He added that the “report reflected disagreement with its findings by the Defense Contract Management Agency,” and noted the lack of an opportunity given KBR to respond to the findings. Id. In the SSEB’s presentation to the SSA, the only reference to FCSS’s proposed subcontractor is the note: “Sub has extensive experience and generally good ratings.” AR Tab 211 at 75. The only mention of the subcontractor in the SSA’s Source Selection Decision Document was the remark that FCSS “did not submit any PPE information ... for its proposed main subcontractor.” AR Tab 214 at 3. Far from establishing that the failure to discuss the adverse KBR performance information was prejudicial to the evaluation and award under this Solicitation, the record instead shows that the information did not affect the Army’s decision.44

    The Court notes that KIRA’s submission of the letter and audit report was certainly contrary to the spirit of competitive bidding, a process the government uses to provide offerors with the incentive to improve the quality and price of their bids, see Arch Chems., 64 Fed.Cl. at 400, not the opportunity to conduct a negative campaign impugning the abilities of one’s competitors. But while the Solicitation does not seem to contemplate such actions, the Court is not aware of any provisions in it or the FAR that would prohibit this behavior, either.

    C. The Ginn Group’s Protest

    In his Source Selection Decision, the SSA concluded that TGG’s proposal ranked “third of the three offerors.” AR Tab 214 at 4. The Ginn Group’s proposed costs and MPC were the highest of the three. Id. For the “significantly more important” Quality factor, see Solicitation § M.2.1, AR Tab 7 at 48, TGG was also ranked third, as the SSA “considered [TGG]’s PPE to be somewhat better than [FCSS]’s, but not enough to offset [FCSS]’s considerably better Management subfactor proposal.” AR Tab 214 at 4. The Ginn Group was assigned the adjectival rating of “Satisfactory” and the proposal risk rating of “Moderate” for the Management subfactor overall, id. at 2, but its Quality Control element was evaluated as deficient, with an “Unsatisfactory” adjectival rating and a “High” proposal risk rating. Id. at 3. The SSA explained that “the Quality Control plans provided in its proposal, as revised, lacked detail sufficient to reflect an understanding of the [Solicitation] requirements.” Id.

    The Ginn Group raises three grounds for protest.45 First, TGG claims that the Army unreasonably evaluated the Quality Control (“QC”) element of its proposal, and failed to engage in meaningful discussions concerning its QC plan. TGG Mot. at 14-24. Second, it argues that the Army failed to follow the Solicitation requirements and was inconsistent in evaluating the TGG and KIRA proposals under three other elements of the Management subfactor, resulting in an arbitrary Management subfaetor evaluation. Id. at 24-31. Finally, TGG asserts that the Army’s best value determination was arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law, due to the SSA’s having followed the allegedly flawed evaluations made by the SSEB. Id. at 31-33.

    At the outset, the Court notes that TGG attempts to shoe-horn this matter to fit within the analytical framework employed by this Court in Beta Analytics Int’l, Inc. v. United States, 67 Fed.Cl. 384 (2005). See, e.g., TGG Mot. at 22, 26-32. In Beta Analytics, the evaluations and notes of individual evaluators were relevant to our Court’s review, because the evaluation process employed therein required offerors to be compared on the basis of the average of the scores provided by the individual evaluators. Beta Analytics, 67 Fed.Cl. at 397-99. The source selection plan did not employ scoring by consensus, and did not give the SSA any discretion to deviate from the technical score averages in determining best value. See id. at 387-88, 396-97. Since the judgments of each individual evaluator factored into the averages, the “verification that the relevant factors were considered, and that a reasoned connection exists ‘between the facts found and the choice made,’” id. at 398 (quoting Balt. Gas & Elec. Co. v. Natural Res. Def. Council, 462 U.S. 87, 105, 103 S.Ct. 2246, 76 L.Ed.2d 437 (1983)), necessitated that the Court consider these judgments. Id. at 399.

    The evaluation methodology used in the procurement decision at issue here differs from that employed in Beta Analytics in many critical respects. The SSA’s best value determination was not based on the rigid use of scores supplied by other officials, but instead involved “a tradeoff process among the factors, subfactors, and elements” defined in the Solicitation, and “the exercise of ... sound business judgment.” Solicitation § M.1.1, AR Tab 7 at 48. The Solicitation expressly provided that this “decision will not be made by the application of a predefined formula.” Id. The Source Selection Plan charged the SSEB to produce “consensus ratings,” and required that these ratings “shall not be mathematically computed based on the individual evaluations, but rather shall be established by agreement among the SSEB members.” AR 34 at 8; see also id. at 9, 11. And, as was explained above, far from generating numerical scores of any kind, the evaluation process used adjectival and risk ratings as tools to aid in the assessment of proposals, rather than as ends in themselves.

    In an evaluation process in which the ultimate decisionmaker was required to “utilize all evaluation reports and analysis in determining the successful proposal,” AR 34 at 14, and in which he possessed the discretion to “subjectively” reach his own best value determination, see Solicitation § M.1.2, AR Tab 7 at 48, circumstances calling for a review of the judgments of individual evaluators are less likely to be presented. And these circumstances are further limited by the particular types of assessments involved in this matter, as “the amount of deference given to a particular decision will depend on how objective the associated criteria for scoring happened to be.” Beta Analytics, 67 Fed.Cl. at 399. Of course, a Court may always review the rationale of the ultimate decision-maker, or of any other official or group that is expressly relied upon by that decisionmaker, “to see if a necessary element for a score is missing,” id., but it cannot “second guess” the “discretionary determinations of procurement officials” concerning the “minutiae” involved in “technical ratings.” E.W. Bliss, 77 F.3d at 449.

    1. Was the Evaluation of the Quality Control Element Arbitrary or Unlawful?

    The Quality Control element of TGG’s proposal was evaluated as deficient, disqualifying TGG for award under the SSP. See AR 34 at 3 (explaining that a deficiency “increases the risk of unsuccessful contract performance to an unacceptable level”). Thus, this element is particularly critical to the success of TGG’s protest.

    a. The Army’s evaluation of the QC element.

    The first argument of TGG concerning the evaluation of the QC element is that the SSA misinterpreted the SSEB’s criticism that TGG’s QC plan “lacked sufficient detail to be evaluated.” TGG Mot. at 14-15 (quoting AR Tab 211 at 51); see also AR Tab 181 at 2; AR Tab 202 at 1. In the Source Selection Decision Document, the SSA explained that “the Quality Control plans provided in [TGG’s] proposal, as revised, lacked detail sufficient to reflect an understanding of the [Solicitation’s] requirements.” AR Tab 214 at 3. But these two findings are not inconsistent. The Solicitation required offerors to “[d]escribe in detail the approach to deficiency prevention and documentation and how the QC system will be used to achieve constant improvement in the performance of all PWS tasks.” Solicitation § L.8.6.5, AR Tab 7 at 43. Offerors were told that “[t]he Government will evaluate how the offeror’s overall approach to quality control will be put into place to meet or exceed the requirements of the PWS.” Id. § M.2.2.1.5, AR Tab 7 at 50. The evaluation of the QC plan is based on whether the requirements of the Solicitation are met, and the SSEB found insufficient detail to evaluate TGG’s plan. It stands to reason that if the lack of sufficient detail prevented the SSEB from comparing the proposal to the Solicitation requirements, TGG’s understanding of the Solicitation requirements was not reflected in the proposal. See also Solicitation § M.3.1, AR Tab 7 at 52 (definition of the “Unsatisfactory” rating given to TGG for this element includes “inadequate detail to assure the evaluator of an understanding of the proposed approach”).

    The Ginn Group’s other arguments concerning the evaluation of its QC plan ask the Court to second guess a number of determinations made by the SSEB and adopted by the SSA. When TGG was informed, by letter, that its proposal had been included in the competitive range, it also received a list of discussion topics, organized under the headings “Uncertainties,” “Weaknesses,” “Significant Weaknesses,” and “Deficiencies.” AR Tab 116. The Army’s list of these problems included a weakness in the QC plan: “States critical work will receive [XXXX] but did not provide sufficient detail to enable the evaluation of the proposed method of inspection.” Id. (list at 2). Two deficiencies regarding the QC plan were expressly identified. The first stated that the QC plan “for Grounds Maintenance was copied entirely from the PWS and only covered what is required for Grounds Maintenance and did not provide any detail on how this area would be monitored,” and added the criticism that “[t]he QC plans did not provide details on how they integrate with the offeror’s overall QC programs.” Id. (list at 3). The other deficiency noted that the QC plan “for Service Orders did not explain the proper use of random sampling for service orders and has cited the use of MIL-STD 105E that is an obsolete standard.” Id.

    In response to these criticisms, TGG revised its QC proposal, but the Army still regarded TGG’s QC proposal as “Unsatisfactory” with a “High” risk. The Ginn Group takes aim at the specific findings of the SSEB, memorialized in the caucus session minutes. See TGG Mot. at 18-20. In addition to the deficiency, described above, of a “[l]ack of sufficient detail within the Quality Control plans for evaluation,” the SSEB also found that “the proposed plan does not explain how to determine AQL’s or to perform random sampling,” and that TGG deleted the obsolete standard referenced in its initial proposal “but did not replace it with either sufficient description to explain the program or with another standard, which would have supplied adequate AQL’s.” AR Tab 181 at 1-2.46 The SSEB also elaborated on the deficiency in TGG’s QC plan, noting that “[i]n the grounds maintenance area the use of [XXXX] is an unrealistic standard,” and explained that a significant weakness was “unrealistic AQLs for all areas.” AR Tab 202 at 1.

    The Ginn Group argues that its QC plan description was sufficiently detailed. TGG Mot. at 19. It contends that statements such as “random sampling will be used to select and measure a representative group of performance units,” along with a chart specifying the various percentages of performance objectives to be randomly sampled, suffice to show how random sampling will be performed. Id. at 20 (quoting AR Vol. XIV, at Appendix 8-4, 9-4, and 10-4). And it asserts that it needs no standard as a source for AQLs, for it proposes a [XXXXXXXX]. Id. at 21. The Ginn Group invites the Court to conduct a “simple review of [the details of] TGG’s Quality Control Plans,” TGG Mot. at 19, and “a comparison of TGG’s and [KIRA]’s Quality Control plans.” Id. at 20. But the sort of judgments that TGG wants reviewed — whether a plan is detailed enough to be evaluated; whether a system for determining AQLs is necessary; whether defect levels are set too low; or whether the manner of performing random sampling is adequately described — are quintessential minutiae of the evaluation process.

    The review of the Army’s judgments in these areas requires more than simply verifying that some objective feature is missing, like a description of particular work experience. Cf. Beta Analytics, 67 Fed.Cl. at 402. The Army did not state that AQLs were missing, but that they were unrealistic and that a system for providing adequate ones is not described. AR Tab 181 at 1. It did not find that the words “random sampling” were missing, but that the proposal did not explain how the sampling would be performed — and a Court is hardly in the position to say that merely stating the percentage of things being sampled would suffice. The Court’s deference to the Army in this regard “is not absolute, but where a Court is called upon to review technical matters that are within the [Army’s] expertise, the highest degree of deference is warranted.” Cubic Defense Sys., 45 Fed.Cl. at 458. The Army’s evaluation of the technical merits of TGG’s QC proposal demonstrates that the relevant information was considered, and that a satisfactory explanation was articulated. While the Court may confirm or deny that a discretionary decision was reasoned, it may not substitute its own judgment for that of the procurement officials in such matters as these. See E.W. Bliss, 77 F.3d at 449.

    b. The discussions concerning the QC proposal.

    The Ginn Group also alleges that the Army failed to engage in meaningful discussions about its QC plan, and that “the Army never indicated to TGG the level of concern that the Army had regarding TGG’s quality control plans.” TGG Mot. at 15 (emphasis added). The latter statement is premised on TGG’s description of the attachment to the competitive range letter as just a “list of discussion questions,” id., when the items concerning the QC element were designated as one “Weakness” and two “Deficiencies.” See AR Tab 116. Calling QC plan shortcomings “deficiencies” and a “weakness” seems to adequately communicate a serious level of concern.

    As was discussed above, the FAR requires, when discussions are held, that “[a]t a minimum, the contracting officer must ... indicate to, or discuss with, each offeror still being considered for award, deficiencies, [and] significant weaknesses.” 48 C.F.R. § 15.306(d)(3). To be meaningful, discussions must “generally lead offerors into the areas of their proposals requiring amplification or correction,” and “should be as specific as practical considerations permit.” WorldTravelService, 49 Fed.Cl. at 439 (citation omitted). The GAO has described meaningful discussions as not requiring “that an agency must spoon feed a vendor as to each and every item that must be revised or addressed to improve the submission.” Uniband, Inc., B-289305, 2002 CPD ¶ 51, 2002 WL 393059, at *11 (Comp.Gen. Feb. 8, 2002) (internal quotations and citations omitted).

    According to TGG, “the Army informed TGG during ... discussions that TGG could address the agency’s concerns with TGG’s Quality Control plans” by defining “critical” work, including in each individual plan a description of how its individual QC plans integrate with its overall QC plan, and including AQLs in its proposal. TGG Mot. at 23. The Ginn Group argues that since it made these revisions, but still received an “Unsatisfactory” adjectival rating and a “High” proposal risk rating for the QC element, “there is no conclusion but that the Army misled TGG.” Id. But the Army did not, during the discussions, ever claim that all TGG had to do was make the changes discussed to have a satisfactory QC plan. See Blue (TGG) Disc. Tr. (Dec. 15, 2004) at 13-15; see also AR Tab 130 at 3 (prepared statement for discussions) (the Army tells the offerors “we are not here to tell you how to respond to those deficiencies or weaknesses or how, otherwise, you may improve your proposal”).47 During the discussions, the Army did not instruct TGG on how to rewrite its proposal, and it had no such obligation to do so. See SDS Int’l v. United States, 48 Fed.Cl. 759, 774 (2001). The Army identified TGG’s QC plan as an area of concern, and mentioned several areas of the QC plan in particular as weaknesses or deficiencies that TGG should address. Thus, the discussions satisfied the FAR requirement of leading TGG to those areas of its proposal that needed improvement.

    The Ginn Group also argues that it was misleading for the Army to have been so specific about problems with TGG’s QC proposal, so that TGG believed it could improve the proposal by merely focusing on the specific areas discussed. See TGG Reply at 8 (citing Analytical & Research Tech., Inc. v. United States, 39 Fed.Cl. 34 (1997)). It “acknowledges that it might now have no basis for protest” if “the Army merely informed TGG of its ultimate purported conclusion, that TGG’s Quality Control plans ‘lacked detail.’” Id. at 8. The Ginn Group contends that since the lack of detail was raised only as relating to the “[XXXXXX]” proposal, see id. at 9 & n. 3, and that it clarified this as requested, the Army cannot use the lack of detail as the reason the QC plan is deficient. But since the Army has no obligation to help an offeror rewrite its proposal, it could not be required, during discussions, to explain how to write a detailed QC proposal. Moreover, it appears that the Army’s concern about the lack of detail in the final proposal related to the random sampling and AQL provisions that were added after discussions occurred. See AR Tab 181. Discussions need not employ clairvoyance to be meaningful.

    The Solicitation required offerors to provide the Army with a “practical, straightforward, specific, concise, and complete” QC plan. Solicitation § L.8.6.5, AR Tab 7 at 43. The Army provided TGG with a list of weaknesses, significant weaknesses, and deficiencies identifying areas of the initial proposal that needed improvement. AR Tab 116. If TGG failed to provide sufficient detail in its final proposal, this was due to its own business judgment, as the Army is not obliged to explain the amount of detail TGG’s proposal required. The Ginn Group has failed to demonstrate a clear and prejudicial violation of 48 C.F.R. § 15.306(d)(3).

    Although TGG’s failure to prevail on this ground for protest may well moot the protest as a whole — since the deficiency under the QC element stands — the Court will nevertheless consider the remainder of TGG’s arguments, in turn.

    2. Was the Army’s Evaluation of the TGG and KIRA Proposals Consistent and in Accordance with the Solicitation?

    The Ginn Group also argues that the Army’s evaluation of its FPR was contrary to the terms of the Solicitation, and inconsistent with the Army’s evaluation of KIRA’s FPR. TGG Mot. at 24-31. It criticizes the Army’s overall evaluation of the Management subfactor, based on alleged problems with the evaluation of three elements of that subfactor — Phase-in Planning, Contract Administration, and Human Resources.

    a. The Phase-in Planning element.

    In its presentation to the SSA, the SSEB commented that the strengths of TGG’s Phase-in Planning were: “All aspects strong.” AR Tab 211 at 45. The Ginn Group contends that this is the equivalent of finding that the proposal “offer[ed] significant advantages to the Government,” see TGG Mot. at 25, which is part of the description of the adjectival rating “Outstanding.” See Solicitation § M.3.1, AR Tab 7 at 52. Thus, it argues, the Army acted arbitrarily in merely rating this element as “Good.” TGG Mot. at 25. But a Court is hardly in a position to decide whether a proposal should be rated “Outstanding” rather than “Good,” even given that the proposal is strong in all aspects. Considering that a “Good” proposal is one “offer[ing] some advantages to the Government,” how is the Court to decide whether some or significant advantages are offered? This is exactly the sort of second-guessing that Courts must avoid. The Ginn Group argues that this situation is similar to one presented in Beta Analytics, where the record showed that a necessary — and objective — requirement for the top rating was absent, conflicting with an evaluator’s award of the top score. Beta Analytics, 67 Fed.Cl. at 402-03. But there is a big difference between a Court looking to see if a necessary element for a high rating is missing — whether this is based on an objective criterion, or on a specific finding by the evaluator that the element is missing — and a Court deciding that a higher rating is warranted because a sufficient element is present. There is an inconsistency when a proposal receives a rating for which it was ineligible, but not when it fails to receive one for which it is eligible — unless, of course, the relevant evaluating official stated clearly that a particular finding was the only reason a higher rating was denied, and that finding itself is objectively verifiable as a mistake.

    The Ginn Group further argues the Army was inconsistent in its evaluation of this element, as KIRA received an “Outstanding” rating although it was evaluated as having “only” two identified strengths (and no weaknesses). TGG Mot. at 26; see AR Tab 211 at 24-25. But neither the SSP nor the Solicitation employed a methodology whereby the raw numbers of strengths and weaknesses were added or subtracted to determine the rating an offeror received. The Court cannot say that it is irrational for a proposal with a couple of identified strengths to be better than one that was strong in all aspects, as nothing decrees that all strengths are created equal.

    b. The Contract Administration element.

    The Ginn Group makes a similar argument regarding the Contract Administration element. For this element, it received a “Satisfactory” adjectival rating and a “Moderate” proposal risk rating. AR Tab 211 at 46. Because the SSEB identified one specific strength and no weaknesses relating to this element, id. at 47, TGG argues that these ratings were irrational. A “Satisfactory” proposal was described in the Solicitation as one that “offer[ed] no particular advantages to the Government,” Solicitation § M.3.1, AR Tab 7 at 52; a “Moderate” risk rating meant a proposal was “[l]ikely to cause some disruption of schedule, increase in cost, or degradation of performance” and would “require [a] continuous level of contractor emphasis.” Id. § M.3.2, AR Tab 7 at 53. The Ginn Group reasons that a strength must be a particular advantage, and that the lack of any identified weaknesses means the conditions for a “Moderate” level of risk are not present.

    But this, again, is not the sort of judgment a Court can make. In giving TGG the ratings it did, the Army implicitly found that these descriptions “most closely describ[ed] the actual assessment,” Id. § M.3, AR Tab 7 at 52, and these ratings — as well as the identified strengths and weaknesses — are tools to aid the Army in assessing proposals, not ends in themselves.

    The Ginn Group also argues that KIRA’s evaluation under this element was inconsistent with the Solicitation criteria. TGG Mot. at 28-30. KIRA received a “Good” adjectival rating and a “Low” proposal risk rating for Contract Administration. AR Tab 211 at 26. KIRA had three identified strengths, but also what TGG characterized as “a relatively serious weakness.” TGG Mot. at 29 (citing AR Tab 211 at 27). This weakness was described in SSA briefing materials as: “Proposed work schedule is not in agreement with the PWS or the current CBA. Could result in an inability to meet PWS requirements and reduced oversight by the Government.” AR Tab 211 at 27. The Ginn Group argues that this weakness causes KIRA to fail the test for a “Good” rating, which was described as a proposal “fully meeting the requirements of the” Solicitation and having “a high probability of meeting or exceeding the requirements.” Solicitation § M.3.1, AR Tab 7 at 52; see TGG Mot. at 29. But the finding that an aspect of KIRA’s proposal “could result in an inability to meet PWS requirements” does not necessarily conflict with a finding of a high probability of meeting these requirements — a could, after all, may still reflect a relatively low probability. Nor is the finding that something could result the same as a finding that it is currently the case — so the Army could rationally find requirements fully met by a proposal, even if there were a chance that they might not be met in the future. Thus, this is not the same as an absence of an objective, necessary condition for a rating. Cf. Beta Analytics, 67 Fed.Cl. at 402-03.

    c. The Human Resources element.

    For the Human Resources element, TGG received a “Moderate” proposal risk rating. The Ginn Group argues, as it did concerning the Contract Administration element, that the lack of identified weaknesses for this element, see AR Tab 211 at 49, makes the “Moderate” risk rating irrational. The Court rejects this argument, for the same reasons stated above. The Army did not adopt a method of evaluation in which ratings were mechanistically determined by the raw numbers of strengths and weaknesses, and it is not this Court’s proper function to substitute its judgment for the technical evaluations of the contracting officials. Absent a showing that the evaluation was arbitrary, capricious, or otherwise not in accordance with law, the Court must defer to the Army’s assignment of a particular rating during the evaluation process. See E.W. Bliss, 77 F.3d at 449; Overstreet Elec. Co., 59 Fed.Cl. at 117.

    3. The Best Value Determination was not Arbitrary

    The Ginn Group’s argument that its proposal was arbitrarily evaluated under the Management subfactor was based on the cumulative effect of the alleged problems with each of the four elements discussed above. TGG Mot. at 30-31. Since the Court does not find anything unlawful or arbitrary in the manner in which these individual elements were evaluated, TGG’s claim concerning the overall subfactor must necessarily fail. The assertion by TGG that the best value determination was arbitrary and capricious is similarly based on its contentions concerning the conduct of the SSEB in evaluating four Management subfactor elements, see id. at 31-32, and is thus rejected for the same reasons.

    III. CONCLUSION

    For the foregoing reasons, the Court DENIES plaintiffs’ motions for judgment on the administrative record and for injunctive relief, and correspondingly GRANTS defendant’s and intervenor’s cross-motions for judgment on the administrative record. The Clerk of the Court is directed to enter judgment accordingly.

    IT IS SO ORDERED.

    . The Quality Committee seems, probably because it evaluated only the Management subfactor, to also have been called the "Management Committee.” See Price Negotiation Memorandum, AR Tab 78.

    . The major difference between the two is the omission, in the body of the SSP, of any discussion of the Total Estimated Cost subfactor. See Attachment C to SSP, § M.2.3.5, AR Tab 34.

    . "Deficiency" and "Weakness" were defined as in the FAR, with an example added for the former. Compare SSP §§ I.F(l) and (3), AR Tab 34 at 3-4 with 48 C.F.R. § 15.001. "Strength” was defined as "[a]n aspect of a proposal that appreciably decreases the risk of unsuccessful contract performance or that represents a significant benefit to the Government.” SSP § I.F(2), AR Tab 34 at 4.

    . In the actual evaluations, the SSA followed the relative importance given the subfactors and elements in the Solicitation, and thus this error in the SSP was of no import. See AR Tab 77 at 1; AR Tab 214 at 1.

    . The Chairman of the SSEB apparently did not prepare a PA Report for the initial evaluation. See AR Tab 209 ¶ 2.a. (Mar. 30, 2005 Proposal Analysis) (Chairman admits he "did not prepare a separate Proposal Analysis for that stage of the evaluation").

    . For purposes of the evaluation, FCSS was given the color "Orange” to mask its identity from the SSA. KIRA was identified as offeror "Brown,” and The Ginn Group as offeror "Blue.” See AR Tab 176 (Jan. 24, 2005 Mem.).

    . Under the SSP, "to be eligible for award, a proposal must receive at least a 'Satisfactory’ rating for the Management subfactor (i.e., reflecting that the proposal meets the minimum requirements of the RFP).” SSP § III.B(5), AR Tab 34 at 10.

    . Chenega was identified as offeror "Red” in the evaluation.

    . Protests challenging FCSS’s HUBZone status were apparently also filed with the Small Business Administration, and were denied. See AR Tab 209 at 1.

    . Chenega withdrew from the competition for the contract on January 13, 2005. See AR Tab 209 at 2.

    . See AR Tabs 130, 131 (Contracting Officer’s scripts); see also Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 37-38; Blue (TGG) Disc. Tr. (Dec. 15, 2004) at 16-17; Brown (KIRA) Disc. Tr. (Dec. 16, 2004) at 78-79.

    . This amendment also shortened the base period, from the originally-contemplated one-year, see Solicitation § F.2, AR Tab 7 at 14, to four months. AR Tab 149 at 2. As a consequence, the Army used the first option year, rather than the base period, as its basis for calculating MPC in the second evaluation. See, e.g., AR Tab 208 at 4.

    . See AR Vol. VI Tabs 4-6 and Vol. VIII (FPR of FCSS); AR Vol. IX Tabs 4-6 and Vol. XI (FPR of KIRA); AR Vol. XII Tabs 4-6 and Vol. XIV (FPR of TGG).

    . There was a discrepancy in the presentation of the FCSS MPC information, as the briefing chart showed that the offeror’s FPR was understaffed by ninety-five positions, as opposed to the 94.5 identified in the PA Report, with the result that the net understaffing was calculated to be 72.5 man-years, rather than 72. Compare AR Tab 211 at 79 with AR Tab 209 at 8; see also AR Tab 208 (FCSS MPC adjustment showing understaffing of 94.5 man-years).

    . The government ultimately submitted a fifteenth volume, and the Second Revised Volume XV was filed on August 31, 2005.

    . It is not clear whether this standard comes from 5 U.S.C. § 706(2)(A), as being “otherwise not in accordance with law,” or under 5 U.S.C. § 706(2)(D), as an example of an action taken "without observance of procedure required by law.” Presumably, 5 U.S.C. § 706(2)(B)-(D) are mere elaborations on the 5 U.S.C. § 706(2)(A) theme, and thus apply in all bid protests.

    . In the FPR, FCSS proposed 182 FTEs. See AR Vol. VI, Tab 6. It appears that 185 FTEs were proposed initially. See AR Tab 67.

    . See AR Tab 209 at 5-6 (PA Report explaining ODC adjustments); compare AR Tab 68 with AR Tabs 206, 207, 208. A direct comparison of the two different MPC adjustments is complicated by a change in the baseline, from the base period used in the first evaluation, to the first option year used in the second evaluation. But it appears that the baseline MPC adjustment to FCSS's estimated costs due to understaffing increased from $ 2.76 million to $ 3.04 million, while the deduction for ODC fell from $ 0.89 million to $ 0.08 million. Compare AR Tab 68 with AR Tab 208. Thus, of the baseline MPC increase of $ 1.2 million, ODC contributed $ 0.81 million and understaffing only $ 0.28 million.

    . In its Complaint, FCSS raises one additional argument — that an experienced evaluator who retired from government service between the first and second evaluations was not replaced on the SSEB, placing too much responsibility on the lesser-experienced remaining members. See Complaint ¶ 55. The inexperience of evaluators was mentioned only in passing at oral argument, Tr. (Mar. 14, 2006) at 115, and wisely seems to have been dropped as a separate ground for protest in FCSS’s briefing on the matter — as the judgment employed by Executive branch officers in selecting such evaluators is not normally a proper matter for court review.

    . See AR Tab 210 (SSEB determines that neither the Solicitation nor the SSP "requires a score for the Overall Quality Factor”). The Solicitation stated that the adjectival and proposal risk ratings were to be used "for the Quality Factor, the Management subfactor, and the elements [or components] of the Management subfactor,” Solicitation §§ M.3.1, M.3.2, AR Tab 7 at 52. While this could be read to mean the Quality factor receives these ratings, it is also consistent with the notion that the Quality factor is a "composite evaluation” that considers the subfactor and element ratings. The text describing the Quality factor plainly stated that two subfactors and their elements were to receive ratings, and makes no mention of a rating for the factor. See id. § M.2.2, AR Tab 7 at 49. Thus, it is clear that no rating was to be assigned — which makes sense, since the two subfactors are rated using different schemes and it is not apparent how the performance risk ratings could interact with the adjectival and proposal risk ones.

    . The neutral rating of "Unknown” is also used, when no performance record had been identified for an element. AR Tab 7 at 53.

    . It also argues that there was no basis for worsening its risk rating from “Low” to "Moderate" for the two sub-elements, FCSS Mot. at 27, which will be discussed below.

    . Fort Carson Support Services also included in this aspect of its protest a variation on its argument concerning the evaluation of staffing levels, which is considered below. See FCSS Mot. at 33-34.

    . Nor is there any explanation of how the IGE comparison that calculated the FCSS proposal to be understaffed by ninety-eight positions and overstaffed by seventeen was converted into the MPC decision that FCSS was understaffed by forty-five positions and overstaffed by six. Compare AR Tab 67 at 2 with AR Tab 76 at 34.

    . On another slide, the SSEB stated that "[t]he staffing proposed cannot sufficiently provide a full-range of coverage for the entire functional areas of the contract.” AR Tab 76 at 32.

    . During oral argument, counsel for the government confirmed that the decision to credit FCSS with thirty-nine FTEs due to innovations was “not documented, and the cost analysis wasn’t provided.” Tr. (Mar. 14, 2006) at 208. See also App. to FCSS Mot., Ex. F at 4 (Aug. 31, 2005 letter from Army counsel stating contracting office was "not aware of any documents related to any innovations by FCSS that are not already contained in the record”).

    . There was one slight discrepancy, to KIRA's disadvantage, as it appears that the size of the innovations credit calculated by the evaluator was actually 6.6 FTEs, not the 6.3 FTEs figure used in the decision. Compare AR Tabs 197, 206 with AR Tabs 209, 211.

    . At least one member of the SSEB seemed to believe that at least some of the perceived understaffing in FCSS's proposal would be overcome by labor-saving innovations. See AR Tab 100 (Nov. 4, 2004 email from SSEB member stating that FCSS "needs 16 people less than what [the Army] estimated base [sic] on their way of doing business [using PDAs]. In the government’s estimate of personnel we assumed ‘business as usual.’”).

    . Fort Carson Support Services attempts to distinguish Dismas on the ground that the reevaluation in that case was done pre-award. FCSS Reply at 19. The Court fails to see the relevance of this distinction. Moreover, the Army discovered that FCSS was not eligible for the initial award because of a deficiency in the FCSS proposal, see AR Tab 95 at 2, an action that was not challenged by FCSS. Thus, the initial award does not merit any special dignity in this review.

    . Fort Carson Support Services also objects to the Army's evaluation of its WR & DD plan. A slide from the SSEB’s presentation to the SSA identified this as a weakness under the Management and Technical element, and included the words "extra cost” in the description. AR Tab 211 at 63. During the post-award debriefing, FCSS asked the Contracting Officer whether cost considerations were included in the Management subfactor evaluation, and he explained that no dollar figure was considered, as the note reflected "just somebody’s comment. You know, by them staffing more than what they need to, it's costing the Contract more money.” Orange (FCSS) Post-Award Tr. at 32. When asked whether this entailed an overstaffing determination, the Contracting Officer explained "that would have been reflected in the Most Probable Cost adjustment.” Id. at 33. The protester misinterprets this remark as a claim that the MPC was adjusted upwards for the extra cost of this approach, resulting in a double-counting of the costs, see FCSS Mot. at 52-53. But it is clear that the official was merely speculating that an adjustment, if any, would have been made in the MPC. Since any adjustment for overstaffing would reduce the number of FTEs added for net understaffing, see, e.g., AR Tab 208, and thus reduce the MPC cost adjustment, FCSS’s argument — based on words ripped from their context — is particularly inapt.

    . Another argument, concerning the FAR requirement that certain adverse past performance information be discussed, see 48 C.F.R. § 15.306(d)(3) (2005), is addressed separately below.

    . It appears that the color “Green” corresponds to “Low” risk and the color “Yellow” to “Moderate” risk. See AR Tab 209 at 4 (explaining that “in the original evaluation, the evaluator’s performance risk assessment for Offeror [FCSS] was Moderate”).

    . The SSEB member who individually assessed the PPE information for the initial evaluation erred in compiling the summary of PPE questionnaire responses for FCSS, identifying ten “Satisfactory” evaluations for Plum Island as “Marginal.” Compare AR Tab 60 (Plum Island questionnaire nos. 7-11, 13, 14, 26-28) with AR Tab 62. The evaluator classified two blended “Satisfactory” and “Marginal” ratings as “Marginal,” compare AR Tab 60 (Plum Island questionnaire nos. 6, 12) with AR Tab 62, but mistakenly described one of these as a score of “marg[inal]-unsat[isfactory].” AR Tab 62 (Section III — Past Performance Evaluation at 4). These errors by one individual do not appear to have influenced the SSEB as a whole, as its initial consensus decision was that FCSS merited a “Low” performance risk rating for the PPE and its elements, see AR Tab 69 at 34, and FCSS itself does not challenge this aspect of the evaluation — instead taking the position that “the record does not identify any alleged errors in the SSEB’s initial evaluation of FCSS’s PPE.” FCSS Reply at 25.

    . The Court rejects the argument of FCSS that the use of a “Low/Moderate” risk rating for overall PPE violated the stated evaluation criteria, see FCSS Mot. at 53; FCSS Reply at 21, for the same reasons this argument was rejected in the context of FCSS's claim that it received the highest Quality factor rating. These labels were not the actual assessment, but tools used in the assessment, and the SSA clearly based his decision on the underlying PPE information and analysis, and not the subfactor label. See AR Tab 214 at 4.

    . The understaffing adjustment actually exceeds the variance. In the first option year, this adjustment totaled $3,043,187, while the variance was just $3,002,833, due to the net effect of the other adjustments. See AR Tab 208 at 4.

    . Fort Carson Support Services also argues that its Cost Variance rating of “marginally satisfactory” is inconsistent with its adjectival ratings of “Good” in the Management and Technical element and in the overall Management subfactor. FCSS Mot. at 64. But FCSS received the lower adjectival rating of “Satisfactory” for the element, not “Good.” See AR Tab 211 at 62. Documents in the record demonstrate that the SSEB downgraded this adjectival rating, and the proposal risk rating of both the element and the overall subfactor, due to the understaffing concerns that caused FCSS’s cost variance. See AR Tabs 189, 196.

    . The Court does not find that it was reasonable for FCSS to believe that the initial award to it communicated the lack of serious staffing problems. The Army subsequently stayed this award under a corrective action, see AR Tab 97, because of undisclosed “questions regarding the evaluation of proposals and the subsequent award.” Id. If FCSS believed that its staffing level was not problematic, based on its own speculation concerning the reasons behind the corrective action, it is itself responsible for this belief. Such an assumption was a serious mistake by FCSS, for the record amply demonstrates that one of the “questions” that prompted the corrective action related to the SSA having made his tradeoff decision based on erroneous calculations of the FCSS variance — which understated the impact of the understaffing by approximately sixty percent. See AR Tab 95 at 2 (evaluation was based on variances of $10.2 million and six percent, as opposed to $26.3 million and fifteen percent); AR Tab 76 at 35.

    . And FCSS admits that the perceived strengths of its initial proposal were not revealed to it by the Army during the procurement process, see FCSS Mot. at 75 n. 69, so it cannot argue that the Army somehow induced it not to consider improving this aspect of its proposal.

    . The SSEB evaluator appears to have counted both of these as “Marginal.” See AR Tab 62 at 2-3 (questions six and twelve).

    . A copy of the letter and attachment appears to have been sent to the Army via facsimile machine on January 27, 2005. See AR Tab 179.

    . The audit report is included, with the KIRA letter, in the record at Tab 179. KIRA's counsel misrepresented this fact to the Court, and asserted that "[t]he audit report in question is not part of the Record.” KIRA Br. at 34.

    . KIRA asserts, without citation, that the “report, however, shows that the subcontractor had an opportunity to respond.” KIRA Br. at 34. This assertion appears to be false. See AR Tab 179 (Audit Report).

    . This conclusion is reinforced by other documentation in the record. On November 19, 2004, after making an inquiry to another Army official, the Contracting Attorney determined that information relating to KBR’s Kuwait and Iraq operations would not be considered in the procurement process, as there was “no basis for downgrading KBR’s PPE rating based on the current investigations.” AR Tab 114A; see also AR Tab 205 at 2.

    . What TGG calls “other, more minor, examples of arbitrary and capricious behavior by the Army,” TGG Mot. at 12 n. 2, contained in its complaint, were not pressed in the motion. See TGG Compl. ¶¶ 25-28 (lack of discussions concerning key personnel), 30-38 (Management and Technical element evaluation), 80-84 (cost realism).

    . An AQL, or “acceptable quality level,” is “the highest number of defects per 100, highest percentage of defects, or highest number of defects allowed for any service performance indicator.” AR Vol. XIV, Tab 10 at 8-4.

    . Due to technical problems, the first portion of the discussion session with TGG, when this part of the statement was read, was not recorded. See AR Tab 132 at 5. The minutes from the discussions state that this written statement was read to TGG. See id. at 1. Since the same statement was read during the KIRA discussions, see Brown (KIRA) Disc. Tr. (Dec. 16, 2004) at 5, and the FCSS discussions, see Orange (FCSS) Disc. Tr. (Dec. 15, 2004) at 5, the Court has no reason to doubt that it was read at the beginning of the TGG discussions. See also Blue (TGG) Disc. Tr. (Dec. 15, 2004) at 18 (Contracting Officer tells TGG it “should not construe anything in our earlier *612 communications with you or in this session today, as constraining your latitude to revise your proposal subject to the rules stated in the Solicitation” and that the extent of changes made “is a matter of your corporate discretion”).
