TL;DR
A transparent, weighted scorecard reduces bias and creates meaningful discrimination between vendors. This article presents a defensible 100-point rubric, shows the scoring math, and outlines governance steps that align with guidance from the Federal Acquisition Regulation 15.304, NIGP’s best practice on RFPs, and CIPS supplier evaluation. A downloadable template is included: CSV and JSON.
Background and context
Weighted evaluation is the norm across public and private procurement because it enables consistent, auditable tradeoffs. Public sector frameworks describe two primary paths: lowest price technically acceptable (LPTA) and best-value tradeoff. The first is pass-fail on nonprice factors with award to the lowest price, as explained in the GSA source selection guide. The second permits tradeoffs when nonprice factors are significant, provided factors “support meaningful comparison and discrimination,” per FAR 15.304. Professional bodies emphasize structured criteria and scoring discipline. The NIGP global best practice on RFPs consolidates guidance on developing criteria and running evaluations. The CIPS overview of supplier evaluation explains why multi-criteria scoring and post-award performance tracking matter.
For teams using dedicated platforms, product documentation shows how to implement scoring in tooling. Examples include RFPIO scorecards and RFP360 weighted scoring. Analyst methodologies reinforce the principle of transparent criteria and weighting; see the Forrester Wave methodology and its related RFP scorecard tool (subscription required). Defense acquisition materials such as the DAU evaluation factors guide and discussion of numeric protocols in Defense Acquisition Magazine provide additional models for weighting and combining scores, including price.
The 100-point rubric
The rubric below is designed for technology and services procurements where nonprice factors dominate but price remains material. The weights sum to 100. A downloadable version with definitions and scoring guidance is available: CSV and JSON.
- Functional requirements fit: 25
  Degree to which the proposal meets mandatory and prioritized requirements.
- Information security and privacy: 15
  Controls, certifications, data protection practices, and regulatory alignment.
- Implementation and change management: 12
  Project plan, resourcing, timeline realism, enablement, and training.
- Total cost of ownership and commercial terms: 12
  Three-year license and usage costs, services, and contractual flexibility.
- Integration and data management: 10
  APIs, connectors, data model fit, migration approach, and interoperability.
- Vendor viability and risk: 10
  Financial stability, roadmap transparency, support SLAs, and operational risk.
- Governance, compliance, and legal: 6
  Alignment to policies, auditability, accessibility, and compliance frameworks.
- Usability and accessibility: 5
  User experience, accessibility standards, and adoption levers.
- Reporting and analytics: 5
  Native reporting, exports, and KPI coverage.
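For teams that want to start from code rather than the CSV or JSON template, the weights above translate directly into a small data structure. This is a minimal sketch, not part of the downloadable template; the assertion simply guards against the weights drifting away from 100 during editing.

```python
# Rubric weights from the 100-point scorecard above.
# Lock these before the RFP is released, never after proposals are opened.
RUBRIC_WEIGHTS = {
    "Functional requirements fit": 25,
    "Information security and privacy": 15,
    "Implementation and change management": 12,
    "Total cost of ownership and commercial terms": 12,
    "Integration and data management": 10,
    "Vendor viability and risk": 10,
    "Governance, compliance, and legal": 6,
    "Usability and accessibility": 5,
    "Reporting and analytics": 5,
}

# Guard against accidental edits: the weights must always sum to 100.
assert sum(RUBRIC_WEIGHTS.values()) == 100, "Rubric weights must total 100 points"
```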
This allocation reflects the public-sector emphasis on discriminating nonprice factors and transparent scoring, per FAR 15.304, as well as good practice from NIGP and CIPS. Teams may adjust weights, but changes should be locked before proposals are opened and disclosed in the RFP to preserve fairness.
Scoring method and math
Rating scale. Use a descriptive 0-to-5 scale for each factor. Example anchors: 0 does not meet; 1 weak; 3 partially meets; 4 substantially meets; 5 fully meets with evidence. Consistent scales reduce drift across scorers, as emphasized in the DAU evaluation factors guide.
Weighted technical score. For each factor, multiply the average rating by the factor’s weight. If ratings are on a 0-to-5 scale, first convert each average to a fraction of the maximum by dividing by 5.
Technical score for vendor j: Technical_j = Σ_i (Weight_i × Rating_ij / 5)
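To make the formula concrete, here is a minimal Python sketch of the technical-score calculation. The factor names and ratings are illustrative; in practice the ratings would be the panel's averaged scores for each factor.

```python
def technical_score(weights: dict[str, float], avg_ratings: dict[str, float],
                    scale_max: float = 5.0) -> float:
    """Technical_j = sum over factors of Weight_i * (Rating_ij / scale_max)."""
    return sum(weights[factor] * avg_ratings[factor] / scale_max for factor in weights)

# Example with two factors from the rubric; a full run would pass all nine.
weights = {"Functional requirements fit": 25, "Information security and privacy": 15}
avg_ratings = {"Functional requirements fit": 4.2, "Information security and privacy": 3.5}
print(round(technical_score(weights, avg_ratings), 1))  # 25*0.84 + 15*0.70 = 31.5
```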
Price score. Use a normalized approach that awards full points to the lowest evaluated price and proportionally fewer to higher prices. This produces defensible tradeoffs in a best-value setting and mirrors numeric methods discussed in Defense Acquisition Magazine.
Price_j = PriceWeight × (LowestPrice / Price_j)
If price is embedded in the 12-point “total cost of ownership” factor above, keep the same normalization within that factor.
Total evaluated score. Total_j = Technical_j + Price_j
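The sketch below combines the price normalization and the total, assuming price is carved out as a separate 12-point weight rather than embedded in the TCO factor; the vendor names, technical scores, and prices are illustrative.

```python
def total_score(technical: float, price: float, lowest_price: float,
                price_weight: float) -> float:
    """Total_j = Technical_j + PriceWeight * (LowestPrice / Price_j)."""
    return technical + price_weight * (lowest_price / price)

# Illustrative figures: 12 of the 100 points are reserved for price, so the
# technical scores below are out of the remaining 88.
PRICE_WEIGHT = 12
bids = {
    "Vendor A": {"technical": 74.0, "price": 480_000},
    "Vendor B": {"technical": 81.0, "price": 510_000},
    "Vendor C": {"technical": 69.0, "price": 600_000},
}
lowest = min(b["price"] for b in bids.values())

for vendor, bid in bids.items():
    total = total_score(bid["technical"], bid["price"], lowest, PRICE_WEIGHT)
    print(f"{vendor}: {total:.1f}")
# Vendor A: 86.0, Vendor B: 92.3, Vendor C: 78.6 -- the lowest bid earns full price points.
```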
Tie-breakers. When totals are within a predeclared margin, apply documented tie-breakers such as superior security posture or stronger references. For public entities, ensure the approach matches the solicitation and is consistent with NIGP RFP guidance.
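One way to operationalize this is shown below; the 2-point margin and the choice of the security factor as the tie-breaker are placeholder assumptions that must match whatever the solicitation actually declares.

```python
def rank_with_tie_breaker(totals: dict[str, float],
                          security_scores: dict[str, float],
                          margin: float = 2.0) -> list[str]:
    """Rank vendors by total evaluated score, breaking near-ties on the
    predeclared secondary factor (here, the security factor score)."""
    ranked = sorted(totals, key=totals.get, reverse=True)
    top = ranked[0]
    contenders = [v for v in ranked if totals[top] - totals[v] <= margin]
    if len(contenders) > 1:
        # Within the predeclared margin, the documented tie-breaker decides order.
        contenders.sort(key=security_scores.get, reverse=True)
        ranked = contenders + [v for v in ranked if v not in contenders]
    return ranked

totals = {"Vendor A": 86.0, "Vendor B": 92.3, "Vendor C": 91.1}
security = {"Vendor A": 10.5, "Vendor B": 12.0, "Vendor C": 13.5}
print(rank_with_tie_breaker(totals, security))  # ['Vendor C', 'Vendor B', 'Vendor A']
```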
Governance and controls
Pre-publication alignment. Lock criteria, weights, rating definitions, and the price formula before releasing the RFP. This aligns with the “clear, discriminating factors” requirement in FAR 15.304.
Scorer independence and calibration. Assign scorers by domain, require independent scoring first, then hold a calibration session to reconcile outliers. Public training materials recommend structured roles and written rationales; see the DAU guide for a model.
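A lightweight way to prepare the calibration session is to flag factors where independent ratings diverge sharply. The 2-point spread threshold below is an assumption for illustration, not a figure from the cited guidance.

```python
def flag_for_calibration(scores_by_rater: dict[str, dict[str, int]],
                         max_spread: int = 2) -> list[str]:
    """Return factors where independent ratings diverge by more than max_spread,
    so the panel can discuss and document rationales before finalizing."""
    factors = next(iter(scores_by_rater.values())).keys()
    flagged = []
    for factor in factors:
        ratings = [rater_scores[factor] for rater_scores in scores_by_rater.values()]
        if max(ratings) - min(ratings) > max_spread:
            flagged.append(factor)
    return flagged

# Example: two raters disagree sharply on implementation realism.
panel = {
    "Rater 1": {"Functional requirements fit": 4, "Implementation and change management": 5},
    "Rater 2": {"Functional requirements fit": 3, "Implementation and change management": 1},
}
print(flag_for_calibration(panel))  # ['Implementation and change management']
```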
Documentation. Capture rationales and artifacts for each score to survive audits and protests. The NIGP best practice and GSA guidance stress transparency of the record.
Tooling. Where possible, calculate inside a system that supports role-based scoring and exports. Platform documentation for RFPIO and RFP360 provides workable patterns. For narrative justification, add structured comment fields so audit reviewers can follow the math and the reasoning.
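Where a dedicated platform is not available, even a simple record structure keeps the math and the reasoning together. The fields below are illustrative and are not a schema from RFPIO or RFP360.

```python
from dataclasses import dataclass

@dataclass
class ScoreRecord:
    """One scorer's rating for one factor, with the rationale an auditor needs."""
    vendor: str
    factor: str
    rater: str
    rating: int          # 0-5 on the published scale
    rationale: str       # written justification, referencing proposal sections
    evidence: str = ""   # e.g. page references, demo notes, reference-check summaries

record = ScoreRecord(
    vendor="Vendor B",
    factor="Information security and privacy",
    rater="Rater 1",
    rating=4,
    rationale="SOC 2 Type II report provided; gaps in key rotation policy",
    evidence="Proposal section 7.2; security appendix",
)
```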
Public disclosure considerations. If the procurement is subject to open records or public protest, consider whether to disclose high-level weights in the RFP while keeping detailed sub-criteria internal. The key is consistency with the solicitation and applicable rules.
Implementation checklist
- Confirm selection method and legal requirements. If the solicitation must permit tradeoffs, document why, using language consistent with FAR 15.304 or applicable policy.
- Finalize criteria and weights. Validate against NIGP RFP practice and internal risk priorities.
- Define rating scales. Adopt descriptive anchors and examples, following structure similar to the DAU guide.
- Publish the rubric summary and price formula in the RFP when appropriate. Maintain parity of information for all vendors.
- Configure the tool. Set up scorecards in platforms such as RFPIO or RFP360.
- Run the evaluation. Require independent scoring, then calibrate. Record rationales and keep the audit trail.
- Compute totals and apply tie-breakers. Use the normalization formula for price and predeclared tie rules.
- Debrief and archive. Retain artifacts, including the exported scorecard, final calculations, and selection decision.
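For the archive step, a plain CSV export of the final calculations is often enough to let a reviewer reconstruct the decision later; the values below are illustrative and would come from the scoring step in practice.

```python
import csv

# Illustrative final results; in practice these come from the computed totals.
final = [
    {"vendor": "Vendor A", "technical": 74.0, "price_score": 12.0, "total": 86.0},
    {"vendor": "Vendor B", "technical": 81.0, "price_score": 11.3, "total": 92.3},
    {"vendor": "Vendor C", "technical": 69.0, "price_score": 9.6, "total": 78.6},
]

with open("final_scorecard.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["vendor", "technical", "price_score", "total"])
    writer.writeheader()
    writer.writerows(final)
```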
Methods and limitations
This rubric is designed for enterprise software and services procurements where qualitative factors carry significant weight. It aligns with public procurement guidance and can be adapted for LPTA by converting nonprice factors to pass-fail. Some sources cited are member or subscription resources; where applicable, that status is noted. Analyst methodologies, including the Forrester Wave, are used as comparative frameworks rather than prescriptive rules. Teams should tailor the weights to risk posture, regulatory constraints, and expected contract value.
Sources
Federal Acquisition Regulation 15.304, evaluation factors
NIGP global best practice: request for proposals
Chartered Institute of Procurement & Supply, supplier evaluation
U.S. General Services Administration, source selection guide
Defense Acquisition University, evaluation factors help guide
RFP360 help center, weighted scoring
Forrester Wave methodology and RFP scorecard tool (subscription required)

