Performance Reference Model

Introduction

Definition/Description (What) – “links agency strategy, internal business components, and investments, providing a means to measure the impact of those investments on strategic outcomes.”[1]

Purpose/Function (Why) – to provide documentable value to all stakeholders by setting manageable and measurable metrics to achieve the geospatial system investment goals as defined by the stakeholders. This section provides:

  • References to performance guidance and implementation approaches.
  • Examples of performance indices.

Stakeholder Performance Guide (Who & How) – driven by mission/business requirements and the associated functional capabilities identified in the Operational Requirements Document, performance is a shared responsibility that provides Executive Leadership with metrics to monitor program progress, take corrective action, and demonstrate benefit to the stakeholders. Often administered by the Program Manager and documented by the Solution Architects, performance is a measure of value.

Value is defined by the user … but quantified by the usage.

 

[1] Office of Management and Budget, Federal Enterprise Architecture Framework, Version 2, January 29, 2013, available at http://69.89.31.228/~mkerncom/wp-content/uploads/2013/02/Federal-Enterprise-Architecture-Framework-v2-as-of-Jan-29-2013.pdf


Applying The Performance Reference Model: Approaches

Performance spans all of the Reference Models of the FEA guidance. The GIRA sections include a Performance Guide Table for each of the three (3) stakeholders (i.e., Executive Leadership, Program Managers, and Solution Architects). These tables are consolidated here as geospatial investment performance indicators, but serve only as a starting point for consideration.

The Common Approach to Federal Enterprise Architecture[1] uses the performance reference model to show the linkage between internal business components and the achievement of business and customer-centric outputs and outcomes. Performance measures help support planning and decision-making based upon comparative determinations of which programs and services are more efficient and effective. The Performance Reference Model focuses on three main objectives:

  • Produce enhanced performance information to improve strategic and daily decision-making.
  • Improve the alignment and better articulate the contribution of inputs to outputs, thereby creating a clear “line of sight” to desired results.
  • Identify performance improvement opportunities that span traditional organizational structures and boundaries.

One of the most visible uses of the Performance Reference Model is for OMB reporting as part of Exhibit 300 investment proposals. Federal agencies are required to:[2]

  • Describe the relationship between the investment and agency strategic goals. A narrative explanation of the investment’s specific contribution to mission delivery and management support functions is required in Section B of the Exhibit. Investment owners must identify how the investment contributes to the agency target architecture and links to performance objectives in the published agency strategic plan.
  • Provide investment-specific performance measures that quantify the intended performance. Each measure must be categorized using a FEA Performance Measurement Category, and investment owners must ensure that the measures are balanced and drawn from multiple measurement categories (see the illustrative check following this list). Performance metrics will be reported on the IT Dashboard.
  • Report on investment results using these measures monthly, quarterly, semi-annually, and annually.
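
The balance requirement above can be checked mechanically. The following is a minimal, hypothetical sketch (not an OMB tool or schema); the measure names and categories are placeholders used only to illustrate drawing measures from more than one measurement category.

```python
# Illustrative sketch only (not OMB's actual validation logic): verify that an
# investment's performance measures span more than one measurement category.
# The measure names and category labels below are hypothetical placeholders.

measures = [
    {"name": "Map service availability", "category": "Customer Results"},
    {"name": "Cost per geocoded address", "category": "Mission and Business Results"},
    {"name": "Mean time to publish new imagery", "category": "Processes and Activities"},
]

categories = {m["category"] for m in measures}
if len(categories) < 2:
    print("Warning: measures are not balanced across measurement categories.")
else:
    print(f"Measures span {len(categories)} categories: {sorted(categories)}")
```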

Reporting on investment results, using the governance processes established through the Executive Steering Committee, provides an opportunity to validate success and make course corrections to meet changing stakeholder requirements (see Operational Requirements Documentation). The Federal Shared Services Implementation Guide[3] recommends that, when developing their organization’s strategic plans and performance goals, Executive Leadership and Program Managers evaluate the prior performance of their investments. This presents an opportunity to question and assess the following:

  • What is the performance of existing processes and services?
  • What existing capabilities can be improved?
  • What is the cost structure of current capabilities?
  • How efficient is service delivery?
  • What new capabilities are needed and funded by the organization?

The Shared Services Implementation Guide also recommends that agencies objectively and continuously assess their IT investment portfolios throughout the investment lifecycle as part of Capital Planning and Portfolio Management oversight. Each checkpoint should be considered an opportunity to re-evaluate whether an investment is still performing as desired and continues to deliver the level of business value and capabilities required by end users and key stakeholders. For this reason, capital planning, business, and IT Program Managers should discuss whether there is an opportunity to leverage an existing shared service before embarking on development of a new initiative that will incur significant costs as well as risks.

The Segment Architecture Analysis of the Geospatial Platform, Version 1.0[4] describes performance architecture as a means to align strategic goals and objectives with specific metrics that can be applied to processes, systems, and technology in order to evaluate success against those goals. The performance metrics create a reporting framework to measure geospatial activities and investments across the enterprise. Improved performance is realized through greater focus on mission, agreement on goals and objectives, and timely reporting of results. The Segment Architecture Analysis also outlines the ways in which these performance metrics should evolve in order to align geospatial initiatives across an enterprise’s stovepipes and incorporate additional considerations critical to geospatial functionality. The document concludes by providing high-level recommendations for the development of a “Geospatial Transition Roadmap and Milestones” for the federal geospatial community to consider, including nine (9) government-wide governance initiatives.

The Geospatial Profile of the Federal Enterprise Architecture, Version 2.0[5] notes that “the PRM is of particular use to the development of fledgling geospatial programs across government because it provides a structure for analyzing both means and ends. Using performance measures allows agencies to define how much more effective their business processes are by incorporating geospatial resources, approaches, or methods.

All activities of an agency’s geospatial program—developing policies and using standards, implementing geospatial services and geo-enabling functions within the organization, and implementing and providing geospatial data services both inside and outside the agency—can benefit by evaluating performance. There are two primary measures for evaluating performance:

  • Measures of the performance of business processes incorporating geospatial resources and investments (how much does the business process save by using geospatial technology and data, how many users does it support).
  • Measures of the maturity of a geospatial program responsible for developing an agency’s geospatial architecture (is the program progressing towards offering better services to more customers and does its geospatial data meet quality standards).”

 

[1] Office of Management and Budget, The Common Approach to Federal Enterprise Architecture, May 12, 2012, available at http://www.whitehouse.gov/sites/default/files/omb/assets/egov_docs/common_approach_to_federal_ea.pdf

[2] Office of Management and Budget, Federal Enterprise Architecture Framework, Version 2, January 29, 2013.

[3] https://cio.gov/wp-content/uploads/downloads/2013/04/CIOC-Federal-Shared-Services-Implementation-Guide.pdf

[4] https://www.fgdc.gov/initiatives/geospatial-lob/index_html

[5] http://www.fgdc.gov/initiatives/resources/geospatial-profile-of-the-FEA-v2-march-2009.pdf.


Performance Indices

Performance measures are often seen as an administrative burden and an additional cost to the system investment. However, performance metrics, if reflective of stakeholder requirements, provide awareness of the value of the investment and enable effective management of operations and maintenance.

“You cannot manage what you cannot measure.” – Anonymous

The Enterprise Architecture and Geospatial communities of practice have several performance indices that may be used in part or in whole to help design and develop meaningful measures for investments. Performance indices (e.g., Maturity Models) may provide a normalizing or level-setting function for an organization to better understand the range of capabilities and investments, and may also contribute to the baseline assessment activities (see Geospatial Baseline Assessment Matrix) when an organization performs its Operational Requirements Documentation.

One of the challenges of any maturity model is the general lack of a Return on Investment (ROI) indicator for moving from one level to the next in the maturity progression. A maturity or capability model may have approximately five levels of maturity, and while an organization may assess its maturity at a “3,” there is generally no explicit cost/benefit analysis to determine the value proposition for moving to the next level. In fact, there may be diminishing returns, and the stakeholders will need to determine the optimal level of geospatial proficiency that meets the needs of the entire investment. However, a performance management framework for geospatial capabilities that are embedded within a larger system environment may need to tie that value to the overall or “parent” enterprise architecture investment.
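
To make the value proposition of a maturity step more concrete, the following notional sketch compares the estimated cost and benefit of moving between levels; the class, the levels, and the figures are hypothetical and simply illustrate how diminishing returns at higher levels might be surfaced for stakeholders.

```python
# Notional sketch (not from the GIRA): comparing the value proposition of moving
# a geospatial capability from one maturity level to the next. All levels,
# costs, and benefit estimates below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class MaturityStep:
    from_level: int
    to_level: int
    estimated_cost: float      # estimated cost to reach the next level
    estimated_benefit: float   # estimated benefit realized at the next level

    @property
    def roi(self) -> float:
        """Simple return-on-investment ratio for this step."""
        return (self.estimated_benefit - self.estimated_cost) / self.estimated_cost

# Hypothetical figures illustrating diminishing returns at higher levels.
steps = [
    MaturityStep(2, 3, estimated_cost=100_000, estimated_benefit=250_000),
    MaturityStep(3, 4, estimated_cost=200_000, estimated_benefit=260_000),
    MaturityStep(4, 5, estimated_cost=300_000, estimated_benefit=280_000),
]

for step in steps:
    print(f"Level {step.from_level} -> {step.to_level}: ROI = {step.roi:.2f}")
```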

OFFICE OF MANAGEMENT AND BUDGET: ENTERPRISE ROADMAP

OMB’s memorandum Increasing Shared Approaches to Information Technology Services[1] “provides Federal Agencies with policy guidance and management tools to use in increasing shared approaches to information technology (IT) service delivery across mission, support, and commodity areas.” The policy memo directs Federal Agency Chief Information Officers to submit an “Enterprise Roadmap” each year that documents the agency’s current and future views of its business and technology environment from an architecture perspective. The 2013 submission of the Enterprise Roadmap includes:[2]

  1. Business and Technology Architecture (Main Body): a high-level, integrated description of the agency’s IT-related strategic goals, business objectives, and enabling IT capabilities across all operating units and program areas.
  2. Enterprise Architecture (EA) Maturity Measurement Template (Appendix 1): a self-evaluation of the maturity of the Agency’s EA Program.
  3. EA Outcomes and Measurements Template (Appendix 2): a self-evaluation of the effectiveness of the agency’s enterprise architecture program, examples of contributions to beneficial outcomes, areas for improvement, and measurement of value using the attached template.
  4. IT Asset Inventory (Appendix 3) (Optional): a list of IT systems and applications that support mission, administrative, and commodity IT services, using the attached template and the Federal Enterprise Architecture Reference Models that are provided in the Common Approach. This Appendix will be considered “For Official Use Only.”

 

 

The EA Maturity Measurement Template (see Appendix G.1) provides a matrix that includes the primary evaluation categories (e.g., Spending, Systems, Services, Security) and requires the inclusion of IT investment Inventories and Outcomes with descriptions for: Area of Measurement, Specific Measurement Indicators, Measurement Method and Targets (Timeline), and Comments/Artifacts. Depending upon the category of Inventory or Outcome, the Areas of Measurement may include the following (Table 1); a notional sketch of a single template entry follows the table:

Table 1. EA Maturity Measurement Template: Areas of Measurement

 

INVENTORY & OUTCOME: Inventories

AREA OF MEASUREMENT:
  • Completeness
  • Accuracy
  • Ratio

INVENTORY & OUTCOME: Outcomes

AREA OF MEASUREMENT:
  • Cost Savings/Avoidance
  • Reduction of Duplication
  • Efficiency
  • IT Enablement
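
To illustrate the shape of the template described above, the following is a minimal sketch of a single template entry; the field names follow the text, while the record class and example values are hypothetical.

```python
# Minimal sketch (not an official OMB schema): one row of the EA Maturity
# Measurement Template as described above. Field names follow the text;
# the class and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class MeasurementEntry:
    category: str                 # e.g., Inventories or Outcomes
    area_of_measurement: str      # e.g., Completeness, Cost Savings/Avoidance
    specific_indicators: str      # what is actually counted or observed
    method_and_targets: str       # measurement method and target timeline
    comments_artifacts: str       # supporting comments or artifacts

example = MeasurementEntry(
    category="Inventories",
    area_of_measurement="Completeness",
    specific_indicators="Percentage of IT systems recorded in the asset inventory",
    method_and_targets="Quarterly reconciliation; 95% by fiscal year end (hypothetical target)",
    comments_artifacts="IT asset inventory export (Appendix 3)",
)
print(example)
```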

 

ISE INFORMATION INTEROPERABILITY FRAMEWORK: INTEROPERABILITY MATURITY MODEL

The Information Sharing Environment (ISE) Interoperability Framework (I2F) is used to guide the implementation of ISE information sharing capabilities.[3] The ISE I2F leverages existing systems architecture guidance, suggested standards, tools, and methodologies to foster the linkage of systems, as well as specifying the development of common artifacts that are intended to enable disparate architectures to improve information sharing.

The Interoperability Maturity Model of the ISE I2F is aligned with the OMB guidance Federal Enterprise Architecture Framework and The Common Approach to Federal Enterprise Architecture and is broken down by domains (e.g., Business, Data, Applications, Infrastructure, Security, and Performance). The model establishes characteristics for each level of interoperability (e.g., ad hoc, repeatable, enhanced, managed, and optimized) for each interoperability requirement. Each row in the maturity model represents a functional area within the domain, and each column represents a different stage of maturity. Interdependencies between functional areas exist, but the goal is to assess a system independently for each functional area.

The I2F Performance Domain maturity model is divided into functions or process groups (rows) and maturity levels (columns). The maturity model is then followed by several supporting questions (Table 2). A notional scoring sketch, assessing each functional area independently against the maturity scale, follows the table.

Table 2. I2F Performance Domain Maturity Model Metrics

 

 

Maturity levels: ⓪ Absent, ① Ad Hoc, ② Repeatable, ③ Enhanced, ④ Managed, ⑤ Optimized

Metrics (in order of increasing maturity):

  • Formalized performance metrics that provide direct line of sight between strategic planning and the investment review process do not exist.
  • Formalized performance metrics exist and align with the strategic goals of the organization as well as with applicable policy, guidance, and laws.
  • Formalized performance metrics that identify common performance elements across investments or activities exist.
  • Formalized performance metrics are used to inform gap analysis of interoperability requirements and adhere to relevant performance goals.
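
As a notional illustration of the assessment approach described above (each functional area scored independently against the 0–5 scale), the following sketch is hypothetical; the level names come from the I2F text, while the additional functional areas and scores are placeholders.

```python
# Illustrative sketch only: scoring functional areas against the 0-5
# interoperability maturity scale described above. The level labels follow the
# I2F text; the functional areas and scores below are hypothetical.

from enum import IntEnum

class Maturity(IntEnum):
    ABSENT = 0
    AD_HOC = 1
    REPEATABLE = 2
    ENHANCED = 3
    MANAGED = 4
    OPTIMIZED = 5

# Each functional area (row) is assessed independently of the others.
assessment = {
    "Metrics": Maturity.REPEATABLE,
    "Performance reporting": Maturity.AD_HOC,   # hypothetical functional area
    "Gap analysis": Maturity.ENHANCED,          # hypothetical functional area
}

for area, level in assessment.items():
    print(f"{area}: level {int(level)} ({level.name.replace('_', ' ').title()})")
```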

 

COMMON OPERATING PICTURE: KEY PERFORMANCE INDICATORS AND MEASURES

The Department of Homeland Security has chartered an Executive Steering Committee (ESC) for its Common Operating Picture (COP)/User Defined Operating Picture (UDOP) Domain. The Components of DHS have invested in multiple COP/UDOP capabilities to support situational awareness for law enforcement, emergency management, intelligence, and homeland security/defense mission activities. The goal of the COP ESC has been to increase COP interoperability, effectiveness, and shared capabilities while reducing the Department’s collective operational costs by managing COP systems as enterprise mission service investments. It is expected to promote and guide the development and operation of, and investment in, the DHS Common Operating Picture domain. The COP ESC will provide analytical support, recommendations, guidance, and procedures for improving the sharing of data, information, infrastructure, tools, and services across DHS COP investments.

In support of the COP ESC, the DHS Geospatial Management Office prepared the DHS Sensitive But Unclassified COP/UDOP Segment Architecture[4] document to provide a holistic and conceptual view of the future consolidated or interoperable state of the COP domain for Homeland Security. This initial version focused upon the target technical architecture areas: business, data, services, technology, security, and performance. It presents a Target Architecture based on a common services framework that relies on shared services and enterprise delivery of core data, software, and infrastructure using approved standards. This shared services approach ensures data and system interoperability and the reliable exchange of information in a usable geospatial format.

The target performance architecture for the DHS COP segment includes a performance management scorecard (Appendix G.2) to track progress and effectiveness toward achieving the strategic goals and objectives for the COP domain. The scorecard is based on key performance indicators (KPIs) that include Governance, Information Sharing, Mission Enablement, and Technology Management. These scorecard metrics were established from the COP Domain priorities for interoperability, effectiveness, authoritative/trusted information in a geospatial format, standards-based information exchanges, reliability, and shared capability.
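
As a notional illustration, the following sketch shows how such a scorecard might be represented; the four KPI names come from the text, while the metrics, targets, and statuses are hypothetical placeholders.

```python
# Notional sketch of a COP performance management scorecard keyed by the four
# KPIs named above. The KPI names follow the text; the metrics, targets, and
# statuses below are hypothetical placeholders.

scorecard = {
    "Governance": [
        {"metric": "ESC charter reviews completed", "target": 4, "actual": 3, "status": "yellow"},
    ],
    "Information Sharing": [
        {"metric": "Standards-based data exchanges in production", "target": 10, "actual": 10, "status": "green"},
    ],
    "Mission Enablement": [
        {"metric": "Components using shared COP services", "target": 8, "actual": 5, "status": "yellow"},
    ],
    "Technology Management": [
        {"metric": "Duplicate COP capabilities retired", "target": 3, "actual": 1, "status": "red"},
    ],
}

for kpi, metrics in scorecard.items():
    for m in metrics:
        print(f"{kpi}: {m['metric']} - {m['actual']}/{m['target']} ({m['status']})")
```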

NATIONAL STATES GEOGRAPHIC INFORMATION COUNCIL: GEOSPATIAL MATURITY ASSESSMENT

The National States Geographic Information Council’s (NSGIC) mission is to promote statewide geospatial coordination activities in all states and to be an effective advocate for states in national geospatial policy and initiatives, thereby enabling the National Spatial Data Infrastructure (NSDI).[5] NSGIC maintains a Geospatial Maturity Assessment (GMA) that is a “baseline assessment methodology to routinely and continuously monitor and validate statewide geospatial capabilities.”[6] The GMA includes eighty-three (83) questions that characterize states’ geospatial programs. More than half of the assessment is data focused, but it also includes questions on staffing and budget, strategic and business planning, and interagency coordination and data sharing.

URBAN AND REGIONAL INFORMATION SYSTEMS ASSOCIATION: GIS CAPABILITY MATURITY MODEL

The Urban and Regional Information Systems Association (URISA)[7] promotes the effective and ethical use of spatial information and information technologies for the understanding and management of urban and regional systems. URISA members and participants typically use geospatial and other information technologies to solve challenges in government agencies. URISA provides educational programs, offers volunteer GIS expertise through its GISCorps program, and assists government agencies with benchmarking GIS maturity through its GIS Management Institute®.

“The URISA GIS Capability Maturity Model[8] is a key component of the GIS Management Institute. Its primary purpose is to provide a theoretical model of a capable and mature enterprise GIS operation within a designated organization. The URISA Model is intended to serve the GIS community as a stand-alone document to define the framework for an effective enterprise GIS. The Model was developed initially with a focus on local government agencies (e.g., cities, counties, regional agencies, and other similar entities) but it is intended for future use by any enterprise GIS. As a stand-alone document, the Model is intended to facilitate discussion amongst GIS managers and the decision makers who deploy and fund GIS to maximize effectiveness and return on investment from a given level of investment.”

The Capability Maturity Model assumes two (2) broad areas of GIS operational development:

  • Enabling Capability – the technology, data, resources, and related infrastructure that can be bought, developed, or otherwise acquired to support a typical enterprise GIS. Enabling capability includes GIS management and professional staff.
  • Execution Ability – the ability of staff to utilize the technology at their disposal (subject to separate assessment as part of the Model).

 

 

[1] OMB Memo: Increasing Shared Approaches to Information Technology Services, May 2, 2012, available at http://www.whitehouse.gov/sites/default/files/omb/assets/egov_docs/sharedapproachmemo_0502.pdf

[2] OMB Memorandum to Agency Chief Architects, Guidance on 2013 Federal Agency Enterprise Roadmaps, March 29, 2013.

[3] Program Manager – Information Sharing Environment (PM-ISE), Information Interoperability Framework (I2F), Version 0.5, March 2014, available at http://ise.gov/ise-information-interoperability-framework.

[4] Department of Homeland Security, COP/UDOP Sensitive But Unclassified Segment Architecture, Version 1.0 (DRAFT), April 27, 2012, prepared by the Geospatial Management Office.

[5] http://www.nsgic.org/

[6] http://www.nsgic.org/geospatial-maturity-assessment/

[7] http://www.urisa.org/main/about-us/

[8] http://www.urisa.org/clientuploads/directory/GMI/GISCMM-Final201309%28Endorsed%20for%20Publication%29.pdf



Stakeholder Performance Guidance

The Performance Guide provides several key considerations and decision points that may influence the design and development of the performance metrics necessary to assess the most effective and efficient design, development, and implementation of the geospatial system investment (Table 3). A consolidated Performance Guide for all of the reference models (e.g., Business, Data, Applications/Services, Infrastructure, Security, and Performance) can be found here.

 

Table 3. Stakeholder Performance Guide: Performance

STAKEHOLDER PERFORMANCE GUIDE

PERFORMANCE

Role: Executive Leadership

Responsibility:

• Define mission context for geospatial investments across the enterprise.

• Ensure performance metrics and indicators are included in all CPIC (OMB 300/53) geospatial investments.

Approach:

• Provide overall mission context and expected contribution of geospatial to/within programs to Program Managers, and align program success to improved performance of business functions.

• Using performance indicators for each reference model (e.g., Business, Data, Applications/Services, Infrastructure, Security, and Performance), prepare a matrix for ESC review, adoption, and monitoring.

Benefit:

• Creates quantifiable measures and expected outcomes (mission and resource impact) of a geospatial investment.

• Ensures OMB reporting compliance and senior leadership commitment to managed/measured success of the investment.

Role: Program Manager

Responsibility:

• Define measures of effectiveness and success criteria for geospatial investments under oversight.

• ESC to oversee cost, schedule, and scope of geospatial investments across the enterprise.

Approach:

• Provide clear guidance to Solution Architects for requirements and dependencies of required solutions.

• Communicate with Executive Leadership and the stakeholder community (mission holders) to foster an understanding of the value of current efforts to overall mission success.

Benefit:

• Creates clarity as to the value of programs being managed to overall mission effectiveness.

• Enables easier management through a better understanding of how measures of effectiveness translate into system requirements and benefits.

Role: Solution Architect

Responsibility:

• Derive functional and technical requirements and associated quantifiable performance success measures given the target objective.

• Oversee technical implementation and schedule, and provide status to leadership and recommended course corrections as needed.

Approach:

• Analyze program requirements and measures of effectiveness and identify solution elements that will enable the program to meet success criteria.

• Create a clear understanding of how the project scope, schedule, and budget are progressing, and provide line-of-sight with respect to the overall program and enterprise requirements.

Benefit:

• Demonstrable solution effectiveness, tied directly to executive-level interests, which enables an end-to-end picture of how delivered solutions fit into an enterprise-level mission.

• Enables clear communication with the Project Managers and Executive Leadership regarding schedule and scope of system delivery.

Updated on February 4, 2020