Thriving in the Digital Age: Why the 5Ps Are Essential for Digital Excellence

The federal landscape is undergoing a significant shift toward digital excellence. From streamlining citizen services to enhancing agency operations, digital initiatives are a necessity for government organizations to keep pace. However, navigating digital excellence, and by association digital transformation, can be complex. While many leaders focus on the technology itself, neglecting the human element and crucial operational changes can lead to project failure. The ripple effect of that failure is felt by internal and external users and stakeholders: the mission suffers, time is lost, and taxpayer funding is wasted. A successful digital transformation requires a holistic approach that considers five key pillars: People, Policy, Process, Partners, and Platforms, which I call the 5Ps.

The 5Ps: Building a Strong Foundation for Digital Transformation
Each P represents a crucial element that, when integrated effectively, fosters successful digital transformation. Conversely, neglecting any one of these Ps can cause friction and resistance, leading to a project that is incongruous, low-yielding, shortsighted, or even disintegrates altogether.

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.” – George Westerman | Principal Research Scientist, MIT Sloan Initiative on Digital Economy
Let’s delve deeper into each P to understand their significance and the potential pitfalls of overlooking them:

1. People: The Human Center of Transformation

At the core of any digital transformation initiative are the People. This encompasses leaders, employees, citizens, and other stakeholders. Their buy-in, skills, and capabilities are paramount. Effective communication, training, and change management are crucial to ensure everyone understands the digital project’s goals and how their roles will evolve.

What happens when you neglect People?


  • Resistance to change from employees who fear job displacement or whose skills are not aligned with the new technologies.
  • Lack of user adoption for new applications or systems.
  • Talent gaps due to a lack of upskilling or reskilling initiatives.
2. Policy: Setting the Guideposts

Policy provides the guardrails that govern how digital transformation is implemented. This includes directives, guidance, and procedures that address data privacy, security, and IT infrastructure management. Federal agencies must adhere to specific legislative and compliance requirements, and policies must be developed to ensure digital initiatives align with these requirements.

What happens when you neglect Policy?


  • Non-compliance with federal regulations leading to project delays or shutdowns.
  • Unclear decision-making processes that hinder progress.
  • Security breaches and data leaks due to inadequate protocols.
3. Process: Optimizing Workflows

Process refers to the business workflows that will be transformed through digital initiatives. Federal agencies often have legacy systems and paper-based processes that can be inefficient. In some cases, they digitized a paper process, and now there’s an opportunity to improve the digital process through automation. Digital transformation is an opportunity to streamline these processes and leverage technology to enhance efficiency and accuracy.

What happens when you neglect Process?


  • Inefficient workflows that continue to burden employees and hinder productivity.
  • Incompatibility between new technologies and existing processes, creating bottlenecks.
  • Failure to realize the full potential benefits of digital transformation.
4. Partners: Collaboration is Key

Partners play a critical role in digital transformation. This includes internal and external partners, such as academia, industry experts, and technology vendors. Collaboration with these partners can provide access to specialized skills, knowledge, and innovative solutions, and it ensures their needs and requirements are incorporated into the digital transformation.

Federal CIO Clare Martorana emphasized, “The success of digital transformation efforts will depend heavily on how well agencies collaborate across functions and with external partners.”

What happens when you neglect Partners?


  • Re-inventing the wheel by attempting to develop solutions in-house that already exist.
  • Lack of access to cutting-edge technologies and expertise.
  • Siloed efforts that hinder information sharing and collaboration.
5. Platforms: The Technological Foundation

Platforms encompass an organization’s existing technologies, software, applications, and infrastructure that will be leveraged or replaced during digital transformation. A thorough assessment of current IT infrastructure is essential to determine compatibility with new technologies and to identify any upgrades needed.

What happens when you neglect Platforms?


  • Investing in new technologies that are incompatible with existing infrastructure, leading to integration challenges.
  • Security vulnerabilities due to outdated or unsupported technologies.
  • An inability to scale digital initiatives to meet future demands.

“…we will be able to target the right investments to support digital delivery, consolidate and retire legacy websites and systems, work with our private sector partners to implement leading technology solutions, maximize the impact of taxpayer dollars, and deliver a government that is secure by design and works for everyone.” – Federal Chief Information Officer Clare Martorana

By focusing on the 5Ps—People, Policy, Process, Partners, and Platforms—federal leaders and buyers can establish a strong foundation for successful digital transformation and achieve digital excellence. A well-coordinated approach considering these interconnected elements will help ensure projects are implemented effectively, deliver meaningful benefits, and position agencies to thrive in the digital age.



Introducing Highlight’s New Bytes & Insights

Highlight is thrilled to announce the launch of our all-new quarterly tech brief through our Technology and Innovation office, Bytes & Insights – Decoding the Quarter. This insightful report recaps the hottest topics, thought leadership pieces, and industry developments from the past quarter. Our inaugural edition is here, packed with valuable info you won’t want to miss. Stay ahead of the curve and stay informed on what matters most!

Historic Overhaul of Uniform Grants Guidance Announced for Federal Financial Assistance

Federal Agencies Must Apply Revisions by October 1

The White House has issued a groundbreaking nine-page memorandum (April 4, 2024) that sets forth new rules for the administration of Federal financial assistance. The initiative, “Reducing Burden in the Administration of Federal Financial Assistance,” includes significant updates and revisions in Title 2 of the Code of Federal Regulations (CFR), otherwise known as Uniform Grants Guidance.

The guidelines impact the $1.2 trillion in funding provided by the federal government for thousands of programs that receive grants and other forms of financial assistance. The changes are meant to reduce complexity, administrative burden, and ambiguity. The new guidance also means new work for agencies to apply revisions by October 1, 2024.

In its April 4, 2024, announcement, the Office of Management and Budget (OMB) called the guidance “the most substantial revision to the Uniform Grants Guidance since it went into effect ten years ago,” and noted that changes were based on input from federal, state, and local governments, tribal organizations, nonprofits, universities, and companies.

OMB Deputy Director for Management Jason Miller, in an OMB briefing, said the new guidance will simplify grant announcements with plain language and, as a result, will strengthen accountability and compliance, streamline implementation, and broaden the pool of potential recipients. Miller called the revision “a new era in the management of federal funds.”

The final version of the Uniform Guidance will be posted in the Federal Register; OMB has released a pre-publication version. Federal agencies must submit plans for implementing the revisions by May 15, 2024, including plans for simplifying Notices of Funding Opportunities (NOFOs). The guidance requires agencies to redesign notices to improve accessibility, readability, and clarity, reduce paperwork burden, and compose NOFOs in plain language, with particular attention to reaching underserved communities.

Other key federal directives and goals from the memorandum include:

  • Comprehensive Revision of Title 2: A key component of the memorandum is the extensive revision of Title 2 of the Code of Federal Regulations (CFR), which governs the administrative requirements, cost principles, and audit requirements for Federal awards. These changes, effective for all federal awards issued on or after October 1, 2024, aim to enhance the stewardship of federal funds, promote equitable access, reduce administrative burdens, and ensure effective oversight. Federal agencies are tasked with swiftly and consistently implementing these revisions to maximize their benefits.
  • Post-Award Accountability and Transparency Enhancements: The memorandum addresses the need for maintaining accurate federal financial assistance award and sub-award data, establishing standardized core data elements, and implementing post-award administration efficiencies. These measures aim to reduce burden while enhancing accountability, transparency, and program outcomes.
  • Consultation with the Grants Quality Service Management Office (QSMO): Federal agencies are reminded to consult with the lead Grants QSMO when updating their grants and cooperative agreements management systems. This ensures alignment with best practices and the provision of high-quality service offerings.

Roadwork Ahead for Federal Agencies

While the goal is to reduce burden, the new guidance requires significant changes in how federal agencies manage and administer grants and cooperative agreements.

  • Systems and processes must be upgraded, which could be time-consuming. Staff will need training on new requirements and processes.
  • Enhancing post-award accountability and transparency means managing a vast amount of data accurately.
  • Agencies must ensure that they can collect, manage, and report subaward data effectively.

With the guidance’s emphasis on improving access and equity for Tribal Nations and underserved communities, agencies will need to engage and communicate with these communities more effectively. This could involve outreach, consultations, and the development of specialized application processes. Agencies must navigate the complexities of these engagements sensitively and effectively, which may be challenging without prior experience or established relationships.

Key Actions Leaders Should Consider Today

  1. Build a Plan – Draft a strategic implementation plan
  2. Identify Resources – Ensure sufficient resources are allocated
  3. Develop an Agile Culture – Lead the way by committing to continuous improvement and adaptation

Given the requirement to consult with the Grants QSMO when updating grants and cooperative agreement management systems, agencies must align those systems with QSMO guidelines and best practices. This coordination could be challenging for agencies with unique or specialized grant management needs.

Addressing these challenges will require strategic planning, allocation of sufficient resources, and a commitment to continuous improvement and adaptation. As federal agencies navigate these changes, they must develop new strategies, tools, and partnerships to successfully implement the guidance and achieve its intended outcomes.

Barry Lawrence is a Senior Communication Program Manager for Highlight. The opinions expressed in this blog are his own and reflect a commitment to compliance and fostering a more accessible digital world for all Americans.

Unlock Digital Agility: A Guide to a Flexible, Plug-&-Play Tech Strategy

The relentless evolution of technology demands organizations be ready for constant change. Case in point, over the past 18 months, we have all seen ChatGPT and other Large Language Models (LLMs) disrupt how we do a lot of things. To take advantage of this disruption and the next disruption, organizations must cultivate agility and flexibility within their infrastructure. A Plug-and-Play tech strategy is your key to unlocking that adaptability.

Future-Proofing with Plug-and-Play
Rather than investing heavily in on-premise IT assets, the Plug-and-Play approach emphasizes cloud-based subscriptions and easily interchangeable technology components. This “built to change” mindset replaces the outdated “built to last” philosophy.

Building Blocks, Not Fortresses: Why Plug-and-Play Wins
Imagine building your tech infrastructure like a LEGO set, not a brick-and-mortar building. With a Plug-and-Play approach, individual components (data visualization tools, CRMs, etc.) are designed for easy integration. This lets you swap them out or add new ones as your needs evolve, fostering agility and adaptability within your organization.

Where to Begin?
A successful Plug-and-Play transformation goes beyond the technology itself. To ensure lasting success, consider these key areas:

  • The 5Ps: People, policy, process, partners, and platforms must be carefully assessed and aligned with your transformation goals. Early attention to these areas ensures smooth adoption and minimizes long-term disruption.
  • Follow a Four-Step Digital Transformation Process: A structured process that takes you from transformation to a sustained digital continuum:
    1. Discovery and Alignment: Define your vision, assess the current state (“as-is”), and articulate your desired future state (“to-be”). Look for alignment between system solutions and organizational objectives.
    2. Transformation Assessment: Analyze the gaps between “as-is” and “to-be”, developing your strategy, requirements, and infrastructure plan. Mitigate risks early.
    3. Agile Development and Release: Execute your plan using agile methodologies, with a strong focus on quality assurance and testing. Continuously evaluate and incorporate new tools or capabilities as needed.
    4. Sustained Digital Continuum: Shift focus to ongoing operations and maintenance, providing support, updates, and ensuring ongoing adaptability.

Key Considerations:

  • Risk Mitigation: Address potential risks like privacy concerns or integration issues early in the process.
  • Stakeholder Involvement: Engage stakeholders throughout the journey for better buy-in and adoption.
  • Metrics: Define clear metrics to track your progress and measure the impact of your transformation.

Real-World Implementation
Start with less critical back-office processes to build confidence before tackling core operations. Here’s how modern tech empowers your strategy:

  • SaaS (Software as a Service): Streamlines deployment and updates.
  • APIs: Facilitate communication between different software systems.
  • Microservices: Help create modular, easily swappable components.
  • AI and ML-Powered Tools: Enable faster decision-making and automation.
  • LLMs (Large Language Models): Facilitate natural language interaction, content generation, and knowledge extraction.
  • RAG (Retrieval-Augmented Generation): Provides LLMs with access to external knowledge sources, enhancing their accuracy and the scope of their responses.
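
The “modular, easily swappable components” idea is easiest to see in code. Here is a minimal Python sketch using structural typing so implementations can be exchanged behind a stable interface; the `ReportExporter` interface and both exporters are invented for illustration, not taken from any particular product:

```python
import json
from typing import Protocol

class ReportExporter(Protocol):
    """Anything that can turn a record into an exported string."""
    def export(self, record: dict) -> str: ...

class CsvExporter:
    """Legacy component: flat comma-separated values."""
    def export(self, record: dict) -> str:
        return ",".join(str(v) for v in record.values())

class JsonExporter:
    """Drop-in replacement: same interface, different format."""
    def export(self, record: dict) -> str:
        return json.dumps(record, sort_keys=True)

def publish(record: dict, exporter: ReportExporter) -> str:
    # The caller depends only on the interface, so swapping
    # implementations requires no change here.
    return exporter.export(record)

record = {"id": 1, "status": "ok"}
print(publish(record, CsvExporter()))   # 1,ok
print(publish(record, JsonExporter()))  # {"id": 1, "status": "ok"}
```

Because `publish` depends only on the interface, replacing `CsvExporter` with `JsonExporter` is a one-line change at the call site, which is the essence of the Plug-and-Play approach.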

The Plug-and-Play Advantage

Embrace the mindset that all tech solutions are inherently temporary. A Plug-and-Play strategy lets you quickly adopt cutting-edge tools and drive better outcomes.

  • The Reality of Constant Change:  In today’s technology landscape, obsolescence is inevitable. What’s considered “best-in-class” today might be surpassed within months or weeks. Clinging to solutions for the sake of familiarity hampers your organization’s ability to stay competitive.
  • Leadership’s Critical Role: To foster an always-changing organization, leaders must:
    • Model an Embrace of Change: Leaders set the tone. Being visibly open to new technologies and experimentation signals that it’s safe for others to do the same.
    • Champion Continuous Learning: Encourage employees to stay up-to-date on emerging trends and provide opportunities for skills development.
    • Reward Adaptability: Recognize and celebrate those who successfully navigate change and pivot quickly when needed.
  • The Plug-and-Play Advantage: A Plug-and-Play framework anticipates change. It prioritizes solutions designed for easy integration and replacement. This means your organization isn’t shackled to outdated systems, allowing you to capitalize on the latest innovations.
  • Driving Better Outcomes:  Flexibility drives results. By quickly adopting cutting-edge tools, you potentially:
    • Boost Efficiency: Automation, AI-driven insights, and streamlined processes can significantly increase speed and productivity.
    • Enhance User Experience: Whether it’s an internal system for employees or a public-facing application, modern tools often deliver a superior user experience.

Example: Imagine the impact of staying committed to cumbersome spreadsheet-based accounting versus switching to a cloud accounting platform, automating processes, and enabling real-time financial insights. The Plug-and-Play mindset allows you to make these critical updates quickly.

Key Questions for 2024 and Beyond:

  • How does your strategy drive robust data-driven insights utilizing AI-backed analytics?
  • Are you effectively balancing cloud, hybrid, and on-premise solutions for optimal performance?
  • What percentage of your IT budget fuels innovation versus maintenance?
  • How do you track the tangible impact of new technologies on achieving your mission?
  • What are your plans to proactively identify and assess emerging technologies?

Remember, change is the only constant. By combining a Plug-and-Play mindset with a holistic transformation approach, you create a foundation for sustained digital agility.

Navigating the Labyrinth of RBAC and Access Keys 

As federal organizations continue building services on cloud providers and deploying to container orchestration platforms, virtual servers, or physical hardware, securing access to cloud resources is crucial. There are two common methods for access control: RBAC (Role-Based Access Control) and access keys. Access keys must be rotated on a regular cadence, commonly every six months. That rotation can be automated, but the process is painful, and if not done properly it can lead to an incident. Depending on the number of keys, it can become burdensome for teams. As noted by Zscaler, 28 percent of access within AWS was through keys instead of roles or groups. Can we use RBAC to mitigate these pain points?

RBAC works similarly to access keys in that it generates session tokens for applications or users to access resources. The fundamental differences lie in how RBAC and access keys are implemented. With access keys, you generate a static Access Key ID and Secret Access Key for the application(s) to use. These keys are either injected into the application environment during setup and retrieved by the application on boot, or fetched at runtime from a secret store. Because the keys are static, rotating them commonly requires restarting the application after creating new keys. RBAC roles, by contrast, can be attached directly to software entities. Once a role is attached, the entity can access the resources defined by the role, and because the role travels with the entity, there are no keys to rotate.

Access keys are usable by anyone who holds the values. Leaking these sensitive secrets can lead to unauthorized access, data breaches, financial losses, and more. Because the keys are static and humans make mistakes, there have been countless situations where engineers used access keys during development and accidentally committed them to source control. Exposure of these secrets to anyone outside the scope of the application poses a security risk: a bad actor who discovers the keys may be able to access systems intended only for the target application. Thousands of secrets have been discovered in source control repositories like GitHub, and the longer leaked keys go undetected, the greater the risk of compromise. That is one reason periodic rotation of access keys is a proactive measure; in fact, up to 50% of access keys are not rotated periodically.
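
Any rotation process starts by finding keys that have outlived their cadence. A minimal sketch of that age check (the key records below are illustrative; in practice they would come from your cloud provider’s credential inventory API):

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys: list[dict], max_age_days: int = 90) -> list[str]:
    """Return the IDs of access keys older than the rotation cadence."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [k["id"] for k in keys if k["created"] < cutoff]

now = datetime.now(timezone.utc)
keys = [
    {"id": "AKIA...OLD", "created": now - timedelta(days=200)},
    {"id": "AKIA...NEW", "created": now - timedelta(days=10)},
]
print(stale_keys(keys))  # ['AKIA...OLD']
```

The check itself is trivial; the operational burden lies in everything around it: creating replacement keys, distributing them, and restarting consumers without an outage.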


RBAC is attached directly to the entities and has no static keys, so it inherently needs no secret rotation cadence. Depending on the software deployment architecture, you can attach roles to the application as granularly as you like. For virtual servers like EC2, you can attach roles to the instance itself. For Kubernetes clusters, you can attach IAM roles to Kubernetes Service Accounts through OIDC (OpenID Connect) federation. RBAC’s attachment to the software entity prevents misuse by unauthorized parties.

Federal organizations have unique security requirements and compliance regulations that necessitate strict access control measures. By adopting RBAC, these organizations can ensure that only authorized personnel can access sensitive data and resources. RBAC allows for creating roles based on job functions, making it easier to manage access rights across large organizations with complex hierarchies. 

When implementing RBAC in federal organizations, it is essential to consider the following best practices: 

  1. Conduct a thorough analysis of job functions and access requirements to define roles accurately. 
  2. Assign roles based on the principle of least privilege, granting only the minimum access rights necessary for individuals to perform their duties. 
  3. Regularly review and update roles to ensure they align with changing organizational requirements and personnel changes. 
  4. Implement a robust audit trail to monitor and log all access attempts and activities associated with each role. 
  5. Provide comprehensive training to employees on RBAC policies and their responsibilities in maintaining the security of the organization’s resources. 
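
As a toy model of what these practices govern, the core of RBAC is just a mapping from roles to permission sets, a least-privilege check, and an audit trail. All role and permission names below are invented for the sketch:

```python
# Minimal RBAC model: roles grant permission sets; every access
# attempt is checked against the role and recorded for auditing.
ROLES = {
    "grants-analyst": {"grants:read"},
    "grants-admin": {"grants:read", "grants:write"},
}

audit_log: list[tuple[str, str, str, bool]] = []

def check_access(user: str, role: str, permission: str) -> bool:
    allowed = permission in ROLES.get(role, set())
    audit_log.append((user, role, permission, allowed))  # audit trail
    return allowed

assert check_access("alice", "grants-analyst", "grants:read")
# Least privilege: the analyst role cannot write.
assert not check_access("alice", "grants-analyst", "grants:write")
assert len(audit_log) == 2
```

Real systems such as AWS IAM or Kubernetes RBAC add policy languages, inheritance, and conditions on top, but the review-and-audit practices above apply to this same underlying structure.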

By adopting RBAC, federal organizations can reap several benefits, including: 

  1. Enhanced security: RBAC ensures that access to sensitive data and resources is strictly controlled, reducing the risk of unauthorized access and data breaches. 
  2. Improved compliance: RBAC helps federal organizations meet regulatory requirements, such as FISMA and NIST, by providing a framework for managing access control. 
  3. Increased efficiency: With RBAC, access management becomes more streamlined, reducing the administrative overhead associated with managing individual access key permissions.
  4. Better scalability: As federal organizations grow and evolve, RBAC allows for the easy addition of new roles and the modification of existing ones, ensuring that access control remains effective and efficient. 

In conclusion, RBAC offers a more secure and efficient alternative to access keys for federal organizations looking to secure their cloud resources. By implementing RBAC, organizations can mitigate the risks associated with static access keys, such as accidental exposure and the need for frequent rotation. RBAC provides granular access control, allowing organizations to assign roles based on job functions and adhere to the principle of least privilege. By adopting RBAC best practices and leveraging its benefits, federal organizations can enhance their security posture, improve compliance, and streamline access management processes. 


References:

  • The 2020 State of Cloud (In)Security
  • Governance at scale: Enforce permissions and compliance by using policy as code
  • 3 Ways to Reduce the Risk from Misused AWS IAM User Access Keys
  • Over 100,000 GitHub repos have leaked API or cryptographic keys
  • What happens when you leak AWS credentials and how AWS minimizes the damage
  • Reducing the Risk from Misused AWS IAM User Access Keys

Part 6 | Turning Theory to Practice: Applying the Cost-Capability Matrix

Fundamentally, the matrix highlights crucial tradeoffs between innovation costs, risks, and performance spanning maturity horizons – signaling avenues for judicious investment. Cost-conscious leaders can identify commoditizing solutions balancing savings and customizability for budget optimization. Forward-thinkers ascertain emerging capabilities showing traction for adoption tailoring and scale. Visionaries pinpoint pioneering advances aligning to long-term roadmaps.  

Still, leaders rightfully ask – how does conceptual modeling enhance real decision-making? Simply put, the matrix provides a valuable framing tool guiding objective debates and trade-off analyses for capability planning and investments. 

Want to read the rest of the Series?

Part 1 | Intro to the Cost-Capability Matrix
Part 2 | Assessing the Cost-Capability Tradeoff, Quadrant 1 – Consumables
Part 3 | Navigating the Cutting Edge: Investing in Specialized Innovation, Quadrant 2 – White Elephants
Part 4 | Calibrating Capabilities and Costs for Widespread Adoption, Quadrant 3 – High Value
Part 5 | Exploring Uncharted Frontiers: Investing in Pioneering Innovation, Quadrant 4 – High Demand/Low-Density Workhorses


Consider bottom-up and top-down dynamics. Frontline units closest to application contexts best understand flexible tactical requirements. However, higher authorities maintain broader strategic perspectives and scaled priorities. By plotting specific capability solutions on the matrix, stakeholders can clearly visualize investments through different lenses – surfacing disconnects between local and centralized vantage points. This enriches discourse on optimizing decisions factoring in customized agility, commoditized economies, and specialized innovation. 

Furthermore, positioning existing and emerging capabilities on the matrix quickly indicates maturity levels, adoption risk, required investment, and adjacent possibilities useful for planning. Capability clusters become apparent. Targeting gaps and development opportunities grow more systematic. Roadmaps stabilize balancing short and long-term activities. 

Real world example: Small Arms Ranges Cost & Capability Matrix

In the USAF, we managed all of the service’s firing ranges. To help us understand our portfolio, we plotted each range type on a cost and capability matrix. Figure 1 shows how the range types align with the doctrine statement of “train as we fight.” Figure 2 shows how the range types align based on their impact on life, health, and safety issues. As you can see, we had some white elephants, some consumables, and some high-value assets. We used these findings to help answer which range configuration gave us the best bang (pun intended) for the taxpayer buck. What became apparent is the importance of finding the real estate needed to operate Non-Contained Impact (NCI) ranges (aka full-distance ranges). From a health perspective, we also asked which range configuration posed the fewest health issues for range operators; again, the NCI range type affects operators’ health the least. There are many other questions we can ask, too.
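
Once cost and capability are scored, the plotting exercise reduces to a simple classification. A sketch in Python with normalized 0-to-1 scores, arbitrary thresholds, and an assumed mapping of this series’ quadrant names onto the axes (the range scores are illustrative, not the real USAF data):

```python
def quadrant(cost: float, capability: float,
             cost_cut: float = 0.5, cap_cut: float = 0.5) -> str:
    """Classify a capability on a normalized cost/capability matrix.

    The quadrant naming here is an assumption: low cost plus high
    capability is treated as High Value, high cost plus low
    capability as a White Elephant, and so on.
    """
    if cost < cost_cut:
        return "High Value" if capability >= cap_cut else "Consumables"
    return ("High Demand/Low-Density Workhorses"
            if capability >= cap_cut else "White Elephants")

# Illustrative range types scored 0-1 on (cost, capability)
ranges = {"NCI full-distance": (0.7, 0.9), "indoor 25m": (0.3, 0.4)}
for name, (cost, cap) in ranges.items():
    print(f"{name}: {quadrant(cost, cap)}")
```

Shifting constraints, such as a changed budget or risk tolerance, amount to moving `cost_cut` or `cap_cut` and re-running the classification.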

Leaders can also easily re-plot capabilities against adjusted axes as constraints shift. For instance, legal changes altering risk tolerance might expand the viable spaces warranting investment in pioneering advances. Budget fluctuations would signal adjustments to targeted maturity levels. New evaluations prompt iterative alignment to evolving contexts.

Ultimately, no universal technology prescription exists, given the unique constraints organizations face. However, as a thinking aid, the cost-capability matrix proves invaluable for centering complex debates regarding multi-horizon innovation. The clarity introduced by visually bounding feasible spaces fosters dialogue, surfaces assumptions, and sharpens data-driven decision quality. With the insights unlocked by this approach, leaders gain confidence in optimizing capability decisions and balancing priorities across tactical needs, strategic direction, and visionary possibilities.

The matrix thereby enables translating conceptual frameworks into enhanced real-world technology outcomes. By encouraging systematic evaluations factoring short- and long-term costs, risks, and payoffs, leaders make progress in navigating the daunting innovation possibility space through incremental steps that sequentially raise organizational maturity. No single revelation reveals all answers – just an effective compass grounded in objective trade-off analysis pointing the way forward. 

Download Key Actions & Matrix Worksheet.

Elevating Digital Accessibility: A Closer Look at Enhanced Federal Compliance with Section 508

In the digital age, ensuring that technology serves everyone equitably is not just a noble goal—it’s a legal requirement for federal agencies. Section 508 of the Rehabilitation Act mandates that all electronic and information technology developed, procured, maintained, or used by federal agencies must be accessible to people with disabilities. This law aims to eliminate barriers in information technology, opening new avenues for people with disabilities to obtain information and engage with their government.

Recent developments signal a pivotal shift in how federal agencies approach Section 508 compliance. The Office of Management and Budget (OMB), in collaboration with the General Services Administration (GSA) and the U.S. Access Board, unveiled landmark guidance in December 2023, detailed in OMB Memo M-24-08. This guidance is not merely an update; it’s a clarion call for a more inclusive digital government.

The memo outlines enhanced expectations and accountability, urging agencies to place accessibility at the heart of digital governance. Among the pivotal components of the new guidance are:

  • Leadership and Accountability: Agencies are now required to appoint a dedicated program manager to spearhead and monitor digital accessibility efforts.
  • Expert Involvement: The integration of accessibility subject matter experts into the acquisition process ensures that new Information and Communications Technology (ICT) adheres to accessibility standards from the outset.
  • User-Centric Design: Including individuals with disabilities in user groups for digital product design and testing enriches the user experience for everyone.
  • Proactive Compliance: Agencies must regularly scan and monitor web content for accessibility, promptly addressing any deficiencies.
  • Ongoing Education: The mandate for regular training on Section 508 and digital accessibility aims to foster a culture of inclusivity.

These enhancements come in response to mixed results in Section 508 compliance across agencies. A February 2023 report by the Department of Justice and GSA underscored the need for additional support and resources, reflecting on insights from a comprehensive 2012 survey.

“Accessibility must be incorporated, unless an exception applies, from the very beginning of the design and development of any digital experience and integrated throughout every step of the ICT lifecycle, including qualitative and inclusive research, feature prioritization, testing, deployment, enhancements, and maintenance activities,” the memo states. (Exceptions are detailed in the Standards under E202 General Exceptions.)

A Future of Inclusive Digital Services

Anticipating the road ahead, the GSA and the Access Board are finalizing a government-wide Section 508 assessment for 2024. This effort, expected to roll out in phases from spring to fall, aims to gather detailed insights into agency practices and challenges. Kristin Smith-O’Connor of the GSA shared with ExecutiveGov, “We are refining and honing our approach, ensuring that the upcoming changes, while not drastic, will significantly contribute to our collective goal of a fully accessible federal digital landscape.”

Agencies are encouraged to lean on the resources available through the government’s central Section 508 platform, which strives to be a comprehensive resource offering guidance, best practices, and compliance testing tools. Additionally, the OMB memo directs the GSA and the Access Board to broaden Section 508 certification and training opportunities, enhancing the capabilities of federal employees to champion digital accessibility.

Enhancing Accessibility Now and in the Future

When navigating the complexities of Section 508 compliance, consider rigorous testing with tools like WAVE, Axe, and Lighthouse to identify and rectify common accessibility issues. Because automated tools have limits, augment them with manual evaluations, including keyboard navigation and screen reader compatibility tests. Together, these efforts, guided by the Web Content Accessibility Guidelines (WCAG), keep your services aligned with legal requirements and best practices.
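To make the idea of automated scanning concrete, here is a minimal illustrative sketch in Python using only the standard library. It flags just a handful of common WCAG-related markup issues; it checks far less than WAVE, Axe, or Lighthouse and is no substitute for those tools or for manual keyboard and screen reader testing.

```python
from html.parser import HTMLParser

class A11yChecker(HTMLParser):
    """Illustrative checker for a few common WCAG-related markup issues.
    Not a substitute for full automated tools or manual evaluation."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "html" and "lang" not in a:
            self.issues.append("html element missing lang attribute (WCAG 3.1.1)")
        if tag == "img" and "alt" not in a:
            self.issues.append("img missing alt text (WCAG 1.1.1)")
        if tag == "input" and a.get("type") not in ("hidden", "submit", "button") \
                and not (a.get("aria-label") or a.get("aria-labelledby") or a.get("id")):
            self.issues.append("input lacks a programmatic label (WCAG 1.3.1)")

def scan(html: str) -> list[str]:
    """Return a list of detected issues for an HTML snippet."""
    checker = A11yChecker()
    checker.feed(html)
    return checker.issues

for issue in scan('<html><body><img src="seal.png"><input type="text"></body></html>'):
    print(issue)
```

A check like this can run in a CI pipeline on every content change, supporting the memo’s call for proactive, ongoing monitoring rather than one-time audits.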

Yet, the journey towards universal accessibility is ongoing. Despite significant strides, the path forward requires continuous effort, innovation, and collaboration. We celebrate the government’s initiative to demystify Section 508 compliance, and we remain hopeful for more actionable guidance to emerge, fostering an environment where digital accessibility is not just a compliance requirement but a cornerstone of public service.

Barry Lawrence is a Senior Communication Program Manager for Highlight. The opinions expressed in this blog are his own and reflect a commitment to fostering a more accessible digital world for all Americans.

Part 5 | Exploring Uncharted Frontiers: Investing in Pioneering Innovation, Quadrant 4 – High Demand/Low-Density Workhorses 

Progress relies on bold organizations pushing boundaries with pioneering inventions redefining entire paradigms. But charting new frontiers carries immense risks, demanding exceptional discernment balancing long-term strategic necessity against short-term fiscal realities. Let’s take a deeper look at this problem through our cost and capability matrix, looking at our final quadrant. 

Did you miss the rest of the series?

Part 1 | Intro to the Cost-Capability Matrix
Part 2 | Assessing the Cost-Capability Tradeoff, Quadrant 1 – Consumables
Part 3 | Navigating the Cutting Edge: Investing in Specialized Innovation, Quadrant 2 – White Elephants
Part 4 | Calibrating Capabilities and Costs for Widespread Adoption, Quadrant 3 – High Value

Insights into High-Demand/Low-Density Workhorses  

Quadrant 4 contains complex, customized solutions with enormous price tags and broad flexibility: each caters to a niche, specific application while offering a wide range of diverse capabilities. These genesis innovations pioneer entirely new concepts, while custom-built offerings address unique constraints through specialized tailoring. Think of the Lockheed Martin F-35 Lightning II, VR headsets in their infancy before standardized designs, self-driving vehicles still in R&D without widespread production, or conceptual Mars colonization capabilities. 

The audiences drawn to Quadrant 4 accept significant expense and uncertainty in exchange for unprecedented capabilities mapping uncharted territory. By nature, the limited scale of these innovations prevents the cost efficiencies and flexibility of eventual commoditized alternatives. But the tradeoff offers opportunities to pursue mind-bending breakthroughs unencumbered by commercial viability constraints – for those strategists with the patience and fortitude to endure. 

For leaders balancing pragmatic investments against exploring uncharted frontiers, three guidelines apply when engaging emerging innovations well before their benefits trickle down:   

  • Anchor on Aligned Vision 
    Scattered moonshots waste resources. Prioritize game-changing innovations aligning to strategic roadmaps and unique constraint drivers before appraising exotic alternatives. 
  • Embrace Iterative Agility 
    Rigorous yet nimble road mapping reduces the risks of backing dated designs. Modular architectures, iterative testing, and flexible requirements sustain competitiveness through ongoing evolution. 
  • Forge Tight Feedback Loops   
    User-centric co-design and close developer collaboration maximize real-world value and application. Rapid concept testing surfaces must-have use cases earlier.   

Make no mistake: the vast majority of cutting-edge inventions never progress beyond this high-risk, high-cost quadrant. However, for select innovations promising unprecedented paradigms aligned to institutional ambitions, the immense initial expenses and semi-narrow flexibility prove acceptable tradeoffs. With patient, disciplined strategies balancing focused innovation investments against quick-win solutions, leaders can judiciously support pioneering development while ensuring affordable access to new capabilities at the opportune moment. 

Of course, what constitutes an acceptable tradeoff depends heavily on the observer. While pragmatic key stakeholders naturally orient toward proven capabilities and fiscal prudence, visionary strategists think bigger – prioritizing long-term possibilities over short-term savings. Both mindsets have merits. The key lies in analyzing decisions through multiple lenses, accounting for all perspectives – including the end vision, interim milestones, and must-have capabilities that ultimately determine what constitutes value. 

Practical Application  

Analyzing pioneering innovations in an organization’s portfolio through the lens of Quadrant 4 reveals just how many exploratory moonshots fail to deliver capabilities or strategic outcomes warranting prolonged investment at scale. This grounding assessment highlights expensive genesis projects and custom builds outpacing actual user needs or lagging in real-world viability. Plotting existing bleeding-edge initiatives on the matrix provides perspective on which demand vision over validation, enabling recalibration around innovations demonstrating clearer progression from novelty towards necessity. Leaders can periodically evaluate Quadrant 4 investments against strategic alignment, opportunity costs, and upside optionality relative to risk to determine if pressing forward or pivoting resources makes sense given competing priorities. 
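The plotting exercise can be made concrete with a simple classification sketch. The Python below is illustrative only: the 0–1 cost and capability scores, the 0.5 threshold, and the axis orientation are assumptions drawn from this series’ quadrant descriptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    cost: float        # normalized total cost of ownership, 0 (low) to 1 (high)
    capability: float  # normalized breadth of capability, 0 (narrow) to 1 (broad)

def quadrant(item: Initiative, threshold: float = 0.5) -> str:
    """Map an initiative onto the cost-capability matrix.
    Threshold and axis orientation are illustrative assumptions."""
    high_cost = item.cost >= threshold
    high_cap = item.capability >= threshold
    if not high_cost and not high_cap:
        return "Q1: Consumables"
    if high_cost and not high_cap:
        return "Q2: White Elephants"
    if not high_cost and high_cap:
        return "Q3: High Value"
    return "Q4: High Demand/Low-Density Workhorses"

# Hypothetical portfolio entries for illustration
portfolio = [
    Initiative("office peripherals", 0.1, 0.2),
    Initiative("bespoke legacy system", 0.9, 0.3),
    Initiative("commoditizing cloud platform", 0.3, 0.8),
    Initiative("next-gen autonomy moonshot", 0.95, 0.9),
]
for item in portfolio:
    print(f"{item.name}: {quadrant(item)}")
```

Even a rough scoring pass like this forces the portfolio conversation onto a shared picture, so debates center on where each initiative sits and whether it is moving toward higher value over time.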

Questions a leader should consider: 

  • How clearly do our pioneering innovation investments map to long-range strategic vision, priorities, and constraint scenarios vs isolated speculative curiosity?  
  • Have we established rigorous stage gate criteria assessing when to continue or sunset high-risk exploratory initiatives based on demonstrated applicability? 
  • What level of recurring cost, and over what time horizon, should demand validated success for the various genres of bleeding-edge moonshots we pursue? 
  • Where can we employ rapid prototyping and user co-creation to accelerate insights on utility earlier before overinvesting in custom innovations lacking validated market fit?   
  • To what extent do our custom innovation architectures allow for modular refresh, interoperability, and future adaptation, minimizing sunk costs as paradigms shift? 
  • Which lower-risk existing alternatives or incremental improvements could partially fulfill niche needs in the interim before specialized quadrant 4 capabilities mature?  
  • Given the limited accuracy of long-range forecasting, at what thresholds should leaders demand clearer market signals before allocating resources to extremely customized boutique solutions? 

Asking these challenging questions introduces essential rigor, milestones, priority balancing, and runway debates regarding high-cost innovations far removed from practical payoffs. This helps avoid inertia where investments balloon absent defensible strategies for affordability, adoption, and scaling. 

With honest appraisals and robust discourse, wise leaders deliberately choose innovation investments spanning maturity horizons aligned to multi-step strategic roadmaps. Mature capabilities tackle present constraints using economical, commoditized solutions. Advancing innovations address emerging opportunities primed for customizable, scalable adoption. Pioneering moonshots map future frontiers stretched beyond today’s imagination. By intentionally anchoring innovation across time horizons, leaders compound capabilities, shaping tomorrow while mastering today. In the final part of this six-part series, we’ll turn theory into practice by applying the cost-capability matrix.

Part 4 | Calibrating Capabilities and Costs for Widespread Adoption, Quadrant 3 – High Value 

Innovations inevitably transition from bleeding-edge exclusivity to mass-market commodities as improved manufacturing and competition drive down costs. Savvy leaders understand where highly valued capabilities currently sit on this spectrum, ensuring investments target accessible innovations with favorable risk-reward ratios primed for scalable adoption. Let’s take a deeper look at understanding where these innovations fit into the cost vs capability matrix, focusing on quadrant 3. 

Did you miss the rest of the series?

Part 1 | Intro to the Cost-Capability Matrix
Part 2 | Assessing the Cost-Capability Tradeoff, Quadrant 1 – Consumables
Part 3 | Navigating the Cutting Edge: Investing in Specialized Innovation, Quadrant 2 – White Elephants

Insights into High Value   

Quadrant 3 represents the commercial sweet spot spanning novel yet increasingly standardized capabilities with expanding mainstream utility. For budget-conscious leaders seeking maximum capability per dollar spent, Quadrant 3 offers optimal bang for the buck – modernized solutions squeezing every bit of value from investments by bridging customizability and economies of scale. 

Whether pursuing technology upgrades or new solution procurement, targeting innovations sliding down adoption curves unlocks the best of both worlds – substantial capability advancement at palatable price points driven down through commodification. Building in customizability broadens a system’s applicability to wider use cases, extracting full utility from existing investments. 

Moreover, commoditizing innovations through flexibility and customization gives organizations the agility to tailor solutions precisely to specific requirements. The savings accrued from maximizing adoption lifetime value free up funds for additional capability enhancements or future innovation investments – and create dynamic advancement built on firm fiscal foundations.  

By proactively targeting solutions transitioning from early niche audiences to mainstream viability, leaders avoid overspending on exotic innovations while sidestepping stagnant antiquation. Instead, real material progress emerges as prudent investments harness commodification’s compounding savings and flexibility dividends to scale organizational capabilities over time systematically. 

The key insight for leaders lies in evaluating emergent capabilities by the trajectory and velocity of their value rather than technical specifications alone. Prioritizing innovations reaching the knee of hockey stick adoption curves allows tapping into explosive demand built on proven multi-context utility. 

With appetites for sophisticated new functionalities balanced against moderate risk tolerances, early adopters validate solutions demonstrating burgeoning market viability. Take smartphones transitioning from luxury to essential, streaming proliferating beyond early niche followers, and solar energy expanding from eco-enthusiasts to cost-conscious households. In each case, engineering and positioning transformed exotic innovations into flexible mass-market commodities traded on improving price-performance ratios.   
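One hedged way to quantify “the knee of the hockey stick” is with a simple logistic (S-curve) adoption model. The sketch below is illustrative only: the saturation level, growth rate, midpoint, and velocity threshold are assumed parameters, not empirical market data.

```python
import math

def adoption(t: float, L: float = 1.0, k: float = 1.2, t0: float = 6.0) -> float:
    """Logistic adoption model: L = saturation share, k = growth rate,
    t0 = midpoint year. All parameters here are illustrative assumptions."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def knee_year(years: range, frac: float = 0.25) -> int:
    """Return the first year whose year-over-year adoption gain exceeds
    `frac` of the peak gain - a simple numeric proxy for the 'knee'."""
    velocity = {t: adoption(t + 1) - adoption(t) for t in years}
    peak = max(velocity.values())
    for t in years:
        if velocity[t] >= frac * peak:
            return t
    return max(years)

print(knee_year(range(0, 12)))  # → 4 under these assumed parameters
```

In this toy model, adoption velocity crosses a quarter of its eventual peak well before the midpoint year: the window where this series suggests prioritizing investment, after viability is signaled but before the market saturates.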

Practical Application  

Plotting existing capabilities against Quadrant 3 allows leaders to identify emerging innovations ripe for adoption and scale. Analyzing through this lens highlights solutions fit for flexible customization, standardization, and volume deployment – prime targets for maximizing capability bang for the buck. Leaders can assess innovation velocity, utility trajectory, and price elasticity to prioritize commoditizing opportunities on the cusp of explosive hockey stick growth. Comparing organizational solutions against market alternatives exposes capability gaps. Anchoring innovation investments to this high-value nexus fuels aggressive capability advancement at minimized price points before niche innovations become exclusionary. 

Questions a leader should consider: 

  • Which emerging innovations demonstrate a clear trajectory towards commoditization that we should evaluate for adoption and scaling? 
  • How could we enhance flexibility, configurability, and customizability in our existing solutions to improve applicability across diverse use cases?  
  • Where do opportunities exist to consolidate contracts around standardized capabilities with multiple vendors to improve purchasing power? 
  • How can we leverage volume licensing, bulk pricing, or other economies of scale to reduce costs further as we broaden the deployment of valuable capabilities? 
  • Do our software development, testing, and release cycles allow rapid integration feedback and new features prioritizing user needs as capabilities commoditize?   
  • How frequently are we testing the market for replacement solutions as existing ones transition from differentiation to commoditization? 
  • What risks of disruption do we face if failing to adopt new high-value commodity solutions prior to reaching the scale ceiling with current ones? 
  • Across stakeholders benefiting from common, scalable capabilities, are governance and funding properly aligned to share responsibility and cost savings? 

Proactively asking these questions focuses technology investments on the dynamic high-value center of the market. This prevents leaving money on the table during invaluable windows when tailored adoption at scale is possible before niches become exclusionary or obsolete. 

Rather than chase exotic innovations or settle for antiquation, aligning to Quadrant 3’s mix of customizability and enlarging scale offers an attractive middle path for optimizing capability growth. Leaders realize the best of both worlds – substantial capability advancement at minimized price points via commodification – by aggressively taking advantage of emerging opportunities. Next, we’ll examine our last quadrant, High Demand/Low-Density Workhorses. 

Part 3 | Navigating the Cutting Edge: Investing in Specialized Innovation, Quadrant 2 – White Elephants 

Progress demands pushing boundaries with pioneering innovations to redefine what’s possible. But not every bleeding-edge capability warrants immediate investment. At least not before evaluating the cost and performance viability for widespread adoption. Still, certain specialized use cases justify the premiums commanded by exclusive emerging technologies. We’re exploring each quadrant in the cost and capability matrix. Let’s take a deeper look into quadrant 2. 

Did you miss the rest of the series?

Part 1 | Intro to the Cost-Capability Matrix
Part 2 | Assessing the Cost-Capability Tradeoff, Quadrant 1 – Consumables

Insights into White Elephants 

Quadrant 2 contains complex, customized solutions with astronomical price tags and limited flexibility, catering to niche, specific applications. These systems occupy critical spaces demanding extensive tailoring, exotic components, and top-tier performance. Think of microprocessors powering high-performance computing, highly customized cybersecurity defenses fortifying infrastructure, or proprietary aerospace and defense technologies securing strategic capabilities.  

The innovators and early adopters drawn to these offerings willingly trade off higher expenses and rigid designs for unmatched capabilities meeting unique constraints. By nature, the limited scale of these specialized innovations prevents the cost efficiencies and flexibility afforded to more commoditized mainstream solutions. But the flipside offers opportunities to pursue mind-bending breakthroughs in materials, processes, and performance unencumbered by commercial viability constraints. 

Practical Application  

Navigating this rarefied innovation airspace, dominated by high-risk technological frontiers and uncertainty, requires savvy leadership. Three essential guiding principles apply when engaging with and evaluating cutting-edge solutions before their benefits trickle down to wider audiences: 

  • Laser Focus on Critical Priorities: Not every nice-to-have capability warrants riding the bleeding edge, given the big bills and decision paralysis. Leaders must ground innovation priorities in strategic necessity and unique organizational requirements before pursuing exotic alternatives.  
  • Embrace Iterative Agility: Rigorous yet nimble road mapping reduces risks of backing ultimately outdated designs. Prioritize modular architectures, iterative testing and flexible requirements that sustain competitiveness through ongoing evolution vs wholesale rip-and-replace upgrades.   
  • Forge Tight Feedback Loops: User-centric co-design and collaboration with developers is essential to maximize bespoke solutions’ real-world value and application. Rapid user testing surfaces vital insights on utility while targeting must-have use cases.   

Plotting an organization’s existing specialized innovations on the cost-capability matrix reveals how many complex custom solutions fail to demonstrate strategic alignment or strong value realization compared to more mainstream commodities. By analyzing the niche innovations portfolio through the lens of Quadrant 2, leaders gain sobering visibility into expensive, over-designed systems bordering on extravagance more than necessity. This introspection highlights opportunities to scale back custom projects losing steam to prioritize resources for capabilities demonstrating clearer enterprise payoffs. Taking an inventory of specialized innovations against the matrix provides a much-needed perspective on the sustainability and strategic importance of boutique bills threatening to breach acceptable risk thresholds. 

Questions a leader should consider: 

  • Do our organization’s specialized niche innovations directly address clearly defined strategic priorities and constraints, or are they more speculative “nice-to-haves”? 
  • Have the custom solutions reliably demonstrated sufficient real-world performance improvements over mainstream alternatives to justify 2-3x costs? 
  • What level of adoption and utilization have our bespoke innovations seen since deployment? How might we improve outcomes? 
  • Can any modular components be extracted from existing niche innovations for reuse in other solutions pursuing standardization and scale? 
  • Would pursuit of more open, flexible architectures reduce switching costs and allow our specialized capabilities to remain competitively refreshed? 
  • Can we meaningfully forecast total lifetime costs for supporting, upgrading, and maintaining highly customized innovations with unpredictable change over time? 
  • At what threshold of expense, delayed delivery, requirements creep, or opportunity costs should we reevaluate continuing investment in specialized niches vs pivoting resources to higher-value activities?   
  • Beyond narrow niches, do we have mature processes for responsibly mainstreaming or sunsetting specialized innovations if use cases evolve or fail to materialize? 

Asking these difficult questions allows leaders to critically examine custom innovations to ensure investments stay strategically aligned and deliver tenable value to the organization’s needs. Ongoing scrutiny combats inertia or emotional attachments that cause niche solutions to bloat budgets. 

Specialized innovation occupies a crucial yet often misunderstood niche, balancing present constraints against future ambitions. Though exotic and expensive, the apex solutions produced by unrelenting builder-user focus make the high costs and narrow flexibility worthwhile for the few organizations requiring their unmatched benefits. With disciplined strategies balancing investments in bespoke innovations against commoditized alternatives, leaders can judiciously support pioneering development while ensuring affordable access to new capabilities at the opportune moment for their unique needs. In the next article, we’ll look into the ideal space, high value, where we have low cost and high capability.