    Federal AI Policy Takes Shape: HHS Unveils AI Strategy as Administration Moves to Curb State AI Laws

    Center for Connected Health Policy
    HAPPY NEW YEAR TO ALL OUR READERS!

    We’re starting off 2026 with an issue that will likely be the topic of many future 2026 editions of CCHP’s weekly emails.
    In December, the Department of Health and Human Services released its long-anticipated Artificial Intelligence Strategy, outlining a unified, department-wide approach to integrating AI across internal operations, research, public health, and health care delivery. The Strategy represents the next phase of HHS’s efforts to modernize federal health programs under President Trump’s January executive order directing federal agencies to develop tailored AI plans. It also implements elements of the Administration’s broader AI Action Plan, as well as Office of Management and Budget (OMB) Memoranda M-25-21 and M-25-22, released in 2025, which establish federal requirements for AI governance, risk management, and transparency. CCHP covered these documents and other federal AI policy activity in multiple 2025 newsletters, including issues published in October, August, June, April, March, and February.

    While the HHS Strategy focuses primarily on internal federal use and governance modernization, HHS frames it as an initial step that will later expand to deeper engagement with private-sector innovators, technology developers, and health care stakeholders. The Strategy applies across all HHS Operating Divisions—including the National Institutes of Health (NIH), Centers for Disease Control and Prevention (CDC), Centers for Medicare & Medicaid Services (CMS), and the Food and Drug Administration (FDA)—and emphasizes a “OneHHS” model intended to reduce silos and promote coordinated AI adoption.

    HHS positions AI as a foundational technology with the potential to improve public health surveillance, clinical care delivery, biomedical research, program integrity, and administrative efficiency. At the same time, the Strategy acknowledges the risks associated with AI systems, particularly those that may influence health outcomes, patient rights, or public trust. Accordingly, the Strategy emphasizes building governance systems, shared infrastructure, and a skilled workforce to maximize the value of AI while minimizing harm.

    Specifically, the Strategy seeks to:
    • Strengthen risk management and public trust in AI-enabled systems;
    • Deploy shared, AI-ready technology platforms across HHS;
    • Reduce workforce burden through secure and approved AI tools;
    • Enhance the integrity and reproducibility of AI-enabled research; and
    • Modernize care delivery and public health programs through responsible AI augmentation.
    To support these goals, HHS plans to maintain a comprehensive inventory of AI use cases across the Department, develop shared data and compute resources aligned with FAIR principles (findability, accessibility, interoperability, and reusability), and implement role-appropriate training for federal employees. The Strategy also includes performance metrics to assess progress related to governance, efficiency, cybersecurity, and health outcomes.

    Under OMB Memorandum M-25-21, HHS is developing an AI Maturity Assessment Framework to evaluate the lifecycle, impact, and compliance status of AI systems used across the Department. Scheduled for public release later in 2026, the framework aims to improve transparency, ensure consistent oversight, and strengthen alignment across HHS divisions through the OneHHS model.

    HHS will assess AI use cases across four staged maturity levels:
    • Latent – Early-stage or exploratory concepts not yet operationalized;
    • Emerging – Pilot or limited deployments with defined objectives;
    • Realizing – Scaled systems delivering measurable value; and
    • Leading – High-impact, mature systems demonstrating sustained performance and alignment with best practices.
    According to HHS, the assessment framework will help identify high-value solutions suitable for broader deployment, flag ineffective or duplicative tools for sunsetting, and guide investment toward scalable and secure AI systems. For telehealth and digital health stakeholders, this framework signals increasing federal attention to systematic evaluation of AI tools, particularly those embedded in care delivery or program administration.

    The HHS AI Strategy is organized around five core pillars, each addressing a distinct dimension of AI governance and implementation. The pillars include:
    • Pillar 1: Governance and Risk Management for Public Trust - HHS aims to establish a coordinated, risk-based AI governance framework that supports innovation while protecting public trust, with particular attention to “high-impact” AI systems as defined by OMB M-25-21. The Department plans to publish plain-language summaries of high-impact systems, apply standardized risk-management practices, and determine whether non-compliant tools should be modified, paused, or discontinued. By April 3, 2026, all high-impact AI systems must meet minimum risk requirements or be suspended, with a standardized waiver process developed to promote consistency across HHS divisions.
    • Pillar 2: Infrastructure and Platforms Designed for User Needs - This pillar focuses on developing a shared OneHHS AI-integrated Commons that provides secure, scalable access to data, compute resources, model-hosting, and evaluation tools. The goal is to reduce duplication, lower costs, and accelerate AI development while prioritizing American-made technologies, open standards, and appropriate privacy and security safeguards.
    • Pillar 3: Workforce Development and Burden Reduction - HHS seeks to build an AI-ready workforce through role-based training, approved AI copilots, and centralized support resources that encourage responsible experimentation. These efforts are intended to improve productivity, reduce administrative burden, and integrate AI into daily operations without displacing human judgment.
    • Pillar 4: Gold-Standard Science and Research Reproducibility - Under this pillar, HHS aims to strengthen transparency, rigor, and reproducibility in AI-enabled biomedical research through standardized protocols, pre-registration, and data and code sharing where feasible. The Strategy also supports expanded research infrastructure to advance AI applications in drug development, diagnostics, and clinical research, including areas relevant to telehealth and remote monitoring.
    • Pillar 5: Modernizing Care and Public Health Delivery - HHS envisions using AI to support clinicians and public health professionals in improving care delivery and population health outcomes, not to replace human decision-making. Priority applications include clinical decision support, early-warning systems, and risk stratification in areas such as chronic disease management, maternal health, overdose prevention, and sepsis, with success measured through both health outcomes and reductions in administrative burden.
    Against this backdrop, the Trump Administration recently issued an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” marking a significant escalation in federal efforts to assert control over AI governance. According to the accompanying White House fact sheet, the Order is intended to preserve U.S. global leadership in AI by reducing regulatory barriers, addressing what the Administration characterizes as a fragmented state-level regulatory landscape, and advancing a uniform national AI policy framework. The Order seeks to challenge state laws deemed inconsistent with national AI policy, condition certain federal funding on state regulatory alignment, and accelerate development of a preemptive federal legislative framework.
    The Administration argues that divergent state AI laws—particularly those emerging in states such as California and Colorado—risk creating a fragmented regulatory environment that could undermine U.S. global AI leadership, impose ideological constraints on AI outputs, and raise constitutional concerns. The Executive Order includes several notable directives including:
    • AI Litigation Task Force: The Attorney General must establish an AI Litigation Task Force within 30 days to challenge state AI statutes viewed as unconstitutional, preempted, or otherwise unlawful.
    • Federal Evaluation of State AI Laws: The Secretary of Commerce must publish an evaluation of existing state AI statutes within 90 days to identify laws considered onerous or in conflict with federal policy.
    • Conditioning Federal Funding: Certain federal funding—such as non-deployment Broadband Equity, Access, and Deployment (BEAD) funds—may be conditioned on states refraining from enacting or enforcing conflicting AI laws.
    • Federal Reporting and Disclosure Standards: The Federal Communications Commission must consider establishing a federal AI reporting and disclosure framework to preempt state requirements.
    • FTC Preemption Policy: The Federal Trade Commission is directed to clarify that state laws requiring alterations to truthful AI outputs may be preempted as deceptive practices.
    • Development of Federal Legislation: White House advisors are tasked with developing a national legislative framework to preempt conflicting state AI laws, with carve-outs for areas such as child safety and state procurement.
    The Executive Order’s focus on state AI laws is particularly notable given recent state legislative activity identified in CCHP’s 2025 legislative roundup. In 2025, CCHP identified ten enacted AI-related bills with direct implications for health care and telehealth. These laws largely focus on safeguarding utilization review and prior authorization processes, regulating mental health AI tools, and establishing transparency and ethical standards. How these laws will fare under increased federal scrutiny remains an open and evolving question.

    Federal AI policy is entering a more mature and contested phase. The HHS AI Strategy signals a significant commitment to integrating AI across federal health programs with structured governance, shared infrastructure, and measurable outcomes. At the same time, the Administration’s Executive Order targeting state AI laws highlights deepening tensions over who should set the rules for AI in health care. For telehealth stakeholders, these developments underscore the importance of closely monitoring both federal implementation efforts and evolving federal–state dynamics. As AI becomes more deeply embedded in care delivery, utilization management, and public health, the balance between innovation, oversight, and patient protection will remain a central policy challenge in the years ahead.  Stay tuned for updates from CCHP, and reference our Pending Legislation Tracker, as AI continues to intersect with healthcare and telehealth.

    Upcoming NCTRC Webinar:
    TELEHEALTH POLICY OUTLOOK FOR 2026
