As healthcare organizations across the nation step into 2026, they are confronting a dramatically altered regulatory environment shaped by a surge of state-level artificial intelligence legislation. With comprehensive federal AI laws still pending in Congress and guidance from federal agencies remaining fluid, individual states have taken decisive action to govern the use of AI in sensitive sectors, particularly healthcare. Effective this year, a series of new laws introduces stringent requirements for disclosure, transparency, and data protection, compelling developers, deployers, and users of AI systems in clinical settings to reevaluate their compliance frameworks and operational protocols. This patchwork of regulations marks a pivotal moment for the industry, demanding immediate attention and strategic adaptation to navigate an increasingly complex legal landscape.
1. California’s Proactive Stance on AI in Medicine
California has solidified its position as a leader in AI regulation with new laws aimed directly at protecting patients in healthcare interactions. The most prominent of these, AB 489, which became effective on January 1, directly addresses the potential for AI systems to mislead patients into believing they are interacting with licensed medical professionals. The law explicitly prohibits developers and deployers from using any terms, phrases, or design elements that could imply an AI possesses a healthcare license it does not have. This extends to advertising and system functionalities that might suggest care is being delivered by a human practitioner when, in fact, it is not. What gives this law significant weight is its enforcement mechanism; professional licensing boards are now empowered to take action against such violations, including pursuing injunctions under existing licensing laws, creating a direct line of accountability within the established healthcare oversight system. This measure is a clear response to the growing sophistication of AI interfaces and seeks to maintain a high standard of trust and clarity in patient care.
In parallel, California has enacted SB 243 to regulate the burgeoning field of “companion chatbots,” which are designed to provide ongoing emotional support and interaction. Recognizing the sensitive nature of these applications, the law mandates that providers clearly and conspicuously notify users that they are communicating with an AI, not a person. More critically, it imposes specific safety protocols. Developers must implement systems to prevent the AI from generating responses that could encourage self-harm or suicidal ideation. Furthermore, if a user expresses thoughts of suicide or self-harm, the chatbot is required to provide a notification that refers the user to a crisis service provider. This legislation has immediate implications for a wide range of digital health tools, including mental health support apps, patient engagement platforms, and wellness communication tools. California’s approach is part of a larger national trend, as states like Illinois, Nevada, and Utah have also begun to introduce regulations governing chatbots, signaling a growing consensus on the need for safeguards in AI-driven mental and emotional support services.
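To make the requirement concrete, the sketch below shows, in Python, how a chatbot pipeline might layer these obligations on top of a model call: an up-front notice that the user is communicating with an AI, and a guard that intercepts messages suggesting self-harm and returns a crisis referral instead of a generated reply. The keyword screen and function names are illustrative assumptions only; a production system would rely on a properly validated classifier and clinically reviewed referral language.

```python
import re

# Hypothetical guardrail for a companion chatbot: screen user input for
# self-harm signals and, if detected, return a crisis-referral notification
# instead of a model-generated reply. A real deployment would use a trained
# classifier rather than this illustrative keyword list.
SELF_HARM_PATTERNS = re.compile(
    r"\b(kill myself|suicide|end my life|hurt myself|self[- ]harm)\b",
    re.IGNORECASE,
)

AI_DISCLOSURE = "You are chatting with an AI assistant, not a person."
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def respond(user_message: str, generate_reply) -> str:
    """Wrap the model call with the notices and safety check described above."""
    if SELF_HARM_PATTERNS.search(user_message):
        # Safety protocol: do not pass the message to the model; refer the
        # user to a crisis service provider instead.
        return CRISIS_REFERRAL
    return f"{AI_DISCLOSURE}\n{generate_reply(user_message)}"

if __name__ == "__main__":
    echo = lambda msg: f"(model reply to: {msg!r})"
    print(respond("I feel like I want to end my life", echo))
    print(respond("Can you help me sleep better?", echo))
```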
2. Texas Enacts Broad and Specific AI Requirements
Texas has introduced one of the most comprehensive pieces of AI legislation in the country with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which took effect on January 1. This act establishes a wide array of governance standards for AI systems but includes particularly stringent disclosure requirements for healthcare. Under TRAIGA, licensed medical practitioners are now obligated to provide patients with a conspicuous written disclosure informing them of the provider’s use of AI in their diagnosis or treatment. This notification must be delivered either before or at the time of the clinical interaction. For emergency situations where prior disclosure is not feasible, the law requires that it be provided as soon as reasonably practicable. Beyond disclosure, TRAIGA also prohibits the use of AI systems that are created with the specific intent to discriminate against individuals based on protected characteristics, although it clarifies that a disparate impact alone is not enough to prove discriminatory intent. The enforcement of TRAIGA falls to the Texas Attorney General, who has the authority to levy substantial civil penalties, ranging from $10,000 to $200,000 per violation, with the potential for these fines to accrue daily for ongoing non-compliance.
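One way to picture the disclosure-timing rule is as a check against the encounter record, as in the hypothetical sketch below: the disclosure must precede or coincide with the interaction, with the emergency carve-out treated here as presence of a disclosure at a minimum. The data model and rule are assumptions for illustration, not statutory language.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record of a TRAIGA-style AI-use disclosure for one encounter.
# Field names and the compliance rule are assumptions for this sketch.
@dataclass
class Encounter:
    started_at: datetime
    is_emergency: bool
    disclosure_delivered_at: datetime | None = None

def disclosure_compliant(enc: Encounter) -> bool:
    """Disclosure must come before or at the time of the interaction;
    in an emergency it may follow, as soon as reasonably practicable."""
    if enc.disclosure_delivered_at is None:
        return False
    if enc.is_emergency:
        # Timing is judged case by case; presence of a disclosure is the floor here.
        return True
    return enc.disclosure_delivered_at <= enc.started_at

# Example: disclosure delivered with the visit paperwork, before the encounter.
visit = Encounter(
    started_at=datetime(2026, 1, 15, 9, 0),
    is_emergency=False,
    disclosure_delivered_at=datetime(2026, 1, 15, 8, 45),
)
print(disclosure_compliant(visit))  # True
```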
This robust framework is further supported by a separate Texas law, SB 1188, which became effective on September 1, 2025. This law directly addresses the clinical use of AI, permitting practitioners to leverage these technologies for diagnostic or treatment purposes on the condition that they operate within the scope of their professional license and personally review all AI-generated content or recommendations before any clinical decision is finalized. This provision places the ultimate responsibility squarely on the human practitioner, ensuring that AI serves as a tool to augment, not replace, clinical judgment. Similar to TRAIGA, SB 1188 also mandates that professionals disclose their use of AI to patients, reinforcing the state’s commitment to transparency in healthcare. Together, these two laws create a layered regulatory environment in Texas that prioritizes both patient awareness and practitioner oversight in the integration of artificial intelligence into clinical practice, setting a high bar for accountability.
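The human-review requirement maps naturally onto a workflow gate. The hypothetical sketch below blocks an AI-generated recommendation from being finalized until a licensed practitioner has recorded a personal review; the class and field names are invented for the example and are not drawn from any vendor API.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: an AI-generated recommendation cannot be
# finalized until a licensed practitioner records a personal review.
@dataclass
class AIRecommendation:
    patient_id: str
    content: str
    reviewed_by: str | None = None   # identifier of the reviewing practitioner
    finalized: bool = False

    def record_review(self, practitioner_id: str) -> None:
        """Capture that a licensed practitioner personally reviewed the output."""
        self.reviewed_by = practitioner_id

    def finalize(self) -> None:
        """Refuse to finalize until a review has been recorded."""
        if self.reviewed_by is None:
            raise PermissionError(
                "AI-generated content must be reviewed by a licensed "
                "practitioner before it informs a clinical decision."
            )
        self.finalized = True

# Example: review is recorded first, so finalization succeeds.
rec = AIRecommendation(patient_id="PT-001", content="Suggested care plan ...")
rec.record_review(practitioner_id="TX-MD-12345")
rec.finalize()
```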
3. The Growing Demand for AI Transparency
Beyond rules specifically targeting healthcare, a wave of broader AI transparency laws is creating new obligations that will inevitably impact healthcare organizations. California’s AI Transparency Act (SB 942), for instance, requires “covered providers”—defined as entities with one million or more monthly users—to offer free tools that enable users to determine whether content they are viewing was generated by AI. This has significant implications for large-scale telehealth platforms, patient portals, and healthcare marketing operations that engage with a substantial user base, forcing them to integrate new transparency features into their digital interfaces. In addition, California’s AB 2013 compels developers of generative AI systems to disclose key information about the data used to train their models. This means that vendors selling clinical decision support tools, diagnostic algorithms, or patient communication bots to healthcare organizations must now be prepared to provide detailed answers about the origins and composition of their training datasets, a crucial step in assessing potential biases and ensuring model reliability.
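For vendors, AB 2013-style disclosure lends itself to a machine-readable record that deployers can request and archive. The sketch below shows one possible shape for such a record; the schema and the example entry are assumptions, since the statute describes categories of information rather than a file format.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative machine-readable training-data disclosure of the kind AB 2013
# contemplates. Field names and the example entry are hypothetical.
@dataclass
class DatasetDisclosure:
    name: str
    source: str                        # origin or owner of the dataset
    collection_period: str
    contains_personal_information: bool
    contains_copyrighted_material: bool
    purpose: str

disclosures = [
    DatasetDisclosure(
        name="de-identified clinical notes (hypothetical)",
        source="licensed from a hypothetical data partner",
        collection_period="2018-2023",
        contains_personal_information=False,
        contains_copyrighted_material=False,
        purpose="Fine-tuning a triage-summarization model.",
    ),
]

print(json.dumps([asdict(d) for d in disclosures], indent=2))
```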
The responsibility for adhering to these new transparency mandates does not rest solely with AI vendors; healthcare organizations deploying these technologies remain ultimately accountable for compliance. This new reality necessitates a fundamental shift in how provider organizations approach procurement and governance. Contracts with AI vendors must now include specific clauses addressing data transparency, bias mitigation, and ongoing performance validation. Due diligence practices must evolve to include a thorough investigation of how a vendor’s AI models were trained, tested, and validated. Key questions regarding the sources of training data, the protocols used for bias testing, the controls in place to ensure model accuracy, and the mechanisms for continuous performance monitoring have become essential components of the vendor selection and management process. Organizations can no longer assume that a vendor’s claims of compliance are sufficient and must instead build their own internal governance frameworks to verify and oversee the AI tools integrated into their operations.
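One practical way to operationalize this due diligence is to maintain the key questions as a structured checklist that procurement and governance teams track against vendor responses, as in the illustrative sketch below. The questions paraphrase the themes above and are not an exhaustive or legally vetted list.

```python
# A starting point for AI vendor due-diligence questions, expressed as a
# checklist an internal governance team could track against vendor answers.
VENDOR_DUE_DILIGENCE = {
    "training_data": [
        "What data sources were used to train the model?",
        "Was any protected health information included, and under what authority?",
    ],
    "bias_testing": [
        "Which patient populations were evaluated for performance disparities?",
        "What thresholds trigger remediation?",
    ],
    "accuracy_controls": [
        "How was the model validated against clinical ground truth?",
        "What is the process for revalidation after model updates?",
    ],
    "monitoring": [
        "What performance metrics are reported to deployers, and how often?",
        "How are drift and incident reports escalated?",
    ],
}

def open_items(answers: dict[str, dict[str, str]]) -> list[str]:
    """Return checklist questions the vendor has not yet answered."""
    pending = []
    for topic, questions in VENDOR_DUE_DILIGENCE.items():
        for q in questions:
            if not answers.get(topic, {}).get(q):
                pending.append(f"[{topic}] {q}")
    return pending

# Example: a vendor packet that only answers the training-data questions.
partial_answers = {"training_data": {q: "See appendix A" for q in VENDOR_DUE_DILIGENCE["training_data"]}}
print(len(open_items(partial_answers)), "questions still open")
```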
4. Consumer Privacy Laws Expand Their Reach
The national trend toward stronger consumer data privacy rights continues with new laws in Indiana, Kentucky, and Rhode Island that took effect on January 1. These three states have adopted legislation modeled closely on the Virginia Consumer Data Protection Act (VCDPA), which has become a de facto template for state-level privacy regulation. This model grants consumers a suite of powerful rights, including the ability to access, correct, delete, and obtain a portable copy of their personal data. Critically for the age of AI, these laws also provide consumers with the right to opt out of the processing of their data for purposes of targeted advertising, data sales, and, most importantly, profiling that produces legal or similarly significant effects. This last provision directly impacts how organizations can use AI to make automated decisions about individuals, requiring them to provide a clear and accessible opt-out mechanism for such activities. The laws also mandate that organizations conduct and document data protection impact assessments for any data processing activities deemed high-risk, which explicitly includes profiling.
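In engineering terms, the opt-out right and the impact-assessment requirement can be enforced as simple gates ahead of any automated decisioning, as in the hypothetical sketch below; the purpose labels and function names are assumptions for illustration.

```python
# Sketch of an opt-out gate ahead of profiling that produces legal or similarly
# significant effects, plus a simple trigger for a documented data protection
# impact assessment. Purpose labels and function names are hypothetical.
HIGH_RISK_PURPOSES = {"targeted_advertising", "data_sale", "significant_profiling"}

def may_process(purpose: str, consumer_opt_outs: set[str]) -> bool:
    """Block a processing purpose the consumer has opted out of."""
    return purpose not in consumer_opt_outs

def dpia_required(purpose: str) -> bool:
    """High-risk processing, including profiling, requires a documented assessment."""
    return purpose in HIGH_RISK_PURPOSES

# Example: a consumer who has opted out of significant profiling.
opt_outs = {"significant_profiling"}
print(may_process("significant_profiling", opt_outs))  # False
print(dpia_required("significant_profiling"))          # True
```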
For the healthcare sector, the interplay between these new privacy laws and existing regulations like HIPAA is complex. The good news for HIPAA-regulated entities is that all three VCDPA-style laws provide specific exemptions for protected health information (PHI). They also contain carve-outs for covered entities and their business associates, but only when they are acting within the scope of HIPAA. This is not a blanket exemption for the entire healthcare organization. Any data processing activities that fall outside the purview of HIPAA—such as data collected for marketing purposes, information from non-clinical wellness apps, or data from website visitors who are not yet patients—may be subject to these new consumer privacy laws. Consequently, healthcare organizations must conduct a careful analysis of all their data flows to identify any processing that could trigger these new obligations, ensuring that they have the necessary consent mechanisms, disclosure protocols, and impact assessments in place for non-HIPAA data.
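A data-flow inventory of this kind can be as simple as tagging each flow by whether it involves PHI handled within the scope of HIPAA, then routing everything else to review under the new state laws. The sketch below illustrates that triage with invented example flows; it is not legal guidance on where any particular data set falls.

```python
from dataclasses import dataclass

# Illustrative triage of data flows: flows handled as PHI within the scope of
# HIPAA fall under the exemptions described above; everything else is screened
# against the new state consumer privacy laws. The example flows are invented.
@dataclass
class DataFlow:
    name: str
    is_phi: bool                    # protected health information
    within_hipaa_scope: bool        # processed by a covered entity/BA as such

def needs_state_privacy_review(flow: DataFlow) -> bool:
    return not (flow.is_phi and flow.within_hipaa_scope)

flows = [
    DataFlow("EHR clinical records", is_phi=True, within_hipaa_scope=True),
    DataFlow("marketing email analytics", is_phi=False, within_hipaa_scope=False),
    DataFlow("wellness app activity data", is_phi=False, within_hipaa_scope=False),
    DataFlow("prospective-patient website visits", is_phi=False, within_hipaa_scope=False),
]

for f in flows:
    status = ("review under new state privacy laws"
              if needs_state_privacy_review(f)
              else "HIPAA exemption likely applies")
    print(f"{f.name}: {status}")
```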
5. Federal Intervention Creates New Uncertainty
Just as healthcare organizations were finalizing their strategies for complying with the new wave of state AI laws, the White House introduced a significant element of uncertainty. On December 11, 2025, an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” was signed, signaling a major push by the federal government to preempt the patchwork of state regulations and establish a “single national framework” for AI. The order directs the U.S. Attorney General to create an AI Litigation Task Force within 30 days. This task force is charged with actively challenging state AI laws that the administration views as inconsistent with federal policy. The order specifies that such challenges can be based on the argument that state laws unconstitutionally regulate interstate commerce or are preempted by existing federal regulations. Furthermore, the U.S. Secretary of Commerce has been tasked with identifying “onerous” state AI laws within 90 days, with the order specifically citing Colorado’s AI Act as an example of what it considers problematic state-level overreach.
This executive order does not immediately nullify any existing state laws, and legal experts have already suggested that the order itself will face significant legal challenges. However, it creates a highly unpredictable environment for the laws that just took effect on January 1. The federal government’s stated intention to actively oppose the enforcement of certain state AI requirements introduces a new layer of risk and complexity for organizations striving for compliance. For now, the new laws in California, Texas, and other states remain on the books and are legally enforceable. Healthcare organizations are therefore advised to continue with their compliance preparations while closely monitoring developments at the federal level. The tension between state-led regulation and the federal push for a unified framework is unlikely to be resolved quickly, meaning that organizations operating across multiple states will need to navigate this evolving and potentially contentious legal landscape with caution and diligence for the foreseeable future.
6. Navigating the New Regulatory Landscape
Confronted with this complex legal environment at the start of the year, leading healthcare organizations are initiating comprehensive audits of their patient-facing artificial intelligence systems. These reviews aim to ensure that all AI tools comply with the new state-mandated transparency and disclosure protocols, particularly those designed to prevent AI from being mistaken for a licensed human professional. Internal teams are reassessing data processing activities across the enterprise to determine which operations fall outside the scope of HIPAA and are therefore subject to the new consumer privacy laws in states like Indiana and Kentucky. These organizations are also establishing robust internal monitoring systems dedicated to tracking the rapidly shifting state and federal developments in AI governance. By investing in this critical compliance infrastructure early, they position themselves to adapt to the new era of AI regulation in medicine and to mitigate the risks of a landscape marked by both state-level action and federal-state tension.
