New AI disclosure requirements for physicians passed into Texas law
The 2025 Texas legislative session saw the passage of two bills into law that establish clear disclosure rules for physicians using artificial intelligence (AI) when seeing patients. These rules must be followed in conjunction with existing HIPAA regulations.
- Senate Bill 1188, which took effect September 1, 2025, sets the following parameters for health care practitioners who use AI in a diagnostic capacity. In summary, AI use is allowed only if:
  - the practitioner is acting within the scope of their license;
  - the particular use of AI is not prohibited by law;
  - the practitioner reviews all AI-generated records consistent with Texas Medical Board standards; and
  - the practitioner discloses their use of AI technology to patients.
- Violations can result in substantial financial penalties and license suspension or revocation for repeat offenders.
- House Bill 149, known as the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, takes effect January 1, 2026, and requires health care providers to inform patients or their guardians when AI systems are being used in their treatment “not later than the date of service or treatment,” or as soon as reasonably possible in an emergency. According to the bill, these disclosures must be clear, conspicuous, and written in plain language.
Several practical steps are recommended for managing AI use in medical practice.
- Review Senate Bill 1188 and House Bill 149 in their entirety for the full list of requirements.
- Establish or update governance policies covering AI use, training, and audit trails. The new laws do not offer specific guidance on implementation, so physicians must establish their own disclosure procedures for their practices.
These procedures must be used alongside existing HIPAA compliance and disclosure protocols. While HIPAA doesn't explicitly cover AI, it does provide guidelines for safeguarding protected health information (PHI) in technology applications. AI vendors handling PHI typically qualify as business associates under HIPAA, requiring formal business associate agreements that outline responsibilities for data protection, technical safeguards, and breach reporting.
The American Medical Association has released guidance on its website for health systems looking to develop and implement AI within their organizations.
- Fully document AI's role if it is used to make clinical decisions. Record as much information as possible, including the date, the application used, the patient's condition, and the reasoning behind accepting or disagreeing with the AI's recommendations or conclusions (see the sketch after this list).
- Practice transparent communication with patients about AI use. Provide alternative treatment options for patients who opt out of care that involves AI.
- Ensure AI tools do not use your patients' PHI to train other AI tools. While HIPAA permits physicians to use PHI for treatment, payment, and certain health care operations without patient authorization, AI model training may not fall under these permissions. Physicians should verify whether vendors use patient data for AI training or sell it to third parties; either practice may violate HIPAA requirements.
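For practices that log this documentation electronically, the sketch below shows one way an audit-trail entry might be structured, using Python for illustration. It is a minimal sketch only: the AIUseRecord class and its field names are hypothetical and are not prescribed by Senate Bill 1188, House Bill 149, or any guidance cited above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    """One illustrative audit-trail entry documenting AI involvement in a
    clinical decision. Field names are hypothetical, not mandated by law."""
    patient_id: str          # internal identifier, handled per HIPAA safeguards
    ai_application: str      # name and version of the AI tool used
    patient_condition: str   # condition or encounter context
    ai_recommendation: str   # what the AI suggested or generated
    physician_action: str    # e.g., "accepted", "modified", or "rejected"
    rationale: str           # reasoning behind agreement or disagreement
    disclosure_made: bool    # whether AI use was disclosed to the patient
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry showing the kinds of details worth capturing
record = AIUseRecord(
    patient_id="internal-12345",
    ai_application="ExampleDx v2.1",  # hypothetical tool name
    patient_condition="suspected community-acquired pneumonia",
    ai_recommendation="chest X-ray plus empiric antibiotics",
    physician_action="modified",
    rationale="ordered imaging first; deferred antibiotics pending results",
    disclosure_made=True,
)
print(record)
```

However a practice chooses to record these details, the goal is the same: a contemporaneous, reviewable record of what the AI tool recommended, what the physician decided, and whether the required disclosure was made.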
To support physicians navigating these requirements, the Texas Medical Association has developed an AI Resource Page featuring a vendor evaluation tool, policy templates, educational articles, podcasts, and continuing medical education courses.
Additional resources
- American Medical Association: “Governance for Augmented Intelligence”
- Texas Medical Association: “Physicians must disclose AI use alongside existing HIPAA requirements, per state laws”
- Texas Medical Association: “TMA develops new AI education”
- TMLT: Free CME: “The AI revolution in medicine”
- TMLT: “Using AI medical scribes: Risk management considerations”
- Medical Economics: “The new malpractice frontier: Who’s liable when AI gets it wrong?”
- JAMA: “Ethical obligations to inform patients about use of AI tools”