From retinal image analyzers to automated charting assistants, artificial intelligence (AI) is rapidly reshaping ophthalmic practice through tools that support diagnosis, documentation, and clinical decision-making. Yet as these technologies proliferate, practices must navigate 2 parallel legal frameworks: (1) fraud and abuse risk, highlighted most visibly in the 2020 Practice Fusion enforcement action, and (2) US Food and Drug Administration (FDA) regulation of certain clinical decision support (CDS) tools as medical devices.
Together, these frameworks underscore a consistent theme: AI tools must be independent, clinically grounded, and transparently designed. Below, we outline the key regulatory considerations for ophthalmology practices evaluating CDS tools.
Fraud and Abuse Lessons From the Practice Fusion Enforcement Action
In United States ex rel. Osinek v. Practice Fusion, Inc., the government alleged that a major electronic health record (EHR) vendor allowed pharmaceutical manufacturers to influence clinical decision support alerts. The resulting settlement, announced January 27, 2020, is highly instructive for any specialty adopting AI-enabled recommendations or documentation. As ophthalmology practices increasingly rely on AI to suggest testing frequency, imaging, or treatment pathways, 4 themes from the case are particularly relevant.
1. Guard Against Commercial Influence in AI Logic
The Practice Fusion case centered on allegations that manufacturers helped design CDS alerts that steered prescribing. In ophthalmology, where AI tools may recommend optical coherence tomography (OCT) frequency, diagnostic tests, or therapeutic products, even subtle manufacturer involvement can raise compliance risks.
Best practices include:
- Asking vendors to identify all third-party funding or commercial partnerships involved in model development.
- Requesting clarity on training data sources and whether any manufacturer or supplier influenced algorithm design.
- Ensuring that tools offering recommendations provide the practice with access to the underlying clinical rationale, not merely the output.
2. Evaluate Whether AI “Nudges” Create Anti-Kickback Statute Risk
AI-generated prompts that encourage ordering reimbursable services or selecting particular products can carry Anti-Kickback Statute (AKS) implications if a manufacturer benefits financially from those nudges.
Safeguards include:
- Confirming that recommendations are independent and clinically justified, not product specific.
- Reviewing whether nudges could be seen as steering utilization toward a manufacturer's reimbursable services or supplies.
- Documenting internal review of vendor attestations and development processes.
3. Use Caution With AI-Generated Documentation
AI transcription and summarization tools can inadvertently introduce inaccurate or overstated clinical findings. Even well-vetted AI systems may auto-populate exam components or assessment details that the clinician did not actually perform. Such inaccuracies can expose practices to False Claims Act (FCA) liability if documentation and billing do not match the services actually rendered.
Recommended precautions:
- Treat AI-generated notes as drafts requiring active clinician review and correction.
- Train physicians and technicians on common error patterns.
- Disable autofill features that populate exam elements not explicitly confirmed by the provider.
4. Align Automated Documentation With Coverage and Medical Necessity Rules
Ophthalmology reimbursement is heavily dependent on documented medical necessity, such as justification for imaging, extended ophthalmoscopy, or higher-level evaluation and management codes. AI-generated justification language should not be used unless it accurately reflects real clinical findings and payer-specific coverage criteria. A brief compliance review focused on vendor independence, documentation accuracy, and payer alignment can help practices adopt AI responsibly and avoid the pitfalls seen in the Practice Fusion matter.
FDA Regulation of Ophthalmic Clinical Decision Support Tools
In addition to fraud and abuse considerations, ophthalmology practices implementing AI must understand when an AI-enabled CDS tool is regulated by the FDA as a medical device.
When Is a CDS Tool a “Device”?
Under the Federal Food, Drug, and Cosmetic Act (FDCA), a software product is considered a “device” if it is intended for use in the diagnosis, cure, mitigation, treatment, or prevention of a disease or condition and does not achieve its primary intended purposes through chemical action or metabolism within the body (see 21 U.S.C. 321[h]). Although software products can fall within this definition, certain software functions are excluded under the 21st Century Cures Act (Cures Act).
To be considered a nondevice CDS under the Cures Act, a software function must satisfy all 4 of the following criteria (a minimal screening sketch follows the list):
- Not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system (eg, an MRI);
- Intended to display, analyze, or print medical information about a patient or other medical information;
- Intended to support or provide recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
- Intended to allow a health care professional to independently review the basis for the recommendations (see 21 U.S.C. 360j[o][1][E]).
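As a purely illustrative aid, and not a substitute for legal or regulatory analysis, the sketch below shows one way a practice might record this 4-part screen during tool intake. The CuresActScreen class and its field names are hypothetical, invented for this example; the criteria simply mirror the statutory elements summarized above, and a passing screen is only a starting point given the FDA's interpretations discussed next.

```python
# Minimal, hypothetical sketch of an initial Cures Act screen for a candidate
# CDS tool. The CuresActScreen class and field names are invented for this
# illustration; the 4 criteria mirror 21 U.S.C. 360j(o)(1)(E) as summarized above.

from dataclasses import dataclass

@dataclass
class CuresActScreen:
    analyzes_image_or_signal: bool        # criterion 1 (must be False to pass)
    displays_medical_information: bool    # criterion 2
    supports_hcp_recommendations: bool    # criterion 3
    basis_independently_reviewable: bool  # criterion 4

    def likely_nondevice(self) -> bool:
        """All 4 criteria must be satisfied for the nondevice CDS exemption."""
        return (
            not self.analyzes_image_or_signal
            and self.displays_medical_information
            and self.supports_hcp_recommendations
            and self.basis_independently_reviewable
        )

# Example: an AI tool that analyzes retinal images fails criterion 1,
# so it would not qualify for the exemption under this initial screen.
retinal_ai = CuresActScreen(
    analyzes_image_or_signal=True,
    displays_medical_information=True,
    supports_hcp_recommendations=True,
    basis_independently_reviewable=True,
)
print(retinal_ai.likely_nondevice())  # False -> likely a regulated device
```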
FDA’s Narrow Interpretation Makes the Exemption Hard to Meet
For several years, the FDA adopted a narrow interpretation of the Cures Act CDS exemption, making it challenging for ophthalmic and other CDS tools to fall within its scope. For example, in its final CDS guidance issued in 2022, the FDA took the position that software functions providing a single, specific diagnostic or treatment output or recommendation do not satisfy criterion 3. Under that reading, a software function that informs an ophthalmologist that a patient may be exhibiting signs or symptoms of a disease (eg, a specific retinal disorder), or that provides a risk score or probability of the patient developing or having a particular disease or condition, would fail criterion 3 and thus be a regulated device. Rather, as the FDA interprets the Cures Act, a nondevice CDS function is one that provides a health care professional with multiple recommendations, such as multiple treatment or diagnostic options for consideration.
However, in an effort to “promote more innovation with [artificial intelligence] and medical devices,” the FDA recently updated its CDS guidance to expand the scope of CDS tools that can be marketed without complying with the FDA’s medical device requirements.1 In guidance issued in January 2026, the FDA made major interpretive changes to criterion 3. Specifically, the agency adopted a limited enforcement discretion policy for CDS software functions that provide a single output or recommendation in scenarios where only a single option is “clinically appropriate.” Although the FDA does not define “clinically appropriate,” the guidance provides illustrative examples of software functions that would fall within the scope of the policy. Importantly, the enforcement discretion is not a blanket policy: CDS software functions that provide a single recommendation would remain the focus of the FDA’s oversight if, for example, the software predicts the risk of a time-critical event or analyzes a medical image or signal in generating the recommendation (eg, CDS software that analyzes a retinal scan).
Additionally, to satisfy criterion 4, the FDA recommends that the software or labeling provide a plain language description of the underlying algorithm development and validation that forms the basis for the CDS implementation, which can be challenging for CDS that incorporates AI. Given these interpretations, many AI-driven ophthalmic tools, particularly those evaluating retinal images, are considered regulated devices.
Regulated Devices Face Tiered Requirements
If a CDS tool does not meet the Cures Act exemption (and does not fall within the FDA’s enforcement discretion policy), it is regulated as a device and subject to tiered requirements based on its risk classification (class I–III). Devices in all 3 classes are subject to certain general controls (eg, facility registration, safety reporting, labeling requirements, quality system requirements, and investigational device requirements that apply during clinical development), but class I devices generally do not require premarket authorization. Class II devices typically require 510(k) clearance, and class III devices require premarket approval. A novel technology that does not fall within an existing FDA device classification is class III by default but may be eligible for the de novo classification pathway.
Examples Relevant to Ophthalmology
The FDA has authorized numerous ophthalmic CDS tools, including ones that incorporate AI. Several such products fall under the FDA’s classification regulation for “retinal diagnostic software devices” (21 C.F.R. 886.1100). A recent example is an April 2024 clearance issued to AEYE Health Inc. for the AEYE-DS, a retinal diagnostic software device that uses an algorithm to evaluate retinal images for diagnostic screening to identify retinal diseases or conditions. Specifically, the AEYE-DS is indicated for use by health care providers to automatically detect more-than-mild diabetic retinopathy in adults diagnosed with diabetes who have not previously been diagnosed with diabetic retinopathy.
Consequences of Noncompliance
Failure to comply with FDA requirements can lead to administrative and enforcement actions, including public warning letters, FDA inspections, or civil and criminal liability. Where there is uncertainty, practices or vendors can engage with the FDA for feedback through a 513(g) request for information or a presubmission meeting.
Bringing It All Together: A Framework for Responsible AI Adoption in Ophthalmology
As AI technologies expand, practices can mitigate legal risk by incorporating both fraud-and-abuse and FDA considerations into procurement, contracting, and implementation processes.
- Evaluate Vendor Independence: Review vendor disclosures regarding commercial relationships, data sources, and model training.
- Review and Validate AI-Generated Documentation: Educate clinicians on the limitations of automated documentation tools and require provider oversight before submission to payers.
- Assess Recommended Testing or Treatment Logic: Watch for overutilization risks or product-specific nudges.
- Determine Whether the Tool Is FDA-Regulated: Use the Cures Act criteria as an initial screen, but apply the FDA’s narrow interpretations, recognizing that many ophthalmic image-based tools will be regulated devices.
- Perform Periodic Compliance Reviews: Monitor vendor conduct, documentation patterns, and regulatory status (a simple tracking sketch follows this list).
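For practices that track these steps in procurement or compliance software, the hypothetical sketch below shows one way the framework could be recorded per tool. The AIToolReview structure and its field names are invented for this illustration and are not drawn from any specific product, guidance document, or regulatory requirement.

```python
# Hypothetical sketch of tracking the 5 framework steps above as a recurring
# checklist for each AI tool a practice evaluates. Structure and field names
# are invented for illustration only.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIToolReview:
    tool_name: str
    vendor_independence_reviewed: bool = False       # step 1
    documentation_validation_in_place: bool = False  # step 2
    recommendation_logic_assessed: bool = False      # step 3
    fda_status_determined: bool = False              # step 4
    last_compliance_review: Optional[date] = None    # step 5

    def open_items(self) -> list:
        """Return the framework steps that still need attention."""
        checks = {
            "vendor independence": self.vendor_independence_reviewed,
            "documentation validation": self.documentation_validation_in_place,
            "recommendation logic": self.recommendation_logic_assessed,
            "FDA regulatory status": self.fda_status_determined,
            "periodic compliance review": self.last_compliance_review is not None,
        }
        return [step for step, done in checks.items() if not done]

# Example: a new documentation tool with only the vendor review completed.
review = AIToolReview(tool_name="example scribe tool",
                      vendor_independence_reviewed=True)
print(review.open_items())
```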
AI is poised to enhance efficiency and diagnostic precision across ophthalmology. But as the Practice Fusion matter illustrates, the benefits of AI must be balanced with careful oversight of compliance, documentation, and regulatory risks. Layering these considerations with the FDA’s evolving framework for CDS tools will allow ophthalmology practices to leverage AI responsibly while protecting patients, payers, and clinicians. OM
Reference
1. US Food and Drug Administration. Clinical decision support software guidance for industry and Food and Drug Administration staff. January 2026. Accessed January 23, 2026. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software