The information provided in this article is for educational and informational purposes only and does not constitute legal, financial, or tax advice. No attorney-client relationship is formed by reading this content. Laws and regulations vary by jurisdiction and change frequently; always consult with a qualified professional regarding your specific situation. The author and publisher assume no liability for any actions taken based on this information.
- Deepfakes are synthetic media that can imitate real people
- Privacy harms linked to deepfakes often involve identity and consent
- Federal law addresses deepfakes through several different legal tools
- The TAKE IT DOWN Act created federal rules for nonconsensual intimate deepfakes
- A separate federal civil lawsuit exists for some nonconsensual intimate images
- Federal robocall rules can reach AI voice cloning used in calls
- Consumer protection and fraud laws can apply to deepfake scams
- Federal cyberstalking and computer crime laws can also be relevant
- Section 230 can affect civil claims against online platforms
- State law often fills gaps in privacy and name, image, and likeness rights
- Sources
Key Facts
- Federal level: Deepfakes are digital images, audio, or video that are artificially generated or modified to appear real and may be used to impersonate a person.
- Federal level: The TAKE IT DOWN Act added federal crimes for certain nonconsensual intimate images and “digital forgeries” in 47 U.S.C. § 223(h).
- Federal level: The same Act also created a federal “notice and removal” obligation for certain online services in 47 U.S.C. § 223a.
- Federal level: Federal law also provides a civil lawsuit option for some nonconsensual disclosures of intimate images in 15 U.S.C. § 6851.
- Federal level: The Telephone Consumer Protection Act restricts certain calls using an “artificial or prerecorded voice” under 47 U.S.C. § 227.
- Federal level: The FCC has stated that the TCPA’s “artificial or prerecorded voice” limits cover current AI voice generation in FCC 24-17.
- Federal level: The FTC can bring cases for unfair or deceptive acts or practices in commerce under 15 U.S.C. § 45, which can intersect with some deepfake-related scams.
- Federal and state: Many privacy and “name, image, likeness, and voice” rights are still mostly governed by state law and can vary by state.
As of January 2026: This article describes federal statutes and agency actions that can change through new laws, rulemaking, and court decisions, including deadlines and timing rules that may be updated.
Deepfakes are synthetic media that can imitate real people
Deepfakes are commonly discussed as synthetic or AI-altered images, audio, or video that can make it look or sound like a real person said or did something that did not happen. Because deepfakes can copy a person’s face and voice, the legal issues often involve identity, consent, deception, and reputational harm.
Privacy harms linked to deepfakes often involve identity and consent
Many privacy concerns around deepfakes come from uses that imitate someone without permission, especially in intimate-image, harassment, or fraud contexts. In legal terms, the same deepfake content can raise different issues depending on how it is created, shared, and used, and whether it connects to interstate commerce or an online service that federal law regulates.
Federal law addresses deepfakes through several different legal tools
There is no single federal “deepfake law” that covers every situation, so federal coverage is usually pieced together from different areas of law. In practice, the most directly relevant federal rules tend to involve nonconsensual intimate images, consumer deception, and communications rules (like restrictions on certain robocalls).
The TAKE IT DOWN Act created federal rules for nonconsensual intimate deepfakes
In 2025, Congress enacted the TAKE IT DOWN Act, which added a set of federal crimes aimed at the nonconsensual publication of certain intimate images, including certain “digital forgeries,” a term the statute specifically defines. The Act’s definitions, including “consent,” “identifiable individual,” and “digital forgery,” appear in 47 U.S.C. § 223(h).
The same Act also created a platform-focused “notice and removal” framework in 47 U.S.C. § 223a that applies to certain “covered platforms” and describes what a written notice and removal request must include. The statute also sets a required time window for removal after a valid request and states that the FTC enforces compliance by treating certain failures as a type of unfair or deceptive act or practice under the FTC Act.
The timing provisions in 47 U.S.C. § 223a include an obligation for covered platforms to establish the required process not later than one year after May 19, 2025, and a removal deadline measured in hours from receipt of a valid request. These deadlines are especially important to read directly in the statute, because small wording differences can affect how the requirement is understood.
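For engineers sketching a compliance workflow around this kind of framework, the deadline logic reduces to simple date arithmetic over a structured notice. The Python sketch below is illustrative only and is not a statement of the law: the field names, the 48-hour value, and the overall shape are assumptions chosen for demonstration, and the actual required notice contents and removal window must be taken from the text of 47 U.S.C. § 223a itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative only: the real notice contents and removal window are set by
# 47 U.S.C. § 223a. The 48-hour figure below is an assumption for this sketch;
# confirm the exact number and its trigger in the statute.
REMOVAL_WINDOW_HOURS = 48

@dataclass
class RemovalRequest:
    """Loosely modeled on the kinds of items a written notice might include
    (a signature, identification of the depiction, a good-faith statement,
    and contact information); not a restatement of the statutory list."""
    signature: str
    depiction_identifier: str  # e.g., a URL locating the material
    good_faith_statement: str
    contact_info: str
    received_at: datetime      # when the platform received the notice

def removal_deadline(request: RemovalRequest) -> datetime:
    """Latest removal time, assuming the window runs from receipt."""
    return request.received_at + timedelta(hours=REMOVAL_WINDOW_HOURS)

request = RemovalRequest(
    signature="J. Doe",
    depiction_identifier="https://example.com/post/123",
    good_faith_statement="I did not consent to this publication.",
    contact_info="jdoe@example.com",
    received_at=datetime(2026, 1, 5, 9, 30, tzinfo=timezone.utc),
)
print("Remove by:", removal_deadline(request).isoformat())
```

Using timezone-aware timestamps, as in the example, avoids off-by-hours errors when a deadline measured in hours crosses time zones or daylight-saving transitions.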
A separate federal civil lawsuit exists for some nonconsensual intimate images
Separate from the TAKE IT DOWN Act, federal law also provides a civil cause of action for certain nonconsensual disclosures of “intimate visual depictions” in 15 U.S.C. § 6851. This law uses defined terms such as “disclose,” “consent,” and “depicted individual,” and it also contains statutory exceptions (including for certain disclosures tied to law enforcement, legal proceedings, medical education or treatment, and matters of public concern or public interest).
When the statute applies, the relief described can include actual damages or liquidated damages in the amount of $150,000, along with costs and potentially attorney’s fees, and the court may also order certain injunctive relief. Because this is a federal civil claim with defined elements and exceptions, whether it fits a given situation can depend heavily on the specific facts and how courts interpret the statute.
Federal robocall rules can reach AI voice cloning used in calls
Deepfake audio is often discussed in the context of voice cloning for scams and impersonation, including calls that mimic a family member, executive, or public figure. One federal law that can intersect with that risk is the TCPA, which restricts certain calls made using an “artificial or prerecorded voice” under 47 U.S.C. § 227.
In 2024, the FCC issued a Declaratory Ruling concluding that the TCPA’s restrictions on “artificial or prerecorded voice” encompass current AI technologies that generate human voices. That agency interpretation matters because it clarifies how the FCC views AI voice generation under existing statutory language.
Consumer protection and fraud laws can apply to deepfake scams
Some deepfake-related harms show up as scams in the marketplace, such as deceptive advertising, impersonation tied to financial loss, or misleading claims about what is real. At the federal level, the FTC’s core consumer protection authority includes the prohibition on “unfair or deceptive acts or practices” in or affecting commerce in 15 U.S.C. § 45.
Depending on the conduct, criminal fraud statutes may also come into play, including the federal wire fraud statute in 18 U.S.C. § 1343, which concerns schemes to defraud that use interstate wire, radio, or television communications. Whether any specific deepfake activity fits those elements is a fact-specific question that courts ultimately decide.
Federal cyberstalking and computer crime laws can also be relevant
Deepfakes can also be connected to harassment, threats, or sustained targeting online. One federal law that can be relevant to stalking-like conduct using electronic systems is 18 U.S.C. § 2261A, which covers stalking and includes certain uses of “interactive computer service” or other facilities of interstate or foreign commerce.
In other situations, deepfakes may be part of a larger cyber incident, such as account compromise, credential theft, or unauthorized access used to obtain data that is then used to build more convincing synthetic content. Federal computer crime law includes the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, which covers a range of computer-related offenses and provides both criminal enforcement and, in some circumstances, a civil action.
Section 230 can affect civil claims against online platforms
When deepfakes are posted online, questions often arise about whether a platform can be held responsible for user-posted content. One federal statute frequently discussed in that context is 47 U.S.C. § 230, which contains limits on treating certain providers or users as the publisher or speaker of information provided by another information content provider, along with specific exceptions and carve-outs.
Section 230 is complex, and its application can depend on how a claim is framed, what the platform allegedly did, and how courts interpret the specific facts. In addition, newer federal statutes can sometimes add duties or enforcement mechanisms that operate alongside Section 230 rather than replacing it.
State law often fills gaps in privacy and name, image, and likeness rights
Even when federal law applies, many deepfake disputes still involve state-law issues such as privacy torts, right of publicity, defamation, or state criminal laws related to harassment and intimate images. In general, “name, image, likeness, and voice” protections are primarily state-based, and the details can vary widely by state.