Grammarly Scraps AI Tool Mimicking Famous Authors

Grammarly removed a recently introduced AI feature that enabled users to replicate the style of well-known authors after facing swift backlash from writers, publishers, and digital rights advocates.

March 30, 2026

A controversy surrounding generative AI ethics has intensified after Grammarly withdrew a feature that allowed users to imitate the writing style of specific authors. The decision followed widespread criticism from writers and industry groups, highlighting growing tensions between AI innovation and intellectual property protections.

The recently introduced tool allowed users to generate text mimicking the tone and style of recognizable authors, raising concerns that the technology could be used to replicate creative voices without permission.

Critics argued the feature risked undermining authors' rights and misrepresenting original creators. Following the criticism, Grammarly confirmed it had withdrawn the feature, emphasizing that the company aims to build AI tools that support rather than replace human creativity. The move reflects increasing scrutiny of how generative AI models replicate artistic and literary styles.

The episode underscores a broader global debate about generative AI and intellectual property rights. As AI systems become capable of producing text, images, and music that resemble the work of specific creators, legal and ethical questions are emerging across the creative economy.

Technology companies developing AI writing tools, including OpenAI, Google, and Microsoft, have increasingly faced scrutiny over how their models are trained and how closely they can replicate human creative styles.

Authors, artists, and publishers have warned that AI systems could replicate distinctive creative voices without compensation or attribution. Several lawsuits and regulatory debates are already underway in major markets, including the United States and Europe. The Grammarly incident reflects the delicate balance technology firms must strike between innovation and protecting intellectual property in an increasingly AI-driven content ecosystem.

Experts in technology policy and copyright law say the backlash illustrates growing sensitivity around AI-generated content that imitates identifiable creators. Industry analysts note that while generative AI systems often learn from vast datasets of publicly available content, reproducing distinctive styles can raise legal concerns related to copyright, personality rights, and creative ownership.

Grammarly indicated that its goal is to enhance writing productivity rather than imitate specific individuals. The company emphasized that it continues to refine its AI systems to ensure they respect creative boundaries and user trust. Meanwhile, publishing groups and author organizations have urged technology companies to establish clearer safeguards preventing AI tools from directly mimicking living or recognizable writers. Experts say these debates are likely to shape future regulations governing generative AI development and deployment.

For technology companies, the controversy highlights the growing reputational and regulatory risks associated with generative AI features. Firms introducing AI-powered creative tools must increasingly evaluate how those tools interact with copyright law and creator rights. For investors and corporate leaders, the incident demonstrates how ethical considerations can quickly influence product strategy and public perception in the AI sector.

Governments and regulators are also closely monitoring how generative AI systems handle intellectual property. Policymakers may introduce new guidelines governing training data, style replication, and attribution requirements. Companies developing AI writing tools may need to implement stronger safeguards to prevent unauthorized imitation of identifiable creative voices.

Looking ahead, debates around AI-generated content and creative ownership are likely to intensify as generative models become more sophisticated. Technology companies will face increasing pressure to balance innovation with ethical safeguards and legal compliance. For executives and policymakers alike, the challenge will be establishing frameworks that encourage AI development while protecting the rights and livelihoods of human creators.

Source: BBC News
Date: March 12, 2026

