Grammarly Scraps AI Tool Mimicking Famous Authors

Grammarly removed a recently introduced AI feature that enabled users to replicate the style of well-known authors after facing swift backlash from writers, publishers, and digital rights advocates.

March 30, 2026

A controversy surrounding generative AI ethics has intensified after Grammarly withdrew a feature that allowed users to imitate the writing style of specific authors. The decision followed widespread criticism from writers and industry groups, highlighting growing tensions between AI innovation and intellectual property protections.

The tool allowed users to generate text that mimicked the tone and style of recognizable authors, raising concerns that the technology could be used to replicate creative voices without permission.

Critics argued the feature risked undermining author rights and misrepresenting original creators. Following the criticism, Grammarly confirmed it had withdrawn the feature and emphasized that the company aims to build AI tools that support rather than replace human creativity. The move reflects increasing scrutiny of how generative AI models replicate artistic or literary styles.

The episode underscores a broader global debate about generative AI and intellectual property rights. As AI systems become capable of producing text, images, and music that resemble the work of specific creators, legal and ethical questions are emerging across the creative economy.

Technology companies developing AI writing tools, including OpenAI, Google, and Microsoft, have increasingly faced scrutiny over how their models are trained and how closely they can replicate human creative styles.

Authors, artists, and publishers have warned that AI systems could replicate distinctive creative voices without compensation or attribution. Several lawsuits and regulatory debates are already underway in major markets including the United States and Europe. The Grammarly incident reflects the delicate balance technology firms must strike between innovation and protecting intellectual property in an increasingly AI-driven content ecosystem.

Experts in technology policy and copyright law say the backlash illustrates growing sensitivity around AI-generated content that imitates identifiable creators. Industry analysts note that while generative AI systems often learn from vast datasets of publicly available content, reproducing distinctive styles can raise legal concerns related to copyright, personality rights, and creative ownership.

Grammarly indicated that its goal is to enhance writing productivity rather than imitate specific individuals. The company emphasized that it continues to refine its AI systems to ensure they respect creative boundaries and user trust. Meanwhile, publishing groups and author organizations have urged technology companies to establish clearer safeguards preventing AI tools from directly mimicking living or recognizable writers. Experts say these debates are likely to shape future regulations governing generative AI development and deployment.

For technology companies, the controversy highlights the growing reputational and regulatory risks associated with generative AI features. Firms introducing AI-powered creative tools must increasingly evaluate how those tools interact with copyright law and creator rights. For investors and corporate leaders, the incident demonstrates how ethical considerations can quickly influence product strategy and public perception in the AI sector.

Governments and regulators are also closely monitoring how generative AI systems handle intellectual property. Policymakers may introduce new guidelines governing training data, style replication, and attribution requirements. Companies developing AI writing tools may need to implement stronger safeguards to prevent unauthorized imitation of identifiable creative voices.

Looking ahead, debates around AI-generated content and creative ownership are likely to intensify as generative models become more sophisticated. Technology companies will face increasing pressure to balance innovation with ethical safeguards and legal compliance. For executives and policymakers alike, the challenge will be establishing frameworks that encourage AI development while protecting the rights and livelihoods of human creators.

Source: BBC News
Date: March 12, 2026

