Alphy in Forbes: Why AI Writing Tools Need Guardrails
- Julian Guthrie
- Mar 18
- 2 min read

As companies of all sizes brace for increasing litigation in 2025, AI writing tools are reshaping how we all communicate, and Alphy’s HarmCheck.ai is part of that conversation. In a Forbes article last week, communication professor and strategist Kathryn Lancioni explored the rise of AI-assisted writing in leadership, and HarmCheck was featured alongside industry giants, including Google’s Gemini and Microsoft Copilot.
A big thank you to Lancioni for her thoughtful analysis and for recognizing our work at Alphy in reimagining AI’s role in communication. At Alphy, we’re obsessed with using skilled humans to improve AI’s understanding of language. We also believe that free speech needs guardrails.
AI-powered tools can boost efficiency, but when it comes to workplace communication and compliance, they also present real risks. U.S. companies spend a fortune on the fallout from poor, harmful, and unlawful communication. According to a report by the Society for Human Resource Management (SHRM), poor communication costs large companies an average of $62.4 million per year in lost productivity. There’s also the issue of reputational damage, employee turnover, and regulatory and legal fines.
The Promise of AI Writing Tools
AI can be an incredible asset for professionals. It can help craft clear, structured messages, correct grammar, and even refine tone. For leaders, this means faster responses, better readability, and consistency across communications. In a world where written interactions define everything from business deals to company culture, AI can be a valuable co-pilot.
The Pitfalls: Losing Authenticity and Increasing Risk
However, the convenience of AI-generated writing comes at a cost. Many tools optimize for efficiency but strip away nuance, making communication feel impersonal or robotic. For employees at all levels, this is a problem: your voice matters. Employees, investors, colleagues, and customers can tell when a message lacks authenticity.
Even more concerning, AI-generated content can introduce unintended compliance risks. It might misinterpret intent, produce biased language, or soften legally necessary disclaimers. And critically, generic writing tools won’t stop employees from sending angry, threatening, retaliatory, discriminatory, or unlawful texts and emails.
Where HarmCheck.ai Fits In
That’s why we built HarmCheck.ai: not to replace human communication, but to protect against intended and unintended harm in real time. Unlike generic AI writing tools, HarmCheck helps leaders and professionals:
- Detect harmful or unlawful language in real time, before hitting send.
- Protect employees and the company from regulatory fines, public missteps, and unintended harm.
- Ensure compliance and fairness, particularly in regulated industries.
AI + Human Judgment = The Future of Communication
AI writing tools should act as co-pilots, not autopilots. Leaders need to balance efficiency with responsibility, ensuring their words maintain trust, authenticity, and legal integrity.
The future of AI in communication isn’t just about speed — it’s about smarter, safer, and more human-centered writing. That’s the problem we’re solving with HarmCheck.
Julian Guthrie is the founder and CEO of Alphy.
HarmCheck by Alphy is an AI communication compliance solution that detects and flags harmful, unlawful, and unethical language in digital communication. Alphy was founded to reduce the risk of litigation from harmful and discriminatory communication while helping employees communicate more effectively. For more information: www.harmcheck.ai. To book a demo: sales@harmcheck.ai.