If the people shaping your AI strategy don’t understand AI, your governance is built on bad information. Here’s how governance gaps, vendor risk, and internal culture quietly derail AI initiatives.
There’s a pattern showing up across industries right now, and it should worry anyone who’s serious about building a functional AI strategy. Namely: AI policies are being shaped by people who have never meaningfully used the tools.
The loudest voices in the room are frequently people with little to no hands-on experience with whatever AI tools they’re either championing or condemning. And organizations are making significant strategic decisions based on their input.
This is not a new problem. But AI is making it expensive in new ways.
AI Governance Failure in Action: The Hachette Case
Last week, Hachette Book Group, one of the largest publishers in the world, pulled a horror novel called Shy Girl from shelves in the UK and canceled its imminent US release after concluding that large portions of the text were AI-generated.
The accusations had been building on social media for months: a viral YouTube video, Reddit threads, and a widely circulated scan by an AI detection company whose CEO voluntarily ran the book through his software. The New York Times called Hachette for comment on a Wednesday. By Thursday, Hachette had canceled the book. It’s being widely described as a landmark moment for the publishing industry.
There are a lot of arcane publishing questions involved here, but let’s focus on what matters for the rest of us.
Hachette, a business with global operations, hundreds of contracted authors, and a robust legal infrastructure, made one of the most consequential decisions in its recent history within 24 hours of being contacted by a single reporter. What’s more, the Times story leaned substantially on the output of one private company’s CEO, who ran an unsolicited test and tweeted about it.
This is what happens when an organization has minimal policy, lax monitoring, and no framework to manage a foreseeable risk… and gets caught flat-footed in public.
Vendor AI Risk: Third-Party AI Use Is Your Problem, Too
Hachette acquired Shy Girl quickly, drawn by its grassroots commercial success as a self-published title. Industry observers have noted that publishers rarely conduct thorough editorial reviews of already-published acquisitions, because the assumption is that the product is already “finished.” The vendor management failure here is textbook.
It’s evident that nobody asked hard questions about how the book was made. The contracts apparently required AI disclosure, but there was no process to verify or enforce that requirement. The disclosure clause existed on paper, sure, but the due diligence never materialized in practice.
There’s a reason publishers haven’t moved to blanket bans, and it’s not just squeamishness. Some form of AI is now embedded in virtually every software tool a writer, editor, or marketer might use on any given day, from grammar checkers to email clients to the platforms publishers themselves use for marketing, audiobook production, and rights management. A contract that prohibits “AI use” without careful definition is either unenforceable or so broad it would prohibit normal business activity. But the moment you try to define AI use with precision, you’ve opened a negotiation about where the line is. And right now, there is no industry consensus.
Supreme Court Justice Potter Stewart famously declined to define pornography, saying only, “I know it when I see it.” Unfortunately, that’s pretty much where we are with AI usage in writing. Definitional language hasn’t caught up to operational reality, and that gap is a genuinely thorny problem.
It’s worth pausing to ask whether your organization has some version of this gap. Do you have contractors, vendors, freelancers, and partners? Their AI use is not your AI use… until it absolutely is. In cybersecurity, we’ve spent years making the case that a subcontractor’s weak security posture is your breach waiting to happen. Exactly the same logic applies here. If what your outside partners are doing with AI tools can land at your front door, it’s your problem, too. The question is whether you find out on your own terms or on a reporter’s.
Why AI Policies Fail Inside Organizations
The publishing industry got to this moment partly because a vocal segment has set the terms of discourse in a way that makes honest internal conversation impossible. By their own public admission, many of the loudest anti-AI voices don’t use these tools professionally. They are, however, extremely confident about what using them means. Admitting to AI use of any kind is essentially a career-ender in the industry. And yet, somehow, people keep using it.
This is an organizational failure with a name: preference falsification, in which people publicly profess views they privately reject because honesty carries too high a social cost. The practical result: working practitioners can’t share what they know, best practices can’t develop, and when a real situation materializes, nobody has language, policy, or process for it.
The situation has now produced a concrete, documented, commercially damaging outcome: a canceled book, a reputational crisis, and an author whose career may be over, regardless of what actually happened.
Hachette’s problem had two layers. The first was the vendor management gap: no real process for verifying what contracted authors were doing with AI tools, despite having a disclosure requirement on paper. The second was cultural: an industry so captured by loud anti-AI voices that honest internal conversation had become professionally dangerous. Those two problems compounded each other. Because practitioners couldn’t speak frankly, no one developed workable standards. Because there were no workable standards, no one built enforcement mechanisms. The disclosure clause in the contract was essentially decorative.
Author Brian Merchant described the “ambient animosity” toward AI in Wired magazine in 2025. That animosity is real, and it’s essential to take it seriously. Concerns about job displacement, accuracy, intellectual property, and over-reliance on nonhuman “intelligence” are grounded in genuine risk. But there’s a second kind of anti-AI sentiment that organizations are also swimming in, and it’s worth distinguishing the two. This version is less about lived experience with the tools and more about identity, be it professional, political, or generational. It’s loud, and it tends to crowd out more useful signals.
If your internal AI working group is being shaped primarily by the most vocal skeptics, you may be solving for the wrong problem. You may have contractors using AI tools in ways that create legal or reputational exposure, with no process to find out. On the other hand, you may have practitioners who’ve developed genuinely useful, defensible workflows, and no way to learn from them because the culture punishes honesty.
This problem has a mirror image, too. In publishing, anti-AI voices are by far the loudest. But in other sectors, such as finance, logistics, and manufacturing, the loudest voices in the room aren’t the skeptics. They’re the boosters: vendors with something to sell, executives chasing a narrative, early adopters whose enthusiasm has outrun their evidence. The practitioners with sincere, grounded concerns about implementation speed, reliability, or workforce impact get labeled as “Luddites” or “resistant to change.” The risk is that they’ll gradually stop raising their hands entirely.
The result is the same organizational failure in reverse: honest feedback doesn’t reach decision-makers, risks go unmodeled, and the strategy reflects the most enthusiastic voices rather than the most informed ones. A culture where skeptics can’t speak frankly is just as dangerous as a culture where practitioners can’t. Both produce the same outcome: decisions made on bad information, at speed, with nobody in the room who will say what they actually think.
The Culture Problem Behind AI Strategy Failures
The employees and practitioners most likely to have sophisticated, nuanced views on AI tool use are also the most professionally exposed if they share them. On the other hand, the people most willing to speak loudly are frequently those with the least to lose — or, in the case of AI-product vendors, the most to gain from a particular position.
This creates a systematic information problem at exactly the moment when good information is most valuable.
A few questions worth sitting with:
- Who is actually informing your AI policy: practitioners or observers?
- Do your contracts with outside vendors, contractors, and freelancers include AI disclosure requirements? If so, do those requirements have any enforcement mechanism, or are they decorative?
- When did you last audit what your third-party partners are actually doing with AI tools — not what they’ve agreed to on paper, but what’s happening in practice?
- Does your organization have channels where practitioners can give honest feedback about AI workflows without career risk?
- Are you distinguishing between “our customers have legitimate concerns we should address” and “the loudest voices on social media have staked out extreme positions”? Those require very different responses.
- What would it take for someone inside your organization to say, publicly, “I’ve used this tool, and here’s what I actually found”?
If the answer to that last question involves significant professional risk, you have a culture problem that will outlast whatever policy you put in place.
Organizations that are navigating this well tend to share a few characteristics. They’re actively creating space for practitioner voices, not just policy voices. They’re distinguishing between strategic risk (real) and reputational noise (also real but different). And they’re treating “our people aren’t being honest with us” as the emergency it actually is.
If your AI strategy has no clothes, who in your organization is willing to say so out loud?