Last week, Alex Lieberman (@businessbarista) posted a question on X that got 45,000 views:
"What is the #1 thing slowing or stopping your company's AI transformation?"
Nineteen honest answers came back from $100M+ executives. Data quality topped the list. But one reply cut through everything:
"Strip away the AI framing and this is just the list of things that have always slowed companies down. AI has made them impossible to ignore." - Rzac (@renilzac)
That's true. And here's the part most people miss: after two years of these conversations in the Salesforce ecosystem, data quality is not the killer everyone claims it is. The rank order looks different from the inside.
- Clarity vacuum - Buying licenses without a defined use case. Fix: Pick one workflow, start with summarization.
- Non-AI-native leaders - Delegate AI, never personally use it. Fix: Build champions at 2nd/3rd level.
- Behavior gap - Training programs nobody applies. Fix: Small pilots with change drivers built in.
- Coordination failure - Pilots that never reach production. Fix: One named owner with budget and mandate.
- Data quality - Used as a reason to not start. Fix: Start with present data, clean later.
1. The Clarity Vacuum
Leadership feels pressure to act. So they buy licenses. Microsoft Copilot, Salesforce Agentforce, Claude, ChatGPT. Pick your platform. The announcement goes out. And then nothing moves.
Buying a license without a defined use case is like a gym membership. You go a few times. Life gets busy. The equipment collects dust. It's not that people don't want AI. It's that nobody told them what specifically to do with it, in which workflow, to produce what outcome.
The fix is starting with what you already have. In Salesforce, that usually means summarization. Think about a pipeline review: the first five minutes are always a rep recapping what they know. Put an Account 360 summary in front of the VP beforehand and you start at diagnosis, not recap. Same with Service Cloud. Cases with 10+ email interactions are summarization-ready today. No data cleanup required. Define the use case. Start there.
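The "summarization-ready today" claim can be made concrete. Here is a minimal sketch of the idea: assemble a prompt straight from the case data as it was logged, no cleanup step, and hand it to whatever LLM platform you already licensed. The record shapes, field names, and the `build_case_summary_prompt` helper are all hypothetical, standing in for real Case and EmailMessage queries.

```python
# Hypothetical sketch: turning a support case and its email thread into
# a summarization prompt. Data shapes are invented for illustration;
# in practice these would come from Salesforce Case/EmailMessage records.

def build_case_summary_prompt(case, emails):
    """Build a case-summary prompt from raw, as-logged case data."""
    thread = "\n\n".join(f"[{m['from']}] {m['body']}" for m in emails)
    return (
        f"Summarize this support case for a service rep.\n"
        f"Subject: {case['subject']}\n"
        f"Status: {case['status']}\n\n"
        f"Email thread ({len(emails)} messages):\n{thread}\n\n"
        "Give: 1) the customer's issue, 2) what has been tried, "
        "3) the open question blocking resolution."
    )

# Made-up example data, used exactly as logged -- no cleanup pass.
case = {"subject": "Sync failing after upgrade", "status": "Escalated"}
emails = [
    {"from": "customer", "body": "Records stopped syncing after the v12 upgrade."},
    {"from": "support", "body": "Can you share the integration error log?"},
]

prompt = build_case_summary_prompt(case, emails)
print(prompt)
```

The point of the sketch is what's absent: there is no data-quality gate between the logged records and the prompt. Present data is enough to start.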
2. Leadership That Has Never Felt What AI Can Do
Brian Halligan called this "the big one." Alex Lieberman called it "low agency." Sidu Ponnappa put it most precisely: "A leadership that isn't AI native yet."
Not AI native doesn't mean they haven't read the articles. It means they haven't personally felt the moment where AI produces something useful. That moment is load-bearing. Without it, leaders manage AI transformation the way they manage any initiative they don't understand: by delegating it, under-resourcing it, and asking for status updates.
Eric Stevens nailed the downstream effect: "Incurious leadership doesn't just slow AI adoption. It guarantees the curious people leave first."
If leadership won't be the personal trainer, someone below them has to be. The companies getting traction built AI champions at the second and third level. Small workshops across different sales teams, real feedback, habit before scale. There's a line I use: if you don't have great taste in wine, you can't run a restaurant where people come to drink.
3. The Literacy Gap Is a Behavior Problem
Mixed AI literacy sounds like a training problem. It isn't.
Give people training and they go back to email, Slack, and meetings the next morning. Not because they don't understand the tool. Because their behavior is optimized for the world they already live in. Changing behavior requires a personally relevant reason, a workflow that makes the new behavior easier than the old, and enough repetition for the habit to form.
The companies breaking through don't run training programs. They run pilots with people who are already curious, involve them in designing the workflow, show them what AI does for their specific job, and collect feedback fast before scaling.
4. Coordination Failure: Pilots That Die
Prakhar Yadav put it well: "AI adoption isn't a tooling problem. It's a coordination problem."
The failure mode I see most: a pilot works. Numbers are good. Users liked it. Then it stalls because nobody owns the next step. No budget for scale. IT wasn't involved early enough. Legal has concerns. The executive sponsor moved on. As Ayush Poddar noted in the thread, roughly 74% of companies can't get AI pilots to real value. The cause is almost never the model. It's process that never ends.
The fix: one named owner with real budget and a mandate from the executive level. Not a committee. Without that, every pilot is a science experiment: interesting, documented, and forgotten.
5. Data Quality: Real, but Not Where You Think
Data quality is overused as a reason to not start.
Early-stage AI value comes from summarization, and summarization doesn't require clean data. It requires present data. If reps have logged activities, if cases have email threads, if account notes exist in Salesforce, you have enough. As Tigran (@tigranbs) put it: "Garbage in still means garbage out, just at AI speed with a bigger cloud bill." True. But that problem hits when you're doing predictive scoring or next-best-action, not summarization.
Start with the data you have. Build the habit. Then clean what you need for what comes next.
The Real Question
Kashyap Patel's comment lands cleanest: "The AI is ready. The org chart isn't."
The killer combination is a leadership layer that hasn't felt what AI can do, no clear use case to anchor adoption, and nobody with ownership to bridge the two. The technology is not the constraint.
What I'm watching change is the tooling layer. Bringing AI to where people already work, rather than asking them to find it in a new application, removes the behavior change requirement entirely. That frictionless path to first value is where I'm seeing real adoption begin.
What's the #1 thing you're seeing slow AI transformation in your organization? I'm curious whether the rank order matches what I'm seeing or looks different from where you sit.
