Why AI Pilots Fail—and How to Pick the Use Cases That Don't
Most AI pilots fail because teams chase the wrong use cases. This session breaks down a simple framework for identifying high-value, immediately achievable AI opportunities that don't require months of training data—and how to avoid the hype trap that killed blockchain, web3, and every other trend.


1. Start with high-value, low-nuance AI use cases that don't require model training—email personalization, case summarization, account 360 summaries, and sentiment analysis. These can be deployed in weeks.
2. Ian Gotts' PASTA framework: Policy (set guardrails), Amnesty (find out what employees already do with AI), Support (coach rather than prohibit), Technology evaluation, and Adoption (migrate to the approved stack).
3. AI punishes mediocrity. The quality of outputs is proportional to data quality, prompt quality, and business context understanding. Sanofi's CEO reported a four-hour payback period on their internal AI tool.
4. Prompt engineering requires version control and drift monitoring. Unlike traditional code, a prompt hitting a constantly-evolving LLM can produce different results over time. Treat prompts like production code.
5. A simple ROI calculation: 100 service reps at $30/hour making 40 calls/day spending 1-3 minutes reading case history per call costs roughly $960K/year. Even a 10x-overestimated AI tooling cost produces clear net savings.
Focus on use cases that eliminate repetitive grunt work without requiring AI model training. Quantify the time and cost of the manual process, then compare to tooling cost. A simple calculation for 100 reps often shows $500K-$1M in annual savings against $40-50K in tooling costs.
PASTA stands for Policy, Amnesty, Support, Technology, and Adoption. It provides a structured path from uncontrolled experimentation to governed AI rollout—setting policies, discovering existing usage, coaching teams, evaluating tools, and driving organization-wide adoption.
Prompt drift occurs because LLMs are continuously updated, meaning the same prompt can produce different outputs over time. Teams need weekly quality checks for high-impact prompts and should version-control them like production code.
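One way to make "treat prompts like production code" concrete is a minimal sketch like the following. The registry, fingerprinting, and drift threshold are all illustrative assumptions, not a specific vendor's API; the idea is simply that every template change forces a new version, and a weekly score comparison against a fixed evaluation set flags drift:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    """A version-controlled prompt template (hypothetical structure)."""
    name: str
    version: str
    template: str

    @property
    def fingerprint(self) -> str:
        # Stable content hash: any edit to the template changes it,
        # so a changed fingerprint demands a version bump.
        return hashlib.sha256(self.template.encode()).hexdigest()[:12]

def drifted(baseline_score: float, current_score: float,
            tolerance: float = 0.05) -> bool:
    # Weekly quality check: compare this week's eval-set score
    # against the stored baseline; the 5-point tolerance is arbitrary.
    return (baseline_score - current_score) > tolerance

summarize = PromptVersion(
    name="case_summary",
    version="1.3.0",
    template="Summarize the following support case in 3 bullets:\n{case_text}",
)
print(summarize.fingerprint)
print(drifted(baseline_score=0.91, current_score=0.84))  # True: quality slipped
```

The same prompt text always yields the same fingerprint, so drift detected at a stable fingerprint points at the underlying model changing rather than the prompt.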
For most organizations without dedicated ML teams, buying is faster. AppExchange products and Salesforce AI Cloud provide pre-optimized prompts, security layers with PII masking, and declarative configuration. You can go live in weeks rather than months.
Enterprise AI deployments use a security layer that masks PII inside Salesforce before sending data to the LLM. Names are replaced with tokens, the LLM processes anonymized content, and real values are re-identified within Salesforce.
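The masking round trip described above can be sketched as follows. This is a toy illustration, not Salesforce's implementation: real security layers use entity recognition across many PII types, whereas this sketch assumes a hypothetical known-names list:

```python
# Toy sketch of a PII-masking round trip: replace names with tokens
# before the text leaves the platform, then re-identify the response.
# Real deployments use NER over many entity types, not a name list.
KNOWN_NAMES = ["Alice Johnson", "Bob Smith"]  # hypothetical lookup

def mask(text: str) -> tuple[str, dict]:
    mapping = {}
    for i, name in enumerate(KNOWN_NAMES):
        if name in text:
            token = f"<PERSON_{i}>"
            mapping[token] = name
            text = text.replace(name, token)
    return text, mapping

def unmask(text: str, mapping: dict) -> str:
    for token, name in mapping.items():
        text = text.replace(token, name)
    return text

masked, mapping = mask("Alice Johnson reported an outage on Bob Smith's account.")
print(masked)  # names replaced by <PERSON_0> / <PERSON_1> tokens
# ...the anonymized text goes to the LLM; its response keeps the tokens...
print(unmask(masked, mapping))  # real values restored inside the platform
```

The key property is that the mapping never leaves the trusted boundary, so the LLM only ever sees tokens.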
Build or Buy AI: Make the Right Call for Your Enterprise
Vernon Keenan & Preetam Joshi · 10 min
Build Your Salesforce AI Roadmap: The Crawl-Walk-Run Framework
Anand B Narasimhan & Saurabh Gupta · 12 min
AI-Driven Personalization Without Breaking Compliance in Financial Services
Kavin Mehta & Saurabh Gupta · 11 min