Enterprise Dreamin'
Data Security

Govern AI Risk Without Killing Innovation: An Enterprise Security Framework

The Samsung engineer incident, in which proprietary code pasted into ChatGPT became eligible for use as training data, prompted CISOs to block OpenAI outright. But fear is outpacing knowledge: enterprise versions of generative AI with contractual non-training data policies already exist. The real challenge is that there is no turnkey solution yet. Five security principles (data protection, explainability, access controls, compliance mapping, vendor evaluation) form the foundation ...

Doug Merrett & Vernon Keenan·12 min watch
Doug Merrett

Founder and Principal Consultant · Platinum7

Vernon Keenan

Senior Industry Analyst · SalesforceDevops.net

Industry

enterprise-security, compliance, financial-services, healthcare, technology
Key Takeaways
1. Enterprise AI API calls are contractually excluded from training data, unlike consumer ChatGPT. CISOs who blocked OpenAI domains in early 2023 may be operating on outdated assumptions.

2. Build your own LLM cloud security framework around five pillars: data protection policy, compliance alignment, transparency and explainability, access control, and evaluation of your LLM service provider's security posture.

3. PII masking before data leaves Salesforce is non-negotiable. Both GPTfy and Salesforce's Trust Layer strip PII from prompts, send anonymized data to the LLM, then re-inject the original values into the response.

4. Salesforce's existing security model (profiles, permission sets, user mode Apex, named credentials with JWT, mutual TLS) transfers directly to AI use cases and provides a strong foundation for governing AI prompts.

5. LLMs are non-deterministic and subject to model drift. Treat prompts like code, monitor for drift continuously, and enforce regular QA audits of AI-generated outputs.
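The mask-and-reinject round trip in takeaway 3 can be sketched as below. This is an illustrative assumption of the pattern, not GPTfy's or the Trust Layer's actual implementation; the placeholder format, regex, and function names are invented for the example, and a real system would cover many more PII types than email addresses:

```python
import re

# Illustrative email matcher; production masking covers names, phone numbers,
# account IDs, and other PII types, typically driven by field-level metadata.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with a placeholder token and remember the mapping."""
    mapping: dict[str, str] = {}

    def _sub(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_sub, prompt), mapping

def reinject_pii(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values into the LLM response, per takeaway 3."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response

masked, mapping = mask_pii("Draft a reply to jane.doe@example.com about her case.")
# The LLM only ever sees the masked text, e.g. "Draft a reply to <PII_0> ..."
llm_response = f"Dear customer, we will contact you at {next(iter(mapping))}."
final = reinject_pii(llm_response, mapping)
```

The key design point is that the token-to-value mapping never leaves Salesforce; only the anonymized prompt crosses the trust boundary.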

Frequently Asked Questions

Is our Salesforce data used to train the AI?

Enterprise API integrations with OpenAI (including via Azure AI Services) are contractually excluded from model training. Tools like GPTfy and Salesforce's Trust Layer mask all PII before data leaves Salesforce, so the AI engine only processes anonymized placeholders.

How should we evaluate an LLM service provider?

Build your own evaluation checklist around five pillars: data protection policy, industry compliance requirements (GDPR, HIPAA, etc.), transparency and explainability, granular access control, and verification of the security posture and certifications of your LLM provider.

How do we control who can trigger AI, and where the data goes?

Use profile-level and record-type-based controls to restrict which users can trigger AI prompts. For data residency, provision separate AI instances in the required regions and route data accordingly.
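The residency routing described above can be sketched as a simple region-to-endpoint lookup. The endpoint URLs and the `data_residency` field are assumptions for illustration, not a real product's configuration:

```python
# Hypothetical sketch: route each prompt to an AI instance pinned to the
# region where the record's data must reside.
REGION_ENDPOINTS = {
    "EU": "https://eu.ai.example.com/v1/generate",
    "US": "https://us.ai.example.com/v1/generate",
}

def endpoint_for(record: dict) -> str:
    """Pick the region-pinned AI endpoint for a record, failing closed if none exists."""
    region = record.get("data_residency", "US")
    if region not in REGION_ENDPOINTS:
        # Fail closed: refusing to send is safer than sending to the wrong region.
        raise ValueError(f"No AI instance provisioned for region {region!r}")
    return REGION_ENDPOINTS[region]
```

Failing closed on an unknown region is the important choice here: a record whose residency cannot be mapped should never fall through to a default endpoint.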

What is model drift, and how do we manage it?

Model drift occurs when foundation LLMs are retrained and redeployed by their providers, potentially changing how your prompts behave. Treat prompts as version-controlled code and run regular regression tests to catch degradation.
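A regression test for a versioned prompt can be as simple as asserting invariants on the model's output and re-running them after each provider redeployment. This is a minimal sketch of the idea; the function name and phrase-based checks are illustrative assumptions, and real harnesses often add semantic similarity scoring:

```python
def check_output(output: str,
                 required_phrases: list[str],
                 banned_phrases: list[str]) -> list[str]:
    """Return the list of invariant violations for one LLM output; empty means pass.

    Because LLMs are non-deterministic, checks target stable properties of the
    output (must mention X, must never say Y) rather than exact string matches.
    """
    violations = []
    lowered = output.lower()
    for phrase in required_phrases:
        if phrase.lower() not in lowered:
            violations.append(f"missing required phrase: {phrase!r}")
    for phrase in banned_phrases:
        if phrase.lower() in lowered:
            violations.append(f"contains banned phrase: {phrase!r}")
    return violations
```

Running this suite on a schedule, and whenever the provider announces a model update, turns silent drift into a visible test failure.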

Should AI-generated outputs be reviewed by a human?

At this stage of the technology, virtually every enterprise AI application should be human-in-the-loop. Use AI to generate drafts, but present them to a human for review before action, especially for customer-facing outputs.