The average enterprise Salesforce org changes its schema 12 to 24 times per year. Some orgs carry upward of 20,000 custom fields. Data modifications run around 155,000 per org per week (Elements Cloud data, cited by Cloud Compliance).
The governance policy that covers those fields was probably written once, reviewed annually if at all, and refers to specific field names on specific objects. Every release that adds 15 new fields renders that policy slightly more incomplete. After three years of this, the gap between what the policy describes and what the org actually contains is substantial.
This is not a discipline problem. It is a structural one. The governance model most Salesforce teams use — policies that reference specific fields on specific objects — cannot keep pace with a platform designed to make schema changes easy. The model is broken by design.
What changed, and when
Ten years ago, Salesforce stored basic customer information: contacts, leads, accounts, opportunities. Internal sales teams accessed it. The governance question was narrow. Field counts were low. The sensitivity of the data was limited.
That version of Salesforce no longer exists at most enterprises.
Today, Salesforce functions as a system of record for sensitive consumer data across sales, service, marketing, communities, and partner portals. Organizations in healthcare store PHI. Financial services firms store regulated account data. Experience Cloud portals expose Salesforce data directly to external users — partners, patients, customers — who log in and interact with records that live in the same org as internal operations.
The regulatory environment has shifted in parallel. GDPR (2018) set the global precedent with cumulative fines exceeding EUR 1.65 billion. CCPA and its successor CPRA followed in California. Virginia's CDPA went live. Illinois has biometric-specific legislation. Four additional US states had privacy laws pending as of late 2023. Gartner projected that by 2025, 80% of customer data would fall under some form of data privacy regulation.
The result is a double bind: the platform's data surface is expanding while the regulatory requirements governing that data are multiplying. The data model evolves at release-cycle speed. The regulatory landscape evolves at legislative speed. Governance teams, typically understaffed and working from static documentation, are caught between the two.
The translation gap
Most organizations have some version of a data governance structure: a data strategy, a governance committee, policies, and awareness training at the enterprise level. What they consistently lack is a functioning bridge between those policies and the Salesforce teams executing against them.
An executive director of CRM at a large Blue Cross Blue Shield plan described this plainly during a recent practitioner session: the number-one concern for a center-of-excellence leader managing thousands of users across a complex Salesforce environment is people accessing data they should not. Once someone has access, data can be exported, copied, screenshotted, or printed. The security model matters, but it is only as good as the classification beneath it.
The challenge, as this practitioner described it, is that governance policies written at the enterprise level do not translate cleanly to the Salesforce teams in the trenches. An admin, architect, or developer responsible for a sprint delivery sees governance as overhead that slows progress. Their performance review measures what they shipped, not what they classified. The governance layer and the execution layer operate on different incentives, different timelines, and frequently different vocabularies.
A head of data and analytics with 25 years of experience in the field put it more bluntly: data governance gains the least support in terms of funding. Leadership asks for business outcomes, and proving the ROI of classification work is genuinely difficult. The business case has to be framed in risk language — regulatory penalties, breach costs (the average data breach exceeded $4 million in 2021, per IBM), and customer trust erosion — not in efficiency language.
The feature that already exists
Buried in Salesforce setup, on every field of every object — standard, custom, and managed package — are two metadata properties: Data Sensitivity Level and Compliance Categorization.
Data Sensitivity Level allows teams to tag fields as public, internal, confidential, or restricted. The picklist is editable; organizations can add their own classification tiers. Compliance Categorization allows tagging fields by regulatory regime: HIPAA, GDPR, PCI, CCPA, COPPA, or any custom category an organization defines.
These tags exist at the field metadata level, not the record level. They describe the field's classification, not the value stored in it. This distinction matters: a field is classified once, at design time, and the classification holds no matter how many records the field accumulates.
The standard approach to governance — hardcoding policies to specific field names — breaks every time the data model changes. A policy that says "mask these 40 fields on Contact" must be updated whenever fields are added or renamed. A metadata-driven policy that says "mask every field on Contact tagged as HIPAA-confidential" does not. The data model can evolve without invalidating the governance rules. New fields get tagged at creation time. The policies reference the tags, not the fields.
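The difference between the two policy styles can be sketched in a few lines. This is an illustrative model, not Salesforce code: the field names, tags, and data structures are hypothetical, standing in for metadata that a real org would pull from a FieldDefinition query.

```python
# A field-name-driven policy: breaks whenever fields are added or renamed.
FIELDS_TO_MASK = {"SSN__c", "Date_of_Birth__c", "Diagnosis_Code__c"}

# Field metadata as it might be tagged in Setup (hypothetical values).
contact_fields = [
    {"name": "FirstName",         "sensitivity": "Public",       "compliance": []},
    {"name": "SSN__c",            "sensitivity": "Restricted",   "compliance": ["HIPAA", "PII"]},
    {"name": "Diagnosis_Code__c", "sensitivity": "Confidential", "compliance": ["HIPAA"]},
    {"name": "Allergy_Notes__c",  "sensitivity": "Confidential", "compliance": ["HIPAA"]},  # added last release
]

def fields_to_mask_by_tag(fields, compliance_group,
                          tiers=("Confidential", "Restricted")):
    """Metadata-driven policy: mask every field carrying the given
    compliance group at one of the given sensitivity tiers."""
    return {
        f["name"]
        for f in fields
        if compliance_group in f["compliance"] and f["sensitivity"] in tiers
    }

masked = fields_to_mask_by_tag(contact_fields, "HIPAA")
# Allergy_Notes__c is covered automatically because it was tagged at creation
# time; the hardcoded FIELDS_TO_MASK set above would have missed it.
```

The name-based set has to be edited by hand after every release; the tag-based function never does, as long as tagging happens when fields are created.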
This decoupling is the structural fix. It separates governance velocity from data model velocity, which is the root cause of the gap.
Salesforce Shield uses these same metadata properties. Integration architects can query field-level metadata via the API before querying the data itself — pulling only fields that are not tagged as restricted before sending data to a downstream system that may lack the appropriate certification. Retention automation can target fields by sensitivity classification rather than by name. Data subject access requests under GDPR or CCPA can use compliance categorization to determine what data to export or delete.
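The integration pattern works roughly like this: query the field metadata first, then build the data query from only the fields the downstream system is cleared to receive. The FieldDefinition SOQL below reflects the documented API fields, but the sample response is hardcoded with hypothetical values so the sketch runs standalone.

```python
# Step 1: the metadata query an integration would run before touching data.
METADATA_QUERY = (
    "SELECT QualifiedApiName, SecurityClassification, ComplianceGroup "
    "FROM FieldDefinition "
    "WHERE EntityDefinition.QualifiedApiName = 'Contact'"
)

# What a response might look like (illustrative values, not a real org).
field_defs = [
    {"QualifiedApiName": "FirstName", "SecurityClassification": "Public",       "ComplianceGroup": None},
    {"QualifiedApiName": "Email",     "SecurityClassification": "Confidential", "ComplianceGroup": "GDPR"},
    {"QualifiedApiName": "SSN__c",    "SecurityClassification": "Restricted",   "ComplianceGroup": "HIPAA;PII"},
]

def build_data_query(field_defs, blocked=("Restricted",)):
    """Step 2: select only fields whose classification the downstream
    system is certified to handle."""
    allowed = [f["QualifiedApiName"] for f in field_defs
               if f["SecurityClassification"] not in blocked]
    return "SELECT " + ", ".join(allowed) + " FROM Contact"

query = build_data_query(field_defs)
# SSN__c never reaches the downstream system because it is tagged Restricted.
```

The same two-step shape drives the other uses mentioned above: retention jobs filter on SecurityClassification, and DSAR tooling filters on ComplianceGroup instead.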
None of this requires a new license. None of it requires a third-party product. It is available in the platform today.
Why almost no one is doing it
A practitioner who has spent over two decades in Salesforce environments estimated that 90% of the organizations they speak with are either unaware of field-level sensitivity tagging or are not using it. The feature is not hidden, but it is not prominent either. It sits in field-level setup, one property among many. Salesforce does not surface it in onboarding, training, or most governance documentation. It has no dashboard. It generates no alerts.
The organizational reason is simpler: nobody's job description includes "tag every field with a sensitivity classification." Architects design solutions. Admins configure them. Developers build them. The governance team writes policies. None of these roles has field classification as a deliverable. It falls into the gap between strategy and execution — exactly where this problem lives.
Starting without boiling the ocean
The practitioners in this session converged on a four-step maturity model. It is not proprietary or complex. It is the practical reality of how governance work gains traction inside an organization that has not prioritized it:
Awareness. Determine whether field-level classification is relevant to your organization. If you store PII, PHI, or regulated financial data in Salesforce — and if your org has more than a handful of integrations — it is.
Build a small use case. Pick four objects: Contact, Lead, Account, Opportunity. Export the field list. Work with a business owner or compliance stakeholder to tag each field with a sensitivity level and compliance category. Salesforce supports CSV import of field metadata, so this can be done in hours, not weeks.
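The pilot's working artifact is just a spreadsheet. A minimal sketch of producing one, with hardcoded sample objects and fields standing in for what a Metadata API describe or FieldDefinition query would return:

```python
import csv
import io

# Hypothetical field lists for the four pilot objects; in practice these
# would be exported from the org rather than typed in.
PILOT_OBJECTS = {
    "Contact":     ["FirstName", "LastName", "Email", "SSN__c"],
    "Lead":        ["Company", "Phone"],
    "Account":     ["Name", "Tax_ID__c"],
    "Opportunity": ["Amount", "CloseDate"],
}

def classification_worksheet(objects):
    """Emit a CSV with empty classification columns for the business owner
    or compliance stakeholder to fill in."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Object", "Field",
                     "Data Sensitivity Level", "Compliance Categorization"])
    for obj, fields in objects.items():
        for field in fields:
            writer.writerow([obj, field, "", ""])
    return buf.getvalue()

sheet = classification_worksheet(PILOT_OBJECTS)
```

Once the stakeholder fills in the two classification columns, the completed sheet becomes the source for the metadata update.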
Build consensus. Use the small-scale classification to demonstrate value. If you are building APIs, show the integration team that they can query field metadata to exclude sensitive data from downstream systems that are not certified for it. If you are handling GDPR data subject requests, show legal that field classification can drive automated identification of in-scope data. Security and compliance teams are natural allies here — their mandate creates the business case.
Deploy into the SDLC. Once classification has organizational support, embed it in the development lifecycle. User stories and business requirements for new fields should include classification decisions. Release checklists should include a metadata query — via SOQL against the FieldDefinition or EntityDefinition objects — to identify unclassified fields before deployment. Classification becomes a gating criterion, not an afterthought.
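The release gate itself is a short script. The SOQL below is what a CI step would run against the API; the sample results are hardcoded with hypothetical fields so the check runs standalone.

```python
# The audit query a release checklist would run via the API.
AUDIT_QUERY = (
    "SELECT EntityDefinition.QualifiedApiName, QualifiedApiName, "
    "SecurityClassification "
    "FROM FieldDefinition "
    "WHERE EntityDefinition.QualifiedApiName IN ('Contact', 'Lead')"
)

# Sample results (illustrative): one field shipped this sprint untagged.
field_defs = [
    {"object": "Contact", "field": "Email",             "SecurityClassification": "Confidential"},
    {"object": "Contact", "field": "Referral_Notes__c", "SecurityClassification": None},
    {"object": "Lead",    "field": "Phone",             "SecurityClassification": "Internal"},
]

def unclassified(field_defs):
    """Return (object, field) pairs missing a classification. A nonempty
    result should fail the deployment gate."""
    return [(f["object"], f["field"]) for f in field_defs
            if not f["SecurityClassification"]]

gaps = unclassified(field_defs)
# Deployment proceeds only when gaps is empty; here Referral_Notes__c blocks it.
```

Wired into CI, a nonempty result fails the build, which is what turns classification from an afterthought into a gating criterion.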
The CoE leader from Blue Cross Blue Shield used an analogy that is worth repeating: CRM governance is building maintenance. You can focus exclusively on renting apartments and collecting revenue, but if you neglect the foundation, the plumbing, and the electrical systems, the building eventually becomes uninhabitable. A handyman approach — reactive, ad hoc, generalist — does not work for a structure at enterprise scale. It requires engineers who understand the systems.
What this means for your org
If your Salesforce governance policy references specific field names rather than metadata classifications, it has a shelf life shorter than your release cycle. Every new feature deployment, every managed package installation, every business requirement that adds fields to an object is widening the gap between what you think is governed and what actually is.
The fix is not a product purchase or a staffing increase. It is a structural change in how governance policies reference data. Metadata-driven policies survive data model evolution. Field-name-driven policies do not.
The four-object pilot takes hours. The SOQL queries to audit unclassified fields are straightforward. The organizational challenge — getting classification work into someone's job description and into the release checklist — is harder, but it is the same organizational challenge that every governance initiative faces. The difference here is that the technical foundation is free and already in the platform.
Your org almost certainly has fields that are unclassified, integrations that pull sensitive data without checking metadata, and sandboxes where production PII is fully visible to anyone with developer access. Field-level classification will not solve all of that. But it is the starting point that makes the rest possible, and the cost of starting is approximately zero.
