Claude Enterprise Adoption & Data Security
DC CAP serves over 800 scholars annually through a model that combines financial aid, intensive coaching, and university partnerships. Every hour staff spend on grant drafting, program reporting, and operational documentation is an hour redirected from direct student interaction, partner relationship management, or the coaching conversations that drive persistence and graduation. Enterprise AI adoption exists to shift that ratio.
This framework establishes the governance structure, data classification standards, and operational protocols for DC CAP Scholars to adopt Anthropic's Claude Enterprise AI platform. The framework balances two priorities: maximizing the platform's strategic value by connecting it to organizational data, and protecting the personally identifiable information (PII) of the scholars, families, partners, and employees we serve. The intended result is that coaching staff recover time for high-value student engagement, development teams accelerate grant cycles that fund scholarships, and student success leaders strengthen the evaluation practices that improve outcomes across the scholar portfolio.
Claude Enterprise provides the security architecture required for responsible adoption. Anthropic does not use Enterprise customer inputs or outputs to train its models. The platform includes single sign-on (SSO) integration, audit logging, custom data retention controls, and role-based access management. These protections are comparable to the enterprise platforms DC CAP already operates, including Salesforce Education Cloud and Microsoft 365.
Research across 1,400+ executives shows that 95% of AI pilots fail to deliver measurable returns, and the primary determinant of success is organizational investment in people and processes, which should represent 70% of total effort. This document recommends a phased adoption approach that begins with a leadership pilot cohort using low-sensitivity organizational data, then expands access and data integration as the team builds fluency and governance matures. Success will be measured through a three-tier framework tracking engagement, proficiency, and impact on the scholars we serve, with explicit Scale/Pause/Pivot decision gates at Day 45 and Day 60. The framework is designed to be a living document, updated iteratively as real usage patterns inform policy refinements.
DC CAP Scholars operates a comprehensive student success model that combines financial aid, intensive coaching, and university partnerships to serve over 800 scholars annually. The organization's strategic priorities require increased capacity across grant writing, program evaluation, partnership analysis, strategic communications, operational efficiency, and data-informed decision-making. At the same time, emerging AI capabilities require staff to build technology orchestration skills to stay effective in their current roles and competitive across their careers.
Claude Enterprise addresses these needs by providing a secure, governed environment where staff can work directly with organizational knowledge. Several features are particularly relevant to DC CAP's operating context:
Research across 346 nonprofits shows that governance separates the 7% achieving major organizational impact from the 93% plateauing at efficiency gains. The following goals define what DC CAP's governance infrastructure is designed to accomplish.
Maintain a comprehensive AI governance policy with four-tier data classification, approved and prohibited tool lists, cross-functional oversight, and semi-annual review cycles. This positions DC CAP among the 24% of organizations with formal AI governance and the 7% with governance embedded in organizational systems.
Measures: Policy published and board-approved. All pilot participants complete governance orientation. Zero Tier 1 or Tier 2 data incidents during pilot. Governance review scheduled for October 2026.
Ensure FERPA compliance and student PII protection across all AI-assisted workflows through clear anonymization standards, audit procedures, and data classification practices integrated into daily work.
Measures: All participants demonstrate correct data classification in at least one supervised session. Privacy-by-design checklist integrated into workflow redesign templates. Zero unauthorized data exposure.
Develop organizational habits where AI-assisted outputs carry clear human ownership, disclosure is practiced when relevant, and the Diligence dimension of the 4D framework becomes reflexive behavior across all units.
Measures: 80%+ of participants self-report consistent application of Diligence behaviors by Day 60. At least one workflow redesign per unit includes explicit human review checkpoints.
Evidence base: 47% of nonprofits lack any AI governance policy (NTEN 2024). Organizations with clear governance, documented workflows, cross-functional ownership, and measurement systems comprise the 7% achieving major impact versus 93% stuck on the efficiency plateau (Virtuous 2026, n=346). Board-level AI oversight has grown from 11% to 40% of organizations (NTEN trend data).
The following classification system governs what organizational data can be used with Claude Enterprise, under what conditions, and by whom. All DC CAP data falls into one of four tiers.
| Tier | Description | Examples | Claude Access |
|---|---|---|---|
| Tier 1 Restricted | Individual PII, FERPA-protected data, SSNs, and financial aid details. Protected by FERPA, GEAR UP regulations, and organizational policy. | Scholar names + academic records; SSNs; financial aid details linked to individuals; health/disability information; family income data; employee PII (Social Security numbers, withholding, salary, evaluations, bank account and benefits information); organizational financial and security information (including bank account numbers and platform logins); partner financial information | PROHIBITED — Never upload to Claude in any form. No exceptions without legal review and explicit board authorization. |
| Tier 2 Sensitive | Internal financial records, embargoed data, staff performance data, partner-shared confidential information, and small-cell aggregates (N < 10) that could enable re-identification. | Budget models; investment portfolio allocations; disbursement totals; embargoed research or reports; staff evaluation summaries; partner MOU terms; aggregate student data with cell sizes below 10; aggregated HR/employee data from which individuals cannot be identified | RESTRICTED ACCESS — Role-specific access with least-privilege defaults. Requires de-identification verification before upload. 30-day retention. |
| Tier 3 Internal | Organizational knowledge, program documentation, draft strategies, operational content, and de-identified aggregated outcomes and demographics. Contains no PII or confidential partner data. | Strategic plans; draft strategic analyses; grant narratives; program models; data dictionaries; process documentation; meeting templates; OKR frameworks; training materials; aggregated retention/graduation rates; demographic distributions (percentages) | OPEN ACCESS — All licensed users within DC CAP. May be uploaded to shared Projects. Standard retention policy. |
| Tier 4 Public | Published materials, public-facing content, and externally available research or data. | Annual reports; website content; published research; press releases; public data sets (IPEDS, NSC aggregates, Census) | UNRESTRICTED — All users. Standard retention. |
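The small-cell rule that separates Tier 2 from Tier 3 (any aggregate cell with fewer than 10 scholars could enable re-identification) can be enforced mechanically before an aggregate table is cleared for upload. The following is an illustrative sketch only; the function name, threshold constant, and suppression marker are assumptions, not part of DC CAP's actual data tooling:

```python
# Illustrative sketch: suppress small cells (N < 10) before an aggregate
# table is treated as Tier 3 data. Names and the suppression marker are
# hypothetical, not DC CAP's actual tooling.

SMALL_CELL_THRESHOLD = 10  # Tier 2 boundary from the classification table


def suppress_small_cells(aggregates: dict[str, int]) -> dict[str, object]:
    """Replace any cell below the threshold with a suppression marker."""
    return {
        group: (count if count >= SMALL_CELL_THRESHOLD else "suppressed (N<10)")
        for group, count in aggregates.items()
    }


cells = {"Ward 7 seniors": 42, "Ward 3 transfer students": 4}
print(suppress_small_cells(cells))
# The cell of 4 is suppressed; the cell of 42 passes as Tier 3 aggregate data.
```

A check like this would run at export time, so staff never have to eyeball cell sizes before uploading an aggregate to a shared Project.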
Adoption follows three phases designed to build organizational fluency, refine governance based on real usage, and expand access as trust and competency mature. This resembles the phased infrastructure approach DC CAP applied to Salesforce Education Cloud deployment.
Objective: Build leadership fluency and validate governance protocols with low-risk data.
Objective: Extend access to coaching and operations teams through an all-staff kickoff in June. Introduce Tier 2 data access for authorized users.
Objective: Organization-wide deployment with mature governance.
The following security settings were configured at contract activation and will be reviewed quarterly.
| Setting | Configuration | Rationale |
|---|---|---|
| Model Training | Disabled (Enterprise default) | DC CAP data will never be used to train Anthropic models. |
| SSO Integration | Enabled via Microsoft Entra ID | Aligns with existing Microsoft 365 identity management. Centralizes access control. |
| Data Retention | 90 days (Phase 1), reduce to 60 days (Phase 2+) | Balances utility of persistent conversations with data minimization principles. |
| SCIM Provisioning | Enabled | Automated user lifecycle management aligned with HR onboarding/offboarding. |
| Audit Logging | Enabled, reviewed monthly | Provides visibility into platform usage. Supports compliance documentation. Does not reveal information about specific conversations. |
| Role-Based Access | Tiered by data classification level | Ensures staff access only the data tiers authorized for their role. |
| Domain Capture | Enabled for @dccap.org domain | Ensures all staff accounts route through organizational SSO. Prevents shadow IT. |
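Because these settings are reviewed quarterly, the table above can also be captured as a machine-readable baseline so the review is a diff rather than a manual walkthrough. This is a hedged sketch: the keys and the `audit` helper are hypothetical, and nothing here reads the real Claude Enterprise admin API.

```python
# Hypothetical machine-readable snapshot of the settings table above,
# usable as a quarterly review checklist. Keys/values are illustrative.

EXPECTED_CONFIG = {
    "model_training": "disabled",
    "sso": "entra_id",
    "retention_days": 90,        # Phase 1; drops to 60 in Phase 2+
    "scim_provisioning": True,
    "audit_logging": True,
    "domain_capture": "dccap.org",
}


def audit(actual: dict) -> list[str]:
    """Return the settings that drifted from the documented baseline."""
    return [k for k, v in EXPECTED_CONFIG.items() if actual.get(k) != v]


drift = audit({**EXPECTED_CONFIG, "retention_days": 365})
print(drift)  # ['retention_days']
```

Recording the baseline in version control alongside this framework would also give the quarterly governance review a clear change history.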
| Role | Responsibility | Named Individual(s) |
|---|---|---|
| AI Governance Lead & Co-Owner | Owns this framework. Manages platform configuration, conducts audit reviews, approves Tier 2 data access, leads quarterly governance reviews. Shares governance ownership with the AI Governance Co-Owner. | Preston Magouirk, CSAO |
| AI Governance Co-Owner | Co-owns platform governance and operations. Manages SSO/SCIM integration, coordinates with IT vendors, supports user onboarding and technical troubleshooting. Holds full platform operational authority, including independent incident response capability, by end of Phase 1. Serves as primary governance contact when AI Governance Lead is unavailable. Participates in governance reviews and policy decisions alongside the AI Governance Lead. | Angela Cammack, COO |
| Executive Sponsor | Authorizes policy changes, approves budget, escalation point for governance decisions requiring organizational authority. | Eric Waldo, CEO |
| Student Success Lead | Champions adoption within coaching teams, identifies high-value use cases, ensures Tier 1 data protections are maintained in daily workflows. | Stephanie Gardner, Director of Strategic Partnerships and Program Strategy |
| Communications Lead | Ensures all Claude-assisted external communications, donor materials, and public-facing content receive staff review before distribution. Maintains brand voice standards in AI-assisted drafting workflows. Advises on Tier 2 data use in storytelling and impact reporting. | Alexander Vassiliadis, Director of Communications |
| GEAR UP Compliance Lead | Ensures GEAR UP data compliance requirements are maintained in AI workflows. Consulted on Tier 2 boundary decisions involving federal grant data. Advises on aggregation thresholds for GEAR UP reporting. | Danielle Walker, GEAR UP Director |
| Development Co-Leads | Champion adoption within fundraising and donor relations workflows. Identify high-value use cases for grant writing, prospect research, and donor communications. Ensure development data (donor PII, gift amounts) follows Tier 2 protections. Lead Blackbaud connector adoption and implementation. | Sashia Moore, Director of Development Operations; Anna Hartge, Development Co-Lead |
| Data Lead | Supports data infrastructure, analytics workflows, and Salesforce data integration within AI-assisted processes. Ensures data quality standards and Tier 2 protections are maintained in analytical outputs. | Anthony Little, Data Lead |
| Operations Content Lead | Curates and organizes HR, finance, and operations documentation for ingestion into Claude Projects. Classifies content per data tier framework before upload. Ensures Tier 1 and Tier 2 materials are filtered appropriately and that only authorized content enters shared knowledge bases. | Andre Mendes, Executive Operations Manager |
| All Licensed Users | Complete AI onboarding training, follow data classification protocols, report any incidents or concerns to the AI Governance Lead. | All staff with Enterprise seats |
Platform access is conditional on training completion. Research shows it takes 2-3 months for skilled workers to reach competent AI use, and organizations that invest systematically in training achieve 60% adoption rates compared to 30% without. The governance framework protects data. The training program builds the fluency that makes governance intuitive rather than burdensome.
No staff member receives Claude Enterprise access until they complete the following three prerequisites. The AI Governance Lead verifies completion through the pilot hub prerequisite tracker before provisioning accounts.
| Requirement | What It Covers | Estimated Time |
|---|---|---|
| Review the Start Here Guide and AI Governance Framework | Claude interfaces, organizational skills, data tier classification, approved use cases, and responsible use principles | ~30 minutes |
| Complete the Pre-Launch Assessment | Baseline fluency measurement across five constructs (AI orientation, learning orientation, current use, AI knowledge, applied skills) with embedded governance acknowledgment | ~30 minutes |
| Complete Two Anthropic Academy Courses | Claude 101 (platform basics, features, prompts, navigation) and AI Fluency for Nonprofits (4D framework applied to mission-driven work). Certificates required. | ~60 minutes |
Account provisioning is conditioned on completing all three prerequisites. The AI Governance Lead reviews the prerequisite tracker and confirms completion before activating each user's Enterprise seat.
Detailed training content, facilitation guides, and the full 60-day learning arc are maintained in the AI Onboarding Implementation Plan (separate document, managed by the AI Governance Lead). The prerequisite experience described above is the entry point into a structured progression that includes 1:1 coaching, hands-on activities (including a facilitated Governance Walk with scenario-based data tier classification practice), intermittent group learning sessions, and capstone module development.
The leadership pilot operates through three complementary formats designed to build individual fluency while creating shared organizational capacity.
| Format | Cadence | Purpose | Owner |
|---|---|---|---|
| Leadership-wide introduction | Pilot launch (April 6-10) | Establish shared vocabulary, complete prerequisites, align on governance and expectations | AI Governance Lead + Co-Owner |
| 1:1 check-ins | Recurring throughout pilot | Individualized coaching on each leader's workflow integration, use case development, and fluency growth. The primary vehicle for building competency. | AI Governance Lead + Co-Owner |
| All-group meetings | Intermittent throughout pilot | Shared problem-solving, peer learning, module development feedback, and collective progress check-ins | AI Governance Lead |
| Weekly pulse check-in | Weekly (self-directed, ~30 seconds) | Track engagement, iteration frequency, confidence, and surface wins and friction points | Individual participants |
The leadership pilot produces two outputs that enable organization-wide scaling: trained leaders who can facilitate AI adoption within their functional areas, and training modules built by those leaders for their specific teams.
The all-staff kickoff in June marks the transition from Phase 1 (leadership pilot) to Phase 2 (team expansion). Each pilot leader will have developed a training module tailored to their unit's workflows, governance considerations, and use cases. The kickoff is the delivery point for those modules, led by the leaders who built them.
Training is an ongoing requirement. The AI skills landscape changes rapidly, and one-time onboarding produces declining returns within 3-4 months without reinforcement.
| Cadence | Activity | Owner |
|---|---|---|
| Phase 2 (post-June kickoff) | Unit-level training delivered by pilot leaders using their modules | Pilot leaders within each functional area |
| Monthly (Phase 2+) | All-hands working sessions where staff share workflow integrations and troubleshoot challenges together | AI Governance Lead + pilot leaders |
| By end of Phase 2 | All team members complete Anthropic Academy's 5-course progression | Individual responsibility, tracked by AI Governance Co-Owner |
| Quarterly (Phase 3+) | Advanced topics, platform updates, and governance refresh | AI Governance Lead |
The ten leaders in the pilot cohort are the champion network. Each leader owns AI adoption within their unit from day one. Their role: find and share practical team-specific examples, answer questions and mentor colleagues, surface feedback on what works and what creates friction, and participate in monthly governance feedback sessions. Additional champions may emerge from early adopters identified during the all-staff rollout in Phase 2.
Champion capability builds progressively during the pilot through 1:1 coaching, group sessions, and rotating facilitation of AI Fridays. By Phase 3, leaders facilitate peer learning sessions independently. The AI Governance Lead maintains the champion roster and provides facilitation support. The Student Success Lead (Stephanie Gardner) owns champion development within coaching teams. The Development Co-Lead (Sashia Moore) owns champion development within fundraising workflows.
All Claude Enterprise users at DC CAP must adhere to the following guidelines. These are covered in the onboarding training described in Section 6 and acknowledged in writing by each user before receiving platform access. All Claude-generated errors are the responsibility of the user. Embedding personal review and expertise into every AI workflow is a non-negotiable practice at DC CAP.
The following represent the kinds of high-value AI use this governance framework is designed to enable:
DC CAP's data governance obligations apply regardless of which AI platform staff use. Claude Enterprise is DC CAP's approved AI platform because it provides the enterprise security controls, audit logging, data retention policies, and training data exclusions that protect our organization and our students. Personal AI accounts (ChatGPT, Google Gemini, Microsoft Copilot, or any other consumer AI service) lack these protections.
Staff must not use personal AI accounts to process DC CAP data at any classification tier. Internal documents, student information, strategic plans, grant materials, partner communications, and all other organizational content must be processed exclusively through Claude Enterprise. This policy exists because consumer AI platforms may use uploaded content for model training, lack the audit trails required for governance compliance, and operate outside DC CAP's data retention and security infrastructure.
If a staff member identifies a capability available in another AI platform that Claude Enterprise does not currently provide, they should raise it with the AI Governance Lead or Co-Owner. DC CAP's approach is to consolidate AI use within a governed, auditable environment rather than fragment it across unmonitored tools.
If Tier 1 data is accidentally uploaded to Claude or a potential data breach is identified, the following protocol applies:
The user who identified the incident deletes the conversation containing the data immediately. Claude's custom retention policy will process the deletion, but manual deletion accelerates removal.
Notify the AI Governance Lead (Preston Magouirk) or AI Governance Co-Owner (Angela Cammack) within 2 hours of discovery. Either can initiate the full response protocol. The responding Governance owner notifies the CEO within 4 hours.
The Governance Lead reviews audit logs to determine the scope of the incident: what data was exposed, for how long, and whether it was referenced in any outputs.
Review and tighten access controls if the incident results from a permissions gap. Update training protocols and ensure immediate remediation when the incident results from user error.
Record the incident, root cause, and corrective actions in the governance log. Present findings at the next quarterly governance review.
Governance without measurement is compliance theater. We need to know whether this platform is producing the outcomes that justify the investment, and we need to know early enough to course-correct. The following framework measures success across three dimensions: Engagement (are people using it?), Proficiency (are they getting better?), and Impact (is it producing mission-relevant results?). Detailed metric definitions, tracking instruments, and collection schedules are maintained in the KPI Framework (separate operational document).
These are leading indicators. They tell us whether staff are using the platform and building habits.
| Metric | Phase 1 Target (Day 60) | Phase 2 Target | Phase 3 Target |
|---|---|---|---|
| Monthly active users | 80%+ of pilot cohort (8+ of 10) | 60% of all licensed users by Month 4 | 75% of all licensed users by Month 8 |
| Prompts per participant per week | Baseline established Week 1; 15+ by Week 6 | 15+ sustained | 20+ |
| Active user segmentation | Cohort shifting from Light (1-5/week) toward Moderate (6-19) and Heavy (20+) | Majority in Moderate+ | Majority in Heavy |
| Anthropic Academy completions | Claude 101 + AI Fluency for Nonprofits (prerequisite); Framework & Foundations (recommended) | 5-course progression for all team members | Required for new hires within 60 days |
Data source: Claude Enterprise admin panel (usage logs) + weekly check-in self-report. Tracked weekly starting Week 1.
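The Light/Moderate/Heavy segmentation in the table above is a simple banding of weekly prompt counts, so it can be computed directly from the admin-panel export. A minimal sketch; the function name and the "Inactive" label for zero-prompt weeks are assumptions added for illustration:

```python
# Illustrative banding of weekly prompt counts per the engagement table:
# Light (1-5/week), Moderate (6-19), Heavy (20+).
# The "Inactive" label for zero-prompt weeks is an assumption.


def usage_band(prompts_per_week: int) -> str:
    """Map a user's weekly prompt count to an engagement band."""
    if prompts_per_week >= 20:
        return "Heavy"
    if prompts_per_week >= 6:
        return "Moderate"
    if prompts_per_week >= 1:
        return "Light"
    return "Inactive"


weekly_counts = [0, 3, 12, 25]
print([usage_band(n) for n in weekly_counts])
# ['Inactive', 'Light', 'Moderate', 'Heavy']
```

Tracking the band distribution week over week makes the Phase 1 target ("cohort shifting from Light toward Moderate and Heavy") a single chart rather than a judgment call.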
These metrics capture whether platform engagement translates into fluency development. Proficiency is the leading indicator that predicts workflow impact.
| Metric | Phase 1 Target (Day 60) | Measurement |
|---|---|---|
| Iteration frequency | 60%+ of cohort reporting "A few times" or higher on weekly check-in | Self-report (weekly check-in) + admin logs where available |
| Observable fluency behaviors | 70%+ of participants demonstrate 3+ behaviors (iteration on drafts, clarifying goals, questioning model reasoning, identifying missing context, specifying output formats, providing examples, fact-checking outputs) | Biweekly observation starting Week 3; pre/post survey |
| Session depth | Average conversation length increasing over time | Enterprise admin data cross-referenced with self-report |
| Workflow redesign documentation | Each participant identifies at least 1 workflow with before/after comparison; minimum 2 documented redesigns per unit (6 total) | Templates deployed Week 3; tracked at Day 45 and Day 60 |
These are lagging indicators that connect AI-assisted proficiency to mission outcomes. We expect initial signal by Day 45 and reportable results by Day 60 for the pilot; Phase 2-3 targets are longer-horizon.
| Metric | Phase 1 Target | Phase 2-3 Target | Measurement |
|---|---|---|---|
| Time savings evidence | Each participant documents at least 1 task with before/after time comparison | Aggregated into "total hours redirected per week" by function | Self-report + workflow redesign documentation |
| Quality improvement evidence | At least 3 examples of AI-assisted outputs rated higher quality than previous manual outputs (peer-assessed) | Ongoing peer assessment integrated into champion sessions | Standard quality rubric |
| Mission connection | At least 1 example per unit of AI-freed time redirected to high-value student interaction | Grant cycle acceleration, coaching contact hour increases, program evaluation depth | Unit-level tracking by champion + Student Success Lead |
| Grant narrative first-draft cycle | Establish baseline hours per narrative | 25% reduction (Phase 2); 40% reduction (Phase 3) | Self-report by Development team |
| Coaching documentation time | Establish baseline hours per scholar | 15% reduction (Phase 2); 25% reduction (Phase 3) | Self-report by coaching team |
The June board briefing presents pilot outcomes and a rollout recommendation. The recommendation is determined by the following pre-committed decision framework. Each pathway specifies the evidence required and the action it triggers, so the board receives a decision tool rather than a report.
All four conditions must be met:
Board recommendation: Approve Q1 FY27 (July-September) rollout to all 24 licensed users via cohort-based onboarding. Budget: existing Enterprise license allocation plus facilitation time from Governance Lead and Co-Owner.
Two or three of the four Scale conditions are met, with specific gaps:
Board recommendation: Approve Q1 rollout with targeted modifications. Extend pilot by 30 days for underperforming units. Pair new cohort members with pilot graduates as mentors. Present a modified rollout timeline at the September board meeting.
One or more of the following conditions are present:
Board recommendation: Pause org-wide rollout. Conduct a root-cause analysis of pilot barriers. Present revised approach and timeline at the September board meeting. Maintain licenses for active pilot users during the diagnostic period.
| Metric | Target | Frequency |
|---|---|---|
| Tier 1 or Tier 2 data incidents | Zero | Continuous, reported monthly |
| Governance framework updates based on usage data | 1+ per quarter | Quarterly |
| Staff governance confidence (self-reported) | 80%+ report confidence in data tier classification by end of Phase 1 | Pre/post survey |
| Audit log review completion | 100% of scheduled reviews completed on time | Monthly |
These metrics will be reported to the CEO monthly during Phase 1, quarterly during Phases 2-3, and included in the annual board briefing on AI adoption outcomes and risk posture (Section 10).
This framework is a living document. Governance matures alongside organizational fluency with the platform. The following review cadence ensures continuous improvement.
| Frequency | Activity | Owner |
|---|---|---|
| Monthly | Review audit logs. Check for unauthorized data access patterns. Gather user feedback on friction points and productivity gains. Report engagement metrics (Section 9) to CEO. | AI Governance Lead & Co-Owner |
| Quarterly | Formal governance review with leadership team. Update data classification decisions, revise acceptable use policy, assess phase progression. Review Anthropic platform updates for new capabilities or policy changes. Include staff feedback from champion network and quarterly survey. Report proficiency and early impact metrics (Section 9). | AI Governance Lead & Co-Owner + Executive Sponsor |
| Annually | Comprehensive framework revision. Align with organizational strategic plan updates. Benchmark against peer nonprofit AI governance practices. Board briefing on AI adoption outcomes, full impact metrics (Section 9), and risk posture. | AI Governance Lead & Co-Owner + CEO |
The review cadence above governs the pilot and initial rollout phases. Sustained AI governance requires infrastructure that outlasts any individual and adapts as the organization's AI maturity evolves. The following maintenance commitments ensure this framework remains a living system.
Each July (aligned with DC CAP's fiscal year), the AI Governance Lead and Co-Owner conduct a comprehensive framework revision. This includes updating data classification decisions based on the prior year's audit findings, incorporating new Anthropic platform capabilities, revising acceptable use policy based on observed usage patterns, and benchmarking against peer nonprofit AI governance practices. The revised framework is presented to the CEO and included in the annual board briefing on AI adoption outcomes.
All licensed users complete the AI Fluency Assessment annually. The post-pilot version of the instrument (administered June 2026) becomes the baseline for ongoing measurement. Annual results are compared against prior-year scores to track organizational fluency trends, identify emerging knowledge gaps, and inform training investments. Results are reported in aggregate to the CEO and board; individual results are used for professional development conversations only.
Every new DC CAP employee with a Claude Enterprise license completes the onboarding prerequisites (assessment, Start Here guide, governance framework review, and Anthropic Academy courses) within their first 30 days. The AI Governance Co-Owner is responsible for ensuring new-hire onboarding is tracked and completed. New hires are paired with a trained colleague from their unit for their first two weeks of AI use.
The AI Governance Lead and Co-Owner roles carry institutional knowledge that must transfer smoothly during personnel transitions. Both roles maintain written documentation of platform configuration decisions, active governance issues, and in-progress policy changes. If either role transitions, a 30-day knowledge transfer period is initiated with the successor before the departing owner's last day. The remaining governance owner holds full operational authority during any transition period.
Beginning in Year 2, the quarterly governance review includes a brief knowledge spot-check for all licensed users: 5 scenario-based questions drawn from the data classification tiers, incident response protocol, and acceptable use policy. The spot-check takes under 3 minutes, reinforces governance awareness, and surfaces areas where refresher training may be needed. Results are tracked in aggregate; individual results trigger a coaching conversation only if a user scores below the threshold on data classification items.
This AI Data Governance Framework operates within and reinforces DC CAP's existing policy infrastructure. It should be read alongside the following organizational policies:
By checking the box below, I certify that I have read and understand the DC CAP AI Governance Policy Framework. I acknowledge my personal responsibility for maintaining best practices within my functional area as they relate to: