AI Data Governance Framework

Claude Enterprise Adoption & Data Security

DC CAP Scholars | v1.1 | April 2026 | Prepared by: PLM/AMC

Enterprise AI delivers value in direct proportion to the organizational context it can access. Governance exists to structure that access responsibly, allowing teams to work at the frontier of what the technology enables while maintaining strict protections for the people we serve.

Executive Summary

DC CAP serves over 800 scholars annually through a model that combines financial aid, intensive coaching, and university partnerships. Every hour staff spend on grant drafting, program reporting, and operational documentation is an hour redirected from direct student interaction, partner relationship management, or the coaching conversations that drive persistence and graduation. Enterprise AI adoption exists to shift that ratio.

This framework establishes the governance structure, data classification standards, and operational protocols for DC CAP Scholars to adopt Anthropic's Claude Enterprise AI platform. The framework balances two priorities: maximizing the platform's strategic value by connecting it to organizational data, and protecting the personally identifiable information (PII) of the scholars, families, partners, and employees we serve. The intended result is that coaching staff recover time for high-value student engagement, development teams accelerate grant cycles that fund scholarships, and student success leaders strengthen the evaluation practices that improve outcomes across the scholar portfolio.

Claude Enterprise provides the security architecture required for responsible adoption. Anthropic does not use Enterprise customer inputs or outputs to train its models. The platform includes single sign-on (SSO) integration, audit logging, custom data retention controls, and role-based access management. These protections are comparable to the enterprise platforms DC CAP already operates, including Salesforce Education Cloud and Microsoft 365.

Research across 1,400+ executives shows that 95% of AI pilots fail to deliver measurable returns, and the primary determinant of success is organizational investment in people and processes, which should represent 70% of total effort. This document recommends a phased adoption approach that begins with a leadership pilot cohort using low-sensitivity organizational data, then expands access and data integration as the team builds fluency and governance matures. Success will be measured through a three-tier framework tracking engagement, proficiency, and impact on the scholars we serve, with explicit Scale/Pause/Pivot decision gates at Day 45 and Day 60. The framework is designed to be a living document, updated iteratively as real usage patterns inform policy refinements.

1. Strategic Rationale for Enterprise AI Adoption

DC CAP Scholars operates a comprehensive student success model that combines financial aid, intensive coaching, and university partnerships to serve over 800 scholars annually. The organization's strategic priorities require increased capacity across grant writing, program evaluation, partnership analysis, strategic communications, operational efficiency, and data-informed decision-making. At the same time, emerging AI capabilities require that users add relevant technology orchestration skills to remain both effective in their current roles and competitive across their careers.

Claude Enterprise addresses these needs by providing a governed platform where staff can work securely with organizational knowledge. Several features are particularly relevant to DC CAP's operating context:

  • Projects: Persistent workspaces loaded with organizational documents (grant narratives, program models, strategic plans, data dictionaries) that Claude can reference across conversations.
  • Skills: Repeated tasks should not require re-establishing context and orientation each time, but they often do. Building Skills in Claude captures best practices once so they can be replicated and made more efficient. DC CAP's leadership team has developed a custom skills library for the organization that will be shared with staff as part of the onboarding experience.
  • Connectors: Direct integrations with Microsoft 365 and nonprofit-specific platforms (for example, Blackbaud Raiser's Edge) that allow Claude to access specific information without manual uploads by individual users.
  • Custom Retention: Organization-controlled policies for how long conversation data is stored, with a minimum of 30 days and full audit logging.
  • Nonprofit Pricing: Qualifying 501(c)(3) organizations receive a 75% discount on this platform, making it less expensive to purchase as an organization than to have each staff member purchase a comparable subscription individually ($10 per seat per month vs. $20 for individual plans).

Governance Goals

Research across 346 nonprofits shows that governance separates the 7% achieving major organizational impact from the 93% plateauing at efficiency gains. These goals define what DC CAP's governance infrastructure is designed to accomplish.

Goal 1: Establish Living Governance Policy

Maintain a comprehensive AI governance policy with four-tier data classification, approved and prohibited tool lists, cross-functional oversight, and semi-annual review cycles. This positions DC CAP among the 24% of organizations with formal AI governance and the 7% with governance embedded in organizational systems.

Measures: Policy published and board-approved. All pilot participants complete governance orientation. Zero Tier 1 or Tier 2 data incidents during pilot. Governance review scheduled for October 2026.

Goal 2: Operationalize Privacy-by-Design

Ensure FERPA compliance and student PII protection across all AI-assisted workflows through clear anonymization standards, audit procedures, and data classification practice integrated into daily work.

Measures: All participants demonstrate correct data classification in at least one supervised session. Privacy-by-design checklist integrated into workflow redesign templates. Zero unauthorized data exposure.

Goal 3: Build Transparency and Accountability Norms

Develop organizational habits where AI-assisted outputs carry clear human ownership, disclosure is practiced when relevant, and the Diligence dimension of the 4D framework becomes reflexive behavior across all units.

Measures: 80%+ of participants self-report consistent application of Diligence behaviors by Day 60. At least one workflow redesign per unit includes explicit human review checkpoints.

Evidence base: 47% of nonprofits lack any AI governance policy (NTEN 2024). Organizations with clear governance, documented workflows, cross-functional ownership, and measurement systems comprise the 7% achieving major impact versus 93% stuck on the efficiency plateau (Virtuous 2026, n=346). Board-level AI oversight has grown from 11% to 40% of organizations (NTEN trend data).

2. Data Classification Framework

The following classification system governs what organizational data can be used with Claude Enterprise, under what conditions, and by whom. All DC CAP data falls into one of four tiers.

Tier 1: Restricted
Description: Individual PII, FERPA-protected data, SSNs, and financial aid details. Protected by FERPA, GEAR UP regulations, and organizational policy.
Examples: Scholar names linked to academic records; SSNs; financial aid details linked to individuals; health/disability information; family income data; employee PII (Social Security numbers, withholding, salary, evaluations, bank account information, benefits information); organizational financial and security information (including bank account numbers and platform logins); partner financial information.
Claude Access: PROHIBITED — Never upload to Claude in any form. No exceptions without legal review and explicit board authorization.

Tier 2: Sensitive
Description: Internal financial records, embargoed data, staff performance data, partner-shared confidential information, and small-cell aggregates (N<10) that could enable re-identification.
Examples: Budget models; investment portfolio allocations; disbursement totals; embargoed research or reports; staff evaluation summaries; partner MOU terms; aggregate student data with cell sizes below 10; aggregated HR/employee data where individuals cannot be identified.
Claude Access: RESTRICTED ACCESS — Role-specific access with least-privilege access as the default. Requires de-identification verification before upload. 30-day retention.

Tier 3: Internal
Description: Organizational knowledge, program documentation, draft strategies, operational content, and de-identified aggregated outcomes and demographics. Contains no PII or confidential partner data.
Examples: Strategic plans; draft strategic analyses; grant narratives; program models; data dictionaries; process documentation; meeting templates; OKR frameworks; training materials; aggregated retention/graduation rates; demographic distributions (percentages).
Claude Access: OPEN ACCESS — All licensed users within DC CAP. Upload to shared Projects. Standard retention policy.

Tier 4: Public
Description: Published materials, public-facing content, and externally available research or data.
Examples: Annual reports; website content; published research; press releases; public data sets (IPEDS, NSC aggregates, Census).
Claude Access: UNRESTRICTED — All users. Standard retention.
Aggregation Guidance: Aggregated outcomes (retention rates, graduation rates) and demographics (percentages, distributions) are Tier 3 when fully de-identified. If cell size is below 10 and could enable re-identification, treat as Tier 2.
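
The small-cell rule above can be sketched in a few lines. This is an illustrative helper, not an official DC CAP tool; the function name and constant are hypothetical, though the threshold value (10) mirrors the guidance.

```python
# Illustrative sketch of the aggregation guidance: aggregates with any
# cell size below 10 are treated as Tier 2 rather than Tier 3.
# Names and structure here are hypothetical, not an official DC CAP tool.

SMALL_CELL_THRESHOLD = 10

def classify_aggregate(cell_counts):
    """Return the data tier for an aggregate table given its cell sizes."""
    if any(n < SMALL_CELL_THRESHOLD for n in cell_counts):
        return "Tier 2"  # small cells could enable re-identification
    return "Tier 3"      # fully de-identified aggregate

# Example: a retention table broken out by campus and cohort year
print(classify_aggregate([42, 118, 9]))   # one cell of 9 -> "Tier 2"
print(classify_aggregate([42, 118, 30]))  # all cells >= 10 -> "Tier 3"
```

Any workflow that computes aggregates for Claude upload could run a check like this before classification is verified.
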
FERPA & GEAR UP Compliance: DC CAP operates under FERPA protections for scholar educational records and GEAR UP federal grant data requirements. Tier 1 restrictions are designed to exceed these requirements. Any future consideration of using individual-level data with AI tools would require a formal Data Protection Impact Assessment (DPIA), legal counsel review, and board approval.

3. Phased Adoption Plan

Adoption follows three phases designed to build organizational fluency, refine governance based on real usage, and expand access as trust and competency mature. This resembles the phased infrastructure approach DC CAP applied to Salesforce Education Cloud deployment.

Phase 1: Leadership Pilot (Months 1–2)

Objective: Build leadership fluency and validate governance protocols with low-risk data.

  • 10 seats: CEO, COO, Director of Strategic Partnerships and Student Success, GEAR UP Director, Director of Communications, Director of Development Operations, Director of Foundation Relations, Manager of Executive Operations, Data Lead, Development Co-Lead
  • Data scope: all permitted tiers with role-based dependencies applied, focusing on organizational knowledge, program models, grant language, strategic documents, and public data
  • Design project infrastructure, deploy pre-built organizational skills, identify and add connectors, and build shared organizational context
  • Load shared Projects with program documentation, past grant narratives, strategic plan, OKR frameworks, and data dictionaries
  • Success metrics: User adoption rate, tasks completed, time savings reported, zero Tier 1 or Tier 2 data incidents, governance framework updated based on real usage
Phase 2: Team Expansion (Months 3–5)

Objective: Extend access to coaching and operations teams through an all-staff kickoff in June. Introduce Tier 2 data access for authorized users.

  • Launch Phase 2 with all-staff kickoff led by pilot leaders delivering the training modules they developed during Phase 1
  • Enable Microsoft 365 connector for document access within Claude
  • Authorize Tier 2 aggregate data access for leadership team with de-identification verification protocols
  • Develop team-specific Projects loaded with role-relevant knowledge (coaching playbooks, partner evaluation frameworks, GEAR UP documentation)
  • Reduce data retention to 60 days. Review audit logs monthly.
  • Require all team members to complete the five-course progression in Anthropic Academy by the end of Phase 2
Phase 3: Full Integration (Months 6–12)

Objective: Organization-wide deployment with mature governance.

  • Deploy to all staff with role-appropriate access permissions
  • Evaluate and deploy additional connectors (Candid for funder research, Salesforce if available)
  • Integrate Claude workflows into standard operating procedures for grant writing, program reporting, and partnership analysis
  • Establish quarterly governance review cycle with audit log analysis, policy updates, and compliance verification
  • Connect AI governance to broader organizational data governance under the existing GEAR UP data policy framework

4. Security & Compliance Configuration

The following security settings were configured at contract activation and will be reviewed quarterly.

  • Model Training: Disabled (Enterprise default). DC CAP data will never be used to train Anthropic models.
  • SSO Integration: Enabled via Microsoft Entra ID. Aligns with existing Microsoft 365 identity management and centralizes access control.
  • Data Retention: 90 days (Phase 1), reduced to 60 days (Phase 2+). Balances the utility of persistent conversations with data minimization principles.
  • SCIM Provisioning: Enabled. Automates user lifecycle management aligned with HR onboarding/offboarding.
  • Audit Logging: Enabled, reviewed monthly. Provides visibility into platform usage and supports compliance documentation; does not reveal the content of specific conversations.
  • Role-Based Access: Tiered by data classification level. Ensures staff access only the data tiers authorized for their role.
  • Domain Capture: Enabled for the @dccap.org domain. Routes all staff accounts through organizational SSO and prevents shadow IT.

5. Roles & Responsibilities

  • AI Governance Lead & Co-Owner (Preston Magouirk, CSAO): Owns this framework. Manages platform configuration, conducts audit reviews, approves Tier 2 data access, and leads quarterly governance reviews. Shares governance ownership with the AI Governance Co-Owner.
  • AI Governance Co-Owner (Angela Cammack, COO): Co-owns platform governance and operations. Manages SSO/SCIM integration, coordinates with IT vendors, and supports user onboarding and technical troubleshooting. Holds full platform operational authority, including independent incident response capability, by the end of Phase 1. Serves as primary governance contact when the AI Governance Lead is unavailable. Participates in governance reviews and policy decisions alongside the AI Governance Lead.
  • Executive Sponsor (Eric Waldo, CEO): Authorizes policy changes, approves budget, and serves as the escalation point for governance decisions requiring organizational authority.
  • Student Success Lead (Stephanie Gardner, Director of Strategic Partnerships and Program Strategy): Champions adoption within coaching teams, identifies high-value use cases, and ensures Tier 1 data protections are maintained in daily workflows.
  • Communications Lead (Alexander Vassiliadis, Director of Communications): Ensures all Claude-assisted external communications, donor materials, and public-facing content receive staff review before distribution. Maintains brand voice standards in AI-assisted drafting workflows. Advises on Tier 2 data use in storytelling and impact reporting.
  • GEAR UP Compliance Lead (Danielle Walker, GEAR UP Director): Ensures GEAR UP data compliance requirements are maintained in AI workflows. Consulted on Tier 2 boundary decisions involving federal grant data. Advises on aggregation thresholds for GEAR UP reporting.
  • Development Co-Leads (Sashia Moore, Director of Development Operations; Anna Hartge, Development Co-Lead): Champion adoption within fundraising and donor relations workflows. Identify high-value use cases for grant writing, prospect research, and donor communications. Ensure development data (donor PII, gift amounts) follows Tier 2 protections. Lead Blackbaud connector adoption and implementation.
  • Data Lead (Anthony Little, Data Lead): Supports data infrastructure, analytics workflows, and Salesforce data integration within AI-assisted processes. Ensures data quality standards and Tier 2 protections are maintained in analytical outputs.
  • Operations Content Lead (Andre Mendes, Executive Operations Manager): Curates and organizes HR, finance, and operations documentation for ingestion into Claude Projects. Classifies content per the data tier framework before upload. Ensures Tier 1 and Tier 2 materials are filtered appropriately and that only authorized content enters shared knowledge bases.
  • All Licensed Users (all staff with Enterprise seats): Complete AI onboarding training, follow data classification protocols, and report any incidents or concerns to the AI Governance Lead.

6. Onboarding & Fluency Requirements

Platform access is conditional on training completion. Research shows it takes 2-3 months for skilled workers to reach competent AI use, and organizations that invest systematically in training achieve 60% adoption rates compared to 30% without. The governance framework protects data. The training program builds the fluency that makes governance intuitive rather than burdensome.

6.1 Training Prerequisites for Platform Access

No staff member receives Claude Enterprise access until they complete the following three prerequisites. The AI Governance Lead verifies completion through the pilot hub prerequisite tracker before provisioning accounts.

  • Review the Start Here Guide and AI Governance Framework (~30 minutes): Claude interfaces, organizational skills, data tier classification, approved use cases, and responsible use principles.
  • Complete the Pre-Launch Assessment (~30 minutes): Baseline fluency measurement across five constructs (AI orientation, learning orientation, current use, AI knowledge, applied skills), with embedded governance acknowledgment.
  • Complete two Anthropic Academy courses (~60 minutes): Claude 101 (platform basics, features, prompts, navigation) and AI Fluency for Nonprofits (4D framework applied to mission-driven work). Certificates required.

Account provisioning is conditioned on completing all three prerequisites. The AI Governance Lead reviews the prerequisite tracker and confirms completion before activating each user's Enterprise seat.
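
As a sketch, the provisioning gate described above amounts to a simple all-prerequisites check. The field names below are hypothetical placeholders for entries in the pilot hub prerequisite tracker, not an actual DC CAP system.

```python
# Illustrative sketch of the account provisioning gate: a seat is activated
# only when all three prerequisites are recorded as complete in the tracker.
# The set members are hypothetical tracker field names.

PREREQUISITES = {"start_here_review", "pre_launch_assessment", "academy_courses"}

def eligible_for_seat(completed: set) -> bool:
    """True only when every required prerequisite has been completed."""
    return PREREQUISITES.issubset(completed)

print(eligible_for_seat({"start_here_review", "pre_launch_assessment"}))  # False
print(eligible_for_seat(PREREQUISITES | {"extra_course"}))                # True
```
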

Detailed training content, facilitation guides, and the full 60-day learning arc are maintained in the AI Onboarding Implementation Plan (separate document, managed by the AI Governance Lead). The prerequisite experience described above is the entry point into a structured progression that includes 1:1 coaching, hands-on activities (including a facilitated Governance Walk with scenario-based data tier classification practice), intermittent group learning sessions, and capstone module development.

6.2 Pilot Experience Structure (Phase 1)

The leadership pilot operates through three complementary formats designed to build individual fluency while creating shared organizational capacity.

  • Leadership-wide introduction (pilot launch, April 6-10): Establish shared vocabulary, complete prerequisites, and align on governance and expectations. Owners: AI Governance Lead and Co-Owner.
  • 1:1 check-ins (recurring throughout the pilot): Individualized coaching on each leader's workflow integration, use case development, and fluency growth; the primary vehicle for building competency. Owners: AI Governance Lead and Co-Owner.
  • All-group meetings (intermittent throughout the pilot): Shared problem-solving, peer learning, module development feedback, and collective progress check-ins. Owner: AI Governance Lead.
  • Weekly pulse check-in (weekly, self-directed, ~30 seconds): Track engagement, iteration frequency, and confidence; surface wins and friction points. Owner: individual participants.

6.3 All-Staff Kickoff and Scaling (Phase 2)

The leadership pilot produces two outputs that enable organization-wide scaling: trained leaders who can facilitate AI adoption within their functional areas, and training modules built by those leaders for their specific teams.

The all-staff kickoff in June marks the transition from Phase 1 (leadership pilot) to Phase 2 (team expansion). Each pilot leader will have developed a training module tailored to their unit's workflows, governance considerations, and use cases. The kickoff is the delivery point for those modules, led by the leaders who built them.

6.4 Ongoing Fluency Development

Training is an ongoing requirement. The AI skills landscape changes rapidly, and one-time onboarding produces declining returns within 3-4 months without reinforcement.

  • Phase 2 (post-June kickoff): Unit-level training delivered by pilot leaders using their modules. Owners: pilot leaders within each functional area.
  • Monthly (Phase 2+): All-hands working sessions where staff share workflow integrations and troubleshoot challenges together. Owners: AI Governance Lead and pilot leaders.
  • By end of Phase 2: All team members complete Anthropic Academy's 5-course progression. Individual responsibility, tracked by the AI Governance Co-Owner.
  • Quarterly (Phase 3+): Advanced topics, platform updates, and governance refresh. Owner: AI Governance Lead.

6.5 Champion Network

The ten leaders in the pilot cohort are the champion network. Each leader owns AI adoption within their unit from day one. Their role: find and share practical team-specific examples, answer questions and mentor colleagues, surface feedback on what works and what creates friction, and participate in monthly governance feedback sessions. Additional champions may emerge from early adopters identified during the all-staff rollout in Phase 2.

Champion capability builds progressively during the pilot through 1:1 coaching, group sessions, and rotating facilitation of AI Fridays. By Phase 3, leaders facilitate peer learning sessions independently. The AI Governance Lead maintains the champion roster and provides facilitation support. The Student Success Lead (Stephanie Gardner) owns champion development within coaching teams. The Development Co-Lead (Sashia Moore) owns champion development within fundraising workflows.

7. Acceptable Use Policy

All Claude Enterprise users at DC CAP must adhere to the following guidelines. These are covered in the onboarding training described in Section 6 and acknowledged in writing by each user before receiving platform access. All Claude-generated errors are the responsibility of the user. Embedding personal review and expertise into every AI workflow is a non-negotiable practice at DC CAP.

7.1 Encouraged Uses

The following represent the kinds of high-value AI use this governance framework is designed to enable:

  • Draft grant narratives, program reports, and strategic documents using shared Projects loaded with approved organizational language, past submissions, and data dictionaries.
  • Analyze de-identified program data (Tier 3) for trends that inform coaching strategies, partnership evaluations, and board reporting.
  • Prepare board and funder materials by working with strategic plans, OKR frameworks, and published outcome data in dedicated Projects.
  • Streamline operational workflows through Microsoft 365 integration for email drafting, document review, and meeting preparation.
  • Build and share team-specific Projects that capture institutional knowledge so best practices are replicated across the organization.

7.2 Required Practices

  • Verify the data tier classification of any information before uploading or referencing it in Claude conversations.
  • Use shared organizational Projects for team knowledge rather than duplicating documents across individual accounts.
  • Review Claude outputs for accuracy before incorporating them into external communications, grant submissions, or reports. Claude is a drafting and analysis accelerant. Staff remain accountable for the quality and accuracy of all work product.
  • Report any accidental upload of Tier 1 data to the AI Governance Lead immediately. The incident response protocol (Section 8) applies.
  • Separate personal projects and chats from DC CAP projects. Every employee is expected to keep personal work in its own projects, build team projects, and share them with colleagues as appropriate.

7.3 Prohibited Uses

  • Uploading or referencing any Tier 1 (Restricted) data, including scholar names linked to academic records, financial details, or family information.
  • Using Claude to make automated decisions about scholar eligibility, placement, or support level without human review and approval.
  • Using Claude to automate outreach, hiring, dismissal, evaluation, or other consequential decisions without human review and approval.
  • Sharing Claude-generated content externally without staff review and attribution. All external communications remain staff-authored; Claude is a tool in the drafting process.

7.4 Personal AI Tool Use with Organizational Data

DC CAP's data governance obligations apply regardless of which AI platform staff use. Claude Enterprise is DC CAP's approved AI platform because it provides the enterprise security controls, audit logging, data retention policies, and training data exclusions that protect our organization and our students. Personal AI accounts (ChatGPT, Google Gemini, Microsoft Copilot, or any other consumer AI service) lack these protections.

Staff should not use personal AI accounts to process DC CAP data at any classification tier. Internal documents, student information, strategic plans, grant materials, partner communications, and any other organizational content should be worked with exclusively through Claude Enterprise. This policy exists because consumer AI platforms may use uploaded content for model training, lack audit trails required for governance compliance, and operate outside DC CAP's data retention and security infrastructure.

If a staff member identifies a capability available in another AI platform that Claude Enterprise does not currently provide, they should raise it with the AI Governance Lead or Co-Owner. DC CAP's approach is to consolidate AI use within a governed, auditable environment rather than fragment it across unmonitored tools.

Anthropic's Data Handling: Anthropic confirms that Enterprise customer data is never used for model training, and conversations and data are protected by Enterprise security controls (SSO, audit logging, and custom retention).

8. Incident Response Protocol

If Tier 1 data is accidentally uploaded to Claude or a potential data breach is identified, the following protocol applies:

  1. Immediate Containment

    The user who identified the incident deletes the conversation containing the data immediately. Claude's custom retention policy will process the deletion, but manual deletion accelerates removal.

  2. Notification

    Notify the AI Governance Lead (Preston Magouirk) or AI Governance Co-Owner (Angela Cammack) within 2 hours of discovery. Either can initiate the full response protocol. The responding Governance owner notifies the CEO within 4 hours.

  3. Assessment

    The Governance Lead reviews audit logs to determine the scope of the incident: what data was exposed, for how long, and whether it was referenced in any outputs.

  4. Remediation

    Review and tighten access controls if the incident results from a permissions gap. Update training protocols and ensure immediate remediation when the incident results from user error.

  5. Documentation

    Record the incident, root cause, and corrective actions in the governance log. Present findings at the next quarterly governance review.

Note on Anthropic's Data Handling: Anthropic confirms that Enterprise customer data is not used for model training. Deleted conversations are processed for removal within 30 days. Custom retention settings provide additional organizational control over the data lifecycle. These protections mean that even in an incident scenario, the blast radius is contained by platform architecture.

9. Success Measurement Framework

Governance without measurement is compliance theater. We need to know whether this platform is producing the outcomes that justify the investment, and we need to know early enough to course-correct. The following framework measures success across three dimensions: Engagement (are people using it?), Proficiency (are they getting better?), and Impact (is it producing mission-relevant results?). Detailed metric definitions, tracking instruments, and collection schedules are maintained in the KPI Framework (separate operational document).

Engagement

These are leading indicators. They tell us whether staff are using the platform and building habits.

  • Monthly active users. Phase 1 target (Day 60): 80%+ of the pilot cohort (7+ of 9). Phase 2: 60% of all licensed users by Month 4. Phase 3: 75% of all licensed users by Month 8.
  • Prompts per participant per week. Phase 1: baseline established Week 1; 15+ by Week 6. Phase 2: 15+ sustained. Phase 3: 20+.
  • Active user segmentation. Phase 1: cohort shifting from Light (1-5/week) toward Moderate (6-19) and Heavy (20+). Phase 2: majority in Moderate or above. Phase 3: majority in Heavy.
  • Anthropic Academy completions. Phase 1: Claude 101 + AI Fluency for Nonprofits (prerequisite); Framework & Foundations (recommended). Phase 2: 5-course progression for all team members. Phase 3: required for new hires within 60 days.

Data source: Claude Enterprise admin panel (usage logs) + weekly check-in self-report. Tracked weekly starting Week 1.

Proficiency

These metrics capture whether platform engagement translates into fluency development. Proficiency is the leading indicator that predicts workflow impact.

  • Iteration frequency. Phase 1 target (Day 60): 60%+ of the cohort reporting "A few times" or higher on the weekly check-in. Measurement: self-report (weekly check-in) plus admin logs where available.
  • Observable fluency behaviors. Target: 70%+ of participants demonstrate 3+ behaviors (iterating on drafts, clarifying goals, questioning model reasoning, identifying missing context, specifying output formats, providing examples, fact-checking outputs). Measurement: biweekly observation starting Week 3; pre/post survey.
  • Session depth. Target: average conversation length increasing over time. Measurement: Enterprise admin data cross-referenced with self-report.
  • Workflow redesign documentation. Target: each participant identifies at least 1 workflow with a before/after comparison; minimum 2 documented redesigns per unit (6 total). Measurement: templates deployed Week 3; tracked at Day 45 and Day 60.
Early Warning: Any participant reporting "Never" on iteration for 2+ consecutive weeks triggers a 1:1 coaching conversation with the AI Governance Lead.

Impact

These are lagging indicators that connect AI-assisted proficiency to mission outcomes. We expect initial signal by Day 45 and reportable results by Day 60 for the pilot; Phase 2-3 targets are longer-horizon.

  • Time savings evidence. Phase 1: each participant documents at least 1 task with a before/after time comparison. Phase 2-3: aggregated into "total hours redirected per week" by function. Measurement: self-report plus workflow redesign documentation.
  • Quality improvement evidence. Phase 1: at least 3 examples of AI-assisted outputs rated higher quality than previous manual outputs (peer-assessed). Phase 2-3: ongoing peer assessment integrated into champion sessions. Measurement: standard quality rubric.
  • Mission connection. Phase 1: at least 1 example per unit of AI-freed time redirected to high-value student interaction. Phase 2-3: grant cycle acceleration, coaching contact hour increases, program evaluation depth. Measurement: unit-level tracking by champion plus Student Success Lead.
  • Grant narrative first-draft cycle. Phase 1: establish baseline hours per narrative. Phase 2: 25% reduction; Phase 3: 40% reduction. Measurement: self-report by the Development team.
  • Coaching documentation time. Phase 1: establish baseline hours per scholar. Phase 2: 15% reduction; Phase 3: 25% reduction. Measurement: self-report by the coaching team.

Q1 Rollout Decision Framework

The June board briefing presents pilot outcomes and a rollout recommendation. The recommendation is determined by the following pre-committed decision framework. Each pathway specifies the evidence required and the action it triggers, so the board receives a decision tool rather than a report.

Scale: Full Q1 Rollout

All four conditions must be met:

  • 7+ of 9 pilot participants are monthly active users by Day 60
  • Pre/post knowledge assessment shows measurable improvement across the cohort (median gain of 15%+ on scored constructs)
  • 6+ documented workflow redesigns with time-savings evidence across at least 4 of 6 units
  • Zero Tier 1 data incidents; zero unresolved Tier 2 incidents

Board recommendation: Approve Q1 FY27 (July-September) rollout to all 24 licensed users via cohort-based onboarding. Budget: existing Enterprise license allocation plus facilitation time from Governance Lead and Co-Owner.

Scale with Modifications

Two or three of the four Scale conditions are met, with specific gaps:

  • Active user rate is 5-6 of 9 (adoption variance by unit)
  • Knowledge gains are positive but below 15% median threshold
  • Fewer than 6 workflow redesigns, or redesigns concentrated in 2-3 units
  • Governance record is clean

Board recommendation: Approve Q1 rollout with targeted modifications. Extend pilot by 30 days for underperforming units. Pair new cohort members with pilot graduates as mentors. Present a modified rollout timeline at the September board meeting.

Pause and Restructure

One or more of the following conditions are present:

  • Fewer than 5 of 9 participants are active users by Day 60
  • Pre/post knowledge assessment shows no measurable improvement
  • Any Tier 1 data incident occurred during the pilot
  • Leadership alignment has eroded (2+ participants disengaged after Week 4)

Board recommendation: Pause org-wide rollout. Conduct a root-cause analysis of pilot barriers. Present revised approach and timeline at the September board meeting. Maintain licenses for active pilot users during the diagnostic period.

Pre-commitment matters. This framework was established before any pilot outcomes were known. Defining decision criteria in advance prevents post-hoc rationalization and ensures the board receives an honest, evidence-driven recommendation regardless of outcome.
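Because the criteria are pre-committed, the decision logic can be expressed unambiguously. The sketch below encodes the three pathways as a function of pilot results; the field names and data structure are illustrative, not part of any DC CAP system, and the thresholds are taken directly from the pathway definitions above.

```python
# Illustrative encoding of the pre-committed Q1 rollout decision rules.
# Dictionary keys are hypothetical; thresholds mirror the framework text.

def rollout_recommendation(p: dict) -> str:
    """Return the board recommendation pathway for a set of pilot results."""
    # Pause triggers take precedence: any one of these forces a pause.
    if (p["active_users"] < 5                      # fewer than 5 of 9 active by Day 60
            or p["median_knowledge_gain_pct"] <= 0 # no measurable improvement
            or p["tier1_incidents"] > 0            # any Tier 1 data incident
            or p["disengaged_after_week4"] >= 2):  # leadership alignment eroded
        return "Pause and Restructure"

    scale_conditions = [
        p["active_users"] >= 7,                    # 7+ of 9 monthly active users
        p["median_knowledge_gain_pct"] >= 15,      # median gain of 15%+ on scored constructs
        p["workflow_redesigns"] >= 6
            and p["units_with_redesigns"] >= 4,    # 6+ redesigns across 4+ of 6 units
        p["tier1_incidents"] == 0
            and p["unresolved_tier2_incidents"] == 0,  # clean governance record
    ]
    met = sum(scale_conditions)

    if met == 4:
        return "Scale"
    if met >= 2:
        return "Scale with Modifications"
    return "Pause and Restructure"


# Example: a pilot that meets all four Scale conditions.
pilot = {"active_users": 8, "median_knowledge_gain_pct": 18,
         "workflow_redesigns": 7, "units_with_redesigns": 5,
         "tier1_incidents": 0, "unresolved_tier2_incidents": 0,
         "disengaged_after_week4": 0}
print(rollout_recommendation(pilot))  # prints "Scale"
```

Writing the rules this way makes the precedence explicit: a single Tier 1 incident or adoption collapse overrides otherwise strong results, exactly as the Pause pathway specifies.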

Governance-Specific Metrics

Metric | Target | Frequency
Tier 1 or Tier 2 data incidents | Zero | Continuous, reported monthly
Governance framework updates based on usage data | 1+ per quarter | Quarterly
Staff governance confidence (self-reported) | 80%+ report confidence in data tier classification by end of Phase 1 | Pre/post survey
Audit log review completion | 100% of scheduled reviews completed on time | Monthly

These metrics will be reported to the CEO monthly during Phase 1, quarterly during Phases 2-3, and included in the annual board briefing on AI adoption outcomes and risk posture (Section 10).

10. Governance Review Cycle

This framework is a living document. Governance matures alongside organizational fluency with the platform. The following review cadence ensures continuous improvement.

Frequency | Activity | Owner
Monthly | Review audit logs. Check for unauthorized data access patterns. Gather user feedback on friction points and productivity gains. Report engagement metrics (Section 9) to CEO. | AI Governance Lead & Co-Owner
Quarterly | Formal governance review with leadership team. Update data classification decisions, revise acceptable use policy, assess phase progression. Review Anthropic platform updates for new capabilities or policy changes. Include staff feedback from champion network and quarterly survey. Report proficiency and early impact metrics (Section 9). | AI Governance Lead & Co-Owner + Executive Sponsor
Annually | Comprehensive framework revision. Align with organizational strategic plan updates. Benchmark against peer nonprofit AI governance practices. Board briefing on AI adoption outcomes, full impact metrics (Section 9), and risk posture. | AI Governance Lead & Co-Owner + CEO

10b. Long-Term Governance Sustainability (Year 2+)

The review cadence above governs the pilot and initial rollout phases. Sustained AI governance requires infrastructure that outlasts any individual and adapts as the organization's AI maturity evolves. The following maintenance commitments ensure this framework remains a living system.

Annual Framework Revision

Each July (aligned with DC CAP's fiscal year), the AI Governance Lead and Co-Owner conduct a comprehensive framework revision. This includes updating data classification decisions based on the prior year's audit findings, incorporating new Anthropic platform capabilities, revising acceptable use policy based on observed usage patterns, and benchmarking against peer nonprofit AI governance practices. The revised framework is presented to the CEO and included in the annual board briefing on AI adoption outcomes.

Recurring Knowledge Assessment

All licensed users complete the AI Fluency Assessment annually. The post-pilot version of the instrument (administered June 2026) becomes the baseline for ongoing measurement. Annual results are compared against prior-year scores to track organizational fluency trends, identify emerging knowledge gaps, and inform training investments. Results are reported in aggregate to the CEO and board; individual results are used for professional development conversations only.

New-Hire Onboarding Pathway

Every new DC CAP employee with a Claude Enterprise license completes the onboarding prerequisites (assessment, Start Here guide, governance framework review, and Anthropic Academy courses) within their first 30 days. The AI Governance Co-Owner is responsible for ensuring new-hire onboarding is tracked and completed. New hires are paired with a trained colleague from their unit for their first two weeks of AI use.

Governance Role Succession

The AI Governance Lead and Co-Owner roles carry institutional knowledge that must transfer smoothly during personnel transitions. Both roles maintain written documentation of platform configuration decisions, active governance issues, and in-progress policy changes. If either role transitions, a 30-day knowledge transfer period is initiated with the successor before the departing owner's last day. The remaining governance owner holds full operational authority during any transition period.

Quarterly Spot-Check Protocol

Beginning in Year 2, the quarterly governance review includes a brief knowledge spot-check for all licensed users: 5 scenario-based questions drawn from the data classification tiers, incident response protocol, and acceptable use policy. The spot-check takes under 3 minutes, reinforces governance awareness, and surfaces areas where refresher training may be needed. Results are tracked in aggregate; individual results trigger a coaching conversation only if a user scores below the threshold on data classification items.

Design Principle: Governance sustainability depends on making compliance easy and visible. Recurring assessments, structured onboarding, and role succession planning ensure that organizational AI fluency and safety practices deepen over time rather than decay after the initial training investment.

11. Alignment with Existing Policies

This AI Data Governance Framework operates within and reinforces DC CAP's existing policy infrastructure. It should be read alongside the following organizational policies:

  • GEAR UP Data Policy: Establishes data handling requirements for federally funded programming. The AI framework's Tier 1 restrictions exceed GEAR UP's data protection requirements.
  • Salesforce Data Governance: Defines field-level security, role-based access, and data sharing rules within the Education Cloud platform. The AI framework's classification tiers mirror and extend these protections.
  • Employee Handbook / Acceptable Use: Claude Enterprise use falls under existing technology acceptable use provisions. Section 7 of this framework provides AI-specific supplementary guidance.
  • Partner MOUs: University partnership agreements include data sharing provisions. Tier 2 classification of partner-shared confidential data respects these contractual obligations.
  • AI Onboarding Implementation Plan: Defines the 60-day training arc, session facilitation guides, competency progression, pre/post assessment instruments, and champion network development. Section 6 of this framework establishes the training prerequisites; the Implementation Plan provides the detailed curriculum.
  • KPI Framework: Operational measurement instrument with detailed metric definitions, collection schedules, tracking dashboards, and Scale/Pause/Pivot decision gates. Section 9 of this framework establishes governance-level targets; the KPI Framework provides the tracking infrastructure.

12. Decision Log & Next Steps

Completed Actions

  • Enterprise order form signed with Anthropic. Nonprofit pricing confirmed. Initial seat count: 24.
  • CEO and CSAO reviewed and approved this framework for pilot implementation.
  • COO designated as AI Governance Co-Owner, with a development path toward full platform authority through the end of Phase 1.

Immediate Actions (Weeks 1–4)

  • Execute 4-Week Kickoff Plan. Configure security infrastructure, build knowledge base, launch pilot cohort.
  • COO assesses Microsoft Entra ID readiness for SSO integration.
  • CSAO prepares onboarding training materials for Phase 1 pilot cohort.

Open Questions

  • Should the board receive a briefing on AI adoption before or after the pilot phase? Recommendation: brief the board at the conclusion of Phase 1 with initial usage data and governance learnings.

Strategic Note: Governance documents built from actual usage patterns are more durable and practical than those designed in the abstract. The leadership pilot will generate the real-world data needed to refine this framework for Phase 2 expansion. The framework will be updated at the conclusion of Phase 1 based on pilot learnings.

Governance Acknowledgment

By checking the box below, I certify that I have read and understand the DC CAP AI Governance Policy Framework. I acknowledge my personal responsibility for maintaining best practices within my functional area as they relate to:

  • Data Security & Diligence — classifying data before upload, following the four-tier framework, and protecting student and organizational information
  • Responsible Delegation — making informed decisions about which tasks to assign to AI and maintaining human judgment on sensitive matters
  • Discernment in Review — critically evaluating all AI-generated outputs for accuracy, tone, and policy compliance before any external use