Securing Data in the Age of AI

AI Data Governance

At NC A&T, AI tools are becoming an integral part of how we support teaching, learning, and administrative work. As we adopt these technologies, it’s important to understand how they interact with university data. Any use of AI tools must align with the data classification standards outlined in our Information Security Policy (ISP), which defines four categories of institutional data: Confidential, Sensitive, Controlled, and Public. Each category carries specific requirements for access, handling, and protection. Using AI responsibly means understanding these distinctions and ensuring that tools are used in ways that uphold our institutional values, legal obligations, and data security standards.

  1. Confidential Data – University data that are protected by federal, state, or local
    statutes and regulations, industry regulations, provisions in government research
    grants, or other contractual arrangements, which impose legal and technical
    restrictions on the appropriate use of institutional information.
  2. Sensitive Data – University data that may not be protected by law or regulation
    but are considered private and are subject to restricted treatment; information
    resources that may be protected by contracts, third-party agreements, or
    university policy.
  3. Controlled Data – University data that are proprietary or produced only for use
    by members of the university community who have a legitimate purpose to access
    such data.
  4. Public Data – University data that have few restrictions and/or are intended for
    public use.
 

Data classification levels, data examples, and AI tool permissions:

Level 1: Confidential Data

Data examples:
  • Social Security numbers and tax information
  • Protected health information (PHI)
  • Payment card data and banking information
  • FERPA-protected student records
  • Research data with federal grant restrictions
  • Export-controlled information
  • Data subject to HIPAA, GLBA, or other regulatory compliance requirements

AI tool permissions: Prohibited from use with any generative AI tools.


Level 2: Sensitive Data

Data examples:
  • Student academic records and transcripts
  • Employee personnel files and evaluations
  • Unpublished research data and findings
  • Third-party contracted data
  • Donor information and giving records
  • Disciplinary records

AI tool permissions: Restricted to university-approved AI tools with enterprise data security agreements. Requires department head authorization and data anonymization when possible.


Level 3: Controlled Data

Data examples:
  • Internal university policies and procedures
  • Department budgets and financial reports
  • Staff directories and organizational charts
  • Internal meeting minutes and communications
  • Campus facility usage data
  • Proprietary administrative systems data
  • Internal research proposals and drafts

AI tool permissions: Restricted to university-approved AI tools with enterprise data security agreements.


Level 4: Public Data

Data examples:
  • University catalogs and course descriptions
  • Published research papers and abstracts
  • Public event announcements
  • General campus information and maps
  • Marketing materials and brochures
  • Public faculty profiles and contact information
  • Press releases and news articles

AI tool permissions: Unrestricted use of any generative AI tools and services. No special approvals required.
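
For teams building internal tooling around these rules (for example, a pre-submission check in a data workflow or a help-desk script), the mapping above can be encoded as a simple lookup. The sketch below is only an illustration of the table, not part of the ISP or an official university tool; the class names, function name, and messages are hypothetical.

```python
from enum import Enum


class DataClassification(Enum):
    """The four institutional data categories defined in the ISP."""
    CONFIDENTIAL = 1  # e.g., SSNs, PHI, payment card data, FERPA-protected records
    SENSITIVE = 2     # e.g., academic records, personnel files, donor information
    CONTROLLED = 3    # e.g., internal policies, department budgets, meeting minutes
    PUBLIC = 4        # e.g., catalogs, published research, press releases


# AI tool permissions for each classification, summarizing the table above.
AI_TOOL_PERMISSIONS = {
    DataClassification.CONFIDENTIAL: {
        "allowed": False,
        "note": "Prohibited from use with any generative AI tools.",
    },
    DataClassification.SENSITIVE: {
        "allowed": True,
        "approved_tools_only": True,
        "department_head_authorization": True,
        "anonymize_when_possible": True,
        "note": "University-approved AI tools with enterprise data security agreements only.",
    },
    DataClassification.CONTROLLED: {
        "allowed": True,
        "approved_tools_only": True,
        "note": "University-approved AI tools with enterprise data security agreements only.",
    },
    DataClassification.PUBLIC: {
        "allowed": True,
        "approved_tools_only": False,
        "note": "Unrestricted use of any generative AI tools and services.",
    },
}


def summarize_ai_rules(classification: DataClassification) -> str:
    """Return a short, human-readable summary of the AI tool rules for a classification."""
    rules = AI_TOOL_PERMISSIONS[classification]
    if not rules["allowed"]:
        return rules["note"]
    parts = [rules["note"]]
    if rules.get("department_head_authorization"):
        parts.append("Requires department head authorization.")
    if rules.get("anonymize_when_possible"):
        parts.append("Anonymize data when possible.")
    return " ".join(parts)


if __name__ == "__main__":
    # Example: print the rules that apply to Sensitive (Level 2) data.
    print(summarize_ai_rules(DataClassification.SENSITIVE))
```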

 

AI tools can be powerful aids for drafting, summarizing, and organizing content—but users are responsible for ensuring that any AI-generated material meets university standards. Whether you're using AI to help with writing, communication, or design, it's important to review and edit content for accuracy, tone, and compliance. For more information on appropriate use, including quality assurance, intellectual property, and ethical considerations, please refer to the AI‑Generated Content Guidelines.