
Use AI Safely at CMU
Generative Artificial Intelligence (GenAI) tools enable the quick creation of new content based on data analysis and inputs. As these tools evolve, we must be aware of the associated data privacy, security, ethical, and legal concerns and use them responsibly.
Checklist: Use AI Safely at CMU
Before using generative AI tools, follow these steps to keep your data secure and use AI responsibly:
- Review the Guidelines for Data Classification to ensure the security of university data and protect the privacy of our students and colleagues.
- Report suspicious activity, such as phishing or synthetic media (e.g., Deepfakes), to the Information Security Office.
- Understand copyright limitations. Guidance on authorship of AI-generated content is still evolving; under current rules, purely auto-generated content does not receive copyright protection. Adding your own substantive modifications to generative AI output may increase the likelihood of protection, but exercise caution.
- Review AI outputs for accuracy. A human-in-the-loop is required to catch hallucinations or errors.
- Use CMU-approved tools. Logging in with your Andrew ID ensures better data protection.
Also, consider these steps for your role:
- Faculty: Add a syllabus statement explaining what AI use is permitted in your course. Follow the Eberly Center’s guidance on course design and responsible AI use.
- Staff: Review the data types you can safely input into generative AI tools.
- Students: Read CMU's Academic Integrity Policy, especially the "Unauthorized Assistance" section.
- Researchers: Familiarize yourself with copyright and data privacy policies, and consult your grant agency’s guidelines.
Know the Difference: Public vs. Protected AI Tools
Public AI Tools
When you use a public or free AI account, such as a personal ChatGPT account, or don’t log in with your Andrew account:
- You risk losing control over how your data is stored, processed, and reused.
- Your data could be stored and used to further train the model.
- CMU data could even be sold to advertisers.
Remember, these tools should only be used for general exploration. Never use public AI tools with student data, confidential research, or sensitive administrative tasks.
Examples:
- Any AI tool for which you create an account with a personal email (e.g., Claude, ChatGPT, Jules).
- Any AI tool you log into with your Andrew account that does not bring you to a CMU Web Login page.
Protected AI Tools
When you access an AI tool through a protected environment, such as CMU’s approved AI tools:
- You retain control over how your data is stored, processed, and reused.
- Your data remains private and secure.
- Your data will not be sold to advertisers or used to train AI models.
Examples: