Mount Holyoke College acknowledges the growing role of generative AI tools such as ChatGPT, Gemini, Leonardo AI, and others in academic, administrative, and personal work. At the same time, the capabilities of these tools, and our understanding of their benefits and challenges, are evolving rapidly. While AI presents opportunities for innovation and efficiency, its use raises concerns about environmental impact, privacy, security, academic integrity, and equity, and AI tools often produce inaccurate or fabricated output. The following guidelines aim to ensure the ethical, secure, and responsible use of AI and to foster a culture of critical engagement with technology, in line with the College’s mission and strategic vision, as we navigate these changes as a community.
Ethical Use of AI
- Human Values: AI-enabled tools should be developed and used in ways that support human values, such as human agency and dignity, and that respect civil and human rights. Before adopting AI in contexts where rights could be violated, adherence to civil rights laws and human rights principles must be examined.
- Bias, Discrimination, and Fairness: AI systems can perpetuate biases present in the data they are trained on, which can lead to discrimination and harm. Users should critically evaluate AI-generated content for bias and ensure it aligns with the College’s commitment to diversity, equity, and inclusion.
For questions about bias and discrimination in AI use, contact Diversity@mtholyoke.edu.
- Avoiding Plagiarism: AI tools may produce content that includes copyrighted material. Users must ensure that any AI-generated material is properly reviewed and does not infringe on intellectual property rights.
Culture of Critical Engagement
- Learning Together: LITS, in collaboration with faculty, staff, and students, will provide workshops and other opportunities to share knowledge and build AI literacy across the College community. These sessions will cover ethical issues, practical applications, and critical analysis of AI technologies.
- Ongoing Review and Adaptation: As AI technology evolves, the College will continuously review and update its guidelines, ensuring responsible adaptation to new developments. If you have any feedback you’d like to share about these guidelines or AI use at MHC generally, please use this feedback form.
Professional Integrity and Responsibility
- Transparency: Any use of generative AI in academic work should be disclosed. This includes acknowledging AI-generated content in papers, projects, or other creative outputs.
- Faculty Guidelines: Instructors should set clear expectations regarding AI use in their courses. The appropriateness of AI for drafting, researching, or editing academic work will be determined on a course-by-course basis. Students must seek clarification on AI use from their instructors.
See the LITS Guide on Artificial Intelligence in the academic context, or contact EdTech@mtholyoke.edu or ResearchServices-g@mtholyoke.edu.
- Review AI Outputs: AI tools can generate content that is inaccurate, biased, or misleading. Users are responsible for verifying the accuracy of AI-generated information and ensuring it complies with professional integrity standards.
- For questions regarding academic integrity and AI, contact dean-studies@mtholyoke.edu. For professional integrity in the use of AI in the workplace, contact hrfrontdesk-g@mtholyoke.edu.
Compliance with College Policies
- Acceptable Use Policy: The use of AI tools is subject to Mount Holyoke College’s Code of Ethical Conduct, Student Honor Code, Acceptable Use of IT Policy, Data Classification Policy, Nondiscrimination and Anti-Harassment Policy, Sex Discrimination and Sex-based Harassment Policy, and other relevant institutional policies.
- Consultation for New Tools: As with any new technology, before purchasing or deploying a new AI platform for College-related work, faculty and staff must consult with LITS to review contracts and ensure compliance with security, accessibility, and privacy policies. Users should not assume that an AI feature that is currently free to use will remain so. For questions, or to request that a tool’s AI features be reviewed by LITS, please submit the AI Tools assessment request form.
- Nondiscrimination Policy: Please note that some AI tools may not reflect the values we hold at Mount Holyoke College. If you believe that an AI system, or an individual reviewing a system’s output, is discriminating against you as a user, please use the Nondiscrimination Incident Reporting Form or contact the Assistant Vice President of Compliance directly. The Nondiscrimination and Anti-Harassment Policy and the Sex Discrimination and Sex-based Harassment Policy are also available on our institutional website.
Data Privacy and Security
- Protect Confidential Data: Unless explicitly approved by LITS, AI tools must not be used to process or store information classified as restricted, confidential, or internal according to the College’s Data Classification Policy. This includes data covered under FERPA, HIPAA, or proprietary research information. Users must refrain from inputting student records, unpublished research, or sensitive administrative data into generative AI platforms.
- Public Information Only: Only publicly available information, as described in the College’s Data Classification Policy, should be entered into generative AI tools, except when a tool has been approved by LITS for use with other data types. Most AI platforms learn from user data, and entering private information can expose it to third parties or unauthorized users. For questions regarding data use in AI tools, please open a ticket with LITS.
Cybersecurity and AI-related Threats
- Phishing and Cyber Threats: AI tools can be used to create sophisticated phishing scams or malware. Users should remain vigilant against such threats by following cybersecurity best practices, including the use of two-factor authentication and avoiding suspicious links. Please report all suspected phishing messages to LITS.
Practices
- When using AI, do so with human oversight. Examples include the following:
- A meeting organizer setting expectations about the use of AI meeting summaries.
- Example language to set these expectations:
- We are using AI meeting summaries today. You can access this tool in the Chat or through the AI Companion. If you notice that the tool did not accurately portray someone or something that was said, please bring it to the attention of the meeting organizer.
- A meeting organizer or attendees editing AI summary outputs before sharing.
- A researcher describing how they used AI in their research.
- A user of AI services validating AI outputs before using them to inform their work.
These guidelines were developed using ChatGPT to draw on best practices observed at peer institutions, including Bucknell University, Wellesley College, and Iona University, with substantial editing by faculty and staff at Mount Holyoke College.