AI BASICS

What Does the Ethical Use of GenAI Mean for Faculty and Students?

By the end of this unit, you will be able to explain why ethical use of GenAI tools matters in academic and professional settings, identify key risks and responsibilities, and apply practical strategies to use GenAI tools responsibly, transparently, and in alignment with institutional expectations.

Key Takeaways

  • Ethical use of GenAI means transparency, accountability, and privacy.
  • Institutional policies come first—know your boundaries.
  • GenAI should enhance—not replace—your own thinking and communication.

Why Ethics Matter

Generative AI tools are powerful, but they are not neutral. They can help you brainstorm, summarize, and draft content—but they can also produce biased, fabricated, or misleading outputs. Using them without transparency, attribution, or awareness of institutional guidelines can lead to academic integrity violations or misuse of sensitive data.

Ethical GenAI use means understanding what the tool can and cannot do, respecting privacy boundaries, and taking ownership of your work—even when AI helps.

Common Ethical Risks When Using GenAI 

Plagiarism

Submitting AI-generated work as your own original thinking without acknowledgment.

Fabrication

Using AI-generated statistics, citations, or quotes that are inaccurate or do not exist.

Harmful Assumptions

Accepting AI outputs that reinforce harmful stereotypes without scrutiny.

Privacy Breaches

Entering confidential or identifiable student or institutional data into public GenAI platforms, which may store, reuse, or leak submitted data. This can violate privacy laws like FERPA and expose sensitive records to unauthorized access.

Guidelines for Ethical Use

Follow Course & Institutional AI Policies

If you are a faculty member, include a clear statement in your syllabus explaining whether and how students are allowed to use AI tools. Use the AI Syllabus Statement Templates to get started. 

If you are a student, always read your course’s AI policy in the syllabus. If it is not clear or missing, ask your instructor before using any GenAI tools for classwork. 

Own What You Submit

If AI helps you brainstorm or structure a response, that is fine—but the final output should be clearly yours. You should be able to explain your work without the AI.

Be Transparent

If you used GenAI in your process, say so. Note how you used it (e.g., “Used Perplexity for research and ChatGPT to draft”). This is especially important in professional or collaborative settings.

Fact-Check Everything

AI tools often “hallucinate” plausible-sounding but false information. Verify facts, statistics, and citations through trusted academic or institutional sources. 

Protect Privacy

Use CUNY-licensed tools like Copilot when working with educational content. They are designed to uphold strong privacy protections aligned with CUNY’s commitment to safeguarding learning data. Avoid entering personal or identifiable information into non-CUNY platforms, which may not meet these standards.

Examples: Applying Ethical Use in Practice

Students

Scenario | Ethical? | Why / Why Not
You use GenAI to brainstorm ideas and then write your own draft, following guidelines in the syllabus that allow AI use. | ✅ Yes | AI supported idea generation, but the work is yours.
You paste an AI-generated output into your submission without citation or revision. | ❌ No | This misrepresents authorship and may violate academic integrity policies.
You ask GenAI to quiz you on a reading, then double-check the questions with the source text. | ✅ Yes | This supports active learning and critical thinking.
You enter a classmate’s work into ChatGPT to “get a better version.” | ❌ No | This breaches trust, privacy, and collaboration ethics.

Faculty

Scenario | Ethical? | Why / Why Not
You use GenAI to generate quiz questions and then edit them before publishing. | ✅ Yes | You reviewed and contextualized AI output before use.
You enter a student’s submission, including their name, into ChatGPT for feedback suggestions. | ❌ No | Public tools may store the data, violating FERPA.
You paste a student’s draft into CUNY Copilot, use it to generate feedback suggestions, then review and personalize the comments before sharing. | ✅ Yes | You are using an institutionally approved tool responsibly, with human oversight to ensure relevance and accuracy.
You copy a student’s essay into CUNY Copilot and paste the AI-generated feedback into the grade box without reviewing or editing. | ❌ No | This bypasses professional judgment and may result in inaccurate or inappropriate feedback, even if the tool is FERPA-compliant.
You use GenAI to draft a rubric, then revise it for your course. | ✅ Yes | You are using AI as a creative assistant, not a decision-maker.
You copy a full AI-generated assignment prompt from a public chatbot with no edits or review. | ❌ No | This risks misalignment with learning objectives and ignores quality control.

Quiz – Ethical Use of AI

Choose the best answer. You will earn a badge if you answer all questions correctly.

Content

Is submitting AI-generated content as your own work acceptable without any review, edits, or acknowledgment?

Responsible Use

What is a responsible way to use GenAI in an academic or professional task?

True or False

It is safe to enter names, emails, or identifiable work into public GenAI platforms.

Ethical Use

Which of the following best demonstrates ethical GenAI use?

Pledge

I understand the core principles of ethical AI use and pledge to use GenAI tools responsibly, with integrity, transparency, and respect for privacy.
