What Does It Mean to Be Original: Caltech’s Stance on Open AI Tools

In the last couple of years, ChatGPT and other AI chatbots have surged into the spotlight as “hot topics” across various domains. From our daily interactions on social media to headline news, these conversational AI systems have become prominent features of our lives, including in the classroom. They’ve found their way into our academic discussions, generated questions on our homework assignments, and routinely appear in the fine print at the bottom of course syllabi. The question looming over students is this: does ChatGPT present a gateway to academic dishonesty, allowing students to cheat on assignments and compromise their learning? Or could it be a revolutionary tool, providing access to a wealth of information that enhances and facilitates learning?

Caltech finds itself in a unique position regarding the use of ChatGPT, as there is no comprehensive institute-wide policy on its use. Beyond the general guiding principles of disclosure, data and information protection, content responsibility, and Caltech’s honor code (as outlined at https://www.imss.caltech.edu/services/ai), the responsibility falls to individual professors to decide the role these chatbots play in their courses, and sometimes they leave that decision to the discretion of the students. Yet, amid this freedom, a common thread persists: the expectation that students adhere to the Honor Code that defines the ethos of Caltech.

With the emergence of AI chatbots like ChatGPT, we face uncharted territory within the already turbulent waters of the Honor Code. The vast capabilities of these tools raise ethical questions that challenge traditional notions of academic integrity, as it becomes increasingly difficult to discern where assistance ends and academic autonomy begins. Are assignments completed with the aid of AI truly reflective of a student’s effort, akin to using a calculator (a tool none of us would say compromises our intelligence), or are they simply a loophole for easily obtaining answers?

Currently, Caltech’s statements on tools like ChatGPT focus on strategies to either facilitate their productive use or mitigate their potential downsides. Professors who choose to incorporate these tools into their courses are offered tips for creating assignments that resist being compromised by artificial intelligence. The Institute aims to strike a balance, acknowledging the potential benefits while emphasizing the importance of maintaining the integrity of the learning process. A few of the “suggestions” for creating assignments that extend beyond the reach of artificial intelligence are as follows:

  • Focus on research skills and the expression of original thought, rather than creating a synthesized document.
  • Include visuals — images or videos that students need to respond to — in your assignment. Be sure to include alt-text for accessibility.
  • Reference or connect to current events or conversations in your field.
  • Ask for application or engagement between personal knowledge/experience and course concepts or topics.
  • For short reading responses, instead of using open-ended questions in Canvas, try social annotation tools that require students to engage with a text along with their classmates. Try Hypothes.is or Perusall.
  • Replace an essay or short-answer writing assignment with one that requires students to submit an audio file, podcast, video, speech, drawing, diagram, or multimedia project. That is, mix up the assignment in ways that make running to ChatGPT more work than it’s worth.

However, even if we settle the question of use in the classroom, another problem arises: admissions. For the Class of 2028 application cycle, Caltech issued a statement asserting that essays are a means for the institution to get to know the applicant on a personal level. Online tools, it says, may be used in the essay-writing process much as one would consult a teacher, parent, or friend, and Caltech places trust in its applicants not to cross ethical boundaries. Furthermore, a new question has been added to the application, asking applicants whether they used artificial intelligence tools in their submissions and stating that their answers will remain hidden until after the application cycle concludes, almost like an experiment in and of itself.

Overall, the integration of AI chatbots like ChatGPT into academia and admissions at Caltech raises a complex array of ethical questions. As technology continues to advance, institutions must grapple with the dual challenge of fostering innovation while safeguarding academic integrity. Caltech’s nuanced approach, emphasizing trust, transparency, and practical strategies, reflects an ongoing commitment to navigating this ethical landscape and maintaining the institution’s high standards, along with an underlying understanding that the responsibility rests as much with the students as with the institution that facilitates their learning.

So the real question becomes: what are our standards for “original” work? Could you tell that this article was written almost entirely by ChatGPT? Does the fact that I spent significant time modifying the output, sentence by sentence and even word by word, entering new ideas, changing old ones, and evaluating whether this article was truly an accurate portrayal of my thoughts mean that this is my work, despite very few of these words being mine in isolation? I don’t know, and that’s the point.