ChatGPT vs. the Honor Code

I, like most of us, grew up in an age ostensibly before generative AI. It always seemed like something looming on the horizon, or just a movie villain. But over the past year or two, AI has suddenly entered the public consciousness in a huge way, through generative AI like OpenAI's ChatGPT.

ChatGPT has made huge advancements recently, scoring incredibly high on many standardized tests: it went from the bottom 10% of bar exam takers with GPT-3.5 to the top 10% with GPT-4, an improvement made in the course of about a year. Clearly, AI is here now, and it's here to stay. With this rapid increase in its abilities, many people have been looking for ways to apply GPT to their jobs, or even their homework assignments.

So, what of Caltech? We have an honor code stating, “No one shall take unfair advantage of any other member of the Caltech community.” Is using ChatGPT an honor code violation? What about other forms of AI? Where’s the line and how will it be enforced?

A lot of this question depends on the situation, as most things with the honor code do. Many classes have very strict policies, especially CS classes, that would obviously disqualify someone from using ChatGPT, but others, like the humanities, are more complex.

Dehn Gilmore, Professor of English and the Executive Director of the Humanities, has been thinking a lot about this subject. In the past, most humanities classes had fairly basic honor code policies; all of the humanities classes I took last year simply stated the honor code with no elaboration. This term, however, the humanities department has treated as an experiment of sorts, to see how, if at all, such AI tools can be integrated into the classroom.

Some professors experimented a lot with it, including assigning classroom exercises that involved interacting with ChatGPT, while others were extremely against it. The goal for the humanities department will be to set a baseline policy and then encourage instructors to add their own.

The humanities are all about developing your writing and critical thinking skills, and by using ChatGPT while you are still learning how to write, you "hinder your progress," says Gilmore. Having AI write an essay for you, even one based on your own ideas, still doesn't give you practice writing, and oftentimes you will lose out on insight gained in the process of writing.

Many of my best analyses of literature have come from starting to write and seeing where my thoughts lead me. Sometimes I stray so far from my starting point that I have to rewrite my thesis. Telling ChatGPT your thesis and then having it write the essay for you means you don't have the opportunity to refine your thoughts like that.

And asking GPT for a thesis has worse problems. I tried asking it for a thesis about "Beowulf," and all of the options it gave me were incredibly surface-level, showing no in-depth knowledge or analysis of the book.

On top of all of this, GPT still has a "hallucination" problem: it sometimes makes up citations or sources completely. In fact, two lawyers are currently defending themselves after using GPT to write a motion, because GPT made up some case references which they failed to check. They claimed they thought GPT was just a type of search engine, so they trusted what it gave them. Asking GPT about scientific articles often gives similarly bad results.

According to Gilmore, students need to "critically analyze sources," and they will "get led into bad tracks by ChatGPT". The more students rely on ChatGPT, the more complacent they will get, and their writing skills will lag behind. Susanne Hall, Professor of Writing and Director of the Hixon Writing Center, compares getting ChatGPT to write an essay for you to paying someone at the gym to lift the weights for you. The assignment might get done, but you will have learned nothing.

Despite all this, GPT can still be incredibly helpful in the classroom. It is undeniable that AI will continue to be a large part of our lives going forward, so perhaps we should start using it now and build classroom activities around it. It can also be a very helpful teaching tool, according to Hall, who has used it to generate practice sentences and examples for her students.

Currently, most humanities honor code policies, and even the Caltech websites about the honor code and the humanities, are mostly concerned with plagiarism. So, is using ChatGPT plagiarism? Plagiarism is defined as passing someone else's ideas off as one's own, and ChatGPT is not technically another person, so according to Hall, using ChatGPT is not plagiarism. Gilmore was less certain, however, saying that it is "muddy": while ChatGPT may not be another person whose ideas you are stealing, the ideas are still coming from somewhere other than your brain. So, the question of plagiarism is clearly hard to answer definitively.

Should you then cite GPT as a resource if you used it? If you write your whole piece with GPT, probably, but what if you just get GPT to edit or reword bits? Currently, many people would say yes, but this is likely to change. People don't feel the need to cite their spellcheckers, or even Grammarly, a common grammar correction tool that also uses AI. So perhaps it will simply become expected that someone would use GPT to refine their writing.

What of the future, then? Next year, most humanities classes will likely have an AI policy falling somewhere on the spectrum from completely prohibited to actively encouraged. There will likely not be an Institute-wide policy, or even a humanities-wide policy, due both to the evolving nature of GPT and to every class having different needs.

Farther into the future is much more uncertain. Gilmore is not that excited about it and worries that five years from now, students won't know how to write or think critically because they will have outsourced both of those jobs to AI. Hall is cautiously optimistic but likewise concerned about students' writing abilities.

So, is ChatGPT an honor code violation? Well, that depends on the classroom policy, which in turn depends on instructors actually having a clear honor code policy. Without one, it can be hard to tell where the line is. Is using a spelling and grammar checker OK? What if the grammar checker uses AI to help you rewrite sentences? What if you ask an AI to rewrite your paragraph? There is obviously a line somewhere, because at a certain point you are no longer learning anything from the assignment, defeating its whole purpose, which, in my opinion, is clearly an honor code violation.

And what of other applications? Will ChatGPT solve my physics problems for me? Probably not. But will ChatGPT write me an article for the California Tech? According to Michael Gutierrez (Ay ’25, Dabney/Ricketts), the answer is "no" if I want to get paid. Gutierrez says that because the Tech has a certain standard and expectation of quality, having people use ChatGPT to write articles would be unethical, especially as the writers for the Tech get paid for their efforts. In the past, there have been two or three articles written at least in part by ChatGPT, but these were all non-serious articles, and the authors heavily edited the responses; they were thus paid for their effort wrangling ChatGPT into giving reasonable outputs.

Articles fully written by ChatGPT are not eligible to be published in the Tech, but Gutierrez is in favor of using it as a writing tool. He says that he has used GPT in the past for several tasks, including generating band names (none of them were good) and explaining the concept of entropy (all the answers were ambiguous).

And as for how the Tech would know whether an article was written by a human or by ChatGPT, well, that is a question of the honor code. Passing ChatGPT's work off as your own might not be plagiarism (according to Gutierrez, that depends on the definition of plagiarism you're using), but it is definitely an honor code violation.

And since articles about tech rarely age well, I would like to make some predictions about the future for historians to laugh at.

Personally, I think much of the hype for AI is overblown, and we will not see it taking over as many jobs as people claim it will. People claimed the industrial revolution would take people's jobs, just as they said the increased automation of the past couple of decades would. And while many jobs have become fully automated and gone extinct, new jobs have opened in different sectors, or people's job duties have shifted. So even though AI seems like the hot new thing, I don't think it will change people's lives as much as they expect. In addition, many of these changes will likely be gradual enough that we will not notice any increased dependence on AI in our lives.

AI has great potential, both for good and for destruction. Its massive popularity right now is driving increased development; AI will just keep getting better and better, and we will keep integrating it into our lives. Our lives in as little as five years will probably look very different because of AI. It is possible we will have to rethink much of society to account for AI replacing jobs or making professions outdated. Maybe new jobs will open up elsewhere, but likely not at the same magnitude as the number of jobs lost, and likely in sectors vastly different from those where jobs are being lost. If all those jobs are lost, society will have to figure out how to feed and clothe people who not only do not have a job but have no opportunity to gain one.

AI is an incredibly interesting problem, probably one of the most interesting of this day and age. While nations squabble over nuclear warheads and pesky cold and proxy wars, AI will be there, gaining in power and ability. We need to develop regulation to ensure that the development of AI is done fairly, so we can avoid situations like what happened with many facial recognition algorithms, which could not recognize Black faces because there were not enough in the training data. Many people see AI as computerized equity and fairness (after all, how could a couple of 1s and 0s be biased?), but in truth, we train them on our own biases. Thus, we need to focus on creating more fair and equitable AI, not on the possibility of giant sentient cockroaches taking over the world.