When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.
In Spain, the whole university system was like that until 15ish years ago. Exams were king; in most courses they were worth 80-90%, and of course always in person.
Then we did a university reform, partly with the excuse of aligning with the rest of the EU under the Bologna process (and I say "excuse" because politicians used that pretense to introduce things that weren't done that way in the rest of the EU at all, and it was perfectly possible to comply with Bologna without them) and partly to copy the US/UK ways. One of the pillars of that reform was continuous assessment and the grading of coursework.
As a consequence, first of all, working-class students were royally screwed. Suddenly it wasn't OK to just organize yourself to prepare for the exam; you had to attend lots of sessions to earn points, which put students who work at a disadvantage. And second, passing by cheating became possible, even before LLMs. People tend to forget that before everyone got access to ChatGPT, some people had access to experts (family members, or even paying someone to do the work).
Now that this kind of cheating has been democratized and everyone can do it, not just the most privileged with access to experts or money to pay them, people act all outraged. Yet pretty much nothing is being done, except for using snake-oil detectors, or sometimes increasing the difficulty of assignments to make them LLM-proof (which screws the students who actually want to learn without LLMs).
They spent years indoctrinating us (professors) in training courses on how the old exam-based ways were wrong (the "Napoleonic" model, they called it... none of them seemed to entertain the thought that if it had been working essentially unchanged since Napoleon, maybe it wasn't that bad, and that you need solid reasons to change it beyond "this is old, so let's change it") and on how the new ways were the bee's knees. As in the Milgram experiment, it's difficult for people to back down and acknowledge that they have been wrong, even when the solution is obvious.
I used to make my classes 60-80% project work, 40-80% quizzes all online.
I now do 50% project work, 50% in-person quizzes, pencil on paper with one page of notes.
I'm increasingly moving to paper-driven workflows as well: becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in universities might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence beyond AI prompting in the future.
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to analyze a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
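For illustration, the event-log idea can be sketched in a few lines (a toy model only; Google's actual change format is proprietary and far more involved):

```python
from dataclasses import dataclass

@dataclass
class Edit:
    """One keystroke-level event: an insert or a delete at a position."""
    kind: str        # "insert" or "delete"
    pos: int         # character offset in the document
    text: str = ""   # text added (inserts only)
    length: int = 0  # characters removed (deletes only)

def replay(events):
    """Rebuild the document by applying events in order -- the 'play
    back' idea: the log, not the final text, is the real document."""
    doc = ""
    for e in events:
        if e.kind == "insert":
            doc = doc[:e.pos] + e.text + doc[e.pos:]
        elif e.kind == "delete":
            doc = doc[:e.pos] + doc[e.pos + e.length:]
    return doc

log = [
    Edit("insert", 0, "Hello wrld"),
    Edit("insert", 7, "o"),    # the typo gets fixed mid-stream
    Edit("insert", 11, "!"),
]
print(replay(log))  # -> Hello world!
```

Pausing partway through the list is exactly the "play back" view: you see the typo appear and get corrected, something a pasted-in AI draft would never show.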
I only mention this because, given the AIs, I'm sure even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of defeats the entire purpose in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a Service. Send us your Markdown file and receive a typed-up, double-spaced copy via express shipping the next day!
I'm old enough to remember a similar controversy over whether to allow calculators in math classes. While most schools were banning them to force kids to learn how to do math without them, my school went the other way: they mandated that every student had one, and then changed the assignments and tests to account for it. Gone were the questions with whole-number answers that could be computed in our heads. Instead, answers were complex, and the only way to know whether you'd done a question correctly was to be sure of your method. They even allowed us to write programs in TI-BASIC that we could use on tests; the only limitation was that we were not allowed to share programs with other students. I discovered that rather than trying to cram for exams, I could just write a program that would solve each class of problem we were likely to see on the exam, and by essentially teaching my calculator to pass the test, I also taught myself. It was a vastly better way for me to study. It also led to my decision to major in comp sci and to my career in software. I'm forever grateful to those teachers for choosing to see the latest technology as a multiplier of student potential rather than a way for students to cheat to avoid learning.
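The "teach your calculator to pass the test" trick works in any language. A Python stand-in for one of those TI-BASIC exam programs might look like this (quadratic roots chosen here as a hypothetical example of a "class of problem"; the original programs are not described in detail):

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0 over the reals. Writing the solver
    forces you to understand the discriminant cases, which was the
    whole point of the exercise."""
    d = b * b - 4 * a * c
    if d < 0:
        return []                    # no real roots
    if d == 0:
        return [-b / (2 * a)]        # one repeated root
    r = math.sqrt(d)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

print(solve_quadratic(1, -3, 2))  # -> [1.0, 2.0]
```

You can't write the branch on the discriminant without understanding it, which is exactly how programming the solver doubles as studying.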
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools. Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double-check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what we older folks had to do in school. We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to previous generations. The world they will inhabit has changed radically from the one we entered after school.
Why are people promoting the idea that exams are no longer written or given in person? I graduated relatively recently and had maybe one take-home exam during my entire education. Every other exam was proctored in person and handwritten. The professor who gave the take-home exam also made it much more difficult than a normal exam, so I wouldn't really say it was easier than a normal in-person test.
A lot of people in this thread are talking about how they did in-person exams, handwritten problem sets in class, etc. This kind of thing is more challenging in the humanities, where the research paper is kind of our bread and butter. A lot of us have since turned to different kinds of assignments, but I am not ready to forgo research papers in favor of blue book exams. I think there's some value in having to develop and sustain an argument in conversation with some body of literature (scholarly or otherwise), and that is not easy to replicate with in-person writing, at least at the undergrad level. (Doctoral candidates do this kind of thing all the time in qualifying exams, but that's after years of graduate school and fresh off doing nothing but reading 100+ books over the course of a few months.)
In one of my classes the approach was the opposite: I'm expected to do Ph.D-level work as an undergrad, and expected to use AI.
In a different one, the professor just said that as long as you disclose that AI was used, you're fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D-level work is the best method as of now; I've learned more by using AI to do things above my head than from hardcore studying for a semester.
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small too, with an integrated monitor and keyboard, so it didn't take over the whole desk, where I still used pencil and paper often.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
For programming, the way I would build a curriculum is to force students to actually learn how to program first. That part is simple: require them to write code by hand in the classroom for all exams.
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public code portfolio far less valuable as a sign of legitimacy.
I can't compose at a typewriter the way I do with a word processor. I would have to write it out by hand first. If the typing is just transcription, I could just as well be copying from an AI doc.
If you're doing it in class anyway, and providing typewriters, you might as well provide a locked down Chromebook. Cheaper and better for composition.
I like this. Relatedly, this semester I've been using handwritten quizzes in class. A simple change, but one of the best things I've done, because it changed students' expectations of class prep. Kind of do the readings and sort of prep, and you can coast in class. But if you need to write out quiz answers, you're forced to know the material better, as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
Is there really much point, though? I think AI will keep improving, and there will be more and more incentive to use an AI that costs $20/month instead of a human writer who costs $30/hour. If someone wants an article written, and people like the AI article as much as the human one, what stops everyone from using AI?
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
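For scale, the two prices quoted above work out like this (a back-of-the-envelope sketch using only the commenter's own figures, not market data):

```python
# The comment's figures: an AI subscription vs. a human writer.
ai_monthly = 20.00   # dollars per month for the AI subscription
human_rate = 30.00   # dollars per hour for a human writer

# How much human writing time the monthly AI price buys:
minutes = ai_monthly / human_rate * 60
print(f"{minutes:.0f} minutes/month")  # -> 40 minutes/month
```

In other words, a month of the subscription costs about as much as 40 minutes of a writer's time, which is the economic pressure the comment is pointing at.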
I like open-note exams (and perhaps open-book exams, since you need to know the book well to know which page to look at). They force you to condense the material to the salient points and operationalise it to solve problems that are more challenging than those on a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
I sometimes wonder if the next step for expensive, elite universities will be to make exam-by-interview a thing, even for undergraduate work. Get in a room with no electronics allowed, just you and the instructor/TA/whoever, and they interview you. Pretty quickly they are going to know if you learned the material or not.
This will only work until somebody figures out how to connect an AI to a typewriter with some sort of mic. The person will dictate into it with AI-assisted revisions, and once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach back what they have learned to the entire class and the instructor. The class, instructor included, would then hold a Q&A session to make sure the student's learning isn't just memorization, e.g. by having them restate the information in different words, apply it to different scenarios, etc.
FWIW my Dad taught me how to type at 4yo on a huge Imperial typewriter. My spelling took an enormous leap in capability in a few weeks. Primary school teachers were amazed at the words I could spell correctly. (Didn't help my handwriting though which was still like intoxicated chicken scratch on a good day).
Aside from AI-proofing, IMO there is value in slow typing or writing. We have to think just a little bit more before putting ink to paper. There is also a higher cost to making mistakes.
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
[1] https://archive.org/details/ahl-1978-basic-computer-games/
In college I had to handwrite all my exams and the pure math courses didn't allow calculators. For the latter you could realize you were doing it wrong if the answer was too complicated to write down. As others said the final was something like 30% or more of your grade.
If you need a typewriter, there's a company in New Jersey that makes them for the prison trade in a lucite housing to prevent prisoners from hiding contraband inside.
Not sure if China manufactures new machines. India supposedly still manufactures them.
When I was a kid we were told no calculators because when you grow up and are in the real world it's not like you will have a calculator with you at all times. Fast forward to today and we all have a calculator on our phones.
I think AI should be treated the same. Who cares if it assists in a lot of the work; that's a good thing. But as we all know, AI has been incorrect about many things, so I think a much better learning practice would be to forget whether AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true, and AI writes it up in a perfect paragraph, you would still need to confirm the facts and find a reputable source that shows them to be true.
College education is all kinds of backwards if the goal is teaching students. If you aren't speaking German on day one, find another teacher. In this case, you are cheating yourself out of an education if you use a typewriter.
We have amazing tools through screens, but we insist on using them as if they were the old tools and act surprised when they lead to worse outcomes. Each student could easily have a friend in Germany they speak to in real time, if people actually cared about breaking down cultural barriers. Schools have never cared about education; their main purpose is control. Control of the narrative and control over your life.
If I were a professor, I would make a very clear policy that AI is not to be used on assignments, and would repeat it throughout the semester, but make no effort to actually enforce it, and even make it easy to abuse.
Then, for the final exam, drop the bomb: in person, handwritten, no outside references, mostly the same assignments we’ve done before. If you fail, it’s over for you. If you stayed true and studied, it should be easy for you to pass. If you used AI all semester, you did it to yourself. Those who complain will have their past assignments audited and if AI was used they are reported for plagiarism. This will be the most valuable lesson.
Surely there's a middle ground? Get some old WordStar-capable x86-class clones and leave the GUI off. It's typing on a keyboard without the confusion of the internet or clicking on glowing icons.
You could just as easily have a bunch of old desktop computers with no network uplink leaving the room whatsoever, running some basic GUI and LibreOffice, connected to nothing but a dumb copper Ethernet switch in the same room and an old HP LaserJet with a 10/100 Ethernet interface. No need to force people to deal with typewriters.
A typewriter is extreme. In school I used an AlphaSmart (https://en.wikipedia.org/wiki/AlphaSmart, because my handwriting sucked). A laptop without internet would also work.
After 30 seconds the site shows a fullscreen popup:
> The Sentinel not only cares deeply about bringing our readers accurate and critical news, we insist all of the crucial stories we provide are available for everyone — for free.
Thank you very much for interrupting and ruining my reading experience of your article.
Remembering my college typewriter use, fed by quarters on a timer like at the laundromat, I kind of love this.
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I literally wrote it by hand before trying to transcribe it.
To me this nostalgia is pointless. AI is here, it's good enough, and it's only going to get better. The classroom should be about using AI better, not ignoring it.
But that would require the teacher to be good at AI too. I think that's the problem here.
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Things like this are well-intentioned, but idk why there aren't more teachers creating optional "side quests" like these for students who want them, instead of forcing them on everyone.
Optional "side quests" would let teachers keep a standard, accepted "main quest" curriculum and then create a bunch of (possibly even fun) "side quests" students can work on in their spare time for extra skill development.
We already had AI proof education.
We'll see.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types.
> Everything slows down. It’s like back in the old days when you really did one thing at a time.
Why did we turn computers into frenetic, distracted multitasking machines? What would it take to reverse course?
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor, even for the interesting courses.
I think AI should be treated the same. Who cares if it assists in a lot of the work that is a good thing. BUT as we all know AI has been incorrect on many things so I think what would be a much better learning practice would be to forget if AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true and AI writes it up in a perfect paragraph you would still need to confirm the facts as true and find a reputable source that shows it to be true.
We have amazing tools through screens, but insist on using them as if they were the old tools and act surprised when they lead to worse outcomes. Each student could easily have a friend in Germany they speak to in real time if people actually cared about breaking down cultural barriers. Schools have never cared about education, their main purpose is control. Control of the narrative and control over your life.
Then, for the final exam, drop the bomb: in person, handwritten, no outside references, mostly the same assignments we’ve done before. If you fail, it’s over for you. If you stayed true and studied, it should be easy for you to pass. If you used AI all semester, you did it to yourself. Those who complain will have their past assignments audited and if AI was used they are reported for plagiarism. This will be the most valuable lesson.
> Most students found their pinkies weren’t strong enough to touch-type, so they typed more slowly, pecking at the keyboard with their index fingers.
Huh. I'm not sure I ever use a pinky while touch-typing, except to hit right-backspace sometimes.
For that matter I don't home using F and J either -- I usually home with alt+tab / cmd+tab and right-ctrl / right-cmd.
Feels better to design assignments where students have to use and think about AI, not avoid it. Good experiment, just not a long-term fix.
Good luck, we’re all counting on you.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
https://austinhenley.com/blog/aihomework.html