When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.
In Spain, the whole university system was like that until like 15ish years ago. Exams were king, in most courses they were worth 80%-90%, and of course always in person.
Then we did a university reform, partly with the excuse of aligning with the rest of the EU under the Bologna process, and partly to copy the US/UK ways. (I say "excuse" because that's what it was: politicians introduced some things under that pretense that weren't like that in the rest of the EU at all, and it was perfectly possible to comply with Bologna without doing them.) One of the pillars of that reform was continuous assessment, and evaluating coursework.
As a consequence of this, first of all working class students were royally screwed. Because suddenly it wasn't OK to just organize yourself to prepare the exam, you had to attend lots of sessions to earn points, which put students who work at a disadvantage. And second, passing by cheating became possible, even before LLMs. People tend to forget that before everyone got access to ChatGPT, some people had access to experts (family members, or even paying someone to do the work).
Now that this kind of cheating has been democratized and everyone can do it, instead of just the most privileged with access to experts or money to pay them, people act all outraged. Yet pretty much nothing is being done, except for using snake-oil detectors, or sometimes increasing the difficulty of assignments to make them LLM-proof (which also screws the students who actually want to learn without LLMs).
They spent years indoctrinating us (professors) in training courses on how the old exam-based ways were wrong (the "Napoleonic" model, they called it... none of them seemed to entertain the thought that maybe, if it had worked essentially unchanged since Napoleon's time, it wasn't that bad, and that you need solid reasons to change it beyond "this is old, so let's change it") and how the new ways were the bee's knees. As in the Milgram experiment, it's difficult for people to back down and acknowledge that they have been wrong, even when the solution is obvious.
I think the only defense of the new model is that it forces students to learn throughout the semester rather than just before the exam, which is easier and engages long-term memory more effectively (like doing more rounds of spaced repetition).
I definitely could tell the difference, though most of the time I just studied for a full 4-7 days before the exam.
When I studied CS in Germany in the 2010s (also in the Bologna Bachelor/Master system) most courses had weekly graded assignments. But the assignments didn't count towards your grade, instead you needed to reach a certain total of points to be allowed to take the exam. The actual grade was entirely composed of midterm and final exam (pen and paper exams, no computers, no multiple choice).
It was easy to cheat on the assignments. Working on them in groups was common and sometimes encouraged. The only person you could really cheat was yourself (and a TA who had to grade one more exam).
In the UK they claimed that girls did worse on one-off exams, so the one-off exam structure favoured boys. When courses had more graded coursework, girls did better.
So that was the justification for switching to a less heavily weighted final exam.
No idea how true that is.
We were also told learning a phonetic alphabet was better for young children learning to read than using the old ABC system.
As far as I have heard, that turned out to be based on one person's fantasy and zero evidence and has actually had negative impact on children learning to read.
The UK has a real problem with pseudoscientific nonsense invading the education system.
To my knowledge they still teach about auditory/visual/kinesthetic learners and how you should structure the way you learn around which one you are. This has been debunked for decades.
> The UK has a real problem with pseudoscientific nonsense invading the education system.
Not just the UK, pedagogy/education is a very soft science, along with any other field that revolves around human behavior (psychology, sociology, etc...).
Using AIs in experiments and studies will be an improvement even if they do not accurately reflect human behavior, just because you don't need a harm review and you can repeat your experiments multiple times under different variables.
Yes, it does have some advantages. Apart from what you mention, another one is that it's not so consequential to e.g. sleep badly the night before an important exam. It's just that I find the disadvantages to be much greater than the advantages.
If seven days of study are sufficient to pass the class, why is so little material being taught in one semester? It sounds like the exams are far too easy.
The lack of any real innovation or major economic development in Spain would appear to be fairly strong evidence that the "Napoleonic" model wasn't actually working. Maybe this new system was even worse but I'm baffled as to why anyone would believe that changes weren't needed. Physician, heal thyself.
Spanish STEM graduates innovate just fine. They just do it abroad, where they get paid decent salaries if they work for others, or get decent investment opportunities if they choose to be entrepreneurs :) Spain has lots of issues, but I don't think education of the workforce has ever been one, neither before nor after the reform (it was definitely better before, but still, it's not half bad even now... Let's see what happens with the inaction with respect to LLMs).
In my public university in Spain, we always had the option to do a single final exam instead of the continuous assessment, although very few chose it. Generally the continuous assessment was less stressful, and the material stuck better, with room to digest it rather than just cramming for the exam and forgetting it right after. Generally the default expectation was that everyone was a full time student yes, but there were proper accommodations for those that weren't.
It is definitely a lot more work for the professors though; most of my family are teachers. It's a lot of assessments, and it's very rare to have funding for TAs. Some think the extra work is worthwhile for the sake of transmitting the knowledge more effectively, but not all of them do.
Frankly, you sound a bit bitter about it from the professor's perspective, and somewhat rationalizing why it is bad for the students. But students do generally appreciate it, and yes good students too, not just cheaters. I think both good and bad students end up learning more and hating the process less.
Your comments on Bologna do resonate though; it was very confusing when I continued my studies in Germany and the Netherlands. The massive reforms were supposed to bring alignment with the EU, but if anything things got more misaligned. They unified all 3- and 5-year degrees into 4-year degrees, but in most of the EU all degrees are 3 years now, for instance.
Regarding the parent comment, indeed, my Computer Science degree was mostly hand-written exercises and exams, and it wasn't that long ago. The degree is about fundamentals, about understanding concepts and applying them, about the tools you need to learn anything in CS afterwards. You are expected to learn most of the practical skills for building software on your own, since they are ever-changing. And I have to say, that style of education has served me very well in my career.
PS: I was also surprised to learn that most of the undergrad exams in Germany, and some in NL, are oral. I can see how that might be a disadvantage to some, but writing is also a disadvantage to others. I quite liked it, less intense than a long written exam, and I think the professor can get a much clearer understanding of the student's grasp of the subject. But again, it's a ton of work for the professor, 20-30 mins per student one-on-one, giving them your full attention, adds up quickly.
Not sure when this was supposed to be the case, but at actual universities (not meant in a derogatory way; Germany has two types of higher education) in the hard sciences, most classes are graded on a single written exam. Both in undergrad/bachelor's and master's.
Unless things have drastically changed in the last five years...
This was at the University of Freiburg, which is among the top in Germany and the top 250 globally. Computer Science bachelor's and master's, around 8 years ago: most courses had a final oral exam unless many students (roughly >25) signed up for the class. Some classes like Information Retrieval or Machine Learning had >100 students, so they had a written exam, but most others were smaller and oral: Data Engineering and Databases, Cryptography, Physics Simulations for Graphics, Formal Verification Methods, Bioinformatics, Planning AI, P2P Networking... I had a couple of oral exams at VU Amsterdam (master's) too, but fewer, and not the final exam.
I know that an oral exam might seem less serious and rigorous, but I do think the professor can get a better grasp of how much the student actually understands the subject through an interactive interview.
My networking final in high school was probably my favorite test taking experience - there was a small written portion on eg subnets, but the bulk of our grade was setting up a physical network, testing it, and leaving the room. Our teacher sabotaged three parts of our network - could be hardware, router misconfiguration, etc. when we came in we had iirc 20-ish minutes to diagnose and fix it.
The best was when she barely unscrewed one of those big DIN connectors, so that at a quick glance it looked fine but wasn't fully connected.
If it is mostly a "show your work"/"show your reasoning" kind of grading, where the breadth and depth of your attempts matter more than success, then it seems OK.
When I did tertiary studies in programming there wasn't AI, but we did our programming exams with pencil and paper. The "beneficial" prep I'd had since high school was using punch cards, with a 24-hour turnaround time for compiles. That really makes you think, and you learn how to desk-check even thousand-line programs. Intense focus, structuring for readability (to catch typos), and simplicity (to catch logic errors) helped enormously. It was not unusual to change a hundred lines of code and submit knowing that it wouldn't compile, but would throw up the other errors I couldn't find. Our exams would give us 4-6 attempts for a clean compile AND correct output. The only space where I experience the same challenge now (40+ years later) is embedded code. Desktop and web stuff have LSPs, dynamic reloads, and interpreted code (not a thing for me when learning) with instant feedback.
Lots of skills from those old days that have been lost/ignored in the pretence of productivity.
I personally dislike placing a heavy emphasis on exams. Assignments/projects have been consistently the most enjoyable and rewarding parts of the courses I've taken so far in university.
It's a shame that they are also way more susceptible to cheating with AI.
That was common when I studied computer science in the nineties as well. Hand written exams mostly.
Writing papers is a useful skill to have, and many students aren't very good at it. I taught some classes during my Ph.D. and supervised some students on their master's and Ph.D. thesis work. Many students get their degrees without that really being addressed. Computer science degrees in the Netherlands, at least, spend very little time on writing skills. You get students with high-school-level English and Dutch, and that's it.
I learned to write properly only when I started my Ph.D. My supervisor made me do it right before he allowed me to submit papers for publication.
AI might actually be good for education long term. It will result in a more personalized approach, which I think is good. There are plenty of ways to test students that are more engaging and interesting for both teachers and students than some of the old ways. You can't fake knowledge when you do a verbal test. Or test people with a good old written exam.
And for teachers, of course, you can automate a lot of the verification work, which otherwise takes a lot of time.
Yeah, none of the problems with AI in education are new; some schools (and news articles) are just panicking because they gave their students laptops (and/or made them mandatory) and now the genie is out of the bottle.
But there were already heaps of problems with tech in education before AI.
My CS projects were often pretty free-form, so in theory I could've just used AI -- today, anyway. But a big part of the grade was a face-to-face interview where you actually had to talk about the code you wrote. Anyone coasting along with other people who did the actual work would fall through then.
Which strikes me as a terrible way to teach and test programming skills. If you're teaching people to program without so much as syntax highlighting, you're not preparing your students for anything that even remotely resembles the industry they aspire to work in.
Honestly, these days universities should probably find a way to incorporate AI into their teaching, rather than fight it. Anything else is betting that AI will not stick around, which strikes me as a hopelessly naïve bet. Especially for software development.
I don't pretend to have all the answers; I don't know how to teach systems thinking in an appropriate way either. But I'm pretty sure typewriters aren't it. Unless your students are hoping to get hired by Ada Lovelace, it's just not going to be relevant.
When I was in university we had the "Honor System"—no proctors, but you had to sign a statement on every exam and assignment that you did not cheat. And if you did cheat, one of your fellow students could report you to the Honor Board. Basically using the prisoners' dilemma to enforce honesty. And the Honor Board were always threatening to bring proctors back if cheating continued.
But yeah, everything was hand-written. On sheets of paper with pencil. I even had to write x86 assembly out by hand for my CPU architecture class. Of course, laptops were available back then but not cellphones and certainly not LLMs, so cheating by electronic means probably presents a stickier wicket now than it did back then.
I’m guessing we’re similar vintage. My CS classes were like this as well.
The only exception is that when I got into grad-level classes we did have some big programming projects. But most of that programming happened on SPARCstations, and it was actually just easier and more productive to sit at the machine in person, with its nice big (at the time) display, with all the other folks doing programming projects. Those machines had the standard dev toolchains provisioned, which weren't easy (at the time) to set up on a dorm room Mac or Windows computer.
I really think a lot of the ways we can reduce reliance on AI for thinking is to just set up systems where it’s not an inviting or rewarding option.
Not only that, the tabletification of education has actually been shown (via standardized testing) to make our kids dumber. This is the first time a new generation has scored worse than their parents. Technology has its place, we need to pick and choose where.
You mean an AI-proof grading system. Grades have very little to do with actual learning and are there solely for signalling. It would be better if universities shifted their focus to learning and eliminated grades/exams altogether. They should stop trying to be the gatekeepers for white-collar work and instead focus on learning and research.
My local college used to have a test center you would physically go into. Now they no longer let teachers send their students there (I don't know if it's just for programming or what) for whatever reason. I know they've wanted to cut down on wasted paper for years.
If my college is doing this, I cannot imagine how many others are also impeding their entire goal: education.
Seems like anybody could just study 6 hours a day for the last month before the final, last 2 weeks before the mid term and use AI for everything else.
I used to make my classes 60-80% project work, 40-80% quizzes, all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with a page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
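A minimal sketch of that "event recording" idea (this is a hypothetical format of my own, not Google's actual internal representation): store the document as an ordered log of edit events, and replaying the log both reconstructs the text and exposes how it was produced, e.g. large pastes.

```python
from dataclasses import dataclass

@dataclass
class Edit:
    pos: int      # character offset where the edit applies
    insert: str   # text inserted (empty string for a pure deletion)
    delete: int   # number of characters removed at pos before inserting

def replay(events, paste_threshold=50):
    """Apply edit events in order; also collect suspiciously large
    insertions, which would show up as instant pastes on playback."""
    text = ""
    pastes = []
    for e in events:
        text = text[:e.pos] + e.insert + text[e.pos + e.delete:]
        if len(e.insert) > paste_threshold:  # threshold is arbitrary
            pastes.append(e.insert)
    return text, pastes
```

For example, `replay([Edit(0, "Hello", 0), Edit(5, " world", 0)])` reconstructs `"Hello world"` with no flagged pastes, while a single 500-character insertion would be flagged.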
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
I'm old enough to remember a similar controversy over whether to allow calculators in math classes. While most schools were banning them to force kids to learn how to do math without them, my school went the other way. They mandated that every student had one and then changed the assignments and tests to account for it. Gone were questions that had whole number answers that could be computed in our heads. Instead, answers were complex and the only way to know whether you'd done the question correctly was to be sure of your method. They even allowed us to write programs in TI-BASIC that we could use on tests, the only limitation was that we were not allowed to share programs with other students. I discovered that rather than trying to cram for exams, I could just write a program that would solve each class of problem we were likely to see on the exam, and by essentially teaching my calculator to pass the test, I also taught myself. It was a vastly better way for me to study. It also led to my decision to major in comp sci and my career in software. I'm forever grateful to those teachers for choosing to see the latest technology as a multiplier of student potential rather than a way students could cheat to avoid learning.
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools. Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what us older folks had to do in school. We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations. The world they will inhabit has changed radically from the one we entered into following school.
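To illustrate the "teach your calculator to pass the test" idea above, here is a hypothetical sketch (in Python rather than TI-BASIC; the function name and structure are my own) of the kind of solver for a whole class of problems one might have written: by encoding the method, you learn the method.

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, sorted ascending.

    Writing this forces you to internalize the method: compute the
    discriminant, check its sign, then apply the quadratic formula.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real roots
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```

The payoff the commenter describes is exactly this: the program only works if you actually understand every step of the method, so writing it doubles as studying.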
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
A lot of people in this thread are talking about how they did in-person exams, handwritten problem sets in class, etc. This kind of thing is more challenging in the humanities, where the research paper is kind of our bread and butter. A lot of us have since turned to different kinds of assignments, but I am not ready to forego research papers in favor of blue book exams. I think there's some value in having to develop and sustain an argument in conversation with some body of literature (scholarly or otherwise), and that is not easy to replicate with in-person writing, at least at the undergrad level. (Doctoral candidates do this kind of thing all the time in qualifying exams, but that's after years of graduate school and fresh off doing nothing but reading 100+ books over the course of a few months.)
In one of my classes the approach was the opposite, I’m expected to do Ph.D level work as an undergrad and am expected to use AI.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D-level work is the best method as of now; I've learned more by using AI to do things above my head than from a semester of hard-core studying.
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small too, with an integrated monitor and keyboard, so it didn't take over the whole desk where I still used pencil and paper often
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
For programming, the way I would build a curriculum is to force students to actually learn how to program and code first. That's simple: require them to write code by hand in the classroom for all exams.
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public repo code portfolio far less meaningful as a sign of legitimacy.
I can't compose at a typewriter the way I do with a word processor. I would have to write it out by hand first. If the typing is just transcription, I could just as well be copying from an AI doc.
If you're doing it in class anyway, and providing typewriters, you might as well provide a locked down Chromebook. Cheaper and better for composition.
I like this. Related, this semester I've been using handwritten quizzes in class. A simple change that's been one of the best things as it changed students' expectations of class prep. Kind of do the readings and sort of prep and you can coast in class. But if you need to write out quiz answers you're forced to know the material better as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
Is there really much point, though? I think AI will keep improving, and there will be more and more incentive to use an AI that costs $20/month instead of a human writer who costs $30/hour. If someone wants an article written, and people like the AI article as much as the human one, what stops everyone from using AI?
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
I sometimes wonder if the next step for expensive, elite universities will be to make exam-by-interview a thing, even for undergraduate work. Get in a room with no electronics allowed, just you and the instructor/TA/whoever, and they interview you. Pretty quickly they are going to know if you learned the material or not.
This will only work until somebody figures out how to connect an AI to the typewriter, which will have some sort of microphone, and the person will start dictating into it with AI-assisted revisions. Once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach back what they have learned to the entire class. The class, instructor included, will also participate in a Q&A session to make sure the student's learning is not just memorization, e.g. by restating the information using different words, different scenarios, etc.
FWIW my Dad taught me how to type at 4yo on a huge Imperial typewriter. My spelling took an enormous leap in capability in a few weeks. Primary school teachers were amazed at the words I could spell correctly. (Didn't help my handwriting though which was still like intoxicated chicken scratch on a good day).
Aside from AI-proofing, IMO there is value in slow typing or writing. We have to think just a little bit more before putting ink to paper. There is also a higher cost to making mistakes.
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
In college I had to handwrite all my exams and the pure math courses didn't allow calculators. For the latter you could realize you were doing it wrong if the answer was too complicated to write down. As others said the final was something like 30% or more of your grade.
If you need a typewriter, there's a company in New Jersey that makes them for the prison trade in a lucite housing to prevent prisoners from hiding contraband inside.
Not sure if China manufactures new machines. India supposedly still manufactures them.
When I was a kid we were told no calculators because when you grow up and are in the real world it's not like you will have a calculator with you at all times. Fast forward to today and we all have a calculator on our phones.
I think AI should be treated the same. Who cares if it assists in a lot of the work; that is a good thing. BUT, as we all know, AI has been incorrect on many things, so I think a much better learning practice would be to forget whether AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true and AI writes it up in a perfect paragraph, you would still need to confirm the facts and find a reputable source that shows them to be true.
College education is all kinds of backwards if the goal is teaching students. If you aren't speaking German from day one, find another teacher. In this case, you are cheating yourself out of an education if you use a typewriter.
We have amazing tools through screens, but insist on using them as if they were the old tools and act surprised when they lead to worse outcomes. Each student could easily have a friend in Germany they speak to in real time if people actually cared about breaking down cultural barriers. Schools have never cared about education, their main purpose is control. Control of the narrative and control over your life.
If I were a professor, I would make a very clear policy that AI is not to be used on assignments, and would repeat it throughout the semester, but make no effort to actually enforce this and even make it easy to abuse.
Then, for the final exam, drop the bomb: in person, handwritten, no outside references, mostly the same assignments we've done before. If you fail, it's over for you. If you stayed true and studied, it should be easy for you to pass. If you used AI all semester, you did it to yourself. Those who complain will have their past assignments audited, and if AI was used they will be reported for plagiarism. This will be the most valuable lesson.
Surely there's a middle ground? Get some old WordStar-capable x86-class clones and leave the GUI off. It's typing on a keyboard without the confusion of the internet or clicking on glowing icons.
You could just as easily have a bunch of old desktop computers with no active network uplink leaving the room whatsoever, running some basic GUI and LibreOffice, connected to nothing but a dumb copper Ethernet switch in the same room and an old HP LaserJet with a 10/100 Ethernet interface. No need to force people to deal with typewriters.
A typewriter is extreme. In school I used an AlphaSmart (https://en.wikipedia.org/wiki/AlphaSmart) because my handwriting sucked. A laptop without internet would also work.
After 30 seconds the site shows a fullscreen popup:
> The Sentinel not only cares deeply about bringing our readers accurate and critical news, we insist all of the crucial stories we provide are available for everyone — for free.
Thank you very much for interrupting and ruining my reading experience of your article.
Remembering my college typewriter-use-by-quarters (coins) on a timer like being at the laundromat, I kind of love this.
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I literally wrote it by hand before trying to transcribe it.
To me this nostalgia is pointless. AI is here, it's good enough, and it's only going to get better. The classroom should be about using AI better, not ignoring it.
But that would require the teacher to be good at AI too. I think that's the problem here.
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Things like this are well-intentioned, but I don't know why there aren't more teachers creating optional "side quests" like these for students who want them, instead of forcing them on everyone.
optional "side quests" would allow teachers to create some standard accepted "main quest" curriculum and then just create a bunch of (even possibly "fun") "side quests" students can work on in their spare time for extra skill development
We already had AI proof education.
I definitely could tell the difference, though most of the time I just studied full 4-7 days before the exam.
It was easy to cheat on the assignments. Working on them in groups was common and sometimes encouraged. The only person you could really cheat was yourself (and a TA who had to grade one more exam)
So that was the justification used for switching to a less impactful final exam.
No idea how true that is.
We were also told learning a phonetic alphabet was better for young children learning to read than using the old ABC system.
As far as I have heard, that turned out to be based on one person's fantasy and zero evidence and has actually had negative impact on children learning to read.
To my knowledge they still teach about auditory/visual/kinesthetic learners and how you should structure the way you learn around which one you are. This has been debunked for decades.
> The UK has a real problem with pseudoscientific nonsense invading the education system.
Not just the UK, pedagogy/education is a very soft science, along with any other field that revolves around human behavior (psychology, sociology, etc...).
Using AIs in experiments and studies will be an improvement even if they do not accurately reflect human behavior, just because you don't need a harm review and you can repeat your experiments multiple times under different variables.
> and partly to copy the US/UK ways
In the UK it's common for exams to have almost all of the weight. In my Physics degree almost all courses were entirely dependent on the exams.
Including a final exam, known as General Problems, which examined the entire four-year MPhys course and where even getting 30% was considered a good grade!
It is definitely a lot more work for the professors though; most of my family are teachers. It's a lot of assessments, and it's very rare to have funding for TAs. Some think the extra work is worthwhile for the sake of transmitting the knowledge more effectively, but not all of them do.
Frankly, you sound a bit bitter about it from the professor's perspective, and somewhat rationalizing why it is bad for the students. But students do generally appreciate it, and yes good students too, not just cheaters. I think both good and bad students end up learning more and hating the process less.
Your comments on Bologna do resonate though, it was very confusing when I continued to study in Germany and the Netherlands. The massive reforms were supposed to be for alignment with EU, but if anything it got more misaligned. They unified all 3 and 5 year degrees into 4 year degrees, but in most of EU all degrees are 3 years now, for instance.
Regarding the parent comment, indeed, my Computer Science degree was mostly hand-written exercises and exams, and it wasn't that long ago. The degree is about fundamentals, about understanding concepts and applying them, about the tools you need to learn anything in CS afterwards. You are expected to learn most of the practical skills for building software on your own, since they are ever-changing. And I have to say, that style of education has served me very well in my career.
PS: I was also surprised to learn that most of the undergrad exams in Germany, and some in NL, are oral. I can see how that might be a disadvantage to some, but writing is also a disadvantage to others. I quite liked it, less intense than a long written exam, and I think the professor can get a much clearer understanding of the student's grasp of the subject. But again, it's a ton of work for the professor, 20-30 mins per student one-on-one, giving them your full attention, adds up quickly.
Not sure when this was supposed to be the case, but for actual universities (not meant in a derogatory way; Germany has two types of higher education) in hard sciences, most classes are graded on a single written exam. Both in undergrad/bachelors and masters.
Unless things have drastically changed in the last five years...
I know that an oral exam might seem less serious and rigorous, but I do think the professor can get a better grasp of how much the student actually understands the subject through an interactive interview.
The best was when she barely unscrewed one of those big DIN connectors so at a quick glance it looked fine, but wasn’t fully connected.
> The best was when she barely unscrewed one of those big DIN connectors so at a quick glance it looked fine, but wasn’t fully connected.
That's evil haha. It's the case where you unplug and plug again everything, changing seemingly nothing, but then it works
sounds like some of the technical exams I've taken, and/or one or two job interviews
If it is mostly a ”show your work”/”show your reasoning” kind of grading where your width and depth of attempts are more important than success then it seems OK.
Lots of skills from those old days that have been lost/ignored in the pretence of productivity.
It's a shame that they are also way more susceptible to cheating with AI.
Writing papers is a useful skill to have, and many students aren't very good at it. I taught some classes during my PhD and supervised some students with their master's thesis and PhD thesis work. Many students get their degrees without that really getting addressed. Computer science degrees in the Netherlands, at least, spend very little time on writing skills. You get students with high-school levels of English and Dutch and that's it.
I learned to write properly only when I started my PhD. My supervisor made me do it right before he allowed me to submit papers for publication.
AI might actually be good for education long term. It will result in a more personalized approach, which I think is good. There are plenty of ways to test students that are more engaging and interesting for both teachers and students than some of the old ways. You can't fake knowledge when you do a verbal test. Or test people with a good old written exam.
And of course for teachers, you can automate a lot of the verification work. This can be a lot of work.
But there were already heaps of problems with tech in education before AI.
My CS projects were often pretty free-form, so in theory I could've just used AI - today, anyway. But a big part of the grade was a face-to-face interview where you actually had to talk about the code you wrote. Anyone coasting along with other people while not actually doing any work would fall through then.
>even programming exams were hand written
Which strikes me as a terrible way to teach and test programming skills. If you're teaching people to program without so much as syntax highlighting, you're not preparing your students for anything that even remotely resembles the industry they aspire to work in.
Honestly, these days universities should probably find a way to incorporate AI into their teaching, rather than fight it. Anything else is betting that AI will not stick around, which strikes me as a hopelessly naïve bet. Especially for software development.
I don't pretend to have all the answers; I don't know how to teach systems thinking in an appropriate way either. But I'm pretty sure typewriters aren't it. Unless your students are hoping to get hired by Ada Lovelace, it's just not going to be relevant.
But yeah, everything was hand-written. On sheets of paper with pencil. I even had to write x86 assembly out by hand for my CPU architecture class. Of course, laptops were available back then but not cellphones and certainly not LLMs, so cheating by electronic means probably presents a stickier wicket now than it did back then.
The only exception is that when I got into grad-level classes we did have some big programming projects. But most of that programming happened on SPARCstations, and it was actually just easier and more productive to sit at the machine in person with its nice big (at the time) display with all the other folks doing programming projects. Those machines had the standard dev toolchains provisioned, which weren't easy (at the time) to set up on a dorm-room Mac or Windows computer.
I really think a lot of the ways we can reduce reliance on AI for thinking is to just set up systems where it’s not an inviting or rewarding option.
If my college is doing this, I cannot imagine how many others are also impeding their entire goal: education.
I'm sure they had some kind of submit your code as assignment and using testing as a way to grade the assignments.
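Grading via automated tests can be as simple as running a suite of input/expected-output cases against each submission and counting passes. A toy sketch of that idea (the `add` exercise and `student_add` submission are invented purely for illustration):

```python
# Toy autograder: run a list of test cases against a submitted
# function and report a score. A crashing submission just fails
# the case instead of crashing the grader.

def grade(func, cases):
    """Return (passed, total) for func against (args, expected) cases."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash counts as a failed case
    return passed, len(cases)

# A pretend student submission for a made-up "add two numbers" exercise:
def student_add(a, b):
    return a + b

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
passed, total = grade(student_add, cases)
print(f"score: {passed}/{total}")  # → score: 3/3
```

Real course infrastructure adds sandboxing and time limits, but the core loop is this small.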
Apparently you learn to double check your work!
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes allowed.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
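That "event recording" model amounts to an append-only log of edit operations that can be replayed to reconstruct the document at any point in its history. A minimal illustration of the idea (not Google's actual format, which is far richer):

```python
# Minimal event-sourced text document: store insert/delete events,
# then replay a prefix of the log to "watch" the document being built.

def apply(text, event):
    kind, pos, payload = event
    if kind == "insert":         # payload is the inserted string
        return text[:pos] + payload + text[pos:]
    elif kind == "delete":       # payload is the number of chars removed
        return text[:pos] + text[pos + payload:]
    return text

def replay(events, upto=None):
    """Rebuild the document from the first `upto` events (all by default)."""
    text = ""
    for event in events[:upto]:
        text = apply(text, event)
    return text

log = [
    ("insert", 0, "helo world"),
    ("insert", 3, "l"),          # fix the typo: "hello world"
    ("delete", 5, 6),            # drop " world"
]
print(replay(log, 2))  # → hello world
print(replay(log))     # → hello
```

Stepping `upto` from 0 to `len(log)` is exactly the "play back" a grader would watch; a paste-from-ChatGPT session would show up as one giant insert event.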
I only mention this because, given the AIs, I'm sure even with a typewriter it's more efficient to have the AI do the work and then just "type it in" to the typewriter, which kind of invalidates the entire purpose in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in a IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools.

Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what us older folks had to do in school.

We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations. The world they will inhabit has changed radically from the one we entered into following school.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting PhD-level work is the best method as of now; I've learned more by using AI to do things above my head than by hardcore studying for a semester.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
LLMs are also making a public code portfolio far less meaningful as a sign of legitimacy.
If you're doing it in class anyway, and providing typewriters, you might as well provide a locked down Chromebook. Cheaper and better for composition.
> Everything slows down. It’s like back in the old days when you really did one thing at a time.
Why did we turn computers into frenetic, distracted multitasking machines? What would it take to reverse course?
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the course I did at university. The exams were a helpful motivating factor even for the interesting courses.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach what they have learned, i.e. "teach back" all of it to the class and the teacher. The entire class, instructor included, should also participate in a Q&A session to make sure the student's learning is not just memorization, e.g. by having them restate the information using different words, different scenarios, etc.
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
[1] https://archive.org/details/ahl-1978-basic-computer-games/
I think AI should be treated the same. Who cares if it assists in a lot of the work; that is a good thing. BUT, as we all know, AI has been incorrect on many things, so I think a much better learning practice would be to forget whether AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true and AI writes it up in a perfect paragraph, you would still need to confirm the facts and find a reputable source that shows them to be true.
> Most students found their pinkies weren’t strong enough to touch-type, so they typed more slowly, pecking at the keyboard with their index fingers.
Huh. I'm not sure I ever use a pinky while touch-typing, except to hit right-backspace sometimes.
For that matter I don't home using F and J either -- I usually home with alt+tab / cmd+tab and right-ctrl / right-cmd.
Feels better to design assignments where students have to use and think about AI, not avoid it. Good experiment, just not a long-term fix.
Good luck, we’re all counting on you.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
https://austinhenley.com/blog/aihomework.html