My teachers, prior to AI, encouraged the 'equivocating waffle' essay. These essays met the word count and touched on the topics but failed to say anything interesting. That's basically how ChatGPT writes, and as I've mentioned previously (https://news.ycombinator.com/item?id=40646682), I'm very happy that AI can do these essays so well that we're going to be forced to actually think in order to differentiate ourselves.
The teacher doesn't read, they skim, and they already know who deserves As or Ds.
I was a victim of this. I was a general A or B student, but I thought the funny kids (D students) were funny and hung out with them. I got stereotype graded. My last paper of the year I completely gave up, the least effort ever. Teacher gave me an A and said 'You improved so much!'
Back in the early dial-up era, when teachers were not tech-savvy, I went online and found a paper exactly matching what I was tasked to write about for a homework assignment. And I regrettably submitted it as-is with no changes. I guess I knew it was cheating, but I likely also thought I was being incredibly clever as I had not heard of anyone ever doing that before. However, another student in the class submitted the same exact paper. I received an A and he received a C.
The teacher likely didn't know that he used to be my best friend growing up, and at some point was more knowledgeable with computers than me. He introduced me to things like IRC. But he became one of the most popular kids at school and started distancing himself from me.
After getting our papers back, he came over to brag about how he found his paper online and that's how we discovered we submitted the same exact essay. At that point in time, I thought the teacher must have assumed he copied from me. But I think your explanation is likely more plausible. I guess the teacher just skimmed the papers and graded based on our expected grade.
I have a friend who dealt with this in high school. The English teacher just copied whatever their grade was from the first assignment onto all other assignments.
It got so bad that his Dad, who was an active English and Spanish teacher at another school, was convinced to write one of his papers for him. He got a D.
I could imagine, I guess that would be a side effect of large class sizes.
An optimization when I was a student was to find out what the teacher thinks and re-affirm those beliefs with a few twists to give an appearance of depth. On occasion, for fun, I would take a dissenting position and I was always punished for it.
I think the entire education system is so steeped in orthodoxy that it's not in its interest to properly teach critical thinking; failing to do so is an emergent behavior / happy accident. There would have to be an environment that rewards students for actual critical thinking rather than apparent critical thinking (agreeing with the teacher). I don't know how to create one, and I especially don't know how to reform the current system.
I still get a bit of a kick out of the idea that the often proposed solution to the mass academic plagiarism, following the replication crisis, is a mass amnesty - which strangely seems to have tacitly occurred as it's no longer even being discussed.
You failed to establish the link between giving up, getting bad grades from hanging out with the funny kids, and how any of that is even remotely caused by stereotyping.
Even through college I've found that it's hard to optimize for grades vs learning. I've had teachers spite me for disagreeing with them.
Then I developed a formula that essentially went, "While {common sense assertion is true}, we need to consider the nuanced implications of {regurgitated pros/cons}." Combined with the smooth fluff and flow from using speech recognition with minimal edits, suddenly the A's started rolling in. I later found this of course works wonderfully with standardized testing essays in the GRE and GMAT.
Edit: I realize now why I get (even if I don't fully agree with) the 'stochastic parrot' dismissal of language transformer models, I basically lived it.
This is my experience as well. I remember one day completely zoning out and writing pages of drivel "defining what it means to be a X" or whatever. Got an A+. After that I realized professors didn't care about my original thoughts or ideas, but rather the appearance that I was thinking through the prompt deeply.
> Anecdotally, my kids' schools (sample size two, both high school) are quite anti-AI in the classroom.
Well, it's still early days. Wait until we truly are in a "learn AI skills or be left in the dust" world and AI will play a major role in the classroom. Just like those Chromebooks everyone has now. Because kids gotta have computer skills in order to be prepared for the working world!
This isn’t that complicated. It’s not about cognitive dissonance or standardized testing.
There are many similar human behaviors. Why do people smoke, drink alcohol, eat junk food, avoid exercise, and make all sorts of other harmful choices? Because the pleasure is immediate, and the consequences are not.
Same reason people get sunburned. If the sun burned people immediately, like a hot pan in the kitchen, everyone would use sunblock. But because it burns slowly, people walk themselves right into it.
If there is a button to avoid the pain of homework, to immediately go have fun instead, and there are no immediate consequences, all but the most disciplined, determined, and diligent students will press it. Knowing and acknowledging the future consequences makes no impact on the behavior.
I was in quarantine in middle school. During online school I paid very little attention to anything the teachers tried to teach; usually I played Minecraft during class. When I had a big math test I felt fine, because I knew I would find a way to cheat. On the test, every problem was a word problem. I had no clue what the questions wanted of me, so I had no idea how to cheat.
After receiving my D-, I realized my mistake and actually started paying attention, and learning. Although this stunted my mathematical development, I was able to get back on track to having a good understanding.
Had AI been as prevalent as it is now, I don’t think I would have ever had the revelation. That is why I appreciated the point the author made about the difference between a calculator and LLMs. You have to have some semblance of understanding to put something into a calculator. You need nothing and you gain nothing by copying and pasting into ChatGPT.
I've been thinking about critical thought in our society from another angle. In my opinion, if you assume that every person employs their critical thinking abilities to reason about the world, you would expect to see a lot of different opinions about the world.
But with each passing day we see the opposite: more and more people are converging on one of a few opinions about each topic. This is great if you want to move the world in a specific direction, but I think it demonstrates that people are exercising their critical thinking abilities less.
AI definitely made this worse, but I think it started long before that.
Another factor that I think contributes negatively to this effect is that our society doesn't really like it when someone is wrong or changes their mind. If we want to encourage people to use their critical thinking skills, we also need to tell them that arriving at bad conclusions is OK; the important thing is to always keep improving.
Why would you expect more critical thought to lead to more visible opinions? It would be like expecting everyone to have a different route out of their neighborhood. Nothing wrong if someone wants to try a different way, to a large extent, but often nothing is gained from it, either.
The counter hope, of course, is that more critical thought will result in more people discovering some abstract truth out there. I don't think that is realistic, either.
The mundane landing spot, I think, is the likely one. For most things, critical thought is just not much of a benefit; knowledge and understanding are far more beneficial. That's why we don't constantly reinvent how to drive a car. We have largely agreed that we have mechanisms that work, and it is better to educate folks on how those work than to get people to think critically about the controls.
Going further in that regard, understanding is far more immediately useful than critical deconstruction. Learning about affordances and how they guide you to what you are wanting to do is far more useful to someone's daily life.
Which is not to say that critical thought in designing said affordances is not good. Just, for most of us, we are not in a position to really impact any of that.
Democracy requires allies, so the overall position will tend to settle into two camps.
I'm not sure how well that reflects people's actual opinions. In many cases I think people don't care much about most topics. They simply accept the position of their allies. Occasionally they even find it abhorrent but necessary.
I think that mass communication has exacerbated that for decades, and AI at most optimizes it a bit further.
I don't really expect fine critical thinking. Most people aren't experts at most things.
But I am a bit surprised at the degree to which people have twisted themselves in knots to justify positions that do not withstand even the slightest scrutiny.
> you would expect to see a lot of different opinions about the world.
It is an age-old debate between know-that and know-how. Understanding the world around us is the point of education, and that means ways of looking at it, insights or theories, and how those insights and theories come about, which is the critical thinking process. I would prefer to call it thinking from first assumptions, since 'critical thinking' as a term is overused, and I would argue that AI is great at critical thinking in the shallow sense of the term.
Just out of curiosity, is 'critical thinking' a thing in other languages too? I'm a native speaker of two other languages and have learned a couple more, but it's never mentioned or treated as an issue in them. I feel it's just a way to call other people stupid (but not the reader), creating another us-vs-them chasm.
I don’t think I’ve encountered it in French. It’s just thinking. How you do it depends on what you want to achieve, but it's not a state of mind or a capability. Critical thinking seems close to “raisonnement scientifique” or “raisonnement logique”, i.e. scientific reasoning or logical reasoning.
School teaches the principles of logic (and the scientific method) and how to apply them in debates and learning, but not critical thinking. There were word count requirements sometimes, but essays were always about logical arguments for or against some opinion.
It covers what I think other languages may consider a subset of literacy. The point is to carefully avoid calling anyone stupid, while acknowledging that the ability to deeply think through what other people are communicating is a learned skill which often must be explicitly taught.
Yep! My essays in schools had prompts like “Describe the similarities between the Pocahontas story and the first Avatar movies”. The point was not the produced text, but the activity itself. And as a teacher, I believe it’s quite easy to catch cheaters, because producing a stellar text one day and a crappy piece another is an anomaly.
It's a meaningless, empty phrase. Even worse, the focus of the OP is on a RAND survey of some "youth panel" where they asked them how they felt about other kids' relationship to this empty phrase.
It's like when they poll people to ask them how the economy is doing. How the hell would they know? And what do you mean by the economy?
By design, most education is mediocre at best. The standardized high-stakes testing regime of NCLB exposed public schooling for what it was: for the majority, their version of leetcode. Learning to become a trained academic performer, not unlike a circus acrobat.
With the rise of LLMs, we are questioning the wisdom of public schooling as currently taught.
Ideally, with AI, schooling will no longer require standardized textbooks, lesson plans, and testing. With this technology, customized instruction and guidance can be made for each student as it evaluates their basic knowledge daily. The hope is that with this grunt teaching becoming more automated, actual critical thinking and dialogue will take place in the classroom.
Critical thinking as a skill unto itself was held as incredibly important in my early education. At school, and at home. It continued through undergrad (obviously, I think).
In the past, say 5-ish years, I've been shocked to realize this isn't universal, or at least broadly applicable. Probably more of a result of whatever societal bubble I was born into. I don't know.
The result has been a growing uneasy feeling for me, at work mainly, when discussing just about anything. I have to pause and understand for myself: "Has this person thought through what they are saying?" That's actually become a friction point with me. And it isn't generational from what I've seen.
When I try to discuss critical thinking skills with some of my peers and with one of my older brothers, this is dismissed as being in line with critical theory / CRT / doublethink from 1984.
There is apparently an ideological component to critical thinking. If you are supposed to analyze the world through the lens of what you consider the "one true set of ideas", being critical and "seeing both sides", or even working through the reasoning of others is seen as a violation of the highest order.
AI-written or not, I think this is a great article. One thing is the use of AI, but the other thing, the thing about how stupid mainstream education has become, is very real and in my opinion, a much bigger threat than AI.
This has been happening for decades and decades; it was something I was fully aware of during grade school. Now, as a father of a young child, one of my main goals is to instill quality critical and logical thinking, as it seems more important than ever. You would be surprised at how relatively easy it is to induce a critical thinking mindset in a child, mostly encouraged through curiosity in everything: asking the child how they think things work and finding out whether their ideas were true. Kids get a massive sense of accomplishment when they figure things out; it’s as simple as that.
> Not all educational AI is created equal, and the differences matter. Khan Academy's Khanmigo, launched in limited beta in 2023 and reaching approximately 1.5 million users across 130 countries by the end of 2025, represents a philosophically distinct approach to AI in education. Unlike ChatGPT, Khanmigo is designed not to give answers directly. Instead, it employs a Socratic method, offering hints and guiding questions intended to help students find answers themselves.
This is the first time I have heard of Khanmigo. Is it any good? Anyone here tried it?
This is just mass cheating. If you want to fix it, tell the kids to study with AI at home, and make them write in class. Schools should stop accepting homework altogether: assign it, and tell them that if they don't do it, they're going to fail the tests, which are all that will count for their grade.
The problem that they're going to have with this is that the schools have already been covering for bad teaching and lost students by making all the criteria fuzzy, and relying on homework that kids could cheat their way through for a large part of the grade i.e. credit for participation. Now, with AI, there's no way to deny that kids are cheating, and that's thrown the institution into a difficult position.
There's no educational threat from AI, AI will only help people learn. The threat is to the institution, which runs on a lot of dishonesty. We'll have to learn to tolerate some kids being left behind and make the effort (and create the systems) to move them forward again, instead of pretending like everyone is handling it. A system that can't deal with every kid losing a year of school, like what happened during covid, is a system that is focused more on schedule than student.
To teach students critical thinking you'd have to expose them to authoritative-sounding BS. And nobody has time for that. I don't think AI will help with that; it got too good too fast.
You'd have to intentionally expose students to output of weaker models. And still nobody has time for that.
Can confirm via my own anecdotal data and experiences, for whatever they're worth. Elder Millennial for context.
* Up until NCLB, classes were focused more on theory than rote memorization, with some notable exceptions. However, the further along I got in schooling (as NCLB approached), the more work shifted towards objective measures of knowledge rather than demonstrable understanding of theories, processes, and problem solving. By the time I reached High School, most classes were graded by objective measures rather than theory - English and Social Studies were graded identically to Math and Science. The focus wasn't on the content of Shakespeare or Dante's Inferno, nor on the geopolitics of The Opium Wars or the history of European Empires; it was dates, people, how the verse was written, marking syllables, etc.
* I got lucky that my gifted status meant I spent time at a local university in grade and middle school at a special campus part of the week. That school taught me some of my most valuable lessons that continue to pay dividends in practical life: how to think critically (a semester learning game strategies with a final exam deducing whodunnit in the movie 'Clue'), appreciating the similarities and unique differences in biological life (basically a deep-dive on animals, insects, and biology half a decade before HS Biology covered the same stuff at a shallower depth), understanding the underlying physics of planetary forces (plate tectonics, volcanism, fault lines, meteorology, etc), music and art appreciation regardless of ability to understand the underlying speech (lots of VHS musicals, arts and crafts, and self-expression), and ample time understanding how computers worked - including building my first programs and coding my first website. None of my "non-gifted" classmates received remotely similar quality of education, focusing instead on rote memorization instead of abstract problem solving.
* The day NCLB was signed, I remember my World History teacher flipping his desk in the classroom. "You lot better pay attention because you're the last class who will ever get this good an education ever again." He spent the remainder of the semester trying to teach World History through his preferred lens of underlying causes, political movements, outcomes, and next-order effects rather than dates and places, with ample essay questions on exams to force you to think critically on what you learned and make arguments for/against something he posited. Subsequent classes were exclusively date-person-place tests for the sake of standardized testing and measurable outcomes.
So when I see people defaulting to AI in a world where measurable outcomes are the only things that matter (grades, KPIs, 'number-go-up'), I can't entirely fault them. I've spent enough time in this system to know my way of thinking is entirely contrary to the incentives at play, and a threat to those who benefit from it. Were I more flexible in my ethics or thinking, I'd do the same to benefit myself.
Except I continue to see growing fatigue of folks who have to deal with this slop on a regular basis. Employers have already pivoted away from AI in interviews and job postings, at least in my IT purview, because the output doesn't justify the lost opportunities of hiring quality talent with critical thinking skills to solve unique problems; they're tired of "BuT cOpIlOt SaId" in meetings as justification for any given thing, and even more exasperated that leadership seems to trust the chatbot when it's wrong more than any employee who is right.
Do I think that attitude will win out in the long run? Not really, no, because the underlying incentives make reliance on AI in lieu of personal/critical thought a better prospect than trying to forge your own identity and path forward. At least for the foreseeable future, those who blindly trust the bot will be rewarded even when they're wrong, while those of us who use it as an untrustworthy peer (or not at all) will be punished for not surrendering ourselves to its output.
> It is whether the education system that ushered AI into classrooms with such breathless enthusiasm…
Was it?
Anecdotally, my kids' schools (sample size two, both high school) are quite anti-AI in the classroom.
The kids tend to be very much for "do my homework for me", but the education system? No.
Also, we need to treat schools like they are daycare. K-12 didn't stop Trump from getting elected (pre-AI).
We really need a different kind of school system: daycare (current teachers) and education (a new group of teachers, probably professionals).