I am not in a CS program myself, but I guest lecture for CS students at CMU about 2x/year, and I'm in a regular happy hour that includes CS professors from other high-tier CS schools.
Two points of anecdata from that experience:
- The students believe that the path to a role in big tech has evaporated. They do not see Google, Meta, Amazon, etc., recruiting on campus. Jane Street and Two Sigma are sucking up all the talent.
- The professors do not know how to adapt their capstone / project-level courses. Core CS is obviously still the same, but for courses where the goal is to build a 'complex system', no one knows what qualifies as 'complex' anymore. The professors use AI themselves and expect their students to use it, but do not have a gauge for what kinds of problems make for an appropriately difficult assignment in the modern era. The capabilities are also advancing so quickly that any answer they arrive at today could be stale in a month.
I've been doing programming and sysadmin as a hobby for a long time and only recently started my bachelor's in compsci, and I'm sad to have waited so long, as almost everything has been infested with AI to some degree.
Currently in a CS master's program at an Ivy: I think it's like believing that the study of pure math evaporated when we invented the calculator, or that we suddenly shouldn't have bothered with Riemann sums because of the FTC (the fundamental theorem of calculus). AI is to coding much the same, in the sense of moving to a higher layer of abstraction. I don't think CS curricula have to change drastically to accommodate this; however, the onus on not getting it wrong increases, since AI produces probabilistic output. Finally, you can have a chatbot do all the work for you, to your own detriment I suppose...
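To make that analogy concrete, here's a tiny illustrative sketch (mine, not the commenter's) in Python: a left Riemann sum grinding out an approximation of the integral of x^2 on [0, 1], next to the exact answer the FTC gives from the antiderivative x^3/3. Moving up an abstraction layer doesn't make the lower layer wrong; you just stop doing it by hand most of the time.

    # Illustrative only: left Riemann sum vs. the exact FTC answer
    # for the integral of f(x) = x^2 over [0, 1].

    def left_riemann_sum(f, a, b, n):
        """Approximate the integral of f on [a, b] with n left-endpoint rectangles."""
        width = (b - a) / n
        return sum(f(a + i * width) for i in range(n)) * width

    f = lambda x: x ** 2
    approx = left_riemann_sum(f, 0.0, 1.0, 1000)
    exact = (1.0 ** 3) / 3  # FTC: antiderivative x^3/3 evaluated at the endpoints
    print(approx, exact)    # ~0.33283 vs 0.33333...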
I taught an intro course last semester. It was intended for non-CS majors, but one module ended up being all CS majors after all. They were very pessimistic about their job prospects at graduation.
I explained that the fundamentals are still very much necessary for now, even if you end up only reviewing AI code. Honestly, computational thinking is as important as ever, although how persuasive I was about this is up for debate.
We used some tools that AI models just aren't good at (visual languages are not a strength of language models, and I explained from day one that AI couldn't help), but some weaker students still tried to use it and were confidently given incorrect instructions. They often ended up stuck, because the newest cohort we've gotten is very averse to office hours when ChatGPT exists (out of ~75 students, only one ever showed up, although I did meet with many right after class).
I'm very concerned for these students. Using AI as a crutch was definitely not helping them succeed, but the ability to get easy answers (even if totally wrong) is too appealing. In the classroom they seemed interested, but once they get to a chatbot, they don't want to put it in "learning" mode; they want to be done with the assignment, and they aren't taught enough "AI literacy" to think critically about the outputs or their use of it in general.
What I see at a German university: no change to the undergraduate CS degree, which is still 50% maths and theoretical CS and is not affected by LLMs. But the Master's program offers a lot of ML courses, from the basics to computer vision to hardware-aware ML. Exams in those are written on paper, without any aids.
My son goes to an extremely well-respected public university CS program. He's in his second year. I am an exec at a mid-market tech company, so naturally I'm curious how his university is handling AI. I've been surprised that (a) they aren't teaching it, which seems like a travesty given that AI fluency is now an expected skill, and (b) they are using paper tests (i.e., literally hand-writing code) to ensure students aren't just using AI to generate code on exams. A wild situation, and not particularly helpful IMO: not teaching them to use modern tools while requiring mastery of a skill no one has needed for over a decade (IDEs have had great autocomplete for a long time now; no one needs to be able to hand-write code with perfect syntax).
As far as the job market, he is still seeing folks get great internships and he's seeking them out himself right now. However, he's very realistic that he'll likely never work as a developer. He expects to work IN technology, but not writing code, so he's prioritizing developing his personal network and soft skills in addition to his academics. His thinking about his career and that of his peers: adaptability will be key.
Large well-regarded CS schools still have 'systems' and other traditional CS specializations. I would encourage looking at those programs.
Experience is still needed too. You can't just blindly trust AI outputs. So my advice is to get experience in an old-fashioned CS program and by writing your own side projects, contributing to open source projects, etc.
I got a lot out of learning combinatorics, probability, statistics, and how to prove theorems. That core of good thinking is still important, and from what I've seen it isn't required even in top-50-ish US undergrad CS programs.
I think that object-oriented programming and design patterns will still be important. These are useful at higher levels for architecting systems that are maintainable, even if not being used at lower levels (e.g., code for classes within services).
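As an entirely hypothetical sketch of that point, here's the classic Strategy pattern in Python: the high-level Checkout service is written against an abstract PaymentProcessor, so new processors can be swapped in without touching the checkout logic. All the names here (PaymentProcessor, StripeProcessor, Checkout) are invented for illustration.

    # Hypothetical example: Strategy pattern keeping a high-level
    # service decoupled from its low-level implementations.

    from abc import ABC, abstractmethod

    class PaymentProcessor(ABC):
        @abstractmethod
        def charge(self, amount_cents: int) -> bool: ...

    class StripeProcessor(PaymentProcessor):
        def charge(self, amount_cents: int) -> bool:
            print(f"charging {amount_cents} cents via Stripe")
            return True

    class Checkout:
        # Depends on the abstraction, not a concrete processor:
        # the maintainability win described above.
        def __init__(self, processor: PaymentProcessor):
            self.processor = processor

        def complete(self, amount_cents: int) -> bool:
            return self.processor.charge(amount_cents)

    print(Checkout(StripeProcessor()).complete(499))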