I'm always interested in write-ups when folks try new attacks on self-study.
I will also admit that this part hurt my heart to read (vicarious embarrassment):
> the recruiter mentioned I needed to pay more attention to code debuggability (whatever this means - I assume that under the corpo-language, they mean that I wrote invalid code)
I completely understand why that line caused vicarious embarrassment.
Looking back, I realize my brain was(is) operating on a completely different definition of that word based on my daily constraints.
I plan to write more about this in Part Two, but at that point in time, I wasn't even aware of this alternative understanding of the term.
In telco, when a remote node crashes at a client's site, I often only have access to a heavily restricted subset of logs, and the debugging loop over email can take days just to establish "what happened".
Because of that, I write defensive, strictly encapsulated code, and I think in terms of domain-specific states and objects that can be explicitly tracked from an external PoV.
Similarly, during game jams, "debuggable and maintainable" means to me that the code is modular enough that I can completely rip out and rewrite a core mechanic in the final 3 hours just because the game design suddenly changed.
My habit of writing code optimized for remote logs and sudden architectural shifts actually became my biggest enemy under the algorithmic interview (or 45-minute LeetCode) constraint.
It makes the core algorithmic state less clear and hides algorithmic mistakes under layers of defensive "if" statements (where I would normally drop a debug log).
I am simply used to not trusting the inputs, whereas in algorithmic problems, the whole point is to exploit constraints that you need to be absolutely sure about.
So the "if" statements that usually increase "debuggability" in telco or during game jams are the exact opposite of the "debuggability" term used in algorithmic thinking.
Thanks for naming this issue so clearly - it is a very valid reality check.
Note: I haven't done a tech interview in 6 years.
I'm kind of surprised they still do leetcode-style questions on remote interviews these days. I thought those types of interviews would be 100% gamed by now.
I am quite passionate about algos, do lots of katas on Codewars for fun, and have done plenty of leetcode problems.
Then I had a technical interview where I was asked to implement a simple algo for the tris game (aka tic-tac-toe), and my mind went completely blank.
I was tired; I'm in the EU, and this was a San Francisco startup interviewing me at their lunchtime, which is very late in Italy.
And I generally don't like to be interviewed/tasked.
Of course the solution is beyond simple, but I struggled even at brute forcing it.
I can easily do these kinds of exercises (and much harder ones, obviously) for fun, but not when interviewed.
I struggled with the same thing at university. I graduated with 104/110 even though I was consistently among the most prepared, and I learned to learn, not to pass exams (plenty of stellar performers didn't remember anything a few weeks after exams).
Once I asked a professor why he had graded me 27/30 even though I had spent an hour answering everything in detail, including the hardest questions.
"Because you never appear convinced when you answer".
I get nervous, I don't like to prove my knowledge this way. I rethink constantly what I'm saying, or even how I sound.
I forget how to type braces or back ticks.
I did not have any issues when not interviewed, or in written exams, or during my research period when I published 3 papers that have been highly cited.
But I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate.
Interview me and you'll get a very wrong impression if you ask me to live-code or whiteboard.
Meanwhile, I've seen leetcode black belts spend most of their time logged into Tekken 7 on Discord, consistently creating work and providing negative value while somehow always selling their high skills.
I have found much more value in seeing personal projects and OSS contributions.
I have never asked a single one of these BS questions and never failed at hiring anyone. Not once.
In my experience, it’s the relatively basic questions that have the highest value — both because they’re what you run into programming most often, and because they’re less likely to overwhelm candidates in a high-stress setting.
The goal, at least from my point of view, isn’t to see if they can come up with the perfect algorithm, but about how they construct an algorithm, how they communicate about the decisions they’re making, how they respond to challenges about edge-cases, etc.
I’m also strongly in favour of picking out questions that are reflective of the actual codebase they’re being hired for — find something with some basic algorithmic complexity which has a relatively simple and easy to explain input and output, and use that as the problem.
In general, I think the best problems are those which any competent senior engineer could design a good solution for almost off the top of their head with little difficulty.
They don't, at least in the SRE space. I have been interviewing for 6 months without a single coding or LeetCode-style challenge. I would avoid companies that use them if given the choice, but so far I haven't even had to make that choice in an interview.
They've been gamed in the "study for the test" sense for years—a sort of human over-fitting—but managers did not mind. I've heard some insist that this is a feature, not a bug. (Depending on levels of cynicism, it's either testing for diligent workers who put effort into preparation, or selecting out non-conformists who aren't willing to put up with management bullshit.)
LLMs make it easier to cheat and give managers a push to develop new, AI-aware assessment methods, but they don't really change the underlying organizational dynamics that led to these tests in the first place.
> Find Minimum in Rotated Sorted Array
I've seen that problem in an interview before, and I thought the solution I hit upon was pretty fun (if dumb).
from bisect import bisect_left
from typing import List

class Solution:
    def findMin(self, nums: List[int]) -> int:
        class RotatedList:
            # A read-only view of nums shifted left by `rotation` positions.
            def __init__(self, rotation):
                self.rotation = rotation
            def __getitem__(self, index):
                return nums[(index + self.rotation) % len(nums)]
        class RotatedListIsSorted:
            # Pretends to be a list of booleans indexed by rotation amount:
            # "does shifting by this much make the list look sorted?"
            def __getitem__(self, index) -> bool:
                rotated = RotatedList(index)
                print(index, [rotated[i] for i in range(len(nums))])  # debug output
                return rotated[0] < rotated[len(nums) // 2]
            def __len__(self):
                return len(nums)
        # bisect_left finds the leftmost shift for which the view "looks sorted".
        rotation = bisect_left(RotatedListIsSorted(), True)
        print('rotation =>', rotation)
        return RotatedList(rotation)[0]
I think it is really interesting that you can define "list-like" things in Python using just two methods. This is kind of neat because sometimes you can redefine an entire problem as just a binary search over a list of candidate answers; here you are looking for the leftmost point where the answer becomes True. Anyway, I often bomb interviews by trying out something goofy like this, but I don't know, when it works, it is glorious!
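In case the trick isn't obvious outside this particular puzzle, here is a stripped-down, made-up example of the same idea (not from any interview): bisect_left only needs __getitem__ and __len__, so you can binary-search over the answers to a yes/no question without ever materializing the list.
from bisect import bisect_left

class FirstPowerOfTwoAtLeast:
    # Pretends to be a list of booleans: "is 2**index >= target?"
    def __init__(self, target):
        self.target = target
    def __getitem__(self, index):
        return 2 ** index >= self.target
    def __len__(self):
        return 64  # plenty of indices for any realistic target

# Leftmost index where the answer flips to True: 2**10 = 1024 >= 1000.
print(bisect_left(FirstPowerOfTwoAtLeast(1000), True))  # -> 10
Good luck on your second round!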
This is very interesting. I've been using LLMs to learn new things that way, and it has really worked. To some extent, learning with an LLM is better than taking any course, even with a tutor, because I get material prepared specifically for me, in terms of my experience, progress level, etc.
LLMs are going to change schools and universities a lot; teachers and tutors will have to find their place in the new reality, because they now have a strong competitor with infinite resources and huge knowledge, one that is patient and ready to work with every student individually, according to the student's needs, level, intelligence, etc.
Instruction-based tutoring is dead from that perspective: why should I follow someone reciting a book or an online tutorial when there is a tool that can introduce me to the subject in a better and more interesting way?
Sure, there are great teachers, inspiring people who are able to present a topic in a great way; the point is, they are a minority. Now everyone can have a great tutor for a few dollars a month (or for free, if you don't need to generate too much output too quickly).
Interesting article - but perhaps a bit light on details in some places, like:
> I generated a list of the most common interview tasks
How? I suppose they mean gathered, or searched for, not strictly generated?
Also a little light on details of the actual interview.
I'm also a little confused about the listing of "problems" - do they refer to some specific leet-code site's listing of problems?
It seems like half-way between naming an actual algorithm/problem and naming a concrete exercise.
As for:
> How is it that we do not use this "forgotten and forbidden" coding in our daily production code, even though all highly reusable, useful code is essentially an exploitation of the intersection between classical algorithmic thinking and real-world problems?
I'm not sure what to say - most of this stuff lives in library code and data structure implementations for any language in common use?
Indeed, the one saving grace of the leetcode interview is arguably that it shows whether the candidate can choose sane data structures (and algorithms) when implementing real-world code?
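To be concrete about what I mean by sane data structures, here is a contrived sketch of my own (not taken from the article; the names are made up) of the kind of choice that actually shows up in production code:
import heapq

def top_k_slowest(requests, k):
    # Reach for a heap-based helper instead of sorting everything.
    # `requests` is an iterable of (endpoint, latency_ms) tuples.
    return heapq.nlargest(k, requests, key=lambda r: r[1])

def duplicate_events(event_ids):
    # A set gives O(1) membership checks instead of rescanning a list.
    seen = set()
    for event_id in event_ids:
        if event_id in seen:
            yield event_id
        seen.add(event_id)
The algorithmic knowledge being tested is mostly "know that these structures exist and when to reach for them", which is exactly what library code already packages up.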
Your "no compiler" rule on day 3 taught you more than the LLM did. The LLM made concepts click. But the binary search vanishing under interview stress proves that understanding something and being able to produce it under pressure are totally different skills. Nobody talks about this enough in the "just use ChatGPT to learn" discourse.
I recently had a coding interview in which I was allowed to search online but not use any AI. On the first Google search, the interviewer realized that the top result is now AI-generated and said I couldn't use anything from there. So I had to click through different links and piece together what I needed from inside the pages.
It is also odd that this article appears here after someone complained about vibe coding killing the interest in algorithms.
This game is played often. People have valid complaints, then someone posts a "rebuttal" ("LLMs are not bad for $X---they are good for $X").
Anyway, he uses LLMs more in a search capacity, which is less controversial than generative AI and vibe coding.