Where does engineering go? Retreat findings and insights [pdf] (thoughtworks.com)

by danebalia 32 comments 81 points

[−] Pasanpr 61d ago

> The product management side of this equation is equally unsettled. If developers are now thinking more about what to build and why, they are doing work that used to belong to product managers

It's not clear to me why this is true. If LLMs are writing code, why aren't developers simply orchestrating the completion of more features, instead of moving up the stack to do product development work? Is there some implication that the existence of LLMs also enables developers to run user studies, evaluate business metrics, and decide on strategy?

Additionally, if PMs can use LLMs to increase velocity in their work why not focus on all the things that used to be deprioritized? Why, with the freed up time, is generating code the best outcome?

These questions likely have different answers depending on organization size, but I'm not sure I understand why orgs wouldn't just do more work in this scenario instead of blending responsibilities. It's not like there's infinite mental bandwidth just because an LLM is generating the code.

[−] roncesvalles 61d ago
Arguably the PM role only exists because SWEs don't want to do PM work, and the industry acquiesced to this because SWEs are in very short supply: if you can hire a layperson (sorry) to take a few hours of non-technical work off a SWE's plate, it's worth it.

In a (hypothetical; not quite there yet) world where SWEs are in surplus, there is no reason to have PMs.

The really eye-popping efficiency gains from LLM coding won't come from doing the coding faster but from consolidating the PM, SWE, and QA/SDET roles under the same person. Then you'll start seeing startup/indie-level productivity-per-person inside large organizations. Imagine if Google were like 50,000 Pieter Levels.

[−] manphone 61d ago
The concept of a large organization doesn’t even make sense in this model. How do you make decisions? How do you coordinate? What is Google when you have 50,000 individual silos?
[−] mbrumlow 61d ago
Decisions are less costly. When an SWE can take 4 days to do what would have cost 6 months, the calculus of making sure you're doing the right thing before executing goes away.
[−] manphone 58d ago
That has little to do with building code and a lot to do with customers and releases and operations - giant companies don’t just magically demo software to people.

There are so many layers today that can't exist if this is the way forward.

[−] wreath 61d ago
PMs do different things in different organizations.

In my last job, PMs were responsible for identifying problems that were worth solving and that aligned with the overall company vision and plans within the domain they owned, while design and engineering decided how to solve those problems and what to build. In collaboration with the PMs (and EMs), of course.

At the job before that, PMs wrote Jira tickets and nagged engineering about when tickets would be delivered. The "what problems to solve and what to build" questions came straight from the CEO/CTO.

[−] kingkongjaffa 58d ago
I'm not saying you're wrong, but as a senior PM, the engineers I work with see about 10-20% of what I actually do in a week, so in general engineers are not good judges of the utility of product management.
[−] drivebyhooting 61d ago
Because feature development speed often wasn't the bottleneck.
[−] zer00eyz 61d ago

> Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.

> Code review is being unbundled. Its four functions (mentorship, consistency, correctness, trust) each need a new home.

> If code changes faster than humans can comprehend it, do we need a new model for maintaining institutional knowledge?

The humans we have in these roles today are going to suffer. The problem starts at hiring, because we rewarded memorizing algorithms and solving inane brain teasers rather than looking for people with skills in systems understanding, code reading (a distinct skill), and the ability to do serious review (rather than bikeshedding à la tabs vs. spaces).

LLMs are just moving us from hammers and handsaws to battery-powered tools. For decades the above hiring practices asked "how fast can you pound in nails", not "are you good at framing a house, or building a chair".

And we're still talking about LLMs in the abstract. Are you having a bad time because you're cutting and pasting from a browser into your Vim instance? Are you having a bad time because you want sub-token performance from the tool? Is your domain a gray box that any new engineer needs to learn (and LLMs are trained; they don't learn)?

Your model, your language, your domain, the size of the task you're asking to be delivered, and the complexity of your current code base are as big a part of the conversation. Put simply, what you code in and how you integrate LLMs really matters to the outcome. And without this context we're going to waste a lot of time talking past each other.

Lastly, we have 50 years of goodwill built up with end users that our systems are reliable and repeatable. LLMs are NOT that; even I have moments where it's a stumbling block, and I know better. It's one of a number of problems that we're going to face in the coming decade. This issue, alongside security, is going to erode trust in our domain.

I'm looking forward to moving past the hype, hope and hate, and getting down to the brass tacks of engineering. Because there is a lot of good to be had, if we can just manage an adult conversation!

[−] mlinhares 61d ago
Same. I'm seeing people have a lot of difficulty working with agents and writing prompts that let the agent go end-to-end on the work. They just can't write prose and explain a problem in a way that lets the agent go off, work, and come back with a solution; they can only do the "little change with Claude Code" workflow, and that just makes you less productive.

I don't think the industry is ready or has the manpower to deliver on all the promises they're making and a lot of businesses and people will suffer because of that.

[−] chickensong 61d ago
People just need to lower their expectations a bit. There's a large space between "prompting for end-to-end solution" and "little change".
[−] chickensong 61d ago
I agree with the spirit of what you're saying, but...

> we have 50 years of good will built up with end users that the systems are reliable and repeatable

There's good, yes, but we've also raced to build dystopian bullshit and normalized identity theft because most software is garbage. There might not be as much goodwill as you think. Software eats the world, and many people simply feel helpless. The erosion of trust you're predicting has already happened, or never existed, IMHO.

LLMs may not one-shot reliable and repeatable systems, but they're a powerful tool that I hope will end up improving systems overall, for the reasons you've mentioned, among others.

[−] svilen_dobrev 61d ago

> produced something more useful: a map of the fault lines where current practices are breaking and new ones are forming.

Here's a story. A long time ago, I wrote a (software) accounting system from first principles: nomenclatures, accounts, double-entry, transactions, balance (= current cached status), with operations and reports on top of these. Five tables (+1 for access control later). Very flexible and reconfigurable into whatever one imagines. But anyway.

We deployed it at several places. The biggest one, a retailer with 50+ sales points across the whole region, was the most troublesome. After a month+ of back-and-forth it dawned on us that they had a very well-working paper system of accounts/documents/data/value flows, highly optimized for humans and the reality it lived in (paper, remote places, delays, etc.). Humans forget, make mistakes, misplace things; paper rots over time; distances make things out of sync (yesterday's invoices from village X will come tomorrow, maybe), etc. So their document flow, and even people's roles, were aligned with that system, duplicating some things and completely avoiding others.

The new software had no such notions. There was no such thing as forgetting, misplacing, or being out of balance. And while the temporal stuff was fine, the document flow, even if it consisted of the same dot-matrix-perfect documents, was different from what they were used to. So it took them, and us, 3 months to retrain the personnel to unlearn the old system and start actually using the new one properly, enjoying the ride instead of fighting it.

Back to the topic: I guess the old system of software engineering, built over the last 50+ years, has to be rearranged now. Not everything, but quite a lot. Some things can probably wait for tomorrow, as the paper notes, but some, like roles and what they mean, and the cognitive/understanding chasm, are for yesterday.

Edit: after reading the whole paper, I think some things can be borrowed from hardware-design (chips etc.) flows and processes. I see this analogy: hardware's target environment (the actual physical world, e.g. silicon) is also non-deterministic, just mostly so. Things like requirements engineering, design-for-test, all the enveloping (heat, power, etc.), and whatever else may come in handy (I am not a hardware dev; I've only seen these from the side, e.g. from a Verilog compiler).

[−] johsole 61d ago
This is a great pdf and well worth the read. I've had a lot of the same questions in my head and am glad to see that others share these concerns.
[−] voxleone 61d ago

>>Where does engineering go?

Up the abstraction ladder; we conceive axioms and constraints; we define actors and objects; we direct rules, flows, sequences and say when and how each one of them lives and dies.

May you live in interesting times (some say this is a curse).

[−] hackncheese 61d ago
Found myself both resonating with a lot of points, and being challenged to consider other questions and possible solutions. Super insightful
[−] kseniamorph 61d ago
On the specification approach: personally, I've found it useful in some cases to write preceding block comments for functions. You can describe the desired behaviour there, input/output types, etc. You can even make a skeleton from comment blocks and run one-shot generation. But this approach is especially useful in iterative development and maintenance.
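A minimal sketch of that comment-block-as-spec workflow (the function and its spec are hypothetical examples, not from the comment): the doc comment states the desired behaviour, types, and edge cases up front, and the body is what a one-shot generation would be asked to fill in.

```rust
/// Collapse every run of whitespace in `input` into a single space and
/// trim both ends.
///
/// Input: any &str. Output: an owned String.
/// Edge cases: empty or all-whitespace input yields an empty String.
fn normalize_whitespace(input: &str) -> String {
    // split_whitespace() already skips leading/trailing/repeated whitespace,
    // so rejoining with a single space satisfies the spec above.
    input.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn main() {
    assert_eq!(normalize_whitespace("  a \t b\n\nc "), "a b c");
    assert_eq!(normalize_whitespace("   "), "");
    println!("ok");
}
```

The point is that the spec survives in the source: during later iterations you regenerate or hand-edit the body against the same doc comment.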
[−] NeutralForest 61d ago
I thought it was generally interesting but it needs to materialize into processes and tools people can use.
[−] bmurphy1976 61d ago
That's kind of the point. These things don't just happen; people start talking about them at a high level (this doc, conversations like this) and then dig in and solve the problems over time.
[−] NeutralForest 60d ago
I know, but it was still a bit too high-level for my taste, though I appreciate the effort!
[−] echelon 61d ago

> practitioners are exploring how to make incorrect code unrepresentable.

I'll say it again and again and again: Rust is the best language for ML right now.

You get super strict, low-defect code. If it compiles, that's already in a way a strong guarantee.

Rust just needs to grow new annotations and guarantees ("nopanic", "nomalloc", etc.), and it would be perfect. Well, that and a macro-stripped mode to compile faster. I'd happily swap AOT Serde compilation (as amazing as Serde is) for permanent codegen that gets checked in and compiles fast.
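As a concrete illustration of "making incorrect code unrepresentable" (my own hypothetical example, not from the thread or the paper), Rust's newtype pattern pushes validation to a single constructor, so an out-of-range value can never exist downstream:

```rust
/// A percentage guaranteed to be in 0..=100 by construction.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Percent(u8);

impl Percent {
    /// The sole way to obtain a `Percent`; out-of-range input is
    /// rejected here instead of being re-checked at every use site.
    pub fn new(value: u8) -> Option<Percent> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }

    pub fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert_eq!(Percent::new(42).map(Percent::get), Some(42));
    assert_eq!(Percent::new(150), None);
    println!("ok");
}
```

Every function that accepts a `Percent` gets the range guarantee from the type checker for free; if it compiles, the invariant holds.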

[−] bmurphy1976 61d ago
@dang this is a very interesting and relevant doc. I think it needs another chance at making it to the front page.

This is a fairly easy-to-read doc discussing some of the challenges of using AI tooling in a forward-thinking and disciplined way. Coming from Thoughtworks, it also carries a bit of gravitas and legitimacy.

There's good stuff in here. It would be a shame for the larger HN community to miss out on this conversation.

[−] kingkongjaffa 61d ago

> Coming from Thoughtworks it also gives a bit of gravitas

Why? I thought the opposite. Consultancies, of which Thoughtworks is one, publish thought leadership as marketing material.

[−] Rapzid 61d ago

> "Where does the rigor go?"

> Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.

These are generic "thoughts" you can get from any agency pushing AI SDLC. The pages I read through left me wondering if there was even a real retreat.

[−] lelandbatey 61d ago
You're right that this isn't some groundbreaking revelation. If you're using AI enough to be feeling it, you're feeling/seeing what they're talking about. The purpose of a paper/retreat like this is to get it all together and written down, then to disseminate it to the wider world. I think the paper does a good job of collecting info that isn't wrong and has enough substance to help guide folks making decisions.
[−] Rapzid 61d ago
It's drivel.
[−] superfrank 61d ago
Mainly because Martin Fowler is part of their C-suite.

I agree that it's marketing material, but that doesn't automatically make it garbage. I've been reading their quarterly Thoughtworks Radar for a while now, and it's clearly put together by people who understand the industry.

[−] bmurphy1976 61d ago
Sigh. Nobody is ever going to be happy. Would saying it came from a rando Reddit user be better?

They at least put in the effort to hold the retreat and put this together. Would other consultancies (whom we know little about) have done the same?

[−] dang 61d ago
Ok, let's give it a try. (Btw, @dang doesn't work reliably - for that you need to email hn@ycombinator.com. I only saw this by accident.)