What we learned building 100 API integrations with OpenCode (nango.dev)

by rguldener 21 comments 99 points

[−] neya 42d ago
This is the wrong way to do it. As software architects, you need to learn the appropriate use of algorithms vs. AI. Using AI to build everything is not just a waste of tokens, it is also an exercise in futility.

Here is how I solved this problem:

1. There is already a knowledge base of almost all APIs (the ones that are useful to the average Joe, anyway) in either Swagger.json or Postman.json format. Which format you prefer is totally up to you.

2. Write a generator (I use Elixir) to infer which of the formats from step 1 is used and generate your API modules with a code generator. There are plenty, or you can even write your own using a simple File.write!

3. In the rare occurrence that you come across a shitty API with only scattered documentation across outdated static pages online, only then use an LLM + browser to automate writing it into one of the formats listed in step 1 (Swagger.json or Postman.json).
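Step 2 above can be sketched in a few lines. This is a minimal illustration (in Python rather than the commenter's Elixir), assuming an OpenAPI 3 style spec with `servers` and `operationId` fields — not a production generator:

```python
import json
import textwrap

def generate_client(spec_path: str) -> str:
    """Emit a naive Python client module from an OpenAPI/Swagger JSON file."""
    with open(spec_path) as f:
        spec = json.load(f)
    base = spec.get("servers", [{"url": ""}])[0]["url"]
    out = ["import requests", "", f'BASE_URL = "{base}"', ""]
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            # fall back to a name derived from the path if operationId is missing
            name = op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}"
            out.append(textwrap.dedent(f'''
                def {name}(**params):
                    """{op.get("summary", "")}"""
                    return requests.{method}(BASE_URL + "{path}", params=params)
            ''').strip())
            out.append("")
    return "\n".join(out)
```

A real generator would also handle path parameters, request bodies, and auth, but the shape is the same: spec in, module out, no model call in the loop.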

Throwing an LLM at everything is just inefficient lazy work.

[−] Falimonda 42d ago
Define "it" in the context of "doing it wrong".

The post provides a lot of good food for thought based on experience, which is exactly what the title conveys.

[−] gchamonlive 42d ago

> There are two obvious approaches: start with lots of guardrails, or start with very few and learn what the models actually do.

> We chose the second because we didn’t want to overfit our assumptions.

> Some of it went better than expected.

> But they also broke in very unexpected ways, sometimes spectacularly.

You clearly missed the whole point of the article, which is to experiment with agents and explore the limits of having them run wild.

Efficient use of tokens and which tasks to delegate is secondary to the experiment. Optimizing these is in any case premature if you don't understand the limits of the models.

[−] neya 42d ago

> which is to experiment with agents

I think you completely missed the point - they built a product purely using agents and deployed it to production for others to use. Read what the product actually does first.

[−] gchamonlive 42d ago
Why shouldn't they ship it to production if the experiment was a success? You say the only way to code is to "learn to appropriate the correct usage of algorithms and AI" which for you is to code a generator and only use "dumb" generators to produce code, which is fine, but they just showed that for 20 bucks and a few minutes you can get very far, so their evidence is just stronger than yours.
[−] neya 42d ago

> their evidence is just stronger than yours.

What evidence? There is 0 evidence. It's deployed to production, but that doesn't mean it works fine or is free of bugs - which is exactly my point and why you use algorithms for these types of things. They're testable, repeatable and scalable.

With LLM slop it's just that - slop.

[−] gchamonlive 42d ago
Have you seen the code to write it off as slop?
[−] cl0ckt0wer 42d ago
There are lots of APIs with poor or nonexistent documentation. I'm talking about internal systems where one programmer that kinda knew what he was doing built a proof of concept, and now it's a core business requirement.
[−] maxdo 42d ago
I’m sorry, but this is a caveman mentality. How about tests, payloads, integrating into existing systems, logs, etc.? LLMs are perfect for that; you can point your skill at the docs to learn from them, which will save tokens.

I’m not going to trust a scripted codegen without any logic for such a thing as API integration.

[−] evilelectron 42d ago
This is the way.

I am doing something similar: I have a parser that looks for changes in documentation, matches them with the GraphQL schema, and generates code using Apollo. In a nutshell, it is a code generator written using Claude to generate more code; on failure it goes back to Claude to fix the generator and asks a human for review.
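The "looks for changes in documentation" step can be sketched as a simple hash comparison — a hypothetical helper, not the commenter's actual parser:

```python
import hashlib

def docs_changed(page_text: str, known_hashes: dict, url: str) -> bool:
    """Return True (and record the new hash) if a documentation page changed
    since the last run -- the trigger for regenerating client code."""
    digest = hashlib.sha256(page_text.encode()).hexdigest()
    if known_hashes.get(url) == digest:
        return False  # unchanged: skip regeneration
    known_hashes[url] = digest
    return True
```

Only pages that actually changed need to go back through the generator (and, on failure, through the model), which keeps token usage proportional to churn rather than to the size of the API surface.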

[−] groby_b 46d ago
Pardon me if I misread, but wouldn't that be better served by a ready-made library (with, if you must AI, some futzing to account for call signature)?

What is the value add of having the AI rebuild code over and over, individually for each project using it?

[−] bilekas 46d ago
I don't know, maybe I'm misunderstanding too, but they basically just asked an agent to interface with an API. It seems the agent will create new code each time.

I hope this isn't their business model.

[−] rguldener 46d ago
Author here, the build happens together with building your app. Once built, the code executes deterministically at runtime.

The news here is the AI reading the API docs, assembling requests, and iterating on them until it works as expected.

This sounds simple, but it is time-consuming and error-prone for humans to do.
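That build-then-freeze loop can be sketched roughly like this — hypothetical `ask_llm` and `test_request` callables stand in for the model call and the live API check; this is not Nango's actual code:

```python
def build_integration(docs: str, ask_llm, test_request, max_tries: int = 5) -> str:
    """Iterate on LLM-generated request code until it passes a live test,
    then freeze it. ask_llm(prompt) returns code; test_request(code)
    returns an (ok, error) tuple from running the code against the API."""
    code = ask_llm(f"Write a client call for these API docs:\n{docs}")
    for _ in range(max_tries):  # bounded retries
        ok, error = test_request(code)
        if ok:
            return code  # frozen: runs deterministically from here on
        code = ask_llm(f"The call failed with {error!r}. Fix this code:\n{code}")
    raise RuntimeError("could not converge on a working integration")
```

The key property being claimed is that the model only runs at build time; whatever code survives the loop is what executes in production.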

[−] skybrian 42d ago
I think the question is why integrating with, say, Google Calendar is different for each customer? How much is custom versus potentially reusable code?
[−] j16sdiz 42d ago
In my experience, most "SDKs" we have today are just thin wrappers around HTTP calls, generated from an OpenAPI/Swagger spec.

It takes lots of reading and testing before integrating one into your project.

[−] mellosouls 46d ago
Nango claims to be fully open source but the documentation seems to imply the self-hosted version is a small subset:

https://nango.dev/docs/guides/platform/free-self-hosting/con...

Ofc that may well be my misreading but it seems important in the context of the claim and the analysis using OpenCode.

Perhaps they could clarify and/or revisit the docs.

[−] yojo 42d ago
The TL;DR does not seem to match the rest of the article.

They claim the agents reliably generated a week’s worth of dev work for $20 in tokens, then go on to list all the failure modes and debugging they had to do to get it to work, and conclude with “Agents are not ready to autonomously ship every integration end-to-end.”

Generally a good write up that matches my experience (experts can make systems that can guide agents to do useful work, with review), but the first section is pretty misleading.

[−] cpursley 42d ago
If you're using Elixir (or don't mind running a separate Elixir service), we've built what is effectively a clone of the OAuth part of Nango (formerly Pizzly). Drop it into any Elixir project and get full OAuth management out of the box; it's compatible with all of the Nango provider strategies:

https://github.com/agoodway/tango

[−] epolanski 46d ago
[flagged]
[−] ikbear 46d ago
[flagged]
[−] bilekas 46d ago
What are you talking about bouncing emails for?
[−] flexagoon 46d ago
They are promoting their own service
[−] Falimonda 42d ago
A lot of these smell like a skill issue with the model. So many are complete non-issues when using Claude Opus 4.5+.

The idea of assigning a code-owner agent per directory is really interesting. A2A (read: message passing and self-updating AGENTS.md files) might really shine there in some way.