Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas (getspine.ai)

by a24venka 69 comments 109 points

[−] TheTaytay 64d ago
I think this is really neat. You should probably take it as a compliment that the biggest criticisms so far are about the website landing page. ;)

I like canvases in general, and I especially like them for mentally organizing and referring to this sort of broad work. (Honestly, I think zoomable canvases would make a better window manager in general, but I digress)

One small piece of friction: My default mouse-based ways of dragging the canvas around (that work in most canvases like Figma) aren't working. I saw that you had a tutorial, and I have learned to hold space now, but I prefer the "hold middle mouse button to drag my canvas view around".

I've got a couple of research tasks running now, and my current open questions as a very new user are:

1) How easy will it be to store the outputs in a GitHub repository?

2) How easy will it be to refer back to this later?

3) Can I build upon it manually or automatically?

4) Can I (securely) share it with someone else for them to see and build upon?

5) Can I do something "locally" with it? Not necessarily the model, but my preferred interface for LLMs at this point is Claude Code. Could I have a Claude Code instance running in one of these boxes somehow?

6) What if I want to do private stuff with it and don't like the traffic going through Spine's servers? Could I pay them for the interface, but bring my own keys? (Related: can I self-host somehow?)

7) When this is done, each artifact it found (screenshot, webpage, etc.) is going to be helpful. The data-hoarder in me wants to make sure I can search these later. Heck, if I could do that, this would become my preferred "web browser". (But again, I digress.)

[−] a24venka 64d ago
Really appreciate the detailed feedback and questions! And yes, we'll take the website criticism as a compliment :)

Good callout on the canvas navigation, we'll look into middle mouse button support.

To answer your questions:

1) GitHub integration is on our roadmap. Right now you can export outputs manually, but we want to make this seamless.

2) All your canvases are saved and you can search them by name in your dashboard. We're also working on a dedicated section for deliverables across canvases.

3) Yes to both! You can manually add or edit blocks, or kick off new agent runs that build on existing work.

4) You can currently only share public links to your canvas with others (and you can make it private again at any point). We're testing a teams feature that lets you share canvases securely with members of your team. Beyond that, roles and email-based sharing controls are on our roadmap.

5) Claude Code in a block is a really interesting idea. We don't support that today, but we're thinking about computer-use and coding workflows.

6) BYOK (bring your own keys) is something we've heard interest in and are considering. Self-hosting isn't available right now, though we do support private deployments for enterprise customers if that's ever relevant.

7) Love the "preferred web browser" framing. Right now you can search canvases, but searchable artifacts across canvases is definitely where we want to head.

Thanks for giving it a real spin, this kind of feedback is incredibly valuable.

[−] swyx 64d ago

> And yes, we'll take the website criticism as a compliment :)

ugh. guys. come on. stop celebrating at the 1-yard line. people are telling you they didn't even look at the product because your landing page was so bad. you wasted your Launch HN linking directly to it, ofc that's the first thing people are going to give feedback on. fix it right now, you still have time.

[−] johnyzee 64d ago
Calling it a 'canvas' makes me think that this tool is about AI agents doing some kind of collaborative drawing. Looking at the vid though, it seems more like an environment for visually organizing and managing agentic work (which seems very cool, and quite a bit more than just a canvas).
[−] maliker 64d ago
It might just be me, but this interface is the first time I felt the desire to interact with long-running agents, even though I use chat interfaces all day long. Maybe it was the demo video on the landing page, which was compelling with its examples. Maybe it was the feeling that I could see what was going on because it was on a canvas. Nicely done!

Off to keep iterating on the prototype app I started...

[−] orky56 64d ago
Congrats on the launch. A few pieces of feedback that are similar in nature to what has already been shared, but unique in terms of solutions.

1) The chat interface as shown in the video is a prime starting point to capture intent, but it anchors viewers to what Spine is all about. Try a show-tell-show approach where you demonstrate (ideally above the fold) a compelling output, the credits used and agents leveraged, and THEN the simple prompt used to get it all started. Let's be real: the chat interface is not the a-ha moment. It's what you get out of it, the orchestration that happens behind the scenes, and finally the familiar chat interface that kicks it all off.

2) Who is the target persona for this? The benchmark accolade is great for the technical audience, but they may not care about doing everything in the browser. The non-technical audience may like the browser, but prefers examples of other companies and use cases that make the technical side more accessible. The board concept helps abstract what the agents produce, but the missing piece is memorializing the decision-making, where the human in the loop needs something to grasp & share.

[−] jcims 64d ago
Got some great results for a rather broad domain in the first pass.

HN is going to tend towards negative/constructive feedback; for me, the only issue is that the mouse interaction is a bit wonky. Took me a minute to realize that I could select different mouse modes. With that said, I'd echo TheTaytay's comment about mouse interaction. Also, generating docx (which was the output of my agents; I haven't even explored explicitly asking for something else) creates a bit of a barrier to using the content. Markdown or even HTML would be helpful.

But these are just minor nits, love the concept and great execution.

[−] BloondAndDoom 64d ago
I didn’t read the post; I checked out the website, just like 99% of people will do.

Simple advice: if you are selling a product whose selling point is being visual, show it on your website. Not in a YouTube video, but with actual screenshots or a tightly cut 10-second video/GIF.

[−] varenc 64d ago
Quick feedback about your demo video: I generally quite liked it and it really helped me understand Swarm. Two thoughts:

- you lament the chat interface, but for the first 1m30s of the video all I see is the chat interface

- your research task is LLM/AI related. There were moments where I found this slightly confusing, since I wasn't sure if I was reading about Swarm itself or just its research output. I'd recommend something non-LLM related and more generally applicable for the demo video.

Very cool!

[−] airstrike 64d ago
Congrats on the launch! Meta comment, but I just ain't reading all of the above. You need to be able to explain this in about 20% the number of words or you'll lose people, especially VCs.

My advice is to start with "Spine Swarm solves _____" then how, then why you're different. 3 short paragraphs, preferably 1-2 sentences each.