Nanocode: The best Claude Code that $200 can buy in pure JAX on TPUs (github.com)

by desideratum 26 comments 219 points


[−] wwfn 40d ago
Tangential (but topical in that "The threat is comfortable drift toward not understanding what you're doing" is also on the front page):

Is the generated python code in the example wrong?

The prompt

> Develop a Python function that removes any falsey values from a list. Return the modified list without creating a new one.

Is answered with a list comprehension, which creates a new list and leaves the original unmodified (never mind that the *args input arrives as a tuple, which can't be modified in place anyway).

   def remove_falsey_values(*args): return [val for val in args if val]
Whereas I'd expect something like

    def remove_falsey_values(l):
        for i in reversed(range(len(l))):
            if not l[i]:
                l.pop(i)
        # returned list is the same object as input l
        return l

    a = [1, 0, False, 'foo']
    x = remove_falsey_values(a)
    x[0] = 2
    print(a) # [2,'foo']
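To make the objection concrete, here is a quick check (illustrative, using the generated function as quoted) that the comprehension version hands back a brand-new list and leaves the original untouched:

```python
def remove_falsey_values(*args):
    # the generated version: builds and returns a new list
    return [val for val in args if val]

a = [1, 0, False, 'foo']
b = remove_falsey_values(*a)

print(b)            # [1, 'foo'] -- a new list
print(a)            # [1, 0, False, 'foo'] -- original unchanged
assert b is not a   # not the same object, so not "the modified list"
```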
[−] hecanjog 40d ago
It doesn't fit the requirement to modify the list in place, but the prompt itself contradicts the requirements by asking explicitly for the implementation to use *args and a list comprehension.
[−] wwfn 40d ago
Ahh, I didn't see the full original prompt -- it's overflowing into a horizontal scroll for me. I thought it was the "critique loop" that injected the *args requirement. I guess garbage in, garbage out. Still an unfortunate example to use.
[−] __s 40d ago

    def remove_falsey_values(l):
        l[:] = (x for x in l if x)
[−] desideratum 40d ago
Oh I wouldn't be surprised. This is a sample from one of the OSS code datasets I'd used, which are all generated synthetically using LLMs. Data is indeed the moat.
[−] nusl 40d ago
Why would you modify the original list and return it, as in the second example? Honestly, the first is better.
[−] highphive 40d ago
The question isn't really what's better practice, the question is whether the code follows the prompt. The first example does not.
[−] huflungdung 40d ago
[dead]
[−] semiinfinitely 40d ago
Your second function is the type of bad code you get from people trying to program Python like it's C.
[−] ktm5j 40d ago
Is there a pythonic way to satisfy the prompt? I.e., without making a new list?
[−] wwfn 40d ago
Absolutely! And the list.pop version is multiple orders of magnitude slower. But I took the prompt to be asking for in-place modification of the existing list. Comprehension does not do that.
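For reference, a sketch of the slice-assignment approach shown elsewhere in the thread, which satisfies the in-place requirement without the quadratic cost of repeated `list.pop` calls (function name is illustrative):

```python
def remove_falsey_inplace(l):
    # Slice assignment replaces the list's *contents* rather than
    # rebinding the name, so every existing reference sees the change.
    l[:] = [x for x in l if x]
    return l

a = [1, 0, False, 'foo']
x = remove_falsey_inplace(a)
assert x is a       # same object, not a copy
print(a)            # [1, 'foo']
```

Each `pop(i)` shifts all later elements left, so the pop loop is O(n^2) in the worst case; the slice assignment does one O(n) pass and one O(n) copy-back.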
[−] bdbdbdb 40d ago
Dumb question - and I'm not trying diminish the achievement here, I just genuinely don't understand:

Why would people want to spend $200 to train a coding model when there are free coding models?

[−] jaboostin 40d ago
As someone with zero ML experience, this was a super interesting and digestible read!
[−] vova_hn2 40d ago

> This is a library showing you how to train your own Claude Code end-to-end.

What does it even mean?

Claude Code is a so called "harness" - a thing that builds a context for LLMs, calls LLMs, executes tool calls etc. It uses various Anthropic models under the hood.

It can also use other models AFAIK.

It cannot be "trained".

Sorry if this comment sounds nitpicky, I'm just annoyed by the imprecise use of terminology.

[−] redman25 40d ago
Not to be confused with nanocoder, the agentic coding harness.

https://github.com/Nano-Collective/nanocoder

[−] wg0 40d ago
Does this really work? Is this how Anthropic works?

Can any practitioners elaborate?

[−] tatrions 39d ago
[flagged]
[−] meidad_g 40d ago
[flagged]
[−] tatrions 40d ago
[flagged]
[−] LeonTing1010 40d ago
[flagged]