I've chatted a bit with the author, but not actually tried the language. It looks very interesting, and a clear improvement. I'm not particularly quiet about not liking Go[1].
I do think there may be a limit to how far it can be improved, though. For example, typed nil means that a variable of an interface type (say, one coming from pure Go code) should enter Lisette as an Option.
> Basically, why try to make Go more like Rust when Rust is right there?
Go gives you access to a compute- and memory-efficient concurrent GC that has few or no equivalents elsewhere. It's a great platform for problem domains where GC is truly essential (fiddling with spaghetti-like reference graphs), even though you're giving up the enormous C-FFI ecosystem (unless you use Cgo, which is not really Go in a sense) due to the incompatibilities introduced by Go's weird user-mode stackful fibers approach.
> Basically, why try to make Go more like Rust when Rust is right there?
The average developer moves a lot faster in a GC language. I recently tried building a chatbot in both Rust and Python, and even with some experience in Rust I was much faster in Python.
No doubt a chatbot would be built faster in a less strict language. It wasn't until I started working on larger Python codebases (written by good programmers) that I went "oh no, now I see how this is not an appropriate language".
Similar to how even smaller problems are better suited for just writing a bash script.
When you can hold the whole program basically in your head, you don't need the guardrails that prevent problems. Similar to how it's easy to keep track of object ownership with pointers in a small and simple C program. There's no fixed size after which you can no longer say "there are no dangling pointers in this C program", but wherever that threshold lies, it's probably smaller than the size at which Python becomes a problem.
My experience writing TUIs in Go and Rust has been much better in Rust. Though to be fair, the Go TUI libraries may have improved a lot by now, since my Go TUI experience predates my playing with Rust's ratatui.
I've also found that traversing a third-party codebase in Python is extremely frustrating and requires lots of manual work (with PyCharm) whereas with Rust, it's just 'Go to definition/implementation' every time from the IDE (RustRover). The strong typing is a huge plus when trying to understand code you didn't write (and I'm not talking LLM-generated).
Only in the old "move fast and break things" sense. RAII augmented with modern borrow checking is not really any syntactically heavier than GC, and the underlying semantics of memory allocations and lifecycles is something that you need to be aware of for good design. There are some exceptions (problems that must be modeled with general reference graphs, where the "lifecycle" becomes indeterminate and GC is thus essential) but they'll be quite clear anyway.
> Only in the old "move fast and break things" sense
No, definitely not only in that sense. GC is a boon to productivity no matter how you slice it, for projects of all sizes.
I think the idea that this is not the case perhaps stems from the fact that Rust specifically has a better type system than Java specifically, so that becomes the default comparison. But not every GC language is Java. They don't all have lax type systems where you have to tiptoe around nulls. Many are quite strict and are definitely not "move fast and break things" types of languages.
> Go was not satisfied with one billion dollar mistake, so they decided to have two flavors of NULL
Thanks for raising this kind of thing in such a comprehensible way.
Now what I don't understand is that TypeScript, even if it was something to make JavaScript more bearable, didn't fix this! TS is even worse in this regard. And yet no one seems to care in the NodeJS ecosystem.
TypeScript tried to accurately model (and expose to language services) the actual behavior of JS with regards to null/undefined. In its early days, TypeScript got a lot of reflexive grief for attempting to make JS not JS. Had the TS team attempted to pave over null/undefined rather than modeling it with the best fidelity they could at the time, I think these criticisms would have been more on the mark.
ReasonML / Melange / Rescript take a holistic approach to this: the issue with stapling an option or result type onto TypeScript is that your colleagues and LLMs won't use it (ask me how I know).
Your readme would really benefit from code snippets illustrating the library. The context it currently contains is valuable but it’s more what I’d expect at the bottom of the readme as something more like historical context for why you wrote it.
Yup, in my TODO list (I've only recently published this package). For now you can just check the tests, or a SO answer I wrote a while ago (before I published the idea as an npm package): https://stackoverflow.com/a/78937127/544947
If you do a type check against None and there is some value inside (so it is Some, not None), it is IMPOSSIBLE that the .value you extract underneath is gone. That's a race condition you might otherwise run into due to the nature of TS/JS, but by boxing the value in an immutable Option type, you're protected.
Also, this prevents people who didn't turn strictNullChecks on from running into NullReferenceException (or UndefinedRefsException, or whatever it's called in this ecosystem).
Golang does have a lot of weird flaws/gotchas, but as a language target for a compiler (transpiler) it's actually pretty great!
Syntax is simple and small without too many weird/confusing features, it's cross platform, has a great runtime and GC out of the box, "errors as values" so you can build whatever kind of error mechanism you want on top, green threading, speedy AOT compiler. Footguns that apply when writing Go don't apply so much when just using it as a compile target.
I've been writing a tiny toy functional language targeting Go and it's been really fun.
Go's defer is generally good, but it interacts weirdly with error handling (huge wart on Go language design) and has weird scoping rules (function scoped instead of scope scoped).
Does Go actually have an async story? I know that question risks starting a semantic debate, so let me be more specific.
Go allows creating lightweight threads to the point where it's a good pattern to just spin off goroutines left and right to your heart's content. That's more of a concurrency primitive than async. Sure, you combine it with a channel, and you've created an async future.
The explicit passing of contexts is interesting. I initially thought it would be awkward, but it works well in practice. Except of course when you need to call a blocking API that doesn't take context.
And in environments where you can run a multitasking runtime, that's pretty cool. Rust's async is more ambitious, but has its drawbacks.
Go's concurrency story (I wouldn't call it an async story) is way more yolo, as is the rest of the Go language. And in my experience that Go yolo tends to blow up in more hilarious ways once the system is complex enough.
To be fair, Go’s async story only works because there’s a prologue compiled into every single function that says “before I execute this function, should another goroutine run instead?” and you pay that cost on every function call. (Granted, that prologue is also used for other features like GC checks and stack size guards, but the point still stands.) Languages that aspire to having zero-cost abstractions can’t make that kind of decision, and so you get function coloring.
I'm not sure this is 100% correct. I haven't researched it, but why would they perform such a check at runtime if it is 1) material and 2) can be done at compile time? However, even if it is, Go is only trying to be medium fast / efficient, in the same realm as its garbage-collected peers (Java and C#).
If you want to look at Rust peer languages though, I do think the direction the Zig team is heading with 0.16 looks like a good direction to me.
> why would they perform such a check at runtime if it is 1) material and 2) can be done at compile time
It can’t be done at compile time because it’s a scheduler. Goroutines are scheduled in userland, they map M:N to “real” threads, so something has to be able to say “this thread needs to switch to a different goroutine”.
There’s two ways of doing this:
- Signal-based preemption: Set an alarm (which requires a syscall) that will interrupt the thread after a timeout, transferring control to the goroutine scheduler
- Insert a check to see if a re-schedule needs to happen, in certain choice parts of the compiled code (ie. At function call entry points.)
Golang used to only do the second one (and you can go back to this behavior with GODEBUG=asyncpreemptoff=1); it's why there was a well-known issue that if you entered an infinite loop in a goroutine and never called any functions, other goroutines would be starved. They fixed that by implementing signal-based preemption too, but it's done on top of the second approach.
Granted, the prologue needs to happen anyway, because go needs to check if the stack needs to grow, on every function call. So there’s basically a “hook” installed into this prologue that is a single branch, saying “if the scheduler needs to switch, jump there now”, and it basically works sort of like an atomic bool the scheduler writes to when it needs to re-schedule a goroutine… Setting it to true causes that function to jump to the scheduler.
Go has done a lot of work to make all of this fast, and you’re right that it only aspires to be a “medium-fast” language, and things like mandatory GC make these sort of prologues round to zero in the scheme of things. But it’s something other languages are fully within their rights to avoid, is my point (and it sounds like you agree.)
It sounds like you know about this / have researched it. Are you saying that any Go function, even func add(x, y int) int { return x + y }, is going to have such overhead in all situations? Why wouldn't Go just inline this when it can? It seems like such an obvious optimization.
If go chooses to inline a function in general, then it doesn’t need to add the prologue to the inlined code, no. The prologue applies to all functions that remain after the inlining is done.
There’s also functions that can be marked as “nosplit” that skip the prologue as well.
But otherwise, it has to be in every function because you might be 1 byte away from the top of go’s (small) stack size, then you call that simple add function, and if the prologue isn’t run the stack will overflow. Go has tiny stacks by default that grow if they need to, with this prologue functioning as the “do I need to split/grow the stack?” check, so it needs to be every function that does it. The scheduler hook is just a single branch that’s part of the prologue, so it’s not that much more expensive if you’re doing the prologue anyway.
Both Borgo and now Lisette seem to act as though (T, error) returns are equivalent to a Result sum type, but this is not semantically valid in all cases. The io.Reader interface's Read method, for example, specifies not only that (n!=0, io.EOF) is a valid return pattern, but moreover that it is not even an error condition, just a terminal condition. If you treat the two return values as mutually exclusive, you either can't see that you're supposed to stop reading, or you can't see that some number of valid bytes were placed into the buffer. This is probably well known enough to be handled specifically, but other libraries have been known to make creative use of the non-exclusivity in multiple return values too.
To be fair, I feel like the language is widely criticized for this particular choice and it's not a pattern you tend to see with newer APIs.
It's a really valid FFI concern though! And I feel like superset languages like this live or die on their ability to be integrated smoothly side-by-side with the core language (F#, Scala, Kotlin, Typescript, Rescript)
Really nice work on this. The error messages alone show a lot of care, the "help" hints feel genuinely useful, not just compiler noise.
I'm curious about the compiled Go output though. The Result desugaring gets pretty verbose, which is totally fine for generated code, but when something breaks at runtime you're probably reading Go, not Lisette. Does the LSP handle mapping errors back to source positions?
Also wondering about calling Lisette from existing Go code (not just the other direction). That feels like the hard part for adoption in a mixed codebase.
Is the goal here to eventually be production-ready or is it more of a language design exploration? Either way it's a cool project.
Go has an awesome runtime, but at the same time a very limited type system: it's missing features like exhaustive pattern matching and ADTs, and it allows uninitialized (zero-valued) fields in structs.
I'd always liked the Go runtime but the language is pretty clunky imo and I don't think they will ever improve it (because they don't think anything is wrong with it). However, you have to really dislike the language to use a transpiler.
Something that I don't understand about Rust, or these rustylangs, is the insistence on separating structs and methods. Don't get me wrong, I like named-impl blocks, but why are they the only option? Why can't I put an unnamed-impl block inside the struct? Or better yet, just define methods on the struct? What's the point of this, and why do these rustylangs never seem to change it?
This seems awesome. Seems to address many of my armchair complaints about both Go (inexpressive) and Rust (bloated/complex).
I'm curious what compilation times are like. Are there theoretical reasons it'd be an order of magnitude slower than Go? I assume it does much less than the Rust compiler...
Relatedly, I'd be curious to see some of the things from Rust this doesn't include, ideally in the docs. Eg I assume borrow checking, various data types, maybe async etc are intentionally omitted?
Love the idea of bringing Rust ergonomics to the Go runtime. As someone currently building infra-automation tools (Dockit), the trade-off between Rust's safety and Go's simplicity is always a hot topic. This project addresses it in a very cool way. Will definitely follow the development
I'm wondering about the logistics of making this integrate with Go at the assembly/object-file level rather than at the source level. What if it compiled to Go's assembly rather than to Go source code?
This is really cool! Go is so dead simple to learn but it just lacks a few features. I feel this really fills that specific gap.
Go with more expressive types and a bit stricter compiler to prevent footguns would be a killer backend language. Similar to what TypeScript was to JavaScript.
My 2 cents would be to make it work well with TypeScript frontends. I think TypeScript is so popular in backends because 1. you can share types between frontend code and backend code and 2. it's easy for frontend devs to make changes to backend code.
Go is also great for making quick lil CLI things like this https://github.com/sa-/wordle-tui
A Lua interpreter written in Rust+GC makes a lot of sense.
A simplified Rust-like language written in, and compiling to, Rust+GC makes a lot of sense too.
A simplified language written in Rust and compiling to Go is a no-go.
Not saying those are the only two GC languages, just circling back to the post spawning these comments.
> FP's languages approach of rather not having null at all

But None is just another null / undefined, which brings along a bunch of non-idiomatic code around handling it.
But yeah it's a fair point. Sometimes I think I should just write my own lang (a subset of typescript), in the same fashion that Lisette dev has done.
You can't enforce it in any normal codebase because null is used extensively in the third party libraries you'll have to use for most projects.
But like I said, in my opinion this compares with Go not having an async story at all.
https://github.com/ivov/lisette/issues/12
I have a few approaches in mind and will be addressing this soon.
But I can't help wondering:
If it is similar to Rust, why not make it the same as Rust where it feature-matches?
Why import "foo.bar" instead of use foo::bar?
Why Bar.Baz => instead of Bar::Baz =>? What are you achieving here?
Why make it subtly different, so someone who knows Rust has to learn yet another language?
And someone who doesn't know Rust learns a language that is different enough that the knowledge doesn't transfer to writing Rust 1:1/naturally?
Also: int but float64?
Edit: typos
Lisette brings you the best of both worlds.