Update — hardware validation complete. The constraint engine generated Verilog for an SR Latch (3 quantities, 6 valid states, forbidden S=1 R=1), synthesized through Yosys/nextpnr/icepack, and ran correctly on a Lattice iCE40-HX8K (Alchitry Cu V2). All 8 test criteria passed including constraint enforcement in silicon — the forbidden state was never entered. Deterministic and repeatable on reset. Full open-source toolchain from constraints to silicon, no hand-written RTL.
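To make the state-space claim concrete: with three binary quantities and the single forbidden combination S=1 R=1, you get exactly 6 valid states out of 8. This is a plain-Python illustration of that count, assuming the three quantities are S, R, and Q (the post doesn't name them — that's my guess, not the engine's actual encoding):

```python
from itertools import product

# Hypothetical reconstruction: treat the three quantities as S, R, Q,
# each binary, and forbid the combination S=1 R=1.
def forbidden(s, r, q):
    return s == 1 and r == 1

# Enumerate all 2^3 = 8 assignments and keep only the valid ones.
valid_states = [(s, r, q) for s, r, q in product((0, 1), repeat=3)
                if not forbidden(s, r, q)]

print(len(valid_states))  # 6, matching the "6 valid states" in the post

# The two excluded assignments are exactly the forbidden S=1 R=1 pair.
assert (1, 1, 0) not in valid_states
assert (1, 1, 1) not in valid_states
```

The auto-test described above would then walk transitions between those 6 encodings; this sketch only shows why the count comes out to 6.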
For the skeptics upthread: this isn't hand-waving anymore. The constraint spec goes in, synthesizable Verilog comes out, it passes on real hardware. The auto-test exercises every FSM transition and verifies output decode logic for all 6 state encodings.
Why do I get the overwhelming feeling that the author is not a technical person and they had an LLM write this based on some handwavey ideas? There's virtually no _there_ there.
"...demonstrates its capabilities through worked examples" - The hell it does, your "examples" are three lines long. If you're going to compare it with LLMs, then have it do something LLM-ish. Or hell, the MNIST number recognition task would be better than the "hey look i modeled a flip-flop in my funny language" example.
Am I being harsh? Yes, I am. The author is claiming that they have a system that can automatically generate code for "quantum" and "spintronic" computers, yet offers zero proof of that.
It's ok to be harsh — I was vague, and I know it. Here's why: I remember the story about DOS. Tim Paterson built it, showed too much, and someone else built an empire on it for $50K. I have working constraint rules that produce real circuit behaviors. The paper shows what comes out, not how it works, and that's deliberate. The patent is provisional. I'm not going to hand over implementation details on a forum so someone with more resources can run with it. You'd do the same thing.
The behaviors that emerge — hysteresis, oscillation, bistable memory — are the same computational primitives you see in biological neural circuits, but they come from constraint satisfaction over conserved quantities instead of simulating neurons. The architecture doesn't model neurons at all. It produces the same outputs through a different mechanism. Whether that still counts as "neuromorphic" is debatable — I use the term because the output behaviors map directly to the same hardware substrates (Loihi, SpiNNaker, etc).
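For readers unfamiliar with why an SR latch counts as "bistable memory with hysteresis": the output depends on input history, not just the current input. This is not the author's engine — just a minimal Python sketch of the behavior the constraint rules are said to produce, with the forbidden state enforced as a hard constraint:

```python
def next_q(s, r, q):
    """SR-latch next-state function with the forbidden input excluded."""
    if s == 1 and r == 1:
        # Constraint enforcement: S=1 R=1 is outside the valid state space.
        raise ValueError("forbidden state S=1 R=1")
    if s == 1:
        return 1  # set
    if r == 1:
        return 0  # reset
    return q      # S=R=0: state persists -> bistable memory

# Hysteresis: the same input (S=0, R=0) yields different outputs
# depending on what happened before.
q = 0
q = next_q(1, 0, q)  # set
q = next_q(0, 0, q)  # hold: q stays 1
assert q == 1
q = next_q(0, 1, q)  # reset
q = next_q(0, 0, q)  # hold: q stays 0
assert q == 0
```

Whether producing this behavior via constraint satisfaction (rather than simulated neurons) merits the "neuromorphic" label is exactly the debate in this subthread.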
Carver Mead's "Analog VLSI and Neural Systems" for where neuromorphic computing started. Intel's Loihi papers (Mike Davies et al.) for where it is now. Our paper takes a different path — constraints instead of neurons.
Thank you. I see those (and more) in the Wikipedia entry for neuromorphic computing.
Any books you can recommend? I see a bunch on Amazon but not sure which are the good technical ones. Something with more information about the various hardware approaches (eg. non-ISA/hybrid/etc.) would be welcome.
Video of the board running and validation report are on the site: https://universalconstraintengine.net