I've been thinking of and briefly working on a similar project.
One idea is to analyze timing as well, and "trigger" things after certain sequences (so play 1-3-5 as say eighth notes and then get an in-rhythm arpeggio one octave higher) or detect the beat and play on the upbeat.
I haven't done any Rust, but this might give me a good reason to give it a try.
Have you considered making it a plugin? (makes replay easier in my opinion, but brings other pain like relaunching the DAW between builds...)
I had added a metronome and a note generator to this earlier, though they aren't working quite right at the moment. But this would definitely increase playability. I agree that there should be a VST plugin version of this as well; it can live as both a standalone app and a plugin. Could you elaborate on the timing analysis? If you had to play using this, what would you like to hear?
I've got a few thoughts for features, if you're open to them:
1. Ability to specify where your "played" voice resides in the voicing: As the bass note, as an inner voice, or as the top line.
2. Options for first species, second species, third, florid, etc counterpoint for each of the generated voices. Ex: You play a single note and the upper voice plays two notes for every one of yours, etc, etc.
3. If you want to get real fancy, make the generated voices perform a canon of your played notes.
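Idea 2 above (species counterpoint) could be sketched roughly like this: in second species, each played note maps to two generated notes in the upper voice. This is a hypothetical illustration, not Contrapunk's actual voice-leading rules; the interval choices are just common consonances.

```rust
/// Hypothetical second-species mapping: for one played MIDI note,
/// return two upper-voice notes, a consonance on the beat and a
/// stepwise continuation on the offbeat. Illustrative intervals only.
fn second_species(played: u8) -> [u8; 2] {
    let on_beat = played + 7;  // perfect fifth above, a classic consonance
    let off_beat = played + 9; // major sixth, reached by step from the fifth
    [on_beat, off_beat]
}

fn main() {
    // Playing middle C (MIDI 60) yields G4 (67) and A4 (69) above it.
    assert_eq!(second_species(60), [67, 69]);
}
```

A real implementation would of course pick intervals contextually (avoiding parallel fifths, preferring contrary motion), but the one-note-in, two-notes-out shape is the core of the feature.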
Have you been able to try it as well? Would love to hear what you think! Coming back to the features: regarding 1, you can already choose between soprano, alto, tenor, or bass. I have still filed an issue for this; it will help me remember to vet this feature. Sometimes it's not as strict as it should be, but that's also something I need to work on. Regarding 2, it's a good idea, it helps you stay in control of the kind of counterpoint you are doing, so I filed an issue for that too. Please feel free to comment on the issue. 3 feels a little goofy, and I love it. I have filed an issue for this as well; check https://github.com/contrapunk-audio/contrapunk/issues/
"Realtime" as in "while playing guitar" has some pretty challenging latency requirements. Even if your solution is optimal, hardware specs will play a meaningful role. I'd be really interested if you've solved for this e2e.
Yes, latency was the main problem to solve here, which is why I opted for Rust. The pipeline is:
- 128-sample cpal audio buffers (~2.7ms at 48kHz)
- Single-cycle pitch detection
- 2-frame McLeod pitch voting for confirmation
- Entire DSP pipeline is Rust, pre-allocated ring buffers with minimal heap pressure
The e2e latency from pluck to MIDI note-on is under 10ms on an M-series Mac. Hardware matters for sure, so an audio interface with low-latency drivers (I use an Audient iD14) helped a lot. The web version (app.contrapunk.com) adds AudioWorklet latency on top, so the native Mac app is noticeably tighter. I am still working on figuring out how to reduce noise and pitch jitter in the final output. Also, this works really well for higher notes, but not so well for bass right now; I still need to figure out how to handle harmonics better. I have created this issue for you for now, let me know if you would like to add anything else to it: https://github.com/contrapunk-audio/contrapunk/issues/6.
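The "2-frame pitch voting" step in the pipeline above could look something like this: only commit to a note-on once two consecutive detector frames agree within a tolerance. The struct name and the tolerance value are assumptions for illustration, not Contrapunk's actual code.

```rust
/// Confirms a pitch estimate only after two consecutive frames agree
/// within `tolerance_hz`. Until then, transient or noisy frames are
/// held back, trading one frame (~2.7ms at 128 samples / 48kHz) of
/// latency for fewer spurious note-ons.
struct PitchVoter {
    last: Option<f32>,
    tolerance_hz: f32,
}

impl PitchVoter {
    fn new(tolerance_hz: f32) -> Self {
        Self { last: None, tolerance_hz }
    }

    /// Feed one frame's pitch estimate; returns Some(pitch) once confirmed.
    fn push(&mut self, pitch: f32) -> Option<f32> {
        let confirmed = match self.last {
            Some(prev) if (prev - pitch).abs() <= self.tolerance_hz => Some(pitch),
            _ => None,
        };
        self.last = Some(pitch);
        confirmed
    }
}

fn main() {
    let mut voter = PitchVoter::new(3.0);
    assert_eq!(voter.push(110.0), None);        // first frame: no confirmation yet
    assert_eq!(voter.push(110.5), Some(110.5)); // second frame agrees: note-on
    assert_eq!(voter.push(220.0), None);        // octave jump: wait for agreement
    assert_eq!(voter.push(219.8), Some(219.8)); // confirmed at the new pitch
}
```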
That's awesome, thank you for sharing, really appreciate it! Hope you have a good time with Contrapunk. Just let me know if you are facing any issues as well.
Sorry about that, you will have to download the app again; there was an issue with the MIDI out, for guitar at least. Just to be sure, have you set up the IAC buses? I usually like to have 4 and then configure them as output devices.
There was a 1970s Indonesian progressive rock band called Contrapunk that released an album called "Putri Mohon Diri" [1].
You can find the recording on YT [2]
They were really unique - blending traditional Indonesian instruments, intense guitar work and classical influences.
Ah, banger from the get go. Guess this is what I will be listening to through the rest of the day. And I will add this as a fun fact to the website as well. Thank you for sharing :)
Nicely done! I saw you're building a cloud version, so I wanted to refer you to the excellent https://github.com/sonosaurus/sonobus, which you might be able to integrate. All the best!
What a cool idea. I don't have a music setup capable of running this right now - perhaps in a couple of months - but if you were to post some sample recordings, I'd gladly listen to them.
How do you generate velocity values for the accompaniment notes?
Given that you already have a pitch tracker, it could be interesting to add key detection; just start playing, instead of telling the machine what key you're in, and it starts following along as soon as it catches on.
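The key-detection idea could be sketched as a pitch-class histogram built from the tracked notes, matched against scale templates: once enough notes have accumulated, the best-covering key wins. Everything below is illustrative (a real detector would use weighted profiles such as Krumhansl-Schmuckler and handle minor keys); nothing here is part of Contrapunk.

```rust
// Semitone offsets of the major scale above the tonic.
const MAJOR_SCALE: [usize; 7] = [0, 2, 4, 5, 7, 9, 11];

/// Guess the major-key tonic (0 = C, 1 = C#, ... 11 = B) from a
/// pitch-class histogram by picking the tonic whose scale covers
/// the most observed weight.
fn guess_major_key(histogram: &[f32; 12]) -> usize {
    let score = |tonic: usize| -> f32 {
        MAJOR_SCALE.iter().map(|&s| histogram[(tonic + s) % 12]).sum()
    };
    (0..12)
        .max_by(|&a, &b| score(a).partial_cmp(&score(b)).unwrap())
        .unwrap()
}

fn main() {
    let mut hist = [0.0f32; 12];
    // Simulate playing a G major phrase: G A B C D E F#
    for pc in [7, 9, 11, 0, 2, 4, 6] {
        hist[pc] += 1.0;
    }
    assert_eq!(guess_major_key(&hist), 7); // 7 = G
}
```

Feeding this from the existing pitch tracker would mean folding each confirmed MIDI note modulo 12 into the histogram, possibly with exponential decay so the guess can follow modulations.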
FYI in this phrase: "AI is not going to kill music till people keep playing music together."
The "till" (until) kind of inverts what I think is the intended meaning.
A better replacement would be "as long as".
If you're interested in adding more scales, I have a wider range in my scale builder: https://xn--langy-yua.net/skala
https://steinbergmedia.github.io/vst3_doc/vstsdk/index.html
[1] https://www.discogs.com/release/17424685-Contrapunk-Putri-Mo...
[2] https://www.youtube.com/watch?v=jb1792ZuXcY
> Guess this is what I will be listening to through the rest of the day
You won't be alone :)
Such a cool project and name, and thanks for making it open source!