Longer Tempo Synced LFO Rates
Nikolozi
A Mela user working on generative presets needs LFO rates of 24, 32, 48, or 64 bars in length.
Further, he suggested that a custom length would be ideal, even the ability to specify something like 24 bars + 1/16 note.
My comment: I have to think about how the LFO can be generalised to support this without affecting ease of use for common cases. Also, it might make more sense to have a more customisable modulator/sequencer with a fully customisable loop length.
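A length like "24 bars + 1/16 note" boils down to simple arithmetic once a time signature is fixed. A minimal sketch, assuming 4/4 time (the helper name `lfo_rate_hz` is hypothetical, not a Mela API):

```python
# Hypothetical helper: convert a tempo-synced LFO length such as
# "24 bars + 1/16 note" into a rate in Hz. Assumes 4/4 time, where
# one bar equals one whole note (4 beats).

def lfo_rate_hz(bpm: float, bars: float, extra_note: float = 0.0) -> float:
    """extra_note is a fraction of a whole note, e.g. 1/16 for a 16th note."""
    beats_per_bar = 4.0
    seconds_per_beat = 60.0 / bpm
    total_beats = (bars + extra_note) * beats_per_bar
    return 1.0 / (total_beats * seconds_per_beat)

print(lfo_rate_hz(120, 64))        # 64 bars at 120 BPM -> 0.0078125 Hz
print(lfo_rate_hz(120, 24, 1/16))  # 24 bars + 1/16 note -> one cycle per 48.125 s
```

At 120 BPM a 64-bar cycle already sits far below typical sync-knob minimums, which is why typed input would be needed.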
Igor Pellegrini
Sorry for the OT from the other discussion with Samuel.
Thumbs up 👍 for a customisable modulator, but even being able to manually set a higher value than the sync knob allows, via the custom/typed input, would really help in using LFOs as automation (e.g. to enable/disable modules only every once in a while in a track).
Already getting to 64 bars would help quite a lot.
(personally I’m on Mela 3)
The feature can be described in the manual, for the users who want it, without affecting the current UX.
Samuel Lindeman
I definitely prefer a 'Step Value Sequence' (i.e. a custom LFO waveform) with optional per-step time for longer (or super-short) durations, as a modulation source.
As you may know I prototype my brain-farts with Drambo so here's one...
This is a very simple 'value sequence' (using the graphical modulator) fed thru a low-pass filter to smooth out the steps, and adding resonance to the filtered signal adds a bit of funky wobble.
For longer passages in Drambo I use the CV Gate Sequencer to traverse the steps with slow pulses, so each step can be as long as I need it to be.
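That patch translates directly into code: a stepped sequence run through a resonant low-pass. Here is a rough sketch using a standard 2-pole state-variable filter; all names and parameter choices are mine, not Drambo's:

```python
import math

def smooth_steps(steps, samples_per_step, sample_rate=48000.0,
                 cutoff_hz=4.0, resonance=0.3):
    """Smooth a stepped value sequence with a resonant low-pass (2-pole SVF)."""
    g = math.tan(math.pi * cutoff_hz / sample_rate)
    k = 2.0 - 2.0 * resonance  # lower k -> higher resonance -> more wobble
    ic1, ic2 = 0.0, 0.0        # filter state
    out = []
    for value in steps:
        for _ in range(samples_per_step):
            v1 = (ic1 + g * (value - ic2)) / (1.0 + g * (g + k))
            v2 = ic2 + g * v1
            ic1 = 2.0 * v1 - ic1
            ic2 = 2.0 * v2 - ic2
            out.append(v2)  # low-pass output
    return out
```

With a low cutoff the steps glide into each other; pushing `resonance` toward 1.0 makes the filter ring at each step edge, which is the "funky wobble".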
Cheers!
Nikolozi
Samuel Lindeman: Filtering the modulator signal with resonance is pretty cool. Would love to be able to do similar things in Mela. One of the reasons I haven't added modulator smoothing to Mela yet is that I haven't come up with a good/simple solution. A modulation signal flowing between modules doesn't quite fit Mela's UI. Well, not yet anyway. I do plan to add a module, maybe called Audiorate, which will be a modulator that simply takes the input audio and uses it to modulate any parameter. So maybe there should also be a module, let's call it Mod-to-Audio temporarily, that can take a modulator signal (via its dial being modulated) and generate an audio signal; the audio signal can then be processed (e.g. using a filter), and the Audiorate module can convert that audio signal back into a modulator signal. It would look something like this:
[Mod-to-Audio] -> [Filter] -> [Audiorate]
The problem with this approach is that Mod-to-Audio would probably block the incoming audio signal, and after Audiorate we are stuck with the modulation signal. Maybe it's not a problem in all cases, but maybe we can improve on it.
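Per sample, the round trip could look like the sketch below. ModToAudio and Audiorate are just the working names from above; the value mapping and the placeholder filter are my assumptions:

```python
class OnePoleLowpass:
    """Placeholder audio processor standing in for [Filter]."""
    def __init__(self, coeff=0.1):
        self.coeff, self.state = coeff, 0.0

    def process(self, x):
        self.state += self.coeff * (x - self.state)
        return self.state

def process_modulation(mod_values, audio_processor):
    out = []
    for m in mod_values:
        audio = m * 2.0 - 1.0                   # Mod-to-Audio: [0, 1] mod -> [-1, 1] audio
        audio = audio_processor.process(audio)  # any audio processing in between
        out.append((audio + 1.0) * 0.5)         # Audiorate: [-1, 1] audio -> [0, 1] mod
    return out

smoothed = process_modulation([1.0] * 1000, OnePoleLowpass(0.1))
```

Note that the chain consumes a modulation signal and produces a modulation signal, which mirrors the issue just described: whatever audio was flowing through that slot is displaced.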
An improved idea could be a container approach. I've mentioned this before but I'll state it again to add clarity. I want to add containers where grouped modules can be processed independently. For example:
{ G: [Osc 1] -> [Panner 1] } -> { G: [Osc 2] -> [Panner 2] }
where { G: } denotes a grouping container. So in this case, Panner 1 is applied only to Osc 1 and Panner 2 only to Osc 2, and then the two results are added. This is in contrast to the following, where Osc 1 is processed by Panner 1, then the Osc 2 signal is added, and finally the resulting signal is processed by Panner 2:
[Osc 1] -> [Panner 1] -> [Osc 2] -> [Panner 2].
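The difference between the two routings can be sketched with toy per-sample modules (plain callables; the oscillator and panner here are deliberately simplistic and all names are illustrative):

```python
def osc(level):
    # toy oscillator: adds a constant "signal" to whatever comes in
    return lambda x: x + level

def panner(gain):
    # toy panner: scales the incoming signal
    return lambda x: x * gain

def serial(modules, x=0.0):
    for m in modules:
        x = m(x)
    return x

def group(*modules):
    # { G: ... }: run the sub-chain from silence, add its output to the input
    return lambda x: x + serial(list(modules), 0.0)

# { G: [Osc 1] -> [Panner 1] } -> { G: [Osc 2] -> [Panner 2] }
grouped = serial([group(osc(1.0), panner(0.5)),
                  group(osc(1.0), panner(0.25))])
# each panner shapes only its own oscillator: 1*0.5 + 1*0.25 = 0.75

# [Osc 1] -> [Panner 1] -> [Osc 2] -> [Panner 2]
chained = serial([osc(1.0), panner(0.5), osc(1.0), panner(0.25)])
# Panner 2 also scales Osc 1's contribution: (1*0.5 + 1) * 0.25 = 0.375
```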
So, now back to modulators, we could have something like this:
[Osc 1] -> { M: [Filter] + [xyz] } -> [Amp]
The container { M: } is basically the Mod-to-Audio and Audiorate modules combined. It will receive a modulator signal via a modulated dial, convert it to audio, and process it through Filter, xyz, or whatever audio processor you insert in the container, then convert that audio signal back into a modulation signal. For the Osc 1 and Amp modules, it would be as if nothing were in between, i.e. the Amp module receives Osc 1's output unmodified.
This is a rough idea; I was thinking it through as I typed. But I think it would be good to have something in Mela where we could take a modulation signal, process it via audio processors, and convert it back into a modulation signal.
Samuel Lindeman
Nikolozi: I have an idea here regarding using an external input as a modulation source and passing it thru a sample & hold / decimator / average loop. In practice this would take a snapshot of the input level at a set interval and either hold it or average it over that interval.
Processing modulation signals is always tricky and would require per-sample calculation for 'everything' in the signal chain.
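The snapshot-or-average idea might look like this per interval (a sketch; the function names are made up for illustration):

```python
def sample_and_hold(signal, interval):
    # snapshot the input every `interval` samples and hold it in between
    held, out = 0.0, []
    for i, x in enumerate(signal):
        if i % interval == 0:
            held = x
        out.append(held)
    return out

def interval_average(signal, interval):
    # replace each block of `interval` samples with its mean
    out = []
    for start in range(0, len(signal), interval):
        block = signal[start:start + interval]
        out.extend([sum(block) / len(block)] * len(block))
    return out
```

Averaging smooths incidental peaks, while hold reacts only to whatever value lands on the snapshot instant; which behaves better depends on the source material.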
Nikolozi
Samuel Lindeman: For simplicity, Mela is already doing per-sample calculations for everything, including modulations. I found that when I introduced downsampled modulation signals, certain parameter modulations produced unwanted noise. For example, modulating distortion drive sounded bad.
Per-sample processing also keeps things like 1-sample feedback possible in the future, for example for filter design.
If you think about it, Mela is already doing audio-rate modulation with the LFO, as it goes up to 655 Hz, and the generated sinusoid is produced at the sample rate.
Just to clarify, when I use the word audiorate I mean a signal that's audible and can be listened to. Otherwise, all modulator signals are generated at every sample in Mela.
What does Drambo do? Isn't everything happening at the sample rate there as well?
Samuel Lindeman
Nikolozi: Almost everything in Drambo (except WT Effects, which for now are per WT frame) is calculated at 'sample rate' (some modules do internal oversampling); manual feedback loops are delayed by one frame, since everything flows left to right and top to bottom to keep things as optimized and CPU-efficient as possible.
'Down-sampled' modulation signals in Mela 4 could be usable in some cases for 'stepped modulation'; this could work quite nicely for pitch, various oscillator shapes, and FM depth to create 'stepped animation'.
Think 'Sample & Hold': even at reduced rates the 'source' would always output the last known value and as such should not create any 'noise'. But if it only outputs a value every Xth sample, then the 'gaps' in the modulation signal will create noise.
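The hold-versus-gaps distinction is easy to show numerically (a toy sketch, not Mela code): holding keeps a constant input step-free, while gapping introduces full-scale jumps.

```python
def hold_downsample(signal, n):
    # keep emitting the last sampled value between updates
    return [signal[(i // n) * n] for i in range(len(signal))]

def gapped_downsample(signal, n):
    # emit a value only every nth sample, zero in between
    return [signal[i] if i % n == 0 else 0.0 for i in range(len(signal))]

def max_step(xs):
    # largest sample-to-sample jump, a rough proxy for audible noise
    return max(abs(b - a) for a, b in zip(xs, xs[1:]))

constant = [1.0] * 20
print(max_step(hold_downsample(constant, 5)))    # 0.0 -> no discontinuities
print(max_step(gapped_downsample(constant, 5)))  # 1.0 -> full-scale jumps
```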
Gotcha regarding 'audio rate' in Mela 4. An 'audio peak/RMS level' (envelope follower) in particular could be a very usable modulation source for voice-controlled filters.
Sorry for the Drambo references, but it is one of my main 'go-to' apps for almost everything noise-related on iOS.
Cheers!
Nikolozi
Samuel Lindeman: How many samples are in a frame in Drambo? Is that user-settable?
Agreed, I think down-sampled modulation is great, as long as it's intended by the user. For example, if the user wants to decimate or bit reduce. Those should become possible at some point.
Keep the Drambo references coming. I'm happy to build on others' experiences and things they figured out work well. There aren't many apps of this kind out there.
Samuel Lindeman
Nikolozi: The Drambo WT 'Frame Size' is not user-adjustable, and the 'feedback loop' is always delayed by one audio buffer, which is user-adjustable down to 128 samples.
Considering that each 'frame' in a wavetable is 2048 samples long, my best guess is that the 'Frame Size' for WT processing is 2048 divided by the audio buffer size. But I think it's faster, since adjusting the audio buffer size has no effect on the update frequency of the WT parameters (I guess it's buffer-size compensated?). All in all it works quite well, but there are some limitations, like per-sample FM of a WT oscillator: it can be modulated at audio rate, but it's not 'super smooth' due to the frame-size mismatch.
The 'holy grail' for wavetable oscillator effect processing is per-sample processing at sample rate, which not even all hardware synths can do properly. It takes some serious 'DSP magic' to do this with multiple processing modules.
I know BM3's internal processing buffer was/is always 32 samples regardless of the audio buffer size or sample rate used.
As I'm not a programmer I can only make somewhat educated guesses.
Cheers!