Fluidi

Fluidi is a signaling protocol for fluid music, invented by S. Mann.

See, for example, https://roli.com/products/seaboard/grand-stage for a product using this system.

The core of the idea is in this excerpt from issued US Patent 8,017,858:

Since most MIDI devices support 15 channels, this filterbanking of a MIDI device is performed by the following steps:

1. Initialize the instrument. For each of a desired number of MIDI channels (all 16 or 15, or the needed number such as 12), do the following once when the apparatus is first powered up:

(a) Issue an instrument change command to select a non-decaying instrument such as a flute or organ (most MIDI synths default to piano, which will not work as well for filterbanking because piano note sound output levels decay exponentially with time). A good choice of oscillator is strings (voice 49), which can be selected by the command C0 49 49 for channel 1, C1 49 49 for channel 2, C2 49 49 for channel 3, and so on until all desired channels are set to a non-decaying instrument. Here the first byte of each command is shown in base 0xF+1 (i.e. what's called "hex" or "hexadecimal" or "base sixteen" by those who think in base 0xA, but obviously in base 0x10 in its own base).

(b) Initialize channel 1 to sound an "A" note with, for example, the command 0x90 45 127. Initialize channel 2 to sound a "B" note with the command 0x90 47 127. Initialize channel 3 to sound a "C" note with the command 0x90 48 127. Continue in this manner, initializing each channel to sound one of the desired notes on the scale. Now the instrument will be producing a "compass drone" that drones with all the notes in the playing compass.

2. Now the instrument is initialized and ready to play music. Music is played by executing the following instructions in an infinite loop:

(a) Read the signal from the microphone, MIC A, on the first sounding port, 499A. Scale this signal onto the interval from 0 to 127. The microphone signal will go negative as well as positive, but the interval of allowable MIDI volumes only goes from 0 to 127 (i.e. not negative). In some embodiments this scaling is done by envelope tracking. In some embodiments the envelope tracking is done by computing the Hilbert transform of the microphone signal, multiplying it by the square root of negative one, adding the result to the original microphone signal, computing the square root of the sum of the squares of the real and imaginary components, and then applying a linear scaling to map the result onto the desired interval. In other embodiments an absolute value function (in some embodiments followed by lowpass filtering) is used, together with appropriate linear scaling. Typically a volume is derived so that each MIDI channel is amplitude-modulated by the corresponding microphone input. We're now in an infinite loop, and if the loop executes fast enough we'll have an essentially continuous update of the oscillator volumes, which maintains the acousticality of the instrument. In particular, set the volume of MIDI channel 1 to correspond with the signal volume level present on MIC A. This may be done with the MIDI command 0xB0 7 VOL, where VOL is the appropriate number from 0 to 127.

(b) Read the signal from the microphone on the second sounding port, 499B. Scale this signal from MIC B onto the interval from 0 to 127. Adjust MIDI channel 2 volume to match this level. Use the command 0xB1 7 VOL, where VOL is the appropriate number from 0 to 127.

(c) Read the signal from the microphone on the third sounding port, 499C. Scale this signal from MIC C onto the interval from 0 to 127. Adjust MIDI channel 3 volume to match this level. Use the command 0xB2 7 VOL, where VOL is the appropriate number from 0 to 127.

(d) Continue, reading each microphone input and setting each MIDI channel volume output to the corresponding value.

(e) Remain in this infinite loop as long as power remains supplied to the instrument.
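The steps above can be sketched in Python as a message builder plus one pass of the playing loop. Everything here is an illustrative assumption, not the patent's implementation: messages are built as raw MIDI byte strings, envelope tracking uses the absolute-value-plus-lowpass variant, and the sketch follows the MIDI convention that the channel number lives in the low nibble of the status byte (so channel 2's note-on is 0x91 and channel 3's is 0x92, where the excerpt writes 0x90 throughout) and that a program-change message carries a single data byte (General MIDI strings, voice 49, is data byte 48).

```python
# Sketch of the duringtouch/FLUIDI filterbanking algorithm above.
# Hypothetical names; fake sample blocks stand in for the mic inputs.

NOTES = [45, 47, 48]  # the "A", "B", "C" drone notes, one per channel

def init_messages(notes):
    """Step 1: select a non-decaying voice, then start each channel's drone."""
    msgs = []
    for ch, note in enumerate(notes):
        msgs.append(bytes([0xC0 | ch, 48]))         # program change: GM strings (voice 49 = data byte 48)
        msgs.append(bytes([0x90 | ch, note, 127]))  # note on, full velocity; a note-off is never sent
    return msgs

def track_envelope(samples, state, alpha=0.05):
    """Absolute value followed by a one-pole lowpass filter."""
    for s in samples:
        state += alpha * (abs(s) - state)
    return state

def volume_messages(envelopes, full_scale=1.0):
    """Step 2: map each envelope onto 0..127 and emit a CC7 volume message."""
    msgs = []
    for ch, env in enumerate(envelopes):
        vol = max(0, min(127, int(127 * env / full_scale)))
        msgs.append(bytes([0xB0 | ch, 7, vol]))     # controller 7 = channel volume
    return msgs

# One pass of the infinite loop, with fake blocks standing in for
# MIC A, MIC B, MIC C on sounding ports 499A, 499B, 499C:
states = [0.0] * len(NOTES)
mic_blocks = [[0.9, -0.8, 0.9], [0.2, -0.1, 0.2], [0.0, 0.0, 0.0]]
states = [track_envelope(b, s) for b, s in zip(mic_blocks, states)]
updates = volume_messages(states)
```

Run fast enough, repeated `volume_messages` updates amplitude-modulate each droning channel by its microphone's envelope, which is what turns the synthesizer into a filterbank.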

It also appears on page 27 of Canadian Patent 2633679.

See also these excerpts from the following patent application:

[0082] In some embodiments much of this frequency-shifting is done using combinations of oscillators and modulators. In particular, a MIDI device is used for the oscillators, and thus some or all of the filterbanks in a hydraulophone installation can be implemented by way of MIDI devices. This is not the manner in which MIDI was designed to be used (i.e. MIDI is usually used for the production of sound rather than for the filtering or modification of already-existing sound), but certain behavior of certain MIDI devices can be exploited to produce the desired effects processing.

[0083] Duringtouch: A curious side-effect of using MIDI-compliant oscillators to implement acoustic filterbanks leads to an embodiment I call duringtouch. Duringtouch is the use of MIDI signalling for a smooth, near-continuous processing of audio from a separate microphone, hydrophone, or geophone for each note on an instrument such as a hydraulophone.

[0084] Normally MIDI is used to trigger notes using a note-on command, at a particular velocity, perhaps followed by aftertouch (channel aftertouch or polyphonic aftertouch).
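For contrast with duringtouch, the conventional note lifecycle paragraph [0084] refers to looks like this as raw bytes (channel 1 throughout; status-byte values are from the MIDI 1.0 specification, the note and pressure values are arbitrary examples):

```python
# Conventional MIDI usage on channel 1 (channel number 0 in the status low nibble):
note_on         = bytes([0x90, 60, 100])  # note on: middle C, velocity 100
poly_aftertouch = bytes([0xA0, 60, 80])   # polyphonic aftertouch: per-key pressure
chan_aftertouch = bytes([0xD0, 90])       # channel aftertouch: one pressure for all keys
note_off        = bytes([0x80, 60, 0])    # note off -- the message duringtouch never sends
```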

[0085] In duringtouch, however, the idea is to get a MIDI device to become a sound processing device. With many hydraulophone embodiments, there is no such thing as a note-off command, because all the notes sound for as long as the instrument is running. In preferred embodiments there is a continuous fluidity in which the turbulent flow of water, through each keyboard (jetboard) jet and sounding mechanism, causes each note to sound to some small degree even when no one is playing the instrument.

[0086] When nobody is playing the instrument, it still makes sound from the gurgling of the water, and turbulence, etc. In fact, the gentle "purring" of the instrument is a soothing sound that many people enjoy while sitting in a park eating their lunch.

[...]


[0369] The above algorithm represents a system that works with a simple form of "duringtouch". Duringtouch is a physics-based user-interface methodology with an acoustic-originating equivalent to the polyphonic aftertouch found in the music synthesis world, but it overcomes many of the limitations of polyphonic aftertouch. The electrical interface to a device that works with duringtouch is sometimes referred to as FLUIDI (Flexible Liquid User Interface Device Interface), where the word "Liquid" in no way limits the invention to use with liquids (i.e. the invention will work with solids, gases, plasmas, Bose-Einstein condensates, or various other states of matter). (See, for example, "Natural interfaces for musical expression: Physiphones and a physics-based organology", by S. Mann, in Proceedings of the 7th international conference on New interfaces for musical expression, 2007 Jun. 6, New York.)

[0370] A sound synthesizer that can be "hacked" in this manner to become a filterbank (i.e. an array of bandpass filters) is said to be FLUIDI-compliant. Surprisingly few MIDI synthesizers work with this "hack" (i.e. few synths are FLUIDI-compliant), but enough exist to make the invention viable. An example of a FLUIDI-compliant sound synthesizer is the Yamaha PSRE303.
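One hypothetical way to check a candidate synthesizer for FLUIDI compliance (this probe is an assumption, not from the patent) is to start a drone note and then send a rapid ramp of CC7 volume messages: a compliant synth will track the ramp as smooth, continuous amplitude modulation, while a non-compliant one will step, zipper, or ignore the changes.

```python
def compliance_probe(channel=0, note=45, steps=32):
    """Byte stream for a FLUIDI smoke test: start a drone note, then
    ramp the channel volume 0 -> 127 and back down via CC7 messages."""
    stream = bytearray([0x90 | channel, note, 127])  # start the drone
    ramp = list(range(0, 128, 128 // steps))
    for vol in ramp + ramp[::-1]:
        stream += bytes([0xB0 | channel, 7, vol])    # controller 7 = channel volume
    return bytes(stream)

probe = compliance_probe()  # send this to the synth and listen for smooth swells
```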

[0371] Duringtouch and its associated electrical protocol, FLUIDI, often turn out to be a good low-cost alternative to polyphonic aftertouch. They can also maintain much of the fluidity and acousticality of instruments such as physiphones that use physics-based acoustically-originated sounds.

[0372] The FLUIDI aspect of the invention is not limited to physiphones, i.e. it may also be used in electronic instruments (electrophones).

[0373] In some embodiments of the apparatus depicted in FIG. 4, the output signal is fed back to a speaker inside the outer housing of the instrument, and this acoustic feedback helps improve the sound of the instrument. In some of these feedback-based embodiments, a separate processor 430A is optimized for acoustic feedback, to drive feedback exciter 440.

[0374] Signal 450A passes through processor 430A and emerges as signal 460A. Signal 460A is connected by a jumper cable to the next processor 430B, at signal 470A input.

[0375] The instruments depicted in FIGS. 2 to 5 take on the form of giant flutes that emit fluid out of finger holes. The volume (sound level) of the instrument may be controlled by adjusting the water level, i.e. typically increasing the water flow will make the instrument play louder. This effect can be accentuated by installing an extra microphone or hydrophone in the manifold, or in an extra opening from the manifold, and connecting it to a voltage-controlled amplifier or gain control stage that increases the gain as the water flow increases, in a way that's more pronounced than what occurs naturally.

United States Patent Application 20120011990, Mann; Steve, January 19, 2012.