Thursday 10 October 2024

The Phone Ringing

 We all want to play music over the telephone.

the hub piece of a bamboo roundhouse, freshly fabricated black steel with one rainy night's rust on it

This steel jointing device illustrates the radiant vibe of real-time action.
The technology creates an inside, in which it is a certain time... Better, more real...

I wonder if you've seen anything for playing music together, on a call?

We loosen our sense of time until the latency doesn't bother us.
I'd like to go for audio quality too though, and record everything, multitrackily.
I think here|there would want mixing with an offset of the latency, sometimes, depending on:
whether they were playing in time with me
as I turned up there, $late*1
and they got back at $late*2
or vice versa
or perhaps having some sync animation to take as sheet music.
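That offset mixing could look something like this — a sketch only, where `L` is the one-way link latency in seconds, the two perspectives are the here|there above, and `mixdown_offsets` is a hypothetical helper, not existing code:

```javascript
// Sketch: per-track offsets for mixing down a jam over a laggy link.
// I play at t=0 (my clock); they hear it L later and play along in
// time *there*; their audio lands back here at t=2L.
function mixdown_offsets(L, perspective) {
  if (perspective == "here") {
    // line their take up with mine: slide their track back 2L
    return { me: 0, them: -2 * L };
  }
  if (perspective == "there") {
    // reconstruct their room: my notes arrived L late there
    return { me: L, them: -L };
  }
  throw "unknown perspective: " + perspective;
}
```

So with 100ms each way, mixing "as heard there" means nudging my track 100ms later and theirs 100ms earlier.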

this is goddamn researchalicious.




seen any decent WebRTC things at all?

I desire a mixer y transporter y synchroniser for a personal area network of phones and tablets that is my stage gear...

Jitsi is FOSS, powers talk.brave.com, can be yours as long as you have https so WebRTC will come out of its shell.





What does apparently exist,

via https://writing.exchange/@ernie/113285588015442907

As y’all probably know I have not really found a recording process for Linux I’m really happy with. Nothing hits the sweet spot for me. But I think I’ve gotten somewhat closer.

- Video: GPU Screen Recorder https://flathub.org/apps/com.dec05eba.gpu_screen_recorder
- Audio: Noisetorch https://github.com/noisetorch/NoiseTorch
- Audio-to-text: Buzz https://chidiwilliams.github.io/buzz/docs

Still not perfect. But getting there.

Sunday 14 July 2024

migrating zap.py -> docker compose = codium nirvana?

 I have a script doing containers, I want to move them to docker compose. 

the init:

        podman build -t cos .

        podman build -t py py


the run:

        podman run -v ~/v:/v:ro -v .:/app:exec -p 5000:5000 --rm -it --name py1 py bash -c 'python py/serve.py'

        podman run -v .:/app:exec -p 3000:3000 --rm -it --name cos1 cos bash -ci 'npm run dev -- --port 3000 --host 0.0.0.0'


Also, some other things. These all become:

version: '3.8'

x-defaults: &defaults
  stdin_open: true
  tty: true
  restart: always

services:
  cos:
    build:
      context: .
      dockerfile: Containerfile
    volumes:
      - .:/app:exec
    ports:
      - "3000:3000"
    command: bash -ci 'npm run dev -- --port 3000 --host 0.0.0.0'
    container_name: cos1
    <<: *defaults

  py:
    build:
      context: py
      dockerfile: Containerfile
    volumes:
      - ~/v:/v:ro
      - .:/app:exec
    ports:
      - "5000:5000"
    command: bash -c 'python py/serve.py'
    container_name: py1
    <<: *defaults

  py2:
    build:
      context: py
      dockerfile: Containerfile
    volumes:
      - .:/app:exec
    ports:
      - "8000:8000"
    command: bash -c 'python py/ipfs.py'
    container_name: py2
    <<: *defaults

  pl:
    build:
      context: pl
      dockerfile: Containerfile
    volumes:
      - ../../stylehouse:/app:exec
    ports:
      - "1812:1812"
    command: ./serve.pl
    container_name: pl1
    <<: *defaults
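For what it's worth, the `<<: *defaults` merge key behaves much like object spreading: the shared block goes in first, and any key the service states explicitly wins. A throwaway illustration:

```javascript
// YAML merge-key semantics, roughly: defaults first, own keys override.
const defaults = { stdin_open: true, tty: true, restart: "always" };
const cos = { ...defaults, container_name: "cos1" };
const oneshot = { ...defaults, restart: "no" }; // own key wins
```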

docker underworld

Then there's a haze of commands as I tried to install...

sudo apt install docker-compose

sudo pip3 install docker-compose

sudo apt install docker-desktop-4.30.0-amd64.deb

sudo apt install ./docker-desktop-4.30.0-amd64.deb

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo apt install docker-compose-v2

sudo apt install docker-compose-v2

sudo apt install docker-compose-v2

sudo apt install docker-compose-v2

sudo apt install docker

sudo apt install docker-ce

I think I also tried via pip at one point.
I would get python errors, eg "no such param ... on HTTP...", indicating program-library version mismatch - also known as dependency hell.
Instructions (found on the web) are varied, so I'm not sure what's running.
dpkg -l 'docker-compose*' says
un  docker-compose-v2     <none>                      <none>       (no description available)
snap list says
docker                     24.0.5           2915   latest/stable    canonical✓     -

Weird. Codium, meanwhile, had been installed via... flatpak.

start over

I needed to:

`apt install docker-compose-v2` from Ubuntu's own repo (rather than Docker's apt repo),

And:

`snap install codium --classic`
    classic confinement is less secure, but loose enough to be able to connect to /var/run/docker.sock

And: 

`adduser $USER docker` and relogin, as per this

The "and relogin" thing caused a lot of confusion
    Until the computer crashes every few days..? What the hell.
    There's this *nix concept of your shell|session|environment containing a bunch of variables that hang around
    And your group membership state there can go stale, so you need to restart reality.

Now how to "have" the docker-compose cluster in codium...

Is via its right-click menu in the file explorer:


I don't get the impression it can present me with the errors coming from my development when multiple containers are involved.
Whereas I'd hoped it would pick out the source file locations mentioned in error messages.

I assumed we had made more developer experience progress than this by now.


weeks later

the py container never starts with the others...
So I must `docker compose up`, which provides errors.
    I leave a tiny slat of this window visible beneath the code editor, as having it in the code editor's console eats too much screen.

The many-little-slats approach to visual structure...


Now back to auto-git the user's world for them...





Sunday 9 June 2024

bitzliten: sound looper with Svelte 5

Following on from our last struggle to build nice features, which was more about reactivity and refactoring.

Let us build:

Noisescape Visual

I look around for the tech. This has nice high-level features but is super slow:

Clementine does something that shows episodes of different sonic qualities quite well:


Which someone has made available on its own here

Which I hosted via a python server, then merged that with the other python server using Flask's Blueprint.

Involving lots of temporary files, and this string of pixels from moodbar:

# Create image from pixel values (moodbar emits a string of RGB byte triples)
from PIL import Image
width = len(pixel_values) // 3
height = 1
image = Image.frombytes('RGB', (width, height), pixel_values)

Then the frontendistry:

        <soundbox ...>
            {#if cuelet.moodbar}
                <moodbar class="liner"
                    style="background-image:{`url(${cuelet.moodbar})`}"
                />
                <moodbar class="liner mask"/>
            {/if}

With styles:

soundbox {
    ...
    position: relative;
}
...
.liner {
    width: 100%;
    height: 100%;
    position: absolute;
    display: block;
}
moodbar {
    background-size: contain;
    filter: blur(3.14159px);
}
moodbar.mask {
    background: url(vertical_mid_fade.webp);
    mix-blend-mode: soft-light;
}

Here is vertical_mid_fade.webp, which helps it look more mineral:


Comes out like this:


Loooovely. More of this definitely.
The user may need all these glowy features disabled if it bogs them down...
Anyway!

Refactoring goes awry

I split a Cuelet out of Cueleter, but the move from:

class SyncableCueleter {
    ...
    sync_cuelets(playlets) {
        ...
        cuelet = {in: playlet.in, out: playlet.out}

to:

        cuelet = new Cuelet({
            orch: this,
            in: playlet.in, out: playlet.out,
        })

has left those <soundbox> (the representation of each cuelet) saying "!buffer" and without a moodbar, as if it no longer knows about properties on cuelet now that it isn't simply a hash.

A day later, I hear that:

You can also use $state in class fields (whether public or private):

class Todo {
    done = $state(false);
    text = $state();
    ...

So perhaps Cuelet needs to $state() its buffer|moodbar properties:

class Cuelet {
    public orch:Cueleter
    public in
    public out

    buffer = $state()
    moodbar = $state()
    constructor(opt) {
As is usual in Svelte, the compiler does half of the education:
    We must rename cuelet.ts to cuelet.svelte.ts, to be verbose about where Svelte can apply itself.

And yes. We are back where we were again.

npm update

Between svelte-5.0.0-next.85 and -next.131 they added live code updates! No more full page reload + wasm downloads while tweaking the UI:


Seems more flickery? Would make a nice transition - warm analogue glitch sputtery ray bendings.

This goes away when I resolve:

It says SYNC CUELETS a whole lot, unless we:

async get_moodbar() {
    if (untrack(() => this.moodbar)) return

...and the same for this.buffer
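Why does untrack() quiet it down? Reading reactive state inside an effect subscribes the effect to it, so a routine that writes the same state it checks keeps re-triggering itself. A toy signal system (a sketch only, nothing like Svelte's actual runtime) makes the mechanics visible:

```javascript
// Toy signal/effect system -- sketch of the mechanics only.
let currentEffect = null;

function state(initial) {
  let value = initial;
  const subs = new Set();
  return {
    get() {
      if (currentEffect) subs.add(currentEffect); // reading subscribes
      return value;
    },
    set(v) {
      value = v;
      for (const fn of [...subs]) fn(); // writing re-runs subscribers
    },
  };
}

function effect(fn) {
  const run = () => {
    currentEffect = run;
    try { fn(); } finally { currentEffect = null; }
  };
  run();
}

function untrack(fn) {
  const prev = currentEffect;
  currentEffect = null; // reads in here subscribe to nothing
  try { return fn(); } finally { currentEffect = prev; }
}

// A sync step that writes state it also checks: with untrack the
// "already done?" read doesn't subscribe, so the write can't loop it.
const moodbar = state(null);
let syncs = 0;
effect(() => {
  if (untrack(() => moodbar.get())) return;
  syncs++;
  moodbar.set("pixels");
});
```

Without the untrack, that effect would subscribe to moodbar and re-fire on its own write — hence all the SYNC CUELETS.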

Faster Selection edge sampling

Faster Knob

In KnobTime.svelte:

let {
    value = $bindable(5),
    // 20ms at a time
    step = 0.02,

    ...props
} = $props()

Zone's selections and files

We now create a Fili to load an input file.

They "happen" in a <File {fil}...

When ready, it spawns an associated Sele, which continues what sel used to do:

{#each selections as sel (sel.id)}
    <Selection {sel} ... />
{/each}

Where we initialise it into...

let in_time:tracktime = $state(sel.in != null ? sel.in : 30)
let out_time:tracktime = $state(sel.out != null ? sel.out : 36)

So they can be changed by knobs, which reactively leads into:

$effect(() => {
    // inclusively select dublet spaces
    let fel = {
        in: Math.floor(in_time / chunk_length) * chunk_length,
        out: Math.ceil(out_time / chunk_length) * chunk_length,
    }
    if (fel.in != sel.in || fel.out != sel.out) {
        // non-reactively set it here
        sel.set(fel)
        console.log("Selection Woke", sel)
        // then cause a reaction
        // < only needed when adjusting sel.out, wtf?
        on_reselection()
    }
})

This could probably be two lines, but see the "only needed when adjusting sel.out" comment.
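The rounding in that effect, in isolation: the in-point floors to the chunk grid and the out-point ceils, so the selection always covers whole dublets. A sketch (the 6s chunk length is just an example value):

```javascript
// Round a selection outward to whole chunk boundaries.
function inclusivise(in_time, out_time, chunk_length) {
  return {
    in: Math.floor(in_time / chunk_length) * chunk_length,
    out: Math.ceil(out_time / chunk_length) * chunk_length,
  };
}
```

A 31.2s→36.1s selection over 6s chunks becomes 30s→42s, covering two whole dublets.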

I will hopefully find time to minimise a bunch of this confusion, I'll just pass it along for now.

eg Sele's in and out don't need to be $state, but playlets does:

export class Sele {
    public id
    public fil:Fili

    public in
    public out
    public playlets:adublet[] = $state([])
    public modes

Anyway, when compiling this selection to playlets back up in Zone:

// generate a bunch of tiles for your ears to walk on
function make_playlets(sel:Sele):adublet[] {
    // how to encode (modes)
    sel.modes = clone_modes()
    // and attaches the Fili's identity
    if (!sel.fil.dig) debugger
    set_modes_value(sel.modes, 'input', sel.fil.name + "#" + sel.fil.dig)

We identify the fil in sel.modes as if it was another option to ffmpeg.

We shall move to something like this soon, when we think about syncing files to the worker and making that worker further away (via http)

sel then branches into nublets:

sel_to_modes(nublet,nublet.modes)
// this now describes a unique dublet
nublet.modes_json = JSON.stringify(nublet.modes)

So we can find things in the cache:

let ideal = dublets.find(
    dublet => dublet.modes_json == nublet.modes_json
)

There is a vague level of matching too:

let vague = dublets.find(
    dublet => dublet.in == nublet.in && dublet.out == nublet.out
        && dublet.sel.fil == nublet.sel.fil
)

Which would have it play whatever it has for that file+time, eg if you change the desired bitrate

This seems a little extraneous. Could we just play the input file? Read on...  
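The two-tier lookup, sketched as one function (the shapes of `dublet` and `nublet` here are simplified stand-ins, and `find_dublet` is a hypothetical name):

```javascript
// Prefer an exact encoding match; fall back to same file + timespan.
function find_dublet(dublets, nublet) {
  const ideal = dublets.find(d => d.modes_json == nublet.modes_json);
  if (ideal) return { ideal_dub: ideal };
  const vague = dublets.find(d =>
    d.in == nublet.in && d.out == nublet.out
      && d.sel.fil == nublet.sel.fil);
  return vague ? { vague_dub: vague } : {};
}
```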

jammed in the Cuelet

I drop a file in and the cuelets fail to render the new audio!

If I widen the loop to get it to render a never-before-cached cuelet time, the new audio is used:


After headscratching, I put in this delete when syncing a new objectURL:
 
class Cuelet {
    ...
    sync_cuelet(playlet:adublet) {
        ...
        // find playable
        let dublet = playlet.ideal_dub || playlet.vague_dub
        if (!dublet) return
        if (this.objectURL != dublet.objectURL) {
            delete this.buffer

Since of course:

async decodeAudio() {
    if (untrack(() => this.buffer)) return

We are now at f2e87c03bbba1: delete buffer when syncing a new objectURL

Horizontal Knob

In Knob:

let {
    ...
    axis = "Y",
    ...
} = $props()

And at some point:

function get_movement(event:PointerEvent) {
    let key = "movement" + axis
    if (event[key] == null) throw "no such axis: " + axis
    let movement = event[key]
    // towards the top of the screen decreases Y
    if (axis == "Y") movement *= -1
    return movement
}
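In numbers: `movementY` grows as the pointer moves down-screen, so the Y axis flips to make upward drags increase the value. A standalone restating of the function above, with `axis` passed as a parameter:

```javascript
function get_movement(event, axis) {
  const key = "movement" + axis;
  if (event[key] == null) throw "no such axis: " + axis;
  let movement = event[key];
  // screen Y grows downward; flip so upward drags increase the value
  if (axis == "Y") movement *= -1;
  return movement;
}
```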

Then in KnobTime

<Knob
    ...
    axis="X"

    bind:grabbed={grabbed}
    ...
/>

{#if grabbed}
    <aro></aro>
{/if}

Also if you look at Selection,

    the knobs are snippeted to Schaud for positioning on the ends of the cuelets

    and then positioned a little more to mark their exact values on the cuelets timeline!

<Schaud {needle_uplink} {sel} {on_reselection}>
    {#snippet leftend(cueletsin:tracktime, width_per_s)}
        ...
        <span>
            <grit class="openbracket"
                style="left:{-locator_grit(cueletsin,selin,width_per_s)}px">
                <KnobTime
                    bind:value={selin}
                    {commit} >
                    {#snippet label()}
                        in
                    {/snippet}
                </KnobTime>
            </grit>
        </span>
    {/snippet}

Using:

function locator_grit(fromtime, totime, width_per_s) {
    let delta = fromtime - totime
    return delta * width_per_s
}
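In numbers: if the cuelets start at 30s, the in-point sits at 31.5s, and a second spans 40px, then `locator_grit` yields `(30 - 31.5) * 40 = -60`, which the template negates into `left: 60px` — 60px into the timeline. The same function, just exercised:

```javascript
function locator_grit(fromtime, totime, width_per_s) {
  let delta = fromtime - totime;
  return delta * width_per_s;
}
```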

This looks like this:


There is the issue of your pointer not appearing where the knob ends up.

Perhaps we should slide cuelets along and leave the in-point still, like the slip tool in video editing? 

Then the...same for out-point? 

Perhaps if all the ui was squishy slotty constellations of stuff, wide angle lensing on the focus... 

The set of beads (cuelets) increasing in frequency as further bits are hauled in from the shoulder, your hand swooshing the sound sideways.

Speedy Knob

I get back to accommodating hi res time as in|out.

The inclusively select dublet spaces logic moved in here (from before):

export class Sele {
    ...
    // receives fast and fine time adjustments
    on_adjust(finely) {
        this.in = finely.in
        this.out = finely.out
        // inclusively select dublet spaces
        let fel = this.inclusivise()
        if (fel.in != this.in_inclusive || fel.out != this.out_inclusive) {
            this.in_inclusive = fel.in
            this.out_inclusive = fel.out
            console.log("Selection Woke", fel)
            // then cause a reaction
            this.enc.on_reselection()
        }
    }
    ...
    get_timespace() {
        // we have a notch
        let length = this.out_inclusive - this.in_inclusive
        if (isNaN(length)) throw "NaN"
        let chunk_length = this.enc.chunk_length
        let n_chunks = Math.ceil(length / chunk_length)
        return {length, n_chunks, chunk_length}
    }

Before, get_timespace() used this.in - which should have probably caused the joblet runner to react, but for some reason didn't?

The Cuelet's localise_time() also needed to switch to this.in_inclusive - which stops many spurious calls to sync_cuelet() while we are cranking knobs; they now happen only when this.in_inclusive changes which playlets exist, or the playlets themselves mutate.

All this connection emerges when Selection.svelte does:

$effect(() => {
    sel.on_adjust({in: in_time, out: out_time})
})

They transmit to Schaud.svelte

// respond to editing
let ori = modus[1] = new ModusOriginale()
$effect(() => {
    sel?.in != null && ready && ori.edge_moved({which: 'in'})
})
$effect(() => {
    sel?.out != null && ready && ori.edge_moved({which: 'out'})
})

And then ModusOriginale

edge_moved = ({which}) => {
    if (this.edge_moved_recently()) return
    ...
    // this one time we play this sound
    this.zip = new Ziplet({orch: this.orch, mo: this, fil: this.fil})
    Object.assign(this.zip, {playFrom, playFor})
    this.zip.start()
    let is = this.zip

    let declack = 0.01
    // replaces the old Zip
    was && fadeout(was, declack)
    fadein(is, declack)
    // mutes the other Modus
    let others = this.orch.modus.filter(mo => mo != this)
    others.map(o => fadeout(o, declack))
    // then after a while
    let thence = playFor - declack
    setTimeout(() => {
        if (is != this.zip) return
        fadeout(is, declack)
        others.map(o => fadein(o, declack))
        setTimeout(() => {
            delete this.zip
        }, declack * 1000)
    }, thence * 1000)
}
// debounce eventing for in and out when sel is moved
public last_edge_moved_ts:unixtime
edge_moved_recently() {
    let was = this.last_edge_moved_ts
    this.last_edge_moved_ts = now()
    let delta = this.last_edge_moved_ts - was
    if (delta < 0.006) return 1
}
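edge_moved_recently() is a debounce keeping its state on the instance; factored out on its own, with timestamps passed in for clarity (a sketch, times in seconds):

```javascript
// Report true when the previous call was under min_delta seconds ago.
function make_debouncer(min_delta = 0.006) {
  let last;
  return now => {
    const was = last;
    last = now;
    // first call: was is undefined, NaN < min_delta is false -> not recent
    return now - was < min_delta;
  };
}
```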

Most of that's pretty ugly, for lack of an abstraction for pushing a new soundtrack into the moment, aesthetically.

There are details in reality, like this bit of class Ziplet as it now stands:

// selection trims
get duration():number {
    if (this.playFor) {
        // this is the duration when trimming the end
        // it already knows about any playFrom
        return this.playFor
    }
    let dur = this.whole_duration
    if (this.playFrom) {
        dur -= this.playFrom
    }
    return dur
}
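Exercising that getter's branches with plain objects — the same logic lifted out of the class as a free function (a sketch):

```javascript
// playFor wins outright; otherwise playFrom trims the front.
function duration({ whole_duration, playFrom, playFor }) {
  if (playFor) return playFor;
  let dur = whole_duration;
  if (playFrom) dur -= playFrom;
  return dur;
}
```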

Yet this didn't need to change:

get ends_at() {
    if (this.startTime == null) throw "!startTime"
    return this.startTime + this.duration
}

And here's a look at precise looping:



Thanks!



Other Projects with media in the browser

https://martinwecke.de/108/

a drum machine with a nice simple interface.

https://github.com/bwasti/mebm

a browser-based video editor that supports animation of images and text overlays

Other Stuff

educational material for teaching and learning Fundamentals of Music Processing (FMP) with a particular focus on the audio domain. Covering well-established topics in Music Information Retrieval (MIR) as motivating application scenarios, the FMP notebooks provide detailed textbook-like explanations of central techniques and algorithms in combination with Python code examples that illustrate how to implement the theory.

Similar Music Finder - gemtracks.com

For library expansion.

 "The society that separates its scholars from its warriors will have its thinking done by cowards and its fighting done by fools."