Sunday, 12 January 2025

svelte HMR

https://github.com/sveltejs/svelte-hmr/tree/master/packages/svelte-hmr#preservation-of-local-state says:

On the contrary, if you keep the default preserveLocalState to false, you can enable preservation of all the local state of a given component by adding the following comment: // @hmr:keep-all. You can also preserve only the state of some specific variables, by annotating them with: // @hmr:keep.
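So, per that README, something like this should do it (a sketch - the annotations are just comments inside the component's script):

<script>
  // preserve every local variable in this component across hot updates:
  // @hmr:keep-all

  // ...or keep just one specific variable:
  let count = 0 // @hmr:keep
</script>

(Whether svelte-hmr's annotations still apply under Svelte 5's runes, I haven't verified.)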

How to decide which systems to reboot when code changes? Development gets waitsy when everything has to clunk through a full reload.

If I change code in audio.svelte.ts, the reload happens to Call.svelte, which includes it via Peering via Party.


The later "hot updated" messages are for the nearest Component, though nothing seems to mention the test.js location of the change...

That first one, though, we have hooked into before with the following code, as Vite suggests, but not all of it seems to be there:

// < HMR can sort-of be listened for
$effect(() => {
    if (import.meta.hot) {
        import.meta.hot.accept((mod) => {
            let of = {
                // always rather opaque
                mod,
                // always Call.svelte?t=...
                url: import.meta.url,
                // empty; guesses from AI at what it would expect to be here?
                hot: import.meta.hot,
                id: import.meta.hot.id,
            }
            console.log("Codechange!", of)
        })
        // if you write these as import.meta.hot.accept(...)
        // it compiles in a ['default'] as a first argument
        // and an error about our strings being a callback occurs elsewhere,
        // so svelte must be obscuring vite's advertised feature of thus:
        let imh = import.meta.hot
        // none of these work?
        imh.accept('/src/lib/kolektiva/Party.svelte', (newFoo) => {
            console.log("Party Codechange!", { url: import.meta.url, newFoo })
        })
        imh.accept('kolektiva/Party.svelte', (newFoo) => {
            console.log("Party short Codechange!", { url: import.meta.url, newFoo })
        })
        imh.accept('./kolektiva/Party.svelte', (newFoo) => {
            console.log("Party ./short Codechange!", { url: import.meta.url, newFoo })
        })
    }
    else {
        throw "never here"
    }
})

Probably because Svelte sits between us and Vite, doing something I can't find info about - the Discord wants me to click squirrels.

And so,

Just separate what should reload into a component! Yay.
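Something like this, I mean (hypothetical names - the point is that whatever imports audio.svelte.ts becomes a small leaf component, so HMR only reloads that instead of all of Call.svelte):

<!-- AudioBit.svelte - hypothetical thin wrapper; edits to audio.svelte.ts
     only knock over this little component instead of the whole Call -->
<script lang="ts">
    import { start_audio } from './audio.svelte.ts'   // hypothetical export
    let { party } = $props()
    let audio = start_audio(party)
</script>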

I guess this means my UI, which is a list of participants (Y axis) and their features (X axis), may want to be a multi-table join sort of thing.

Which means I'm writing a table class with synthesized absolute positioning! It also seems like the Basket grouper|compressor I built before...

Monday, 6 January 2025

On Trans

Trans people are great and you totally want to be in a world that is safe for them. That's my main angle. And I must reiterate that they are awesome, uncompromising and so forth. It's a messy universe and people have feelings. Lots of people get thrown across the gender divide for various reasons, eg Bob from Fight Club had some med shit and somehow a rather gender-definitive body panel of his popped up... The less uptight cultures have had them for ages, eg that ladyman in Little Big Man. Heaps of spectrums of stuff going on.

Believe it or not, a woman is something people want to be. Etc.
It's just weird that teens are ideally supposed to deeply figure out who they are before their natural hormones drag them one way or the other from the undifferentiated pudge that kids are made of. I think that's a source of tension. We don't do great with most people's development so it's weird having such an early deadline emerge.

I think most of its exposure is due to being interesting to write about. I mean would you look at all this...

Most of what people buy into is trash, is being trans the same? Probably not.
Another thing, men think like dicks and women think like vaginas. So there's a clue. Here's a fairly female sentence: Let us not scrutinise everyone to smithereens tho, for we are humans.

Sunday, 5 January 2025

linux high resolution audio

So, you can use all 24 bits and 96kHz of your soundcard, if you want:

 vi ~/.config/pipewire/pipewire.conf.d/99-sample-rate.conf

And specify:

context.properties = {
    default.clock.rate = 96000
    default.clock.allowed-rates = [ 44100 48000 88200 96000 ]
    default.clock.quantum = 1024
    default.clock.min-quantum = 32
    default.clock.max-quantum = 8192
}

Then:

pactl info

Should say:

Default Sample Specification: float32le 2ch 96000Hz

And yay, it sounds great, just look at all this music: 
There must be some upscaling going on somewhere...
I've seen easyeffects using 60% cpu, but restarting sound calmed it right down. On ubuntu 24:

systemctl --user restart pipewire pipewire-pulse

It seems like 16bit samples could become 24bit just by adding zeroes at the little end, and 48kHz can simply repeat each sample... What do we do when it's 44100 Hz? Hmmm

Claude AI

Let me help clarify how audio sample rate and bit depth conversion actually works:

  1. Bit Depth Conversion (16-bit to 24/32-bit):
  • It's not just adding zeros at the end
  • The conversion involves scaling the values to maintain the same relative amplitude
  • 16-bit values (-32,768 to +32,767) are scaled to fit the 24-bit range (-8,388,608 to +8,388,607)
  • This preserves the original signal's dynamic range while providing headroom for processing
  2. Sample Rate Conversion (44.1kHz to 96kHz):
  • It doesn't simply repeat samples - that would create artifacts
  • Instead, it uses interpolation algorithms to calculate new samples between the existing ones
  • Common methods include:
    • Linear interpolation (basic, can cause artifacts)
    • Sinc interpolation (more accurate but CPU intensive)
    • Polynomial interpolation (good balance of quality and performance)

The tricky part with 44.1kHz to 96kHz conversion is that it's not a simple multiple (like 48kHz to 96kHz). The resampler needs to generate roughly 2.18 new samples for each original sample (96000/44100 ≈ 2.18). This requires more complex mathematics to maintain audio quality and prevent artifacts.
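A rough TypeScript sketch of both of those (mine, not Claude's - a real resampler uses proper sinc filtering, this is just the shape of it):

// widen 16-bit samples to 24-bit by scaling, i.e. multiplying by 256
// (which is the same as padding eight zero bits at the little end)
function widen16to24(s16: number): number {
    // -32768..32767 maps onto -8388608..8388352, inside the 24-bit range
    return s16 * 256
}

// naive linear-interpolation resample, e.g. 44100 -> 96000
function resampleLinear(input: Float32Array, fromRate: number, toRate: number): Float32Array {
    const ratio = fromRate / toRate               // ~0.459 for 44100 -> 96000
    const outLen = Math.floor(input.length / ratio)
    const out = new Float32Array(outLen)
    for (let i = 0; i < outLen; i++) {
        const pos = i * ratio                     // fractional position in the input
        const i0 = Math.floor(pos)
        const i1 = Math.min(i0 + 1, input.length - 1)
        const frac = pos - i0
        out[i] = input[i0] * (1 - frac) + input[i1] * frac
    }
    return out
}

So "adding zeroes at the little end" and "scale by 256" are actually the same move; the resampling is where it gets real - linear interpolation is the "basic, can cause artifacts" option above.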

What's interesting though is that 44.1kHz was chosen as the CD standard because it's just over twice the human hearing range (roughly 20kHz), allowing for the Nyquist frequency to capture all audible frequencies. The 96kHz sampling rate is just over twice 44.1kHz, giving us another doubling that provides even more headroom for audio processing and ultrasonic frequencies.

Let me clarify these audio fundamentals:

Sample Rate determines the highest frequency that can be captured/reproduced (Nyquist frequency = sample rate/2). So:

  • 44.1kHz can theoretically capture up to 22.05kHz
  • 96kHz can theoretically capture up to 48kHz

Ultrasonic frequencies (>20kHz):

  1. Are often filtered out, yes, because:
    • Most people can't hear them
    • They can cause intermodulation distortion
    • They can waste amplifier power
  2. Arguments for keeping them:
    • Some argue they affect the audible frequencies' timing/phase relationships
    • Could affect perception of transients (very fast changes in sound)
    • May preserve more natural decay characteristics of instruments
    • Some claim they have subtle psychoacoustic effects

Bit depth is actually about dynamic range and noise floor, not wave location precision:

  • Each bit represents 6dB of dynamic range
  • 16-bit = ~96dB range
  • 24-bit = ~144dB range
  • Allows for quieter sounds to be captured without noise
  • Gives more headroom for audio processing
Sound-space is logarithmic:

  1. We perceive volume logarithmically, not linearly:

  • Each doubling of sound pressure (6dB) feels like a similar step in loudness
  • That's why each bit in digital audio represents 6dB
  • Going from 1 to 2 bits feels like the same increase as 15 to 16 bits
  2. So the bits aren't about "resolution" in the visual sense:
  • They're not like pixels that divide the wave into finer slices
  • Instead, they give us more "floors" in the dynamic range "building"
  • Each additional bit doubles the number of possible volume levels
  • But these levels are spaced logarithmically to match how we hear
  3. Sample rate, on the other hand, IS more like visual resolution:
  • Higher sample rates let us capture higher frequencies
  • It's about how finely we can slice time
  • This directly affects the highest frequency we can represent

It's a common misconception about bit depth because we tend to think in visual/linear terms, but our ears work differently from our eyes! Audio processing has to account for this logarithmic nature of human hearing.
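Sanity-checking that 6dB-per-bit rule (my arithmetic, not Claude's): dynamic range is about 20 * log10(2^bits), so each bit is worth ~6.02 dB:

// 20 * log10(2^16) ≈ 96.3 dB, 20 * log10(2^24) ≈ 144.5 dB
const dynamicRangeDb = (bits: number) => 20 * Math.log10(2 ** bits)
console.log(dynamicRangeDb(16).toFixed(1), dynamicRangeDb(24).toFixed(1))  // "96.3" "144.5"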

Conclusion

It doesn't use much CPU to do this, might as well switch it on.

Especially for capturing high quality audio you intend to work on, you want all that sound-space to avoid squishing your delicious waves before you can get your compressor etc on it.

If you want to burn CPU and|or decrease latency, pipewire apparently calls that option the quantum.
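Roughly, latency per processing cycle = quantum / sample rate, so the config above gives 1024 / 96000 ≈ 10.7 ms per cycle, while a quantum of 256 would be about 2.7 ms at the cost of more CPU wakeups. I believe you can force it on the fly with something like:

pw-metadata -n settings 0 clock.force-quantum 256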

Have fun!


Thursday, 2 January 2025

recipe for making PCM-15, a phase change material with a melting point of approximately 18°C

Just jotting this down for later. This is AI writing up this rad video:

https://youtu.be/Nqxjfp4Gi0k




The recipe for making PCM-15, a phase change material with a melting point of approximately 18°C (65°F), is as follows:

  1. Ingredients:
    • 5 cups (1.2 L) of water
    • 1 cup (240 mL) of sodium sulfate
    • 1/4 cup (60 mL) of table salt (sodium chloride)
    • 4 teaspoons (20 mL) of xanthan gum (optional, for thickening)
  2. Instructions:
    • Add 5 cups of water to a pot and heat it on the stove
    • Add 1 cup of sodium sulfate and 1/4 cup of table salt to the water
    • Stir the mixture until it begins to slightly boil
    • Continue stirring until the salts are mostly dissolved. It's okay if some remain undissolved
    • If desired, add 4 teaspoons of xanthan gum and simmer for a few more minutes to thicken the mixture.
    • For a more gel-like consistency, continue boiling for about 30 minutes

This PCM mixture will have a melting point of about 18°C (65°F) and can provide significant cooling effects. The recipe does not require precise measurements, making it easy to prepare with basic kitchen tools.

Tuesday, 31 December 2024

docker less Caddy

A
Spontaneous
Breakage

Here's the duckdns token... All you need to get HTTPS? 



The thing spat out:
expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=....duckdns.org&token=&txt=dV...kc&verbose=true]


The token went missing somewhere between that env var and whatever the plugin is doing with it...

As a separate curl request with my token put into it, it works fine. With token='', as the above URL shows, the server indeed responds "KO".
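That manual check was basically the same update URL with the token filled back in, something like (token elided, obviously):

curl "https://www.duckdns.org/update?domains=voulais&token=...&txt=test&verbose=true"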

In other log noise, allowing larger UDP receive buffers can be good for performance. There's probably a whole series of labs based around tweaking settings like this:
sysctl -w net.core.rmem_max=7500000

And reminder to tweak this:
Via (between) these commands:
Supposing you copied in your .ssh/authori... already

Now... I'm trying to

replicate

this problem on a debian system, shall we?
Installing docker via https://docs.docker.com/compose/install/
Was that the right way?
Nothing happens while doing the docker compose up
...until I docker compose down and try again

After a while, ping google.com becomes "Destination Host Unreachable"
Ie, internet gone, but I can be ssh'd into that computer from the LAN.
At that moment the journal says:
Dec 29 13:40:40 v connmand[704]: vethcf1d5d6 {newlink} index 32 address 6E:0B:C5:02:38:35 mtu 1500
Dec 29 13:40:40 v connmand[704]: vethcf1d5d6 {newlink} index 32 operstate 6 <UP>
Dec 29 13:41:05 v avahi-daemon[625]: Joining mDNS multicast group on interface vethcf1d5d6.IPv4 with address 169.254.216.172.
Dec 29 13:41:05 v avahi-daemon[625]: New relevant interface vethcf1d5d6.IPv4 for mDNS.
Dec 29 13:41:05 v avahi-daemon[625]: Registering new address record for 169.254.216.172 on vethcf1d5d6.IPv4.
Dec 29 13:41:05 v connmand[704]: vethcf1d5d6 {add} address 169.254.216.172/16 label vethcf1d5d6 family 2
Dec 29 13:41:05 v connmand[704]: vethcf1d5d6 {add} route 169.254.0.0 gw 0.0.0.0 scope 253 <LINK>
Dec 29 13:41:05 v connmand[704]: vethcf1d5d6 {add} route 0.0.0.0 gw 0.0.0.0 scope 253 <LINK>
Dec 29 13:41:05 v connmand[704]: wlxc4731ec7aa65 {del} route 0.0.0.0 gw 192.168.1.1 scope 0 <UNIVERSE>
Dec 29 13:41:05 v connmand[704]: vethcf1d5d6 {add} route 0.0.0.0 gw 0.0.0.0 scope 253 <LINK>
That last one looks dodgy, and probably wouldn't affect ssh...
I watch -n0.3 sudo route add default gw 192.168.1.1
This was going to do the trick but apparently doesn't anymore.

None of my docker compose projects get going - and the laptop seems to reliably turn off suddenly at some point...

Lets get off that laptop...

Hmm... I change the domain, requiring:
docker compose down --volumes
docker compose up --build

It builds and stuff, but the action is lame:
caddy-1     | {"level":"info","ts":1735460097.7095096,"logger":"tls.obtain",
"msg":"obtaining certificate","identifier":"voulais.duckdns.org"}
caddy-1     | {"level":"info","ts":1735460097.7118206,"logger":"tls.issuance.acme",
"msg":"using ACME account","account_id":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/177978324","account_contact":[]}
caddy-1     | {"level":"info","ts":1735460098.6765864,"logger":"tls.issuance.acme.acme_client",
"msg":"trying to solve challenge","identifier":"voulais.duckdns.org","challenge_type":"dns-01","ca":"https://acme-staging-v02.api.letsencrypt.org/directory"}
caddy-1     | {"level":"error","ts":1735460099.6868665,"logger":"tls.issuance.acme.acme_client",
"msg":"cleaning up solver","identifier":"voulais.duckdns.org","challenge_type":"dns-01","error":"no memory of presenting a DNS record for \"_acme-challenge.voulais.duckdns.org\" (usually OK if presenting also failed)"}
caddy-1     | {"level":"error","ts":1735460099.8798797,"logger":"tls.obtain",
"msg":"could not get certificate from issuer","identifier":"voulais.duckdns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"[voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=wxmfH7orpNQRdOScCZPSObetw8bavTfbTmfe_Y40r1g&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/177978324/21646456774) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)"}
caddy-1     | {"level":"error","ts":1735460099.8799827,"logger":"tls.obtain",
"msg":"will retry","error":"[voulais.duckdns.org] Obtain: [voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=wxmfH7orpNQRdOScCZPSObetw8bavTfbTmfe_Y40r1g&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/177978324/21646456774) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":10,"retrying_in":1200,"elapsed":4822.382168292,"max_duration":2592000}
So I believe letsencrypt is challenging us to host this text record, which we're failing to do.
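A quick way to watch whether that TXT record ever actually lands while Caddy retries, assuming dig is handy:

dig +short TXT _acme-challenge.voulais.duckdns.org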

Also, failing to reboot the fedora vm I'm trying to try out.
This turns out to be nothing, but is on the theme of computer gore.
wget s -O- > .ssh/authorized_keys
sudo service sshd start
Allows me to remote into this fedora vm!
Your hostname in the terminal prompt comes from $HOSTNAME, which comes from the ssh session! So it'll be whatever you named that ip in eg /etc/hosts.

Then I had this problem:
cos-jamo-1  | npm error Error: Could not read package.json: Error: EACCES: permission denied, open '/app/package.json'
Which is one of my first Fedora-isms!
Without volumes, Docker is much more predictable since everything stays contained.
Volumes are essential for getting my code changes loaded fast - if we put these in the container it would need rebuilding on every change.
Another option is to continuously lsyncd them into the container, which will use|generate inotify on both sides so it seems instant, and you don't end up having to stare intensely at the spot something might appear, like a caveman.
Another option is to sshfs but it won't generate inotify, it's basically just ftp over ssh into a fuse mount.
So anyway, just add this ,z:

cos-jamo:
  ...
  volumes:
    - .:/app:exec,z

The z option specifically tells SELinux to relabel the mounted content so containers can share it. Even though ll looks the same (the traditional Unix permissions haven't changed), SELinux has modified the security context behind the scenes. You can see these labels with:

ls -Z

So anyway, you can see all your listening addresses with sudo netstat -plant
0.0.0.0 means it is listening for every address on the machine.

Did I say Fedora? Now we're breaking in Debian 13 (Trixie) because EVERYTHING IS BROKEN!
It's a bad time when lots of things fall over at once.
This Christmas-NY period is totally haunted, do not not be with your people, on holiday, at this point.

Aaaand... Randomly, trying to up this letz project. Which isn't an ideal test subject as it contains a ton of python to build. The remote host randomly turns off!
It may just be this laptop I guess. I had to use one of its cpu heatsink screws to secure the cdrom drive, but I should probably just hot glue it.

Anyway, top 5 squirrels of 2024 comes out tomorrow!

So, lots of computers now, all encrypted, yet they can be unlocked remotely as per this wonderfully written guide: https://www.cyberciti.biz/security/how-to-unlock-luks-using-dropbear-ssh-keys-remotely-in-linux/
 but alas, I am on wireless so I don't bother.

RIGHT
After some googling and so on, I wrote:

A Bug Report


I was using this bit of Caddyfile, as seen via docker exec in the container:
dns duckdns {f6e-aaa-bbb-ccc-b86}


As implied by this part of the README:

dns duckdns {env.DUCKDNS_API_TOKEN}

Which I guess is a linguistic red herring - stuff in {} gets interpolated to api_token => value before we ever reach UnmarshalCaddyfile(d *caddyfile.Dispenser)..? Speculation.
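(For reference, my reading of the Caddyfile docs: {$VAR} gets substituted from the environment when the Caddyfile is parsed, while {env.VAR} is a runtime placeholder the module resolves later - either form should work here:

dns duckdns {$DUCKDNS_API_TOKEN}
dns duckdns {env.DUCKDNS_API_TOKEN}

My guess at the failure: a literal token wrapped in braces, like the one above, parses as a single unknown placeholder, so APIToken isn't empty at config-load time, and then at runtime the placeholder replacer resolves it to nothing - hence token= in the URL. Just a guess though.)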


Anyway. That doesn't trip this:

	if p.Provider.APIToken == "" {
		return d.Err("missing API token")
	}


and goes on to fail, the token parameter is casually empty:
caddy-1 | {"level":"error","ts":1735636941.773746,"logger":"tls.obtain","msg":"will retry","error":"[voulais.duckdns.org] Obtain: [voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=yWJ3zVVwwIRPxw14J3f2riEuFD805UOkC4OIFCwJcno&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/178240924/21688731104) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":5,"retrying_in":600,"elapsed":610.848246992,"max_duration":2592000}


And that's pretty much it. No idea why. Debugger time?


Other syntax variations do cause errors, eg with spaces or on a new line:

dns duckdns { $DUCKDNS_API_TOKEN }
dns duckdns {
    $DUCKDNS_API_TOKEN
}

Maybe it's on libdns/duckdns to double-check api_token != '' as it goes along.
Seems weird.


Thanks!


PS I of course made it more confusing by having a docker-compose.yml that did:

    volumes:
      - caddy_data:/data
      - caddy_config:/config


that was retaining an old config that worked, from before I made everything look neat with those extra curly braces, which I just didn't need. This stuck-state fell over a few days ago, somehow, as per chaos. For those playing along at home, you need to:

docker compose down --volumes
docker compose up --build


I've been rate limited now, it says "too many certificates (5) already issued" which is probably how many times I did the above.


Another random detail: I'm always "waiting on internal rate limiter" for 0.00005 seconds, which takes two log lines or 1/5th of all the log lines per tls.obtain.


And thanks again, it was super nice having HTTPS just go, as it did initially, and duck another little bill and personal info leak. Thanks.


My project is here: https://github.com/stylehouse/jamola/blob/main/docker-compose.yaml


Someone else in the same ditch who got me out: https://caddy.community/t/dns-challenge-with-duckdns/14994


And so, I wander back to development.

No idea why the other instances I tried to set up just didn't wanna.

I'll have to put that screw back in...

Further

I ended up moving the front end to a cloud host!

See here for the details of how to do that:

Caddy gave no indication of problems, yet wouldn't update duckdns, so I had to call it myself:
curl "https://www.duckdns.org/update?domains=voulais&token=...&ip=170.64.141.221"

And now it's GOOOOOOOOOOOOO




Serenely.

Yes. Mid refactoring I... Have lunch.
It's beautiful out there.
I come back, hit the space bar, and torrid techno bumps along in one lonely speaker.
I swap the nearest RCA plugs, and the result says it's likely software.
Most of the state can be demolished and reset with:

systemctl --user restart pipewire pipewire-pulse

But this breaks my browsers' ability to find the audioDevice.
Even pavucontrol can't see any Input Devices except the Monitors now.
So I reboot.
Then production is down...
Why did the cos webserver not come up automatically when I rebooted?

s@s:~/src/jamola$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d4e4d18f3cd jamola-caddy "caddy run --config …" 41 hours ago Up About an hour 80/tcp, 2019/tcp, 443/udp, 0.0.0.0:9443->443/tcp, [::]:9443->443/tcp jamola-caddy-1
2fa20892e414 jamola-router-config "docker-entrypoint.s…" 41 hours ago Up About an hour jamola-router-config-1
a6be264aa4c5 letz-cos-bitz "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:9000->3000/tcp letz-cos-bitz-1
3b63c9938f2c letz-pl "./serve.pl" 7 weeks ago Up About an hour 127.0.0.1:1812->1812/tcp letz-pl-1
b34a27a9db9f letz-py2 "bash -c 'python py/…" 7 weeks ago Up About an hour 127.0.0.1:8000->8000/tcp letz-py2-1
e210a81ca6f5 letz-cos "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:3000->3000/tcp, 127.0.0.1:9229->9229/tcp letz-cos-1
s@s:~/src/jamola$ docker compose up -d
WARN[0000] The "ROUTER_URL" variable is not set. Defaulting to a blank string.
WARN[0000] The "ROUTER_USERNAME" variable is not set. Defaulting to a blank string.
WARN[0000] The "ROUTER_PASSWORD" variable is not set. Defaulting to a blank string.
[+] Running 3/3
✔ Container jamola-router-config-1 Running 0.0s
✔ Container jamola-caddy-1 Running 0.0s
✔ Container jamola-cos-jamo-1 Started 0.7s
s@s:~/src/jamola$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3370488e75c jamola-cos-jamo "/usr/local/bin/dock…" 4 seconds ago Up 3 seconds 127.0.0.1:9090->3000/tcp jamola-cos-jamo-1
1d4e4d18f3cd jamola-caddy "caddy run --config …" 41 hours ago Up About an hour 80/tcp, 2019/tcp, 443/udp, 0.0.0.0:9443->443/tcp, [::]:9443->443/tcp jamola-caddy-1
2fa20892e414 jamola-router-config "docker-entrypoint.s…" 41 hours ago Up About an hour jamola-router-config-1
a6be264aa4c5 letz-cos-bitz "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:9000->3000/tcp letz-cos-bitz-1
3b63c9938f2c letz-pl "./serve.pl" 7 weeks ago Up About an hour 127.0.0.1:1812->1812/tcp letz-pl-1
b34a27a9db9f letz-py2 "bash -c 'python py/…" 7 weeks ago Up About an hour 127.0.0.1:8000->8000/tcp letz-py2-1
e210a81ca6f5 letz-cos "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:3000->3000/tcp, 127.0.0.1:9229->9229/tcp letz-cos-1

Also, the autossh connection should keep trying forever, every 30s; currently it gives up shortly after the first traffic from caddy and a failed attempt to connect to localhost:9090:

s@s:~$ sudo systemctl status jamola-frontend-reverse-tunnel.service
jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-01-08 17:34:12 NZDT; 1min 45s ago
Main PID: 9714 (autossh)
Tasks: 2 (limit: 18938)
Memory: 1.6M (peak: 2.1M)
CPU: 169ms
CGroup: /system.slice/jamola-frontend-reverse-tunnel.service
├─9714 /usr/lib/autossh/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3"
└─9717 /usr/bin/ssh -N -R 0.0.0.0:3000:localhost:9090 -p 2023 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" d
Jan 08 17:34:12 s systemd[1]: Started jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy.
Jan 08 17:34:12 s autossh[9714]: port set to 0, monitoring disabled
Jan 08 17:34:12 s autossh[9714]: starting ssh (count 1)
Jan 08 17:34:12 s autossh[9714]: ssh child pid is 9717
s@s:~$ sudo systemctl status jamola-frontend-reverse-tunnel.service
jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-01-08 17:34:12 NZDT; 4min 30s ago
Main PID: 9714 (autossh)
Tasks: 2 (limit: 18938)
Memory: 1.6M (peak: 2.1M)
CPU: 171ms
CGroup: /system.slice/jamola-frontend-reverse-tunnel.service
├─9714 /usr/lib/autossh/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3"
└─9717 /usr/bin/ssh -N -R 0.0.0.0:3000:localhost:9090 -p 2023 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" d
Jan 08 17:34:12 s systemd[1]: Started jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy.
Jan 08 17:34:12 s autossh[9714]: port set to 0, monitoring disabled
Jan 08 17:34:12 s autossh[9714]: starting ssh (count 1)
Jan 08 17:34:12 s autossh[9714]: ssh child pid is 9717
Jan 08 17:36:05 s autossh[9717]: connect_to localhost port 9090: failed.
Jan 08 17:36:13 s autossh[9717]: connect_to localhost port 9090: failed.
Jan 08 17:36:34 s autossh[9717]: connect_to localhost port 9090: failed.

it is defined here:





But it's down again shortly later...

in journalctl it says

Jan 08 19:27:32 s autossh[16868]: port set to 0, monitoring disabled
Jan 08 19:27:32 s autossh[16868]: max start count reached; exiting

but this has no clues:

 s@s:~$ sudo systemctl status jamola-frontend-reverse-tunnel.service 
● jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
     Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
     Active: activating (auto-restart) since Wed 2025-01-08 19:27:32 NZDT; 17s ago
    Process: 16868 ExecStart=/usr/bin/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o ServerAliveInterval=30 >
   Main PID: 16868 (code=exited, status=0/SUCCESS)
        CPU: 5ms

The dot is grey now. This is a very wishy-washy way to present the failure and giving up of this service...

So systemctl seems bad, unless it's just me.
Why is this so hard? Should we just use supervisord? Should we just generate a passwordless key to use to get into ssh-tunnel-destiny on the cloud host from ssh-tunnel-source on the local host?
The latter.
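For the record, a minimal sketch of that unit (assumptions: plain ssh with the passwordless key, the same forward as above, and systemd doing all of the retrying):

[Unit]
Description=Reverse tunnel to cloud proxy
After=network-online.target
Wants=network-online.target

[Service]
# plain ssh, no autossh; die if the remote forward can't bind, then let systemd retry
ExecStart=/usr/bin/ssh -N -R 0.0.0.0:3000:localhost:9090 -p 2023 \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes d
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

ExitOnForwardFailure makes ssh exit when the remote forward can't be set up, so Restart=always actually gets a chance to fix it rather than holding a useless connection open.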

Well, if you rename a container in the compose file before you down it, you'll need to:
docker compose down --remove-orphans

And if you change a config, eg the ssh key(s) we env into place, you must down, then up
# For changing configs (like Caddyfile):
docker compose down
docker compose up -d
# For just changing .env values:
docker compose up -d

The difference is because configs are treated as immutable container resources, while environment variables are part of the runtime configuration.

theproxy/*

to run the public Caddy server elsewhere: