As a separate curl request with my token put into it, it works fine. With token = '', as the above URL indicates, the server indeed responds "KO".
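For the record, the manual check was something along these lines (the token and txt values here are placeholders, not my real ones):
curl "https://www.duckdns.org/update?domains=voulais.duckdns.org&token=MY_REAL_TOKEN&txt=test&verbose=true"
# with the real token DuckDNS answers OK; with token= it answers KO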
And reminder to tweak this:
sysctl -w net.core.rmem_max=7500000
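And to make it survive reboots, something like this (the file name is just my choice):
echo 'net.core.rmem_max=7500000' | sudo tee /etc/sysctl.d/99-quic.conf
sudo sysctl --system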
Now... I'm trying to replicate. Let's get off that laptop...
caddy-1 | {"level":"info","ts":1735460097.7095096,"logger":"tls.obtain",
"msg":"obtaining certificate","identifier":"voulais.duckdns.org"} caddy-1 | {"level":"info","ts":1735460097.7118206,"logger":"tls.issuance.acme",
"msg":"using ACME account","account_id":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/177978324","account_contact":[]} caddy-1 | {"level":"info","ts":1735460098.6765864,"logger":"tls.issuance.acme.acme_client",
"msg":"trying to solve challenge","identifier":"voulais.duckdns.org","challenge_type":"dns-01","ca":"https://acme-staging-v02.api.letsencrypt.org/directory"} caddy-1 | {"level":"error","ts":1735460099.6868665,"logger":"tls.issuance.acme.acme_client",
"msg":"cleaning up solver","identifier":"voulais.duckdns.org","challenge_type":"dns-01","error":"no memory of presenting a DNS record for \"_acme-challenge.voulais.duckdns.org\" (usually OK if presenting also failed)"} caddy-1 | {"level":"error","ts":1735460099.8798797,"logger":"tls.obtain",
"msg":"could not get certificate from issuer","identifier":"voulais.duckdns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"[voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=wxmfH7orpNQRdOScCZPSObetw8bavTfbTmfe_Y40r1g&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/177978324/21646456774) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)"} caddy-1 | {"level":"error","ts":1735460099.8799827,"logger":"tls.obtain",
"msg":"will retry","error":"[voulais.duckdns.org] Obtain: [voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=wxmfH7orpNQRdOScCZPSObetw8bavTfbTmfe_Y40r1g&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/177978324/21646456774) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":10,"retrying_in":1200,"elapsed":4822.382168292,"max_duration":2592000}
cos-jamo-1 | npm error Error: Could not read package.json: Error: EACCES: permission denied, open '/app/package.json'
Another option is sshfs, but it won't generate inotify events; it's basically just FTP over SSH into a FUSE mount.
The z option specifically tells SELinux to relabel the mounted content so containers can share it. Even though ll looks the same (the traditional Unix permissions haven't changed), SELinux has modified the security context behind the scenes. You can see these labels with:
ls -Z
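In docker-compose terms it's just the suffix on the bind mount; a minimal sketch (the host path here is an assumption, not my actual layout):
services:
  cos-jamo:
    volumes:
      # :z relabels the mount with a shared SELinux context,
      # so the container can actually read /app/package.json
      - ./cos-jamo:/app:z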
This Christmas-NY period is totally haunted; do not not be with your people, on holiday, at this point.
A Bug Report
I was using this bit of Caddyfile, as seen via docker exec in the container:
dns duckdns {f6e-aaa-bbb-ccc-b86}
As implied by this part of the README:
dns duckdns {env.DUCKDNS_API_TOKEN}
Is the {} interpolated to api_token => value before we get to UnmarshalCaddyfile(d *caddyfile.Dispenser)? Speculation. Anyway, that doesn't trip this:
if p.Provider.APIToken == "" {
return d.Err("missing API token")
}
and it goes on to fail; the token parameter is casually empty:
caddy-1 | {"level":"error","ts":1735636941.773746,"logger":"tls.obtain","msg":"will retry","error":"[voulais.duckdns.org] Obtain: [voulais.duckdns.org] solving challenges: presenting for challenge: adding temporary record for zone \"duckdns.org.\": DuckDNS request failed, expected (OK) but got (KO), url: [https://www.duckdns.org/update?domains=voulais.duckdns.org&token=&txt=yWJ3zVVwwIRPxw14J3f2riEuFD805UOkC4OIFCwJcno&verbose=true], body: KO (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/178240924/21688731104) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":5,"retrying_in":600,"elapsed":610.848246992,"max_duration":2592000}
And that's pretty much it. No idea why. Debugger time?
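Before reaching for a debugger, a quicker check might be to dump the adapted JSON config and see what api_token actually ends up as (assuming the Caddyfile lives at the usual /etc/caddy/Caddyfile):
docker exec jamola-caddy-1 caddy adapt --config /etc/caddy/Caddyfile --pretty | grep -A2 api_token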
Other syntax variations do cause errors, e.g. with spaces or on a new line:
dns duckdns { $DUCKDNS_API_TOKEN }
dns duckdns {
$DUCKDNS_API_TOKEN
}
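For contrast, the forms that don't hit this (a sketch, going by the README line above and the brace-free setting my old config had):
# raw token, no braces (effectively what my old, working config had):
dns duckdns f6e-aaa-bbb-ccc-b86
# or the README form, with the real token exported into the container:
dns duckdns {env.DUCKDNS_API_TOKEN}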
Maybe it's on libdns/duckdns to double-check api_token != '' as it goes along.
Seems weird.
Thanks!
PS I of course made it more confusing by having a docker-compose.yml that did:
volumes:
- caddy_data:/data
- caddy_config:/config
which was retaining an old config that worked, from before I made everything look neat with those extra curly braces (which I just didn't need). This stuck state fell over a few days ago, somehow, as per chaos. For those playing along at home, you need to:
docker compose down --volumes
docker compose up --build
I've been rate limited now; it says "too many certificates (5) already issued", which is probably how many times I did the above.
Another random detail: I'm always "waiting on internal rate limiter" for 0.00005 seconds, which takes two log lines or 1/5th of all the log lines per tls.obtain.
And thanks again, it was super nice having HTTPS just go, as it did initially, and duck another little bill and personal info leak. Thanks.
My project is here: https://github.com/stylehouse/jamola/blob/main/docker-compose.yaml
Someone else in the same ditch who got me out: https://caddy.community/t/dns-challenge-with-duckdns/14994
No idea why the other instances I tried to set up just didn't wanna.
Further
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d4e4d18f3cd jamola-caddy "caddy run --config …" 41 hours ago Up About an hour 80/tcp, 2019/tcp, 443/udp, 0.0.0.0:9443->443/tcp, [::]:9443->443/tcp jamola-caddy-1
2fa20892e414 jamola-router-config "docker-entrypoint.s…" 41 hours ago Up About an hour jamola-router-config-1
a6be264aa4c5 letz-cos-bitz "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:9000->3000/tcp letz-cos-bitz-1
3b63c9938f2c letz-pl "./serve.pl" 7 weeks ago Up About an hour 127.0.0.1:1812->1812/tcp letz-pl-1
b34a27a9db9f letz-py2 "bash -c 'python py/…" 7 weeks ago Up About an hour 127.0.0.1:8000->8000/tcp letz-py2-1
e210a81ca6f5 letz-cos "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:3000->3000/tcp, 127.0.0.1:9229->9229/tcp letz-cos-1
s@s:~/src/jamola$ docker compose up -d
WARN[0000] The "ROUTER_URL" variable is not set. Defaulting to a blank string.
WARN[0000] The "ROUTER_USERNAME" variable is not set. Defaulting to a blank string.
WARN[0000] The "ROUTER_PASSWORD" variable is not set. Defaulting to a blank string.
[+] Running 3/3
✔ Container jamola-router-config-1 Running 0.0s
✔ Container jamola-caddy-1 Running 0.0s
✔ Container jamola-cos-jamo-1 Started 0.7s
s@s:~/src/jamola$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3370488e75c jamola-cos-jamo "/usr/local/bin/dock…" 4 seconds ago Up 3 seconds 127.0.0.1:9090->3000/tcp jamola-cos-jamo-1
1d4e4d18f3cd jamola-caddy "caddy run --config …" 41 hours ago Up About an hour 80/tcp, 2019/tcp, 443/udp, 0.0.0.0:9443->443/tcp, [::]:9443->443/tcp jamola-caddy-1
2fa20892e414 jamola-router-config "docker-entrypoint.s…" 41 hours ago Up About an hour jamola-router-config-1
a6be264aa4c5 letz-cos-bitz "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:9000->3000/tcp letz-cos-bitz-1
3b63c9938f2c letz-pl "./serve.pl" 7 weeks ago Up About an hour 127.0.0.1:1812->1812/tcp letz-pl-1
b34a27a9db9f letz-py2 "bash -c 'python py/…" 7 weeks ago Up About an hour 127.0.0.1:8000->8000/tcp letz-py2-1
e210a81ca6f5 letz-cos "docker-entrypoint.s…" 7 weeks ago Up About an hour 127.0.0.1:3000->3000/tcp, 127.0.0.1:9229->9229/tcp letz-cos-1
Also, the autossh connection should keep trying forever, every 30s; currently it gives up shortly after the first traffic from Caddy and a failed attempt to connect to localhost:9090:
● jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-01-08 17:34:12 NZDT; 1min 45s ago
Main PID: 9714 (autossh)
Tasks: 2 (limit: 18938)
Memory: 1.6M (peak: 2.1M)
CPU: 169ms
CGroup: /system.slice/jamola-frontend-reverse-tunnel.service
├─9714 /usr/lib/autossh/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3"
└─9717 /usr/bin/ssh -N -R 0.0.0.0:3000:localhost:9090 -p 2023 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" d
Jan 08 17:34:12 s systemd[1]: Started jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy.
Jan 08 17:34:12 s autossh[9714]: port set to 0, monitoring disabled
Jan 08 17:34:12 s autossh[9714]: starting ssh (count 1)
Jan 08 17:34:12 s autossh[9714]: ssh child pid is 9717
s@s:~$ sudo systemctl status jamola-frontend-reverse-tunnel.service
● jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-01-08 17:34:12 NZDT; 4min 30s ago
Main PID: 9714 (autossh)
Tasks: 2 (limit: 18938)
Memory: 1.6M (peak: 2.1M)
CPU: 171ms
CGroup: /system.slice/jamola-frontend-reverse-tunnel.service
├─9714 /usr/lib/autossh/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3"
└─9717 /usr/bin/ssh -N -R 0.0.0.0:3000:localhost:9090 -p 2023 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" d
Jan 08 17:34:12 s systemd[1]: Started jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy.
Jan 08 17:34:12 s autossh[9714]: port set to 0, monitoring disabled
Jan 08 17:34:12 s autossh[9714]: starting ssh (count 1)
Jan 08 17:34:12 s autossh[9714]: ssh child pid is 9717
Jan 08 17:36:05 s autossh[9717]: connect_to localhost port 9090: failed.
Jan 08 17:36:13 s autossh[9717]: connect_to localhost port 9090: failed.
Jan 08 17:36:34 s autossh[9717]: connect_to localhost port 9090: failed.
it is defined here:
In journalctl it says:
Jan 08 19:27:32 s autossh[16868]: max start count reached; exiting
but this has no clues:
● jamola-frontend-reverse-tunnel.service - AutoSSH tunnel to cloud proxy
Loaded: loaded (/etc/systemd/system/jamola-frontend-reverse-tunnel.service; enabled; preset: enabled)
Active: activating (auto-restart) since Wed 2025-01-08 19:27:32 NZDT; 17s ago
Process: 16868 ExecStart=/usr/bin/autossh -M 0 -N -R 0.0.0.0:3000:localhost:9090 -p 2023 d -o ServerAliveInterval=30 >
Main PID: 16868 (code=exited, status=0/SUCCESS)
CPU: 5ms
So systemctl seems bad, unless it's just me.
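What I probably want is to let systemd own the retrying; a guess at unit settings, not what's in the current unit:
[Service]
# stop autossh treating a fast ssh exit as a failed start
Environment=AUTOSSH_GATETIME=0
# and have systemd restart the whole thing forever, every 30s
Restart=always
RestartSec=30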
Why is this so hard? Should we just use supervisord? Should we just generate a passwordless key to use to get into ssh-tunnel-destiny on the cloud host from ssh-tunnel-source on the local host?
The latter.
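Roughly, as a one-off on the local host (the key path is my choice; d is the host alias from the autossh command above):
ssh-keygen -t ed25519 -f ~/.ssh/jamola-tunnel -N ""
ssh-copy-id -i ~/.ssh/jamola-tunnel.pub -p 2023 d
# then point the ssh/autossh command at it with -i ~/.ssh/jamola-tunnel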
Well, if you rename a container in the compose file before you down it, you'll need to:
docker compose down --remove-orphans
# For changing configs (like Caddyfile):
docker compose down
docker compose up -d
# For just changing .env values:
docker compose up -d
The difference is that configs are treated as immutable container resources, while environment variables are part of the runtime configuration.