Added Adguard #16

Merged
sudo-kraken merged 2 commits from features/adguard into main 2024-12-06 00:24:53 +00:00
sudo-kraken commented 2024-12-04 21:31:30 +00:00 (Migrated from github.com)
No description provided.
redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 03:04:44 +00:00
redbeardymcgee (Migrated from github.com) left a comment

Couple of questions before this one merges.

redbeardymcgee (Migrated from github.com) commented 2024-12-05 03:00:02 +00:00

Is this problem solved once `passt` or `pasta` becomes the default? Right now I'm running with `slirp4netns` because my Podman on AlmaLinux is still 4.x. I have actually compiled 5.x from source to test socket activation, so I've tried `pasta` a bit. I know it works, but I have to finesse it on boot to start my containers: something comes up in the wrong order and there's no network when the containers try to start. I'm not sure yet exactly what was going wrong; on a distro like Fedora it might work normally.

Anyway, `pasta` is supposed to support source IP mapping. Are you running a version of podman/adguard/pasta where we could try that out? Maybe that would let us get rid of this note.
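
For context, a minimal sketch of what I mean in quadlet terms; `Network=` passes straight through to `--network`, and the `slirp4netns` option shown is the 4.x-era way to keep client source IPs on published ports, so treat the exact syntax as an illustration rather than something tested in this PR:

```
[Container]
# Podman 5.x rootless default; spelled out here only for clarity
Network=pasta

# Podman 4.x alternative that preserves client source IPs on published ports
# Network=slirp4netns:port_handler=slirp4netns
```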

@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 03:02:51 +00:00

Is this port mapping mismatched because your environment already uses port 80 for something else? Or is this an upstream example like from their compose file? I would like to avoid mapping ports around if possible to make it easier to drop in a single quadlet with no quirks.

@ -0,0 +19,4 @@
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
PublishPort=8443:443/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 03:03:27 +00:00

I guess you can't mark a range of lines to comment on, but see above.

sudo-kraken (Migrated from github.com) reviewed 2024-12-05 07:32:24 +00:00
sudo-kraken (Migrated from github.com) commented 2024-12-05 07:32:24 +00:00

I will test this today. I have a load-balanced setup, so I can test on my failover node.

sudo-kraken (Migrated from github.com) reviewed 2024-12-05 09:04:18 +00:00
sudo-kraken (Migrated from github.com) commented 2024-12-05 09:04:18 +00:00
          "NetworkSettings": {
               "EndpointID": "",
               "Gateway": "",
               "IPAddress": "",
               "IPPrefixLen": 0,
               "IPv6Gateway": "",
               "GlobalIPv6Address": "",
               "GlobalIPv6PrefixLen": 0,
               "MacAddress": "",
               "Bridge": "",
               "SandboxID": "",
               "HairpinMode": false,
               "LinkLocalIPv6Address": "",
               "LinkLocalIPv6PrefixLen": 0,
               "Ports": {
                    "3000/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "3000"
                         }
                    ],
                    "3000/udp": null,
                    "443/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "8443"
                         }
                    ],
                    "443/udp": null,
                    "53/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "53"
                         }
                    ],
                    "53/udp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "53"
                         }
                    ],
                    "5443/tcp": null,
                    "5443/udp": null,
                    "6060/tcp": null,
                    "67/udp": null,
                    "68/udp": null,
                    "784/udp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "784"
                         }
                    ],
                    "80/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "8844"
                         }
                    ],
                    "853/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "853"
                         }
                    ],
                    "853/udp": null
               },
               "SandboxKey": "/run/user/1000/netns/netns-16cb5875-7dae-a547-4eca-5e9390b22876"
          },
          "Namespace": "",
          "IsInfra": false,
          "IsService": false,
          "KubeExitCodePropagation": "invalid",
          "lockNumber": 0,
          "Config": {
               "Hostname": "adguard",
               "Domainname": "",
               "User": "",
               "AttachStdin": false,
               "AttachStdout": false,
               "AttachStderr": false,
               "Tty": false,
               "OpenStdin": false,
               "StdinOnce": false,
               "Env": [
                    "container=podman",
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                    "HOSTNAME=adguard",
                    "HOME=/root"
               ],
               "Cmd": [
                    "--no-check-update",
                    "-c",
                    "/opt/adguardhome/conf/AdGuardHome.yaml",
                    "-w",
                    "/opt/adguardhome/work"
               ],
               "Image": "docker.io/adguard/adguardhome:latest",
               "Volumes": null,
               "WorkingDir": "/opt/adguardhome/work",
               "Entrypoint": [
                    "/opt/adguardhome/AdGuardHome"
               ],
            
               "StopSignal": "SIGTERM",
               "HealthcheckOnFailureAction": "none",
               "CreateCommand": [
                    "/usr/bin/podman",
                    "run",
                    "--name=adguard",
                    "--cidfile=/run/user/1000/adguard.cid",
                    "--replace",
                    "--rm",
                    "--cgroups=split",
                    "--sdnotify=conmon",
                    "-d",
                    "-v",
                    "/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z",
                    "-v",
                    "adguard-work:/opt/adguardhome/work:z",
                    "-v",
                    "adguard-config:/opt/adguardhome/conf:z",
                    "--publish",
                    "53:53/tcp",
                    "--publish",
                    "53:53/udp",
                    "--publish",
                    "784:784/udp",
                    "--publish",
                    "853:853/tcp",
                    "--publish",
                    "3000:3000/tcp",
                    "--publish",
                    "8844:80/tcp",
                    "--publish",
                    "8443:443/tcp",
                    "--hostname",
                    "adguard",
                    "docker.io/adguard/adguardhome:latest"
               ],
               "Umask": "0022",
               "Timeout": 0,
               "StopTimeout": 10,
               "Passwd": true,
               "sdNotifyMode": "conmon",
               "sdNotifySocket": "/run/user/1000/systemd/notify"
          },
          "HostConfig": {
               "Binds": [
                    "adguard-work:/opt/adguardhome/work:z,rw,rprivate,nosuid,nodev,rbind",
                    "adguard-config:/opt/adguardhome/conf:z,rw,rprivate,nosuid,nodev,rbind",
                    "/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:rw,rprivate,rbind"
               ],
               "NetworkMode": "pasta",
               "PortBindings": {
                    "3000/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "3000"
                         }
                    ],
                    "443/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "8443"
                         }
                    ],
                    "53/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "53"
                         }
                    ],
                    "53/udp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "53"
                         }
                    ],
                    "784/udp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "784"
                         }
                    ],
                    "80/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "8844"
                         }
                    ],
                    "853/tcp": [
                         {
                              "HostIp": "0.0.0.0",
                              "HostPort": "853"
                         }
                    ]
               }

podman version 5.2.2

If you omit the `Network` key completely and publish the ports as normal (note: I changed 80 to something else as it's in use on the host), it works and shows up correctly. As you can see, it defaults to pasta networking and is fully operational.
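
For anyone skimming, the backend in use can be read straight out of that inspect data:

```
podman inspect adguard --format '{{.HostConfig.NetworkMode}}'
# pasta
```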

sudo-kraken (Migrated from github.com) reviewed 2024-12-05 09:08:11 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
sudo-kraken (Migrated from github.com) commented 2024-12-05 09:08:11 +00:00

This is because these ports are in use; anyone running Nginx, OpenResty, Apache, etc. will have to do this.
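
For anyone hitting the same thing, a quick way to confirm what is already holding 80/443 on the host before remapping:

```
sudo ss -ltnp '( sport = :80 or sport = :443 )'
```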

sudo-kraken commented 2024-12-05 09:08:39 +00:00 (Migrated from github.com)

Changes made to remove host networking and move to pasta.

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 13:32:52 +00:00
redbeardymcgee (Migrated from github.com) commented 2024-12-05 13:32:52 +00:00

That's awesome! I don't have a setup that can test as well as you just did yet, so thanks for clearing that up so quickly.

I guess we should leave it as it is for now, because my proof of concept is still running 4.x, and that's what my other setup document is based on. I need to find a way to set a reminder to change this note once I have the setup doc in better shape and using 5.x by default.

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 13:39:42 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 13:39:42 +00:00

Okay, this is a common issue when running multiple web servers. In my setup, I do not expose any ports except `80` and `443`, with Caddy as a reverse proxy in front of all my containers. I want to try to make such a setup explicit and easy for people. My Caddyfile has a bunch of simple blocks like:

@my-app host my-app.mcgee.red
	handle @my-app {
		reverse_proxy my-app:80
	}

In fact, I do not use the `PublishPort` setting at all on mine, except exactly `80` and `443` for `caddy.container`. I don't see a better way to solve this problem of port-map mazes. What do you think?
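
As a rough sketch of the Caddy side (the unit and network names are placeholders, not files that exist in this repo yet):

```
# caddy.container -- the only quadlet that publishes host ports
[Container]
Image=docker.io/library/caddy:latest
PublishPort=80:80
PublishPort=443:443
# join the same user-defined network as the apps it proxies
Network=proxy.network
Volume=caddy-config:/etc/caddy
Volume=caddy-data:/data

[Install]
WantedBy=default.target
```

Because both containers sit on the same podman network, Caddy can reach `my-app:80` by container name and the app itself needs no `PublishPort` at all.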

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 13:41:18 +00:00
redbeardymcgee (Migrated from github.com) commented 2024-12-05 13:41:18 +00:00

You already submitted the change for this, so I'll just take your updated version. I didn't notice you had put that together before I got up this morning.

sudo-kraken (Migrated from github.com) reviewed 2024-12-05 14:35:37 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
sudo-kraken (Migrated from github.com) commented 2024-12-05 14:35:37 +00:00

I generally agree 100% in most use cases; however, I feel AdGuard and Pi-hole would be the exception to the rule here. Most people run these locally within the network rather than exposing them, and have them handle DNS requests directly rather than putting a middleman proxy in between, because if Caddy goes down you would forfeit all DNS.

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 20:05:46 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 20:05:45 +00:00

I probably still need to correct some of my other quadlets, but in this repo I'd like to show the upstream defaults as much as possible for the basic quadlet service. In advanced examples, I will add documentation regarding how to overcome port conflicts.

Does that make sense to you? I'm not asking you to add a commit, just curious if my idea sounds okay to someone who isn't myself.

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 20:07:46 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 20:07:46 +00:00

Additionally, the reverse proxy is only for the web UI frontends on ports `80` and/or `443`. DNS will still be handled without any proxying.
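
Concretely, for AdGuard that split might look something like this (just a sketch, using the same placeholder `proxy.network` as above, not what this PR ships):

```
# adguard.container (excerpt) -- DNS stays a direct host port, web UI goes via the proxy
[Container]
Image=docker.io/adguard/adguardhome:latest
PublishPort=53:53/tcp
PublishPort=53:53/udp
# no PublishPort for 80/3000; Caddy reaches them over the shared network
Network=proxy.network
```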

sudo-kraken (Migrated from github.com) reviewed 2024-12-05 22:24:05 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
sudo-kraken (Migrated from github.com) commented 2024-12-05 22:24:05 +00:00

Yeah, that makes sense if you want to keep it simple and have it run through a proxy. My only gripe would be that most people won't want to proxy a DNS server and will keep it local :)

redbeardymcgee (Migrated from github.com) reviewed 2024-12-05 23:49:06 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-05 23:49:06 +00:00

The proxy only applies to the frontend for the web UI. It won't affect the DNS server or any other services.

sudo-kraken (Migrated from github.com) reviewed 2024-12-06 00:06:59 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
sudo-kraken (Migrated from github.com) commented 2024-12-06 00:06:58 +00:00

So you would say omit the 80 and 443 port publishing from the quadlet here and have the proxy handle only that portion? How would the proxy communicate with the AdGuard container on internal ports? In Podman 5+, if you don't set `Network=`, it defaults to the pasta network driver, so there are no exposed ports for you to communicate with.

redbeardymcgee (Migrated from github.com) reviewed 2024-12-06 00:21:00 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
redbeardymcgee (Migrated from github.com) commented 2024-12-06 00:21:00 +00:00

Publish `80:80` and `443:443` (and maybe `443:443/udp`). It's not feasible to remap those ports for every quadlet under the sun, right? If there are port conflicts, that is an issue you have to resolve in your local environment. You can either remap them yourself, downstream in your local env, or you can proxy and/or load balance them. The main thing is that's not something an upstream example should define.

However, as I said, I am working on advanced examples to demonstrate and document how to handle port conflicts, among other common issues, such as re-using quadlets for different needs like running two `postgres` instances for different purposes.

Also, if you are remapping 80 and 443 often in your setup, I recommend putting Caddy in front of it. You don't need a public domain; you can use it on your LAN just as effectively without exposing it to the web. Plus, you can re-use the `Network=` key multiple times, so you can join one Caddy instance to multiple networks. That Caddy can then route you to all your web GUIs with a simple config, by container or host name. There are lots of ways to solve this problem, unless you just prefer to remap your host ports. You'll run out of ports eventually and have to start creating new virtual network interfaces to map more ports :P
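
For example, the shared network is itself just another quadlet, and `caddy.container` can list several of them (names are placeholders again):

```
# proxy.network -- user-defined network, gives container-name DNS between members
[Network]

# caddy.container (excerpt) -- one Caddy instance joined to several app networks
[Container]
Network=proxy.network
Network=media.network
Network=monitoring.network
```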

redbeardymcgee commented 2024-12-06 00:23:48 +00:00 (Migrated from github.com)

I still need to work on unifying all the example quadlets and make sure they are all similarly formatted, so I am happy to accept this as-is. I will deal with the port mapping issue once I nail down how everything should look.

sudo-kraken (Migrated from github.com) reviewed 2024-12-06 00:24:33 +00:00
@ -0,0 +18,4 @@
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
sudo-kraken (Migrated from github.com) commented 2024-12-06 00:24:33 +00:00

That would work by joining Caddy to multiple networks without exposing ports, but most of the quadlet examples aren't using defined networks, so Podman 5.0 defaults to pasta, which means Caddy has no networks to join and therefore can't communicate with the containers' ports internally through podman networks?

redbeardymcgee commented 2024-12-06 00:24:39 +00:00 (Migrated from github.com)

Thanks again for all your contributions, @sudo-kraken. It was incredible to discover some new services, and I really appreciate the work you put in to tidy them all up too.

sudo-kraken commented 2024-12-06 00:25:50 +00:00 (Migrated from github.com)

> Thanks again for all your contributions, @sudo-kraken. It was incredible to discover some new services, and I really appreciate the work you put in to tidy them all up too.

No problem at all, I will keep contributing when I add new services or learn new things 👍
