Added Adguard #16

quadlets/adguard/adguard.container (new file, 26 lines)

@@ -0,0 +1,26 @@
[Unit]
Description=Adguard Quadlet

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target
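The hunk above shows only the unit scaffolding; the `[Container]` section that the review thread below discusses (its `Network` key in particular) can be sketched from the `CreateCommand` in the `podman inspect` output quoted there. A rough reconstruction follows, assuming the quadlet mirrors that command — every key here is an assumption derived from the inspect output, not the actual file contents:

```
# Sketch only: [Container] keys reconstructed from the CreateCommand in the
# inspect output quoted below — assumptions, not the actual file contents.
[Container]
ContainerName=adguard
Image=docker.io/adguard/adguardhome:latest
HostName=adguard
Volume=adguard-work:/opt/adguardhome/work:z
Volume=adguard-config:/opt/adguardhome/conf:z
Volume=/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
PublishPort=8443:443/tcp
# No Network= key: as tested below, omitting it lets podman 5.x default to pasta.
```

Quadlet files like this one live in `/etc/containers/systemd/` (or `~/.config/containers/systemd/` for rootless use) and are converted into a regular `adguard.service` unit by the podman-systemd generator, so `systemctl --user daemon-reload` followed by `systemctl --user start adguard` is enough to bring the container up.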
From the review thread on this file:

Is this problem solved after `passt` or `pasta` becomes the default? Right now I'm running with `slirp4netns`, because on AlmaLinux my podman is still on 4.x. I have actually compiled 5.x from source to test socket activation, so I tried `pasta` a bit. I know it works, but I have to finesse it on boot to start my containers, because something comes up in the wrong order and there's no network when the containers try to start. I'm not sure yet exactly what was going wrong; on a distro like Fedora it might work normally.

Anyway, pasta is supposed to support source IP mapping. Are you running a version of podman/adguard/pasta where we could try that out? Maybe it can get rid of this note.
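An aside on the boot-ordering symptom described above: the usual systemd-side mitigation is to order the unit after the network is actually up. A minimal sketch, assuming the failure really is ordering (this applies to system-level units; rootless user services may additionally need `loginctl enable-linger` so the user manager starts at boot):

```
# Sketch: make the generated service wait for full network-online state
# before podman starts the container. Standard systemd ordering, not
# something taken from this PR.
[Unit]
Wants=network-online.target
After=network-online.target
```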
I will test this today; I have a load-balanced setup, so I can test on my failover node.
If you omit the `Network` key completely and expose the ports as normal (note: I changed 80 to something else, as it's in use on the host), it works and shows up correctly. As you can see from the inspect output, it defaults to pasta networking and is fully operational:

```
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
],
"3000/udp": null,
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8443"
}
],
"443/udp": null,
"53/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"53/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"5443/tcp": null,
"5443/udp": null,
"6060/tcp": null,
"67/udp": null,
"68/udp": null,
"784/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "784"
}
],
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8844"
}
],
"853/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "853"
}
],
"853/udp": null
},
"SandboxKey": "/run/user/1000/netns/netns-16cb5875-7dae-a547-4eca-5e9390b22876"
},
"Namespace": "",
"IsInfra": false,
"IsService": false,
"KubeExitCodePropagation": "invalid",
"lockNumber": 0,
"Config": {
"Hostname": "adguard",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=podman",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"HOSTNAME=adguard",
"HOME=/root"
],
"Cmd": [
"--no-check-update",
"-c",
"/opt/adguardhome/conf/AdGuardHome.yaml",
"-w",
"/opt/adguardhome/work"
],
"Image": "docker.io/adguard/adguardhome:latest",
"Volumes": null,
"WorkingDir": "/opt/adguardhome/work",
"Entrypoint": [
"/opt/adguardhome/AdGuardHome"
],
"StopSignal": "SIGTERM",
"HealthcheckOnFailureAction": "none",
"CreateCommand": [
"/usr/bin/podman",
"run",
"--name=adguard",
"--cidfile=/run/user/1000/adguard.cid",
"--replace",
"--rm",
"--cgroups=split",
"--sdnotify=conmon",
"-d",
"-v",
"/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z",
"-v",
"adguard-work:/opt/adguardhome/work:z",
"-v",
"adguard-config:/opt/adguardhome/conf:z",
"--publish",
"53:53/tcp",
"--publish",
"53:53/udp",
"--publish",
"784:784/udp",
"--publish",
"853:853/tcp",
"--publish",
"3000:3000/tcp",
"--publish",
"8844:80/tcp",
"--publish",
"8443:443/tcp",
"--hostname",
"adguard",
"docker.io/adguard/adguardhome:latest"
],
"Umask": "0022",
"Timeout": 0,
"StopTimeout": 10,
"Passwd": true,
"sdNotifyMode": "conmon",
"sdNotifySocket": "/run/user/1000/systemd/notify"
},
"HostConfig": {
"Binds": [
"adguard-work:/opt/adguardhome/work:z,rw,rprivate,nosuid,nodev,rbind",
"adguard-config:/opt/adguardhome/conf:z,rw,rprivate,nosuid,nodev,rbind",
"/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:rw,rprivate,rbind"
],
},
"NetworkMode": "pasta",
"PortBindings": {
"3000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
],
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8443"
}
],
"53/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"53/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"784/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "784"
}
],
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8844"
}
],
"853/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "853"
}
]
}
```
`podman version 5.2.2`
If you omit the `Network` completely and expose the ports as normal (note I changed 80 to something else as its in use on the host) it works and shows up correctly as you can see it defaults to pasta networking and is fully operational.
![]() That's awesome! I don't have a setup that can test as well as you just did yet so thanks for clearing that so quickly. I guess we should leave it as it is for now, because my proof of concept is running 4.x still, and that's what my other setup document is based on. I need to find a way to set a reminder to change this note once I have the setup doc in better shape, and using 5.x by default. That's awesome! I don't have a setup that can test as well as you just did yet so thanks for clearing that so quickly.
I guess we should leave it as it is for now, because my proof of concept is running 4.x still, and that's what my other setup document is based on. I need to find a way to set a reminder to change this note once I have the setup doc in better shape, and using 5.x by default.
![]() You already submitted the change for this, so I'll just take your updated version. I didn't notice you had put that together before I got up this morning. You already submitted the change for this, so I'll just take your updated version. I didn't notice you had put that together before I got up this morning.
[Container]
Image=docker.io/adguard/adguardhome:latest
ContainerName=adguard
HostName=adguard
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
Is this port mapping mismatched because your environment already uses port 80 for something else? Or is this an upstream example, like from their compose file? I would like to avoid mapping ports around if possible, to make it easier to drop in a single quadlet with no quirks.
This is because these ports are in use; anyone running Nginx, OpenResty, Apache, etc. will have to do this.
Okay, this is a common issue when running multiple web servers. In my setup, I do not expose any ports except `80` and `443`, with Caddy as a reverse proxy in front of all my containers. I want to try to make such a setup explicit and easy for people. My Caddyfile has a bunch of simple blocks like:
```
@my-app host my-app.mcgee.red
handle @my-app {
    reverse_proxy my-app:80
}
```
In fact, I do not use the `PublishPort` setting at all on mine, except exactly `80` and `443` for `caddy.container`. I don't see a better way to solve this problem of port-map mazes. What do you think?
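Such a `caddy.container` might look something like the sketch below; this is hypothetical, not a file from this PR, and the network and volume names are assumptions:

```
# caddy.container (sketch): the only quadlet that publishes host ports
[Container]
ContainerName=caddy
Image=docker.io/library/caddy:latest
PublishPort=80:80/tcp
PublishPort=443:443/tcp
# Join a shared podman network so `reverse_proxy my-app:80` can resolve
# other containers by name.
Network=proxy.network
Volume=caddy-config:/etc/caddy:z
Volume=caddy-data:/data:z

[Install]
WantedBy=default.target
```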
I generally agree 100% in most use cases. However, I feel AdGuard and Pi-hole would be the exception to the rule here, as most people run these locally within the network rather than exposing them, and have them handle DNS requests directly rather than putting a middleman proxy in between; if Caddy goes down, you would forfeit all DNS.
I probably still need to correct some of my other quadlets, but in this repo I'd like to show the upstream defaults as much as possible for the basic quadlet service. In advanced examples, I will add documentation regarding how to overcome port conflicts. Does that make sense to you? I'm not asking you to add a commit, just curious if my idea sounds okay to someone who isn't myself.
Additionally, the reverse proxy is only for the web UI frontends on ports `80` and/or `443`. The DNS will still be handled without any proxying.
Yeah, that makes sense. If you want to keep it simple and have it run through a proxy, my only gripe would be that most people won't want to proxy a DNS server and will keep it local :)
The proxy only applies to the frontend for the web UI. It won't affect the DNS server or any other services.
So you would say omit the 80 and 443 port publishing from the quadlet here and have the proxy handle only that portion? How would the proxy communicate with the adguard container on internal ports? In podman 5+, if you don't enable a `Network=`, it will default to the pasta network driver, so there will be no exposed ports for you to communicate with.
Publish 80:80 and 443:443 (and maybe `443:443/udp`). It's not feasible to remap those ports for every quadlet under the sun, right? If there are port conflicts, that is an issue you have to resolve in your local environment. You can either remap them yourself, downstream in your local env, or you can proxy and/or load-balance them. The main thing is that's not something an upstream example should define.
However, as I said, I am working on advanced examples to demonstrate and document how to handle port conflicts among other common issues, such as re-using quadlets for different needs, like running two `postgres` instances for different purposes.
Also, if you are remapping 80 and 443 often in your setup, I recommend putting Caddy in front of it. You don't need a public domain. You can use it on your LAN just as effectively without exposing it to the web. Plus, you can re-use the `Network=` key multiple times, so you can join one Caddy instance to multiple networks (see the sketch at the end of this thread); that Caddy can then route you to all your web GUIs with a simple config, by container or host name. There are lots of ways to solve this problem, unless you just prefer to remap your host ports. You'll run out of ports eventually and have to start creating new virtual network interfaces to map more ports :P
That would work by joining Caddy to multiple networks without exposing ports, but most of the quadlet examples aren't using defined networks, so podman 5.0 defaults to pasta, which means Caddy has no networks to join and therefore can't communicate with the containers' ports internally through podman networks?
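For concreteness, a sketch of that multi-network join; the file and network names here are hypothetical, not from this PR:

```
# adguard.network (sketch): a named podman network defined via quadlet
[Network]
NetworkName=adguard

# In caddy.container, Network= can be repeated to join several networks:
[Container]
Network=adguard.network
Network=nextcloud.network
```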
PublishPort=8443:443/tcp
I guess you can't mark a range of lines to comment on, but see above.
Volume=adguard-config:/opt/adguardhome/conf:z
Volume=adguard-work:/opt/adguardhome/work:z
Volume=/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z
3
quadlets/adguard/adguard.volume
Normal file
@ -0,0 +1,3 @@
[Volume]
VolumeName=adguard-config
VolumeName=adguard-work