Added Adguard #16
@@ -13,8 +13,6 @@ Image=docker.io/adguard/adguardhome:latest
I will test this today. I have a load balanced setup, so I can test on my failover node.
```
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
],
"3000/udp": null,
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8443"
}
],
"443/udp": null,
"53/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"53/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"5443/tcp": null,
"5443/udp": null,
"6060/tcp": null,
"67/udp": null,
"68/udp": null,
"784/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "784"
}
],
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8844"
}
],
"853/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "853"
}
],
"853/udp": null
},
"SandboxKey": "/run/user/1000/netns/netns-16cb5875-7dae-a547-4eca-5e9390b22876"
},
"Namespace": "",
"IsInfra": false,
"IsService": false,
"KubeExitCodePropagation": "invalid",
"lockNumber": 0,
"Config": {
"Hostname": "adguard",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=podman",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"HOSTNAME=adguard",
"HOME=/root"
],
"Cmd": [
"--no-check-update",
"-c",
"/opt/adguardhome/conf/AdGuardHome.yaml",
"-w",
"/opt/adguardhome/work"
],
"Image": "docker.io/adguard/adguardhome:latest",
"Volumes": null,
"WorkingDir": "/opt/adguardhome/work",
"Entrypoint": [
"/opt/adguardhome/AdGuardHome"
],
"StopSignal": "SIGTERM",
"HealthcheckOnFailureAction": "none",
"CreateCommand": [
"/usr/bin/podman",
"run",
"--name=adguard",
"--cidfile=/run/user/1000/adguard.cid",
"--replace",
"--rm",
"--cgroups=split",
"--sdnotify=conmon",
"-d",
"-v",
"/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z",
"-v",
"adguard-work:/opt/adguardhome/work:z",
"-v",
"adguard-config:/opt/adguardhome/conf:z",
"--publish",
"53:53/tcp",
"--publish",
"53:53/udp",
"--publish",
"784:784/udp",
"--publish",
"853:853/tcp",
"--publish",
"3000:3000/tcp",
"--publish",
"8844:80/tcp",
"--publish",
"8443:443/tcp",
"--hostname",
"adguard",
"docker.io/adguard/adguardhome:latest"
],
"Umask": "0022",
"Timeout": 0,
"StopTimeout": 10,
"Passwd": true,
"sdNotifyMode": "conmon",
"sdNotifySocket": "/run/user/1000/systemd/notify"
},
"HostConfig": {
"Binds": [
"adguard-work:/opt/adguardhome/work:z,rw,rprivate,nosuid,nodev,rbind",
"adguard-config:/opt/adguardhome/conf:z,rw,rprivate,nosuid,nodev,rbind",
"/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:rw,rprivate,rbind"
]
},
"NetworkMode": "pasta",
"PortBindings": {
"3000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
],
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8443"
}
],
"53/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"53/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "53"
}
],
"784/udp": [
{
"HostIp": "0.0.0.0",
"HostPort": "784"
}
],
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8844"
}
],
"853/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "853"
}
]
}
```
`podman version 5.2.2`
If you omit the `Network` key completely and expose the ports as normal (note: I changed 80 to something else, as it's in use on the host), it works and shows up correctly. As you can see, it defaults to pasta networking and is fully operational.
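For reference, the `CreateCommand` above corresponds roughly to a quadlet unit like the following. This is a sketch reconstructed from that output, not the PR's exact file; the `[Install]` section is illustrative:

```ini
# adguard.container -- sketch reconstructed from the CreateCommand above.
# With no Network= line, rootless podman 5.x defaults to pasta networking.
[Container]
ContainerName=adguard
Image=docker.io/adguard/adguardhome:latest
HostName=adguard
Volume=adguard-work:/opt/adguardhome/work:z
Volume=adguard-config:/opt/adguardhome/conf:z
Volume=/var/log/AdGuardHome.log:/var/log/AdGuardHome.log:z
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=784:784/udp
PublishPort=853:853/tcp
PublishPort=3000:3000/tcp
PublishPort=8844:80/tcp
PublishPort=8443:443/tcp

[Install]
WantedBy=default.target
```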
That's awesome! I don't have a setup that can test as well as you just did yet, so thanks for clearing that up so quickly.
I guess we should leave it as it is for now, because my proof of concept is still running 4.x, and that's what my other setup document is based on. I need to find a way to set a reminder to change this note once I have the setup doc in better shape and using 5.x by default.
You already submitted the change for this, so I'll just take your updated version. I didn't notice you had put that together before I got up this morning.
Is this problem solved after `passt` or `pasta` becomes the default? Right now I'm running with `slirp4netns`, because on AlmaLinux my podman is still on 4.x. I actually have compiled 5.x from source to test socket activation, so I tried `pasta` a bit. I know it works, but I have to finesse it on boot to start my containers, because something comes up in the wrong order and there's no network when the containers try to start. I'm not sure yet what exactly was going wrong; on a distro like Fedora it might be working normally.
Anyway, it's supposed to support source IP mapping. Are you running a version of podman/adguard/pasta where we could try that out? Maybe it can get rid of this note.
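As an aside, the useful part of inspect output like the dump above is the `Ports` map. A short standalone sketch (it works on a trimmed copy of that data rather than a live `podman inspect` call) that lists only the ports actually bound on the host:

```python
import json

# Trimmed excerpt of the "Ports" map from the inspect output above.
ports_json = """
{
  "3000/tcp": [{"HostIp": "0.0.0.0", "HostPort": "3000"}],
  "3000/udp": null,
  "53/udp": [{"HostIp": "0.0.0.0", "HostPort": "53"}],
  "80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8844"}]
}
"""

def published_ports(ports):
    """Return 'HostIp:HostPort -> container port' strings for bound ports."""
    lines = []
    for container_port, bindings in sorted(ports.items()):
        for b in bindings or []:  # unpublished ports are null/None; skip them
            lines.append(f"{b['HostIp']}:{b['HostPort']} -> {container_port}")
    return lines

for line in published_ports(json.loads(ports_json)):
    print(line)
```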
ContainerName=adguard
HostName=adguard
# Optionally run this on network mode host, if you want real client ips to show in the logs
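The commented-out host-networking variant mentioned above would look roughly like the sketch below. The assumption here is that with `Network=host` the container shares the host's network namespace, so `PublishPort=` lines don't apply and AdGuard Home sees real client source IPs:

```ini
# Sketch of the Network=host alternative: preserves client source IPs in the
# query log, but ports bind directly on the host and cannot be remapped.
[Container]
ContainerName=adguard
Image=docker.io/adguard/adguardhome:latest
Network=host
```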
# Network=host
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=784:784/udp