"HA" Pihole between Debian, Synology and Docker
A while back, scrolling through YouTube, I stumbled across a video titled "High Availability Pi-Hole? Yes Please!", which I thought was interesting - it seemed to address a problem that I didn't really think I had... but why not poke around at it?
Pi-Hole is a network-wide ad-blocking DNS server. It can function as a DHCP server too, but I figure my router is mighty fine at that. The router, however, is not very good at custom DNS entries, local CNAMEs, and other DNS functionality that I would like. Since PiHole is basically running dnsmasq under the hood (or, was? I'm not really clear TBH), this seemed like a bit of a fun project.
GravitySync works to synchronize the `gravity.db` SQLite files that PiHole runs on. It does this by establishing a path between a primary and a secondary PiHole (over SSH) and periodically synchronizing the data between the two PiHoles.
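Under the hood, a single sync cycle amounts to something like the sketch below, run from the secondary. The user, hostname, and paths are placeholders for my setup, and the real script does considerably more validation - this is just the shape of the operation:

```shell
# Rough manual equivalent of one gravity-sync cycle, run from the secondary.
# "admin@synology.local" and both paths are placeholders - adjust for your hosts.
rsync -avz admin@synology.local:/volume1/docker/pihole/pihole/gravity.db \
      /home/me/pihole/pihole/gravity.db

# Have the local (containerized) Pi-hole reload its lists from the new DB
docker exec pihole pihole restartdns reload-lists
```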
As always, this isn't a walk-through. Check the YouTube video for that - it's pretty good. These are my notes about the issues I ran into. Hopefully they'll help you.
- Primary PiHole - this is the PiHole that we're making changes on. Any changes that apply to the whole network get done here and are synchronized to the secondary.
- Primary PiHole host - this is the server running `docker` that runs the primary PiHole image. In my case, this is my Synology NAS.
- Secondary PiHole - this is the "redundant" PiHole that we're going to synchronize our changes to.
- Secondary PiHole host - like the primary PiHole host, this is the server running `docker` that the PiHole image runs on. In my case, this is my Debian server.
The general approach is to connect to the primary host, get the DB files from the primary PiHole, copy them to the secondary host, and load the data files in. Since this is evidently all SQLite, we've got plenty of good tools to help along the way.
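For instance, since `gravity.db` is plain SQLite, you can poke at a copy of it directly with the `sqlite3` CLI. A quick sketch against a throwaway stand-in database - the real gravity.db does have a `domainlist` table, but with more columns and company than this toy version:

```shell
# Build a toy stand-in for gravity.db (the real schema is richer than this)
DB=/tmp/gravity-demo.db
rm -f "$DB"
sqlite3 "$DB" "CREATE TABLE domainlist (id INTEGER PRIMARY KEY, type INTEGER, domain TEXT);"
sqlite3 "$DB" "INSERT INTO domainlist (type, domain) VALUES (1, 'ads.example.com');"

# Inspect it the same way you would a copied gravity.db
sqlite3 "$DB" "SELECT domain FROM domainlist;"   # prints ads.example.com
```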
First, there was one PiHole.
One PiHole install on my Synology later, complete with a DoH Cloudflare local resolver, and I'm pretty happy with the result:
```yaml
# docker-compose.yml

version: "3"

services:
  cloudflared:
    container_name: cloudflared
    image: crazymax/cloudflared
    restart: unless-stopped
    ports:
      - 5053:5053/udp
    environment:
      - "TZ=America/Edmonton"

  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    depends_on:
      - cloudflared
    environment:
      TZ: "America/Edmonton"
      DNS1: "127.0.0.1#5053"
      ServerIP: "0.0.0.0"
      WEB_PORT: "8080"
    cap_add:
      - NET_ADMIN
    volumes:
      - "/volume1/docker/pihole/dnsmasq.d/:/etc/dnsmasq.d"
      - "/volume1/docker/pihole/pihole/:/etc/pihole/"
    restart: unless-stopped
    network_mode: host
```
I installed `docker` on my Synology from the Synology package store to facilitate this. This file is stored in
The setup is pretty straightforward.
cloudflared runs on port 5053, and PiHole is configured to use `127.0.0.1#5053` as its only resolver. The server IP is bound to 0.0.0.0, since it'll effectively be sharing the host's network.
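With both containers up, each hop of the chain can be sanity-checked with `dig` (substitute your NAS's LAN IP for the placeholder 192.168.1.10):

```shell
# Ask cloudflared's DoH forwarder directly on its UDP port
dig @192.168.1.10 -p 5053 example.com +short

# Ask Pi-hole itself on the standard DNS port (it should forward to cloudflared)
dig @192.168.1.10 example.com +short
```

If the first query works but the second doesn't, the problem is in PiHole's upstream config rather than cloudflared.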
I set my Ubiquiti router to point to my Synology for DNS, and we're off to the races.
Then, a second PiHole.
After watching the YouTube video, the idea of having a pair of local DNS servers seemed like a good idea. I have had the odd case where PiHole would update, and fail to come back to life correctly, leaving the network without DNS. This could be solved...
So - taking the `docker-compose.yml` file from above and adapting the paths slightly to my Debian machine, a second PiHole was up and running.
Now for the syncing.
Synology does a lot of magic under the hood. I've been able to expand its storage on a few occasions just by replacing the drives one at a time, and it magically rebuilds while continuing to dutifully serve files and services. By SSHing into the shell, we see it's some basic Linux environment.
But that's kinda where it ends. It lacks some facilities that make this whole thing work nicely -
We can solve the Git problem by installing the Git Server package from the Synology app store. I have no intention of using it, but it provides the binaries that I need to get going.
The `crontab` binary is another story - one that I didn't resolve.
Dealing with Crontab
Reading the instructions, I know my Primary PiHole is the one running on my Synology. Running the bash script provided failed almost immediately - first with the missing `git` binaries that I mentioned above, then a missing `crontab`.
Remember when we said that it was a periodic sync operation? Well, turns out that the sync operation is initiated by the secondary PiHole - not the primary. This means that I didn't need `crontab` on this host at all.
One quick edit of the setup script later to comment out the failure counter, and we basically get a passing grade from the script. Making sure `sudo` works as desired is the key outcome here.
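A quick way to check that `sudo` half (a little check of my own, not part of gravity-sync itself): `sudo -n` refuses to prompt, so it exits non-zero unless passwordless sudo is already in place.

```shell
# Exits 0 only when sudo can run without prompting for a password
if sudo -n true 2>/dev/null; then
  echo "passwordless sudo OK"
else
  echo "sudo will prompt - fix the sudoers entry first"
fi
```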
Dealing with SSH Auth
I was going to need to enable key-based SSH authentication into the Synology host for this to work. To handle that, it turns out that I needed to enable home directories in the Synology software. This is in the User control panel, in the Advanced tab, at the veeery bottom:
Once this is set up, we're able to establish passwordless authentication to the Synology. I'm not going over that here.
Now that I can SSH to the Synology host without issues...
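One last sanity check before moving on (user and hostname are placeholders for my setup): `BatchMode=yes` makes ssh fail outright instead of falling back to a password prompt, so it proves the key-based path specifically.

```shell
# Fails fast (instead of prompting) if key auth isn't actually working
if ssh -o BatchMode=yes -o ConnectTimeout=5 admin@synology.local true 2>/dev/null; then
  echo "key auth OK"
else
  echo "key auth NOT working yet"
fi
```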
Rsync not working
Continuing the work on the secondary, I was getting failures syncing the `gravity.db` file from the primary (Synology) to the secondary (Debian). I could SSH between hosts without any issues, but rsync was telling me that I didn't have permission to copy the file.
This wound up being a disabled service on the Synology. Check the File Services control panel, under the rsync tab, and enable the service:
And now the sync happens!
Configuring sync on the secondary
The default configuration seemed to have a few issues that needed fixing:
- The inclusion of `SKIP_CUSTOM=1` (I think) was causing my local DNS records to be skipped on import.
- The absence of `INCLUDE_CNAME=1` was causing my CNAME records to be omitted from the sync.
Correcting both of these issues made for a correct secondary PiHole.
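In config terms, that meant making sure the gravity-sync settings file carried something like the following fragment (setting names as above; the exact file location and defaults vary by gravity-sync version):

```shell
# gravity-sync.conf (fragment) - keep local DNS and CNAME records in the sync
SKIP_CUSTOM=0
INCLUDE_CNAME=1
```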
The default configuration for `gravity-sync automate` (I think) has it doing a "smart" sync between the two PiHoles. I'm really trying to treat the primary (Synology) PiHole as the authoritative one, so I updated the cron job to replace the "smart" sync with a "pull", effectively telling the whole thing to overwrite the secondary PiHole's DB with the primary's.
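The resulting crontab entry on the secondary ends up looking something like this (the five-minute schedule is my guess at a sensible interval - the installer picks its own - and older gravity-sync releases invoked `./gravity-sync.sh` instead):

```shell
# m h dom mon dow  command - one-way pull from the primary, every 5 minutes
*/5 * * * * gravity-sync pull >/dev/null 2>&1
```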
The last thing to do was run a few tests: make some changes on the primary, force a sync, and see that they're reflected on the secondary. Then test that I can make DNS queries against both hosts, and finally update the DHCP configuration to hand out the IPs of both internal DNS servers.