Infrastructure
cloudflare
Proxies
Each server (Webserver, Piwigo Server) runs a cloudflared docker container which provides a cloudflare Zero-Trust tunnel with its respective tunnel keys.
All containers on a server share a docker network named frontend. As a consequence, all attached docker containers can be reached under their service names (and the respective port, if required) through the cloudflare proxy.
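For illustration, a minimal docker-compose sketch of this setup could look as follows. This is not the actual compose file used here; TUNNEL_TOKEN is cloudflared's standard token variable, everything else follows the conventions described above:

```yaml
# Sketch only - illustrates the shared "frontend" network and the tunnel container
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run              # authenticates with the tunnel token below
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN} # tunnel key, kept outside the compose file
    networks:
      - frontend

networks:
  frontend:
    external: true                   # shared network joined by all app containers
```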
All services are reachable through cloudflare proxies:
Webserver
- http://frontend -> klharriettes.org / www.klharriettes.org
- http://strapi:1337 -> admin.klharriettes.org
- http://metabase:3000 -> reporting.klharriettes.org
Piwigo Server
- http://piwigo -> gallery.klharriettes.org
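With a locally managed tunnel, this mapping would correspond to ingress rules like the following in cloudflared's config.yml; a remotely managed tunnel configures the same public hostnames in the Zero-Trust dashboard instead:

```yaml
# Illustrative ingress rules for the Webserver tunnel
ingress:
  - hostname: klharriettes.org
    service: http://frontend
  - hostname: www.klharriettes.org
    service: http://frontend
  - hostname: admin.klharriettes.org
    service: http://strapi:1337
  - hostname: reporting.klharriettes.org
    service: http://metabase:3000
  - service: http_status:404   # catch-all required as the last rule
```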
Caching
cloudflare provides a Content Delivery Network (CDN) which caches much of the website's content at a regional level, noticeably improving performance and latency.
The website built is a Single Page App (SPA), which means that only a single page (index.html) is loaded and the structure of the website is built as a virtual DOM. Moreover, it has been developed as a Progressive Web App (PWA).
This requires certain files to be excluded from caching so that the latest version of the app can always be delivered without caching issues. Hence, we must exclude the service worker (sw.js), the file registering it (registerSW.js), the web manifest (manifest.webmanifest), and the main file index.html from any caching.
TIP
When looking for index.html in your browser's Dev Tools -> Network tab, it can be found under the respective domain name, e.g., klharriettes.org.
We must intervene at two different points of the architecture:
Add no-cache headers to the respective files in our nginx configuration for the frontend (see here; a minimal sketch follows below this list)
Exclude these files from any cloudflare caching. This is accomplished by setting a caching rule like
```
(http.request.uri.path eq "/sw.js") or (http.request.uri.path eq "/index.html") or (http.request.uri.path eq "/manifest.webmanifest") or (http.request.uri.path eq "/registerSW.js")
```
in the Caching area of each domain and setting it to Bypass. To ensure that this works, the headers of these files should be validated in the browser's Dev Tools:
Cf-Cache-Status: DYNAMIC
If the file is actually cached, the status will be set to HIT instead.
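As a rough sketch of the nginx part mentioned above (the actual location matching in the frontend configuration may differ):

```nginx
# Never cache the PWA entry points
location ~* ^/(index\.html|sw\.js|registerSW\.js|manifest\.webmanifest)$ {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}
```

The header check can also be scripted instead of using the Dev Tools:

```sh
curl -sI https://klharriettes.org/sw.js | grep -i cf-cache-status   # expected: DYNAMIC
```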
Moreover, to be on the safe side, the cloudflare cache should be purged every time a new version of the frontend has been deployed.
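This can be done in the dashboard or, for example, as part of a deployment pipeline via the cloudflare API; ZONE_ID and API_TOKEN below are placeholders:

```sh
# Purge the entire cache of a zone
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/purge_cache" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```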
Further information on caching can be found here.
Captcha - Web visitor validation
With turnstile, cloudflare offers a user-friendly tool to validate web visitors - often with no user interaction at all. In the worst case, the user has to click "Verify". The turnstile validation can also run in the background without any visibility to or interaction with the web visitor (not implemented here).
A turnstile key is created once during setup and can be used for multiple domains and all subdomains thereof.
A turnstile challenge (issued when a web visitor hits a protected site and is verified) returns a token which then needs to be verified using a secret key (see below).
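For illustration, the frontend integration could look roughly like this; the site key and the callback name are placeholders, and the actual implementation may differ:

```html
<!-- Render the turnstile widget; YOUR_SITE_KEY is a placeholder -->
<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>
<div class="cf-turnstile" data-sitekey="YOUR_SITE_KEY" data-callback="onToken"></div>
<script>
  // The widget calls this with the challenge token, which we pass on
  // to the verification endpoint described in the Workers section below
  async function onToken(token) {
    const res = await fetch("https://klharriettes.org/turnstile-verification", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ token }),
    });
    const { success } = await res.json();
    console.log("visitor verified:", success);
  }
</script>
```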
Workers
cloudflare offers so-called Workers (serverless compute) which can be used to provide endpoints running small snippets of code. We use this twice in our context: to verify the turnstile token with the secret key at the endpoint https://*.klharriettes.org/turnstile-verification, and to deploy the documentation with Vitepress.
The turnstile-verification worker is a rather simple script:
```js
export default {
  async fetch(request, env) {
    // Answer CORS preflight requests
    if (request.method === "OPTIONS") {
      return new Response(null, {
        headers: {
          "Access-Control-Allow-Origin": "*",
          "Access-Control-Allow-Methods": "POST, OPTIONS",
          "Access-Control-Allow-Headers": "Content-Type",
        },
      });
    }
    if (request.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }
    try {
      const { token } = await request.json();
      const secret = env.TURNSTILE_SECRET_KEY; // must be set in the Worker's environment variables
      // Forward the token together with the secret key to cloudflare's siteverify endpoint
      const response = await fetch("https://challenges.cloudflare.com/turnstile/v0/siteverify", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ secret, response: token }),
      });
      const data = await response.json();
      const success = data.success;
      return new Response(JSON.stringify({ success }), {
        headers: {
          "Content-Type": "application/json",
          "Access-Control-Allow-Origin": "*", // Allow requests from anywhere
        },
      });
    } catch (error) {
      return new Response(JSON.stringify({ error: "Verification failed" }), {
        status: 500,
        headers: {
          "Content-Type": "application/json",
          "Access-Control-Allow-Origin": "*",
        },
      });
    }
  },
};
```
Web Analytics
Even in the free plan, cloudflare offers (limited) web analytics capabilities. To enable this, the domain needs to be registered; no further setup is required, as cloudflare injects its analytics code while proxying requests.
Directory Structure
The ansible scripts create a standardized directory structure for all components on both the Webserver and the Piwigo Server. This structure can be customized using the variables implemented in the main installer playbook (klhhh-install.yml).
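Such a customization could look roughly like this; the variable names shown are illustrative, not necessarily those used in klhhh-install.yml:

```yaml
# Illustrative variable names only
vars:
  base_dir: /home/ubuntu
  cloud_dir: "{{ base_dir }}/cloud"
  docker_dir: "{{ base_dir }}/docker"
  logs_dir: "{{ base_dir }}/logs"
  scripts_dir: "{{ base_dir }}/scripts"
```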
The directory tree always starts at /home/ubuntu and contains the following sub-folders:
- 📁 cloud: used to mount any cloud storage with rclone. The next subfolder should be named after the storage type (google, onedrive)
- 📁 docker: contains all components installed through docker. Subfolders are named after the component/app. All mounts for a particular component should be confined to the docker/appname folder
- 📁 logs: contains script logs where required (docker-related log files should be stored in the respective folders). The log files should be named after the component/app
- 📁 scripts: contains scripts required to back up, restore, etc. Sub-folders are named after the component/app. Script files themselves should be named following the pattern purpose_appname.sh (e.g., backup_piwigo.sh).
The Webserver tree, for instance, looks like this:
📁 .
├─ 📁 cloud
│ └─ 📁 google
│ └─ 📁 Backups
├─ 📁 docker
│ ├─ 📁 cloudflare
│ ├─ 📁 frontend
│ │ └─ 📁 app
│ ├─ 📁 metabase
│ │ ├─ 📁 plugins
│ │ └─ 📁 urandom
│ ├─ 📁 mysql
│ │ ├─ 📁 backups
│ │ ├─ 📁 data
│ │ └─ 📁 ts-mysql
│ └─ 📁 strapi
│ └─ 📁 app
├─ 📁 logs
├─ 📁 scripts
│ ├─ 📁 mysql
│ └─ 📁 strapi
Miscellaneous
In order to provide a secure and convenient work environment on the production servers, a number of helper tools are installed on all servers:
docker - where possible, the solutions are installed in a docker environment, hence we need the respective runtime.
rclone - a smart tool to connect any type of cloud storage to a linux machine. We connect Google Drive to ~/cloud/google/Backups in order to store all backups (see the sketch after this list). rclone installs itself as a system service and can be started and stopped using sudo systemctl start/stop rclone.
tailscale - the access to the database management tool (adminer) needs to be protected using a VPN. tailscale is a modern, light-weight, WireGuard-based VPN. For this purpose, tailscale serve is used and (at the time of writing) started manually using
```sh
tailscale serve http://localhost:8080
```
INFO
The adminer port is set to 8080 in the respective docker-compose.yml file.
some other optimizations like vim settings and adaptations of the system prompt using starship
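As a manual illustration of what the rclone service does; the remote name google is an assumption, the actual service configuration is set up during installation:

```sh
# Mount the Google Drive remote at the standard backup location (remote name is illustrative)
rclone mount google:Backups ~/cloud/google/Backups --daemon
```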
