Continue tweaking the Matrix post

Joshua Boniface 2022-10-17 10:01:33 -04:00
parent f9ffef0a77
commit b1c6fd541e
1 changed file with 331 additions and 241 deletions


@@ -91,10 +91,10 @@ In order to provide a single entrypoint to the load balancers, the administrator
```
# Global configuration options.
global_defs {
# Use a dedicated IPv4 multicast group; adjust the last octet if this conflicts within your network.
vrrp_mcast_group4 224.0.0.21
# Use VRRP version 3 in strict mode and with no iptables configuration.
vrrp_version 3
vrrp_strict
vrrp_iptables
}
# HAProxy check script, to ensure that this host will not become MASTER if HAProxy is not active.
vrrp_script chk {
script "/usr/bin/haproxyctl show info"
interval 5
rise 2
fall 2
}
# Primary IPv4 VIP configuration.
vrrp_instance VIP_4 {
# Initial state, MASTER on both hosts to ensure that at least one host becomes active immediately on boot.
state MASTER
# Interface to place the VIP on; this is optional though still recommended on single-NIC machines; replace "ens2" with your actual NIC name.
interface ens2
# A dedicated, unique virtual router ID for this cluster; adjust this if required.
virtual_router_id 21
# The priority. Set to 200 for the primary (first) server, and to 100 for the secondary (second) server.
priority 200
# The (list of) virtual IP address(es) with CIDR subnet mask for the "blbvip" host.
virtual_ipaddress {
10.0.0.2/24
}
# Use the HAProxy check script for this VIP.
track_script {
chk
}
}
```
@@ -529,28 +529,73 @@ ma1sd is an optional component for Matrix, providing 3PID (e.g. email, phone num
## Step 6 - Reverse proxy
For this guide, HAProxy was selected as the reverse proxy of choice. This is mostly due to my familiarity with it, but also to a lesser degree for its more advanced functionality and, in my opinion, nicer configuration syntax. This section provides configuration for a "load-balanced", multi-server instance with two additional slave worker servers and separate proxy servers; a single-server instance with basic split workers can be made by removing the additional servers and replacing every instance of the `mlbvip` hostname with `localhost`, `127.0.0.1`, or `::1` so that connections go only to the local system. This will allow the homeserver to grow to many dozens or even hundreds of users. In this setup, the load balancer is separated out onto a separate pair of servers, with a `keepalived` VIP (virtual IP address) shared between them. The name `mlbvip` should resolve to this IP, and all previous worker configurations should use this `mlbvip` hostname as the connection target for the replication directives. Both a reasonable `keepalived` configuration for the VIP and the HAProxy configuration are provided.
The two proxy hosts can be named as desired, in my case using the names `mlb1` and `mlb2`. These names must resolve in DNS, or be specified in `/etc/hosts` on both servers.
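For example, a minimal `/etc/hosts` on both hosts might look like the following sketch, where the two host IPs are placeholders and the `mlbvip` address matches the VIP configured below:
```
10.0.0.11   mlb1
10.0.0.12   mlb2
10.0.0.10   mlbvip
```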
The Keepalived configuration below can be used on both proxy hosts, and inline comments provide additional clarification and information as well as indicating any changes required between the hosts. The VIP should be selected from the free IPs of your server subnet.
```
# Global configuration options.
global_defs {
# Use a dedicated IPv4 multicast group; adjust the last octet if this conflicts within your network.
vrrp_mcast_group4 224.0.0.21
# Use VRRP version 3 in strict mode and with no iptables configuration.
vrrp_version 3
vrrp_strict
vrrp_iptables
}
# HAProxy check script, to ensure that this host will not become MASTER if HAProxy is not active.
vrrp_script chk {
script "/usr/bin/haproxyctl show info"
interval 5
rise 2
fall 2
}
# Primary IPv4 VIP configuration.
vrrp_instance VIP_4 {
# Initial state, MASTER on both hosts to ensure that at least one host becomes active immediately on boot.
state MASTER
# Interface to place the VIP on; this is optional though still recommended on single-NIC machines; replace "ens2" with your actual NIC name.
interface ens2
# A dedicated, unique virtual router ID for this cluster; adjust this if required.
virtual_router_id 21
# The priority. Set to 200 for the primary (first) server, and to 100 for the secondary (second) server.
priority 200
# The (list of) virtual IP address(es) with CIDR subnet mask.
virtual_ipaddress {
10.0.0.10/24
}
# Use the HAProxy check script for this VIP.
track_script {
chk
}
}
```
Once the above configuration is installed at `/etc/keepalived/keepalived.conf`, restart the Keepalived service with `sudo systemctl restart keepalived` on each host. You should see the VIP become active on the first host.
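You can verify this from the shell; a quick check, assuming the `ens2` interface name used above:
```
# On the active host, the VIP (10.0.0.10 in this example) should be listed on ens2
ip -4 addr show dev ens2
```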
The HAProxy configuration below can be used verbatim on both proxy hosts, and inline comments provide additional clarification and information to avoid breaking up the configuration snippet. In this example we use `peer` configuration to enable the use of `stick-tables` directives, which ensure that individual user sessions are synchronized between the HAProxy instances during failovers; with this setting, if the hostnames of the load balancers do not resolve, HAProxy will not start. Some additional, advanced features are used in several ACLs to ensure that, for instance, specific users and rooms are always directed to the same workers if possible, which is required by the individual workers as specified in [the Matrix documentation](https://github.com/matrix-org/synapse/blob/master/docs/workers.md).
```
# Global settings - tune HAProxy for optimal performance, administration, and security.
global
# Send logs to the "local6" service on the local host, via an rsyslog UDP listener. Enable debug logging to log individual connections.
log ip6-localhost:514 local6 debug
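# For example, such a UDP listener can be enabled with `module(load="imudp")` and `input(type="imudp" port="514")` in rsyslog.conf (an illustrative sketch; adjust to your logging setup)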
log-send-hostname
chroot /var/lib/haproxy
pidfile /run/haproxy/haproxy.pid
# Use multi-threaded support (available with HAProxy 1.8+) for optimal performance in high-load situations. Adjust `nbthread` as needed for your host's core count (2-4 is optimal).
nbproc 1
nbthread 4
# Provide a stats socket for `hatop`
stats socket /var/lib/haproxy/admin.sock mode 660 level admin process 1
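# For example, live stats can then be inspected with `hatop -s /var/lib/haproxy/admin.sock` (assuming the hatop package is installed)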
# Set default SSL configurations, including a modern highly-secure configuration requiring TLS1.2 client support.
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
ssl-default-server-ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384
ssl-default-server-options ssl-min-ver TLSv1.2 no-tls-tickets
# Default settings - provide some default settings that are applicable to (most) of the listeners and backends below.
defaults
log global
default-server init-addr libc,last,none
timeout client 30s
timeout connect 30s
timeout server 300s
timeout tunnel 3600s
timeout http-keep-alive 60s
timeout http-request 30s
timeout queue 60s
timeout tarpit 60s
peers keepalived-pair
# Peers for site bl0
peer mlb1.i.bonilan.net mlb1.i.bonilan.net:1023
peer mlb2.i.bonilan.net mlb2.i.bonilan.net:1023
resolvers nsX
nameserver ns1 10.101.0.61:53
nameserver ns2 10.101.0.62:53
# Statistics listener with authentication - provides stats for the HAProxy instance via a WebUI (optional)
userlist admin
# WARNING - CHANGE ME TO A REAL PASSWORD. Use `password` with a SHA-512-hashed value, or `insecure-password` with plaintext; if you use `insecure-password`, make sure this configuration is not world-readable.
user admin insecure-password MySuperSecretPassword123
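# A SHA-512 hash for use with `password` can be generated with e.g. `mkpasswd -m sha-512` (from the Debian "whois" package)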
listen stats
bind :::5555 v4v6
mode http
stats enable
stats uri /
stats hide-version
stats refresh 10s
stats show-node
stats show-legends
acl is_admin http_auth(admin)
http-request auth realm "Admin access" if !is_admin
# HTTP frontend - provides the unencrypted HTTP listener, by default only redirecting to HTTPS
frontend http
bind :::80 v4v6
mode http
option httplog
# Uncomment these lines if you want to run Let's Encrypt on the local machine too; this will forward requests to a Certbot backend later in this file (optional)
#acl url_letsencrypt path_beg /.well-known/acme-challenge/
#use_backend certbot if url_letsencrypt
redirect scheme https if !{ ssl_fc }
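# An illustrative sketch of such a Certbot backend (the name, address, and port are assumptions):
#   backend certbot
#       mode http
#       server certbot 127.0.0.1:8888 check
# paired with running `certbot certonly --standalone --http-01-port 8888` on this host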
# HTTPS frontend - provides the main HTTPS listener, both on port 443 for clients and port 8448 for federation
frontend https
# Bind to both ports, using certificates from `/etc/ssl/letsencrypt`; point this at whichever directory contains your (combined format) certificates.
bind :::443 v4v6 ssl crt /etc/ssl/letsencrypt/ alpn h2,http/1.1
bind :::8448 v4v6 ssl crt /etc/ssl/letsencrypt/ alpn h2,http/1.1
mode http
option httplog
# Capture the Host header to forward along
capture request header Host len 32
# Add X-Forwarded headers to alert backend processes that these requests are proxied
http-request set-header X-Forwarded-Proto https
http-request add-header X-Forwarded-Host %[req.hdr(host)]
http-request add-header X-Forwarded-Server %[req.hdr(host)]
http-request add-header X-Forwarded-Port %[dst_port]
# Method ACLs
acl http_method_get method GET
# Domain ACLs - adjust these to reflect your subdomains
# In my case, I use two subdomains: "im.bonifacelabs.ca" for the Synapse Matrix homeserver itself, and "chat.bonifacelabs.ca" for the Element frontend.
# A combination of a `.well-known/matrix/server` path (at "bonifacelabs.ca") and a DNS SRV record are used for delegation, though you could run `host_matrix` on your bare domain instead, for instance.
acl host_matrix hdr_dom(host) im.bonifacelabs.ca
acl host_element hdr_dom(host) chat.bonifacelabs.ca
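# For reference, the delegation file served at /.well-known/matrix/server on the bare domain would contain {"m.server": "im.bonifacelabs.ca:443"} (illustrative; adjust to your hostname)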
# URL ACLs
# Sync requests
acl url_workerX_stick-auth path_reg ^/_matrix/client/(r0|v3)/sync$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3)/events$
acl url_workerX_stick-auth path_reg ^/_matrix/client/(api/v1|r0|v3)/initialSync$
acl url_workerX_stick-auth path_reg ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
acl url_workerX_generic path_reg ^/_matrix/federation/v1/event/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/state/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/state_ids/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/backfill/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/get_missing_events/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/publicRooms
acl url_workerX_generic path_reg ^/_matrix/federation/v1/query/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/make_join/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/make_leave/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/send_join/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/send_leave/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/invite/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/event_auth/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/exchange_third_party_invite/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/user/devices/
acl url_workerX_generic path_reg ^/_matrix/key/v2/query
acl url_workerX_generic path_reg ^/_matrix/federation/v1/hierarchy/
# Inbound federation transaction request
acl url_workerX_stick-src path_reg ^/_matrix/federation/v1/send/
# Client API requests
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
acl url_workerX_generic path_reg ^/_matrix/client/v1/rooms/.*/hierarchy$
acl url_workerX_generic path_reg ^/_matrix/client/unstable/org.matrix.msc2716/rooms/.*/batch_send$
acl url_workerX_generic path_reg ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/account/3pid$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/account/whoami$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/devices$
acl url_workerX_generic path_reg ^/_matrix/client/versions$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Encryption requests
# Note that ^/_matrix/client/(r0|v3|unstable)/keys/upload/ requires `worker_main_http_uri`
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/query$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/changes$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/claim$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/room_keys/
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/upload/
# Registration/login requests
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/register$
acl url_workerX_generic path_reg ^/_matrix/client/v1/register/m.login.registration_token/validity$
# Event sending requests
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
# User directory search requests
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/user_directory/search$
# Pagination requests
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
# Push rules (GET-only)
acl url_push-rules path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
# Directory worker endpoints
acl url_directory-worker path_reg ^/_matrix/client/(r0|v3|unstable)/user_directory/search$
# Event persister endpoints
acl url_stream-worker path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/sendToDevice/
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/.*/tags
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/.*/account_data
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
acl url_stream-worker path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# Backend directors
use_backend synapseX_worker_generic if host_matrix url_workerX_generic
use_backend synapseX_worker_generic if host_matrix url_push-rules http_method_get
use_backend synapseX_worker_stick-auth if host_matrix url_workerX_stick-auth
use_backend synapseX_worker_stick-src if host_matrix url_workerX_stick-src
use_backend synapseX_worker_stick-path if host_matrix url_workerX_stick-path
use_backend synapse0_directory_worker if host_matrix url_directory-worker
use_backend synapse0_stream_worker if host_matrix url_stream-worker
# Master workers (single-instance) - Federation media repository requests
acl url_mediarepository path_reg ^/_matrix/media/
acl url_mediarepository path_reg ^/_synapse/admin/v1/purge_media_cache$
acl url_mediarepository path_reg ^/_synapse/admin/v1/room/.*/media.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/user/.*/media.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/media/.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/quarantine_media/.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/users/.*/media$
use_backend synapse0_media_repository if host_matrix url_mediarepository
# MXISD/MA1SD worker
acl url_ma1sd path_reg ^/_matrix/client/(api/v1|r0|unstable)/user_directory
acl url_ma1sd path_reg ^/_matrix/client/(api/v1|r0|unstable)/login
acl url_ma1sd path_reg ^/_matrix/identity
use_backend synapse0_ma1sd if host_matrix url_ma1sd
# Webhook service
acl url_webhook path_reg ^/webhook
use_backend synapse0_webhook if host_matrix url_webhook
# .well-known configs
acl url_wellknown path_reg ^/.well-known/matrix
use_backend elementX_http if host_matrix url_wellknown
# Catchall Matrix and Element endpoints not configured above
use_backend synapse0_master if host_matrix
use_backend elementX_http if host_element
# Default to Element if the wrong/no Host header is specified
default_backend elementX_http
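# ma1sd HTTP frontend, for identity lookups on port 8090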
frontend ma1sd_http
bind :::8090 v4v6
mode http
option httplog
use_backend synapse0_ma1sd
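# Let's Encrypt backend, forwarding ACME challenges to the "elbvip" host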
backend letsencrypt
mode http
server elbvip.i.bonilan.net elbvip.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4
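# Element frontend backend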
backend elementX_http
mode http
balance leastconn
option httpchk GET /index.html
# Force users (by source IP) to visit the same backend server
stick-table type ipv6 size 5000k peers keepalived-pair expire 72h
stick on src
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server element1 element1.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4 check inter 5000 cookie element1.i.bonilan.net
server element2 element2.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4 check inter 5000 cookie element2.i.bonilan.net
# Primary Synapse backend
backend synapse0_master
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8008 resolvers nsX resolve-prefer ipv4 check inter 5000 backup
# Directory worker backend
backend synapse0_directory_worker
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8033 resolvers nsX resolve-prefer ipv4 check inter 5000 backup
# Stream worker backend (event persister endpoints)
backend synapse0_stream_worker
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8035 resolvers nsX resolve-prefer ipv4 check inter 5000 backup
# Media Repository backend
backend synapse0_media_repository
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8095 resolvers nsX resolve-prefer ipv4 check inter 5000 backup
# ma1sd backend (optional)
backend synapse0_ma1sd
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8090 resolvers nsX resolve-prefer ipv4 check inter 5000
# Webhook service backend
backend synapse0_webhook
mode http
balance roundrobin
option httpchk GET /
server synapse0.i.bonilan.net synapse0.i.bonilan.net:4785 resolvers nsX resolve-prefer ipv4 check inter 5000 backup
# Generic worker backend
backend synapseX_worker_generic
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
# Sticky worker backend, pinned by Authorization header
backend synapseX_worker_stick-auth
mode http
balance roundrobin
option httpchk
# Force users (by Authorization header) to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on hdr(Authorization)
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
# Sticky worker backend, pinned by room path
backend synapseX_worker_stick-path
mode http
balance roundrobin
option httpchk
# Force users to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on path,word(5,/) if { path_reg ^/_matrix/client/(r0|unstable)/rooms }
stick on path,word(6,/) if { path_reg ^/_matrix/client/api/v1/rooms }
stick on path
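# For example, for /_matrix/client/r0/rooms/!abc:example.com/send/..., word(5,/) extracts the room ID, so all events for a given room land on the same worker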
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
backend synapseX_worker_stick-src
mode http
balance roundrobin
option httpchk
# Force users (by source IP) to visit the same backend server
stick-table type ipv6 size 5000k peers keepalived-pair expire 72h
stick on src
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
```
Once the above configurations are installed on each server, restart the HAProxy service with `sudo systemctl restart haproxy`. You will now have access to the various endpoints on ports 443 and 8448 with a redirection from port 80 to port 443 to enforce SSL from clients.
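If you later adjust the configuration, you can validate the syntax before reloading to avoid taking down a working proxy; for example:
```
# Check the configuration file for errors (run on both proxy hosts)
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```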