Continue tweaking the Matrix post

This commit is contained in:
Joshua Boniface 2022-10-17 10:01:33 -04:00
parent f9ffef0a77
commit b1c6fd541e
1 changed file with 331 additions and 241 deletions


@@ -91,10 +91,10 @@ In order to provide a single entrypoint to the load balancers, the administrator
```
# Global configuration options.
global_defs {
# Use a dedicated IPv4 multicast group; adjust the last octet if this conflicts within your network.
vrrp_mcast_group4 224.0.0.21
# Use VRRP version 3 in strict mode and with no iptables configuration.
vrrp_version 3
vrrp_strict
vrrp_iptables
@@ -110,24 +110,24 @@ vrrp_script chk {
# Primary IPv4 VIP configuration.
vrrp_instance VIP_4 {
# Initial state, MASTER on both hosts to ensure that at least one host becomes active immediately on boot.
state MASTER
# Interface to place the VIP on; this is optional though still recommended on single-NIC machines; replace "ens2" with your actual NIC name.
interface ens2
# A dedicated, unique virtual router ID for this cluster; adjust this if required.
virtual_router_id 21
# The priority. Set to 200 for the primary (first) server, and to 100 for the secondary (second) server.
priority 200
# The (list of) virtual IP address(es) with CIDR subnet mask for the "blbvip" host.
virtual_ipaddress {
10.0.0.2/24
}
# Use the HAProxy check script for this VIP.
track_script {
chk
}
@@ -529,28 +529,73 @@ ma1sd is an optional component for Matrix, providing 3PID (e.g. email, phone num
## Step 6 - Reverse proxy
For this guide, HAProxy was selected as the reverse proxy of choice. This is mostly due to my familiarity with it, but also to a lesser degree for its more advanced functionality and, in my opinion, nicer configuration syntax. This section provides configuration for a "load-balanced", multi-server instance with two additional slave worker servers and with separate proxy servers; a single-server instance with basic split workers can be made by removing the additional servers. This will allow the homeserver to grow to many dozens or even hundreds of users. In this setup, the load balancer is separated out onto a dedicated pair of servers, with a `keepalived` VIP (virtual IP address) shared between them. The name `mlbvip` should resolve to this IP, and all previous worker configurations should use this `mlbvip` hostname as the connection target for the replication directives. Both a reasonable `keepalived` configuration for the VIP and the HAProxy configuration are provided.
The two proxy hosts can be named as desired, in my case using the names `mlb1` and `mlb2`. These names must resolve in DNS, or be specified in `/etc/hosts` on both servers.
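For example, the `/etc/hosts` entries on each proxy host might look like the following; the host addresses here are placeholders, so substitute addresses from your own subnet (the `mlbvip` entry points at the Keepalived VIP rather than at either host's own address):

```
10.0.0.11 mlb1.i.bonilan.net mlb1
10.0.0.12 mlb2.i.bonilan.net mlb2
10.0.0.10 mlbvip.i.bonilan.net mlbvip
```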
The Keepalived configuration below can be used on both proxy hosts, and inline comments provide additional clarification and information as well as indicating any changes required between the hosts. The VIP should be selected from the free IPs of your server subnet.
```
# Global configuration options.
global_defs {
# Use a dedicated IPv4 multicast group; adjust the last octet if this conflicts within your network.
vrrp_mcast_group4 224.0.0.21
# Use VRRP version 3 in strict mode and with no iptables configuration.
vrrp_version 3
vrrp_strict
vrrp_iptables
}
# HAProxy check script, to ensure that this host will not become MASTER if HAProxy is not active.
vrrp_script chk {
script "/usr/bin/haproxyctl show info"
interval 5
rise 2
fall 2
}
# Primary IPv4 VIP configuration.
vrrp_instance VIP_4 {
# Initial state, MASTER on both hosts to ensure that at least one host becomes active immediately on boot.
state MASTER
# Interface to place the VIP on; this is optional though still recommended on single-NIC machines; replace "ens2" with your actual NIC name.
interface ens2
# A dedicated, unique virtual router ID for this cluster; adjust this if required.
virtual_router_id 21
# The priority. Set to 200 for the primary (first) server, and to 100 for the secondary (second) server.
priority 200
# The (list of) virtual IP address(es) with CIDR subnet mask.
virtual_ipaddress {
10.0.0.10/24
}
# Use the HAProxy check script for this VIP.
track_script {
chk
}
}
```
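The MASTER election this configuration relies on can be sketched in a few lines of plain Python (an illustration of the VRRP decision rule, not keepalived itself): the node advertising the higher priority (200 vs. 100 above) holds the VIP, and on a priority tie VRRP falls back to comparing primary IP addresses. The host names and addresses below are example values.

```
# Sketch of the VRRP MASTER election used by the keepalived config above.
# Decision rule per RFC 5798: highest priority wins; ties go to the
# higher primary IP address.
from ipaddress import ip_address

def elect_master(nodes):
    """nodes: list of (name, priority, primary_ip) tuples for live candidates."""
    return max(nodes, key=lambda n: (n[1], int(ip_address(n[2]))))[0]

nodes = [("mlb1", 200, "10.0.0.11"), ("mlb2", 100, "10.0.0.12")]
print(elect_master(nodes))  # mlb1 holds the VIP

# If the HAProxy check script fails on mlb1, keepalived stops advertising
# there, and the remaining candidate takes over the VIP:
print(elect_master([("mlb2", 100, "10.0.0.12")]))  # mlb2
```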
Once the above configuration is installed at `/etc/keepalived/keepalived.conf`, restart the Keepalived service with `sudo systemctl restart keepalived` on each host. You should see the VIP become active on the first host.
The HAProxy configuration below can be used verbatim on both proxy hosts, and inline comments provide additional clarification and information to avoid breaking up the configuration snippet. In this example we use `peer` configuration to enable the use of `stick-tables` directives, which ensure that individual user sessions are synchronized between the HAProxy instances during failovers; with this setting, if the hostnames of the load balancers do not resolve, HAProxy will not start. Some additional, advanced features are used in several ACLs to ensure that, for instance, specific users and rooms are always directed to the same workers if possible, as required by the individual workers per [the Matrix documentation](https://github.com/matrix-org/synapse/blob/master/docs/workers.md).
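The stick-table behaviour can be illustrated with a toy model (plain Python, not HAProxy): the first request with a given key, here the `Authorization` header, is balanced round-robin, the chosen server is recorded, and every later request with the same key goes to the same server. The `peers` section replicates this table between the two load balancers, so the pinning survives a failover.

```
from itertools import cycle

# Toy model of an HAProxy stick-table with "stick on hdr(Authorization)".
# The real table is size-bounded and entries expire (e.g. after 72h);
# this sketch shows only the pinning behaviour.
class StickTable:
    def __init__(self, servers):
        self._rr = cycle(servers)   # round-robin for unseen keys
        self.table = {}             # key -> pinned server (shared via peers)

    def route(self, key):
        if key not in self.table:
            self.table[key] = next(self._rr)  # first hit: balance normally
        return self.table[key]                # later hits: stick

lb = StickTable(["synapse1", "synapse2"])
alice = lb.route("Bearer token-alice")
bob = lb.route("Bearer token-bob")
assert lb.route("Bearer token-alice") == alice  # alice stays pinned
```

In HAProxy the same lookup is what the `stick-table` and `stick on` directives in the configuration implement, with the table contents replicated over the `peers keepalived-pair` connection.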
```
# Global settings - tune HAProxy for optimal performance, administration, and security.
global
# Send logs to the "local6" service on the local host, via an rsyslog UDP listener. Enable debug logging to log individual connections.
log ip6-localhost:514 local6 debug
log-send-hostname
chroot /var/lib/haproxy
pidfile /run/haproxy/haproxy.pid
# Use multi-threaded support (available with HAProxy 1.8+) for optimal performance in high-load situations. Adjust `nbthread` as needed for your host's core count (2-4 is optimal).
nbproc 1
nbthread 4
# Provide a stats socket for `hatop`
stats socket /var/lib/haproxy/admin.sock mode 660 level admin process 1
@@ -567,13 +612,12 @@ global
# Set default SSL configurations, including a modern highly-secure configuration requiring TLS1.2 client support.
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
ssl-default-server-ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384
ssl-default-server-options ssl-min-ver TLSv1.2 no-tls-tickets
defaults
log global
@@ -587,287 +631,333 @@ defaults
default-server init-addr libc,last,none
timeout client 30s
timeout connect 30s
timeout server 300s
timeout tunnel 3600s
timeout http-keep-alive 60s
timeout http-request 30s
timeout queue 60s
timeout tarpit 60s
peers keepalived-pair
# Peers for site bl0
peer mlb1.i.bonilan.net mlb1.i.bonilan.net:1023
peer mlb2.i.bonilan.net mlb2.i.bonilan.net:1023

resolvers nsX
nameserver ns1 10.101.0.61:53
nameserver ns2 10.101.0.62:53

userlist admin
# NOTE: the `password` keyword expects a crypt(3)-style hash (e.g. from `mkpasswd -m sha-512`); use `insecure-password` for a plaintext password instead, and ensure this file is not world-readable.
user admin password MySuperSecretPassword123

listen stats
bind :::5555 v4v6
mode http
stats enable
stats uri /
stats hide-version
stats refresh 10s
stats show-node
stats show-legends
acl is_admin http_auth(admin)
http-request auth realm "Admin access" if !is_admin
# HTTP frontend - provides the unencrypted HTTP listener, by default only redirecting to HTTPS
frontend http
bind :::80 v4v6
mode http
option httplog
# Uncomment these lines if you want to run Let's Encrypt on the local machine too, and this will forward requests to the Let's Encrypt backend later in this file (optional)
#acl url_letsencrypt path_beg /.well-known/acme-challenge/
#use_backend letsencrypt if url_letsencrypt
redirect scheme https if !{ ssl_fc }
# HTTPS frontend - provides the main HTTPS listener, both on port 443 for clients and port 8448 for federation
frontend https
# Bind to both ports, using certificates from `/etc/ssl/letsencrypt`; point this at whichever directory contains your (combined format) certificates.
bind :::443 v4v6 ssl crt /etc/ssl/letsencrypt/ alpn h2,http/1.1
bind :::8448 v4v6 ssl crt /etc/ssl/letsencrypt/ alpn h2,http/1.1
mode http
option httplog
# Capture the Host header to forward along
capture request header Host len 32
# Add X-Forwarded headers to alert backend processes that these requests are proxied
http-request set-header X-Forwarded-Proto https
http-request add-header X-Forwarded-Host %[req.hdr(host)]
http-request add-header X-Forwarded-Server %[req.hdr(host)]
http-request add-header X-Forwarded-Port %[dst_port]
# Method ACLs
acl http_method_get method GET
# Domain ACLs
acl host_matrix hdr_dom(host) im.bonifacelabs.ca
acl host_element hdr_dom(host) chat.bonifacelabs.ca
# URL ACLs
# Sync requests
acl url_workerX_stick-auth path_reg ^/_matrix/client/(r0|v3)/sync$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3)/events$
acl url_workerX_stick-auth path_reg ^/_matrix/client/(api/v1|r0|v3)/initialSync$
acl url_workerX_stick-auth path_reg ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
acl url_workerX_generic path_reg ^/_matrix/federation/v1/event/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/state/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/state_ids/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/backfill/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/get_missing_events/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/publicRooms
acl url_workerX_generic path_reg ^/_matrix/federation/v1/query/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/make_join/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/make_leave/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/send_join/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/send_leave/
acl url_workerX_generic path_reg ^/_matrix/federation/(v1|v2)/invite/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/event_auth/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/exchange_third_party_invite/
acl url_workerX_generic path_reg ^/_matrix/federation/v1/user/devices/
acl url_workerX_generic path_reg ^/_matrix/key/v2/query
acl url_workerX_generic path_reg ^/_matrix/federation/v1/hierarchy/
# Inbound federation transaction request
acl url_workerX_stick-src path_reg ^/_matrix/federation/v1/send/
# Client API requests
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
acl url_workerX_generic path_reg ^/_matrix/client/v1/rooms/.*/hierarchy$
acl url_workerX_generic path_reg ^/_matrix/client/unstable/org.matrix.msc2716/rooms/.*/batch_send$
acl url_workerX_generic path_reg ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/account/3pid$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/account/whoami$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/devices$
acl url_workerX_generic path_reg ^/_matrix/client/versions$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Encryption requests
# Note that ^/_matrix/client/(r0|v3|unstable)/keys/upload/ requires `worker_main_http_uri`
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/query$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/changes$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/claim$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/room_keys/
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/keys/upload/
# Registration/login requests
acl url_workerX_generic path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/register$
acl url_workerX_generic path_reg ^/_matrix/client/v1/register/m.login.registration_token/validity$
# Event sending requests
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
# User directory search requests
acl url_workerX_generic path_reg ^/_matrix/client/(r0|v3|unstable)/user_directory/search$
# Pagination requests
acl url_workerX_stick-path path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
# Push rules (GET-only)
acl url_push-rules path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
# Directory worker endpoints
acl url_directory-worker path_reg ^/_matrix/client/(r0|v3|unstable)/user_directory/search$
# Event persister endpoints
acl url_stream-worker path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/sendToDevice/
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/.*/tags
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/.*/account_data
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
acl url_stream-worker path_reg ^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
acl url_stream-worker path_reg ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# Backend directors
use_backend synapseX_worker_generic if host_matrix url_workerX_generic
use_backend synapseX_worker_generic if host_matrix url_push-rules http_method_get
use_backend synapseX_worker_stick-auth if host_matrix url_workerX_stick-auth
use_backend synapseX_worker_stick-src if host_matrix url_workerX_stick-src
use_backend synapseX_worker_stick-path if host_matrix url_workerX_stick-path
use_backend synapse0_directory_worker if host_matrix url_directory-worker
use_backend synapse0_stream_worker if host_matrix url_stream-worker
# Master workers (single-instance) - Federation media repository requests
acl url_mediarepository path_reg ^/_matrix/media/
acl url_mediarepository path_reg ^/_synapse/admin/v1/purge_media_cache$
acl url_mediarepository path_reg ^/_synapse/admin/v1/room/.*/media.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/user/.*/media.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/media/.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/quarantine_media/.*$
acl url_mediarepository path_reg ^/_synapse/admin/v1/users/.*/media$
use_backend synapse0_media_repository if host_matrix url_mediarepository
# MXISD/MA1SD worker
acl url_ma1sd path_reg ^/_matrix/client/(api/v1|r0|unstable)/user_directory
acl url_ma1sd path_reg ^/_matrix/client/(api/v1|r0|unstable)/login
acl url_ma1sd path_reg ^/_matrix/identity
use_backend synapse0_ma1sd if host_matrix url_ma1sd
# Webhook service
acl url_webhook path_reg ^/webhook
use_backend synapse0_webhook if host_matrix url_webhook
# .well-known configs
acl url_wellknown path_reg ^/.well-known/matrix
use_backend elementX_http if host_matrix url_wellknown
# Catchall Matrix and Element
use_backend synapse0_master if host_matrix
use_backend elementX_http if host_element
# Default to Element
default_backend elementX_http
frontend ma1sd_http
bind :::8090 v4v6
mode http
option httplog
use_backend synapse0_ma1sd

backend letsencrypt
mode http
server elbvip.i.bonilan.net elbvip.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4

backend elementX_http
mode http
balance leastconn
option httpchk GET /index.html
# Force users (by source IP) to visit the same backend server
stick-table type ipv6 size 5000k peers keepalived-pair expire 72h
stick on src
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server element1 element1.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4 check inter 5000 cookie element1.i.bonilan.net
server element2 element2.i.bonilan.net:80 resolvers nsX resolve-prefer ipv4 check inter 5000 cookie element2.i.bonilan.net

backend synapse0_master
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8008 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

backend synapse0_directory_worker
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8033 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

backend synapse0_stream_worker
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8035 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

backend synapse0_media_repository
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8095 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

backend synapse0_ma1sd
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8090 resolvers nsX resolve-prefer ipv4 check inter 5000

backend synapse0_webhook
mode http
balance roundrobin
option httpchk GET /
server synapse0.i.bonilan.net synapse0.i.bonilan.net:4785 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

backend synapseX_worker_generic
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
# Synchrotron backend backend synapseX_worker_stick-auth
backend synapse_synchrotron
mode http mode http
server localhost 127.0.0.1:8096 check inter 5000 balance roundrobin
option httpchk
# Force users (by Authorization header) to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on hdr(Authorization)
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
# User Dir backend backend synapseX_worker_stick-path
backend synapse_user_dir
mode http mode http
server localhost 127.0.0.1:8097 check inter 5000 balance roundrobin
``` option httpchk
# Force users to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on path,word(5,/) if { path_reg ^/_matrix/client/(r0|unstable)/rooms }
stick on path,word(6,/) if { path_reg ^/_matrix/client/api/v1/rooms }
stick on path
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
Once the above configuration is installed at `/etc/haproxy/haproxy.cfg`, restart the HAProxy service with `sudo systemctl restart haproxy`. You will now have access to the various endpoints on ports 443 and 8448, with a redirect from port 80 to port 443 to enforce SSL for clients.

### Multi-Server, Load-Balanced Instance

The more advanced configuration of multiple load-balanced workers on multiple servers provides a more redundant and scalable instance, allowing the homeserver to grow to many dozens or even hundreds of users. In this setup, the load balancer is separated out onto a dedicated pair of servers, with a `keepalived` VIP (virtual IP address) shared between them. The name `mlbvip` should resolve to this IP, and all previous worker configurations should use this `mlbvip` hostname as the connection target for the replication directives. Both a reasonable `keepalived` configuration for the VIP and the HAProxy configuration are provided.

The two proxy hosts can be named as desired, in my case using the names `mlb1` and `mlb2`. These names must resolve in DNS, or be specified in `/etc/hosts` on both servers.

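For example, the `/etc/hosts` entries on each proxy host might look like the following; the `mlb1` and `mlb2` addresses here are placeholders on the same subnet as the VIP, so adjust them to your network:

```
10.0.0.11 mlb1
10.0.0.12 mlb2
10.0.0.10 mlbvip
```
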
The Keepalived configuration below can be used on both proxy hosts, and inline comments provide additional clarification and information as well as indicating any changes required between the hosts. The VIP should be selected from the free IPs of your server subnet.

```
# Global configuration options.
global_defs {
# Use a dedicated IPv4 multicast group; adjust the last octet if this conflicts within your network.
vrrp_mcast_group4 224.0.0.21
# Use VRRP version 3 in strict mode and with no iptables configuration.
vrrp_version 3
vrrp_strict
vrrp_iptables
}
# HAProxy check script, to ensure that this host will not become PRIMARY if HAProxy is not active.
vrrp_script chk {
script "/usr/bin/haproxyctl show info"
interval 5
rise 2
fall 2
}
# Primary IPv4 VIP configuration.
vrrp_instance VIP_4 {
# Initial state, MASTER on both hosts to ensure that at least one host becomes active immediately on boot.
state MASTER
# Interface to place the VIP on; this is optional though still recommended on single-NIC machines; replace "ens2" with your actual NIC name.
interface ens2
# A dedicated, unique virtual router ID for this cluster; adjust this if required.
virtual_router_id 21
# The priority. Set to 200 for the primary (first) server, and to 100 for the secondary (second) server.
priority 200
# The (list of) virtual IP address(es) with CIDR subnet mask.
virtual_ipaddress {
10.0.0.10/24
}
# Use the HAProxy check script for this VIP.
track_script {
chk
}
}
```
Once the above configuration is installed at `/etc/keepalived/keepalived.conf`, restart the Keepalived service with `sudo systemctl restart keepalived` on each host. You should see the VIP become active on the first host.
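To make the priority settings concrete, the election they produce can be sketched as a toy shell function (illustrative only; `keepalived` itself does this via VRRP advertisements, and the host names and health inputs here are assumptions):

```shell
# Toy model of the VRRP election above: while the higher-priority node's
# HAProxy check passes, it holds the VIP; otherwise the peer takes over.
elect_vip_holder() {
    mlb1_healthy="$1"  # 1 if mlb1 (priority 200) passes the chk script
    mlb2_healthy="$2"  # 1 if mlb2 (priority 100) passes the chk script
    if [ "$mlb1_healthy" -eq 1 ]; then
        echo "mlb1"
    elif [ "$mlb2_healthy" -eq 1 ]; then
        echo "mlb2"
    else
        echo "none"
    fi
}

elect_vip_holder 1 1   # both healthy: mlb1 holds the VIP
elect_vip_holder 0 1   # mlb1's HAProxy down: mlb2 takes over
```

The `rise`/`fall` settings in the `chk` script mean a node is only considered unhealthy (or healthy again) after two consecutive check results, so brief blips do not bounce the VIP.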
The HAProxy configuration below can be used verbatim on both proxy hosts, and inline comments provide additional clarification and information to avoid breaking up the configuration snippet. In this example we use `peers` configuration to enable the use of `stick-table` directives, which ensure that individual user sessions are synchronized between the HAProxy instances during failovers; note that with this setting, HAProxy will not start if the hostnames of the load balancers do not resolve. Some additional, advanced features are used in several ACLs to ensure that, for instance, specific users and rooms are always directed to the same workers if possible, which is required by the individual workers as specified in [the Matrix documentation](https://github.com/matrix-org/synapse/blob/master/docs/workers.md).
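The `peers keepalived-pair` referenced by the `stick-table` directives is a `peers` section along these lines; the hostnames are the proxy hosts named earlier, while the port is an assumption and can be any free port shared by both hosts (each instance identifies itself by its own hostname, which is why HAProxy will not start if these names do not resolve):

```
peers keepalived-pair
peer mlb1 mlb1:1024
peer mlb2 mlb2:1024
```
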
```
# Stream Worker backend
backend synapse0_stream_worker
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8035 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

# Media Repository backend
backend synapse0_media_repository
mode http
balance roundrobin
option httpchk
retries 0
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8095 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

# ma1sd backend
backend synapse0_ma1sd
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse0.i.bonilan.net synapse0.i.bonilan.net:8090 resolvers nsX resolve-prefer ipv4 check inter 5000

# Webhook backend
backend synapse0_webhook
mode http
balance roundrobin
option httpchk GET /
server synapse0.i.bonilan.net synapse0.i.bonilan.net:4785 resolvers nsX resolve-prefer ipv4 check inter 5000 backup

# Generic Worker backend
backend synapseX_worker_generic
mode http
balance roundrobin
option httpchk
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000

# Authorization-stuck Worker backend
backend synapseX_worker_stick-auth
mode http
balance roundrobin
option httpchk
# Force users (by Authorization header) to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on hdr(Authorization)
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000

# Path-stuck Worker backend
backend synapseX_worker_stick-path
mode http
balance roundrobin
option httpchk
# Force users to visit the same backend server
stick-table type string len 1024 size 5000k peers keepalived-pair expire 72h
stick on path,word(5,/) if { path_reg ^/_matrix/client/(r0|unstable)/rooms }
stick on path,word(6,/) if { path_reg ^/_matrix/client/api/v1/rooms }
stick on path
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000

# Source-IP-stuck Worker backend
backend synapseX_worker_stick-src
mode http
balance roundrobin
option httpchk
# Force users (by source IP) to visit the same backend server
stick-table type ipv6 size 5000k peers keepalived-pair expire 72h
stick on src
errorfile 500 /etc/haproxy/sorryserver.http
errorfile 502 /etc/haproxy/sorryserver.http
errorfile 503 /etc/haproxy/sorryserver.http
errorfile 504 /etc/haproxy/sorryserver.http
server synapse1.i.bonilan.net synapse1.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
server synapse2.i.bonilan.net synapse2.i.bonilan.net:8030 resolvers nsX resolve-prefer ipv4 check inter 5000
```

Once the above configurations are installed on each server, restart the HAProxy service with `sudo systemctl restart haproxy`. You will now have access to the various endpoints on ports 443 and 8448, with a redirect from port 80 to port 443 to enforce SSL for clients.
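As an aside on the path-based stick rules: `path,word(5,/)` extracts the fifth non-empty `/`-separated word of the request path, which for `/_matrix/client/r0/rooms/<roomid>/...` is the room ID, so every request for a given room yields the same stick-table key and lands on the same worker. A quick approximation with `cut` (whose field numbering, unlike HAProxy's `word()`, counts the empty field before the leading `/`, hence `-f6`; the room ID is a made-up example):

```shell
# The stick key HAProxy's path,word(5,/) would extract for a room endpoint.
path='/_matrix/client/r0/rooms/!abcdef:example.tld/messages'
echo "$path" | cut -d/ -f6
# -> !abcdef:example.tld
```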