Add jute.maxbuffer to Zookeeper environment opts

Adds this option based on the findings of
https://github.com/python-zk/kazoo/issues/630, whereby restores of >1MB
in size would fail. This is considered an unsafe option, but given our
use case, no actual znode should ever exceed this limit; this is purely
for the large transactions that come from a `pvc task restore` action to
an empty Zookeeper instance.
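
For context, a minimal kazoo sketch of the failure mode described in the linked issue (this is not the actual `pvc task restore` code; the snapshot dict, paths, and host are hypothetical): each znode stays well under 1MB, but the multi-op transaction that recreates them all is serialized into a single request, and it is that request's total size which exceeds the default jute.maxbuffer of 1048575 bytes.

```python
from kazoo.client import KazooClient

# Hypothetical restore payload: many small znodes, each well under 1MB,
# but several MB in total once combined into one transaction request.
snapshot = {f"/config/node{i}": b"{...}" * 1000 for i in range(2000)}

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed local coordinator
zk.start()
zk.ensure_path("/config")  # parent must exist before the transaction runs

# A single multi-op transaction, as a restore to an empty instance would issue.
# The whole request travels as one packet, so its combined size (not any
# individual znode) is what must fit within the server's jute.maxbuffer.
transaction = zk.transaction()
for path, data in sorted(snapshot.items()):
    transaction.create(path, data)
results = transaction.commit()

zk.stop()
```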
Joshua Boniface 2023-09-01 15:42:24 -04:00
parent 075ce8ea22
commit bcb5962353
1 changed file with 7 additions and 1 deletion

@@ -5,6 +5,12 @@ ZOOCFG=/etc/zookeeper/conf/zoo.cfg
 ZOO_LOG_DIR=/var/log/zookeeper
 ZOO_LOG4J_PROP=INFO,ROLLINGFILE
 JMXLOCALONLY=false
-JAVA_OPTS="-Djava.net.preferIPv4Stack=True"
+# java.net.preferIPv4Stack=True
+# Prefer IPv4 over IPv6 to avoid strange headaches in mixed environments.
+# jute.maxbuffer=67108864
+# Increase the maximum buffer size from 1048575 (1MB) to 67108864 (64MB); required to allow a single `create`
+# transaction, in the /api/v1/restore specifically, of >1MB of data; 64MB seems a reasonable limit given my
+# cluster is only ~5.2MB of raw JSON data and beyond 12x that seems like a cluster too large for PVC.
+JAVA_OPTS="-Djava.net.preferIPv4Stack=True -Djute.maxbuffer=67108864"
 JAVA=/usr/bin/java
 CLASSPATH="/etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/netty.jar:/usr/share/java/slf4j-api.jar:/usr/share/java/slf4j-log4j12.jar:/usr/share/java/zookeeper.jar"
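
After deploying the updated environment file and restarting ZooKeeper, one way to sanity-check the raised limit is to write and read back a single payload larger than the old 1MB default. This is a rough sketch under the assumptions of a local server at 127.0.0.1:2181 and no additional client-side size limit; the test path is made up and is removed afterwards.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed local ZooKeeper running with the new JAVA_OPTS
zk.start()

# ~2MB payload: above the old 1048575-byte default, far below the new 64MB cap.
payload = b"x" * (2 * 1024 * 1024)
test_path = "/jute-maxbuffer-test"  # hypothetical scratch znode

zk.create(test_path, payload)
data, stat = zk.get(test_path)
assert len(data) == len(payload), "read back fewer bytes than written"

zk.delete(test_path)
zk.stop()
```

This only checks that the server now accepts oversized packets; as the commit message notes, PVC itself still keeps individual znodes small, and the raised limit exists purely for the large restore transactions.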