Joshua Boniface ffaa4c033f Improve handling of large file uploads
By default, Werkzeug would require the entire file (be it an OVA or
image file) to be uploaded and saved to a temporary file under `/tmp`
before any further processing could occur. This blocked most of the
execution of these functions until the upload was completed.

This entirely defeated the purpose of what I was trying to do, which was
to save the uploads directly to the temporary blockdev in each case,
thus avoiding any sort of memory or (host) disk usage.

The solution is two-fold:

  1. First, ensure that the `location='args'` value is set in
  RequestParser; without this, the `files` portion would be parsed
  during the argument parsing, which was the original source of this
  blocking behaviour.

  2. Second, instead of the convoluted request handling that was being
  done here originally, entirely defer the parsing of the `files`
  arguments until the point in the code where they are ready to be
  saved. Then, using an overridden `stream_factory` that simply opens
  the temporary blockdev, the upload can commence while being written
  directly out to it, rather than using `/tmp` space (a minimal sketch
  follows this list).
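
A minimal sketch of the two changes, not the actual PVC code: it assumes
Flask-RESTful's `RequestParser` and Werkzeug's `parse_form_data`, and the
argument name `name`, the helper names, and `temp_blockdev_path` are
illustrative placeholders.

```python
import flask
from flask_restful import reqparse
from werkzeug.formparser import parse_form_data


def parse_query_args():
    # 1. Parse only the query string; location='args' keeps RequestParser
    #    away from the request body, so the upload is not spooled to /tmp
    #    during argument parsing.
    parser = reqparse.RequestParser()
    parser.add_argument('name', type=str, required=True, location='args')
    return parser.parse_args()


def save_upload_to_blockdev(temp_blockdev_path):
    # 2. Defer parsing of the `files` data until the temporary blockdev
    #    exists, then stream the upload straight into it.
    def blockdev_stream_factory(total_content_length, filename, content_type,
                                content_length=None):
        # Same signature as Werkzeug's default_stream_factory; Werkzeug
        # writes the incoming chunks into the returned file object, i.e.
        # directly onto the blockdev rather than into a /tmp file.
        return open(temp_blockdev_path, 'wb')

    stream, form, files = parse_form_data(flask.request.environ,
                                          stream_factory=blockdev_stream_factory)
    return files
```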

This does alter the error handling slightly; it is impossible to check
whether the file argument was passed until this point in the code, so it
may take longer to fail if the API consumer does not specify a file as
they should. This is a minor trade-off and I would expect my API
consumers to be sane here.
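
Continuing the sketch above (the field name `file` and the blockdev path
are again placeholders), the missing-upload check can only happen once
`parse_form_data()` has already consumed the request body:

```python
from flask_restful import abort

files = save_upload_to_blockdev('/dev/some/blockdev')  # placeholder path
# Only at this point can we tell that no file was supplied at all.
if 'file' not in files:
    abort(400, message='A file upload is required')
```
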
2020-10-19 01:00:34 -04:00
| Path | Last commit | Date |
| --- | --- | --- |
| api-daemon | Improve handling of large file uploads | 2020-10-19 01:00:34 -04:00 |
| client-cli | Allow network-less managed networks | 2020-10-18 23:13:12 -04:00 |
| daemon-common | Add cluster overprovision determination | 2020-10-18 14:57:22 -04:00 |
| debian | Bump base version to 0.9 | 2020-10-18 14:31:19 -04:00 |
| docs | Add provisioned memory to node info | 2020-10-18 14:17:15 -04:00 |
| node-daemon | Bump base version to 0.9 | 2020-10-18 14:31:19 -04:00 |
| .file-header | Update copyright header year to 2020 | 2020-01-08 19:38:02 -05:00 |
| .gitignore | Ignore swap files | 2018-06-18 21:26:36 -04:00 |
| .gitlab-ci.yml | Standardize package building | 2020-08-26 11:04:58 -04:00 |
| LICENSE | Remove licence blurb for python_dhcp_server | 2018-10-14 16:29:39 -04:00 |
| README.md | Bump version to 0.8 | 2020-08-26 10:24:44 -04:00 |
| build-and-deploy.sh | Standardize package building | 2020-08-26 11:04:58 -04:00 |
| build-deb.sh | Update package version to 0.7 | 2020-02-15 23:25:47 -05:00 |
| build-unstable-deb.sh | Standardize package building | 2020-08-26 11:04:58 -04:00 |
| gen-api-doc | Add DB migration update script | 2020-02-15 23:23:09 -05:00 |
| gen-api-migrations | Fix pvcapid config in migrations script | 2020-03-15 17:33:27 -04:00 |
| mkdocs.yml | Revert "Add material theme to docs" | 2019-07-10 15:23:26 -04:00 |
| pvc_logo.svg | A few more tweaks | 2018-06-06 02:43:34 -04:00 |

README.md

PVC - The Parallel Virtual Cluster system

NOTICE FOR GITHUB: This repository is a read-only mirror of the PVC repositories from my personal GitLab instance. Pull requests submitted here will not be merged. Issues submitted here will, however, be treated as authoritative.

PVC is a KVM+Ceph+Zookeeper-based, Free Software, scalable, redundant, self-healing, and self-managing private cloud solution designed with administrator simplicity in mind. It is built from the ground up to be redundant at the host layer, allowing the cluster to gracefully handle the loss of nodes or their components, whether due to hardware failure or to maintenance. It is able to scale from a minimum of 3 nodes up to 12 or more nodes, while retaining performance and flexibility, allowing the administrator to build a small cluster today and grow it as needed.

The major goal of PVC is to be administrator friendly, providing the power of enterprise-grade private clouds like OpenStack, Nutanix, and VMware to homelabbers, SMBs, and small ISPs, without the cost or complexity. It believes in picking the best tool for a job and abstracting it behind the cluster as a whole, freeing the administrator from the boring and time-consuming task of selecting the best component, and letting them get on with the things that really matter. Administration can be done from a simple CLI or via a RESTful API capable of building full-featured web frontends or additional applications, taking a self-documenting approach to keep the administrator learning curve as low as possible. Setup is easy and straightforward with an ISO-based node installer and Ansible role framework designed to get a cluster up and running as quickly as possible. Build your cloud in an hour, grow it as you need, and never worry about it: just add physical servers.

Getting Started

To get started with PVC, read the Cluster Architecture document, then see Installing for details on setting up a set of PVC nodes, using the PVC Ansible framework to configure and bootstrap a cluster, and managing it with the pvc CLI tool or RESTful HTTP API. For details on the project, its motivation, and architectural details, see the About page.

Changelog

v0.8

Numerous improvements and bugfixes. This release is suitable for general use and is pre-release-quality software.

v0.7

Numerous improvements and bugfixes, revamped documentation. This release is suitable for general use and is beta-quality software.

v0.6

Numerous improvements and bugfixes, full implementation of the provisioner, full implementation of the API CLI client (versus direct CLI client). This release is suitable for general use and is beta-quality software.

v0.5

First public release; fully implements the VM, network, and storage managers, the HTTP API, and the pvc-ansible framework for deploying and bootstrapping a cluster. This release is suitable for general use, though it is still alpha-quality software and should be expected to change significantly until 1.0 is released.

v0.4

Full implementation of virtual management and virtual networking functionality. Partial implementation of storage functionality.

v0.3

Basic implementation of virtual management functionality.