Commit Graph

267 Commits

Author SHA1 Message Date
Joshua Boniface afdf254297 Bump version to 0.9.32 2021-08-19 12:37:58 -04:00
Joshua Boniface 42e776fac1 Properly handle exceptions getting VM stats 2021-08-19 12:36:31 -04:00
Joshua Boniface 7ecc6a2635 Bump version to 0.9.31 2021-07-30 12:08:12 -04:00
Joshua Boniface 3ab6365a53 Adjust receive output to show proper source 2021-07-22 15:43:08 -04:00
Joshua Boniface 2a99a27feb Bump version to 0.9.30 2021-07-20 00:01:45 -04:00
Joshua Boniface fa1d93e933 Bump version to 0.9.29 2021-07-19 16:55:41 -04:00
Joshua Boniface 6ead21a308 Handle cleanup from a failure properly 2021-07-19 12:39:13 -04:00
Joshua Boniface b7c8c2ee3d Fix handling of this_node and d_domain in cleanup 2021-07-19 12:36:35 -04:00
Joshua Boniface d48f58930b Use harder exits and add cleanup termination 2021-07-19 12:27:16 -04:00
Joshua Boniface 7c36388c8f Add post-networking delay and adjust daemon delay 2021-07-19 12:23:45 -04:00
Joshua Boniface 71e4d0b32a Bump version to 0.9.28 2021-07-19 09:29:34 -04:00
Joshua Boniface 15d92c483f Bump version to 0.9.27 2021-07-19 00:03:40 -04:00
Joshua Boniface 602093029c Bump version to 0.9.26 2021-07-18 20:49:52 -04:00
Joshua Boniface b770e15a91 Fix final termination of logger
We need to do a bit more finagling with the logger on termination to
ensure that all messages are written and the queue drained before
actually terminating.
2021-07-18 19:53:00 -04:00
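The queue-draining approach this commit describes can be sketched roughly as follows, assuming the logger writes from a queue consumed by a separate process; the names here are illustrative, not PVC's actual Logger implementation:

    from multiprocessing import Process, Queue

    def stop_logger(log_queue: Queue, writer: Process):
        # Signal the writer loop to finish, then let it drain pending messages
        log_queue.put(None)            # sentinel the writer treats as "stop"
        writer.join(timeout=5)         # wait for all queued lines to be written
        log_queue.close()
        log_queue.join_thread()        # flush the queue's feeder thread before exit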
Joshua Boniface e23a65128a Remove del of logger item 2021-07-18 19:03:47 -04:00
Joshua Boniface 3a2478ee0c Cleanly terminate logger on cleanup 2021-07-18 18:57:44 -04:00
Joshua Boniface 323c7c41ae Implement node logging into Zookeeper
Adds the ability to send node daemon logs to Zookeeper to facilitate a
command like "pvc node log", similar to "pvc vm log". Each node stores
its logs in a separate tree under "/logs" which can then be combined or
queried. By default, set by config, only 2000 lines are kept.
2021-07-18 17:11:43 -04:00
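A minimal sketch of node logging into Zookeeper as described above, assuming the kazoo client library; the znode layout and helper name are illustrative rather than PVC's actual schema, though the "/logs" tree and 2000-line default follow the commit message:

    from kazoo.client import KazooClient

    MAX_LINES = 2000  # default retention, set by config per the commit message

    def append_node_log(zk: KazooClient, node: str, line: str):
        base = f"/logs/{node}/messages"
        zk.ensure_path(base)
        # Sequential znodes keep log lines ordered for "pvc node log"-style queries
        zk.create(f"{base}/line-", line.encode(), sequence=True)
        # Trim the oldest entries once the configured cap is exceeded
        children = sorted(zk.get_children(base))
        for old in children[:-MAX_LINES]:
            zk.delete(f"{base}/{old}")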
Joshua Boniface cd1db3d587 Ensure node name is part of config 2021-07-18 16:38:58 -04:00
Joshua Boniface 75fb60b1b4 Add VM list filtering by tag
Uses same method as state or node filtering, rather than altering how
the main LIMIT field works.
2021-07-14 00:59:20 -04:00
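The tag filter can be sketched alongside the existing node/state filters roughly as follows; the field names and the simple substring LIMIT match are assumptions for illustration, not the actual PVC implementation:

    def filter_vms(vms, limit=None, node=None, state=None, tag=None):
        # Each filter narrows the list independently; LIMIT keeps its own behaviour
        result = []
        for vm in vms:
            if limit is not None and limit not in vm["name"]:
                continue
            if node is not None and vm["node"] != node:
                continue
            if state is not None and vm["state"] != state:
                continue
            if tag is not None and tag not in vm.get("tags", []):
                continue
            result.append(vm)
        return result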
Joshua Boniface c6d552ae57 Rework success checks for IPMI fencing
Previously, if the node failed to restart, it was declared a "bad fence"
and no further action would be taken. However, there are some
situations, for instance critical hardware failures, where intelligent
systems will not attempt (or will not succeed at) starting the node back
up, which would result in dead, known-offline nodes without recovery.

Tweak this behaviour somewhat. The main path of Reboot -> Check On ->
Success + fence-flush is retained, but some additional side-paths are
now defined:

1. We attempt to power "on" the chassis 1 second after the reboot, just
in case it is off and can be recovered. We then wait another 2 seconds
and check the power status (as we did before).

2. If the reboot succeeded, follow this series of choices:

    a. If the chassis is on, the fence succeeded.

    b. If the chassis is off, the fence "succeeded" as well.

    c. If the chassis is in some other state, the fence failed.

3. If the reboot failed, follow this series of choices:

    a. If the chassis is off, the fence itself failed, but we can treat
    it as "succeeded" since the chassis is in a known-offline state.
    This is the most likely situation when there is a critical hardware
    failure, and the server's IPMI does not allow itself to start back
    up again.

    b. If the chassis is in any other state ("on" or unknown), the fence
    itself failed and we must treat this as a fence failure.

Overall, this should alleviate the aforementioned issue, where a
critical failure leaving the node persistently "off" would not trigger a
fence-flush, and should make fencing more robust. A rough sketch of the
resulting decision logic follows this entry.
2021-07-13 17:54:41 -04:00
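The decision paths above reduce to a small amount of logic; the sketch below is illustrative only, with hypothetical helper names (ipmi_reboot, ipmi_power_on, ipmi_power_state) standing in for PVC's actual fencing calls:

    import time

    def fence_node(node):
        reboot_ok = ipmi_reboot(node)       # chassis power reset
        time.sleep(1)
        ipmi_power_on(node)                 # attempt power "on" in case it is off
        time.sleep(2)
        state = ipmi_power_state(node)      # "on", "off", or some other/unknown state

        if reboot_ok:
            # 2a/2b: "on" or "off" both count as a successful fence; anything else fails
            return state in ("on", "off")
        # 3a: reboot failed but chassis is "off" -> known-offline, treat as success
        # 3b: any other state -> fence failure
        return state == "off"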
Joshua Boniface 2e9f6ac201 Bump version to 0.9.25 2021-07-11 23:19:09 -04:00
Joshua Boniface f09849bedf Don't overwrite shutdown state on termination
Just a minor quibble and not really impactful.
2021-07-11 23:18:14 -04:00
Joshua Boniface c76149141f Only log ZK connections when persistent
Prevents spam in the API logs.
2021-07-10 23:35:49 -04:00
Joshua Boniface f00c4d07f4 Add date output to keepalive
Helps track when there is a log follow in "-o cat" mode.
2021-07-10 23:24:59 -04:00
Joshua Boniface 20b66c10e1 Move two more commands to Rados library 2021-07-10 17:28:42 -04:00
Joshua Boniface cfeba50b17 Revert "Return to all command-based Ceph gathering"
This reverts commit 65d14ccd92.

This was actually a bad idea. For inexplicable reasons, running these
Ceph commands manually (not even via Python, but in a normal shell)
takes roughly 700 times (over two orders of magnitude) longer than
running them with the Rados module, so long in fact that some basic
commands like "ceph health" would sometimes take longer than the
1-second timeout to complete. The Rados calls, by contrast, take only
about 1 ms.

Despite the occasional issues when monitors drop out, the Rados module
is clearly far superior to the shell commands for any moderately-loaded
Ceph cluster. We can look into solving timeouts another way (perhaps
with Processes instead of Threads) at a later time.

Rados module "ceph health":
    b'{"checks":{},"status":"HEALTH_OK"}'
    0.001204 (s)
    b'{"checks":{},"status":"HEALTH_OK"}'
    0.001258 (s)
Command "ceph health":
    joshua@hv1.c.bonilan.net ~ $ time ceph health >/dev/null
    real    0m0.772s
    user    0m0.707s
    sys     0m0.046s
    joshua@hv1.c.bonilan.net ~ $ time ceph health >/dev/null
    real    0m0.796s
    user    0m0.728s
    sys     0m0.054s
2021-07-10 03:47:45 -04:00
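For reference, the Rados-module timing above can be reproduced with something like the following; the conffile path is an assumption, and the 1-second timeout mirrors the keepalive constraint mentioned in the message:

    import json
    import time
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "health", "format": "json"})
    start = time.time()
    ret, outbuf, outs = cluster.mon_command(cmd, b"", timeout=1)
    print(outbuf)
    print(f"{time.time() - start:.6f} (s)")
    cluster.shutdown()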
Joshua Boniface 551bae2518 Bump version to 0.9.24 2021-07-09 15:58:36 -04:00
Joshua Boniface 2b5dc286ab Correct failure to get ceph_health data 2021-07-09 13:10:28 -04:00
Joshua Boniface 330cf14638 Remove return statements in keepalive collectors
These seem to bork the keepalive timer process, so just remove them and
let it continue to press on.
2021-07-09 13:04:17 -04:00
Joshua Boniface 65d14ccd92 Return to all command-based Ceph gathering
Using the Rados module was very problematic, specifically because it had
no sensible timeout parameters and thus would hang for many seconds.
This has poor implications since it blocks further keepalives.

Instead, remove the Rados usage entirely and go back completely to using
manual OS commands to gather this information. While this may cause PID
exhaustion more quickly, it's worthwhile to avoid failure scenarios when
Ceph stats time out.

Closes #137
2021-07-06 11:30:45 -04:00
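A minimal sketch of the command-based approach, using a subprocess timeout so a slow monitor cannot block the keepalive; the 1-second value is illustrative, and the JSON output format is a standard ceph CLI option:

    import subprocess

    def get_ceph_health():
        try:
            out = subprocess.run(
                ["ceph", "health", "--format", "json"],
                capture_output=True, timeout=1, check=True,
            )
            return out.stdout.decode()
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            return None  # let the keepalive press on without Ceph data this cycle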
Joshua Boniface 7082982a33 Bump version to 0.9.23 2021-07-05 23:40:32 -04:00
Joshua Boniface 5b6ef71909 Ensure daemon mode is updated on startup
Fixes the side effect of the previous bug during deploys of 0.9.22.
2021-07-05 23:39:23 -04:00
Joshua Boniface be7b0be8ed Fix typo in schema path name 2021-07-05 23:23:23 -04:00
Joshua Boniface 37cd278bc2 Bump version to 0.9.22 2021-07-05 14:18:51 -04:00
Joshua Boniface a69105569f Add node PVC version data to Node information
Allows API client to see the currently-active version of the node
daemon.
2021-07-05 09:57:38 -04:00
Joshua Boniface 21a1a7da9e Fix bad schema reference
Not sure how this didn't cause an issue until now, but the wrong key
path was used and this was getting unexpected data with the newly-added
version string instead of the proper mode string.
2021-07-05 09:53:51 -04:00
Joshua Boniface f0fd3d3f0e Make extra sure VMs terminate when told
When doing a stop_vm or terminate_vm, check again after 0.2 seconds
and try re-terminating if it's still running. Covers cases where a VM
doesn't stop if given the 'stop' state.
2021-07-02 11:40:34 -04:00
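The re-terminate check can be sketched with the libvirt Python bindings roughly as follows; PVC's actual VMInstance code is structured differently, so this is illustrative only:

    import time
    import libvirt

    def terminate_vm(conn: libvirt.virConnect, name: str):
        dom = conn.lookupByName(name)
        try:
            dom.destroy()                # hard power-off
        except libvirt.libvirtError:
            pass                         # already stopped
        time.sleep(0.2)
        if dom.isActive():               # still running: try terminating again
            dom.destroy()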
Joshua Boniface f12de6727d Adjust logo slightly and add debug state 2021-07-02 02:32:08 -04:00
Joshua Boniface e94f5354e6 Update startup messages with new ASCII logo 2021-07-02 02:21:30 -04:00
Joshua Boniface c51023ba81 Add profiler to keepalive function 2021-07-02 01:55:15 -04:00
Joshua Boniface 39e82ee426 Cast base schema version to int
Or all our comparisons will fail later and nodes can't start.
2021-06-30 09:40:33 -04:00
Joshua Boniface fe0a1d582a Bump version to 0.9.21 2021-06-29 19:21:31 -04:00
Joshua Boniface 3490ecbb59 Remove explicit ZK address from Patronictl command 2021-06-22 03:31:06 -04:00
Joshua Boniface 2928d695c9 Ensure migration method is updated on state changes 2021-06-22 03:20:15 -04:00
Joshua Boniface 26dd24e3f5 Ensure MTU is set on VF when starting up 2021-06-22 02:26:14 -04:00
Joshua Boniface e623909a43 Store PHY MAC for VFs and restore after free 2021-06-22 00:56:47 -04:00
Joshua Boniface 60e1da09dd Don't try any shenanigans when updating NICs
Trying to do this on the VMInstance side had problems because we can't
differentiate the 3 types of migration there. So, just update this in
the API side and hope everything goes well.

This introduces an edge bug: if a VM is using a macvtap SR-IOV device,
and then tries to migrate, and the migrate is aborted, the NIC lists
will be inconsistent.

When I revamp the VMInstance in the future, I should be able to correct
this, but for now we'll have to live with that edge case.
2021-06-22 00:00:50 -04:00
Joshua Boniface 7d42fba373 Ensure being in migrate doesn't abort shutdown 2021-06-21 23:28:53 -04:00
Joshua Boniface 24ce361a04 Ensure SR-IOV NIC states are updated on migration 2021-06-21 23:18:34 -04:00
Joshua Boniface eeb83da97d Add support for SR-IOV NICs to VMs 2021-06-21 23:18:22 -04:00