Compare commits
96 Commits
Commit SHA1s:

d84e94eff4, ce9d0e9603, 3aea5ae34b, 3f5076d9ca, 8ed602ef9c, e501345e44, d8f97d090a, 082648f3b2,
2df8f5d407, ca65cb66b8, 616d7c43ed, 4fe3a73980, 26084741d0, 4a52ff56b9, 0a367898a0, ca5327b908,
d36d8e0637, 36588a3a81, c02bc0b46a, 1e4350ca6f, b8852e116e, 9e468d3524, 11f045f100, fd80eb9e22,
6ac82d6ce9, b438b9b4c2, 4417bd374b, 9d5f50f82a, 56a9e48163, 31a117e21c, 57768f2583, e4e4e336b4,
0caea03428, 65932b20d2, 1b8b32b07c, 39ce704969, d2a5fe59c0, 8678dedfea, 0aefafa7f7, 6db4df51c0,
5ddf72855b, 0e05ce8b07, 78780039de, 99f579e41a, 07577a52a9, 45040a5635, 097f0d9be4, ca68321be3,
b322841edf, 4c58addead, e811c5bbfb, dd44f2f42b, 24c86f2c42, db558ec91f, 7c99618752, 59ca296c58,
c18c76f42c, a7432281a8, d975f90f29, b16e2b4925, 90f965f516, d2b52c6fe6, 8125aea4f3, f3de900bdb,
9c7041f12c, c67fc05219, 760805fec1, 158ed8d3f0, 574623f2a8, db09b4c983, 560cb609ba, 670596ed8e,
bd8536d9d1, 95c59c2b39, b29c69378d, ad60f4b1f1, 68638d7760, 4fa9878e01, 602c2f9d4a, c979fed10a,
1231ba19b7, 1de57ab6f3, e419855911, 49e5ce1176, 92df125a77, 7ace5b5056, eeb8879f73, 37310e5455,
26c2c2c295, d564671e1c, 4f25c55efc, 3532dcc11f, ce985234c3, 83704d8677, 97e318a2ca, 4505b239eb
README.md: 38 lines changed

@@ -1,4 +1,4 @@
-# PVC - The Parallel Virtual Cluster suite
+# PVC - The Parallel Virtual Cluster system

 <p align="center">
 <img alt="Logo banner" src="https://git.bonifacelabs.ca/uploads/-/system/project/avatar/135/pvc_logo.png"/>
@@ -9,19 +9,35 @@
 <a href="https://parallelvirtualcluster.readthedocs.io/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
 </p>

-PVC is a suite of Python 3 tools to manage virtualized clusters. It provides a fully-functional private cloud based on four key principles:
-
-1. Be Free Software Forever (or Bust)
-2. Be Opinionated and Efficient and Pick The Best Software
-3. Be Scalable and Redundant but Not Hyperscale
-4. Be Simple To Use, Configure, and Maintain
-
-It is designed to be an administrator-friendly but extremely powerful and rich modern private cloud system, but without the feature bloat and complexity of tools like OpenStack. With PVC, an administrator can provision, manage, and update a cluster of dozens or more hypervisors running thousands of VMs using a simple CLI tool, HTTP API, or [eventually] web interface. PVC is based entirely on Debian GNU/Linux and Free-and-Open-Source tools, providing the glue to bootstrap, provision and manage the cluster, then getting out of the administrators' way.
-
-Your cloud, the best way; just add physical servers.
-
-[See the documentation here](https://parallelvirtualcluster.readthedocs.io/en/latest/)
-
-[See the API reference here](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/api-reference.html)
-
-**NOTICE FOR GITHUB**: This repository is a read-only mirror of the PVC repositories. Pull requests submitted here will not be merged.
+**NOTICE FOR GITHUB**: This repository is a read-only mirror of the PVC repositories from my personal GitLab instance. Pull requests submitted here will not be merged. Issues submitted here will, however, be treated as authoritative.
+
+PVC is a KVM+Ceph+Zookeeper-based, Free Software, scalable, redundant, self-healing, and self-managing private cloud solution designed with administrator simplicity in mind. It is built from the ground up to be redundant at the host layer, allowing the cluster to gracefully handle the loss of nodes or their components, whether due to hardware failure or maintenance. It is able to scale from a minimum of 3 nodes up to 12 or more nodes, while retaining performance and flexibility, allowing the administrator to build a small cluster today and grow it as needed.
+
+The major goal of PVC is to be administrator friendly, providing the power of enterprise-grade private clouds like OpenStack, Nutanix, and VMware to homelabbers, SMBs, and small ISPs, without the cost or complexity. It believes in picking the best tool for a job and abstracting it behind the cluster as a whole, freeing the administrator from the boring and time-consuming task of selecting the best component and letting them get on with the things that really matter. Administration can be done from a simple CLI or via a RESTful API capable of building full-featured web frontends or additional applications, taking a self-documenting approach to keep the administrator learning curve as low as possible. Setup is easy and straightforward with an [ISO-based node installer](https://git.bonifacelabs.ca/parallelvirtualcluster/pvc-installer) and [Ansible role framework](https://git.bonifacelabs.ca/parallelvirtualcluster/pvc-ansible) designed to get a cluster up and running as quickly as possible. Build your cloud in an hour, grow it as you need, and never worry about it: just add physical servers.
+
+## Getting Started
+
+To get started with PVC, read the [Cluster Architecture document](https://parallelvirtualcluster.readthedocs.io/en/latest/architecture/cluster/), then see [Installing](https://parallelvirtualcluster.readthedocs.io/en/latest/installing) for details on setting up a set of PVC nodes, using the [PVC Ansible](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/ansible) framework to configure and bootstrap a cluster, and managing it with the [`pvc` CLI tool](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/cli) or [RESTful HTTP API](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/api). For details on the project, its motivation, and architectural details, see [the About page](https://parallelvirtualcluster.readthedocs.io/en/latest/about).
+
+## Changelog
+
+#### v0.7
+
+Numerous improvements and bugfixes, revamped documentation. This release is suitable for general use and is beta-quality software.
+
+#### v0.6
+
+Numerous improvements and bugfixes, full implementation of the provisioner, full implementation of the API CLI client (versus direct CLI client). This release is suitable for general use and is beta-quality software.
+
+#### v0.5
+
+First public release; fully implements the VM, network, and storage managers, the HTTP API, and the pvc-ansible framework for deploying and bootstrapping a cluster. This release is suitable for general use, though it is still alpha-quality software and should be expected to change significantly until 1.0 is released.
+
+#### v0.4
+
+Full implementation of virtual management and virtual networking functionality. Partial implementation of storage functionality.
+
+#### v0.3
+
+Basic implementation of virtual management functionality.
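The new README text above describes managing the cluster through the `pvc` CLI or the RESTful HTTP API. As a rough illustration of the API side (not part of this changeset; the host, port, and endpoint path below are assumptions made only for the sketch), a client could list cluster nodes with Python's requests library:

# Hypothetical sketch: query a PVC cluster's HTTP API for its node list.
# The host, port, and endpoint path are assumptions, not taken from this diff.
import requests

API_BASE = 'http://pvc-cluster.local:7370/api/v1'

resp = requests.get('{}/node'.format(API_BASE), timeout=5)
resp.raise_for_status()
for node in resp.json():
    print(node)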
api-daemon/daemon_lib: new symbolic link (1 line)

@@ -0,0 +1 @@
../daemon-common
api-daemon/migrations/README: new file (1 line)

@@ -0,0 +1 @@
Generic single-database configuration.
api-daemon/migrations/alembic.ini: new file (45 lines)

@@ -0,0 +1,45 @@
# A generic, single database configuration.

[alembic]
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
script_location = .

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
api-daemon/migrations/env.py: new file (87 lines)

@@ -0,0 +1,87 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
import logging

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from flask import current_app
config.set_main_option('sqlalchemy.url',
                       current_app.config.get('SQLALCHEMY_DATABASE_URI'))
target_metadata = current_app.extensions['migrate'].db.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(url=url)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    engine = engine_from_config(config.get_section(config.config_ini_section),
                                prefix='sqlalchemy.',
                                poolclass=pool.NullPool)

    connection = engine.connect()
    context.configure(connection=connection,
                      target_metadata=target_metadata,
                      process_revision_directives=process_revision_directives,
                      **current_app.extensions['migrate'].configure_args)

    try:
        with context.begin_transaction():
            context.run_migrations()
    finally:
        connection.close()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
api-daemon/migrations/script.py.mako: new file (24 lines)

@@ -0,0 +1,24 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade():
    ${upgrades if upgrades else "pass"}


def downgrade():
    ${downgrades if downgrades else "pass"}
api-daemon/migrations/versions/2d1daa722a0a_pvc_version_0_6.py: new file (112 lines)

@@ -0,0 +1,112 @@
"""PVC version 0.6

Revision ID: 2d1daa722a0a
Revises:
Create Date: 2020-02-15 23:14:14.733134

"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '2d1daa722a0a'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('network_template',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('mac_template', sa.Text(), nullable=True),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('script',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('script', sa.Text(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('storage_template',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('system_template',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('vcpu_count', sa.Integer(), nullable=False),
        sa.Column('vram_mb', sa.Integer(), nullable=False),
        sa.Column('serial', sa.Boolean(), nullable=False),
        sa.Column('vnc', sa.Boolean(), nullable=False),
        sa.Column('vnc_bind', sa.Text(), nullable=True),
        sa.Column('node_limit', sa.Text(), nullable=True),
        sa.Column('node_selector', sa.Text(), nullable=True),
        sa.Column('node_autostart', sa.Boolean(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('userdata',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('userdata', sa.Text(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('network',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('network_template', sa.Integer(), nullable=True),
        sa.Column('vni', sa.Integer(), nullable=False),
        sa.ForeignKeyConstraint(['network_template'], ['network_template.id'], ),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_table('profile',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('system_template', sa.Integer(), nullable=True),
        sa.Column('network_template', sa.Integer(), nullable=True),
        sa.Column('storage_template', sa.Integer(), nullable=True),
        sa.Column('userdata', sa.Integer(), nullable=True),
        sa.Column('script', sa.Integer(), nullable=True),
        sa.Column('arguments', sa.Text(), nullable=True),
        sa.ForeignKeyConstraint(['network_template'], ['network_template.id'], ),
        sa.ForeignKeyConstraint(['script'], ['script.id'], ),
        sa.ForeignKeyConstraint(['storage_template'], ['storage_template.id'], ),
        sa.ForeignKeyConstraint(['system_template'], ['system_template.id'], ),
        sa.ForeignKeyConstraint(['userdata'], ['userdata.id'], ),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('storage',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('storage_template', sa.Integer(), nullable=True),
        sa.Column('pool', sa.Text(), nullable=False),
        sa.Column('disk_id', sa.Text(), nullable=False),
        sa.Column('source_volume', sa.Text(), nullable=True),
        sa.Column('disk_size_gb', sa.Integer(), nullable=True),
        sa.Column('mountpoint', sa.Text(), nullable=True),
        sa.Column('filesystem', sa.Text(), nullable=True),
        sa.Column('filesystem_args', sa.Text(), nullable=True),
        sa.ForeignKeyConstraint(['storage_template'], ['storage_template.id'], ),
        sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('storage')
    op.drop_table('profile')
    op.drop_table('network')
    op.drop_table('userdata')
    op.drop_table('system_template')
    op.drop_table('storage_template')
    op.drop_table('script')
    op.drop_table('network_template')
    # ### end Alembic commands ###
New migration file: revision 88c8514684f7, "PVC version 0.7" (76 lines)

@@ -0,0 +1,76 @@
"""PVC version 0.7

Revision ID: 88c8514684f7
Revises: 2d1daa722a0a
Create Date: 2020-02-16 19:49:50.126265

"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '88c8514684f7'
down_revision = '2d1daa722a0a'
branch_labels = None
depends_on = None


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('ova',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.Text(), nullable=False),
        sa.Column('ovf', sa.Text(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name')
    )
    op.create_table('ova_volume',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('ova', sa.Integer(), nullable=False),
        sa.Column('pool', sa.Text(), nullable=False),
        sa.Column('volume_name', sa.Text(), nullable=False),
        sa.Column('volume_format', sa.Text(), nullable=False),
        sa.Column('disk_id', sa.Text(), nullable=False),
        sa.Column('disk_size_gb', sa.Integer(), nullable=False),
        sa.ForeignKeyConstraint(['ova'], ['ova.id'], ),
        sa.PrimaryKeyConstraint('id')
    )
    op.alter_column('network', 'network_template',
        existing_type=sa.INTEGER(),
        nullable=False)
    op.add_column('network_template', sa.Column('ova', sa.Integer(), nullable=True))
    op.create_foreign_key(None, 'network_template', 'ova', ['ova'], ['id'])
    op.add_column('profile', sa.Column('ova', sa.Integer(), nullable=True))
    op.add_column('profile', sa.Column('profile_type', sa.Text(), nullable=False))
    op.create_foreign_key(None, 'profile', 'ova', ['ova'], ['id'])
    op.alter_column('storage', 'storage_template',
        existing_type=sa.INTEGER(),
        nullable=False)
    op.add_column('storage_template', sa.Column('ova', sa.Integer(), nullable=True))
    op.create_foreign_key(None, 'storage_template', 'ova', ['ova'], ['id'])
    op.add_column('system_template', sa.Column('ova', sa.Integer(), nullable=True))
    op.create_foreign_key(None, 'system_template', 'ova', ['ova'], ['id'])
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_constraint(None, 'system_template', type_='foreignkey')
    op.drop_column('system_template', 'ova')
    op.drop_constraint(None, 'storage_template', type_='foreignkey')
    op.drop_column('storage_template', 'ova')
    op.alter_column('storage', 'storage_template',
        existing_type=sa.INTEGER(),
        nullable=True)
    op.drop_constraint(None, 'profile', type_='foreignkey')
    op.drop_column('profile', 'profile_type')
    op.drop_column('profile', 'ova')
    op.drop_constraint(None, 'network_template', type_='foreignkey')
    op.drop_column('network_template', 'ova')
    op.alter_column('network', 'network_template',
        existing_type=sa.INTEGER(),
        nullable=True)
    op.drop_table('ova_volume')
    op.drop_table('ova')
    # ### end Alembic commands ###
Modified provisioner install script:

@@ -109,6 +109,7 @@ def install(**kwargs):
         # The root, var, and log volumes have specific values
         if disk['mountpoint'] == "/":
+            root_disk['scsi_id'] = disk_id
             dump = 0
             cpass = 1
         elif disk['mountpoint'] == '/var' or disk['mountpoint'] == '/var/log':
@@ -184,12 +185,12 @@ interface "ens2" {
 GRUB_DEFAULT=0
 GRUB_TIMEOUT=1
 GRUB_DISTRIBUTOR="PVC Virtual Machine"
-GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/{root_disk} console=tty0 console=ttyS0,115200n8"
+GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-{root_disk} console=tty0 console=ttyS0,115200n8"
 GRUB_CMDLINE_LINUX=""
 GRUB_TERMINAL=console
 GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
 GRUB_DISABLE_LINUX_UUID=false
-""".format(root_disk=root_disk['disk_id'])
+""".format(root_disk=root_disk['scsi_id'])
     fh.write(data)

     # Chroot, do some in-root tasks, then exit the chroot
api-daemon/pvc-api-db-upgrade: new executable file (15 lines)

@@ -0,0 +1,15 @@
#!/bin/bash

# Apply PVC database migrations
# Part of the Parallel Virtual Cluster (PVC) system

export PVC_CONFIG_FILE="/etc/pvc/pvcapid.yaml"

if [[ ! -f ${PVC_CONFIG_FILE} ]]; then
    echo "Create a configuration file at ${PVC_CONFIG_FILE} before upgrading the database."
    exit 1
fi

pushd /usr/share/pvc
./pvcapid-manage.py db upgrade
popd
api-daemon/pvcapid-manage.py: new executable file (35 lines)

@@ -0,0 +1,35 @@
#!/usr/bin/env python3

# manage.py - PVC Database management tasks
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

import os

from flask_migrate import Migrate, MigrateCommand
from flask_script import Manager

from pvcapid.flaskapi import app, db, config

migrate = Migrate(app, db)
manager = Manager(app)

manager.add_command('db', MigrateCommand)

if __name__ == '__main__':
    manager.run()
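This manage stub wires Flask-Migrate's MigrateCommand into Flask-Script, which is what the pvc-api-db-upgrade wrapper above invokes as ./pvcapid-manage.py db upgrade. For completeness, the same upgrade can also be driven programmatically through flask_migrate; a minimal sketch, assuming the pvcapid.flaskapi app and db objects from this changeset are importable and configured:

# Minimal sketch: apply pending Alembic migrations without the CLI wrapper.
# Assumes the pvcapid.flaskapi module from this changeset and its default
# migrations directory; not part of the commit itself.
from flask_migrate import Migrate, upgrade

from pvcapid.flaskapi import app, db

migrate = Migrate(app, db)  # same wiring as pvcapid-manage.py above

with app.app_context():
    upgrade()  # applies the 2d1daa722a0a -> 88c8514684f7 chain, like `db upgrade`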
api-daemon/pvcapid-worker.service: new file (16 lines)

@@ -0,0 +1,16 @@
# Parallel Virtual Cluster Provisioner API provisioner worker unit file

[Unit]
Description = Parallel Virtual Cluster API provisioner worker
After = network-online.target

[Service]
Type = simple
WorkingDirectory = /usr/share/pvc
Environment = PYTHONUNBUFFERED=true
Environment = PVC_CONFIG_FILE=/etc/pvc/pvcapid.yaml
ExecStart = /usr/bin/celery worker -A pvcapid.flaskapi.celery --concurrency 1 --loglevel INFO
Restart = on-failure

[Install]
WantedBy = multi-user.target
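This unit runs a Celery worker against the celery object defined in pvcapid.flaskapi, so long-running provisioner jobs execute outside the HTTP request cycle. A rough sketch of the pattern this implies (the task name and its arguments are illustrative assumptions, not taken from this diff):

# Illustrative sketch of the Flask + Celery split implied by this unit file.
# `celery` is the instance referenced by `-A pvcapid.flaskapi.celery`; the
# task name `create_vm` and its arguments are assumptions for illustration.
from pvcapid.flaskapi import celery

@celery.task(bind=True)
def create_vm(self, vm_name, profile_name):
    # ...provisioning steps would run inside the worker started by this unit...
    return {'status': 'complete', 'vm': vm_name}

# From a Flask request handler, the API would enqueue rather than block:
#   task = create_vm.delay('test1', 'default-profile')
#   return {'task_id': task.id}, 202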
Modified file: the API daemon startup stub (pvcapid.py, per its header comment)

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# pvcd.py - Node daemon startup stub
+# pvcapid.py - API daemon startup stub
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -20,4 +20,4 @@
 #
 ###############################################################################

-import pvcd.Daemon
+import pvcapid.Daemon
Modified file: the pvcapid configuration file example

@@ -1,11 +1,11 @@
 ---
-# pvc-api client configuration file example
+# pvcapid configuration file example
 #
-# This configuration file specifies details for the PVC API client running on
+# This configuration file specifies details for the PVC API daemon running on
 # this machine. Default values are not supported; the values in this sample
 # configuration are considered defaults and can be used as-is.
 #
-# Copy this example to /etc/pvc/pvc-api.conf and edit to your needs
+# Copy this example to /etc/pvc/pvcapid.conf and edit to your needs

 pvc:
     # debug: Enable/disable API debug mode
@@ -70,7 +70,7 @@ pvc:
         storage_hosts:
           - pvchv1
           - pvchv2
-          - pvchv2
+          - pvchv3
         # storage_domain: The storage domain name, concatenated with the coordinators list names
         # to form monitor access strings
         storage_domain: "pvc.storage"
Modified file: the API daemon systemd service unit

@@ -8,8 +8,8 @@ After = network-online.target
 Type = simple
 WorkingDirectory = /usr/share/pvc
 Environment = PYTHONUNBUFFERED=true
-Environment = PVC_CONFIG_FILE=/etc/pvc/pvc-api.yaml
+Environment = PVC_CONFIG_FILE=/etc/pvc/pvcapid.yaml
-ExecStart = /usr/share/pvc/pvc-api.py
+ExecStart = /usr/share/pvc/pvcapid.py
 Restart = on-failure

 [Install]
api-daemon/pvcapid/Daemon.py: new executable file (49 lines)

@@ -0,0 +1,49 @@
#!/usr/bin/env python3

# Daemon.py - PVC HTTP API daemon
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

import gevent.pywsgi

import pvcapid.flaskapi as pvc_api

##########################################################
# Entrypoint
##########################################################
if pvc_api.config['debug']:
    # Run in Flask standard mode
    pvc_api.app.run(pvc_api.config['listen_address'], pvc_api.config['listen_port'])
else:
    if pvc_api.config['ssl_enabled']:
        # Run the WSGI server with SSL
        http_server = gevent.pywsgi.WSGIServer(
            (pvc_api.config['listen_address'], pvc_api.config['listen_port']),
            pvc_api.app,
            keyfile=pvc_api.config['ssl_key_file'],
            certfile=pvc_api.config['ssl_cert_file']
        )
    else:
        # Run the WSGI server without SSL
        http_server = gevent.pywsgi.WSGIServer(
            (pvc_api.config['listen_address'], pvc_api.config['listen_port']),
            pvc_api.app
        )

    print('Starting PyWSGI server at {}:{} with SSL={}, Authentication={}'.format(pvc_api.config['listen_address'], pvc_api.config['listen_port'], pvc_api.config['ssl_enabled'], pvc_api.config['auth_enabled']))
    http_server.serve_forever()
(One file's diff is suppressed here because it is too large.)
Modified file: the PVC HTTP API helper functions (helper.py, renamed from pvcapi_helper.py per its header comment)

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# pvcapi_helper.py - PVC HTTP API functions
+# helper.py - PVC HTTP API helper functions
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -24,14 +24,24 @@ import flask
 import json
 import lxml.etree as etree

-from distutils.util import strtobool
+from distutils.util import strtobool as dustrtobool

-import client_lib.common as pvc_common
-import client_lib.cluster as pvc_cluster
-import client_lib.node as pvc_node
-import client_lib.vm as pvc_vm
-import client_lib.network as pvc_network
-import client_lib.ceph as pvc_ceph
+import daemon_lib.common as pvc_common
+import daemon_lib.cluster as pvc_cluster
+import daemon_lib.node as pvc_node
+import daemon_lib.vm as pvc_vm
+import daemon_lib.network as pvc_network
+import daemon_lib.ceph as pvc_ceph
+
+def strtobool(stringv):
+    if stringv is None:
+        return False
+    if isinstance(stringv, bool):
+        return bool(stringv)
+    try:
+        return bool(dustrtobool(stringv))
+    except:
+        return False
+
 #
 # Initialization function
@@ -428,7 +438,7 @@ def vm_define(xml, node, limit, selector, autostart):
         xml_data = etree.fromstring(xml)
         new_cfg = etree.tostring(xml_data, pretty_print=True).decode('utf8')
     except Exception as e:
-        return {'message': 'Error: XML is malformed or incorrect: {}'.format(e)}, 400
+        return { 'message': 'XML is malformed or incorrect: {}'.format(e) }, 400

     zk_conn = pvc_common.startZKConnection(config['coordinators'])
     retflag, retdata = pvc_vm.define_vm(zk_conn, new_cfg, node, limit, selector, autostart, profile=None)
@@ -510,7 +520,7 @@ def vm_modify(name, restart, xml):
         xml_data = etree.fromstring(xml)
         new_cfg = etree.tostring(xml_data, pretty_print=True).decode('utf8')
     except Exception as e:
-        return {'message': 'Error: XML is malformed or incorrect: {}'.format(e)}, 400
+        return { 'message': 'XML is malformed or incorrect: {}'.format(e) }, 400
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
     retflag, retdata = pvc_vm.modify_vm(zk_conn, name, restart, new_cfg)
     pvc_common.stopZKConnection(zk_conn)
@@ -579,12 +589,12 @@ def vm_start(name):
         }
     return output, retcode

-def vm_restart(name):
+def vm_restart(name, wait):
     """
     Restart a VM in the PVC cluster.
     """
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
-    retflag, retdata = pvc_vm.restart_vm(zk_conn, name)
+    retflag, retdata = pvc_vm.restart_vm(zk_conn, name, wait)
     pvc_common.stopZKConnection(zk_conn)

     if retflag:
@@ -597,12 +607,12 @@ def vm_restart(name):
         }
     return output, retcode

-def vm_shutdown(name):
+def vm_shutdown(name, wait):
     """
     Shutdown a VM in the PVC cluster.
     """
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
-    retflag, retdata = pvc_vm.shutdown_vm(zk_conn, name)
+    retflag, retdata = pvc_vm.shutdown_vm(zk_conn, name, wait)
     pvc_common.stopZKConnection(zk_conn)

     if retflag:
@@ -651,12 +661,12 @@ def vm_disable(name):
         }
     return output, retcode

-def vm_move(name, node):
+def vm_move(name, node, wait):
     """
     Move a VM to another node.
     """
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
-    retflag, retdata = pvc_vm.move_vm(zk_conn, name, node)
+    retflag, retdata = pvc_vm.move_vm(zk_conn, name, node, wait)
     pvc_common.stopZKConnection(zk_conn)

     if retflag:
@@ -669,12 +679,12 @@ def vm_move(name, node):
         }
     return output, retcode

-def vm_migrate(name, node, flag_force):
+def vm_migrate(name, node, flag_force, wait):
     """
     Temporarily migrate a VM to another node.
     """
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
-    retflag, retdata = pvc_vm.migrate_vm(zk_conn, name, node, flag_force)
+    retflag, retdata = pvc_vm.migrate_vm(zk_conn, name, node, flag_force, wait)
     pvc_common.stopZKConnection(zk_conn)

     if retflag:
@@ -687,12 +697,12 @@ def vm_migrate(name, node, flag_force):
         }
     return output, retcode

-def vm_unmigrate(name):
+def vm_unmigrate(name, wait):
     """
     Unmigrate a migrated VM.
     """
     zk_conn = pvc_common.startZKConnection(config['coordinators'])
-    retflag, retdata = pvc_vm.unmigrate_vm(zk_conn, name)
+    retflag, retdata = pvc_vm.unmigrate_vm(zk_conn, name, wait)
     pvc_common.stopZKConnection(zk_conn)

     if retflag:
@@ -1327,6 +1337,144 @@ def ceph_volume_remove(pool, name):
         }
     return output, retcode

+def ceph_volume_upload(pool, volume, data, img_type):
+    """
+    Upload a raw file via HTTP post to a PVC Ceph volume
+    """
+    # Determine the image conversion options
+    if img_type not in ['raw', 'vmdk', 'qcow2', 'qed', 'vdi', 'vpc']:
+        output = {
+            "message": "Image type '{}' is not valid.".format(img_type)
+        }
+        retcode = 400
+        return output, retcode
+
+    # Get the size of the target block device
+    zk_conn = pvc_common.startZKConnection(config['coordinators'])
+    retcode, retdata = pvc_ceph.get_list_volume(zk_conn, pool, volume, is_fuzzy=False)
+    pvc_common.stopZKConnection(zk_conn)
+    # If there's no target, return failure
+    if not retcode or len(retdata) < 1:
+        output = {
+            "message": "Target volume '{}' does not exist in pool '{}'.".format(volume, pool)
+        }
+        retcode = 400
+        return output, retcode
+    dev_size = retdata[0]['stats']['size']
+
+    def cleanup_maps_and_volumes():
+        zk_conn = pvc_common.startZKConnection(config['coordinators'])
+        # Unmap the target blockdev
+        retflag, retdata = pvc_ceph.unmap_volume(zk_conn, pool, volume)
+        # Unmap the temporary blockdev
+        retflag, retdata = pvc_ceph.unmap_volume(zk_conn, pool, "{}_tmp".format(volume))
+        # Remove the temporary blockdev
+        retflag, retdata = pvc_ceph.remove_volume(zk_conn, pool, "{}_tmp".format(volume))
+        pvc_common.stopZKConnection(zk_conn)
+
+    # For raw images, write the image directly to the target blockdev
+    if img_type == 'raw':
+        # Map the target blockdev
+        zk_conn = pvc_common.startZKConnection(config['coordinators'])
+        retflag, retdata = pvc_ceph.map_volume(zk_conn, pool, volume)
+        pvc_common.stopZKConnection(zk_conn)
+        if not retflag:
+            output = {
+                'message': retdata.replace('\"', '\'')
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+        dest_blockdev = retdata
+
+        # Save the data to the blockdev directly
+        try:
+            data.save(dest_blockdev)
+        except:
+            output = {
+                'message': "Failed to write image file to volume."
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+
+        output = {
+            'message': "Wrote uploaded file to volume '{}' in pool '{}'.".format(volume, pool)
+        }
+        retcode = 200
+        cleanup_maps_and_volumes()
+        return output, retcode
+
+    # For non-raw images, write to a temporary blockdev then convert to the target
+    else:
+        # Create a temporary blockdev
+        zk_conn = pvc_common.startZKConnection(config['coordinators'])
+        retflag, retdata = pvc_ceph.add_volume(zk_conn, pool, "{}_tmp".format(volume), dev_size)
+        pvc_common.stopZKConnection(zk_conn)
+        if not retflag:
+            output = {
+                'message': retdata.replace('\"', '\'')
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+
+        # Map the temporary target blockdev
+        zk_conn = pvc_common.startZKConnection(config['coordinators'])
+        retflag, retdata = pvc_ceph.map_volume(zk_conn, pool, "{}_tmp".format(volume))
+        pvc_common.stopZKConnection(zk_conn)
+        if not retflag:
+            output = {
+                'message': retdata.replace('\"', '\'')
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+        temp_blockdev = retdata
+
+        # Map the target blockdev
+        zk_conn = pvc_common.startZKConnection(config['coordinators'])
+        retflag, retdata = pvc_ceph.map_volume(zk_conn, pool, volume)
+        pvc_common.stopZKConnection(zk_conn)
+        if not retflag:
+            output = {
+                'message': retdata.replace('\"', '\'')
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+        dest_blockdev = retdata
+
+        # Save the data to the temporary blockdev directly
+        try:
+            data.save(temp_blockdev)
+        except:
+            output = {
+                'message': "Failed to write image file to temporary volume."
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+
+        # Convert from the temporary to destination format on the blockdevs
+        retcode, stdout, stderr = pvc_common.run_os_command(
+            'qemu-img convert -C -f {} -O raw {} {}'.format(img_type, temp_blockdev, dest_blockdev)
+        )
+        if retcode:
+            output = {
+                'message': "Failed to convert image format from '{}' to 'raw': {}".format(img_type, stderr)
+            }
+            retcode = 400
+            cleanup_maps_and_volumes()
+            return output, retcode
+
+        output = {
+            'message': "Converted and wrote uploaded file to volume '{}' in pool '{}'.".format(volume, pool)
+        }
+        retcode = 200
+        cleanup_maps_and_volumes()
+        return output, retcode
+
 def ceph_volume_snapshot_list(pool=None, volume=None, limit=None, is_fuzzy=True):
     """
     Get the list of RBD volume snapshots in the Ceph storage cluster.
Modified file: Libvirt domain XML schema templates

@@ -53,6 +53,7 @@ libvirt_header = """<domain type='kvm'>
     <on_reboot>restart</on_reboot>
     <on_crash>restart</on_crash>
   <devices>
+    <console type='pty'/>
"""

 # File footer, closing devices and domain elements
@@ -75,7 +76,6 @@ devices_default = """    <emulator>/usr/bin/kvm</emulator>
 devices_serial = """    <serial type='pty'>
         <log file='/var/log/libvirt/{vm_name}.log' append='on'/>
     </serial>
-    <console type='pty'/>
"""

 # VNC device
api-daemon/pvcapid/models.py: new executable file (215 lines)

@@ -0,0 +1,215 @@
#!/usr/bin/env python3

# models.py - PVC Database models
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

from pvcapid.flaskapi import app, db

class DBSystemTemplate(db.Model):
    __tablename__ = 'system_template'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    vcpu_count = db.Column(db.Integer, nullable=False)
    vram_mb = db.Column(db.Integer, nullable=False)
    serial = db.Column(db.Boolean, nullable=False)
    vnc = db.Column(db.Boolean, nullable=False)
    vnc_bind = db.Column(db.Text)
    node_limit = db.Column(db.Text)
    node_selector = db.Column(db.Text)
    node_autostart = db.Column(db.Boolean, nullable=False)
    ova = db.Column(db.Integer, db.ForeignKey("ova.id"), nullable=True)

    def __init__(self, name, vcpu_count, vram_mb, serial, vnc, vnc_bind, node_limit, node_selector, node_autostart, ova=None):
        self.name = name
        self.vcpu_count = vcpu_count
        self.vram_mb = vram_mb
        self.serial = serial
        self.vnc = vnc
        self.vnc_bind = vnc_bind
        self.node_limit = node_limit
        self.node_selector = node_selector
        self.node_autostart = node_autostart
        self.ova = ova

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBNetworkTemplate(db.Model):
    __tablename__ = 'network_template'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    mac_template = db.Column(db.Text)
    ova = db.Column(db.Integer, db.ForeignKey("ova.id"), nullable=True)

    def __init__(self, name, mac_template, ova=None):
        self.name = name
        self.mac_template = mac_template
        self.ova = ova

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBNetworkElement(db.Model):
    __tablename__ = 'network'

    id = db.Column(db.Integer, primary_key=True)
    network_template = db.Column(db.Integer, db.ForeignKey("network_template.id"), nullable=False)
    vni = db.Column(db.Integer, nullable=False)

    def __init__(self, network_template, vni):
        self.network_template = network_template
        self.vni = vni

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBStorageTemplate(db.Model):
    __tablename__ = 'storage_template'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    ova = db.Column(db.Integer, db.ForeignKey("ova.id"), nullable=True)

    def __init__(self, name, ova=None):
        self.name = name
        self.ova = ova

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBStorageElement(db.Model):
    __tablename__ = 'storage'

    id = db.Column(db.Integer, primary_key=True)
    storage_template = db.Column(db.Integer, db.ForeignKey("storage_template.id"), nullable=False)
    pool = db.Column(db.Text, nullable=False)
    disk_id = db.Column(db.Text, nullable=False)
    source_volume = db.Column(db.Text)
    disk_size_gb = db.Column(db.Integer)
    mountpoint = db.Column(db.Text)
    filesystem = db.Column(db.Text)
    filesystem_args = db.Column(db.Text)

    def __init__(self, storage_template, pool, disk_id, source_volume, disk_size_gb, mountpoint, filesystem, filesystem_args):
        self.storage_template = storage_template
        self.pool = pool
        self.disk_id = disk_id
        self.source_volume = source_volume
        self.disk_size_gb = disk_size_gb
        self.mountpoint = mountpoint
        self.filesystem = filesystem
        self.filesystem_args = filesystem_args

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBUserdata(db.Model):
    __tablename__ = 'userdata'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    userdata = db.Column(db.Text, nullable=False)

    def __init__(self, name, userdata):
        self.name = name
        self.userdata = userdata

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBScript(db.Model):
    __tablename__ = 'script'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    script = db.Column(db.Text, nullable=False)

    def __init__(self, name, script):
        self.name = name
        self.script = script

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBOva(db.Model):
    __tablename__ = 'ova'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    ovf = db.Column(db.Text, nullable=False)

    def __init__(self, name, ovf):
        self.name = name
        self.ovf = ovf

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBOvaVolume(db.Model):
    __tablename__ = 'ova_volume'

    id = db.Column(db.Integer, primary_key=True)
    ova = db.Column(db.Integer, db.ForeignKey("ova.id"), nullable=False)
    pool = db.Column(db.Text, nullable=False)
    volume_name = db.Column(db.Text, nullable=False)
    volume_format = db.Column(db.Text, nullable=False)
    disk_id = db.Column(db.Text, nullable=False)
    disk_size_gb = db.Column(db.Integer, nullable=False)

    def __init__(self, ova, pool, volume_name, volume_format, disk_id, disk_size_gb):
        self.ova = ova
        self.pool = pool
        self.volume_name = volume_name
        self.volume_format = volume_format
        self.disk_id = disk_id
        self.disk_size_gb = disk_size_gb

    def __repr__(self):
        return '<id {}>'.format(self.id)

class DBProfile(db.Model):
    __tablename__ = 'profile'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False, unique=True)
    profile_type = db.Column(db.Text, nullable=False)
    system_template = db.Column(db.Integer, db.ForeignKey("system_template.id"))
    network_template = db.Column(db.Integer, db.ForeignKey("network_template.id"))
    storage_template = db.Column(db.Integer, db.ForeignKey("storage_template.id"))
    userdata = db.Column(db.Integer, db.ForeignKey("userdata.id"))
    script = db.Column(db.Integer, db.ForeignKey("script.id"))
    ova = db.Column(db.Integer, db.ForeignKey("ova.id"))
    arguments = db.Column(db.Text)

    def __init__(self, name, profile_type, system_template, network_template, storage_template, userdata, script, ova, arguments):
        self.name = name
        self.profile_type = profile_type
        self.system_template = system_template
        self.network_template = network_template
        self.storage_template = storage_template
        self.userdata = userdata
        self.script = script
        self.ova = ova
        self.arguments = arguments

    def __repr__(self):
        return '<id {}>'.format(self.id)
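These Flask-SQLAlchemy models mirror the tables created by the two Alembic migrations above: the provisioner templates, plus the new ova and ova_volume tables and the profile_type and ova columns on profile. A minimal usage sketch, assuming an application context from the pvcapid.flaskapi app and db objects in this changeset; the field values are made-up examples:

# Minimal sketch: create and query a provisioner system template row.
# Assumes the pvcapid.flaskapi app/db pair from this changeset; the values
# are examples, not part of the commit.
from pvcapid.flaskapi import app, db
from pvcapid.models import DBSystemTemplate

with app.app_context():
    tmpl = DBSystemTemplate(
        name='small',
        vcpu_count=2,
        vram_mb=2048,
        serial=True,
        vnc=False,
        vnc_bind=None,
        node_limit=None,
        node_selector=None,
        node_autostart=False,
    )
    db.session.add(tmpl)
    db.session.commit()

    for row in DBSystemTemplate.query.all():
        print(row.id, row.name, row.vcpu_count, row.vram_mb)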
547
api-daemon/pvcapid/ova.py
Executable file
547
api-daemon/pvcapid/ova.py
Executable file
@ -0,0 +1,547 @@
#!/usr/bin/env python3

# ova.py - PVC OVA parser library
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

import flask
import json
import psycopg2
import psycopg2.extras
import os
import re
import time
import math
import tarfile
import shutil
import shlex
import subprocess

import lxml.etree

import daemon_lib.common as pvc_common
import daemon_lib.node as pvc_node
import daemon_lib.vm as pvc_vm
import daemon_lib.network as pvc_network
import daemon_lib.ceph as pvc_ceph

import pvcapid.libvirt_schema as libvirt_schema
import pvcapid.provisioner as provisioner

#
# Common functions
#

# Database connections
def open_database(config):
    conn = psycopg2.connect(
        host=config['database_host'],
        port=config['database_port'],
        dbname=config['database_name'],
        user=config['database_user'],
        password=config['database_password']
    )
    cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
    return conn, cur

def close_database(conn, cur, failed=False):
    if not failed:
        conn.commit()
    cur.close()
    conn.close()
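A hedged usage sketch of the two database helpers above (illustrative only; `config` is the same dict of `database_*` keys consumed by `open_database()`):

```python
conn, cur = open_database(config)
try:
    cur.execute("SELECT id, name FROM ova;")
    rows = cur.fetchall()          # RealDictCursor yields dict-like rows
    close_database(conn, cur)      # commits, then closes cursor and connection
except Exception:
    close_database(conn, cur, failed=True)  # failed=True skips the commit
    raise
```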
#
# OVA functions
#
def list_ova(limit, is_fuzzy=True):
    if limit:
        if is_fuzzy:
            # Handle fuzzy vs. non-fuzzy limits
            if not re.match('\^.*', limit):
                limit = '%' + limit
            else:
                limit = limit[1:]
            if not re.match('.*\$', limit):
                limit = limit + '%'
            else:
                limit = limit[:-1]

        query = "SELECT id, name FROM {} WHERE name LIKE %s;".format('ova')
        args = (limit, )
    else:
        query = "SELECT id, name FROM {};".format('ova')
        args = ()

    conn, cur = open_database(config)
    cur.execute(query, args)
    data = cur.fetchall()
    close_database(conn, cur)

    ova_data = list()

    for ova in data:
        ova_id = ova.get('id')
        ova_name = ova.get('name')

        query = "SELECT pool, volume_name, volume_format, disk_id, disk_size_gb FROM {} WHERE ova = %s;".format('ova_volume')
        args = (ova_id,)
        conn, cur = open_database(config)
        cur.execute(query, args)
        volumes = cur.fetchall()
        close_database(conn, cur)

        ova_data.append({'id': ova_id, 'name': ova_name, 'volumes': volumes})

    if ova_data:
        return ova_data, 200
    else:
        return { 'message': 'No OVAs found.' }, 404
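The anchor handling in `list_ova()` above maps a fuzzy `limit` onto a SQL `LIKE` pattern; a hedged, self-contained illustration of that mapping:

```python
import re

def to_like_pattern(limit):
    # Same anchor handling as list_ova() above, extracted for illustration.
    if not re.match('\^.*', limit):
        limit = '%' + limit
    else:
        limit = limit[1:]
    if not re.match('.*\$', limit):
        limit = limit + '%'
    else:
        limit = limit[:-1]
    return limit

assert to_like_pattern('debian') == '%debian%'    # substring match
assert to_like_pattern('^debian') == 'debian%'    # prefix match
assert to_like_pattern('debian$') == '%debian'    # suffix match
assert to_like_pattern('^debian$') == 'debian'    # exact match
```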
def delete_ova(name):
    ova_data, retcode = list_ova(name, is_fuzzy=False)
    if retcode != 200:
        retmsg = { 'message': 'The OVA "{}" does not exist.'.format(name) }
        retcode = 400
        return retmsg, retcode

    conn, cur = open_database(config)
    ova_id = ova_data[0].get('id')
    try:
        # Get the list of volumes for this OVA
        query = "SELECT pool, volume_name FROM ova_volume WHERE ova = %s;"
        args = (ova_id,)
        cur.execute(query, args)
        volumes = cur.fetchall()

        # Remove each volume for this OVA
        zk_conn = pvc_common.startZKConnection(config['coordinators'])
        for volume in volumes:
            pvc_ceph.remove_volume(zk_conn, volume.get('pool'), volume.get('volume_name'))

        # Delete the volume entries from the database
        query = "DELETE FROM ova_volume WHERE ova = %s;"
        args = (ova_id,)
        cur.execute(query, args)

        # Delete the profile entries from the database
        query = "DELETE FROM profile WHERE ova = %s;"
        args = (ova_id,)
        cur.execute(query, args)

        # Delete the system_template entries from the database
        query = "DELETE FROM system_template WHERE ova = %s;"
        args = (ova_id,)
        cur.execute(query, args)

        # Delete the OVA entry from the database
        query = "DELETE FROM ova WHERE id = %s;"
        args = (ova_id,)
        cur.execute(query, args)

        retmsg = { "message": 'Removed OVA image "{}".'.format(name) }
        retcode = 200
    except Exception as e:
        retmsg = { 'message': 'Failed to remove OVA "{}": {}'.format(name, e) }
        retcode = 400
    close_database(conn, cur)
    return retmsg, retcode
def upload_ova(ova_data, pool, name, ova_size):
    ova_archive = None

    # Cleanup function
    def cleanup_ova_maps_and_volumes():
        # Close the OVA archive
        if ova_archive:
            ova_archive.close()
        zk_conn = pvc_common.startZKConnection(config['coordinators'])
        # Unmap the OVA temporary blockdev
        retflag, retdata = pvc_ceph.unmap_volume(zk_conn, pool, "ova_{}".format(name))
        # Remove the OVA temporary blockdev
        retflag, retdata = pvc_ceph.remove_volume(zk_conn, pool, "ova_{}".format(name))
        pvc_common.stopZKConnection(zk_conn)

    # Normalize the OVA size to MB
    # The function always returns XXXXB, so strip off the B and convert to an integer
    ova_size_bytes = int(pvc_ceph.format_bytes_fromhuman(ova_size)[:-1])
    # Put the size into KB which rbd --size can understand
    ova_size_kb = math.ceil(ova_size_bytes / 1024)
    ova_size = "{}K".format(ova_size_kb)

    # Verify that the cluster has enough space to store the OVA volumes (2x OVA size, temporarily, 1x permanently)
    zk_conn = pvc_common.startZKConnection(config['coordinators'])
    pool_information = pvc_ceph.getPoolInformation(zk_conn, pool)
    pvc_common.stopZKConnection(zk_conn)
    pool_free_space_bytes = int(pool_information['stats']['free_bytes'])
    if ova_size_bytes * 2 >= pool_free_space_bytes:
        output = {
            'message': "The cluster does not have enough free space ({}) to store the OVA volume ({}).".format(
                pvc_ceph.format_bytes_tohuman(pool_free_space_bytes),
                pvc_ceph.format_bytes_tohuman(ova_size_bytes)
            )
        }
        retcode = 400
        cleanup_ova_maps_and_volumes()
        return output, retcode

    # Create a temporary OVA blockdev
    zk_conn = pvc_common.startZKConnection(config['coordinators'])
    retflag, retdata = pvc_ceph.add_volume(zk_conn, pool, "ova_{}".format(name), ova_size)
    pvc_common.stopZKConnection(zk_conn)
    if not retflag:
        output = {
            'message': retdata.replace('\"', '\'')
        }
        retcode = 400
        cleanup_ova_maps_and_volumes()
        return output, retcode

    # Map the temporary OVA blockdev
    zk_conn = pvc_common.startZKConnection(config['coordinators'])
    retflag, retdata = pvc_ceph.map_volume(zk_conn, pool, "ova_{}".format(name))
    pvc_common.stopZKConnection(zk_conn)
    if not retflag:
        output = {
            'message': retdata.replace('\"', '\'')
        }
        retcode = 400
        cleanup_ova_maps_and_volumes()
        return output, retcode
    ova_blockdev = retdata

    # Save the OVA data to the temporary blockdev directly
    try:
        ova_data.save(ova_blockdev)
    except:
        output = {
            'message': "Failed to write OVA file to temporary volume."
        }
        retcode = 400
        cleanup_ova_maps_and_volumes()
        return output, retcode

    try:
        # Set up the TAR reader for the OVA temporary blockdev
        ova_archive = tarfile.open(name=ova_blockdev)
        # Determine the files in the OVA
        members = ova_archive.getmembers()
    except tarfile.TarError:
        output = {
            'message': "The uploaded OVA file is not readable."
        }
        retcode = 400
        cleanup_ova_maps_and_volumes()
        return output, retcode

    # Parse through the members list and extract the OVF file
    for element in set(x for x in members if re.match('.*\.ovf$', x.name)):
        ovf_file = ova_archive.extractfile(element)

    # Parse the OVF file to get our VM details
    ovf_parser = OVFParser(ovf_file)
    ovf_xml_raw = ovf_parser.getXML()
    virtual_system = ovf_parser.getVirtualSystems()[0]
    virtual_hardware = ovf_parser.getVirtualHardware(virtual_system)
    disk_map = ovf_parser.getDiskMap(virtual_system)

    # Close the OVF file
    ovf_file.close()

    # Create and upload each disk volume
    for idx, disk in enumerate(disk_map):
        disk_identifier = "sd{}".format(chr(ord('a') + idx))
        volume = "ova_{}_{}".format(name, disk_identifier)
        dev_src = disk.get('src')
        dev_type = dev_src.split('.')[-1]
        dev_size_raw = ova_archive.getmember(dev_src).size
        vm_volume_size = disk.get('capacity')

        # Normalize the dev size to KB
        # The function always returns XXXXB, so strip off the B and convert to an integer
        dev_size_bytes = int(pvc_ceph.format_bytes_fromhuman(dev_size_raw)[:-1])
        dev_size_kb = math.ceil(dev_size_bytes / 1024)
        dev_size = "{}K".format(dev_size_kb)

        def cleanup_img_maps():
            zk_conn = pvc_common.startZKConnection(config['coordinators'])
            # Unmap the temporary blockdev
            retflag, retdata = pvc_ceph.unmap_volume(zk_conn, pool, volume)
            pvc_common.stopZKConnection(zk_conn)

        # Create the blockdev
        zk_conn = pvc_common.startZKConnection(config['coordinators'])
        retflag, retdata = pvc_ceph.add_volume(zk_conn, pool, volume, dev_size)
        pvc_common.stopZKConnection(zk_conn)
        if not retflag:
            output = {
                'message': retdata.replace('\"', '\'')
            }
            retcode = 400
            cleanup_img_maps()
            cleanup_ova_maps_and_volumes()
            return output, retcode

        # Map the blockdev
        zk_conn = pvc_common.startZKConnection(config['coordinators'])
        retflag, retdata = pvc_ceph.map_volume(zk_conn, pool, volume)
        pvc_common.stopZKConnection(zk_conn)
        if not retflag:
            output = {
                'message': retdata.replace('\"', '\'')
            }
            retcode = 400
            cleanup_img_maps()
            cleanup_ova_maps_and_volumes()
            return output, retcode
        temp_blockdev = retdata

        try:
            # Open (extract) the TAR archive file and seek to byte 0
            vmdk_file = ova_archive.extractfile(disk.get('src'))
            vmdk_file.seek(0)
            # Open the temporary blockdev and seek to byte 0
            blk_file = open(temp_blockdev, 'wb')
            blk_file.seek(0)
            # Write the contents of vmdk_file into blk_file
            bytes_written = blk_file.write(vmdk_file.read())
            # Close blk_file (and flush the buffers)
            blk_file.close()
            # Close vmdk_file
            vmdk_file.close()
            # Perform an OS-level sync
            pvc_common.run_os_command('sync')
        except:
            output = {
                'message': "Failed to write image file '{}' to temporary volume.".format(disk.get('src'))
            }
            retcode = 400
            cleanup_img_maps()
            cleanup_ova_maps_and_volumes()
            return output, retcode

        cleanup_img_maps()

    cleanup_ova_maps_and_volumes()

    # Prepare the database entries
    query = "INSERT INTO ova (name, ovf) VALUES (%s, %s);"
    args = (name, ovf_xml_raw)
    conn, cur = open_database(config)
    try:
        cur.execute(query, args)
        close_database(conn, cur)
    except Exception as e:
        output = {
            'message': 'Failed to create OVA entry "{}": {}'.format(name, e)
        }
        retcode = 400
        close_database(conn, cur)
        return output, retcode

    # Get the OVA database id
    query = "SELECT id FROM ova WHERE name = %s;"
    args = (name, )
    conn, cur = open_database(config)
    cur.execute(query, args)
    ova_id = cur.fetchone()['id']
    close_database(conn, cur)

    # Prepare disk entries in ova_volume
    for idx, disk in enumerate(disk_map):
        disk_identifier = "sd{}".format(chr(ord('a') + idx))
        volume_type = disk.get('src').split('.')[-1]
        volume = "ova_{}_{}".format(name, disk_identifier)
        vm_volume_size = disk.get('capacity')

        # The function always returns XXXXB, so strip off the B and convert to an integer
        vm_volume_size_bytes = int(pvc_ceph.format_bytes_fromhuman(vm_volume_size)[:-1])
        vm_volume_size_gb = math.ceil(vm_volume_size_bytes / 1024 / 1024 / 1024)

        query = "INSERT INTO ova_volume (ova, pool, volume_name, volume_format, disk_id, disk_size_gb) VALUES (%s, %s, %s, %s, %s, %s);"
        args = (ova_id, pool, volume, volume_type, disk_identifier, vm_volume_size_gb)

        conn, cur = open_database(config)
        try:
            cur.execute(query, args)
            close_database(conn, cur)
        except Exception as e:
            output = {
                'message': 'Failed to create OVA volume entry "{}": {}'.format(volume, e)
            }
            retcode = 400
            close_database(conn, cur)
            return output, retcode

    # Prepare a system_template for the OVA
    vcpu_count = virtual_hardware.get('vcpus')
    vram_mb = virtual_hardware.get('vram')
    if virtual_hardware.get('graphics-controller') == 1:
        vnc = True
        serial = False
    else:
        vnc = False
        serial = True
    retdata, retcode = provisioner.create_template_system(name, vcpu_count, vram_mb, serial, vnc, vnc_bind=None, ova=ova_id)
    system_template, retcode = provisioner.list_template_system(name, is_fuzzy=False)
    system_template_name = system_template[0].get('name')

    # Prepare a barebones profile for the OVA
    retdata, retcode = provisioner.create_profile(name, 'ova', system_template_name, None, None, userdata=None, script=None, ova=name, arguments=None)

    output = {
        'message': "Imported OVA image '{}'.".format(name)
    }
    retcode = 200
    return output, retcode
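A small worked example of the size normalization used in `upload_ova()` above (hedged: it assumes `format_bytes_fromhuman()` returns a byte-count string ending in `B`, as the comments describe):

```python
import math

ova_size_bytes = int('2147483648B'[:-1])        # strip the trailing 'B' -> 2147483648 (a 2 GB OVA)
ova_size_kb = math.ceil(ova_size_bytes / 1024)  # -> 2097152
ova_size = "{}K".format(ova_size_kb)            # -> '2097152K', a size string rbd understands
```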
#
# OVF parser
#
class OVFParser(object):
    RASD_TYPE = {
        "1": "vmci",
        "3": "vcpus",
        "4": "vram",
        "5": "ide-controller",
        "6": "scsi-controller",
        "10": "ethernet-adapter",
        "15": "cdrom",
        "17": "disk",
        "20": "other-storage-device",
        "23": "usb-controller",
        "24": "graphics-controller",
        "35": "sound-controller"
    }

    def _getFilelist(self):
        path = "{{{schema}}}References/{{{schema}}}File".format(schema=self.OVF_SCHEMA)
        id_attr = "{{{schema}}}id".format(schema=self.OVF_SCHEMA)
        href_attr = "{{{schema}}}href".format(schema=self.OVF_SCHEMA)
        current_list = self.xml.findall(path)
        results = [(x.get(id_attr), x.get(href_attr)) for x in current_list]
        return results

    def _getDisklist(self):
        path = "{{{schema}}}DiskSection/{{{schema}}}Disk".format(schema=self.OVF_SCHEMA)
        id_attr = "{{{schema}}}diskId".format(schema=self.OVF_SCHEMA)
        ref_attr = "{{{schema}}}fileRef".format(schema=self.OVF_SCHEMA)
        cap_attr = "{{{schema}}}capacity".format(schema=self.OVF_SCHEMA)
        cap_units = "{{{schema}}}capacityAllocationUnits".format(schema=self.OVF_SCHEMA)
        current_list = self.xml.findall(path)
        results = [(x.get(id_attr), x.get(ref_attr), x.get(cap_attr), x.get(cap_units)) for x in current_list]
        return results

    def _getAttributes(self, virtual_system, path, attribute):
        current_list = virtual_system.findall(path)
        results = [x.get(attribute) for x in current_list]
        return results

    def __init__(self, ovf_file):
        self.xml = lxml.etree.parse(ovf_file)

        # Define our schemas
        envelope_tag = self.xml.find(".")
        self.XML_SCHEMA = envelope_tag.nsmap.get('xsi')
        self.OVF_SCHEMA = envelope_tag.nsmap.get('ovf')
        self.RASD_SCHEMA = envelope_tag.nsmap.get('rasd')
        self.SASD_SCHEMA = envelope_tag.nsmap.get('sasd')
        self.VSSD_SCHEMA = envelope_tag.nsmap.get('vssd')

        self.ovf_version = int(self.OVF_SCHEMA.split('/')[-1])

        # Get the file and disk lists
        self.filelist = self._getFilelist()
        self.disklist = self._getDisklist()

    def getVirtualSystems(self):
        return self.xml.findall("{{{schema}}}VirtualSystem".format(schema=self.OVF_SCHEMA))

    def getXML(self):
        return lxml.etree.tostring(self.xml, pretty_print=True).decode('utf8')

    def getVirtualHardware(self, virtual_system):
        hardware_list = virtual_system.findall(
            "{{{schema}}}VirtualHardwareSection/{{{schema}}}Item".format(schema=self.OVF_SCHEMA)
        )
        virtual_hardware = {}

        for item in hardware_list:
            try:
                item_type = self.RASD_TYPE[item.find("{{{rasd}}}ResourceType".format(rasd=self.RASD_SCHEMA)).text]
            except:
                continue
            quantity = item.find("{{{rasd}}}VirtualQuantity".format(rasd=self.RASD_SCHEMA))
            if quantity is None:
                virtual_hardware[item_type] = 1
            else:
                virtual_hardware[item_type] = quantity.text

        return virtual_hardware

    def getDiskMap(self, virtual_system):
        # OVF v2 uses the StorageItem field, while v1 uses the normal Item field
        if self.ovf_version < 2:
            hardware_list = virtual_system.findall(
                "{{{schema}}}VirtualHardwareSection/{{{schema}}}Item".format(schema=self.OVF_SCHEMA)
            )
        else:
            hardware_list = virtual_system.findall(
                "{{{schema}}}VirtualHardwareSection/{{{schema}}}StorageItem".format(schema=self.OVF_SCHEMA)
            )
        disk_list = []

        for item in hardware_list:
            item_type = None

            if self.SASD_SCHEMA is not None:
                item_type = self.RASD_TYPE[item.find("{{{sasd}}}ResourceType".format(sasd=self.SASD_SCHEMA)).text]
            else:
                item_type = self.RASD_TYPE[item.find("{{{rasd}}}ResourceType".format(rasd=self.RASD_SCHEMA)).text]

            if item_type != 'disk':
                continue

            hostref = None
            if self.SASD_SCHEMA is not None:
                hostref = item.find("{{{sasd}}}HostResource".format(sasd=self.SASD_SCHEMA))
            else:
                hostref = item.find("{{{rasd}}}HostResource".format(rasd=self.RASD_SCHEMA))
            if hostref is None:
                continue
            disk_res = hostref.text

            # Determine which file this disk_res ultimately represents
            (disk_id, disk_ref, disk_capacity, disk_capacity_unit) = [x for x in self.disklist if x[0] == disk_res.split('/')[-1]][0]
            (file_id, disk_src) = [x for x in self.filelist if x[0] == disk_ref][0]

            if disk_capacity_unit is not None:
                # Handle the unit conversion
                base_unit, action, multiple = disk_capacity_unit.split()
                multiple_base, multiple_exponent = multiple.split('^')
                disk_capacity = int(disk_capacity) * ( int(multiple_base) ** int(multiple_exponent) )

            # Append the disk with all details to the list
            disk_list.append({
                "id": disk_id,
                "ref": disk_ref,
                "capacity": disk_capacity,
                "src": disk_src
            })

        return disk_list
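A hedged usage sketch of the `OVFParser` class above (the OVF path is a placeholder; only methods defined above are called, and the example assumes the descriptor contains at least one VirtualSystem):

```python
with open('/tmp/example.ovf', 'rb') as ovf_file:
    parser = OVFParser(ovf_file)
    system = parser.getVirtualSystems()[0]
    hardware = parser.getVirtualHardware(system)  # e.g. {'vcpus': '2', 'vram': '2048', ...}
    disks = parser.getDiskMap(system)             # e.g. [{'id': ..., 'ref': ..., 'capacity': ..., 'src': ...}]
print(hardware.get('vcpus'), hardware.get('vram'), len(disks))
```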
(File diff suppressed because it is too large)
@@ -16,7 +16,7 @@ HOSTS=( ${@} )
 echo "${HOSTS[@]}"

 # Build the packages
-$SUDO ./build-deb.sh
+./build-deb.sh

 # Install the client(s) locally
 $SUDO dpkg -i ../pvc-client*.deb
@@ -28,11 +28,11 @@ for HOST in ${HOSTS[@]}; do
     ssh $HOST $SUDO rm -rf /tmp/pvc
     ssh $HOST mkdir /tmp/pvc
     scp ../*.deb $HOST:/tmp/pvc/
-    ssh $HOST $SUDO dpkg -i /tmp/pvc/*.deb
-    ssh $HOST $SUDO systemctl restart pvcd
+    ssh $HOST $SUDO dpkg -i /tmp/pvc/{pvc-client-cli,pvc-daemon-common,pvc-daemon-api,pvc-daemon-node}*.deb
     ssh $HOST rm -rf /tmp/pvc
+    ssh $HOST $SUDO systemctl restart pvcnoded
     echo "****"
-    echo "Waiting 10s for host ${HOST} to stabilize"
+    echo "Waiting 15s for host ${HOST} to stabilize"
     echo "****"
-    sleep 10
+    sleep 15
 done
@@ -1,5 +1,5 @@
 #!/bin/sh
-ver="0.6"
+ver="$( head -1 debian/changelog | awk -F'[()-]' '{ print $2 }' )"
 git pull
 rm ../pvc_*
 dh_make -p pvc_${ver} --createorig --single --yes
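For reference, a hedged example of what the new version detection above extracts (the changelog line is illustrative):

```sh
# For a debian/changelog whose first line is
#   pvc (0.7) unstable; urgency=medium
# this prints "0.7"; a revision suffix such as "(0.7-1)" would also be
# split on the '-' field separator and still yield "0.7".
head -1 debian/changelog | awk -F'[()-]' '{ print $2 }'
```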
@@ -1 +0,0 @@
-../client-common
@@ -1,11 +0,0 @@
-CREATE TABLE system_template (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE, vcpu_count INT NOT NULL, vram_mb INT NOT NULL, serial BOOL NOT NULL, vnc BOOL NOT NULL, vnc_bind TEXT, node_limit TEXT, node_selector TEXT, node_autostart BOOL NOT NULL);
-CREATE TABLE network_template (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE, mac_template TEXT);
-CREATE TABLE network (id SERIAL PRIMARY KEY, network_template INT REFERENCES network_template(id), vni INT NOT NULL);
-CREATE TABLE storage_template (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE);
-CREATE TABLE storage (id SERIAL PRIMARY KEY, storage_template INT REFERENCES storage_template(id), pool TEXT NOT NULL, disk_id TEXT NOT NULL, source_volume TEXT, disk_size_gb INT, mountpoint TEXT, filesystem TEXT, filesystem_args TEXT);
-CREATE TABLE userdata (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE, userdata TEXT NOT NULL);
-CREATE TABLE script (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE, script TEXT NOT NULL);
-CREATE TABLE profile (id SERIAL PRIMARY KEY, name TEXT NOT NULL UNIQUE, system_template INT REFERENCES system_template(id), network_template INT REFERENCES network_template(id), storage_template INT REFERENCES storage_template(id), userdata INT REFERENCES userdata(id), script INT REFERENCES script(id), arguments text);
-
-INSERT INTO userdata (name, userdata) VALUES ('empty', '');
-INSERT INTO script (name, script) VALUES ('empty', '');
@@ -1,16 +0,0 @@
-# Parallel Virtual Cluster Provisioner client worker unit file
-
-[Unit]
-Description = Parallel Virtual Cluster Provisioner worker
-After = network-online.target
-
-[Service]
-Type = simple
-WorkingDirectory = /usr/share/pvc
-Environment = PYTHONUNBUFFERED=true
-Environment = PVC_CONFIG_FILE=/etc/pvc/pvc-api.yaml
-ExecStart = /usr/bin/celery worker -A pvc-api.celery --concurrency 1 --loglevel INFO
-Restart = on-failure
-
-[Install]
-WantedBy = multi-user.target
@@ -25,8 +25,10 @@ import json
 import time
 import math

+from requests_toolbelt.multipart.encoder import MultipartEncoder, MultipartEncoderMonitor
+
 import cli_lib.ansiprint as ansiprint
-from cli_lib.common import call_api
+from cli_lib.common import UploadProgressBar, call_api

 #
 # Supplemental functions
@@ -855,6 +857,41 @@ def ceph_volume_add(config, pool, volume, size):

     return retstatus, response.json()['message']

+def ceph_volume_upload(config, pool, volume, image_format, image_file):
+    """
+    Upload a disk image to a Ceph volume
+
+    API endpoint: POST /api/v1/storage/ceph/volume/{pool}/{volume}/upload
+    API arguments: image_format={image_format}
+    API schema: {"message":"{data}"}
+    """
+    import click
+
+    bar = UploadProgressBar(image_file, end_message="Parsing file on remote side...", end_nl=False)
+    upload_data = MultipartEncoder(
+        fields={ 'file': ('filename', open(image_file, 'rb'), 'text/plain')}
+    )
+    upload_monitor = MultipartEncoderMonitor(upload_data, bar.update)
+
+    headers = {
+        "Content-Type": upload_monitor.content_type
+    }
+    params = {
+        'image_format': image_format
+    }
+
+    response = call_api(config, 'post', '/storage/ceph/volume/{}/{}/upload'.format(pool, volume), headers=headers, params=params, data=upload_monitor)
+
+    click.echo("done.")
+    click.echo()
+
+    if response.status_code == 200:
+        retstatus = True
+    else:
+        retstatus = False
+
+    return retstatus, response.json()['message']
+
 def ceph_volume_remove(config, pool, volume):
     """
     Remove Ceph volume
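A hedged sketch of calling the new client-library function above (the config dict, pool, and file path are placeholders; only the function shown above is assumed):

```python
# Illustrative only: upload a local raw image into an existing Ceph volume.
retstatus, message = ceph_volume_upload(
    config,                        # a valid CLI config dict (API address, scheme, key, ...)
    pool='vms',
    volume='test-disk',
    image_format='raw',
    image_file='/tmp/test-disk.raw'
)
print(retstatus, message)
```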
@@ -20,10 +20,81 @@
 #
 ###############################################################################

+import os
+import io
+import math
+import time
 import requests
 import click

-def call_api(config, operation, request_uri, params=None, data=None):
+def format_bytes(size_bytes):
+    byte_unit_matrix = {
+        'B': 1,
+        'K': 1024,
+        'M': 1024*1024,
+        'G': 1024*1024*1024,
+        'T': 1024*1024*1024*1024,
+        'P': 1024*1024*1024*1024*1024
+    }
+    human_bytes = '0B'
+    for unit in sorted(byte_unit_matrix, key=byte_unit_matrix.get):
+        formatted_bytes = int(math.ceil(size_bytes / byte_unit_matrix[unit]))
+        if formatted_bytes < 10000:
+            human_bytes = '{}{}'.format(formatted_bytes, unit)
+            break
+    return human_bytes
+
+class UploadProgressBar(object):
+    def __init__(self, filename, end_message='', end_nl=True):
+        file_size = os.path.getsize(filename)
+        file_size_human = format_bytes(file_size)
+        click.echo("Uploading file (total size {})...".format(file_size_human))
+
+        self.length = file_size
+        self.time_last = int(round(time.time() * 1000)) - 1000
+        self.bytes_last = 0
+        self.bytes_diff = 0
+        self.is_end = False
+
+        self.end_message = end_message
+        self.end_nl = end_nl
+        if not self.end_nl:
+            self.end_suffix = ' '
+        else:
+            self.end_suffix = ''
+
+        self.bar = click.progressbar(length=self.length, show_eta=True)
+
+    def update(self, monitor):
+        bytes_cur = monitor.bytes_read
+        self.bytes_diff += bytes_cur - self.bytes_last
+        if self.bytes_last == bytes_cur:
+            self.is_end = True
+        self.bytes_last = bytes_cur
+
+        time_cur = int(round(time.time() * 1000))
+        if (time_cur - 1000) > self.time_last:
+            self.time_last = time_cur
+            self.bar.update(self.bytes_diff)
+            self.bytes_diff = 0
+
+        if self.is_end:
+            self.bar.update(self.bytes_diff)
+            self.bytes_diff = 0
+            click.echo()
+            click.echo()
+            if self.end_message:
+                click.echo(self.end_message + self.end_suffix, nl=self.end_nl)
+
+class ErrorResponse(requests.Response):
+    def __init__(self, json_data, status_code):
+        self.json_data = json_data
+        self.status_code = status_code
+
+    def json(self):
+        return self.json_data
+
+def call_api(config, operation, request_uri, headers={}, params=None, data=None, files=None):
     # Craft the URI
     uri = '{}://{}{}{}'.format(
         config['api_scheme'],
@@ -34,9 +105,7 @@ def call_api(config, operation, request_uri, params=None, data=None):

     # Craft the authentication header if required
     if config['api_key']:
-        headers = {'X-Api-Key': config['api_key']}
-    else:
-        headers = None
+        headers['X-Api-Key'] = config['api_key']

     # Determine the request type and hit the API
     try:
@@ -52,14 +121,16 @@ def call_api(config, operation, request_uri, params=None, data=None):
                 uri,
                 headers=headers,
                 params=params,
-                data=data
+                data=data,
+                files=files
             )
         if operation == 'put':
             response = requests.put(
                 uri,
                 headers=headers,
                 params=params,
-                data=data
+                data=data,
+                files=files
             )
         if operation == 'patch':
             response = requests.patch(
@@ -76,8 +147,8 @@ def call_api(config, operation, request_uri, params=None, data=None):
                 data=data
             )
     except Exception as e:
-        click.echo('Failed to connect to the API: {}'.format(e))
-        exit(1)
+        message = 'Failed to connect to the API: {}'.format(e)
+        response = ErrorResponse({'message':message}, 500)

     # Display debug output
     if config['debug']:
|
|||||||
params = dict()
|
params = dict()
|
||||||
if limit:
|
if limit:
|
||||||
params['limit'] = limit
|
params['limit'] = limit
|
||||||
|
|
||||||
if only_static:
|
if only_static:
|
||||||
params['static'] = True
|
params['static'] = True
|
||||||
|
else:
|
||||||
|
params['static'] = False
|
||||||
|
|
||||||
response = call_api(config, 'get', '/network/{net}/lease'.format(net=net), params=params)
|
response = call_api(config, 'get', '/network/{net}/lease'.format(net=net), params=params)
|
||||||
|
|
||||||
|
@ -25,8 +25,10 @@ import re
|
|||||||
import subprocess
|
import subprocess
|
||||||
import ast
|
import ast
|
||||||
|
|
||||||
|
from requests_toolbelt.multipart.encoder import MultipartEncoder, MultipartEncoderMonitor
|
||||||
|
|
||||||
import cli_lib.ansiprint as ansiprint
|
import cli_lib.ansiprint as ansiprint
|
||||||
from cli_lib.common import call_api
|
from cli_lib.common import UploadProgressBar, call_api
|
||||||
|
|
||||||
#
|
#
|
||||||
# Primary functions
|
# Primary functions
|
||||||
@ -85,7 +87,24 @@ def template_add(config, params, template_type=None):
|
|||||||
|
|
||||||
return retvalue, response.json()['message']
|
return retvalue, response.json()['message']
|
||||||
|
|
||||||
def template_remove(config, name, template_type=None):
|
def template_modify(config, params, name, template_type):
|
||||||
|
"""
|
||||||
|
Modify an existing template of {template_type} with {params}
|
||||||
|
|
||||||
|
API endpoint: PUT /api/v1/provisioner/template/{template_type}/{name}
|
||||||
|
API_arguments: args
|
||||||
|
API schema: {message}
|
||||||
|
"""
|
||||||
|
response = call_api(config, 'put', '/provisioner/template/{template_type}/{name}'.format(template_type=template_type, name=name), params=params)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
retvalue = True
|
||||||
|
else:
|
||||||
|
retvalue = False
|
||||||
|
|
||||||
|
return retvalue, response.json()['message']
|
||||||
|
|
||||||
|
def template_remove(config, name, template_type):
|
||||||
"""
|
"""
|
||||||
Remove template {name} of {template_type}
|
Remove template {name} of {template_type}
|
||||||
|
|
||||||
@ -170,6 +189,21 @@ def userdata_list(config, limit):
|
|||||||
else:
|
else:
|
||||||
return False, response.json()['message']
|
return False, response.json()['message']
|
||||||
|
|
||||||
|
def userdata_show(config, name):
|
||||||
|
"""
|
||||||
|
Get information about userdata name
|
||||||
|
|
||||||
|
API endpoint: GET /api/v1/provisioner/userdata/{name}
|
||||||
|
API arguments:
|
||||||
|
API schema: [{json_data_object},{json_data_object},etc.]
|
||||||
|
"""
|
||||||
|
response = call_api(config, 'get', '/provisioner/userdata/{}'.format(name))
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
return True, response.json()[0]['userdata']
|
||||||
|
else:
|
||||||
|
return False, response.json()['message']
|
||||||
|
|
||||||
def userdata_add(config, params):
|
def userdata_add(config, params):
|
||||||
"""
|
"""
|
||||||
Add a new userdata with {params}
|
Add a new userdata with {params}
|
||||||
@ -272,6 +306,21 @@ def script_list(config, limit):
|
|||||||
else:
|
else:
|
||||||
return False, response.json()['message']
|
return False, response.json()['message']
|
||||||
|
|
||||||
|
def script_show(config, name):
|
||||||
|
"""
|
||||||
|
Get information about script name
|
||||||
|
|
||||||
|
API endpoint: GET /api/v1/provisioner/script/{name}
|
||||||
|
API arguments:
|
||||||
|
API schema: [{json_data_object},{json_data_object},etc.]
|
||||||
|
"""
|
||||||
|
response = call_api(config, 'get', '/provisioner/script/{}'.format(name))
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
return True, response.json()[0]['script']
|
||||||
|
else:
|
||||||
|
return False, response.json()['message']
|
||||||
|
|
||||||
def script_add(config, params):
|
def script_add(config, params):
|
||||||
"""
|
"""
|
||||||
Add a new script with {params}
|
Add a new script with {params}
|
||||||
@@ -340,6 +389,89 @@ def script_remove(config, name):

     return retvalue, response.json()['message']

+def ova_info(config, name):
+    """
+    Get information about OVA image {name}
+
+    API endpoint: GET /api/v1/provisioner/ova/{name}
+    API arguments:
+    API schema: {json_data_object}
+    """
+    response = call_api(config, 'get', '/provisioner/ova/{name}'.format(name=name))
+
+    if response.status_code == 200:
+        return True, response.json()[0]
+    else:
+        return False, response.json()['message']
+
+def ova_list(config, limit):
+    """
+    Get list information about OVA images (limited by {limit})
+
+    API endpoint: GET /api/v1/provisioner/ova
+    API arguments: limit={limit}
+    API schema: [{json_data_object},{json_data_object},etc.]
+    """
+    params = dict()
+    if limit:
+        params['limit'] = limit
+
+    response = call_api(config, 'get', '/provisioner/ova', params=params)
+
+    if response.status_code == 200:
+        return True, response.json()
+    else:
+        return False, response.json()['message']
+
+def ova_upload(config, name, ova_file, params):
+    """
+    Upload an OVA image to the cluster
+
+    API endpoint: POST /api/v1/provisioner/ova/{name}
+    API arguments: pool={pool}, ova_size={ova_size}
+    API schema: {"message":"{data}"}
+    """
+    import click
+
+    bar = UploadProgressBar(ova_file, end_message="Parsing file on remote side...", end_nl=False)
+    upload_data = MultipartEncoder(
+        fields={ 'file': ('filename', open(ova_file, 'rb'), 'text/plain')}
+    )
+    upload_monitor = MultipartEncoderMonitor(upload_data, bar.update)
+
+    headers = {
+        "Content-Type": upload_monitor.content_type
+    }
+
+    response = call_api(config, 'post', '/provisioner/ova/{}'.format(name), headers=headers, params=params, data=upload_monitor)
+
+    click.echo("done.")
+    click.echo()
+
+    if response.status_code == 200:
+        retstatus = True
+    else:
+        retstatus = False
+
+    return retstatus, response.json()['message']
+
+def ova_remove(config, name):
+    """
+    Remove OVA image {name}
+
+    API endpoint: DELETE /api/v1/provisioner/ova/{name}
+    API_arguments:
+    API schema: {message}
+    """
+    response = call_api(config, 'delete', '/provisioner/ova/{name}'.format(name=name))
+
+    if response.status_code == 200:
+        retvalue = True
+    else:
+        retvalue = False
+
+    return retvalue, response.json()['message']
+
 def profile_info(config, profile):
     """
     Get information about profile
|
|||||||
|
|
||||||
return '\n'.join([script_list_output_header] + script_list_output)
|
return '\n'.join([script_list_output_header] + script_list_output)
|
||||||
|
|
||||||
|
def format_list_ova(ova_data):
|
||||||
|
if isinstance(ova_data, dict):
|
||||||
|
ova_data = [ ova_data ]
|
||||||
|
|
||||||
|
ova_list_output = []
|
||||||
|
|
||||||
|
# Determine optimal column widths
|
||||||
|
ova_name_length = 5
|
||||||
|
ova_id_length = 3
|
||||||
|
ova_disk_id_length = 8
|
||||||
|
ova_disk_size_length = 10
|
||||||
|
ova_disk_pool_length = 5
|
||||||
|
ova_disk_volume_format_length = 7
|
||||||
|
ova_disk_volume_name_length = 13
|
||||||
|
|
||||||
|
for ova in ova_data:
|
||||||
|
# ova_name column
|
||||||
|
_ova_name_length = len(str(ova['name'])) + 1
|
||||||
|
if _ova_name_length > ova_name_length:
|
||||||
|
ova_name_length = _ova_name_length
|
||||||
|
# ova_id column
|
||||||
|
_ova_id_length = len(str(ova['id'])) + 1
|
||||||
|
if _ova_id_length > ova_id_length:
|
||||||
|
ova_id_length = _ova_id_length
|
||||||
|
|
||||||
|
for disk in ova['volumes']:
|
||||||
|
# ova_disk_id column
|
||||||
|
_ova_disk_id_length = len(str(disk['disk_id'])) + 1
|
||||||
|
if _ova_disk_id_length > ova_disk_id_length:
|
||||||
|
ova_disk_id_length = _ova_disk_id_length
|
||||||
|
# ova_disk_size column
|
||||||
|
_ova_disk_size_length = len(str(disk['disk_size_gb'])) + 1
|
||||||
|
if _ova_disk_size_length > ova_disk_size_length:
|
||||||
|
ova_disk_size_length = _ova_disk_size_length
|
||||||
|
# ova_disk_pool column
|
||||||
|
_ova_disk_pool_length = len(str(disk['pool'])) + 1
|
||||||
|
if _ova_disk_pool_length > ova_disk_pool_length:
|
||||||
|
ova_disk_pool_length = _ova_disk_pool_length
|
||||||
|
# ova_disk_volume_format column
|
||||||
|
_ova_disk_volume_format_length = len(str(disk['volume_format'])) + 1
|
||||||
|
if _ova_disk_volume_format_length > ova_disk_volume_format_length:
|
||||||
|
ova_disk_volume_format_length = _ova_disk_volume_format_length
|
||||||
|
# ova_disk_volume_name column
|
||||||
|
_ova_disk_volume_name_length = len(str(disk['volume_name'])) + 1
|
||||||
|
if _ova_disk_volume_name_length > ova_disk_volume_name_length:
|
||||||
|
ova_disk_volume_name_length = _ova_disk_volume_name_length
|
||||||
|
|
||||||
|
# Format the string (header)
|
||||||
|
ova_list_output_header = '{bold}{ova_name: <{ova_name_length}} {ova_id: <{ova_id_length}} \
|
||||||
|
{ova_disk_id: <{ova_disk_id_length}} \
|
||||||
|
{ova_disk_size: <{ova_disk_size_length}} \
|
||||||
|
{ova_disk_pool: <{ova_disk_pool_length}} \
|
||||||
|
{ova_disk_volume_format: <{ova_disk_volume_format_length}} \
|
||||||
|
{ova_disk_volume_name: <{ova_disk_volume_name_length}}{end_bold}'.format(
|
||||||
|
ova_name_length=ova_name_length,
|
||||||
|
ova_id_length=ova_id_length,
|
||||||
|
ova_disk_id_length=ova_disk_id_length,
|
||||||
|
ova_disk_pool_length=ova_disk_pool_length,
|
||||||
|
ova_disk_size_length=ova_disk_size_length,
|
||||||
|
ova_disk_volume_format_length=ova_disk_volume_format_length,
|
||||||
|
ova_disk_volume_name_length=ova_disk_volume_name_length,
|
||||||
|
bold=ansiprint.bold(),
|
||||||
|
end_bold=ansiprint.end(),
|
||||||
|
ova_name='Name',
|
||||||
|
ova_id='ID',
|
||||||
|
ova_disk_id='Disk ID',
|
||||||
|
ova_disk_size='Size [GB]',
|
||||||
|
ova_disk_pool='Pool',
|
||||||
|
ova_disk_volume_format='Format',
|
||||||
|
ova_disk_volume_name='Source Volume',
|
||||||
|
)
|
||||||
|
|
||||||
|
# Format the string (elements)
|
||||||
|
for ova in sorted(ova_data, key=lambda i: i.get('name', None)):
|
||||||
|
ova_list_output.append(
|
||||||
|
'{bold}{ova_name: <{ova_name_length}} {ova_id: <{ova_id_length}}{end_bold}'.format(
|
||||||
|
ova_name_length=ova_name_length,
|
||||||
|
ova_id_length=ova_id_length,
|
||||||
|
bold='',
|
||||||
|
end_bold='',
|
||||||
|
ova_name=str(ova['name']),
|
||||||
|
ova_id=str(ova['id'])
|
||||||
|
)
|
||||||
|
)
|
||||||
|
for disk in sorted(ova['volumes'], key=lambda i: i.get('disk_id', None)):
|
||||||
|
ova_list_output.append(
|
||||||
|
'{bold}{ova_name: <{ova_name_length}} {ova_id: <{ova_id_length}} \
|
||||||
|
{ova_disk_id: <{ova_disk_id_length}} \
|
||||||
|
{ova_disk_size: <{ova_disk_size_length}} \
|
||||||
|
{ova_disk_pool: <{ova_disk_pool_length}} \
|
||||||
|
{ova_disk_volume_format: <{ova_disk_volume_format_length}} \
|
||||||
|
{ova_disk_volume_name: <{ova_disk_volume_name_length}}{end_bold}'.format(
|
||||||
|
ova_name_length=ova_name_length,
|
||||||
|
ova_id_length=ova_id_length,
|
||||||
|
ova_disk_id_length=ova_disk_id_length,
|
||||||
|
ova_disk_size_length=ova_disk_size_length,
|
||||||
|
ova_disk_pool_length=ova_disk_pool_length,
|
||||||
|
ova_disk_volume_format_length=ova_disk_volume_format_length,
|
||||||
|
ova_disk_volume_name_length=ova_disk_volume_name_length,
|
||||||
|
bold='',
|
||||||
|
end_bold='',
|
||||||
|
ova_name='',
|
||||||
|
ova_id='',
|
||||||
|
ova_disk_id=str(disk['disk_id']),
|
||||||
|
ova_disk_size=str(disk['disk_size_gb']),
|
||||||
|
ova_disk_pool=str(disk['pool']),
|
||||||
|
ova_disk_volume_format=str(disk['volume_format']),
|
||||||
|
ova_disk_volume_name=str(disk['volume_name']),
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
return '\n'.join([ova_list_output_header] + ova_list_output)
|
||||||
|
|
||||||
def format_list_profile(profile_data):
|
def format_list_profile(profile_data):
|
||||||
if isinstance(profile_data, dict):
|
if isinstance(profile_data, dict):
|
||||||
profile_data = [ profile_data ]
|
profile_data = [ profile_data ]
|
||||||
|
|
||||||
|
# Format the profile "source" from the type and, if applicable, OVA profile name
|
||||||
|
for profile in profile_data:
|
||||||
|
profile_type = profile['type']
|
||||||
|
if 'ova' in profile_type:
|
||||||
|
# Set the source to the name of the OVA:
|
||||||
|
profile['source'] = 'OVA {}'.format(profile['ova'])
|
||||||
|
else:
|
||||||
|
# Set the source to be the type
|
||||||
|
profile['source'] = profile_type
|
||||||
|
|
||||||
profile_list_output = []
|
profile_list_output = []
|
||||||
|
|
||||||
# Determine optimal column widths
|
# Determine optimal column widths
|
||||||
profile_name_length = 5
|
profile_name_length = 5
|
||||||
profile_id_length = 3
|
profile_id_length = 3
|
||||||
|
profile_source_length = 7
|
||||||
|
|
||||||
profile_system_template_length = 7
|
profile_system_template_length = 7
|
||||||
profile_network_template_length = 8
|
profile_network_template_length = 8
|
||||||
@ -1094,6 +1350,10 @@ def format_list_profile(profile_data):
|
|||||||
_profile_id_length = len(str(profile['id'])) + 1
|
_profile_id_length = len(str(profile['id'])) + 1
|
||||||
if _profile_id_length > profile_id_length:
|
if _profile_id_length > profile_id_length:
|
||||||
profile_id_length = _profile_id_length
|
profile_id_length = _profile_id_length
|
||||||
|
# profile_source column
|
||||||
|
_profile_source_length = len(str(profile['source'])) + 1
|
||||||
|
if _profile_source_length > profile_source_length:
|
||||||
|
profile_source_length = _profile_source_length
|
||||||
# profile_system_template column
|
# profile_system_template column
|
||||||
_profile_system_template_length = len(str(profile['system_template'])) + 1
|
_profile_system_template_length = len(str(profile['system_template'])) + 1
|
||||||
if _profile_system_template_length > profile_system_template_length:
|
if _profile_system_template_length > profile_system_template_length:
|
||||||
@ -1116,7 +1376,7 @@ def format_list_profile(profile_data):
|
|||||||
profile_script_length = _profile_script_length
|
profile_script_length = _profile_script_length
|
||||||
|
|
||||||
# Format the string (header)
|
# Format the string (header)
|
||||||
profile_list_output_header = '{bold}{profile_name: <{profile_name_length}} {profile_id: <{profile_id_length}} \
|
profile_list_output_header = '{bold}{profile_name: <{profile_name_length}} {profile_id: <{profile_id_length}} {profile_source: <{profile_source_length}} \
|
||||||
Templates: {profile_system_template: <{profile_system_template_length}} \
|
Templates: {profile_system_template: <{profile_system_template_length}} \
|
||||||
{profile_network_template: <{profile_network_template_length}} \
|
{profile_network_template: <{profile_network_template_length}} \
|
||||||
{profile_storage_template: <{profile_storage_template_length}} \
|
{profile_storage_template: <{profile_storage_template_length}} \
|
||||||
@ -1125,6 +1385,7 @@ Data: {profile_userdata: <{profile_userdata_length}} \
|
|||||||
{profile_arguments}{end_bold}'.format(
|
{profile_arguments}{end_bold}'.format(
|
||||||
profile_name_length=profile_name_length,
|
profile_name_length=profile_name_length,
|
||||||
profile_id_length=profile_id_length,
|
profile_id_length=profile_id_length,
|
||||||
|
profile_source_length=profile_source_length,
|
||||||
profile_system_template_length=profile_system_template_length,
|
profile_system_template_length=profile_system_template_length,
|
||||||
profile_network_template_length=profile_network_template_length,
|
profile_network_template_length=profile_network_template_length,
|
||||||
profile_storage_template_length=profile_storage_template_length,
|
profile_storage_template_length=profile_storage_template_length,
|
||||||
@ -1134,6 +1395,7 @@ Data: {profile_userdata: <{profile_userdata_length}} \
|
|||||||
end_bold=ansiprint.end(),
|
end_bold=ansiprint.end(),
|
||||||
profile_name='Name',
|
profile_name='Name',
|
||||||
profile_id='ID',
|
profile_id='ID',
|
||||||
|
profile_source='Source',
|
||||||
profile_system_template='System',
|
profile_system_template='System',
|
||||||
profile_network_template='Network',
|
profile_network_template='Network',
|
||||||
profile_storage_template='Storage',
|
profile_storage_template='Storage',
|
||||||
@ -1145,7 +1407,7 @@ Data: {profile_userdata: <{profile_userdata_length}} \
|
|||||||
# Format the string (elements)
|
# Format the string (elements)
|
||||||
for profile in sorted(profile_data, key=lambda i: i.get('name', None)):
|
for profile in sorted(profile_data, key=lambda i: i.get('name', None)):
|
||||||
profile_list_output.append(
|
profile_list_output.append(
|
||||||
'{bold}{profile_name: <{profile_name_length}} {profile_id: <{profile_id_length}} \
|
'{bold}{profile_name: <{profile_name_length}} {profile_id: <{profile_id_length}} {profile_source: <{profile_source_length}} \
|
||||||
{profile_system_template: <{profile_system_template_length}} \
|
{profile_system_template: <{profile_system_template_length}} \
|
||||||
{profile_network_template: <{profile_network_template_length}} \
|
{profile_network_template: <{profile_network_template_length}} \
|
||||||
{profile_storage_template: <{profile_storage_template_length}} \
|
{profile_storage_template: <{profile_storage_template_length}} \
|
||||||
@ -1154,6 +1416,7 @@ Data: {profile_userdata: <{profile_userdata_length}} \
|
|||||||
{profile_arguments}{end_bold}'.format(
|
{profile_arguments}{end_bold}'.format(
|
||||||
profile_name_length=profile_name_length,
|
profile_name_length=profile_name_length,
|
||||||
profile_id_length=profile_id_length,
|
profile_id_length=profile_id_length,
|
||||||
|
profile_source_length=profile_source_length,
|
||||||
profile_system_template_length=profile_system_template_length,
|
profile_system_template_length=profile_system_template_length,
|
||||||
profile_network_template_length=profile_network_template_length,
|
profile_network_template_length=profile_network_template_length,
|
||||||
profile_storage_template_length=profile_storage_template_length,
|
profile_storage_template_length=profile_storage_template_length,
|
||||||
@ -1163,6 +1426,7 @@ Data: {profile_userdata: <{profile_userdata_length}} \
|
|||||||
end_bold='',
|
end_bold='',
|
||||||
profile_name=profile['name'],
|
profile_name=profile['name'],
|
||||||
profile_id=profile['id'],
|
profile_id=profile['id'],
|
||||||
|
profile_source=profile['source'],
|
||||||
profile_system_template=profile['system_template'],
|
profile_system_template=profile['system_template'],
|
||||||
profile_network_template=profile['network_template'],
|
profile_network_template=profile['network_template'],
|
||||||
profile_storage_template=profile['storage_template'],
|
profile_storage_template=profile['storage_template'],
|
||||||
|
@@ -173,16 +173,17 @@ def vm_remove(config, vm, delete_disks=False):

     return retstatus, response.json()['message']

-def vm_state(config, vm, target_state):
+def vm_state(config, vm, target_state, wait=False):
     """
     Modify the current state of VM

     API endpoint: POST /vm/{vm}/state
-    API arguments: state={state}
+    API arguments: state={state}, wait={wait}
     API schema: {"message":"{data}"}
     """
     params={
         'state': target_state,
+        'wait': str(wait).lower()
     }
     response = call_api(config, 'post', '/vm/{vm}/state'.format(vm=vm), params=params)

@@ -193,18 +194,19 @@ def vm_state(config, vm, target_state):

     return retstatus, response.json()['message']

-def vm_node(config, vm, target_node, action, force=False):
+def vm_node(config, vm, target_node, action, force=False, wait=False):
     """
     Modify the current node of VM via {action}

     API endpoint: POST /vm/{vm}/node
-    API arguments: node={target_node}, action={action}, force={force}
+    API arguments: node={target_node}, action={action}, force={force}, wait={wait}
     API schema: {"message":"{data}"}
     """
     params={
         'node': target_node,
         'action': action,
-        'force': force
+        'force': str(force).lower(),
+        'wait': str(wait).lower()
     }
     response = call_api(config, 'post', '/vm/{vm}/node'.format(vm=vm), params=params)
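A hedged example of the new wait behaviour from the calling side (the config dict and VM name are placeholders):

```python
# Illustrative only: request a shutdown and ask the API to wait for completion.
retstatus, message = vm_state(config, 'testvm', 'shutdown', wait=True)
# As shown above, the wait flag is serialized to the string 'true'/'false' for the API.
print(retstatus, message)
```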
@@ -23,7 +23,7 @@
 import kazoo.client
 import uuid

-import client_lib.ansiprint as ansiprint
+import daemon_lib.ansiprint as ansiprint

 # Exists function
 def exists(zk_conn, key):
@ -48,7 +48,7 @@ myhostname = socket.gethostname().split('.')[0]
|
|||||||
zk_host = ''
|
zk_host = ''
|
||||||
|
|
||||||
default_store_data = {
|
default_store_data = {
|
||||||
'cfgfile': '/etc/pvc/pvc-api.yaml' # pvc/api/listen_address, pvc/api/listen_port
|
'cfgfile': '/etc/pvc/pvcapid.yaml' # pvc/api/listen_address, pvc/api/listen_port
|
||||||
}
|
}
|
||||||
|
|
||||||
#
|
#
|
||||||
@ -87,6 +87,9 @@ def get_config(store_data, cluster=None):
|
|||||||
host, port, scheme, api_key = read_from_yaml(cfgfile)
|
host, port, scheme, api_key = read_from_yaml(cfgfile)
|
||||||
else:
|
else:
|
||||||
return { 'badcfg': True }
|
return { 'badcfg': True }
|
||||||
|
# Handle an all-wildcard address
|
||||||
|
if host == '0.0.0.0':
|
||||||
|
host = '127.0.0.1'
|
||||||
else:
|
else:
|
||||||
# This is a static configuration, get the raw details
|
# This is a static configuration, get the raw details
|
||||||
host = cluster_details['host']
|
host = cluster_details['host']
|
||||||
@ -335,7 +338,7 @@ def cli_node():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -345,7 +348,11 @@ def cli_node():
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'node'
|
'node'
|
||||||
)
|
)
|
||||||
def node_secondary(node):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for transition to complete before returning.'
|
||||||
|
)
|
||||||
|
def node_secondary(node, wait):
|
||||||
"""
|
"""
|
||||||
Take NODE out of primary router mode.
|
Take NODE out of primary router mode.
|
||||||
"""
|
"""
|
||||||
@ -358,7 +365,24 @@ def node_secondary(node):
|
|||||||
click.echo()
|
click.echo()
|
||||||
|
|
||||||
retcode, retmsg = pvc_node.node_coordinator_state(config, node, 'secondary')
|
retcode, retmsg = pvc_node.node_coordinator_state(config, node, 'secondary')
|
||||||
cleanup(retcode, retmsg)
|
if not retcode:
|
||||||
|
cleanup(retcode, retmsg)
|
||||||
|
else:
|
||||||
|
if wait:
|
||||||
|
click.echo(retmsg)
|
||||||
|
click.echo("Waiting for state transition... ", nl=False)
|
||||||
|
# Every half-second, check if the API is reachable and the node is in secondary state
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
_retcode, _retmsg = pvc_node.node_info(config, node)
|
||||||
|
if _retmsg['coordinator_state'] == 'secondary':
|
||||||
|
retmsg = "done."
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
time.sleep(0.5)
|
||||||
|
except:
|
||||||
|
time.sleep(0.5)
|
||||||
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc node primary
|
# pvc node primary
|
||||||
@ -367,7 +391,11 @@ def node_secondary(node):
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'node'
|
'node'
|
||||||
)
|
)
|
||||||
def node_primary(node):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for transition to complete before returning.'
|
||||||
|
)
|
||||||
|
def node_primary(node, wait):
|
||||||
"""
|
"""
|
||||||
Put NODE into primary router mode.
|
Put NODE into primary router mode.
|
||||||
"""
|
"""
|
||||||
@ -380,7 +408,24 @@ def node_primary(node):
|
|||||||
click.echo()
|
click.echo()
|
||||||
|
|
||||||
retcode, retmsg = pvc_node.node_coordinator_state(config, node, 'primary')
|
retcode, retmsg = pvc_node.node_coordinator_state(config, node, 'primary')
|
||||||
cleanup(retcode, retmsg)
|
if not retcode:
|
||||||
|
cleanup(retcode, retmsg)
|
||||||
|
else:
|
||||||
|
if wait:
|
||||||
|
click.echo(retmsg)
|
||||||
|
click.echo("Waiting for state transition... ", nl=False)
|
||||||
|
# Every half-second, check if the API is reachable and the node is in secondary state
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
_retcode, _retmsg = pvc_node.node_info(config, node)
|
||||||
|
if _retmsg['coordinator_state'] == 'primary':
|
||||||
|
retmsg = "done."
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
time.sleep(0.5)
|
||||||
|
except:
|
||||||
|
time.sleep(0.5)
|
||||||
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc node flush
|
# pvc node flush
|
||||||
@ -484,7 +529,7 @@ def cli_vm():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -527,7 +572,7 @@ def vm_define(vmconfig, target_node, node_limit, node_selector, node_autostart):
|
|||||||
except:
|
except:
|
||||||
cleanup(False, 'Error: XML is malformed or invalid')
|
cleanup(False, 'Error: XML is malformed or invalid')
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.define_vm(config, new_cfg, target_node, node_limit, node_selector, node_autostart)
|
retcode, retmsg = pvc_vm.vm_define(config, new_cfg, target_node, node_limit, node_selector, node_autostart)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -726,12 +771,16 @@ def vm_start(domain):
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'domain'
|
'domain'
|
||||||
)
|
)
|
||||||
def vm_restart(domain):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for restart to complete before returning.'
|
||||||
|
)
|
||||||
|
def vm_restart(domain, wait):
|
||||||
"""
|
"""
|
||||||
Restart running virtual machine DOMAIN. DOMAIN may be a UUID or name.
|
Restart running virtual machine DOMAIN. DOMAIN may be a UUID or name.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.vm_state(config, domain, 'restart')
|
retcode, retmsg = pvc_vm.vm_state(config, domain, 'restart', wait=wait)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -741,12 +790,16 @@ def vm_restart(domain):
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'domain'
|
'domain'
|
||||||
)
|
)
|
||||||
def vm_shutdown(domain):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for shutdown to complete before returning.'
|
||||||
|
)
|
||||||
|
def vm_shutdown(domain, wait):
|
||||||
"""
|
"""
|
||||||
Gracefully shut down virtual machine DOMAIN. DOMAIN may be a UUID or name.
|
Gracefully shut down virtual machine DOMAIN. DOMAIN may be a UUID or name.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.vm_state(config, domain, 'shutdown')
|
retcode, retmsg = pvc_vm.vm_state(config, domain, 'shutdown', wait=wait)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -792,12 +845,16 @@ def vm_disable(domain):
|
|||||||
'-t', '--target', 'target_node', default=None,
|
'-t', '--target', 'target_node', default=None,
|
||||||
help='Target node to migrate to; autodetect if unspecified.'
|
help='Target node to migrate to; autodetect if unspecified.'
|
||||||
)
|
)
|
||||||
def vm_move(domain, target_node):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for migration to complete before returning.'
|
||||||
|
)
|
||||||
|
def vm_move(domain, target_node, wait):
|
||||||
"""
|
"""
|
||||||
Permanently move virtual machine DOMAIN, via live migration if running and possible, to another node. DOMAIN may be a UUID or name.
|
Permanently move virtual machine DOMAIN, via live migration if running and possible, to another node. DOMAIN may be a UUID or name.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.vm_node(config, domain, target_node, 'move', force=False)
|
retcode, retmsg = pvc_vm.vm_node(config, domain, target_node, 'move', force=False, wait=wait)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -815,12 +872,16 @@ def vm_move(domain, target_node):
|
|||||||
'-f', '--force', 'force_migrate', is_flag=True, default=False,
|
'-f', '--force', 'force_migrate', is_flag=True, default=False,
|
||||||
help='Force migrate an already migrated VM; does not replace an existing previous node value.'
|
help='Force migrate an already migrated VM; does not replace an existing previous node value.'
|
||||||
)
|
)
|
||||||
def vm_migrate(domain, target_node, force_migrate):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for migration to complete before returning.'
|
||||||
|
)
|
||||||
|
def vm_migrate(domain, target_node, force_migrate, wait):
|
||||||
"""
|
"""
|
||||||
Temporarily migrate running virtual machine DOMAIN, via live migration if possible, to another node. DOMAIN may be a UUID or name. If DOMAIN is not running, it will be started on the target node.
|
Temporarily migrate running virtual machine DOMAIN, via live migration if possible, to another node. DOMAIN may be a UUID or name. If DOMAIN is not running, it will be started on the target node.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.vm_node(config, domain, target_node, 'migrate', force=force_migrate)
|
retcode, retmsg = pvc_vm.vm_node(config, domain, target_node, 'migrate', force=force_migrate, wait=wait)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -830,12 +891,16 @@ def vm_migrate(domain, target_node, force_migrate):
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'domain'
|
'domain'
|
||||||
)
|
)
|
||||||
def vm_unmigrate(domain):
|
@click.option(
|
||||||
|
'-w', '--wait', 'wait', is_flag=True, default=False,
|
||||||
|
help='Wait for migration to complete before returning.'
|
||||||
|
)
|
||||||
|
def vm_unmigrate(domain, wait):
|
||||||
"""
|
"""
|
||||||
Restore previously migrated virtual machine DOMAIN, via live migration if possible, to its original node. DOMAIN may be a UUID or name. If DOMAIN is not running, it will be started on the target node.
|
Restore previously migrated virtual machine DOMAIN, via live migration if possible, to its original node. DOMAIN may be a UUID or name. If DOMAIN is not running, it will be started on the target node.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
retcode, retmsg = pvc_vm.vm_node(config, domain, None, 'unmigrate', force=False)
|
retcode, retmsg = pvc_vm.vm_node(config, domain, None, 'unmigrate', force=False, wait=wait)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -965,7 +1030,7 @@ def cli_network():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1201,7 +1266,7 @@ def net_dhcp():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1290,7 +1355,7 @@ def net_acl():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1419,7 +1484,7 @@ def cli_storage():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1460,7 +1525,7 @@ def ceph_osd():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1618,7 +1683,7 @@ def ceph_pool():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1702,7 +1767,7 @@ def ceph_volume():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1726,6 +1791,40 @@ def ceph_volume_add(pool, name, size):
|
|||||||
retcode, retmsg = pvc_ceph.ceph_volume_add(config, pool, name, size)
|
retcode, retmsg = pvc_ceph.ceph_volume_add(config, pool, name, size)
|
||||||
cleanup(retcode, retmsg)
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc storage volume upload
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='upload', short_help='Upload a local image file to RBD volume.')
|
||||||
|
@click.argument(
|
||||||
|
'pool'
|
||||||
|
)
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
@click.argument(
|
||||||
|
'image_file'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-f', '--format', 'image_format',
|
||||||
|
default='raw', show_default=True,
|
||||||
|
help='The format of the source image.'
|
||||||
|
)
|
||||||
|
def ceph_volume_upload(pool, name, image_format, image_file):
|
||||||
|
"""
|
||||||
|
Upload a disk image file IMAGE_FILE to the RBD volume NAME in pool POOL.
|
||||||
|
|
||||||
|
The volume NAME must exist in the pool before uploading to it, and must be large enough to fit the disk image in raw format.
|
||||||
|
|
||||||
|
If the image format is "raw", the image is uploaded directly to the target volume without modification. Otherwise, it will be converted into raw format by "qemu-img convert" on the remote side before writing using a temporary volume. The image format must be a valid format recognized by "qemu-img", such as "vmdk" or "qcow2".
|
||||||
|
"""
|
||||||
|
|
||||||
|
if not os.path.exists(image_file):
|
||||||
|
click.echo("ERROR: File '{}' does not exist!".format(image_file))
|
||||||
|
exit(1)
|
||||||
|
|
||||||
|
retcode, retmsg = pvc_ceph.ceph_volume_upload(config, pool, name, image_format, image_file)
|
||||||
|
cleanup(retcode, retmsg)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc storage volume remove
|
# pvc storage volume remove
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1848,7 +1947,7 @@ def ceph_volume_snapshot():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1966,7 +2065,7 @@ def cli_provisioner():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -1979,7 +2078,7 @@ def provisioner_template():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
|
|
||||||
@ -2009,7 +2108,7 @@ def provisioner_template_system():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2098,6 +2197,68 @@ def provisioner_template_system_add(name, vcpus, vram, serial, vnc, vnc_bind, no
|
|||||||
retcode, retdata = pvc_provisioner.template_add(config, params, template_type='system')
|
retcode, retdata = pvc_provisioner.template_add(config, params, template_type='system')
|
||||||
cleanup(retcode, retdata)
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner template system modify
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='modify', short_help='Modify an existing system template.')
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-u', '--vcpus', 'vcpus',
|
||||||
|
type=int,
|
||||||
|
help='The number of vCPUs.'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-m', '--vram', 'vram',
|
||||||
|
type=int,
|
||||||
|
help='The amount of vRAM (in MB).'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-s', '--serial', 'serial',
|
||||||
|
is_flag=True, default=None,
|
||||||
|
help='Enable the virtual serial console.'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-n', '--vnc', 'vnc',
|
||||||
|
is_flag=True, default=None,
|
||||||
|
help='Enable the VNC console.'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-b', '--vnc-bind', 'vnc_bind',
|
||||||
|
help='Bind VNC to this IP address instead of localhost.'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'--node-limit', 'node_limit',
|
||||||
|
help='Limit VM operation to this CSV list of node(s).'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'--node-selector', 'node_selector',
|
||||||
|
type=click.Choice(['mem', 'vcpus', 'vms', 'load'], case_sensitive=False),
|
||||||
|
help='Use this selector to determine the optimal node during migrations.'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'--node-autostart', 'node_autostart',
|
||||||
|
is_flag=True, default=None,
|
||||||
|
help='Autostart VM with their parent Node on first/next boot.'
|
||||||
|
)
|
||||||
|
def provisioner_template_system_modify(name, vcpus, vram, serial, vnc, vnc_bind, node_limit, node_selector, node_autostart):
|
||||||
|
"""
|
||||||
|
Add a new system template NAME to the PVC cluster provisioner.
|
||||||
|
"""
|
||||||
|
params = dict()
|
||||||
|
params['vcpus'] = vcpus
|
||||||
|
params['vram'] = vram
|
||||||
|
params['serial'] = serial
|
||||||
|
params['vnc'] = vnc
|
||||||
|
params['vnc_bind'] = vnc_bind
|
||||||
|
params['node_limit'] = node_limit
|
||||||
|
params['node_selector'] = node_selector
|
||||||
|
params['node_autostart'] = node_autostart
|
||||||
|
|
||||||
|
retcode, retdata = pvc_provisioner.template_modify(config, params, name, template_type='system')
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc provisioner template system remove
|
# pvc provisioner template system remove
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2134,7 +2295,7 @@ def provisioner_template_network():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2237,7 +2398,7 @@ def provisioner_template_network_vni():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2300,7 +2461,7 @@ def provisioner_template_storage():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2371,7 +2532,7 @@ def provisioner_template_storage_disk():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2489,7 +2650,7 @@ def provisioner_userdata():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2517,6 +2678,20 @@ def provisioner_userdata_list(limit, full):
|
|||||||
retdata = pvc_provisioner.format_list_userdata(retdata, lines)
|
retdata = pvc_provisioner.format_list_userdata(retdata, lines)
|
||||||
cleanup(retcode, retdata)
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner userdata show
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='show', short_help='Show contents of userdata documents.')
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
def provisioner_userdata_show(name):
|
||||||
|
"""
|
||||||
|
Show the full contents of userdata document NAME.
|
||||||
|
"""
|
||||||
|
retcode, retdata = pvc_provisioner.userdata_show(config, name)
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc provisioner userdata add
|
# pvc provisioner userdata add
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2647,7 +2822,7 @@ def provisioner_script():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2675,6 +2850,20 @@ def provisioner_script_list(limit, full):
|
|||||||
retdata = pvc_provisioner.format_list_script(retdata, lines)
|
retdata = pvc_provisioner.format_list_script(retdata, lines)
|
||||||
cleanup(retcode, retdata)
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner script show
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='show', short_help='Show contents of script documents.')
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
def provisioner_script_show(name):
|
||||||
|
"""
|
||||||
|
Show the full contents of script document NAME.
|
||||||
|
"""
|
||||||
|
retcode, retdata = pvc_provisioner.script_show(config, name)
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc provisioner script add
|
# pvc provisioner script add
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2796,6 +2985,99 @@ def provisioner_script_remove(name, confirm_flag):
|
|||||||
cleanup(retcode, retdata)
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner ova
|
||||||
|
###############################################################################
|
||||||
|
@click.group(name='ova', short_help='Manage PVC provisioner OVA images.', context_settings=CONTEXT_SETTINGS)
|
||||||
|
def provisioner_ova():
|
||||||
|
"""
|
||||||
|
Manage ovas in the PVC provisioner.
|
||||||
|
"""
|
||||||
|
# Abort commands under this group if config is bad
|
||||||
|
if config.get('badcfg', None):
|
||||||
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
|
exit(1)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner ova list
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='list', short_help='List all OVA images.')
|
||||||
|
@click.argument(
|
||||||
|
'limit', default=None, required=False
|
||||||
|
)
|
||||||
|
def provisioner_ova_list(limit):
|
||||||
|
"""
|
||||||
|
List all OVA images in the PVC cluster provisioner.
|
||||||
|
"""
|
||||||
|
retcode, retdata = pvc_provisioner.ova_list(config, limit)
|
||||||
|
if retcode:
|
||||||
|
retdata = pvc_provisioner.format_list_ova(retdata)
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner ova upload
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='upload', short_help='Upload OVA file.')
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
@click.argument(
|
||||||
|
'filename'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-p', '--pool', 'pool',
|
||||||
|
required=True,
|
||||||
|
help='The storage pool for the OVA images.'
|
||||||
|
)
|
||||||
|
def provisioner_ova_upload(name, filename, pool):
|
||||||
|
"""
|
||||||
|
Upload a new OVA image NAME from FILENAME.
|
||||||
|
|
||||||
|
Only single-file (.ova) OVA/OVF images are supported. For multi-file (.ovf + .vmdk) OVF images, concatenate them with "tar" then upload the resulting file.
|
||||||
|
|
||||||
|
Once uploaded, a provisioner system template and OVA-type profile, each named NAME, will be created to store the configuration of the OVA.
|
||||||
|
|
||||||
|
Note that the provisioner profile for the OVA will not contain any network template definitions, and will ignore network definitions from the OVA itself. The administrator must modify the profile's network template as appropriate to set the desired network configuration.
|
||||||
|
|
||||||
|
Storage templates, provisioning scripts, and arguments for OVA-type profiles will be ignored and should not be set.
|
||||||
|
"""
|
||||||
|
if not os.path.exists(filename):
|
||||||
|
click.echo("ERROR: File '{}' does not exist!".format(filename))
|
||||||
|
exit(1)
|
||||||
|
|
||||||
|
params = dict()
|
||||||
|
params['pool'] = pool
|
||||||
|
params['ova_size'] = os.path.getsize(filename)
|
||||||
|
|
||||||
|
retcode, retdata = pvc_provisioner.ova_upload(config, name, filename, params)
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# pvc provisioner ova remove
|
||||||
|
###############################################################################
|
||||||
|
@click.command(name='remove', short_help='Remove OVA image.')
|
||||||
|
@click.argument(
|
||||||
|
'name'
|
||||||
|
)
|
||||||
|
@click.option(
|
||||||
|
'-y', '--yes', 'confirm_flag',
|
||||||
|
is_flag=True, default=False,
|
||||||
|
help='Confirm the removal'
|
||||||
|
)
|
||||||
|
def provisioner_ova_remove(name, confirm_flag):
|
||||||
|
"""
|
||||||
|
Remove OVA image NAME from the PVC cluster provisioner.
|
||||||
|
"""
|
||||||
|
if not confirm_flag:
|
||||||
|
try:
|
||||||
|
click.confirm('Remove OVA image {}'.format(name), prompt_suffix='? ', abort=True)
|
||||||
|
except:
|
||||||
|
exit(0)
|
||||||
|
|
||||||
|
retcode, retdata = pvc_provisioner.ova_remove(config, name)
|
||||||
|
cleanup(retcode, retdata)
|
||||||
|
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# pvc provisioner profile
|
# pvc provisioner profile
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2806,7 +3088,7 @@ def provisioner_profile():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -2832,47 +3114,54 @@ def provisioner_profile_list(limit):
|
|||||||
@click.argument(
|
@click.argument(
|
||||||
'name'
|
'name'
|
||||||
)
|
)
|
||||||
|
@click.option(
|
||||||
|
'-p', '--profile-type', 'profile_type',
|
||||||
|
default='provisioner', show_default=True,
|
||||||
|
type=click.Choice(['provisioner', 'ova'], case_sensitive=False),
|
||||||
|
help='The type of profile.'
|
||||||
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-s', '--system-template', 'system_template',
|
'-s', '--system-template', 'system_template',
|
||||||
required=True,
|
|
||||||
help='The system template for the profile.'
|
help='The system template for the profile.'
|
||||||
)
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-n', '--network-template', 'network_template',
|
'-n', '--network-template', 'network_template',
|
||||||
required=True,
|
|
||||||
help='The network template for the profile.'
|
help='The network template for the profile.'
|
||||||
)
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-t', '--storage-template', 'storage_template',
|
'-t', '--storage-template', 'storage_template',
|
||||||
required=True,
|
|
||||||
help='The storage template for the profile.'
|
help='The storage template for the profile.'
|
||||||
)
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-u', '--userdata', 'userdata',
|
'-u', '--userdata', 'userdata',
|
||||||
required=True,
|
|
||||||
help='The userdata document for the profile.'
|
help='The userdata document for the profile.'
|
||||||
)
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-x', '--script', 'script',
|
'-x', '--script', 'script',
|
||||||
required=True,
|
|
||||||
help='The script for the profile.'
|
help='The script for the profile.'
|
||||||
)
|
)
|
||||||
|
@click.option(
|
||||||
|
'-o', '--ova', 'ova',
|
||||||
|
help='The OVA image for the profile.'
|
||||||
|
)
|
||||||
@click.option(
|
@click.option(
|
||||||
'-a', '--script-arg', 'script_args',
|
'-a', '--script-arg', 'script_args',
|
||||||
default=[], multiple=True,
|
default=[], multiple=True,
|
||||||
help='Additional argument to the script install() function in key=value format.'
|
help='Additional argument to the script install() function in key=value format.'
|
||||||
)
|
)
|
||||||
def provisioner_profile_add(name, system_template, network_template, storage_template, userdata, script, script_args):
|
def provisioner_profile_add(name, profile_type, system_template, network_template, storage_template, userdata, script, ova, script_args):
|
||||||
"""
|
"""
|
||||||
Add a new provisioner profile NAME.
|
Add a new provisioner profile NAME.
|
||||||
"""
|
"""
|
||||||
params = dict()
|
params = dict()
|
||||||
params['name'] = name
|
params['name'] = name
|
||||||
|
params['profile_type'] = profile_type
|
||||||
params['system_template'] = system_template
|
params['system_template'] = system_template
|
||||||
params['network_template'] = network_template
|
params['network_template'] = network_template
|
||||||
params['storage_template'] = storage_template
|
params['storage_template'] = storage_template
|
||||||
params['userdata'] = userdata
|
params['userdata'] = userdata
|
||||||
params['script'] = script
|
params['script'] = script
|
||||||
|
params['ova'] = ova
|
||||||
params['arg'] = script_args
|
params['arg'] = script_args
|
||||||
|
|
||||||
retcode, retdata = pvc_provisioner.profile_add(config, params)
|
retcode, retdata = pvc_provisioner.profile_add(config, params)
|
||||||
@ -3086,7 +3375,7 @@ def cli_maintenance():
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
@ -3127,7 +3416,7 @@ def status_cluster(oformat):
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
retcode, retdata = pvc_cluster.get_info(config)
|
retcode, retdata = pvc_cluster.get_info(config)
|
||||||
@ -3150,7 +3439,7 @@ def init_cluster(confirm_flag):
|
|||||||
"""
|
"""
|
||||||
# Abort commands under this group if config is bad
|
# Abort commands under this group if config is bad
|
||||||
if config.get('badcfg', None):
|
if config.get('badcfg', None):
|
||||||
click.echo('No cluster specified and no local pvc-api.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
click.echo('No cluster specified and no local pvcapid.yaml configuration found. Use "pvc cluster" to add a cluster API to connect to.')
|
||||||
exit(1)
|
exit(1)
|
||||||
|
|
||||||
if not confirm_flag:
|
if not confirm_flag:
|
||||||
@ -3186,7 +3475,7 @@ def cli(_cluster, _debug):
|
|||||||
"PVC_CLUSTER": Set the cluster to access instead of using --cluster/-c
|
"PVC_CLUSTER": Set the cluster to access instead of using --cluster/-c
|
||||||
|
|
||||||
If no PVC_CLUSTER/--cluster is specified, attempts first to load the "local" cluster, checking
|
If no PVC_CLUSTER/--cluster is specified, attempts first to load the "local" cluster, checking
|
||||||
for an API configuration in "/etc/pvc/pvc-api.yaml". If this is also not found, abort.
|
for an API configuration in "/etc/pvc/pvcapid.yaml". If this is also not found, abort.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
global config
|
global config
|
||||||
@ -3271,6 +3560,7 @@ ceph_pool.add_command(ceph_pool_remove)
|
|||||||
ceph_pool.add_command(ceph_pool_list)
|
ceph_pool.add_command(ceph_pool_list)
|
||||||
|
|
||||||
ceph_volume.add_command(ceph_volume_add)
|
ceph_volume.add_command(ceph_volume_add)
|
||||||
|
ceph_volume.add_command(ceph_volume_upload)
|
||||||
ceph_volume.add_command(ceph_volume_resize)
|
ceph_volume.add_command(ceph_volume_resize)
|
||||||
ceph_volume.add_command(ceph_volume_rename)
|
ceph_volume.add_command(ceph_volume_rename)
|
||||||
ceph_volume.add_command(ceph_volume_clone)
|
ceph_volume.add_command(ceph_volume_clone)
|
||||||
@ -3291,6 +3581,7 @@ cli_storage.add_command(ceph_volume)
|
|||||||
|
|
||||||
provisioner_template_system.add_command(provisioner_template_system_list)
|
provisioner_template_system.add_command(provisioner_template_system_list)
|
||||||
provisioner_template_system.add_command(provisioner_template_system_add)
|
provisioner_template_system.add_command(provisioner_template_system_add)
|
||||||
|
provisioner_template_system.add_command(provisioner_template_system_modify)
|
||||||
provisioner_template_system.add_command(provisioner_template_system_remove)
|
provisioner_template_system.add_command(provisioner_template_system_remove)
|
||||||
|
|
||||||
provisioner_template_network.add_command(provisioner_template_network_list)
|
provisioner_template_network.add_command(provisioner_template_network_list)
|
||||||
@ -3315,15 +3606,21 @@ provisioner_template.add_command(provisioner_template_storage)
|
|||||||
provisioner_template.add_command(provisioner_template_list)
|
provisioner_template.add_command(provisioner_template_list)
|
||||||
|
|
||||||
provisioner_userdata.add_command(provisioner_userdata_list)
|
provisioner_userdata.add_command(provisioner_userdata_list)
|
||||||
|
provisioner_userdata.add_command(provisioner_userdata_show)
|
||||||
provisioner_userdata.add_command(provisioner_userdata_add)
|
provisioner_userdata.add_command(provisioner_userdata_add)
|
||||||
provisioner_userdata.add_command(provisioner_userdata_modify)
|
provisioner_userdata.add_command(provisioner_userdata_modify)
|
||||||
provisioner_userdata.add_command(provisioner_userdata_remove)
|
provisioner_userdata.add_command(provisioner_userdata_remove)
|
||||||
|
|
||||||
provisioner_script.add_command(provisioner_script_list)
|
provisioner_script.add_command(provisioner_script_list)
|
||||||
|
provisioner_script.add_command(provisioner_script_show)
|
||||||
provisioner_script.add_command(provisioner_script_add)
|
provisioner_script.add_command(provisioner_script_add)
|
||||||
provisioner_script.add_command(provisioner_script_modify)
|
provisioner_script.add_command(provisioner_script_modify)
|
||||||
provisioner_script.add_command(provisioner_script_remove)
|
provisioner_script.add_command(provisioner_script_remove)
|
||||||
|
|
||||||
|
provisioner_ova.add_command(provisioner_ova_list)
|
||||||
|
provisioner_ova.add_command(provisioner_ova_upload)
|
||||||
|
provisioner_ova.add_command(provisioner_ova_remove)
|
||||||
|
|
||||||
provisioner_profile.add_command(provisioner_profile_list)
|
provisioner_profile.add_command(provisioner_profile_list)
|
||||||
provisioner_profile.add_command(provisioner_profile_add)
|
provisioner_profile.add_command(provisioner_profile_add)
|
||||||
provisioner_profile.add_command(provisioner_profile_modify)
|
provisioner_profile.add_command(provisioner_profile_modify)
|
||||||
@ -3332,6 +3629,7 @@ provisioner_profile.add_command(provisioner_profile_remove)
|
|||||||
cli_provisioner.add_command(provisioner_template)
|
cli_provisioner.add_command(provisioner_template)
|
||||||
cli_provisioner.add_command(provisioner_userdata)
|
cli_provisioner.add_command(provisioner_userdata)
|
||||||
cli_provisioner.add_command(provisioner_script)
|
cli_provisioner.add_command(provisioner_script)
|
||||||
|
cli_provisioner.add_command(provisioner_ova)
|
||||||
cli_provisioner.add_command(provisioner_profile)
|
cli_provisioner.add_command(provisioner_profile)
|
||||||
cli_provisioner.add_command(provisioner_create)
|
cli_provisioner.add_command(provisioner_create)
|
||||||
cli_provisioner.add_command(provisioner_status)
|
cli_provisioner.add_command(provisioner_status)
|
||||||
|
32
client-cli/scripts/README
Normal file
32
client-cli/scripts/README
Normal file
@ -0,0 +1,32 @@
|
|||||||
|
# PVC helper scripts
|
||||||
|
|
||||||
|
These helper scripts are included with the PVC client to aid administrators in some meta-functions.
|
||||||
|
|
||||||
|
The following scripts are provided for use:
|
||||||
|
|
||||||
|
## `migrate_vm`
|
||||||
|
|
||||||
|
Migrates a VM, with downtime, from one PVC cluster to another.
|
||||||
|
|
||||||
|
`migrate_vm <vm> <source_cluster> <destination_cluster>`
|
||||||
|
|
||||||
|
### Arguments
|
||||||
|
|
||||||
|
* `vm`: The virtual machine to migrate
|
||||||
|
* `source_cluster`: The source PVC cluster; must be a valid cluster to the local PVC client
|
||||||
|
* `destination_cluster`: The destination PVC cluster; must be a valid cluster to the local PVC client
|
||||||
|
|
||||||
|
## `import_vm`
|
||||||
|
|
||||||
|
Imports a VM from another platform into a PVC cluster.
|
||||||
|
|
||||||
|
## `export_vm`
|
||||||
|
|
||||||
|
Exports a (stopped) VM from a PVC cluster to another platform.
|
||||||
|
|
||||||
|
`export_vm <vm> <source_cluster>`
|
||||||
|
|
||||||
|
### Arguments
|
||||||
|
|
||||||
|
* `vm`: The virtual machine to migrate
|
||||||
|
* `source_cluster`: The source PVC cluster; must be a valid cluster to the local PVC client
|
99
client-cli/scripts/export_vm
Executable file
99
client-cli/scripts/export_vm
Executable file
@ -0,0 +1,99 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
# export_vm - Exports a VM from a PVC cluster to local files
|
||||||
|
# Part of the Parallel Virtual Cluster (PVC) system
|
||||||
|
#
|
||||||
|
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
#
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
set -o errexit
|
||||||
|
set -o pipefail
|
||||||
|
|
||||||
|
usage() {
|
||||||
|
echo -e "Export a VM from a PVC cluster to local files."
|
||||||
|
echo -e "Usage:"
|
||||||
|
echo -e " $0 <vm> <source_cluster> [<destination_directory>]"
|
||||||
|
echo -e ""
|
||||||
|
echo -e "Important information:"
|
||||||
|
echo -e " * The local user must have valid SSH access to the primary coordinator in the source_cluster."
|
||||||
|
echo -e " * The user on the cluster primary coordinator must have 'sudo' access."
|
||||||
|
echo -e " * If the VM is not in 'stop' state, it will be shut down."
|
||||||
|
echo -e " * Do not switch the cluster primary coordinator while the script is running."
|
||||||
|
echo -e " * Ensure you have enough space in <destination_directory> to store all VM disk images."
|
||||||
|
}
|
||||||
|
|
||||||
|
fail() {
|
||||||
|
echo -e "$@"
|
||||||
|
exit 1
|
||||||
|
}
|
||||||
|
|
||||||
|
# Arguments
|
||||||
|
if [[ -z ${1} || -z ${2} ]]; then
|
||||||
|
usage
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
source_vm="${1}"
|
||||||
|
source_cluster="${2}"
|
||||||
|
if [[ -n "${3}" ]]; then
|
||||||
|
destination_directory="${3}"
|
||||||
|
else
|
||||||
|
destination_directory="."
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Verify the cluster is reachable
|
||||||
|
pvc -c ${source_cluster} status &>/dev/null || fail "Specified source_cluster is not accessible"
|
||||||
|
|
||||||
|
# Determine the connection IP
|
||||||
|
cluster_address="$( pvc cluster list 2>/dev/null | grep -i "^${source_cluster}" | awk '{ print $2 }' )"
|
||||||
|
|
||||||
|
# Attempt to connect to the cluster address
|
||||||
|
ssh ${cluster_address} which pvc &>/dev/null || fail "Could not SSH to source_cluster primary coordinator host"
|
||||||
|
|
||||||
|
# Verify that the VM exists
|
||||||
|
pvc -c ${source_cluster} vm info ${source_vm} &>/dev/null || fail "Specified VM is not present on the cluster"
|
||||||
|
|
||||||
|
echo "Verification complete."
|
||||||
|
|
||||||
|
# Shut down the VM
|
||||||
|
echo -n "Shutting down VM..."
|
||||||
|
set +o errexit
|
||||||
|
pvc -c ${source_cluster} vm shutdown ${source_vm} &>/dev/null
|
||||||
|
shutdown_success=$?
|
||||||
|
while ! pvc -c ${source_cluster} vm info ${source_vm} 2>/dev/null | grep '^State' | grep -q -E 'stop|disable'; do
|
||||||
|
sleep 1
|
||||||
|
echo -n "."
|
||||||
|
done
|
||||||
|
set -o errexit
|
||||||
|
echo " done."
|
||||||
|
|
||||||
|
# Dump the XML file
|
||||||
|
echo -n "Exporting VM configuration file... "
|
||||||
|
pvc -c ${source_cluster} vm dump ${source_vm} 1> ${destination_directory}/${source_vm}.xml 2>/dev/null
|
||||||
|
echo "done".
|
||||||
|
|
||||||
|
# Determine the list of volumes in this VM
|
||||||
|
volume_list="$( pvc -c ${source_cluster} vm info --long ${source_vm} 2>/dev/null | grep -w 'rbd' | awk '{ print $3 }' )"
|
||||||
|
for volume in ${volume_list}; do
|
||||||
|
volume_pool="$( awk -F '/' '{ print $1 }' <<<"${volume}" )"
|
||||||
|
volume_name="$( awk -F '/' '{ print $2 }' <<<"${volume}" )"
|
||||||
|
volume_size="$( pvc -c ${source_cluster} storage volume list -p ${volume_pool} ${volume_name} 2>/dev/null | grep "^${volume_name}" | awk '{ print $3 }' )"
|
||||||
|
echo -n "Exporting disk ${volume_name} (${volume_size})... "
|
||||||
|
ssh ${cluster_address} sudo rbd map ${volume_pool}/${volume_name} &>/dev/null || fail "Failed to map volume ${volume}"
|
||||||
|
ssh ${cluster_address} sudo dd if="/dev/rbd/${volume_pool}/${volume_name}" bs=1M 2>/dev/null | dd bs=1M of="${destination_directory}/${volume_name}.img" 2>/dev/null
|
||||||
|
ssh ${cluster_address} sudo rbd unmap ${volume_pool}/${volume_name} &>/dev/null || fail "Failed to unmap volume ${volume}"
|
||||||
|
echo "done."
|
||||||
|
done
|
119
client-cli/scripts/force_single_node
Executable file
119
client-cli/scripts/force_single_node
Executable file
@ -0,0 +1,119 @@
#!/usr/bin/env bash

# force_single_node - Manually promote a single coordinator node from a degraded cluster
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

set -o errexit
set -o pipefail

usage() {
    echo -e "Manually promote a single coordinator node from a degraded cluster"
    echo -e ""
    echo -e "DANGER: This action will cause a permanent split-brain within the cluster"
    echo -e "        which will have to be corrected manually upon cluster restoration."
    echo -e ""
    echo -e "This script is primarily designed for small clusters in situations where 2"
    echo -e "of the 3 coordinators have become unreachable or shut down. It will promote"
    echo -e "the remaining lone_node to act as a standalone coordinator, allowing basic"
    echo -e "cluster functionality to continue in a heavily degraded state until the"
    echo -e "situation can be rectified. This should only be done in exceptional cases"
    echo -e "as a disaster recovery mechanism when the remaining nodes will remain down"
    echo -e "for a significant amount of time but some VMs are required to run. In general,"
    echo -e "use of this script is not advisable."
    echo -e ""
    echo -e "Usage:"
    echo -e "  $0 <target_cluster> <lone_node>"
    echo -e ""
    echo -e "Important information:"
    echo -e "  * The lone_node must be a fully-qualified name that is directly reachable from"
    echo -e "    the local system via SSH."
    echo -e "  * The local user must have valid SSH access to the lone_node in the cluster."
    echo -e "  * The user on the cluster node must have 'sudo' access."
}

fail() {
    echo -e "$@"
    exit 1
}

# Arguments
if [[ -z ${1} || -z ${2} ]]; then
    usage
    exit 1
fi
target_cluster="${1}"
lone_node="${2}"
lone_node_shortname="${lone_node%%.*}"

# Attempt to connect to the node
ssh ${lone_node} which pvc &>/dev/null || fail "Could not SSH to the lone_node host"

echo "Verification complete."

echo -n "Allowing Ceph single-node operation... "
temp_monmap="$( ssh ${lone_node} mktemp )"
ssh ${lone_node} "sudo systemctl stop ceph-mon@${lone_node_shortname}" &>/dev/null
ssh ${lone_node} "sudo ceph-mon -i ${lone_node_shortname} --extract-monmap ${temp_monmap}" &>/dev/null
ssh ${lone_node} "sudo cp ${temp_monmap} /etc/ceph/monmap.orig" &>/dev/null
mon_list="$( ssh ${lone_node} strings ${temp_monmap} | sort | uniq )"
for mon in ${mon_list}; do
    if [[ ${mon} == ${lone_node_shortname} ]]; then
        continue
    fi
    ssh ${lone_node} "sudo monmaptool ${temp_monmap} --rm ${mon}" &>/dev/null
done
ssh ${lone_node} "sudo ceph-mon -i ${lone_node_shortname} --inject-monmap ${temp_monmap}" &>/dev/null
ssh ${lone_node} "sudo systemctl start ceph-mon@${lone_node_shortname}" &>/dev/null
sleep 5
ssh ${lone_node} "sudo ceph osd set noout" &>/dev/null
echo "done."
echo -e "Restoration steps:"
echo -e "  sudo systemctl stop ceph-mon@${lone_node_shortname}"
echo -e "  sudo ceph-mon -i ${lone_node_shortname} --inject-monmap /etc/ceph/monmap.orig"
echo -e "  sudo systemctl start ceph-mon@${lone_node_shortname}"
echo -e "  sudo ceph osd unset noout"

echo -n "Allowing Zookeeper single-node operation... "
temp_zoocfg="$( ssh ${lone_node} mktemp )"
ssh ${lone_node} "sudo systemctl stop zookeeper"
ssh ${lone_node} "sudo awk -v lone_node=${lone_node_shortname} '{
    FS=\"=|:\"
    if ( \$1 ~ /^server/ ){
        if (\$2 == lone_node) {
            print \$0
        } else {
            print \"#\" \$0
        }
    } else {
        print \$0
    }
}' /etc/zookeeper/conf/zoo.cfg > ${temp_zoocfg}"
ssh ${lone_node} "sudo mv /etc/zookeeper/conf/zoo.cfg /etc/zookeeper/conf/zoo.cfg.orig"
ssh ${lone_node} "sudo mv ${temp_zoocfg} /etc/zookeeper/conf/zoo.cfg"
ssh ${lone_node} "sudo systemctl start zookeeper"
echo "done."
echo -e "Restoration steps:"
echo -e "  sudo systemctl stop zookeeper"
echo -e "  sudo mv /etc/zookeeper/conf/zoo.cfg.orig /etc/zookeeper/conf/zoo.cfg"
echo -e "  sudo systemctl start zookeeper"
ssh ${lone_node} "sudo systemctl stop ceph-mon@${lone_node_shortname}"

echo ""
ssh ${lone_node} "sudo pvc status 2>/dev/null"
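As a rough usage sketch (the connection name and node FQDN below are placeholders, not values taken from this repository), a disaster-recovery invocation would look something like:

    ./force_single_node mycluster pvchv1.example.tld

where pvchv1.example.tld is the sole remaining coordinator, reachable over SSH by the local user with sudo rights.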
81
client-cli/scripts/import_vm
Executable file
@ -0,0 +1,81 @@
#!/usr/bin/env bash

# import_vm - Imports a VM to a PVC cluster from local files
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

set -o errexit
set -o pipefail

usage() {
    echo -e "Import a VM to a PVC cluster from local files."
    echo -e "Usage:"
    echo -e "  $0 <destination_cluster> <destination_pool> <vm_configuration_file> <vm_disk_file_1> [<vm_disk_file_2>] [...]"
    echo -e ""
    echo -e "Important information:"
    echo -e "  * At least one disk must be specified; all disks that are present in vm_configuration_file"
    echo -e "    should be specified, though this is not strictly required."
    echo -e "  * Do not switch the cluster primary coordinator while the script is running."
    echo -e "  * Ensure you have enough space on the destination cluster to store all VM disks."
}

fail() {
    echo -e "$@"
    exit 1
}

# Arguments
if [[ -z ${1} || -z ${2} || -z ${3} || -z ${4} ]]; then
    usage
    exit 1
fi
destination_cluster="${1}"; shift
destination_pool="${1}"; shift
vm_config_file="${1}"; shift
vm_disk_files=( ${@} )

# Verify the cluster is reachable
pvc -c ${destination_cluster} status &>/dev/null || fail "Specified destination_cluster is not accessible"

# Determine the connection IP
cluster_address="$( pvc cluster list 2>/dev/null | grep -i "^${destination_cluster}" | awk '{ print $2 }' )"

echo "Verification complete."

# Determine information about the VM from the config file
parse_xml_field() {
    field="${1}"
    line="$( grep -F "<${field}>" ${vm_config_file} )"
    awk -F '>|<' '{ print $3 }' <<<"${line}"
}
vm_name="$( parse_xml_field name )"
echo "Importing VM ${vm_name}..."
pvc -c ${destination_cluster} vm define ${vm_config_file} 2>/dev/null

# Create the disks on the cluster
for disk_file in ${vm_disk_files[@]}; do
    disk_file_basename="$( basename ${disk_file} )"
    disk_file_ext="${disk_file_basename##*.}"
    disk_file_name="$( basename ${disk_file_basename} .${disk_file_ext} )"
    disk_file_size="$( stat --format="%s" ${disk_file} )"

    echo "Importing disk ${disk_file_name}... "
    pvc -c ${destination_cluster} storage volume add ${destination_pool} ${disk_file_name} ${disk_file_size}B 2>/dev/null
    pvc -c ${destination_cluster} storage volume upload ${destination_pool} ${disk_file_name} ${disk_file} 2>/dev/null
done
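A hypothetical invocation, assuming a configured PVC connection named cluster2 and a destination pool named vms (both placeholders), might be:

    ./import_vm cluster2 vms myvm.xml myvm_disk0.img myvm_disk1.img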
116
client-cli/scripts/migrate_vm
Executable file
@ -0,0 +1,116 @@
#!/usr/bin/env bash

# migrate_vm - Exports a VM from a PVC cluster to another PVC cluster
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

set -o errexit
set -o pipefail

usage() {
    echo -e "Export a VM from a PVC cluster to another PVC cluster."
    echo -e "Usage:"
    echo -e "  $0 <vm> <source_cluster> <destination_cluster> <destination_pool>"
    echo -e ""
    echo -e "Important information:"
    echo -e "  * The local user must have valid SSH access to the primary coordinator in the source_cluster."
    echo -e "  * The user on the cluster primary coordinator must have 'sudo' access."
    echo -e "  * If the VM is not in 'stop' state, it will be shut down."
    echo -e "  * Do not switch the cluster primary coordinator on either cluster while the script is running."
    echo -e "  * Ensure you have enough space on the target cluster to store all VM disks."
}

fail() {
    echo -e "$@"
    exit 1
}

# Arguments
if [[ -z ${1} || -z ${2} || -z ${3} || -z ${4} ]]; then
    usage
    exit 1
fi
source_vm="${1}"
source_cluster="${2}"
destination_cluster="${3}"
destination_pool="${4}"

# Verify each cluster is reachable
pvc -c ${source_cluster} status &>/dev/null || fail "Specified source_cluster is not accessible"
pvc -c ${destination_cluster} status &>/dev/null || fail "Specified destination_cluster is not accessible"

# Determine the connection IPs
source_cluster_address="$( pvc cluster list 2>/dev/null | grep -i "^${source_cluster}" | awk '{ print $2 }' )"
destination_cluster_address="$( pvc cluster list 2>/dev/null | grep -i "^${destination_cluster}" | awk '{ print $2 }' )"

# Attempt to connect to the cluster addresses
ssh ${source_cluster_address} which pvc &>/dev/null || fail "Could not SSH to source_cluster primary coordinator host"
ssh ${destination_cluster_address} which pvc &>/dev/null || fail "Could not SSH to destination_cluster primary coordinator host"

# Verify that the VM exists
pvc -c ${source_cluster} vm info ${source_vm} &>/dev/null || fail "Specified VM is not present on the source cluster"

echo "Verification complete."

# Shut down the VM
echo -n "Shutting down VM..."
set +o errexit
pvc -c ${source_cluster} vm shutdown ${source_vm} &>/dev/null
shutdown_success=$?
while ! pvc -c ${source_cluster} vm info ${source_vm} 2>/dev/null | grep '^State' | grep -q -E 'stop|disable'; do
    sleep 1
    echo -n "."
done
set -o errexit
echo " done."

tempfile="$( mktemp )"

# Dump the XML file
echo -n "Exporting VM configuration file from source cluster... "
pvc -c ${source_cluster} vm dump ${source_vm} 1> ${tempfile} 2>/dev/null
echo "done."

# Import the XML file
echo -n "Importing VM configuration file to destination cluster... "
pvc -c ${destination_cluster} vm define ${tempfile}
echo "done."

rm -f ${tempfile}

# Determine the list of volumes in this VM
volume_list="$( pvc -c ${source_cluster} vm info --long ${source_vm} 2>/dev/null | grep -w 'rbd' | awk '{ print $3 }' )"

# Parse and migrate each volume
for volume in ${volume_list}; do
    volume_pool="$( awk -F '/' '{ print $1 }' <<<"${volume}" )"
    volume_name="$( awk -F '/' '{ print $2 }' <<<"${volume}" )"
    volume_size="$( pvc -c ${source_cluster} storage volume list -p ${volume_pool} ${volume_name} 2>/dev/null | grep "^${volume_name}" | awk '{ print $3 }' )"
    echo "Transferring disk ${volume_name} (${volume_size})... "
    pvc -c ${destination_cluster} storage volume add ${destination_pool} ${volume_name} ${volume_size} 2>/dev/null
    ssh ${source_cluster_address} sudo rbd map ${volume_pool}/${volume_name} &>/dev/null || fail "Failed to map volume ${volume} on source cluster"
    ssh ${destination_cluster_address} sudo rbd map ${destination_pool}/${volume_name} &>/dev/null || fail "Failed to map volume ${volume} on destination cluster"
    ssh ${source_cluster_address} sudo dd if="/dev/rbd/${volume_pool}/${volume_name}" bs=1M 2>/dev/null | pv | ssh ${destination_cluster_address} sudo dd bs=1M of="/dev/rbd/${destination_pool}/${volume_name}" 2>/dev/null
    ssh ${source_cluster_address} sudo rbd unmap ${volume_pool}/${volume_name} &>/dev/null || fail "Failed to unmap volume ${volume} on source cluster"
    ssh ${destination_cluster_address} sudo rbd unmap ${destination_pool}/${volume_name} &>/dev/null || fail "Failed to unmap volume ${volume} on destination cluster"
done

if [[ ${shutdown_success} -eq 0 ]]; then
    pvc -c ${destination_cluster} vm start ${source_vm}
fi
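A hypothetical invocation, with all names being placeholders, might be:

    ./migrate_vm myvm cluster1 cluster2 vms

which shuts down myvm on cluster1 if it is running, copies its configuration and disks to cluster2 into pool vms, and starts it on the destination if the shutdown succeeded.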
@ -20,15 +20,16 @@
|
|||||||
#
|
#
|
||||||
###############################################################################
|
###############################################################################
|
||||||
|
|
||||||
|
import os
|
||||||
import re
|
import re
|
||||||
import click
|
import click
|
||||||
import json
|
import json
|
||||||
import time
|
import time
|
||||||
import math
|
import math
|
||||||
|
|
||||||
import client_lib.ansiprint as ansiprint
|
import daemon_lib.ansiprint as ansiprint
|
||||||
import client_lib.zkhandler as zkhandler
|
import daemon_lib.zkhandler as zkhandler
|
||||||
import client_lib.common as common
|
import daemon_lib.common as common
|
||||||
|
|
||||||
#
|
#
|
||||||
# Supplemental functions
|
# Supplemental functions
|
||||||
@ -96,8 +97,11 @@ def format_bytes_tohuman(databytes):
|
|||||||
|
|
||||||
def format_bytes_fromhuman(datahuman):
|
def format_bytes_fromhuman(datahuman):
|
||||||
# Trim off human-readable character
|
# Trim off human-readable character
|
||||||
dataunit = datahuman[-1]
|
dataunit = str(datahuman)[-1]
|
||||||
datasize = int(datahuman[:-1])
|
datasize = int(str(datahuman)[:-1])
|
||||||
|
if not re.match('[A-Z]', dataunit):
|
||||||
|
dataunit = 'B'
|
||||||
|
datasize = int(datahuman)
|
||||||
databytes = datasize * byte_unit_matrix[dataunit]
|
databytes = datasize * byte_unit_matrix[dataunit]
|
||||||
return '{}B'.format(databytes)
|
return '{}B'.format(databytes)
|
||||||
|
|
||||||
@ -205,6 +209,8 @@ def getOutputColoursOSD(osd_information):
|
|||||||
|
|
||||||
return osd_up_flag, osd_up_colour, osd_in_flag, osd_in_colour
|
return osd_up_flag, osd_up_colour, osd_in_flag, osd_in_colour
|
||||||
|
|
||||||
|
# OSD addition and removal uses the /cmd/ceph pipe
|
||||||
|
# These actions must occur on the specific node they reference
|
||||||
def add_osd(zk_conn, node, device, weight):
|
def add_osd(zk_conn, node, device, weight):
|
||||||
# Verify the target node exists
|
# Verify the target node exists
|
||||||
if not common.verifyNode(zk_conn, node):
|
if not common.verifyNode(zk_conn, node):
|
||||||
@ -279,118 +285,35 @@ def in_osd(zk_conn, osd_id):
|
|||||||
if not verifyOSD(zk_conn, osd_id):
|
if not verifyOSD(zk_conn, osd_id):
|
||||||
return False, 'ERROR: No OSD with ID "{}" is present in the cluster.'.format(osd_id)
|
return False, 'ERROR: No OSD with ID "{}" is present in the cluster.'.format(osd_id)
|
||||||
|
|
||||||
# Tell the cluster to online an OSD
|
retcode, stdout, stderr = common.run_os_command('ceph osd in {}'.format(osd_id))
|
||||||
in_osd_string = 'osd_in {}'.format(osd_id)
|
if retcode:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': in_osd_string})
|
return False, 'ERROR: Failed to enable OSD {}: {}'.format(osd_id, stderr)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-osd_in':
|
|
||||||
message = 'Set OSD {} online in the cluster.'.format(osd_id)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to set OSD online; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
success = False
|
|
||||||
message = 'ERROR Command ignored by node.'
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
return True, 'Set OSD {} online.'.format(osd_id)
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
time.sleep(0.5)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
|
||||||
|
|
||||||
def out_osd(zk_conn, osd_id):
|
def out_osd(zk_conn, osd_id):
|
||||||
if not verifyOSD(zk_conn, osd_id):
|
if not verifyOSD(zk_conn, osd_id):
|
||||||
return False, 'ERROR: No OSD with ID "{}" is present in the cluster.'.format(osd_id)
|
return False, 'ERROR: No OSD with ID "{}" is present in the cluster.'.format(osd_id)
|
||||||
|
|
||||||
# Tell the cluster to offline an OSD
|
retcode, stdout, stderr = common.run_os_command('ceph osd out {}'.format(osd_id))
|
||||||
out_osd_string = 'osd_out {}'.format(osd_id)
|
if retcode:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': out_osd_string})
|
return False, 'ERROR: Failed to disable OSD {}: {}'.format(osd_id, stderr)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-osd_out':
|
|
||||||
message = 'Set OSD {} offline in the cluster.'.format(osd_id)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to set OSD offline; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
success = False
|
|
||||||
message = 'ERROR Command ignored by node.'
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
return True, 'Set OSD {} offline.'.format(osd_id)
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
time.sleep(0.5)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
|
||||||
|
|
||||||
def set_osd(zk_conn, option):
|
def set_osd(zk_conn, option):
|
||||||
# Tell the cluster to set an OSD property
|
retcode, stdout, stderr = common.run_os_command('ceph osd set {}'.format(option))
|
||||||
set_osd_string = 'osd_set {}'.format(option)
|
if retcode:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': set_osd_string})
|
return False, 'ERROR: Failed to set property "{}": {}'.format(option, stderr)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-osd_set':
|
|
||||||
message = 'Set OSD property {} on the cluster.'.format(option)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to set OSD property; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
success = False
|
|
||||||
message = 'ERROR Command ignored by node.'
|
|
||||||
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
return True, 'Set OSD property "{}".'.format(option)
|
||||||
return success, message
|
|
||||||
|
|
||||||
def unset_osd(zk_conn, option):
|
def unset_osd(zk_conn, option):
|
||||||
# Tell the cluster to unset an OSD property
|
retcode, stdout, stderr = common.run_os_command('ceph osd unset {}'.format(option))
|
||||||
unset_osd_string = 'osd_unset {}'.format(option)
|
if retcode:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': unset_osd_string})
|
return False, 'ERROR: Failed to unset property "{}": {}'.format(option, stderr)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-osd_unset':
|
|
||||||
message = 'Unset OSD property {} on the cluster.'.format(option)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to unset OSD property; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
success = False
|
|
||||||
message = 'ERROR Command ignored by node.'
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
return True, 'Unset OSD property "{}".'.format(option)
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
time.sleep(0.5)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
|
||||||
|
|
||||||
def get_list_osd(zk_conn, limit, is_fuzzy=True):
|
def get_list_osd(zk_conn, limit, is_fuzzy=True):
|
||||||
osd_list = []
|
osd_list = []
|
||||||
@ -664,65 +587,66 @@ def getPoolInformation(zk_conn, pool):
|
|||||||
return pool_information
|
return pool_information
|
||||||
|
|
||||||
def add_pool(zk_conn, name, pgs, replcfg):
|
def add_pool(zk_conn, name, pgs, replcfg):
|
||||||
# Tell the cluster to create a new pool
|
# Prepare the copies/mincopies variables
|
||||||
add_pool_string = 'pool_add {},{},{}'.format(name, pgs, replcfg)
|
try:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': add_pool_string})
|
copies, mincopies = replcfg.split(',')
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
copies = int(copies.replace('copies=', ''))
|
||||||
time.sleep(0.5)
|
mincopies = int(mincopies.replace('mincopies=', ''))
|
||||||
# Acquire a read lock, so we get the return exclusively
|
except:
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
copies = None
|
||||||
with lock:
|
mincopies = None
|
||||||
try:
|
if not copies or not mincopies:
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
return False, 'ERROR: Replication configuration "{}" is not valid.'.format(replcfg)
|
||||||
if result == 'success-pool_add':
|
|
||||||
message = 'Created new RBD pool "{}" with "{}" PGs and replication configuration {}.'.format(name, pgs, replcfg)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to create new pool; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 1. Create the pool
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('ceph osd pool create {} {} replicated'.format(name, pgs))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to create pool "{}" with {} PGs: {}'.format(name, pgs, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
# 2. Set the size and minsize
|
||||||
|
retcode, stdout, stderr = common.run_os_command('ceph osd pool set {} size {}'.format(name, copies))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to set pool "{}" size of {}: {}'.format(name, copies, stderr)
|
||||||
|
|
||||||
return success, message
|
retcode, stdout, stderr = common.run_os_command('ceph osd pool set {} min_size {}'.format(name, mincopies))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to set pool "{}" minimum size of {}: {}'.format(name, mincopies, stderr)
|
||||||
|
|
||||||
|
# 3. Enable RBD application
|
||||||
|
retcode, stdout, stderr = common.run_os_command('ceph osd pool application enable {} rbd'.format(name))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to enable RBD application on pool "{}" : {}'.format(name, stderr)
|
||||||
|
|
||||||
|
# 4. Add the new pool to Zookeeper
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/pools/{}'.format(name): '',
|
||||||
|
'/ceph/pools/{}/pgs'.format(name): pgs,
|
||||||
|
'/ceph/pools/{}/stats'.format(name): '{}',
|
||||||
|
'/ceph/volumes/{}'.format(name): '',
|
||||||
|
'/ceph/snapshots/{}'.format(name): '',
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Created RBD pool "{}" with {} PGs'.format(name, pgs)
|
||||||
|
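For illustration, the Ceph commands the reworked add_pool path now runs directly are approximately the following (pool name, PG count, and copies/mincopies values here are placeholders, not values from the diff):

    ceph osd pool create vms 256 replicated
    ceph osd pool set vms size 3
    ceph osd pool set vms min_size 2
    ceph osd pool application enable vms rbd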
|
||||||
def remove_pool(zk_conn, name):
|
def remove_pool(zk_conn, name):
|
||||||
if not verifyPool(zk_conn, name):
|
if not verifyPool(zk_conn, name):
|
||||||
return False, 'ERROR: No pool with name "{}" is present in the cluster.'.format(name)
|
return False, 'ERROR: No pool with name "{}" is present in the cluster.'.format(name)
|
||||||
|
|
||||||
# Tell the cluster to create a new pool
|
# 1. Remove pool volumes
|
||||||
remove_pool_string = 'pool_remove {}'.format(name)
|
for volume in zkhandler.listchildren(zk_conn, '/ceph/volumes/{}'.format(name)):
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': remove_pool_string})
|
remove_volume(zk_conn, name, volume)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-pool_remove':
|
|
||||||
message = 'Removed RBD pool "{}" and all volumes.'.format(name)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to remove pool; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except Exception as e:
|
|
||||||
message = 'ERROR: Command ignored by node: {}'.format(e)
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 2. Remove the pool
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('ceph osd pool rm {pool} {pool} --yes-i-really-really-mean-it'.format(pool=name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to remove pool "{}": {}'.format(name, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 3. Delete pool from Zookeeper
|
||||||
|
zkhandler.deletekey(zk_conn, '/ceph/pools/{}'.format(name))
|
||||||
|
zkhandler.deletekey(zk_conn, '/ceph/volumes/{}'.format(name))
|
||||||
|
zkhandler.deletekey(zk_conn, '/ceph/snapshots/{}'.format(name))
|
||||||
|
|
||||||
|
return True, 'Removed RBD pool "{}" and all volumes.'.format(name)
|
||||||
|
|
||||||
def get_list_pool(zk_conn, limit, is_fuzzy=True):
|
def get_list_pool(zk_conn, limit, is_fuzzy=True):
|
||||||
pool_list = []
|
pool_list = []
|
||||||
@ -967,154 +891,147 @@ def getVolumeInformation(zk_conn, pool, volume):
|
|||||||
return volume_information
|
return volume_information
|
||||||
|
|
||||||
def add_volume(zk_conn, pool, name, size):
|
def add_volume(zk_conn, pool, name, size):
|
||||||
# Tell the cluster to create a new volume
|
# 1. Create the volume
|
||||||
databytes = format_bytes_fromhuman(size)
|
retcode, stdout, stderr = common.run_os_command('rbd create --size {} --image-feature layering,exclusive-lock {}/{}'.format(size, pool, name))
|
||||||
add_volume_string = 'volume_add {},{},{}'.format(pool, name, databytes)
|
if retcode:
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': add_volume_string})
|
return False, 'ERROR: Failed to create RBD volume "{}": {}'.format(name, stderr)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-volume_add':
|
|
||||||
message = 'Created new RBD volume "{}" of size "{}" on pool "{}".'.format(name, size, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to create new volume; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 2. Get volume stats
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd info --format json {}/{}'.format(pool, name))
|
||||||
with lock:
|
volstats = stdout
|
||||||
time.sleep(0.5)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 3. Add the new volume to Zookeeper
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/volumes/{}/{}'.format(pool, name): '',
|
||||||
|
'/ceph/volumes/{}/{}/stats'.format(pool, name): volstats,
|
||||||
|
'/ceph/snapshots/{}/{}'.format(pool, name): '',
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Created RBD volume "{}/{}" ({}).'.format(pool, name, size)
|
||||||
|
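For illustration, the rbd commands the reworked add_volume path now runs directly are roughly the following (pool, volume name, and size are placeholders):

    rbd create --size 20G --image-feature layering,exclusive-lock vms/myvm_disk0
    rbd info --format json vms/myvm_disk0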
|
||||||
|
def clone_volume(zk_conn, pool, name_src, name_new):
|
||||||
|
if not verifyVolume(zk_conn, pool, name_src):
|
||||||
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name_src, pool)
|
||||||
|
|
||||||
|
# 1. Clone the volume
|
||||||
|
retcode, stdout, stderr = common.run_os_command('rbd copy {}/{} {}/{}'.format(pool, name_src, pool, name_new))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to clone RBD volume "{}" to "{}" in pool "{}": {}'.format(name_src, name_new, pool, stderr)
|
||||||
|
|
||||||
|
# 2. Get volume stats
|
||||||
|
retcode, stdout, stderr = common.run_os_command('rbd info --format json {}/{}'.format(pool, name_new))
|
||||||
|
volstats = stdout
|
||||||
|
|
||||||
|
# 3. Add the new volume to Zookeeper
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/volumes/{}/{}'.format(pool, name_new): '',
|
||||||
|
'/ceph/volumes/{}/{}/stats'.format(pool, name_new): volstats,
|
||||||
|
'/ceph/snapshots/{}/{}'.format(pool, name_new): '',
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Cloned RBD volume "{}" to "{}" in pool "{}"'.format(name_src, name_new, pool)
|
||||||
|
|
||||||
def resize_volume(zk_conn, pool, name, size):
|
def resize_volume(zk_conn, pool, name, size):
|
||||||
# Tell the cluster to resize the volume
|
if not verifyVolume(zk_conn, pool, name):
|
||||||
databytes = format_bytes_fromhuman(size)
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name, pool)
|
||||||
resize_volume_string = 'volume_resize {},{},{}'.format(pool, name, databytes)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': resize_volume_string})
|
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-volume_resize':
|
|
||||||
message = 'Resized RBD volume "{}" to size "{}" on pool "{}".'.format(name, size, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to resize volume; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 1. Resize the volume
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd resize --size {} {}/{}'.format(size, pool, name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to resize RBD volume "{}" to size "{}" in pool "{}": {}'.format(name, size, pool, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 2. Get volume stats
|
||||||
|
retcode, stdout, stderr = common.run_os_command('rbd info --format json {}/{}'.format(pool, name))
|
||||||
|
volstats = stdout
|
||||||
|
|
||||||
|
# 3. Add the new volume to Zookeeper
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/volumes/{}/{}'.format(pool, name): '',
|
||||||
|
'/ceph/volumes/{}/{}/stats'.format(pool, name): volstats,
|
||||||
|
'/ceph/snapshots/{}/{}'.format(pool, name): '',
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Resized RBD volume "{}" to size "{}" in pool "{}".'.format(name, size, pool)
|
||||||
|
|
||||||
def rename_volume(zk_conn, pool, name, new_name):
|
def rename_volume(zk_conn, pool, name, new_name):
|
||||||
# Tell the cluster to rename
|
if not verifyVolume(zk_conn, pool, name):
|
||||||
rename_volume_string = 'volume_rename {},{},{}'.format(pool, name, new_name)
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name, pool)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': rename_volume_string})
|
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-volume_rename':
|
|
||||||
message = 'Renamed RBD volume "{}" to "{}" on pool "{}".'.format(name, new_name, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to rename volume {} to {}; check node logs for details.'.format(name, new_name)
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 1. Rename the volume
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd rename {}/{} {}'.format(pool, name, new_name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to rename volume "{}" to "{}" in pool "{}": {}'.format(name, new_name, pool, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 2. Rename the volume in Zookeeper
|
||||||
|
zkhandler.renamekey(zk_conn, {
|
||||||
|
'/ceph/volumes/{}/{}'.format(pool, name): '/ceph/volumes/{}/{}'.format(pool, new_name),
|
||||||
|
'/ceph/snapshots/{}/{}'.format(pool, name): '/ceph/snapshots/{}/{}'.format(pool, new_name),
|
||||||
|
})
|
||||||
|
|
||||||
def clone_volume(zk_conn, pool, name, new_name):
|
# 3. Get volume stats
|
||||||
# Tell the cluster to clone
|
retcode, stdout, stderr = common.run_os_command('rbd info --format json {}/{}'.format(pool, new_name))
|
||||||
clone_volume_string = 'volume_clone {},{},{}'.format(pool, name, new_name)
|
volstats = stdout
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': clone_volume_string})
|
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-volume_clone':
|
|
||||||
message = 'Cloned RBD volume "{}" to "{}" on pool "{}".'.format(name, new_name, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to clone volume {} to {}; check node logs for details.'.format(name, new_name)
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 4. Update the volume stats in Zookeeper
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
zkhandler.writedata(zk_conn, {
|
||||||
with lock:
|
'/ceph/volumes/{}/{}/stats'.format(pool, new_name): volstats,
|
||||||
time.sleep(0.5)
|
})
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
return True, 'Renamed RBD volume "{}" to "{}" in pool "{}".'.format(name, new_name, pool)
|
||||||
|
|
||||||
def remove_volume(zk_conn, pool, name):
|
def remove_volume(zk_conn, pool, name):
|
||||||
if not verifyVolume(zk_conn, pool, name):
|
if not verifyVolume(zk_conn, pool, name):
|
||||||
return False, 'ERROR: No volume with name "{}" is present in pool {}.'.format(name, pool)
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name, pool)
|
||||||
|
|
||||||
# Tell the cluster to create a new volume
|
# 1. Remove volume snapshots
|
||||||
remove_volume_string = 'volume_remove {},{}'.format(pool, name)
|
for snapshot in zkhandler.listchildren(zk_conn, '/ceph/snapshots/{}/{}'.format(pool, name)):
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': remove_volume_string})
|
remove_snapshot(zk_conn, pool, name, snapshot)
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-volume_remove':
|
|
||||||
message = 'Removed RBD volume "{}" in pool "{}".'.format(name, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to remove volume; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except Exception as e:
|
|
||||||
message = 'ERROR: Command ignored by node: {}'.format(e)
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 2. Remove the volume
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd rm {}/{}'.format(pool, name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to remove RBD volume "{}" in pool "{}": {}'.format(name, pool, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 3. Delete volume from Zookeeper
|
||||||
|
zkhandler.deletekey(zk_conn, '/ceph/volumes/{}/{}'.format(pool, name))
|
||||||
|
zkhandler.deletekey(zk_conn, '/ceph/snapshots/{}/{}'.format(pool, name))
|
||||||
|
|
||||||
|
return True, 'Removed RBD volume "{}" in pool "{}".'.format(name, pool)
|
||||||
|
|
||||||
|
def map_volume(zk_conn, pool, name):
|
||||||
|
if not verifyVolume(zk_conn, pool, name):
|
||||||
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name, pool)
|
||||||
|
|
||||||
|
# 1. Map the volume onto the local system
|
||||||
|
retcode, stdout, stderr = common.run_os_command('rbd map {}/{}'.format(pool, name))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to map RBD volume "{}" in pool "{}": {}'.format(name, pool, stderr)
|
||||||
|
|
||||||
|
# 2. Calculate the absolute path to the mapped volume
|
||||||
|
mapped_volume = '/dev/rbd/{}/{}'.format(pool, name)
|
||||||
|
|
||||||
|
# 3. Ensure the volume exists
|
||||||
|
if not os.path.exists(mapped_volume):
|
||||||
|
return False, 'ERROR: Mapped volume not found at expected location "{}".'.format(mapped_volume)
|
||||||
|
|
||||||
|
return True, mapped_volume
|
||||||
|
|
||||||
|
def unmap_volume(zk_conn, pool, name):
|
||||||
|
if not verifyVolume(zk_conn, pool, name):
|
||||||
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(name, pool)
|
||||||
|
|
||||||
|
mapped_volume = '/dev/rbd/{}/{}'.format(pool, name)
|
||||||
|
|
||||||
|
# 1. Ensure the volume exists
|
||||||
|
if not os.path.exists(mapped_volume):
|
||||||
|
return False, 'ERROR: Mapped volume not found at expected location "{}".'.format(mapped_volume)
|
||||||
|
|
||||||
|
# 2. Unmap the volume
|
||||||
|
retcode, stdout, stderr = common.run_os_command('rbd unmap {}'.format(mapped_volume))
|
||||||
|
if retcode:
|
||||||
|
return False, 'ERROR: Failed to unmap RBD volume at "{}": {}'.format(mapped_volume, stderr)
|
||||||
|
|
||||||
|
return True, 'Unmapped RBD volume at "{}".'.format(mapped_volume)
|
||||||
|
|
||||||
def get_list_volume(zk_conn, pool, limit, is_fuzzy=True):
|
def get_list_volume(zk_conn, pool, limit, is_fuzzy=True):
|
||||||
volume_list = []
|
volume_list = []
|
||||||
@ -1276,94 +1193,55 @@ def getCephSnapshots(zk_conn, pool, volume):
|
|||||||
return snapshot_list
|
return snapshot_list
|
||||||
|
|
||||||
def add_snapshot(zk_conn, pool, volume, name):
|
def add_snapshot(zk_conn, pool, volume, name):
|
||||||
# Tell the cluster to create a new snapshot
|
if not verifyVolume(zk_conn, pool, volume):
|
||||||
add_snapshot_string = 'snapshot_add {},{},{}'.format(pool, volume, name)
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(volume, pool)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': add_snapshot_string})
|
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-snapshot_add':
|
|
||||||
message = 'Created new RBD snapshot "{}" of volume "{}" on pool "{}".'.format(name, volume, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to create new snapshot; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 1. Create the snapshot
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd snap create {}/{}@{}'.format(pool, volume, name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to create RBD snapshot "{}" of volume "{}" in pool "{}": {}'.format(name, volume, pool, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 2. Add the snapshot to Zookeeper
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/snapshots/{}/{}/{}'.format(pool, volume, name): '',
|
||||||
|
'/ceph/snapshots/{}/{}/{}/stats'.format(pool, volume, name): '{}'
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Created RBD snapshot "{}" of volume "{}" in pool "{}".'.format(name, volume, pool)
|
||||||
|
|
||||||
def rename_snapshot(zk_conn, pool, volume, name, new_name):
|
def rename_snapshot(zk_conn, pool, volume, name, new_name):
|
||||||
# Tell the cluster to rename
|
if not verifyVolume(zk_conn, pool, volume):
|
||||||
rename_snapshot_string = 'snapshot_rename {},{},{}'.format(pool, name, new_name)
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(volume, pool)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': rename_snapshot_string})
|
if not verifySnapshot(zk_conn, pool, volume, name):
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
return False, 'ERROR: No snapshot with name "{}" is present for volume "{}" in pool "{}".'.format(name, volume, pool)
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-snapshot_rename':
|
|
||||||
message = 'Renamed RBD volume snapshot "{}" to "{}" for volume {} on pool "{}".'.format(name, new_name, volume, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to rename volume {} to {}; check node logs for details.'.format(name, new_name)
|
|
||||||
success = False
|
|
||||||
except:
|
|
||||||
message = 'ERROR: Command ignored by node.'
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 1. Rename the snapshot
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
retcode, stdout, stderr = common.run_os_command('rbd snap rename {}/{}@{} {}'.format(pool, volume, name, new_name))
|
||||||
with lock:
|
if retcode:
|
||||||
time.sleep(0.5)
|
return False, 'ERROR: Failed to rename RBD snapshot "{}" to "{}" for volume "{}" in pool "{}": {}'.format(name, new_name, volume, pool, stderr)
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
# 2. Rename the snapshot in ZK
|
||||||
|
zkhandler.renamekey(zk_conn, {
|
||||||
|
'/ceph/snapshots/{}/{}/{}'.format(pool, volume, name): '/ceph/snapshots/{}/{}/{}'.format(pool, volume, new_name)
|
||||||
|
})
|
||||||
|
|
||||||
|
return True, 'Renamed RBD snapshot "{}" to "{}" for volume "{}" in pool "{}".'.format(name, new_name, volume, pool)
|
||||||
|
|
||||||
def remove_snapshot(zk_conn, pool, volume, name):
|
def remove_snapshot(zk_conn, pool, volume, name):
|
||||||
|
if not verifyVolume(zk_conn, pool, volume):
|
||||||
|
return False, 'ERROR: No volume with name "{}" is present in pool "{}".'.format(volume, pool)
|
||||||
if not verifySnapshot(zk_conn, pool, volume, name):
|
if not verifySnapshot(zk_conn, pool, volume, name):
|
||||||
return False, 'ERROR: No snapshot with name "{}" is present of volume {} on pool {}.'.format(name, volume, pool)
|
return False, 'ERROR: No snapshot with name "{}" is present of volume {} in pool {}.'.format(name, volume, pool)
|
||||||
|
|
||||||
# Tell the cluster to create a new snapshot
|
# 1. Remove the snapshot
|
||||||
remove_snapshot_string = 'snapshot_remove {},{},{}'.format(pool, volume, name)
|
retcode, stdout, stderr = common.run_os_command('rbd snap rm {}/{}@{}'.format(pool, volume, name))
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': remove_snapshot_string})
|
if retcode:
|
||||||
# Wait 1/2 second for the cluster to get the message and start working
|
return False, 'Failed to remove RBD snapshot "{}" of volume "{}" in pool "{}": {}'.format(name, volume, pool, stderr)
|
||||||
time.sleep(0.5)
|
|
||||||
# Acquire a read lock, so we get the return exclusively
|
|
||||||
lock = zkhandler.readlock(zk_conn, '/cmd/ceph')
|
|
||||||
with lock:
|
|
||||||
try:
|
|
||||||
result = zkhandler.readdata(zk_conn, '/cmd/ceph').split()[0]
|
|
||||||
if result == 'success-snapshot_remove':
|
|
||||||
message = 'Removed RBD snapshot "{}" of volume "{}" in pool "{}".'.format(name, volume, pool)
|
|
||||||
success = True
|
|
||||||
else:
|
|
||||||
message = 'ERROR: Failed to remove snapshot; check node logs for details.'
|
|
||||||
success = False
|
|
||||||
except Exception as e:
|
|
||||||
message = 'ERROR: Command ignored by node: {}'.format(e)
|
|
||||||
success = False
|
|
||||||
|
|
||||||
# Acquire a write lock to ensure things go smoothly
|
# 2. Delete snapshot from Zookeeper
|
||||||
lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
|
zkhandler.deletekey(zk_conn, '/ceph/snapshots/{}/{}/{}'.format(pool, volume, name))
|
||||||
with lock:
|
|
||||||
time.sleep(0.5)
|
|
||||||
zkhandler.writedata(zk_conn, {'/cmd/ceph': ''})
|
|
||||||
|
|
||||||
return success, message
|
return True, 'Removed RBD snapshot "{}" of volume "{}" in pool "{}".'.format(name, volume, pool)
|
||||||
|
|
||||||
def get_list_snapshot(zk_conn, pool, volume, limit, is_fuzzy=True):
|
def get_list_snapshot(zk_conn, pool, volume, limit, is_fuzzy=True):
|
||||||
snapshot_list = []
|
snapshot_list = []
|
@ -24,13 +24,13 @@ import json
|
|||||||
|
|
||||||
from distutils.util import strtobool
|
from distutils.util import strtobool
|
||||||
|
|
||||||
import client_lib.ansiprint as ansiprint
|
import daemon_lib.ansiprint as ansiprint
|
||||||
import client_lib.zkhandler as zkhandler
|
import daemon_lib.zkhandler as zkhandler
|
||||||
import client_lib.common as common
|
import daemon_lib.common as common
|
||||||
import client_lib.vm as pvc_vm
|
import daemon_lib.vm as pvc_vm
|
||||||
import client_lib.node as pvc_node
|
import daemon_lib.node as pvc_node
|
||||||
import client_lib.network as pvc_network
|
import daemon_lib.network as pvc_network
|
||||||
import client_lib.ceph as pvc_ceph
|
import daemon_lib.ceph as pvc_ceph
|
||||||
|
|
||||||
def set_maintenance(zk_conn, maint_state):
|
def set_maintenance(zk_conn, maint_state):
|
||||||
try:
|
try:
|
||||||
@ -131,7 +131,8 @@ def getClusterInformation(zk_conn):
|
|||||||
node_state_combinations = [
|
node_state_combinations = [
|
||||||
'run,ready', 'run,flush', 'run,flushed', 'run,unflush',
|
'run,ready', 'run,flush', 'run,flushed', 'run,unflush',
|
||||||
'init,ready', 'init,flush', 'init,flushed', 'init,unflush',
|
'init,ready', 'init,flush', 'init,flushed', 'init,unflush',
|
||||||
'stop,ready', 'stop,flush', 'stop,flushed', 'stop,unflush'
|
'stop,ready', 'stop,flush', 'stop,flushed', 'stop,unflush',
|
||||||
|
'dead,ready', 'dead,flush', 'dead,flushed', 'dead,unflush'
|
||||||
]
|
]
|
||||||
vm_state_combinations = [
|
vm_state_combinations = [
|
||||||
'start', 'restart', 'shutdown', 'stop', 'disable', 'fail', 'migrate', 'unmigrate', 'provision'
|
'start', 'restart', 'shutdown', 'stop', 'disable', 'fail', 'migrate', 'unmigrate', 'provision'
|
@ -23,16 +23,46 @@
|
|||||||
import uuid
|
import uuid
|
||||||
import lxml
|
import lxml
|
||||||
import math
|
import math
|
||||||
|
import shlex
|
||||||
|
import subprocess
|
||||||
import kazoo.client
|
import kazoo.client
|
||||||
|
|
||||||
from distutils.util import strtobool
|
from distutils.util import strtobool
|
||||||
|
|
||||||
import client_lib.zkhandler as zkhandler
|
import daemon_lib.zkhandler as zkhandler
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# Supplemental functions
|
# Supplemental functions
|
||||||
###############################################################################
|
###############################################################################
|
||||||
|
|
||||||
|
#
|
||||||
|
# Run a local OS command via shell
|
||||||
|
#
|
||||||
|
def run_os_command(command_string, background=False, environment=None, timeout=None, shell=False):
|
||||||
|
command = shlex.split(command_string)
|
||||||
|
try:
|
||||||
|
command_output = subprocess.run(
|
||||||
|
command,
|
||||||
|
shell=shell,
|
||||||
|
env=environment,
|
||||||
|
timeout=timeout,
|
||||||
|
stdout=subprocess.PIPE,
|
||||||
|
stderr=subprocess.PIPE,
|
||||||
|
)
|
||||||
|
retcode = command_output.returncode
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
retcode = 128
|
||||||
|
|
||||||
|
try:
|
||||||
|
stdout = command_output.stdout.decode('ascii')
|
||||||
|
except:
|
||||||
|
stdout = ''
|
||||||
|
try:
|
||||||
|
stderr = command_output.stderr.decode('ascii')
|
||||||
|
except:
|
||||||
|
stderr = ''
|
||||||
|
return retcode, stdout, stderr
|
||||||
|
|
||||||
#
|
#
|
||||||
# Validate a UUID
|
# Validate a UUID
|
||||||
#
|
#
|
@ -34,9 +34,9 @@ import lxml.objectify
|
|||||||
import configparser
|
import configparser
|
||||||
import kazoo.client
|
import kazoo.client
|
||||||
|
|
||||||
import client_lib.ansiprint as ansiprint
|
import daemon_lib.ansiprint as ansiprint
|
||||||
import client_lib.zkhandler as zkhandler
|
import daemon_lib.zkhandler as zkhandler
|
||||||
import client_lib.common as common
|
import daemon_lib.common as common
|
||||||
|
|
||||||
#
|
#
|
||||||
# Cluster search functions
|
# Cluster search functions
|
@ -34,10 +34,10 @@ import lxml.objectify
|
|||||||
import configparser
|
import configparser
|
||||||
import kazoo.client
|
import kazoo.client
|
||||||
|
|
||||||
import client_lib.ansiprint as ansiprint
|
import daemon_lib.ansiprint as ansiprint
|
||||||
import client_lib.zkhandler as zkhandler
|
import daemon_lib.zkhandler as zkhandler
|
||||||
import client_lib.common as common
|
import daemon_lib.common as common
|
||||||
import client_lib.vm as pvc_vm
|
import daemon_lib.vm as pvc_vm
|
||||||
|
|
||||||
def getNodeInformation(zk_conn, node_name):
|
def getNodeInformation(zk_conn, node_name):
|
||||||
"""
|
"""
|
||||||
@ -143,7 +143,7 @@ def primary_node(zk_conn, node):
|
|||||||
|
|
||||||
return True, retmsg
|
return True, retmsg
|
||||||
|
|
||||||
def flush_node(zk_conn, node, wait):
|
def flush_node(zk_conn, node, wait=False):
|
||||||
# Verify node is valid
|
# Verify node is valid
|
||||||
if not common.verifyNode(zk_conn, node):
|
if not common.verifyNode(zk_conn, node):
|
||||||
return False, 'ERROR: No node named "{}" is present in the cluster.'.format(node)
|
return False, 'ERROR: No node named "{}" is present in the cluster.'.format(node)
|
||||||
@ -155,7 +155,6 @@ def flush_node(zk_conn, node, wait):
|
|||||||
'/nodes/{}/domainstate'.format(node): 'flush'
|
'/nodes/{}/domainstate'.format(node): 'flush'
|
||||||
})
|
})
|
||||||
|
|
||||||
# Wait cannot be triggered from the API
|
|
||||||
if wait:
|
if wait:
|
||||||
while zkhandler.readdata(zk_conn, '/nodes/{}/domainstate'.format(node)) == 'flush':
|
while zkhandler.readdata(zk_conn, '/nodes/{}/domainstate'.format(node)) == 'flush':
|
||||||
time.sleep(1)
|
time.sleep(1)
|
||||||
@ -163,7 +162,7 @@ def flush_node(zk_conn, node, wait):
|
|||||||
|
|
||||||
return True, retmsg
|
return True, retmsg
|
||||||
|
|
||||||
def ready_node(zk_conn, node, wait):
|
def ready_node(zk_conn, node, wait=False):
|
||||||
# Verify node is valid
|
# Verify node is valid
|
||||||
if not common.verifyNode(zk_conn, node):
|
if not common.verifyNode(zk_conn, node):
|
||||||
return False, 'ERROR: No node named "{}" is present in the cluster.'.format(node)
|
return False, 'ERROR: No node named "{}" is present in the cluster.'.format(node)
|
||||||
@ -175,7 +174,6 @@ def ready_node(zk_conn, node, wait):
|
|||||||
'/nodes/{}/domainstate'.format(node): 'unflush'
|
'/nodes/{}/domainstate'.format(node): 'unflush'
|
||||||
})
|
})
|
||||||
|
|
||||||
# Wait cannot be triggered from the API
|
|
||||||
if wait:
|
if wait:
|
||||||
while zkhandler.readdata(zk_conn, '/nodes/{}/domainstate'.format(node)) == 'unflush':
|
while zkhandler.readdata(zk_conn, '/nodes/{}/domainstate'.format(node)) == 'unflush':
|
||||||
time.sleep(1)
|
time.sleep(1)
|
```diff
@@ -35,11 +35,11 @@ import kazoo.client

 from collections import deque

-import client_lib.ansiprint as ansiprint
-import client_lib.zkhandler as zkhandler
-import client_lib.common as common
+import daemon_lib.ansiprint as ansiprint
+import daemon_lib.zkhandler as zkhandler
+import daemon_lib.common as common

-import client_lib.ceph as ceph
+import daemon_lib.ceph as ceph

 #
 # Cluster search functions
@@ -270,13 +270,7 @@ def dump_vm(zk_conn, domain):

     return True, vm_xml

-def purge_vm(zk_conn, domain, is_cli=False):
-    """
-    Helper function for both undefine and remove VM to perform the shutdown, termination,
-    and configuration deletion.
-    """
-
-def undefine_vm(zk_conn, domain, is_cli=False):
+def undefine_vm(zk_conn, domain):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -285,30 +279,22 @@ def undefine_vm(zk_conn, domain, is_cli=False):
     # Shut down the VM
     current_vm_state = zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid))
     if current_vm_state != 'stop':
-        if is_cli:
-            click.echo('Forcibly stopping VM "{}".'.format(domain))
         # Set the domain into stop mode
         zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'stop'})

-        # Wait for 1 second to allow state to flow to all nodes
-        if is_cli:
-            click.echo('Waiting for cluster to update.')
+        # Wait for 2 seconds to allow state to flow to all nodes
         time.sleep(2)

     # Gracefully terminate the class instances
-    if is_cli:
-        click.echo('Deleting VM "{}" from nodes.'.format(domain))
     zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'delete'})
     time.sleep(2)

     # Delete the configurations
-    if is_cli:
-        click.echo('Undefining VM "{}".'.format(domain))
     zkhandler.deletekey(zk_conn, '/domains/{}'.format(dom_uuid))

     return True, 'Undefined VM "{}" from the cluster.'.format(domain)

-def remove_vm(zk_conn, domain, is_cli=False):
+def remove_vm(zk_conn, domain):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -319,25 +305,17 @@ def remove_vm(zk_conn, domain, is_cli=False):
     # Shut down the VM
     current_vm_state = zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid))
     if current_vm_state != 'stop':
-        if is_cli:
-            click.echo('Forcibly stopping VM "{}".'.format(domain))
         # Set the domain into stop mode
         zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'stop'})

-        # Wait for 1 second to allow state to flow to all nodes
-        if is_cli:
-            click.echo('Waiting for cluster to update.')
+        # Wait for 2 seconds to allow state to flow to all nodes
         time.sleep(2)

     # Gracefully terminate the class instances
-    if is_cli:
-        click.echo('Deleting VM "{}" from nodes.'.format(domain))
     zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'delete'})
     time.sleep(2)

     # Delete the configurations
-    if is_cli:
-        click.echo('Undefining VM "{}".'.format(domain))
     zkhandler.deletekey(zk_conn, '/domains/{}'.format(dom_uuid))
     time.sleep(2)

@@ -347,8 +325,6 @@ def remove_vm(zk_conn, domain, is_cli=False):
         try:
             disk_pool, disk_name = disk.split('/')
             retcode, message = ceph.remove_volume(zk_conn, disk_pool, disk_name)
-            if is_cli and message:
-                click.echo('{}'.format(message))
         except ValueError:
             continue

@@ -365,7 +341,7 @@ def start_vm(zk_conn, domain):

     return True, 'Starting VM "{}".'.format(domain)

-def restart_vm(zk_conn, domain):
+def restart_vm(zk_conn, domain, wait=False):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -376,12 +352,21 @@ def restart_vm(zk_conn, domain):
     if current_state != 'start':
         return False, 'ERROR: VM "{}" is not in "start" state!'.format(domain)

-    # Set the VM to start
-    zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'restart'})
-
-    return True, 'Restarting VM "{}".'.format(domain)
-
-def shutdown_vm(zk_conn, domain):
+    retmsg = 'Restarting VM "{}".'.format(domain)
+
+    # Set the VM to restart
+    zkhandler.writedata(zk_conn, {
+        '/domains/{}/state'.format(dom_uuid): 'restart'
+    })
+
+    if wait:
+        while zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid)) == 'restart':
+            time.sleep(1)
+        retmsg = 'Restarted VM "{}"'.format(domain)
+
+    return True, retmsg
+
+def shutdown_vm(zk_conn, domain, wait=False):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -392,10 +377,19 @@ def shutdown_vm(zk_conn, domain):
     if current_state != 'start':
         return False, 'ERROR: VM "{}" is not in "start" state!'.format(domain)

+    retmsg = 'Shutting down VM "{}"'.format(domain)
+
     # Set the VM to shutdown
-    zkhandler.writedata(zk_conn, {'/domains/{}/state'.format(dom_uuid): 'shutdown'})
+    zkhandler.writedata(zk_conn, {
+        '/domains/{}/state'.format(dom_uuid): 'shutdown'
+    })

-    return True, 'Shutting down VM "{}".'.format(domain)
+    if wait:
+        while zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid)) == 'shutdown':
+            time.sleep(1)
+        retmsg = 'Shut down VM "{}"'.format(domain)
+
+    return True, retmsg

 def stop_vm(zk_conn, domain):
     # Validate that VM exists in cluster
@@ -427,12 +421,20 @@ def disable_vm(zk_conn, domain):

     return True, 'Marked VM "{}" as disable.'.format(domain)

-def move_vm(zk_conn, domain, target_node):
+def move_vm(zk_conn, domain, target_node, wait=False):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
         return False, 'ERROR: Could not find VM "{}" in the cluster!'.format(domain)

+    # Get state and verify we're OK to proceed
+    current_state = zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid))
+    if current_state != 'start':
+        # If the current state isn't start, preserve it; we're not doing live migration
+        target_state = current_state
+    else:
+        target_state = 'migrate'
+
     current_node = zkhandler.readdata(zk_conn, '/domains/{}/node'.format(dom_uuid))

     if not target_node:
@@ -455,22 +457,22 @@ def move_vm(zk_conn, domain, target_node):
     if not target_node:
         return False, 'ERROR: Could not find a valid migration target for VM "{}".'.format(domain)

-    current_vm_state = zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid))
-    if current_vm_state == 'start':
-        zkhandler.writedata(zk_conn, {
-            '/domains/{}/state'.format(dom_uuid): 'migrate',
-            '/domains/{}/node'.format(dom_uuid): target_node,
-            '/domains/{}/lastnode'.format(dom_uuid): ''
-        })
-    else:
-        zkhandler.writedata(zk_conn, {
-            '/domains/{}/node'.format(dom_uuid): target_node,
-            '/domains/{}/lastnode'.format(dom_uuid): ''
-        })
-
-    return True, 'Permanently migrating VM "{}" to node "{}".'.format(domain, target_node)
-
-def migrate_vm(zk_conn, domain, target_node, force_migrate, is_cli=False):
+    retmsg = 'Permanently migrating VM "{}" to node "{}".'.format(domain, target_node)
+
+    zkhandler.writedata(zk_conn, {
+        '/domains/{}/state'.format(dom_uuid): target_state,
+        '/domains/{}/node'.format(dom_uuid): target_node,
+        '/domains/{}/lastnode'.format(dom_uuid): ''
+    })
+
+    if wait:
+        while zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid)) == target_state:
+            time.sleep(1)
+        retmsg = 'Permanently migrated VM "{}" to node "{}"'.format(domain, target_node)
+
+    return True, retmsg
+
+def migrate_vm(zk_conn, domain, target_node, force_migrate, wait=False):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -479,7 +481,8 @@ def migrate_vm(zk_conn, domain, target_node, force_migrate, is_cli=False):
     # Get state and verify we're OK to proceed
     current_state = zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid))
     if current_state != 'start':
-        target_state = 'start'
+        # If the current state isn't start, preserve it; we're not doing live migration
+        target_state = current_state
     else:
         target_state = 'migrate'

@@ -487,14 +490,7 @@ def migrate_vm(zk_conn, domain, target_node, force_migrate, is_cli=False):
     last_node = zkhandler.readdata(zk_conn, '/domains/{}/lastnode'.format(dom_uuid))

     if last_node and not force_migrate:
-        if is_cli:
-            click.echo('ERROR: VM "{}" has been previously migrated.'.format(domain))
-            click.echo('> Last node: {}'.format(last_node))
-            click.echo('> Current node: {}'.format(current_node))
-            click.echo('Run `vm unmigrate` to restore the VM to its previous node, or use `--force` to override this check.')
-            return False, ''
-        else:
-            return False, 'ERROR: VM "{}" has been previously migrated.'.format(domain)
+        return False, 'ERROR: VM "{}" has been previously migrated.'.format(domain)

     if not target_node:
         target_node = common.findTargetNode(zk_conn, dom_uuid)
@@ -520,15 +516,22 @@ def migrate_vm(zk_conn, domain, target_node, force_migrate, is_cli=False):
     if last_node and force_migrate:
         current_node = last_node

+    retmsg = 'Migrating VM "{}" to node "{}".'.format(domain, target_node)
+
     zkhandler.writedata(zk_conn, {
-        '/domains/{}/state'.format(dom_uuid): 'migrate',
+        '/domains/{}/state'.format(dom_uuid): target_state,
         '/domains/{}/node'.format(dom_uuid): target_node,
         '/domains/{}/lastnode'.format(dom_uuid): current_node
     })

-    return True, 'Migrating VM "{}" to node "{}".'.format(domain, target_node)
-
-def unmigrate_vm(zk_conn, domain):
+    if wait:
+        while zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid)) == target_state:
+            time.sleep(1)
+        retmsg = 'Migrated VM "{}" to node "{}"'.format(domain, target_node)
+
+    return True, retmsg
+
+def unmigrate_vm(zk_conn, domain, wait=False):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
     if not dom_uuid:
@@ -547,13 +550,20 @@ def unmigrate_vm(zk_conn, domain):
     if target_node == '':
         return False, 'ERROR: VM "{}" has not been previously migrated.'.format(domain)

+    retmsg = 'Unmigrating VM "{}" back to node "{}".'.format(domain, target_node)
+
     zkhandler.writedata(zk_conn, {
         '/domains/{}/state'.format(dom_uuid): target_state,
         '/domains/{}/node'.format(dom_uuid): target_node,
         '/domains/{}/lastnode'.format(dom_uuid): ''
     })

-    return True, 'Unmigrating VM "{}" back to node "{}".'.format(domain, target_node)
+    if wait:
+        while zkhandler.readdata(zk_conn, '/domains/{}/state'.format(dom_uuid)) == target_state:
+            time.sleep(1)
+        retmsg = 'Unmigrated VM "{}" back to node "{}"'.format(domain, target_node)
+
+    return True, retmsg

 def get_console_log(zk_conn, domain, lines=1000):
     # Validate that VM exists in cluster
@@ -570,54 +580,6 @@ def get_console_log(zk_conn, domain, lines=1000):

     return True, loglines

-def follow_console_log(zk_conn, domain, lines=10):
-    # Validate that VM exists in cluster
-    dom_uuid = getDomainUUID(zk_conn, domain)
-    if not dom_uuid:
-        return False, 'ERROR: Could not find VM "{}" in the cluster!'.format(domain)
-
-    # Get the initial data from ZK
-    console_log = zkhandler.readdata(zk_conn, '/domains/{}/consolelog'.format(dom_uuid))
-
-    # Shrink the log buffer to length lines
-    shrunk_log = console_log.split('\n')[-lines:]
-    loglines = '\n'.join(shrunk_log)
-
-    # Print the initial data and begin following
-    print(loglines, end='')
-
-    try:
-        while True:
-            # Grab the next line set
-            new_console_log = zkhandler.readdata(zk_conn, '/domains/{}/consolelog'.format(dom_uuid))
-            # Split the new and old log strings into constitutent lines
-            old_console_loglines = console_log.split('\n')
-            new_console_loglines = new_console_log.split('\n')
-            # Set the console log to the new log value for the next iteration
-            console_log = new_console_log
-            # Remove the lines from the old log until we hit the first line of the new log; this
-            # ensures that the old log is a string that we can remove from the new log entirely
-            for index, line in enumerate(old_console_loglines, start=0):
-                if line == new_console_loglines[0]:
-                    del old_console_loglines[0:index]
-                    break
-            # Rejoin the log lines into strings
-            old_console_log = '\n'.join(old_console_loglines)
-            new_console_log = '\n'.join(new_console_loglines)
-            # Remove the old lines from the new log
-            diff_console_log = new_console_log.replace(old_console_log, "")
-            # If there's a difference, print it out
-            if diff_console_log:
-                print(diff_console_log, end='')
-            # Wait a second
-            time.sleep(1)
-    except kazoo.exceptions.NoNodeError:
-        return False, 'ERROR: VM has gone away.'
-    except:
-        return False, 'ERROR: Lost connection to Zookeeper node.'
-
-    return True, ''
-
 def get_info(zk_conn, domain):
     # Validate that VM exists in cluster
     dom_uuid = getDomainUUID(zk_conn, domain)
```
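All of these VM functions now return a plain `(success, message)` tuple and, when called with `wait=True`, block until the requested state change has actually been processed by the node daemon. A rough illustration of how a caller such as an API route or CLI command might consume them follows; the Zookeeper connection setup shown here is an assumption for illustration, not code from this changeset:

```python
# Illustrative only: consuming the (success, message) tuples returned by the
# functions above, using the new wait flag. The KazooClient setup is an
# assumed stand-in for however the daemon/API actually opens its connection.
import kazoo.client

zk_conn = kazoo.client.KazooClient(hosts='hv1:2181,hv2:2181,hv3:2181')
zk_conn.start()

retflag, retmsg = shutdown_vm(zk_conn, 'test-vm', wait=True)  # blocks until 'shutdown' clears
if retflag:
    print(retmsg)              # e.g. 'Shut down VM "test-vm"'
else:
    print('Error: ' + retmsg)

zk_conn.stop()
```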
```diff
@@ -23,7 +23,7 @@
 import kazoo.client
 import uuid

-import client_lib.ansiprint as ansiprint
+import daemon_lib.ansiprint as ansiprint

 # Exists function
 def exists(zk_conn, key):
```
`debian/changelog` (6)

```diff
@@ -1,3 +1,9 @@
+pvc (0.7-0) unstable; urgency=medium
+
+  * Numerous bugfixes and improvements
+
+ -- Joshua Boniface <joshua@boniface.me>  Sat, 15 Feb 2019 23:24:17 -0500
+
 pvc (0.6-0) unstable; urgency=medium
 
   * Numerous improvements, implementation of provisioner and API client
```
`debian/control` (30)

```diff
@@ -6,34 +6,34 @@ Standards-Version: 3.9.8
 Homepage: https://www.boniface.me
 X-Python3-Version: >= 3.2
 
-Package: pvc-daemon
+Package: pvc-daemon-node
 Architecture: all
-Depends: systemd, pvc-client-common, python3-kazoo, python3-psutil, python3-apscheduler, python3-libvirt, python3-psycopg2, python3-dnspython, python3-yaml, python3-distutils, ipmitool, libvirt-daemon-system, arping, vlan, bridge-utils, dnsmasq, nftables, pdns-server, pdns-backend-pgsql
+Depends: systemd, pvc-daemon-common, python3-kazoo, python3-psutil, python3-apscheduler, python3-libvirt, python3-psycopg2, python3-dnspython, python3-yaml, python3-distutils, ipmitool, libvirt-daemon-system, arping, vlan, bridge-utils, dnsmasq, nftables, pdns-server, pdns-backend-pgsql
 Suggests: pvc-client-api, pvc-client-cli
-Description: Parallel Virtual Cluster virtualization daemon (Python 3)
+Description: Parallel Virtual Cluster node daemon (Python 3)
  A KVM/Zookeeper/Ceph-based VM and private cloud manager
  .
  This package installs the PVC node daemon
 
-Package: pvc-client-common
+Package: pvc-daemon-api
+Architecture: all
+Depends: systemd, pvc-daemon-common, python3-yaml, python3-flask, python3-flask-restful, python3-gevent, python3-celery, python-celery-common, python3-distutils, redis, python3-redis, python3-lxml, python3-flask-migrate, python3-flask-script
+Description: Parallel Virtual Cluster API daemon (Python 3)
+ A KVM/Zookeeper/Ceph-based VM and private cloud manager
+ .
+ This package installs the PVC API daemon
+
+Package: pvc-daemon-common
 Architecture: all
 Depends: python3-kazoo, python3-psutil, python3-click, python3-lxml
-Description: Parallel Virtual Cluster common client libraries (Python 3)
+Description: Parallel Virtual Cluster common libraries (Python 3)
  A KVM/Zookeeper/Ceph-based VM and private cloud manager
  .
-This package installs the common client libraries
+This package installs the common libraries for the daemon and API
 
-Package: pvc-client-api
-Architecture: all
-Depends: systemd, pvc-client-common, python3-yaml, python3-flask, python3-flask-restful, python3-gevent, python3-celery, python-celery-common, python3-distutils, redis, python3-redis
-Description: Parallel Virtual Cluster API client (Python 3)
- A KVM/Zookeeper/Ceph-based VM and private cloud manager
- .
- This package installs the PVC API client daemon
-
 Package: pvc-client-cli
 Architecture: all
-Depends: python3-requests, python3-yaml, python3-lxml
+Depends: python3-requests, python3-requests-toolbelt, python3-yaml, python3-lxml
 Description: Parallel Virtual Cluster CLI client (Python 3)
  A KVM/Zookeeper/Ceph-based VM and private cloud manager
  .
```
`debian/pvc-client-api.install` (6)

```diff
@@ -1,6 +0,0 @@
-client-api/pvc-api.py usr/share/pvc
-client-api/pvc-api.sample.yaml etc/pvc
-client-api/api_lib usr/share/pvc
-client-api/pvc-api.service lib/systemd/system
-client-api/pvc-provisioner-worker.service lib/systemd/system
-client-api/provisioner usr/share/pvc
```
`debian/pvc-client-api.postinst` (20)

```diff
@@ -1,20 +0,0 @@
-#!/bin/sh
-
-# Install client binary to /usr/bin via symlink
-ln -s /usr/share/pvc/api.py /usr/bin/pvc-api
-
-# Reload systemd's view of the units
-systemctl daemon-reload
-
-# Restart the main daemon (or warn on first install)
-if systemctl is-active --quiet pvc-api.service; then
-    systemctl restart pvc-api.service
-else
-    echo "NOTE: The PVC client API daemon (pvc-api.service) has not been started; create a config file at /etc/pvc/pvc-api.yaml then start it."
-fi
-# Restart the worker daemon (or warn on first install)
-if systemctl is-active --quiet pvc-provisioner-worker.service; then
-    systemctl restart pvc-provisioner-worker.service
-else
-    echo "NOTE: The PVC provisioner worker daemon (pvc-provisioner-worker.service) has not been started; create a config file at /etc/pvc/pvc-api.yaml then start it."
-fi
```
`debian/pvc-client-cli.install` (1)

```diff
@@ -1,2 +1,3 @@
 client-cli/pvc.py usr/share/pvc
 client-cli/cli_lib usr/share/pvc
+client-cli/scripts usr/share/pvc
```
`debian/pvc-client-common.install` (1)

```diff
@@ -1 +0,0 @@
-client-common/* usr/share/pvc/client_lib
```
`debian/pvc-daemon-api.install` (9)

```diff
@@ -0,0 +1,9 @@
+api-daemon/pvcapid.py usr/share/pvc
+api-daemon/pvcapid-manage.py usr/share/pvc
+api-daemon/pvc-api-db-upgrade usr/share/pvc
+api-daemon/pvcapid.sample.yaml etc/pvc
+api-daemon/pvcapid usr/share/pvc
+api-daemon/pvcapid.service lib/systemd/system
+api-daemon/pvcapid-worker.service lib/systemd/system
+api-daemon/provisioner usr/share/pvc
+api-daemon/migrations usr/share/pvc
```
`debian/pvc-daemon-api.postinst` (15)

```diff
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+# Reload systemd's view of the units
+systemctl daemon-reload
+
+# Restart the main daemon and apply database migrations (or warn on first install)
+if systemctl is-active --quiet pvcapid.service; then
+    systemctl stop pvcapid-worker.service
+    systemctl stop pvcapid.service
+    /usr/share/pvc/pvc-api-db-upgrade
+    systemctl start pvcapid.service
+    systemctl start pvcapid-worker.service
+else
+    echo "NOTE: The PVC client API daemon (pvcapid.service) and the PVC provisioner worker daemon (pvcapid-worker.service) have not been started; create a config file at /etc/pvc/pvcapid.yaml, then run the database configuration (/usr/share/pvc/pvc-api-db-upgrade) and start them manually."
+fi
```
```diff
@@ -1,4 +1,4 @@
 #!/bin/sh
 
 # Remove client binary symlink
-rm -f /usr/bin/pvc-api
+rm -f /usr/bin/pvcapid
```
`debian/pvc-daemon-common.install` (1)

```diff
@@ -0,0 +1 @@
+daemon-common/* usr/share/pvc/daemon_lib
```
`debian/pvc-daemon-node.install` (6)

```diff
@@ -0,0 +1,6 @@
+node-daemon/pvcnoded.py usr/share/pvc
+node-daemon/pvcnoded.sample.yaml etc/pvc
+node-daemon/pvcnoded usr/share/pvc
+node-daemon/pvcnoded.service lib/systemd/system
+node-daemon/pvc.target lib/systemd/system
+node-daemon/pvc-flush.service lib/systemd/system
```
```diff
@@ -4,8 +4,8 @@
 systemctl daemon-reload
 
 # Enable the service and target
-systemctl enable /lib/systemd/system/pvcd.service
-systemctl enable /lib/systemd/system/pvcd.target
+systemctl enable /lib/systemd/system/pvcnoded.service
+systemctl enable /lib/systemd/system/pvc.target
 
 # Inform administrator of the autoflush daemon if it is not enabled
 if ! systemctl is-active --quiet pvc-flush.service; then
@@ -13,8 +13,8 @@ if ! systemctl is-active --quiet pvc-flush.service; then
 fi
 
 # Inform administrator of the service restart/startup not occurring automatically
-if systemctl is-active --quiet pvcd.service; then
-    echo "NOTE: The PVC node daemon (pvcd.service) has not been restarted; this is up to the administrator."
+if systemctl is-active --quiet pvcnoded.service; then
+    echo "NOTE: The PVC node daemon (pvcnoded.service) has not been restarted; this is up to the administrator."
 else
-    echo "NOTE: The PVC node daemon (pvcd.service) has not been started; create a config file at /etc/pvc/pvcd.yaml then start it."
+    echo "NOTE: The PVC node daemon (pvcnoded.service) has not been started; create a config file at /etc/pvc/pvcnoded.yaml then start it."
 fi
```
`debian/pvc-daemon-node.prerm` (5)

```diff
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# Disable the services
+systemctl disable pvcnoded.service
+systemctl disable pvc.target
```
`debian/pvc-daemon.install` (6)

```diff
@@ -1,6 +0,0 @@
-node-daemon/pvcd.py usr/share/pvc
-node-daemon/pvcd.sample.yaml etc/pvc
-node-daemon/pvcd usr/share/pvc
-node-daemon/pvcd.target lib/systemd/system
-node-daemon/pvcd.service lib/systemd/system
-node-daemon/pvc-flush.service lib/systemd/system
```
`debian/pvc-daemon.prerm` (5)

```diff
@@ -1,5 +0,0 @@
-#!/bin/sh
-
-# Disable the services
-systemctl disable pvcd.service
-systemctl disable pvcd.target
```
```diff
@@ -32,7 +32,7 @@ Within each node, the PVC daemon is a single Python 3 program which handles all
 
 The daemon uses an object-oriented approach, with most cluster objects being represented by class objects of a specific type. Each node has a full view of all cluster objects and can interact with them based on events from the cluster as needed.
 
-Further information about the node daemon architecture can be found at the [daemon architecture page](/architecture/daemon).
+Further information about the node daemon manual can be found at the [daemon manual page](/manuals/daemon).
 
 ## Client Architecture
 
@@ -50,7 +50,7 @@ The API client uses a dedicated, independent set of functions to perform the act
 
 ### CLI client
 
-The CLI client interface is a Click application, which provides a convenient CLI interface to the API client. It supports connecting to multiple clusters, over both HTTP and HTTPS and with authentication, including a special "local" cluster if the client determines that an `/etc/pvc/pvc-api.yaml` configuration exists on the host.
+The CLI client interface is a Click application, which provides a convenient CLI interface to the API client. It supports connecting to multiple clusters, over both HTTP and HTTPS and with authentication, including a special "local" cluster if the client determines that an `/etc/pvc/pvcapid.yaml` configuration exists on the host.
 
 The CLI client is self-documenting using the `-h`/`--help` arguments, though a short manual can be found at the [CLI manual page](/manuals/cli).
 
@@ -58,9 +58,7 @@ The CLI client is self-documenting using the `-h`/`--help` arguments, though a s
 
 The overall management, deployment, bootstrapping, and configuring of nodes is accomplished via a set of Ansible roles, found in the [`pvc-ansible` repository](https://github.com/parallelvirtualcluster/pvc-ansible), and nodes are installed via a custom installer ISO generated by the [`pvc-installer` repository](https://github.com/parallelvirtualcluster/pvc-installer). Once the cluster is set up, nodes can be added, replaced, or updated using this Ansible framework.
 
-Further information about the Ansible deployment architecture can be found at the [Ansible architecture page](/architecture/ansible).
-
-The Ansible configuration manual can be found at the [Ansible manual page](/manuals/ansible).
+The Ansible configuration and architecture manual can be found at the [Ansible manual page](/manuals/ansible).
 
 ## About the author
 
```
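Since the CLI is described above as a thin client of the HTTP API (note the new `python3-requests` dependency in `debian/control` earlier in this changeset), a request wrapper in such a client could look roughly like the sketch below. The port, endpoint path, and header name are purely illustrative assumptions, not the actual PVC API surface:

```python
# Purely illustrative sketch of a "CLI as thin API client" request helper.
# Host, port, path, and X-Api-Key header are assumptions for illustration only.
import requests

cluster = {'scheme': 'http', 'host': 'pvc.local', 'port': 7370, 'api_key': None}

def api_get(path):
    url = '{}://{}:{}{}'.format(cluster['scheme'], cluster['host'], cluster['port'], path)
    headers = {'X-Api-Key': cluster['api_key']} if cluster['api_key'] else {}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.json()

# e.g. a hypothetical "list nodes" call:
# print(api_get('/api/v1/node'))
```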
```diff
@@ -1,43 +0,0 @@
-# PVC Ansible architecture
-
-The PVC Ansible setup and management framework is written in Ansible. It consists of two roles: `base` and `pvc`.
-
-## Base role
-
-The Base role configures a node to a specific, standard base Debian system, with a number of PVC-specific tweaks. Some examples include:
-
-* Installing the custom PVC repository at Boniface Labs.
-
-* Removing several unnecessary packages and installing numerous additional packages.
-
-* Automatically configuring network interfaces based on the `group_vars` configuration.
-
-* Configuring several general `sysctl` settings for optimal performance.
-
-* Installing and configuring rsyslog, postfix, ntpd, ssh, and fail2ban.
-
-* Creating the users specified in the `group_vars` configuration.
-
-* Installing custom MOTDs, bashrc files, vimrc files, and other useful configurations for each user.
-
-The end result is a standardized "PVC node" system ready to have the daemons installed by the PVC role.
-
-## PVC role
-
-The PVC role configures all the dependencies of PVC, including storage, networking, and databases, then installs the PVC daemon itself. Specifically, it will, in order:
-
-* Install Ceph, configure and bootstrap a new cluster if `bootstrap=yes` is set, configure the monitor and manager daemons, and start up the cluster ready for the addition of OSDs via the client interface (coordinators only).
-
-* Install, configure, and if `bootstrap=yes` is set, bootstrap a Zookeeper cluster (coordinators only).
-
-* Install, configure, and if `bootstrap=yes` is set, bootstrap a Patroni PostgreSQL cluster for the PowerDNS aggregator (coordinators only).
-
-* Install and configure Libvirt.
-
-* Install and configure FRRouting.
-
-* Install and configure the main PVC daemon and API client, including initializing the PVC cluster (`pvc init`).
-
-## Completion
-
-Once the entire playbook has run for the first time against a given host, the host will be rebooted to apply all the configured services. On startup, the system should immediately launch the PVC daemon, check in to the Zookeeper cluster, and become ready. The node will be in `flushed` state on its first boot; the administrator will need to run `pvc node unflush <node>` to set the node into active state ready to handle virtual machines.
```
```diff
@@ -1,7 +0,0 @@
-# PVC API architecture
-
-The PVC API is a standalone client application for PVC. It interfaces directly with the Zookeeper database to manage state.
-
-The API is built using Flask and is packaged in the Debian package `pvc-client-api`. The API depends on the common client functions of the `pvc-client-common` package as does the CLI client.
-
-Details of the API interface can be found in [the manual](/manuals/api).
```
```diff
@@ -1,7 +0,0 @@
-# PVC CLI architecture
-
-The PVC CLI is a standalone client application for PVC. It interfaces with the PVC API, via a configurable list of clusters with customizable hosts, ports, addresses, and authentication.
-
-The CLI is built using Click and is packaged in the Debian package `pvc-client-cli`. The CLI does not depend on any other PVC components and can be used independently on arbitrary systems.
-
-The CLI is self-documenting, however [the manual](/manuals/cli) details the required configuration.
```
```diff
@@ -50,7 +50,7 @@ More advanced physical network layouts are also possible. For instance, one coul
 
 The upstream network functions as the main upstream for the cluster nodes, providing Internet access and a way to route managed client network traffic out of the cluster. In most deployments, this should be an RFC1918 private subnet with an upstream router which can perform NAT translation and firewalling as required, both for the cluster nodes themselves, but also for the RFC1918 managed client networks.
 
-The floating IP address in the upstream network can be used as a single point of communication with the PVC cluster from other upstream sources, for instance to access the DNS aggregator instance or the API if configured. For this reason the network should generally be protected from unauthorized access via a firewall.
+The floating IP address in the cluster network can be used as a single point of communication with the active primary node, for instance to access the DNS aggregator instance or the API if configured. For this reason the network should generally be protected from unauthorized access via a firewall.
 
 Nodes in this network are generally assigned static IP addresses which are configured at node install time and in the [Ansible deployment configuration](/manuals/ansible).
 
```
```diff
@@ -82,33 +82,37 @@ For even larger clusters, a `/23` or even larger network may be used.
 
 ### Cluster: Connecting the nodes with each other
 
-The cluster network is an unrouted private network used by the PVC nodes to communicate with each other for database access, Libvirt migration, and storage client traffic. It is also used as the underlying interface for the BGP EVPN VXLAN interfaces used by managed client networks.
+The cluster network is an unrouted private network used by the PVC nodes to communicate with each other for database access and Libvirt migrations. It is also used as the underlying interface for the BGP EVPN VXLAN interfaces used by managed client networks.
 
-The floating IP address in the cluster network can be used as a single point of communication with the primary node.
+The floating IP address in the cluster network can be used as a single point of communication with the active primary node.
 
 Nodes in this network are generally assigned IPs automatically based on their node number (e.g. node1 at `.1`, node2 at `.2`, etc.). The network should be large enough to include all nodes sequentially.
 
-Generally the cluster network should be completely separate from the upstream network, either a separate physical interface (or set of bonded interfaces) or a dedicated vLAN on an underlying physical device.
+Generally the cluster network should be completely separate from the upstream network, either a separate physical interface (or set of bonded interfaces) or a dedicated vLAN on an underlying physical device, but they can be colocated if required.
 
 ### Storage: Connecting Ceph OSD with each other
 
 The storage network is an unrouted private network used by the PVC node storage OSDs to communicated with each other, without using the main cluster network and introducing potentially large amounts of traffic there.
 
-Nodes in this network are generally assigned IPs automatically based on their node number. The network should be large enough to include all nodes sequentially.
+The floating IP address in the storage network can be used as a single point of communication with the active primary node.
+
+Nodes in this network are generally assigned IPs automatically based on their node number (e.g. node1 at `.1`, node2 at `.2`, etc.). The network should be large enough to include all nodes sequentially.
 
 The administrator may choose to collocate the storage network on the same physical interface as the cluster network, or on a separate physical interface. This should be decided based on the size of the cluster and the perceived ratios of client network versus storage traffic. In large (>3 node) or storage-intensive clusters, this network should generally be a separate set of fast physical interfaces, separate from both the upstream and cluster networks, in order to maximize and isolate the storage bandwidth.
 
 ### Bridged (unmanaged) Client Networks
 
-The first type of client network is the unmanaged bridged network. These networks have a separate vLAN on the device underlying the cluster network, which is created when the network is configured. VMs are then bridged into this vLAN.
+The first type of client network is the unmanaged bridged network. These networks have a separate vLAN on the device underlying the other networks, which is created when the network is configured. VMs are then bridged into this vLAN.
 
-With this client network type, PVC does no management of the network. This is left entirely to the administrator. It requires switch support and the configuration of the vLANs on the switchports of each node's cluster network before enabling the network.
+With this client network type, PVC does no management of the network. This is left entirely to the administrator. It requires switch support and the configuration of the vLANs on the switchports of each node's physical interfaces before enabling the network.
 
 ### VXLAN (managed) Client Networks
 
-The second type of client network is the managed VXLAN network. These networks make use of BGP EVPN, managed by route reflection on the coordinators, to create virtual layer 2 Ethernet tunnels between all nodes in the cluster. VXLANs are then run on top of these virtual layer 2 tunnels, with the primary PVC node providing routing, DHCP, and DNS functionality to the network via a single IP address.
+The second type of client network is the managed VXLAN network. These networks make use of BGP EVPN, managed by route reflection on the coordinators, to create virtual layer 2 Ethernet tunnels between all nodes in the cluster. VXLANs are then run on top of these virtual layer 2 tunnels, with the active primary PVC node providing routing, DHCP, and DNS functionality to the network via a single IP address.
 
-With this client network type, PVC is in full control of the network. No vLAN configuration is required on the switchports of each node's cluster network as the virtual layer 2 tunnel travels over the cluster layer 3 network. All client network traffic destined for outside the network will exit via the upstream network of the primary coordinator node; note that this may introduce a bottleneck and tromboning if there is a large amount of external and/or inter-network traffic on the cluster. The administrator should consider this carefully when sizing the cluster network.
+With this client network type, PVC is in full control of the network. No vLAN configuration is required on the switchports of each node's physical interfaces, as the virtual layer 2 tunnel travels over the cluster layer 3 network. All client network traffic destined for outside the network will exit via the upstream network interface of the active primary coordinator node. NOTE: This may introduce a bottleneck and tromboning if there is a large amount of external and/or inter-network traffic on the cluster. The administrator should consider this carefully when sizing the cluster network.
+
+### Other Client Networks
 
 Future PVC versions may support other client network types, such as direct-routing between VMs.
 
```
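The node-number-based addressing convention described in these sections (node1 at `.1`, node2 at `.2`, and so on within each cluster/storage subnet) can be illustrated with a small sketch; the subnet and hostname pattern used here are assumptions for illustration only, not taken from the PVC code:

```python
# Illustrative sketch (not PVC code) of deriving a node's cluster/storage IP
# from the trailing number in its hostname, per the convention described above.
import ipaddress
import re

def node_ip(hostname, subnet):
    """Return the host address in `subnet` matching the node's trailing number."""
    node_number = int(re.search(r'(\d+)$', hostname).group(1))
    return ipaddress.ip_network(subnet)[node_number]

print(node_ip('pvchv2', '10.0.1.0/24'))  # 10.0.1.2
```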
```diff
@@ -134,13 +138,15 @@ The set of coordinator nodes is generally configured at cluster bootstrap, initi
 
 ##### The Primary Coordinator
 
-Within the set of coordinators, a single primary coordinator is elected and shuffles around the cluster as nodes start and stop. Which coordinator is primary can be selected by the administrator manually, or via a simple election process within the cluster. Once a node becomes primary, it will remain so until told not to be. This coordinator is responsible for some additional functionality in addition to the other coordinators. These additional functions are:
+Within the set of coordinators, a single primary coordinator is elected at cluster startup and as nodes start and stop, or in response to administrative commands. Once a node becomes primary, it will remain so until it stops or is told not to be. This coordinator is responsible for some additional functionality in addition to the other coordinators. These additional functions are:
 
 0. The floating IPs in the main networks
 0. The default gateway IP for each managed client network
 0. The DNSMasq instance handling DHCP and DNS for each managed client network
 0. The API and provisioner clients and workers
 
+PVC gracefully handles transitioning primary coordinator state, to minimize downtime. Workers will continue to operate on the old coordinator if available after a switchover and the administrator should be aware of any active tasks before switching the active primary coordinator.
+
 #### Hypervisors
 
 Hypervisors consist of all other PVC nodes in the cluster. For small clusters (3 nodes), there will generally not be any non-coordinator nodes, though adding a 4th would require it to be a hypervisor to preserve quorum between the coordinators. Larger clusters should generally add new nodes as Hypervisors rather than coordinators to preserve the small set of coordinator nodes previously mentioned.
```
```diff
@@ -1,53 +0,0 @@
-# PVC Node Daemon architecture
-
-The PVC Node Daemon is the heart of the PVC system and runs on each node to manage the state of the node and its configured resources. The daemon connects directly to the Zookeeper cluster for coordination and state.
-
-The node daemon is built using Python 3.X and is packaged in the Debian package `pvc-daemon`.
-
-Configuration of the daemon is documented in [the manual](/manuals/daemon), however it is recommended to use the [Ansible configuration interface](/manuals/ansible) to configure the PVC system for you from scratch.
-
-## Overall architecture
-
-The PVC daemon is object-oriented - each cluster resource is represented by an Object, which is then present on each node in the cluster. This allows state changes to be reflected across the entire cluster should their data change.
-
-During startup, the system scans the Zookeeper database and sets up the required objects. The database is then watched in real-time for additional changes to the database information.
-
-## Startup sequence
-
-The daemon startup sequence is documented below. The main daemon entry-point is `Daemon.py` inside the `pvcd` folder, which is called from the `pvcd.py` stub file.
-
-0. The configuration is read from `/etc/pvc/pvcd.yaml` and the configuration object set up.
-
-0. Any required filesystem directories, mostly dynamic directories, are created.
-
-0. The logger is set up. If file logging is enabled, this is the state when the first log messages are written.
-
-0. Host networking is configured based on the `pvcd.yaml` configuration file. In a normal cluster, this is the point where the node will become reachable on the network as all networking is handled by the PVC node daemon.
-
-0. Sysctl tweaks are applied to the host system, to enable routing/forwarding between nodes via the host.
-
-0. The node determines its coordinator state and starts the required daemons if applicable. In a normal cluster, this is the point where the dependent services such as Zookeeper, FRR, and Ceph become available. After this step, the daemon waits 5 seconds before proceeding to give these daemons a chance to start up.
-
-0. The daemon connects to the Zookeeper cluster and starts its listener. If the Zookeeper cluster is unavailable, it will wait some time before abandoning the attempt and starting again from step 1.
-
-0. Termination handling/cleanup is configured.
-
-0. The node checks if it is already present in the Zookeeper cluster; if not, it will add itself to the database. Initial static options are also updated in the database here. The daemon state transitions from `stop` to `init`.
-
-0. The node checks if Libvirt is accessible.
-
-0. The node starts up the NFT firewall if applicable and configures the base rule-set.
-
-0. The node ensures that `dnsmasq` is stopped (legacy check, might be safe to remove eventually).
-
-0. The node begins setting up the object representations of resources, in order:
-
-    a. Node entries
-
-    b. Network entries, creating client networks and starting them as required.
-
-    c. Domain (VM) entries, starting up the VMs as required.
-
-    d. Ceph storage entries (OSDs, Pools, Volumes, Snapshots).
-
-0. The node activates its keepalived timer and begins sending keepalive updates to the cluster. The daemon state transitions from `init` to `run` and the system has started fully.
```
```diff
@@ -1,4 +1,4 @@
-# PVC - The Parallel Virtual Cluster suite
+# PVC - The Parallel Virtual Cluster system
 
 <p align="center">
 <img alt="Logo banner" src="https://git.bonifacelabs.ca/uploads/-/system/project/avatar/135/pvc_logo.png"/>
@@ -9,21 +9,20 @@
 <a href="https://parallelvirtualcluster.readthedocs.io/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
 </p>
 
-PVC is a suite of Python 3 tools to manage virtualized clusters. It provides a fully-functional private cloud based on four key principles:
-
-1. Be Free Software Forever (or Bust)
-2. Be Opinionated and Efficient and Pick The Best Software
-3. Be Scalable and Redundant but Not Hyperscale
-4. Be Simple To Use, Configure, and Maintain
-
-It is designed to be an administrator-friendly but extremely powerful and rich modern private cloud system, but without the feature bloat and complexity of tools like OpenStack. With PVC, an administrator can provision, manage, and update a cluster of dozens or more hypervisors running thousands of VMs using a simple CLI tool, HTTP API, or web interface. PVC is based entirely on Debian GNU/Linux and Free-and-Open-Source tools, providing the glue to bootstrap, provision and manage the cluster, then getting out of the administrators' way.
-
-Your cloud, the best way; just add physical servers.
-
-To get started with PVC, read the [Cluster Architecture document](/architecture/cluster), then see [Installing](/installing) for details on setting up a set of PVC nodes, using [`pvc-ansible`](/manuals/ansible) to configure and bootstrap a cluster, and managing it with the [`pvc` cli](/manuals/cli) or [HTTP API](/manuals/api). For details on the project, its motivation, and architectural details, see [the About page](/about).
+PVC is a KVM+Ceph+Zookeeper-based, Free Software, scalable, redundant, self-healing, and self-managing private cloud solution designed with administrator simplicity in mind. It is built from the ground-up to be redundant at the host layer, allowing the cluster to gracefully handle the loss of nodes or their components, both due to hardware failure or due to maintenance. It is able to scale from a minimum of 3 nodes up to 12 or more nodes, while retaining performance and flexibility, allowing the administrator to build a small cluster today and grow it as needed.
+
+The major goal of PVC is to be administrator friendly, providing the power of Enterprise-grade private clouds like OpenStack, Nutanix, and VMWare to homelabbers, SMBs, and small ISPs, without the cost or complexity. It believes in picking the best tool for a job and abstracting it behind the cluster as a whole, freeing the administrator from the boring and time-consuming task of selecting the best component, and letting them get on with the things that really matter. Administration can be done from a simple CLI or via a RESTful API capable of building full-featured web frontends or additional applications, taking a self-documenting approach to keep the administrator learning curve as low as possible. Setup is easy and straightforward with an [ISO-based node installer](https://github.com/parallelvirtualcluster/pvc-installer) and [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) designed to get a cluster up and running as quickly as possible. Build your cloud in an hour, grow it as you need, and never worry about it: just add physical servers.
+
+## Getting Started
+
+To get started with PVC, read the [Cluster Architecture document](https://parallelvirtualcluster.readthedocs.io/en/latest/architecture/cluster/), then see [Installing](https://parallelvirtualcluster.readthedocs.io/en/latest/installing) for details on setting up a set of PVC nodes, using the [PVC Ansible](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/ansible) framework to configure and bootstrap a cluster, and managing it with the [`pvc` CLI tool](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/cli) or [RESTful HTTP API](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/api). For details on the project, its motivation, and architectural details, see [the About page](https://parallelvirtualcluster.readthedocs.io/en/latest/about).
 
 ## Changelog
 
+#### v0.7
+
+Numerous improvements and bugfixes, revamped documentation. This release is suitable for general use and is beta-quality software.
+
 #### v0.6
 
 Numerous improvements and bugfixes, full implementation of the provisioner, full implementation of the API CLI client (versus direct CLI client). This release is suitable for general use and is beta-quality software.
```
@ -6,6 +6,8 @@ This guide will walk you through setting up a simple 3-node PVC cluster from scr
|
|||||||
|
|
||||||
### Part One - Preparing for bootstrap
|
### Part One - Preparing for bootstrap
|
||||||
|
|
||||||
|
0. Read through the [Cluster Architecture documentation](/architecture/cluster). This documentation details the requirements and conventions of a PVC cluster, and is important to understand before proceeding.
|
||||||
|
|
||||||
0. Download the latest copy of the [`pvc-installer`](https://github.com/parallelvirtualcluster/pvc-installer) and [`pvc-ansible`](https://github.com/parallelvirtualcluster/pvc-ansible) repositories to your local machine.
|
0. Download the latest copy of the [`pvc-installer`](https://github.com/parallelvirtualcluster/pvc-installer) and [`pvc-ansible`](https://github.com/parallelvirtualcluster/pvc-ansible) repositories to your local machine.
|
||||||
|
|
||||||
0. In `pvc-ansible`, create an initial `hosts` inventory, using `hosts.default` as a template. You can manage multiple PVC clusters ("sites") from a single Ansible repository; for simplicity, use the name `cluster` for your initial site. Define the 3 hostnames you will use under the site group; usually the provided names of `pvchv1`, `pvchv2`, and `pvchv3` are sufficient, though you may use any hostname pattern you wish. It is *very important* that the names all contain a sequential number, however, as this is used by various components.
|
0. In `pvc-ansible`, create an initial `hosts` inventory, using `hosts.default` as a template. You can manage multiple PVC clusters ("sites") from a single Ansible repository; for simplicity, use the name `cluster` for your initial site. Define the 3 hostnames you will use under the site group; usually the provided names of `pvchv1`, `pvchv2`, and `pvchv3` are sufficient, though you may use any hostname pattern you wish. It is *very important* that the names all contain a sequential number, however, as this is used by various components.
|
||||||
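
   For illustration, a minimal inventory might look like the following sketch, which assumes the standard Ansible INI inventory layout, the site group name `cluster`, and the example hostnames above; the exact structure of `hosts.default` in your copy of `pvc-ansible` may differ and should be preferred.

```
$ cat > hosts <<'EOF'
[cluster]
pvchv1
pvchv2
pvchv3
EOF
```
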
@ -124,122 +126,11 @@ All steps in this and following sections can be performed using either the CLI c
|
|||||||
|
|
||||||
0. Verify the client networks are reachable by pinging the managed gateway from outside the cluster.
|
0. Verify the client networks are reachable by pinging the managed gateway from outside the cluster.
|
||||||
|
|
||||||
### Part Six - Setting nodes ready and deploying a VM
|
|
||||||
|
|
||||||
This section walks through deploying a simple Debian VM to the cluster with Debootstrap. Note that as of PVC version `0.5`, this is still a manual process, though automated deployment of VMs based on configuration templates and image snapshots is planned for version `0.6`. This section can be used as a basis for a scripted installer, or a manual process as the administrator sees fit.
|
|
||||||
|
|
||||||
0. Set all 3 nodes to `ready` state, allowing them to run virtual machines. The general command is:
|
0. Set all 3 nodes to `ready` state, allowing them to run virtual machines. The general command is:
|
||||||
`$ pvc node ready <node>`
|
`$ pvc node ready <node>`
|
||||||
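
   For example, using the example node names from earlier in this guide (substitute your own hostnames as needed):

   `$ pvc node ready pvchv1`

   `$ pvc node ready pvchv2`

   `$ pvc node ready pvchv3`
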
|
|
||||||
0. Create an RBD image for the VM. The general command is:
|
### You're Done!
|
||||||
`$ pvc storage volume add <pool> <name> <size>`
|
|
||||||
|
|
||||||
For example, to create a 20GB disk for a VM called `test1` in the previously-configured pool `vms`, run the command as follows:
|
Congratulations, you now have a basic PVC storage cluster, ready to run your VMs.
|
||||||
`$ pvc storage volume add vms test1_disk0 20G`
|
|
||||||
|
|
||||||
0. Verify the RBD image was created:
|
For next steps, see the [Provisioner manual](/manuals/provisioner) for details on how to use the PVC provisioner to create new Virtual Machines, as well as the [CLI manual](/manuals/cli) and [API manual](/manuals/api) for details on day-to-day usage of PVC.
|
||||||
`$ pvc storage volume list`
|
|
||||||
|
|
||||||
0. On one of the PVC nodes, for example `pvchv1`, map the RBD volume to the local system:
|
|
||||||
`$ rbd map vms/test1_disk0`
|
|
||||||
|
|
||||||
The resulting disk device will be available at `/dev/rbd/vms/test1_disk0` or `/dev/rbd0`.
|
|
||||||
|
|
||||||
0. Create a filesystem on the block device, for example `ext4`:
|
|
||||||
`$ mkfs -t ext4 /dev/rbd/vms/test1_disk0`
|
|
||||||
|
|
||||||
0. Create a temporary directory and mount the block device to it, using `mount` to find the directory:
|
|
||||||
`$ mount /dev/rbd/vms/test1_disk0 $( mktemp -d )`
|
|
||||||
`$ mount | grep rbd`
|
|
||||||
|
|
||||||
0. Run a `debootstrap` installation to the volume:
|
|
||||||
`$ debootstrap buster <temporary_mountpoint> http://ftp.mirror.debian.org/debian`
|
|
||||||
|
|
||||||
0. Bind mount the various required directories to the new system:
|
|
||||||
`$ mount --bind /dev <temporary_mountpoint>/dev`
|
|
||||||
`$ mount --bind /dev/pts <temporary_mountpoint>/dev/pts`
|
|
||||||
`$ mount --bind /proc <temporary_mountpoint>/proc`
|
|
||||||
`$ mount --bind /sys <temporary_mountpoint>/sys`
|
|
||||||
`$ mount --bind /run <temporary_mountpoint>/run`
|
|
||||||
|
|
||||||
0. Using `chroot`, configure the VM system as required, for instance installing packages or adding users:
|
|
||||||
`$ chroot <temporary_mountpoint>`
|
|
||||||
`[chroot]$ ...`
|
|
||||||
|
|
||||||
0. Install the GRUB bootloader package in the VM system, then install GRUB to the RBD device:
|
|
||||||
`[chroot]$ apt install grub-pc`
|
|
||||||
`[chroot]$ grub-install /dev/rbd/vms/test1_disk0`
|
|
||||||
|
|
||||||
0. Exit the `chroot` environment, unmount the temporary mountpoint, and unmap the RBD device:
|
|
||||||
`[chroot]$ exit`
|
|
||||||
`$ umount <temporary_mountpoint>`
|
|
||||||
`$ rbd unmap /dev/rbd0`
|
|
||||||
|
|
||||||
0. Prepare a Libvirt XML configuration, obtaining the required Ceph storage secret and a new random VM UUID first. This example provides a very simple VM with 1 vCPU, 1GB RAM, the previously-configured network `100`, and the previously-configured disk `vms/test1_disk0`:
|
|
||||||
`$ virsh secret-list`
|
|
||||||
`$ uuidgen`
|
|
||||||
`$ $EDITOR /tmp/test1.xml`
|
|
||||||
|
|
||||||
```
|
|
||||||
<domain type='kvm'>
|
|
||||||
<name>test1</name>
|
|
||||||
<uuid>[INSERT GENERATED UUID]</uuid>
|
|
||||||
<description>Testing VM</description>
|
|
||||||
<memory unit='MiB'>1024</memory>
|
|
||||||
<vcpu>1</vcpu>
|
|
||||||
<os>
|
|
||||||
<type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
|
|
||||||
<boot dev='hd'/>
|
|
||||||
</os>
|
|
||||||
<features>
|
|
||||||
<acpi/>
|
|
||||||
<apic/>
|
|
||||||
<pae/>
|
|
||||||
</features>
|
|
||||||
<clock offset='utc'/>
|
|
||||||
<on_poweroff>destroy</on_poweroff>
|
|
||||||
<on_reboot>restart</on_reboot>
|
|
||||||
<on_crash>restart</on_crash>
|
|
||||||
<devices>
|
|
||||||
<emulator>/usr/bin/kvm</emulator>
|
|
||||||
<controller type='usb' index='0'/>
|
|
||||||
<controller type='pci' index='0' model='pci-root'/>
|
|
||||||
<serial type='pty'/>
|
|
||||||
<console type='pty'/>
|
|
||||||
<disk type='network' device='disk'>
|
|
||||||
<driver name='qemu' discard='unmap'/>
|
|
||||||
<auth username='libvirt'>
|
|
||||||
<secret type='ceph' uuid='[INSERT CEPH STORAGE SECRET]'/>
|
|
||||||
</auth>
|
|
||||||
<source protocol='rbd' name='vms/test1_disk0'>
|
|
||||||
<host name='[INSERT FIRST COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
|
|
||||||
<host name='[INSERT SECOND COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
|
|
||||||
<host name='[INSERT THIRD COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
|
|
||||||
</source>
|
|
||||||
<target dev='sda' bus='scsi'/>
|
|
||||||
</disk>
|
|
||||||
<interface type='bridge'>
|
|
||||||
<mac address='52:54:00:12:34:56'/>
|
|
||||||
<source bridge='vmbr100'/>
|
|
||||||
<model type='virtio'/>
|
|
||||||
</interface>
|
|
||||||
<controller type='scsi' index='0' model='virtio-scsi'/>
|
|
||||||
</devices>
|
|
||||||
</domain>
|
|
||||||
```
|
|
||||||
|
|
||||||
**NOTE:** This Libvirt XML is only a sample; it should be modified to fit the specifics of the VM. Alternatively to manual configuration, one can use a tool like `virt-manager` to generate valid Libvirt XML configurations for PVC to use.
|
|
||||||
|
|
||||||
0. Define the VM in the PVC cluster:
|
|
||||||
`$ pvc vm define /tmp/test1.xml`
|
|
||||||
|
|
||||||
0. Verify the VM is present in the cluster:
|
|
||||||
`$ pvc vm info test1`
|
|
||||||
|
|
||||||
0. Start the VM and watch the console log:
|
|
||||||
`$ pvc vm start test1`
|
|
||||||
`$ pvc vm log -f test1`
|
|
||||||
|
|
||||||
If all has gone well until this point, you should now be able to watch your new VM boot on the cluster, grab DHCP from the managed network, and run away doing its thing. You could now, for instance, move it permanently to another node with the `pvc vm move -t <node> test1` command, or temporarily with the `pvc vm migrate -t <node> test1` command and back again with the `pvc vm unmigrate test1` command.
|
|
||||||
|
|
||||||
For more details on what to do next, see the [CLI manual](/manuals/cli) for a full list of management functions, SSH into your new VM, and start provisioning more. Your new private cloud is now here!
|
|
||||||
|
@ -1,3 +1,47 @@
|
|||||||
|
# PVC Ansible architecture
|
||||||
|
|
||||||
|
The PVC Ansible setup and management framework is written in Ansible. It consists of two roles: `base` and `pvc`.
|
||||||
|
|
||||||
|
## Base role
|
||||||
|
|
||||||
|
The Base role configures a node to a specific, standard base Debian system, with a number of PVC-specific tweaks. Some examples include:
|
||||||
|
|
||||||
|
* Installing the custom PVC repository at Boniface Labs.
|
||||||
|
|
||||||
|
* Removing several unnecessary packages and installing numerous additional packages.
|
||||||
|
|
||||||
|
* Automatically configuring network interfaces based on the `group_vars` configuration.
|
||||||
|
|
||||||
|
* Configuring several general `sysctl` settings for optimal performance.
|
||||||
|
|
||||||
|
* Installing and configuring rsyslog, postfix, ntpd, ssh, and fail2ban.
|
||||||
|
|
||||||
|
* Creating the users specified in the `group_vars` configuration.
|
||||||
|
|
||||||
|
* Installing custom MOTDs, bashrc files, vimrc files, and other useful configurations for each user.
|
||||||
|
|
||||||
|
The end result is a standardized "PVC node" system ready to have the daemons installed by the PVC role.
|
||||||
|
|
||||||
|
## PVC role
|
||||||
|
|
||||||
|
The PVC role configures all the dependencies of PVC, including storage, networking, and databases, then installs the PVC daemon itself. Specifically, it will, in order:
|
||||||
|
|
||||||
|
* Install Ceph, configure and bootstrap a new cluster if `bootstrap=yes` is set, configure the monitor and manager daemons, and start up the cluster ready for the addition of OSDs via the client interface (coordinators only).
|
||||||
|
|
||||||
|
* Install, configure, and if `bootstrap=yes` is set, bootstrap a Zookeeper cluster (coordinators only).
|
||||||
|
|
||||||
|
* Install, configure, and if `bootstrap=yes` is set, bootstrap a Patroni PostgreSQL cluster for the PowerDNS aggregator (coordinators only).
|
||||||
|
|
||||||
|
* Install and configure Libvirt.
|
||||||
|
|
||||||
|
* Install and configure FRRouting.
|
||||||
|
|
||||||
|
* Install and configure the main PVC daemon and API client, including initializing the PVC cluster (`pvc init`).
|
||||||
|
|
||||||
|
## Completion
|
||||||
|
|
||||||
|
Once the entire playbook has run for the first time against a given host, the host will be rebooted to apply all the configured services. On startup, the system should immediately launch the PVC daemon, check in to the Zookeeper cluster, and become ready. The node will be in `flushed` state on its first boot; the administrator will need to run `pvc node unflush <node>` to set the node into active state ready to handle virtual machines.
|
||||||
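
For example, for the three example nodes used elsewhere in this documentation:

`$ pvc node unflush pvchv1`

`$ pvc node unflush pvchv2`

`$ pvc node unflush pvchv3`
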
|
|
||||||
# PVC Ansible configuration manual
|
# PVC Ansible configuration manual
|
||||||
|
|
||||||
This manual documents the various `group_vars` configuration options for the `pvc-ansible` framework. We assume that the administrator is generally familiar with Ansible and its operation.
|
This manual documents the various `group_vars` configuration options for the `pvc-ansible` framework. We assume that the administrator is generally familiar with Ansible and its operation.
|
||||||
|
@ -1,3 +1,11 @@
|
|||||||
|
# PVC API architecture
|
||||||
|
|
||||||
|
The PVC API is a standalone client application for PVC. It interfaces directly with the Zookeeper database to manage state.
|
||||||
|
|
||||||
|
The API is built using Flask and is packaged in the Debian package `pvc-client-api`. The API depends on the common client functions of the `pvc-client-common` package, as does the CLI client.
|
||||||
|
|
||||||
|
Details of the API interface can be found in [the manual](/manuals/api).
|
||||||
|
|
||||||
# PVC HTTP API manual
|
# PVC HTTP API manual
|
||||||
|
|
||||||
The PVC HTTP API client is built with Flask, a Python framework for creating API interfaces, and run directly with the PyWSGI framework. It interfaces directly with the Zookeeper cluster to send and receive information about the cluster. It supports authentication configured statically via tokens in the configuration file as well as SSL. It also includes the provisioner client, an optional section that can be used to create VMs automatically using a set of templates and standardized scripts.
|
The PVC HTTP API client is built with Flask, a Python framework for creating API interfaces, and run directly with the PyWSGI framework. It interfaces directly with the Zookeeper cluster to send and receive information about the cluster. It supports authentication configured statically via tokens in the configuration file as well as SSL. It also includes the provisioner client, an optional section that can be used to create VMs automatically using a set of templates and standardized scripts.
|
||||||
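
As a sketch only, an authenticated request might look like the following; the `X-Api-Key` header name and the token value are assumptions here and should be verified against your deployed API configuration:

```
$ curl -X GET \
    -H "X-Api-Key: mysecrettoken" \
    http://localhost:7370/api/v1/provisioner/status/<task-id>
```
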
@ -8,7 +16,7 @@ The [`pvc-ansible`](https://github.com/parallelvirtualcluster/pvc-ansible) frame
|
|||||||
|
|
||||||
### SSL
|
### SSL
|
||||||
|
|
||||||
The API accepts SSL certificate and key files via the `pvc-api.yaml` configuration to enable SSL support for the API, which protects the data and query values from snooping or tampering. SSL is strongly recommended if using the API outside of a trusted local area network.
|
The API accepts SSL certificate and key files via the `pvcapid.yaml` configuration to enable SSL support for the API, which protects the data and query values from snooping or tampering. SSL is strongly recommended if using the API outside of a trusted local area network.
|
||||||
|
|
||||||
### API authentication
|
### API authentication
|
||||||
|
|
||||||
@ -148,7 +156,7 @@ curl -X GET http://localhost:7370/api/v1/provisioner/status/<task-id>
|
|||||||
|
|
||||||
## API Daemon Configuration
|
## API Daemon Configuration
|
||||||
|
|
||||||
The API is configured using a YAML configuration file which is passed in to the API process by the environment variable `PVC_CONFIG_FILE`. When running with the default package and SystemD unit, this file is located at `/etc/pvc/pvc-api.yaml`.
|
The API is configured using a YAML configuration file which is passed in to the API process by the environment variable `PVC_CONFIG_FILE`. When running with the default package and SystemD unit, this file is located at `/etc/pvc/pvcapid.yaml`.
|
||||||
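
For example, to point a manually-run API process at the packaged configuration (a sketch; the packaged SystemD unit normally sets this variable for you):

```
$ export PVC_CONFIG_FILE=/etc/pvc/pvcapid.yaml
```
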
|
|
||||||
### Conventions
|
### Conventions
|
||||||
|
|
||||||
@ -156,7 +164,7 @@ The API is configured using a YAML configuration file which is passed in to the
|
|||||||
|
|
||||||
* Settings may `depends` on other settings. This indicates that, if one setting is enabled, the other setting is very likely `required` by that setting.
|
* Settings may `depends` on other settings. This indicates that, if one setting is enabled, the other setting is very likely `required` by that setting.
|
||||||
|
|
||||||
### `pvc-api.yaml`
|
### `pvcapid.yaml`
|
||||||
|
|
||||||
Example configuration:
|
Example configuration:
|
||||||
|
|
||||||
|
@ -1,10 +1,18 @@
|
|||||||
|
# PVC CLI architecture
|
||||||
|
|
||||||
|
The PVC CLI is a standalone client application for PVC. It interfaces with the PVC API, via a configurable list of clusters with customizable hosts, ports, addresses, and authentication.
|
||||||
|
|
||||||
|
The CLI is built using Click and is packaged in the Debian package `pvc-client-cli`. The CLI does not depend on any other PVC components and can be used independently on arbitrary systems.
|
||||||
|
|
||||||
|
The CLI is self-documenting, however [the manual](/manuals/cli) details the required configuration.
|
||||||
|
|
||||||
# PVC CLI client manual
|
# PVC CLI client manual
|
||||||
|
|
||||||
The PVC CLI client is built with Click, a Python framework for creating self-documenting CLI applications. It interfaces with the PVC API.
|
The PVC CLI client is built with Click, a Python framework for creating self-documenting CLI applications. It interfaces with the PVC API.
|
||||||
|
|
||||||
Use the `-h` option at any level of the `pvc` CLI command to receive help about the available commands and options.
|
Use the `-h` option at any level of the `pvc` CLI command to receive help about the available commands and options.
|
||||||
|
|
||||||
Before using the CLI on a non-PVC node system, at least one cluster must be added using the `pvc cluster` subcommands. Running the CLI on hosts which also run the PVC API (via its configuration at `/etc/pvc/pvc-api.yaml`) uses the special `local` cluster, reading information from the API configuration, by default.
|
Before using the CLI on a non-PVC node system, at least one cluster must be added using the `pvc cluster` subcommands. Running the CLI on hosts which also run the PVC API (via its configuration at `/etc/pvc/pvcapid.yaml`) uses the special `local` cluster, reading information from the API configuration, by default.
|
||||||
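
For example, to run a command explicitly against the special `local` cluster on an API host (a sketch; the `-c` flag selects a configured cluster by name):

```
$ pvc -c local node ready pvchv1
```
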
|
|
||||||
## Configuration
|
## Configuration
|
||||||
|
|
||||||
|
@ -1,10 +1,64 @@
|
|||||||
|
# PVC Node Daemon architecture
|
||||||
|
|
||||||
|
The PVC Node Daemon is the heart of the PVC system and runs on each node to manage the state of the node and its configured resources. The daemon connects directly to the Zookeeper cluster for coordination and state.
|
||||||
|
|
||||||
|
The node daemon is built using Python 3.x and is packaged in the Debian package `pvc-daemon`.
|
||||||
|
|
||||||
|
Configuration of the daemon is documented in [the manual](/manuals/daemon), however it is recommended to use the [Ansible configuration interface](/manuals/ansible) to configure the PVC system for you from scratch.
|
||||||
|
|
||||||
|
## Overall architecture
|
||||||
|
|
||||||
|
The PVC daemon is object-oriented - each cluster resource is represented by an Object, which is then present on each node in the cluster. This allows state changes to be reflected across the entire cluster should their data change.
|
||||||
|
|
||||||
|
During startup, the system scans the Zookeeper database and sets up the required objects. The database is then watched in real-time for additional changes to the database information.
|
||||||
|
|
||||||
|
## Startup sequence
|
||||||
|
|
||||||
|
The daemon startup sequence is documented below. The main daemon entry-point is `Daemon.py` inside the `pvcnoded` folder, which is called from the `pvcnoded.py` stub file.
|
||||||
|
|
||||||
|
0. The configuration is read from `/etc/pvc/pvcnoded.yaml` and the configuration object is set up.
|
||||||
|
|
||||||
|
0. Any required filesystem directories, mostly dynamic directories, are created.
|
||||||
|
|
||||||
|
0. The logger is set up. If file logging is enabled, this is the state when the first log messages are written.
|
||||||
|
|
||||||
|
0. Host networking is configured based on the `pvcnoded.yaml` configuration file. In a normal cluster, this is the point where the node will become reachable on the network as all networking is handled by the PVC node daemon.
|
||||||
|
|
||||||
|
0. Sysctl tweaks are applied to the host system, to enable routing/forwarding between nodes via the host.
|
||||||
|
|
||||||
|
0. The node determines its coordinator state and starts the required daemons if applicable. In a normal cluster, this is the point where the dependent services such as Zookeeper, FRR, and Ceph become available. After this step, the daemon waits 5 seconds before proceeding to give these daemons a chance to start up.
|
||||||
|
|
||||||
|
0. The daemon connects to the Zookeeper cluster and starts its listener. If the Zookeeper cluster is unavailable, it will wait some time before abandoning the attempt and starting again from step 1.
|
||||||
|
|
||||||
|
0. Termination handling/cleanup is configured.
|
||||||
|
|
||||||
|
0. The node checks if it is already present in the Zookeeper cluster; if not, it will add itself to the database. Initial static options are also updated in the database here. The daemon state transitions from `stop` to `init`.
|
||||||
|
|
||||||
|
0. The node checks if Libvirt is accessible.
|
||||||
|
|
||||||
|
0. The node starts up the NFT firewall if applicable and configures the base rule-set.
|
||||||
|
|
||||||
|
0. The node ensures that `dnsmasq` is stopped (legacy check, might be safe to remove eventually).
|
||||||
|
|
||||||
|
0. The node begins setting up the object representations of resources, in order:
|
||||||
|
|
||||||
|
a. Node entries
|
||||||
|
|
||||||
|
b. Network entries, creating client networks and starting them as required.
|
||||||
|
|
||||||
|
c. Domain (VM) entries, starting up the VMs as required.
|
||||||
|
|
||||||
|
d. Ceph storage entries (OSDs, Pools, Volumes, Snapshots).
|
||||||
|
|
||||||
|
0. The node activates its keepalived timer and begins sending keepalive updates to the cluster. The daemon state transitions from `init` to `run` and the system has started fully.
|
||||||
|
|
||||||
# PVC Node Daemon manual
|
# PVC Node Daemon manual
|
||||||
|
|
||||||
The PVC node daemon is built with Python 3 and is run directly on nodes. For details of the startup sequence and general layout, see the [architecture document](/architecture/daemon).
|
The PVC node daemon is built with Python 3 and is run directly on nodes. For details of the startup sequence and general layout, see the [architecture document](/architecture/daemon).
|
||||||
|
|
||||||
## Configuration
|
## Configuration
|
||||||
|
|
||||||
The Daemon is configured using a YAML configuration file which is passed in to the API process by the environment variable `PVCD_CONFIG_FILE`. When running with the default package and SystemD unit, this file is located at `/etc/pvc/pvcd.yaml`.
|
The Daemon is configured using a YAML configuration file which is passed in to the API process by the environment variable `PVCD_CONFIG_FILE`. When running with the default package and SystemD unit, this file is located at `/etc/pvc/pvcnoded.yaml`.
|
||||||
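
To run the daemon by hand with an explicit configuration file (a sketch; under normal operation the `pvcnoded.service` SystemD unit sets the variable and starts the daemon for you):

```
$ cd /usr/share/pvc
$ PVCD_CONFIG_FILE=/etc/pvc/pvcnoded.yaml ./pvcnoded.py
```
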
|
|
||||||
For most deployments, the management of the configuration file is handled entirely by the [PVC Ansible framework](/manuals/ansible) and should not be modified directly. Many options from the Ansible framework map directly into the configuration options in this file.
|
For most deployments, the management of the configuration file is handled entirely by the [PVC Ansible framework](/manuals/ansible) and should not be modified directly. Many options from the Ansible framework map directly into the configuration options in this file.
|
||||||
|
|
||||||
@ -14,7 +68,7 @@ For most deployments, the management of the configuration file is handled entire
|
|||||||
|
|
||||||
* Settings may `depends` on other settings. This indicates that, if one setting is enabled, the other setting is very likely `required` by that setting.
|
* Settings may `depends` on other settings. This indicates that, if one setting is enabled, the other setting is very likely `required` by that setting.
|
||||||
|
|
||||||
### `pvcd.yaml`
|
### `pvcnoded.yaml`
|
||||||
|
|
||||||
Example configuration:
|
Example configuration:
|
||||||
|
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# PVC Provisioner API architecture
|
# PVC Provisioner manual
|
||||||
|
|
||||||
The PVC provisioner is a subsection of the main PVC API. It interfaces directly with the Zookeeper database using the common client functions, and with the Patroni PostgreSQL database to store details. The provisioner also interfaces directly with the Ceph storage cluster, for mapping volumes, creating filesystems, and installing guests.
|
The PVC provisioner is a subsection of the main PVC API. It interfaces directly with the Zookeeper database using the common client functions, and with the Patroni PostgreSQL database to store details. The provisioner also interfaces directly with the Ceph storage cluster, for mapping volumes, creating filesystems, and installing guests.
|
||||||
|
|
||||||
@ -10,10 +10,18 @@ The purpose of the Provisioner API is to provide a convenient way for administra
|
|||||||
|
|
||||||
The Provisioner allows the administrator to construct descriptions of VMs, called profiles, which include system resource specifications, network interfaces, disks, cloud-init userdata, and installation scripts. These profiles are highly modular, allowing the administrator to specify arbitrary combinations of the mentioned VM features with which to build new VMs.
|
The Provisioner allows the administrator to construct descriptions of VMs, called profiles, which include system resource specifications, network interfaces, disks, cloud-init userdata, and installation scripts. These profiles are highly modular, allowing the administrator to specify arbitrary combinations of the mentioned VM features with which to build new VMs.
|
||||||
|
|
||||||
Currently, the provisioner supports creating VMs based off of installation scripts, or by cloning existing volumes. Future versions of PVC will allow the uploading of arbitrary images (either disk or ISO images) to cluster volumes, permitting even more flexibility in the installation of VMs.
|
The provisioner supports creating VMs based off of installation scripts, by cloning existing volumes, and by uploading OVA image templates to the cluster.
|
||||||
|
|
||||||
Examples in the following sections use the CLI exclusively for demonstration purposes. For details of the underlying API calls, please see the [API interface reference](/manuals/api-reference.html).
|
Examples in the following sections use the CLI exclusively for demonstration purposes. For details of the underlying API calls, please see the [API interface reference](/manuals/api-reference.html).
|
||||||
|
|
||||||
|
# Deploying VMs from OVA images
|
||||||
|
|
||||||
|
PVC supports deploying virtual machines from industry-standard OVA images. OVA images can be uploaded to the cluster with the `pvc provisioner ova` commands, and deployed via the created profile(s) using the `pvc provisioner create` command. Additionally, the profile(s) can be modified to suit your specific needs via the provisioner template system detailed below.
|
||||||
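
As a sketch, an OVA can also be uploaded directly through the HTTP API using the `/api/v1/provisioner/ova` endpoint documented in the API reference; the pool and OVA names below are examples, and the multipart file field name is an assumption:

```
$ curl -X POST \
    -F "file=@/tmp/debian10.ova" \
    "http://localhost:7370/api/v1/provisioner/ova?pool=vms&name=debian10&ova_size=$( stat -c %s /tmp/debian10.ova )"
```
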
|
|
||||||
|
# Deploying VMs from provisioner scripts
|
||||||
|
|
||||||
|
PVC supports deploying virtual machines using administrator-provided scripts, using templates, profiles, and Cloud-init userdata to control the deployment process as desired. This deployment method permits the administrator to deploy POSIX-like systems such as Linux or BSD directly from a companion tool such as `debootstrap` on-demand and with maximum flexibility.
|
||||||
|
|
||||||
## Templates
|
## Templates
|
||||||
|
|
||||||
The PVC Provisioner features three categories of templates to specify the resources allocated to the virtual machine. They are: System Templates, Network Templates, and Disk Templates.
|
The PVC Provisioner features three categories of templates to specify the resources allocated to the virtual machine. They are: System Templates, Network Templates, and Disk Templates.
|
@ -554,6 +554,48 @@
|
|||||||
},
|
},
|
||||||
"type": "object"
|
"type": "object"
|
||||||
},
|
},
|
||||||
|
"ova": {
|
||||||
|
"properties": {
|
||||||
|
"id": {
|
||||||
|
"description": "Internal provisioner OVA ID",
|
||||||
|
"type": "integer"
|
||||||
|
},
|
||||||
|
"name": {
|
||||||
|
"description": "OVA name",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
"volumes": {
|
||||||
|
"items": {
|
||||||
|
"id": "ova_volume",
|
||||||
|
"properties": {
|
||||||
|
"disk_id": {
|
||||||
|
"description": "Disk identifier",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
"disk_size_gb": {
|
||||||
|
"description": "Disk size in GB",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
"pool": {
|
||||||
|
"description": "Pool containing the OVA volume",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
"volume_format": {
|
||||||
|
"description": "OVA image format",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
"volume_name": {
|
||||||
|
"description": "Storage volume containing the OVA image",
|
||||||
|
"type": "string"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"type": "object"
|
||||||
|
},
|
||||||
|
"type": "list"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"type": "object"
|
||||||
|
},
|
||||||
"pool": {
|
"pool": {
|
||||||
"properties": {
|
"properties": {
|
||||||
"name": {
|
"name": {
|
||||||
@ -2190,6 +2232,160 @@
|
|||||||
]
|
]
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"/api/v1/provisioner/ova": {
|
||||||
|
"get": {
|
||||||
|
"description": "",
|
||||||
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "An OVA name search limit; fuzzy by default, use ^/$ to force exact matches",
|
||||||
|
"in": "query",
|
||||||
|
"name": "limit",
|
||||||
|
"required": false,
|
||||||
|
"type": "string"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"items": {
|
||||||
|
"$ref": "#/definitions/ova"
|
||||||
|
},
|
||||||
|
"type": "list"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Return a list of OVA sources",
|
||||||
|
"tags": [
|
||||||
|
"provisioner"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"post": {
|
||||||
|
"description": "<br/>The API client is responsible for determining and setting the ova_size value, as this value cannot be determined dynamically before the upload proceeds.",
|
||||||
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "Storage pool name",
|
||||||
|
"in": "query",
|
||||||
|
"name": "pool",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "OVA name on the cluster (usually identical to the OVA file name)",
|
||||||
|
"in": "query",
|
||||||
|
"name": "name",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Size of the OVA file in bytes",
|
||||||
|
"in": "query",
|
||||||
|
"name": "ova_size",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"400": {
|
||||||
|
"description": "Bad request",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Upload an OVA image to the cluster",
|
||||||
|
"tags": [
|
||||||
|
"provisioner"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"/api/v1/provisioner/ova/{ova}": {
|
||||||
|
"delete": {
|
||||||
|
"description": "",
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"404": {
|
||||||
|
"description": "Not found",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Remove ova {ova}",
|
||||||
|
"tags": [
|
||||||
|
"provisioner"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"get": {
|
||||||
|
"description": "",
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/ova"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"404": {
|
||||||
|
"description": "Not found",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Return information about OVA image {ova}",
|
||||||
|
"tags": [
|
||||||
|
"provisioner"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"post": {
|
||||||
|
"description": "<br/>The API client is responsible for determining and setting the ova_size value, as this value cannot be determined dynamically before the upload proceeds.",
|
||||||
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "Storage pool name",
|
||||||
|
"in": "query",
|
||||||
|
"name": "pool",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Size of the OVA file in bytes",
|
||||||
|
"in": "query",
|
||||||
|
"name": "ova_size",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"400": {
|
||||||
|
"description": "Bad request",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Upload an OVA image to the cluster",
|
||||||
|
"tags": [
|
||||||
|
"provisioner"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
"/api/v1/provisioner/profile": {
|
"/api/v1/provisioner/profile": {
|
||||||
"get": {
|
"get": {
|
||||||
"description": "",
|
"description": "",
|
||||||
@ -2228,39 +2424,57 @@
|
|||||||
"required": true,
|
"required": true,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"description": "Profile type",
|
||||||
|
"enum": [
|
||||||
|
"provisioner",
|
||||||
|
"ova"
|
||||||
|
],
|
||||||
|
"in": "query",
|
||||||
|
"name": "profile_type",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"description": "Script name",
|
"description": "Script name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "script",
|
"name": "script",
|
||||||
"required": true,
|
"required": false,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"description": "System template name",
|
"description": "System template name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "system_template",
|
"name": "system_template",
|
||||||
"required": true,
|
"required": false,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"description": "Network template name",
|
"description": "Network template name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "network_template",
|
"name": "network_template",
|
||||||
"required": true,
|
"required": false,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"description": "Storage template name",
|
"description": "Storage template name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "storage_template",
|
"name": "storage_template",
|
||||||
"required": true,
|
"required": false,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"description": "Userdata template name",
|
"description": "Userdata template name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "userdata",
|
"name": "userdata",
|
||||||
"required": true,
|
"required": false,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "OVA image source",
|
||||||
|
"in": "query",
|
||||||
|
"name": "ova",
|
||||||
|
"required": false,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -2336,6 +2550,17 @@
|
|||||||
"post": {
|
"post": {
|
||||||
"description": "",
|
"description": "",
|
||||||
"parameters": [
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "Profile type",
|
||||||
|
"enum": [
|
||||||
|
"provisioner",
|
||||||
|
"ova"
|
||||||
|
],
|
||||||
|
"in": "query",
|
||||||
|
"name": "profile_type",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"description": "Script name",
|
"description": "Script name",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
@ -2371,6 +2596,13 @@
|
|||||||
"required": true,
|
"required": true,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"description": "OVA image source",
|
||||||
|
"in": "query",
|
||||||
|
"name": "ova",
|
||||||
|
"required": false,
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"description": "Script install() function keywork argument in \"arg=data\" format; may be specified multiple times to add multiple arguments",
|
"description": "Script install() function keywork argument in \"arg=data\" format; may be specified multiple times to add multiple arguments",
|
||||||
"in": "query",
|
"in": "query",
|
||||||
@ -3558,6 +3790,77 @@
|
|||||||
"tags": [
|
"tags": [
|
||||||
"provisioner / template"
|
"provisioner / template"
|
||||||
]
|
]
|
||||||
|
},
|
||||||
|
"put": {
|
||||||
|
"description": "",
|
||||||
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "vCPU count for VM",
|
||||||
|
"in": "query",
|
||||||
|
"name": "vcpus",
|
||||||
|
"type": "integer"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "vRAM size in MB for VM",
|
||||||
|
"in": "query",
|
||||||
|
"name": "vram",
|
||||||
|
"type": "integer"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Whether to enable serial console for VM",
|
||||||
|
"in": "query",
|
||||||
|
"name": "serial",
|
||||||
|
"type": "boolean"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Whether to enable VNC console for VM",
|
||||||
|
"in": "query",
|
||||||
|
"name": "vnc",
|
||||||
|
"type": "boolean"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "VNC bind address when VNC console is enabled",
|
||||||
|
"in": "query",
|
||||||
|
"name": "vnc_bind",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "CSV list of node(s) to limit VM assignment to",
|
||||||
|
"in": "query",
|
||||||
|
"name": "node_limit",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Selector to use for VM node assignment on migration/move",
|
||||||
|
"in": "query",
|
||||||
|
"name": "node_selector",
|
||||||
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Whether to start VM with node ready state (one-time)",
|
||||||
|
"in": "query",
|
||||||
|
"name": "node_autostart",
|
||||||
|
"type": "boolean"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"400": {
|
||||||
|
"description": "Bad request",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Modify an existing system template {template}",
|
||||||
|
"tags": [
|
||||||
|
"provisioner / template"
|
||||||
|
]
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"/api/v1/provisioner/userdata": {
|
"/api/v1/provisioner/userdata": {
|
||||||
@ -4691,6 +4994,52 @@
|
|||||||
]
|
]
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"/api/v1/storage/ceph/volume/{pool}/{volume}/upload": {
|
||||||
|
"post": {
|
||||||
|
"description": "<br/>The body must be a form body containing a file that is the binary contents of the image.",
|
||||||
|
"parameters": [
|
||||||
|
{
|
||||||
|
"description": "The type of source image file",
|
||||||
|
"enum": [
|
||||||
|
"raw",
|
||||||
|
"vmdk",
|
||||||
|
"qcow2",
|
||||||
|
"qed",
|
||||||
|
"vdi",
|
||||||
|
"vpc"
|
||||||
|
],
|
||||||
|
"in": "query",
|
||||||
|
"name": "image_format",
|
||||||
|
"required": true,
|
||||||
|
"type": "string"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"responses": {
|
||||||
|
"200": {
|
||||||
|
"description": "OK",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"400": {
|
||||||
|
"description": "Bad request",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"404": {
|
||||||
|
"description": "Not found",
|
||||||
|
"schema": {
|
||||||
|
"$ref": "#/definitions/Message"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"summary": "Upload a disk image to Ceph volume {volume} in pool {pool}",
|
||||||
|
"tags": [
|
||||||
|
"storage / ceph"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
"/api/v1/vm": {
|
"/api/v1/vm": {
|
||||||
"get": {
|
"get": {
|
||||||
"description": "",
|
"description": "",
|
||||||
@ -5142,6 +5491,12 @@
|
|||||||
"in": "query",
|
"in": "query",
|
||||||
"name": "force",
|
"name": "force",
|
||||||
"type": "boolean"
|
"type": "boolean"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Whether to block waiting for the migration to complete",
|
||||||
|
"in": "query",
|
||||||
|
"name": "wait",
|
||||||
|
"type": "boolean"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"responses": {
|
"responses": {
|
||||||
@ -5202,6 +5557,12 @@
|
|||||||
"name": "state",
|
"name": "state",
|
||||||
"required": true,
|
"required": true,
|
||||||
"type": "string"
|
"type": "string"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"description": "Whether to block waiting for the state change to complete",
|
||||||
|
"in": "query",
|
||||||
|
"name": "wait",
|
||||||
|
"type": "boolean"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"responses": {
|
"responses": {
|
||||||
|
@ -8,14 +8,13 @@ import os
|
|||||||
import sys
|
import sys
|
||||||
import json
|
import json
|
||||||
|
|
||||||
os.environ['PVC_CONFIG_FILE'] = "./client-api/pvc-api.sample.yaml"
|
os.environ['PVC_CONFIG_FILE'] = "./api-daemon/pvcapid.sample.yaml"
|
||||||
|
|
||||||
sys.path.append('client-api')
|
sys.path.append('api-daemon')
|
||||||
|
|
||||||
pvc_api = __import__('pvc-api')
|
import pvcapid.flaskapi as pvc_api
|
||||||
|
|
||||||
swagger_file = "docs/manuals/swagger.json"
|
swagger_file = "docs/manuals/swagger.json"
|
||||||
|
|
||||||
swagger_data = swagger(pvc_api.app)
|
swagger_data = swagger(pvc_api.app)
|
||||||
swagger_data['info']['version'] = "1.0"
|
swagger_data['info']['version'] = "1.0"
|
||||||
swagger_data['info']['title'] = "PVC Client and Provisioner API"
|
swagger_data['info']['title'] = "PVC Client and Provisioner API"
|
11
gen-api-migrations
Executable file
@ -0,0 +1,11 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# Generate the database migration files
|
||||||
|
|
||||||
|
VERSION="$( head -1 debian/changelog | awk -F'[()-]' '{ print $2 }' )"
|
||||||
|
|
||||||
|
pushd api-daemon
|
||||||
|
export PVC_CONFIG_FILE="./pvcapid.sample.yaml"
|
||||||
|
./pvcapid-manage.py db migrate -m "PVC version ${VERSION}"
|
||||||
|
./pvcapid-manage.py db upgrade
|
||||||
|
popd
|
@ -2,8 +2,8 @@
|
|||||||
|
|
||||||
[Unit]
|
[Unit]
|
||||||
Description = Parallel Virtual Cluster autoflush daemon
|
Description = Parallel Virtual Cluster autoflush daemon
|
||||||
After = pvcd.service
|
After = pvcnoded.service
|
||||||
PartOf = pvcd.target
|
PartOf = pvc.target
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
Type = oneshot
|
Type = oneshot
|
||||||
@ -15,4 +15,4 @@ ExecStop = /usr/bin/pvc -c local node flush --wait
|
|||||||
ExecStopPost = /bin/sleep 30
|
ExecStopPost = /bin/sleep 30
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy = pvcd.target
|
WantedBy = pvc.target
|
||||||
|
File diff suppressed because it is too large
@ -1,13 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
|
|
||||||
for disk in $( sudo rbd list ${BLSE_STORAGE_POOL_VM} | grep "^${vm}" ); do
|
|
||||||
echo -e " Disk: $disk"
|
|
||||||
locks="$( sudo rbd lock list ${BLSE_STORAGE_POOL_VM}/${disk} | grep '^client' )"
|
|
||||||
echo "${locks}"
|
|
||||||
if [[ -n "${locks}" ]]; then
|
|
||||||
echo -e " LOCK FOUND! Clearing."
|
|
||||||
locker="$( awk '{ print $1 }' <<<"${locks}" )"
|
|
||||||
id="$( awk '{ print $2" "$3 }' <<<"${locks}" )"
|
|
||||||
sudo rbd lock remove ${BLSE_STORAGE_POOL_VM}/${disk} "${id}" "${locker}"
|
|
||||||
fi
|
|
||||||
done
|
|
23
node-daemon/pvcnoded.py
Executable file
@ -0,0 +1,23 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
|
||||||
|
# pvcnoded.py - Node daemon startup stub
|
||||||
|
# Part of the Parallel Virtual Cluster (PVC) system
|
||||||
|
#
|
||||||
|
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
#
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
import pvcnoded.Daemon
|
@ -1,5 +1,5 @@
|
|||||||
---
|
---
|
||||||
# pvcd cluster configuration file example
|
# pvcnoded configuration file example
|
||||||
#
|
#
|
||||||
# This configuration file specifies details for this node in PVC. Multiple node
|
# This configuration file specifies details for this node in PVC. Multiple node
|
||||||
# blocks can be added but only the one matching the current system nodename will
|
# blocks can be added but only the one matching the current system nodename will
|
||||||
@ -7,7 +7,7 @@
|
|||||||
# this sample configuration are considered defaults and, with adjustment of the
|
# this sample configuration are considered defaults and, with adjustment of the
|
||||||
# nodename section and coordinators list, can be used as-is on a Debian system.
|
# nodename section and coordinators list, can be used as-is on a Debian system.
|
||||||
#
|
#
|
||||||
# Copy this example to /etc/pvc/pvcd.conf and edit to your needs
|
# Copy this example to /etc/pvc/pvcnoded.conf and edit to your needs
|
||||||
|
|
||||||
pvc:
|
pvc:
|
||||||
# node: The (short) hostname of the node, set during provisioning
|
# node: The (short) hostname of the node, set during provisioning
|
@ -2,16 +2,16 @@
|
|||||||
|
|
||||||
[Unit]
|
[Unit]
|
||||||
Description = Parallel Virtual Cluster node daemon
|
Description = Parallel Virtual Cluster node daemon
|
||||||
After = network-online.target libvirtd.service zookeeper.service
|
After = network-online.target zookeeper.service
|
||||||
PartOf = pvcd.target
|
PartOf = pvc.target
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
Type = simple
|
Type = simple
|
||||||
WorkingDirectory = /usr/share/pvc
|
WorkingDirectory = /usr/share/pvc
|
||||||
Environment = PYTHONUNBUFFERED=true
|
Environment = PYTHONUNBUFFERED=true
|
||||||
Environment = PVCD_CONFIG_FILE=/etc/pvc/pvcd.yaml
|
Environment = PVCD_CONFIG_FILE=/etc/pvc/pvcnoded.yaml
|
||||||
ExecStart = /usr/share/pvc/pvcd.py
|
ExecStart = /usr/share/pvc/pvcnoded.py
|
||||||
Restart = on-failure
|
Restart = on-failure
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy = pvcd.target
|
WantedBy = pvc.target
|
417
node-daemon/pvcnoded/CephInstance.py
Normal file
@ -0,0 +1,417 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
|
||||||
|
# CephInstance.py - Class implementing a PVC node Ceph instance
|
||||||
|
# Part of the Parallel Virtual Cluster (PVC) system
|
||||||
|
#
|
||||||
|
# Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
#
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
import time
|
||||||
|
import ast
|
||||||
|
import json
|
||||||
|
import psutil
|
||||||
|
|
||||||
|
import pvcnoded.log as log
|
||||||
|
import pvcnoded.zkhandler as zkhandler
|
||||||
|
import pvcnoded.common as common
|
||||||
|
|
||||||
|
class CephOSDInstance(object):
|
||||||
|
def __init__(self, zk_conn, this_node, osd_id):
|
||||||
|
self.zk_conn = zk_conn
|
||||||
|
self.this_node = this_node
|
||||||
|
self.osd_id = osd_id
|
||||||
|
self.node = None
|
||||||
|
self.size = None
|
||||||
|
self.stats = dict()
|
||||||
|
|
||||||
|
@self.zk_conn.DataWatch('/ceph/osds/{}/node'.format(self.osd_id))
|
||||||
|
def watch_osd_node(data, stat, event=''):
|
||||||
|
if event and event.type == 'DELETED':
|
||||||
|
# The key has been deleted after existing before; terminate this watcher
|
||||||
|
# because this class instance is about to be reaped in Daemon.py
|
||||||
|
return False
|
||||||
|
|
||||||
|
try:
|
||||||
|
data = data.decode('ascii')
|
||||||
|
except AttributeError:
|
||||||
|
data = ''
|
||||||
|
|
||||||
|
if data and data != self.node:
|
||||||
|
self.node = data
|
||||||
|
|
||||||
|
@self.zk_conn.DataWatch('/ceph/osds/{}/stats'.format(self.osd_id))
|
||||||
|
def watch_osd_stats(data, stat, event=''):
|
||||||
|
if event and event.type == 'DELETED':
|
||||||
|
# The key has been deleted after existing before; terminate this watcher
|
||||||
|
# because this class instance is about to be reaped in Daemon.py
|
||||||
|
return False
|
||||||
|
|
||||||
|
try:
|
||||||
|
data = data.decode('ascii')
|
||||||
|
except AttributeError:
|
||||||
|
data = ''
|
||||||
|
|
||||||
|
if data and data != self.stats:
|
||||||
|
self.stats = json.loads(data)
|
||||||
|
|
||||||
|
def add_osd(zk_conn, logger, node, device, weight):
|
||||||
|
# We are ready to create a new OSD on this node
|
||||||
|
logger.out('Creating new OSD disk on block device {}'.format(device), state='i')
|
||||||
|
try:
|
||||||
|
# 1. Create an OSD; we do this so we know what ID will be gen'd
|
||||||
|
retcode, stdout, stderr = common.run_os_command('ceph osd create')
|
||||||
|
if retcode:
|
||||||
|
print('ceph osd create')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
osd_id = stdout.rstrip()
|
||||||
|
|
||||||
|
# 2. Remove that newly-created OSD
|
||||||
|
retcode, stdout, stderr = common.run_os_command('ceph osd rm {}'.format(osd_id))
|
||||||
|
if retcode:
|
||||||
|
print('ceph osd rm')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 3a. Zap the disk to ensure it is ready to go
|
||||||
|
logger.out('Zapping disk {}'.format(device), state='i')
|
||||||
|
retcode, stdout, stderr = common.run_os_command('ceph-volume lvm zap --destroy {}'.format(device))
|
||||||
|
if retcode:
|
||||||
|
print('ceph-volume lvm zap')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 3b. Create the OSD for real
|
||||||
|
logger.out('Preparing LVM for new OSD disk with ID {} on {}'.format(osd_id, device), state='i')
|
||||||
|
retcode, stdout, stderr = common.run_os_command(
|
||||||
|
'ceph-volume lvm prepare --bluestore --data {device}'.format(
|
||||||
|
osdid=osd_id,
|
||||||
|
device=device
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if retcode:
|
||||||
|
print('ceph-volume lvm prepare')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 4a. Get OSD FSID
|
||||||
|
logger.out('Getting OSD FSID for ID {} on {}'.format(osd_id, device), state='i')
|
||||||
|
retcode, stdout, stderr = common.run_os_command(
|
||||||
|
'ceph-volume lvm list {device}'.format(
|
||||||
|
osdid=osd_id,
|
||||||
|
device=device
|
||||||
|
)
|
||||||
|
)
|
||||||
|
osd_fsid = None  # ensure this is defined even if no 'osd fsid' line is found below
for line in stdout.split('\n'):
|
||||||
|
if 'osd fsid' in line:
|
||||||
|
osd_fsid = line.split()[-1]
|
||||||
|
|
||||||
|
if not osd_fsid:
|
||||||
|
print('ceph-volume lvm list')
|
||||||
|
print('Could not find OSD fsid in data:')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 4b. Activate the OSD
|
||||||
|
logger.out('Activating new OSD disk with ID {}'.format(osd_id, device), state='i')
|
||||||
|
retcode, stdout, stderr = common.run_os_command(
|
||||||
|
'ceph-volume lvm activate --bluestore {osdid} {osdfsid}'.format(
|
||||||
|
osdid=osd_id,
|
||||||
|
osdfsid=osd_fsid
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if retcode:
|
||||||
|
print('ceph-volume lvm activate')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 5. Add it to the crush map
|
||||||
|
logger.out('Adding new OSD disk with ID {} to CRUSH map'.format(osd_id), state='i')
|
||||||
|
retcode, stdout, stderr = common.run_os_command(
|
||||||
|
'ceph osd crush add osd.{osdid} {weight} root=default host={node}'.format(
|
||||||
|
osdid=osd_id,
|
||||||
|
weight=weight,
|
||||||
|
node=node
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if retcode:
|
||||||
|
print('ceph osd crush add')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
time.sleep(0.5)
|
||||||
|
|
||||||
|
# 6. Verify it started
|
||||||
|
retcode, stdout, stderr = common.run_os_command(
|
||||||
|
'systemctl status ceph-osd@{osdid}'.format(
|
||||||
|
osdid=osd_id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if retcode:
|
||||||
|
print('systemctl status')
|
||||||
|
print(stdout)
|
||||||
|
print(stderr)
|
||||||
|
raise
|
||||||
|
|
||||||
|
# 7. Add the new OSD to the list
|
||||||
|
logger.out('Adding new OSD disk with ID {} to Zookeeper'.format(osd_id), state='i')
|
||||||
|
zkhandler.writedata(zk_conn, {
|
||||||
|
'/ceph/osds/{}'.format(osd_id): '',
|
||||||
|
'/ceph/osds/{}/node'.format(osd_id): node,
|
||||||
|
'/ceph/osds/{}/device'.format(osd_id): device,
|
||||||
|
'/ceph/osds/{}/stats'.format(osd_id): '{}'
|
||||||
|
})
|
||||||
|
|
||||||
|
# Log it
|
||||||
|
logger.out('Created new OSD disk with ID {}'.format(osd_id), state='o')
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
# Log it
|
||||||
|
logger.out('Failed to create new OSD disk: {}'.format(e), state='e')
|
||||||
|
return False
|
||||||
|
|
||||||
|
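
Step 4a above recovers the new OSD's FSID by scanning the plain-text output of `ceph-volume lvm list` for a line containing "osd fsid". A minimal, self-contained sketch of that parse follows; the sample output is invented for illustration (real `ceph-volume` output carries many more fields), and `ceph-volume lvm list --format json` would be a less fragile source for the same value.

```python
# Illustrative parse of the 'osd fsid' line, mirroring step 4a of add_osd().
# The sample text below is invented for demonstration, not captured from a cluster.
sample_output = """
====== osd.4 =======

  [block]    /dev/ceph-1111/osd-block-2222

      osd id                    4
      osd fsid                  abcdef12-3456-7890-abcd-ef1234567890
"""

osd_fsid = ''
for line in sample_output.split('\n'):
    if 'osd fsid' in line:
        osd_fsid = line.split()[-1]

print(osd_fsid)  # -> abcdef12-3456-7890-abcd-ef1234567890
```
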
def remove_osd(zk_conn, logger, osd_id, osd_obj):
    logger.out('Removing OSD disk {}'.format(osd_id), state='i')
    try:
        # 1. Verify the OSD is present
        retcode, stdout, stderr = common.run_os_command('ceph osd ls')
        osd_list = stdout.split('\n')
        if not osd_id in osd_list:
            logger.out('Could not find OSD {} in the cluster'.format(osd_id), state='e')
            return True

        # 1. Set the OSD out so it will flush
        logger.out('Setting out OSD disk with ID {}'.format(osd_id), state='i')
        retcode, stdout, stderr = common.run_os_command('ceph osd out {}'.format(osd_id))
        if retcode:
            print('ceph osd out')
            print(stdout)
            print(stderr)
            raise

        # 2. Wait for the OSD to flush
        logger.out('Flushing OSD disk with ID {}'.format(osd_id), state='i')
        osd_string = str()
        while True:
            try:
                retcode, stdout, stderr = common.run_os_command('ceph pg dump osds --format json')
                dump_string = json.loads(stdout)
                for osd in dump_string:
                    if str(osd['osd']) == osd_id:
                        osd_string = osd
                num_pgs = osd_string['num_pgs']
                if num_pgs > 0:
                    time.sleep(5)
                else:
                    raise
            except:
                break

        # 3. Stop the OSD process and wait for it to be terminated
        logger.out('Stopping OSD disk with ID {}'.format(osd_id), state='i')
        retcode, stdout, stderr = common.run_os_command('systemctl stop ceph-osd@{}'.format(osd_id))
        if retcode:
            print('systemctl stop')
            print(stdout)
            print(stderr)
            raise

        # FIXME: There has to be a better way to do this /shrug
        while True:
            is_osd_up = False
            # Find if there is a process named ceph-osd with arg '--id {id}'
            for p in psutil.process_iter(attrs=['name', 'cmdline']):
                if 'ceph-osd' == p.info['name'] and '--id {}'.format(osd_id) in ' '.join(p.info['cmdline']):
                    is_osd_up = True
            # If there isn't, continue
            if not is_osd_up:
                break

        # 4. Determine the block devices
        retcode, stdout, stderr = common.run_os_command('readlink /var/lib/ceph/osd/ceph-{}/block'.format(osd_id))
        vg_name = stdout.split('/')[-2]  # e.g. /dev/ceph-<uuid>/osd-block-<uuid>
        retcode, stdout, stderr = common.run_os_command('vgs --separator , --noheadings -o pv_name {}'.format(vg_name))
        pv_block = stdout.strip()

        # 5. Zap the volumes
        logger.out('Zapping OSD disk with ID {} on {}'.format(osd_id, pv_block), state='i')
        retcode, stdout, stderr = common.run_os_command('ceph-volume lvm zap --destroy {}'.format(pv_block))
        if retcode:
            print('ceph-volume lvm zap')
            print(stdout)
            print(stderr)
            raise

        # 6. Purge the OSD from Ceph
        logger.out('Purging OSD disk with ID {}'.format(osd_id), state='i')
        retcode, stdout, stderr = common.run_os_command('ceph osd purge {} --yes-i-really-mean-it'.format(osd_id))
        if retcode:
            print('ceph osd purge')
            print(stdout)
            print(stderr)
            raise

        # 7. Delete OSD from ZK
        logger.out('Deleting OSD disk with ID {} from Zookeeper'.format(osd_id), state='i')
        zkhandler.deletekey(zk_conn, '/ceph/osds/{}'.format(osd_id))

        # Log it
        logger.out('Removed OSD disk with ID {}'.format(osd_id), state='o')
        return True
    except Exception as e:
        # Log it
        logger.out('Failed to purge OSD disk with ID {}: {}'.format(osd_id, e), state='e')
        return False
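
The flush-wait loop in step 2 above relies on `ceph pg dump osds --format json` returning a list of per-OSD dicts carrying `osd` and `num_pgs` keys, and it simply breaks out of the loop as soon as anything goes wrong. A standalone sketch of the same idea, with the exception handling narrowed, is shown below; `run_cmd` is a stand-in for `common.run_os_command`, and the exact JSON layout varies between Ceph releases, so treat the key names as assumptions taken from the daemon code rather than a guaranteed interface.

```python
import json
import time

def wait_for_osd_flush(osd_id, run_cmd, poll_interval=5):
    """Poll until the given OSD reports zero placement groups (sketch only)."""
    while True:
        retcode, stdout, stderr = run_cmd('ceph pg dump osds --format json')
        try:
            # Assumed layout (mirrors remove_osd above): a list of dicts,
            # each with an 'osd' ID and a 'num_pgs' count.
            entry = next(o for o in json.loads(stdout) if str(o['osd']) == str(osd_id))
            if entry['num_pgs'] == 0:
                return True
        except (ValueError, KeyError, StopIteration):
            # Bad JSON or OSD no longer listed; stop waiting, as the daemon does.
            return False
        time.sleep(poll_interval)
```
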
class CephPoolInstance(object):
    def __init__(self, zk_conn, this_node, name):
        self.zk_conn = zk_conn
        self.this_node = this_node
        self.name = name
        self.pgs = ''
        self.stats = dict()

        @self.zk_conn.DataWatch('/ceph/pools/{}/pgs'.format(self.name))
        def watch_pool_node(data, stat, event=''):
            if event and event.type == 'DELETED':
                # The key has been deleted after existing before; terminate this watcher
                # because this class instance is about to be reaped in Daemon.py
                return False

            try:
                data = data.decode('ascii')
            except AttributeError:
                data = ''

            if data and data != self.pgs:
                self.pgs = data

        @self.zk_conn.DataWatch('/ceph/pools/{}/stats'.format(self.name))
        def watch_pool_stats(data, stat, event=''):
            if event and event.type == 'DELETED':
                # The key has been deleted after existing before; terminate this watcher
                # because this class instance is about to be reaped in Daemon.py
                return False

            try:
                data = data.decode('ascii')
            except AttributeError:
                data = ''

            if data and data != self.stats:
                self.stats = json.loads(data)

class CephVolumeInstance(object):
    def __init__(self, zk_conn, this_node, pool, name):
        self.zk_conn = zk_conn
        self.this_node = this_node
        self.pool = pool
        self.name = name
        self.stats = dict()

        @self.zk_conn.DataWatch('/ceph/volumes/{}/{}/stats'.format(self.pool, self.name))
        def watch_volume_stats(data, stat, event=''):
            if event and event.type == 'DELETED':
                # The key has been deleted after existing before; terminate this watcher
                # because this class instance is about to be reaped in Daemon.py
                return False

            try:
                data = data.decode('ascii')
            except AttributeError:
                data = ''

            if data and data != self.stats:
                self.stats = json.loads(data)

class CephSnapshotInstance(object):
    def __init__(self, zk_conn, this_node, pool, volume, name):
        self.zk_conn = zk_conn
        self.this_node = this_node
        self.pool = pool
        self.volume = volume
        self.name = name
        self.stats = dict()

        @self.zk_conn.DataWatch('/ceph/snapshots/{}/{}/{}/stats'.format(self.pool, self.volume, self.name))
        def watch_snapshot_stats(data, stat, event=''):
            if event and event.type == 'DELETED':
                # The key has been deleted after existing before; terminate this watcher
                # because this class instance is about to be reaped in Daemon.py
                return False

            try:
                data = data.decode('ascii')
            except AttributeError:
                data = ''

            if data and data != self.stats:
                self.stats = json.loads(data)

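These three instance classes share one pattern: each registers kazoo `DataWatch` callbacks that cache the latest PGs or stats JSON from Zookeeper and unregister themselves (by returning False) once their key is deleted. The wiring that creates and reaps the instances lives in Daemon.py, which is not part of this excerpt; the sketch below is a hypothetical illustration of how a `ChildrenWatch` on the parent key could keep such a collection in sync, not the actual daemon code.

```python
# Hypothetical wiring sketch; the real logic lives in Daemon.py (not shown here).
pool_instances = dict()

@zk_conn.ChildrenWatch('/ceph/pools')
def update_pool_instances(children):
    # Instantiate a watcher object for every pool key that has appeared...
    for name in children:
        if name not in pool_instances:
            pool_instances[name] = CephPoolInstance(zk_conn, this_node, name)
    # ...and drop objects whose key is gone; their DataWatch callbacks have
    # already returned False on the DELETED event and stopped firing.
    for name in list(pool_instances):
        if name not in children:
            del pool_instances[name]
```
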
# Primary command function
# This command pipe is only used for OSD adds and removes
def run_command(zk_conn, logger, this_node, data, d_osd):
    # Get the command and args
    command, args = data.split()

    # Adding a new OSD
    if command == 'osd_add':
        node, device, weight = args.split(',')
        if node == this_node.name:
            # Lock the command queue
            zk_lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
            with zk_lock:
                # Add the OSD
                result = add_osd(zk_conn, logger, node, device, weight)
                # Command succeeded
                if result:
                    # Update the command queue
                    zkhandler.writedata(zk_conn, {'/cmd/ceph': 'success-{}'.format(data)})
                # Command failed
                else:
                    # Update the command queue
                    zkhandler.writedata(zk_conn, {'/cmd/ceph': 'failure-{}'.format(data)})
                # Wait 1 second before we free the lock, to ensure the client hits the lock
                time.sleep(1)

    # Removing an OSD
    elif command == 'osd_remove':
        osd_id = args

        # Verify osd_id is in the list
        if d_osd[osd_id] and d_osd[osd_id].node == this_node.name:
            # Lock the command queue
            zk_lock = zkhandler.writelock(zk_conn, '/cmd/ceph')
            with zk_lock:
                # Remove the OSD
                result = remove_osd(zk_conn, logger, osd_id, d_osd[osd_id])
                # Command succeeded
                if result:
                    # Update the command queue
                    zkhandler.writedata(zk_conn, {'/cmd/ceph': 'success-{}'.format(data)})
                # Command failed
                else:
                    # Update the command queue
                    zkhandler.writedata(zk_conn, {'/cmd/ceph': 'failure-{}'.format(data)})
                # Wait 1 second before we free the lock, to ensure the client hits the lock
                time.sleep(1)
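
run_command() is driven by writes to the `/cmd/ceph` Zookeeper key: the payload is a command word followed by its arguments (`osd_add node,device,weight` or `osd_remove osd_id`), and the handling node overwrites the key with a `success-` or `failure-` prefixed copy of the payload once it is done. A minimal client-side sketch of that round trip follows; it uses kazoo directly for illustration, and the hostname, device path, and Zookeeper address are invented examples rather than values from this repository.

```python
# Illustrative client for the /cmd/ceph command pipe described above.
import time
import kazoo.client

zk = kazoo.client.KazooClient(hosts='127.0.0.1:2181')  # example address
zk.start()

payload = 'osd_add hv1,/dev/sdb,1.0'  # example node, device, and weight
zk.set('/cmd/ceph', payload.encode('ascii'))

# Poll until the node that owns the device reports the outcome.
while True:
    data, _stat = zk.get('/cmd/ceph')
    result = data.decode('ascii')
    if result.startswith('success-') or result.startswith('failure-'):
        break
    time.sleep(0.5)

print(result)
zk.stop()
```
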
@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# DNSAggregatorInstance.py - Class implementing a DNS aggregator and run by pvcd
+# DNSAggregatorInstance.py - Class implementing a DNS aggregator and run by pvcnoded
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -28,9 +28,9 @@ import dns.zone
 import dns.query
 import psycopg2

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
-import pvcd.common as common
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler
+import pvcnoded.common as common

 class DNSAggregatorInstance(object):
     # Initialization function
@@ -336,7 +336,11 @@ class AXFRDaemonInstance(object):
             zone_modified = False

             # Set up our SQL cursor
-            sql_curs = self.sql_conn.cursor()
+            try:
+                sql_curs = self.sql_conn.cursor()
+            except:
+                time.sleep(0.5)
+                continue

             # Set up our basic variables
             domain = network.domain

@@ -21,7 +21,7 @@
 ###############################################################################

 # Version string for startup output
-version = '0.6'
+version = '0.7'

 import kazoo.client
 import libvirt
@@ -44,17 +44,17 @@ import apscheduler.schedulers.background

 from distutils.util import strtobool

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
-import pvcd.fencing as fencing
-import pvcd.common as common
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler
+import pvcnoded.fencing as fencing
+import pvcnoded.common as common

-import pvcd.VMInstance as VMInstance
-import pvcd.NodeInstance as NodeInstance
-import pvcd.VXNetworkInstance as VXNetworkInstance
-import pvcd.DNSAggregatorInstance as DNSAggregatorInstance
-import pvcd.CephInstance as CephInstance
-import pvcd.MetadataAPIInstance as MetadataAPIInstance
+import pvcnoded.VMInstance as VMInstance
+import pvcnoded.NodeInstance as NodeInstance
+import pvcnoded.VXNetworkInstance as VXNetworkInstance
+import pvcnoded.DNSAggregatorInstance as DNSAggregatorInstance
+import pvcnoded.CephInstance as CephInstance
+import pvcnoded.MetadataAPIInstance as MetadataAPIInstance

 ###############################################################################
 # PVCD - node daemon startup program
@@ -99,9 +99,9 @@ def stopKeepaliveTimer():

 # Get the config file variable from the environment
 try:
-    pvcd_config_file = os.environ['PVCD_CONFIG_FILE']
+    pvcnoded_config_file = os.environ['PVCD_CONFIG_FILE']
 except:
-    print('ERROR: The "PVCD_CONFIG_FILE" environment variable must be set before starting pvcd.')
+    print('ERROR: The "PVCD_CONFIG_FILE" environment variable must be set before starting pvcnoded.')
     exit(1)

 # Set local hostname and domain variables
@@ -126,10 +126,10 @@ staticdata.append(subprocess.run(['uname', '-o'], stdout=subprocess.PIPE).stdout
 staticdata.append(subprocess.run(['uname', '-m'], stdout=subprocess.PIPE).stdout.decode('ascii').strip())

 # Read and parse the config file
-def readConfig(pvcd_config_file, myhostname):
-    print('Loading configuration from file "{}"'.format(pvcd_config_file))
+def readConfig(pvcnoded_config_file, myhostname):
+    print('Loading configuration from file "{}"'.format(pvcnoded_config_file))

-    with open(pvcd_config_file, 'r') as cfgfile:
+    with open(pvcnoded_config_file, 'r') as cfgfile:
         try:
             o_config = yaml.load(cfgfile)
         except Exception as e:
@@ -272,7 +272,7 @@ def readConfig(pvcd_config_file, myhostname):
     return config

 # Get the config object from readConfig()
-config = readConfig(pvcd_config_file, myhostname)
+config = readConfig(pvcnoded_config_file, myhostname)
 debug = config['debug']
 if debug:
     print('DEBUG MODE ENABLED')
@@ -335,7 +335,7 @@ logger.out(' CPUs: {}'.format(staticdata[0]))
 logger.out(' Arch: {}'.format(staticdata[3]))
 logger.out(' OS: {}'.format(staticdata[2]))
 logger.out(' Kernel: {}'.format(staticdata[1]))
-logger.out('Starting pvcd on host {}'.format(myfqdn), state='s')
+logger.out('Starting pvcnoded on host {}'.format(myfqdn), state='s')

 # Define some colours for future messages if applicable
 if config['log_colours']:
@@ -421,7 +421,7 @@ if enable_networking:
     common.run_os_command('ip route add default via {} dev {}'.format(upstream_gateway, 'brupstream'))

 ###############################################################################
-# PHASE 2b - Prepare sysctl for pvcd
+# PHASE 2b - Prepare sysctl for pvcnoded
 ###############################################################################

 if enable_networking:
@@ -553,7 +553,7 @@ def cleanup():
     # Set shutdown state in Zookeeper
     zkhandler.writedata(zk_conn, { '/nodes/{}/daemonstate'.format(myhostname): 'shutdown' })

-    logger.out('Terminating pvcd and cleaning up', state='s')
+    logger.out('Terminating pvcnoded and cleaning up', state='s')

     # Stop keepalive thread
     try:
@@ -575,14 +575,17 @@ def cleanup():
         pass

     # Force into secondary network state if needed
-    if zkhandler.readdata(zk_conn, '/nodes/{}/routerstate'.format(myhostname)) == 'primary':
-        is_primary = True
-        zkhandler.writedata(zk_conn, {
-            '/nodes/{}/routerstate'.format(myhostname): 'secondary',
-            '/primary_node': 'none'
-        })
-        logger.out('Waiting 5 seconds for primary migration', state='s')
-        time.sleep(5)
+    try:
+        if this_node.router_state == 'primary':
+            is_primary = True
+            zkhandler.writedata(zk_conn, {
+                '/primary_node': 'none'
+            })
+            logger.out('Waiting for primary migration', state='s')
+            while this_node.router_state != 'secondary':
+                time.sleep(1)
+    except:
+        pass

     # Set stop state in Zookeeper
     zkhandler.writedata(zk_conn, { '/nodes/{}/daemonstate'.format(myhostname): 'stop' })
@@ -825,9 +828,10 @@ def update_primary(new_primary, stat, event=''):
            logger.out('Contending for primary coordinator state', state='i')
            zkhandler.writedata(zk_conn, {'/primary_node': myhostname})
        elif new_primary == myhostname:
-           zkhandler.writedata(zk_conn, {'/nodes/{}/routerstate'.format(myhostname): 'primary'})
+           zkhandler.writedata(zk_conn, {'/nodes/{}/routerstate'.format(myhostname): 'takeover'})
        else:
-           zkhandler.writedata(zk_conn, {'/nodes/{}/routerstate'.format(myhostname): 'secondary'})
+           if this_node.router_state != 'secondary':
+               zkhandler.writedata(zk_conn, {'/nodes/{}/routerstate'.format(myhostname): 'relinquish'})
    else:
        zkhandler.writedata(zk_conn, {'/nodes/{}/routerstate'.format(myhostname): 'client'})

@@ -29,8 +29,8 @@ import psycopg2
 from psycopg2.extras import RealDictCursor

 # The metadata server requires client libraries
-import client_lib.vm as pvc_vm
-import client_lib.network as pvc_network
+import daemon_lib.vm as pvc_vm
+import daemon_lib.network as pvc_network

 class MetadataAPIInstance(object):
     mdapi = flask.Flask(__name__)

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# NodeInstance.py - Class implementing a PVC node in pvcd
+# NodeInstance.py - Class implementing a PVC node in pvcnoded
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -28,9 +28,9 @@ import time
 import libvirt
 import threading

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
-import pvcd.common as common
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler
+import pvcnoded.common as common

 class NodeInstance(object):
     # Initialization function
@@ -117,16 +117,19 @@ class NodeInstance(object):
             if data != self.router_state:
                 self.router_state = data
                 if self.config['enable_networking']:
-                    if self.router_state == 'primary':
+                    if self.router_state == 'takeover':
                         self.logger.out('Setting node {} to primary state'.format(self.name), state='i')
                         transition_thread = threading.Thread(target=self.become_primary, args=(), kwargs={})
                         transition_thread.start()
-                    else:
+                    if self.router_state == 'relinquish':
                         # Skip becoming secondary unless already running
                         if self.daemon_state == 'run' or self.daemon_state == 'shutdown':
                             self.logger.out('Setting node {} to secondary state'.format(self.name), state='i')
                             transition_thread = threading.Thread(target=self.become_secondary, args=(), kwargs={})
                             transition_thread.start()
+                        else:
+                            # We did nothing, so just become secondary state
+                            zkhandler.writedata(self.zk_conn, {'/nodes/{}/routerstate'.format(self.name): 'secondary'})

         @self.zk_conn.DataWatch('/nodes/{}/domainstate'.format(self.name))
         def watch_node_domainstate(data, stat, event=''):

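Taken together with the update_primary() hunk earlier, this changes the coordinator handover from a direct primary/secondary flip into a small state machine: the contention logic now writes the transitional states 'takeover' and 'relinquish', the router-state watcher above launches become_primary() or become_secondary() on those states, and those methods write the final 'primary' or 'secondary' only after their synchronization phases finish. The mapping below is an editorial summary of that flow, not code from the repository.

```python
# Editorial summary of the router-state transitions introduced by this changeset.
ROUTER_STATE_FLOW = {
    'takeover': 'primary',      # become_primary() writes 'primary' once its phases complete
    'relinquish': 'secondary',  # become_secondary() writes 'secondary' once its phases complete
}

def is_transitional(router_state):
    """True while a node is still moving between secondary and primary."""
    return router_state in ROUTER_STATE_FLOW
```
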
@@ -428,8 +431,8 @@ class NodeInstance(object):
         self.logger.out('Setting Patroni leader to this node', state='i')
         tick = 1
         patroni_failed = True
-        # As long as we're primary, keep trying to set the Patroni leader to us
-        while self.router_state == 'primary':
+        # As long as we're in takeover, keep trying to set the Patroni leader to us
+        while self.router_state == 'takeover':
             # Switch Patroni leader to the local instance
             retcode, stdout, stderr = common.run_os_command(
                 """
@@ -452,6 +455,7 @@ class NodeInstance(object):
             # Handle our current Patroni leader being us
             if stdout and stdout.split('\n')[-1].split() == ["Error:", "Switchover", "target", "and", "source", "are", "the", "same."]:
                 self.logger.out('Failed to switch Patroni leader to ourselves; this is fine\n{}'.format(stdout), state='w')
+                patroni_failed = False
                 break
             # Handle a failed switchover
             elif stdout and (stdout.split('\n')[-1].split()[:2] == ["Switchover", "failed,"] or stdout.strip().split('\n')[-1].split()[:1] == ["Error"]):
@@ -471,9 +475,9 @@ class NodeInstance(object):
         # 6. Start client API (and provisioner worker)
         if self.config['enable_api']:
             self.logger.out('Starting PVC API client service', state='i')
-            common.run_os_command("systemctl start pvc-api.service")
+            common.run_os_command("systemctl start pvcapid.service")
             self.logger.out('Starting PVC Provisioner Worker service', state='i')
-            common.run_os_command("systemctl start pvc-provisioner-worker.service")
+            common.run_os_command("systemctl start pvcapid-worker.service")
         # 7. Start metadata API; just continue if we fail
         self.metadata_api.start()
         # 8. Start DHCP servers
@@ -489,7 +493,10 @@ class NodeInstance(object):
         lock.release()
         self.logger.out('Released write lock for synchronization G', state='o')

+        # Wait 2 seconds for everything to stabilize before we declare all-done
+        time.sleep(2)
         primary_lock.release()
+        zkhandler.writedata(self.zk_conn, {'/nodes/{}/routerstate'.format(self.name): 'primary'})
         self.logger.out('Node {} transitioned to primary state'.format(self.name), state='o')

     def become_secondary(self):
@@ -525,7 +532,7 @@ class NodeInstance(object):
         # 3. Stop client API
         if self.config['enable_api']:
             self.logger.out('Stopping PVC API client service', state='i')
-            common.run_os_command("systemctl stop pvc-api.service")
+            common.run_os_command("systemctl stop pvcapid.service")
         # 4. Stop metadata API
         self.metadata_api.stop()
         time.sleep(0.1) # Time for new writer to acquire the lock
@@ -611,6 +618,9 @@ class NodeInstance(object):
         lock.release()
         self.logger.out('Released read lock for synchronization G', state='o')

+        # Wait 2 seconds for everything to stabilize before we declare all-done
+        time.sleep(2)
+        zkhandler.writedata(self.zk_conn, {'/nodes/{}/routerstate'.format(self.name): 'secondary'})
         self.logger.out('Node {} transitioned to secondary state'.format(self.name), state='o')

     # Flush all VMs on the host

@@ -32,8 +32,8 @@ from collections import deque
 import fcntl
 import signal

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler

 class VMConsoleWatcherInstance(object):
     # Initialization function

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# VMInstance.py - Class implementing a PVC virtual machine in pvcd
+# VMInstance.py - Class implementing a PVC virtual machine in pvcnoded
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -30,11 +30,11 @@ import libvirt
 import kazoo.client
 import json

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
-import pvcd.common as common
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler
+import pvcnoded.common as common

-import pvcd.VMConsoleWatcherInstance as VMConsoleWatcherInstance
+import pvcnoded.VMConsoleWatcherInstance as VMConsoleWatcherInstance

 def flush_locks(zk_conn, logger, dom_uuid):
     logger.out('Flushing RBD locks for VM "{}"'.format(dom_uuid), state='i')
@@ -56,13 +56,13 @@ def flush_locks(zk_conn, logger, dom_uuid):
         # If there's at least one lock
         if lock_list:
             # Loop through the locks
-            for lock, detail in lock_list.items():
+            for lock in lock_list:
                 # Free the lock
-                lock_remove_retcode, lock_remove_stdout, lock_remove_stderr = common.run_os_command('rbd lock remove {} "{}" "{}"'.format(rbd, lock, detail['locker']))
+                lock_remove_retcode, lock_remove_stdout, lock_remove_stderr = common.run_os_command('rbd lock remove {} "{}" "{}"'.format(rbd, lock['id'], lock['locker']))
                 if lock_remove_retcode != 0:
-                    logger.out('Failed to free RBD lock "{}" on volume "{}"\n{}'.format(lock, rbd, lock_remove_stderr), state='e')
+                    logger.out('Failed to free RBD lock "{}" on volume "{}"\n{}'.format(lock['id'], rbd, lock_remove_stderr), state='e')
                     continue
-                logger.out('Freed RBD lock "{}" on volume "{}"'.format(lock, rbd), state='o')
+                logger.out('Freed RBD lock "{}" on volume "{}"'.format(lock['id'], rbd), state='o')

     return True

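The flush_locks() change above swaps dict-style iteration (`lock_list.items()`, with the lock ID as the key) for direct list iteration reading `lock['id']` and `lock['locker']`. That matches a change in the JSON that `rbd lock list --format json` emits; the snippet below illustrates both shapes the two versions of the loop expect, with invented lock IDs and client addresses, so check your own Ceph release's output before relying on either.

```python
import json

# Older-style output: a mapping keyed by lock ID (what the removed loop expected).
old_style = json.loads('{"auto 140": {"locker": "client.4123", "address": "10.0.0.1:0/1"}}')
for lock, detail in old_style.items():
    print(lock, detail['locker'])

# Newer-style output: a list of lock objects (what the updated loop expects).
new_style = json.loads('[{"id": "auto 140", "locker": "client.4123", "address": "10.0.0.1:0/1"}]')
for lock in new_style:
    print(lock['id'], lock['locker'])
```
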
@@ -74,6 +74,9 @@ def run_command(zk_conn, logger, this_node, data):
     # Flushing VM RBD locks
     if command == 'flush_locks':
         dom_uuid = args
+        # If this node is taking over primary state, wait until it's done
+        while this_node.router_state == 'takeover':
+            time.sleep(1)
         if this_node.router_state == 'primary':
             # Lock the command queue
             zk_lock = zkhandler.writelock(zk_conn, '/cmd/domains')

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3

-# VXNetworkInstance.py - Class implementing a PVC VM network and run by pvcd
+# VXNetworkInstance.py - Class implementing a PVC VM network and run by pvcnoded
 # Part of the Parallel Virtual Cluster (PVC) system
 #
 # Copyright (C) 2018-2020 Joshua M. Boniface <joshua@boniface.me>
@@ -25,9 +25,9 @@ import sys
 import time
 from textwrap import dedent

-import pvcd.log as log
-import pvcd.zkhandler as zkhandler
-import pvcd.common as common
+import pvcnoded.log as log
+import pvcnoded.zkhandler as zkhandler
+import pvcnoded.common as common

 class VXNetworkInstance(object):
     # Initialization function
@@ -235,11 +235,11 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out

             if data and self.ip6_gateway != data.decode('ascii'):
                 orig_gateway = self.ip6_gateway
-                if self.this_node.router_state == 'primary':
+                if self.this_node.router_state in ['primary', 'takeover']:
                     if orig_gateway:
                         self.removeGateway6Address()
                 self.ip6_gateway = data.decode('ascii')
-                if self.this_node.router_state == 'primary':
+                if self.this_node.router_state in ['primary', 'takeover']:
                     self.createGateway6Address()
                     if self.dhcp_server_daemon:
                         self.stopDHCPServer()
@@ -257,9 +257,9 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out

             if data and self.dhcp6_flag != ( data.decode('ascii') == 'True' ):
                 self.dhcp6_flag = ( data.decode('ascii') == 'True' )
-                if self.dhcp6_flag and not self.dhcp_server_daemon and self.this_node.router_state == 'primary':
+                if self.dhcp6_flag and not self.dhcp_server_daemon and self.this_node.router_state in ['primary', 'takeover']:
                     self.startDHCPServer()
-                elif self.dhcp_server_daemon and not self.dhcp4_flag and self.this_node.router_state == 'primary':
+                elif self.dhcp_server_daemon and not self.dhcp4_flag and self.this_node.router_state in ['primary', 'takeover']:
                     self.stopDHCPServer()

         @self.zk_conn.DataWatch('/networks/{}/ip4_network'.format(self.vni))
@@ -286,11 +286,11 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out

             if data and self.ip4_gateway != data.decode('ascii'):
                 orig_gateway = self.ip4_gateway
-                if self.this_node.router_state == 'primary':
+                if self.this_node.router_state in ['primary', 'takeover']:
                     if orig_gateway:
                         self.removeGateway4Address()
                 self.ip4_gateway = data.decode('ascii')
-                if self.this_node.router_state == 'primary':
+                if self.this_node.router_state in ['primary', 'takeover']:
                     self.createGateway4Address()
                     if self.dhcp_server_daemon:
                         self.stopDHCPServer()
@@ -308,9 +308,9 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out

             if data and self.dhcp4_flag != ( data.decode('ascii') == 'True' ):
                 self.dhcp4_flag = ( data.decode('ascii') == 'True' )
-                if self.dhcp4_flag and not self.dhcp_server_daemon and self.this_node.router_state == 'primary':
+                if self.dhcp4_flag and not self.dhcp_server_daemon and self.this_node.router_state in ['primary', 'takeover']:
                     self.startDHCPServer()
-                elif self.dhcp_server_daemon and not self.dhcp6_flag and self.this_node.router_state == 'primary':
+                elif self.dhcp_server_daemon and not self.dhcp6_flag and self.this_node.router_state in ['primary', 'takeover']:
                     self.stopDHCPServer()

         @self.zk_conn.DataWatch('/networks/{}/dhcp4_start'.format(self.vni))
@@ -349,7 +349,7 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
             if self.dhcp_reservations != new_reservations:
                 old_reservations = self.dhcp_reservations
                 self.dhcp_reservations = new_reservations
-                if self.this_node.router_state == 'primary':
+                if self.this_node.router_state in ['primary', 'takeover']:
                     self.updateDHCPReservations(old_reservations, new_reservations)
                 if self.dhcp_server_daemon:
                     self.stopDHCPServer()
@@ -601,7 +601,7 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
             self.createGateway4Address()

     def createGateway6Address(self):
-        if self.this_node.router_state == 'primary':
+        if self.this_node.router_state in ['primary', 'takeover']:
             self.logger.out(
                 'Creating gateway {}/{} on interface {}'.format(
                     self.ip6_gateway,
@@ -614,7 +614,7 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
             common.createIPAddress(self.ip6_gateway, self.ip6_cidrnetmask, self.bridge_nic)

     def createGateway4Address(self):
-        if self.this_node.router_state == 'primary':
+        if self.this_node.router_state in ['primary', 'takeover']:
             self.logger.out(
                 'Creating gateway {}/{} on interface {}'.format(
                     self.ip4_gateway,
@@ -627,7 +627,7 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
             common.createIPAddress(self.ip4_gateway, self.ip4_cidrnetmask, self.bridge_nic)

     def startDHCPServer(self):
-        if self.this_node.router_state == 'primary' and self.nettype == 'managed':
+        if self.this_node.router_state in ['primary', 'takeover'] and self.nettype == 'managed':
             self.logger.out(
                 'Starting dnsmasq DHCP server on interface {}'.format(
                     self.bridge_nic
@@ -637,10 +637,10 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
             )

             # Recreate the environment we need for dnsmasq
-            pvcd_config_file = os.environ['PVCD_CONFIG_FILE']
+            pvcnoded_config_file = os.environ['PVCD_CONFIG_FILE']
             dhcp_environment = {
                 'DNSMASQ_BRIDGE_INTERFACE': self.bridge_nic,
-                'PVCD_CONFIG_FILE': pvcd_config_file
+                'PVCD_CONFIG_FILE': pvcnoded_config_file
             }

             # Define the dnsmasq config fragments
@@ -658,7 +658,7 @@ add rule inet filter forward ip6 saddr {netaddr6} counter jump {vxlannic}-out
                 '--log-dhcp',
                 '--keep-in-foreground',
                 '--leasefile-ro',
-                '--dhcp-script={}/pvcd/dnsmasq-zookeeper-leases.py'.format(os.getcwd()),
+                '--dhcp-script={}/pvcnoded/dnsmasq-zookeeper-leases.py'.format(os.getcwd()),
                 '--dhcp-hostsdir={}'.format(self.dnsmasq_hostsdir),
                 '--bind-interfaces',
             ]

Some files were not shown because too many files have changed in this diff.