Correct spelling in all documentation
@@ -6,9 +6,9 @@ Server management and system administration have changed significantly in the la

As part of this trend, the rise of IaaS (Infrastructure as a Service) has created an entirely new way for administrators and, increasingly, developers to interact with servers. They need to be able to provision virtual machines easily and quickly, to ensure those virtual machines are reliable and consistent, and to avoid downtime wherever possible.

-However, the state of the Free Software, virtual management ecosystem in 2019 is quite dissapointing. On the one hand are the giant, IaaS products like OpenStack and CloudStack. These are massive pieces of software, featuring dozens of interlocking parts, designed for massive clusters and public cloud deployments. They're great for a "hyperscale" provider, a large-scale SaaS/IaaS provider, or an enterprise. But they're not designed for small teams or small clusters. On the other hand, tools like Proxmox, oVirt, and even good old fashioned shell scripts are barely scalable, are showing their age, and have become increasily unweildy for advanced usecases - great for one server, not so great for 9 in a highly-available cluster. Not to mention the constant attempts to monitize by throwing features behind Enterprise subscriptions. In short, there is a massive gap between the old-style, pet-based virtualization and the modern, large-scale, IaaS-type virtualization.

+However, the state of the Free Software, virtual management ecosystem in 2019 is quite disappointing. On the one hand are the giant, IaaS products like OpenStack and CloudStack. These are massive pieces of software, featuring dozens of interlocking parts, designed for massive clusters and public cloud deployments. They're great for a "hyperscale" provider, a large-scale SaaS/IaaS provider, or an enterprise. But they're not designed for small teams or small clusters. On the other hand, tools like Proxmox, oVirt, and even good old-fashioned shell scripts are barely scalable, are showing their age, and have become increasingly unwieldy for advanced use-cases - great for one server, not so great for 9 in a highly-available cluster. Not to mention the constant attempts to monetize by throwing features behind Enterprise subscriptions. In short, there is a massive gap between the old-style, pet-based virtualization and the modern, large-scale, IaaS-type virtualization.

-PVC aims to bridge this gap. As a Python 3-based, fully-Free Software, scalable, and redundant private "cloud" that isn't afraid to say it's for small clusters, PVC is able to provide the simple, easy-to-use, small cluster you need today, with minimal administrator work, while being able to scale as your system grows, supporting hundreds or thousands of VMs across dozens of nodes. High availability is baked right into the core software, giving you piece of mind about your cluster, and ensuring that your systems keep running no matter what happens. And the interface couldn't be easier - a straightforward Click-based CLI and a Flask-based HTTP API provide access to the cluster for you to manage, either directly or though scripts or WebUIs. And since everything is Free Software, you can always inspect it, customize it to your usecase, add features, and contribute back to the community if you so choose.

+PVC aims to bridge this gap. As a Python 3-based, fully-Free Software, scalable, and redundant private "cloud" that isn't afraid to say it's for small clusters, PVC is able to provide the simple, easy-to-use, small cluster you need today, with minimal administrator work, while being able to scale as your system grows, supporting hundreds or thousands of VMs across dozens of nodes. High availability is baked right into the core software, giving you peace of mind about your cluster, and ensuring that your systems keep running no matter what happens. And the interface couldn't be easier - a straightforward Click-based CLI and a Flask-based HTTP API provide access to the cluster for you to manage, either directly or through scripts or WebUIs. And since everything is Free Software, you can always inspect it, customize it to your use-case, add features, and contribute back to the community if you so choose.

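To make the "straightforward Click-based CLI" idea concrete, here is a toy sketch of a CLI built the same way. This is not PVC's actual command tree; the group and command names below are invented purely for illustration.

```python
# A toy Click-based CLI in the style the text describes. The commands
# here are hypothetical and do not correspond to PVC's real CLI.
import click

@click.group()
def cli():
    """Manage a (hypothetical) PVC-style cluster."""

@cli.group()
def vm():
    """Virtual machine commands."""

@vm.command("list")
@click.option("--node", default=None, help="Limit the listing to one node.")
def vm_list(node):
    """List VMs, optionally filtered by node."""
    click.echo(f"Would query the cluster for VMs on {node or 'all nodes'}.")

if __name__ == "__main__":
    cli()
```

Click turns the nested groups into subcommands, so an invocation looks like `python cli.py vm list --node node1`.
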
PVC provides all the features you'd expect of a "cloud" system - easy management of VMs, including live migration between nodes for maximum uptime; virtual networking support using either vLANs or EVPN-based VXLAN; shared, redundant, object-based storage using Ceph; and a convenient API interface for building your own interfaces. It is able to do this without being excessively complex, and without making sacrifices for legacy ideas.

@@ -16,7 +16,7 @@ If you need to run virtual machines, and don't have the time to learn the Stacks

## Cluster Architecture

-A PVC cluster is based around "nodes", which are physical servers on which the various daemons, storage, networks, and virtual machines run. Each node is self-contained; it is able to perform any and all cluster functions if needed, and there are no segmentations of function between different types of physical hosts.

+A PVC cluster is based around "nodes", which are physical servers on which the various daemons, storage, networks, and virtual machines run. Each node is self-contained; it is able to perform any and all cluster functions if needed, and there is no segmentation of function between different types of physical hosts.

A limited number of nodes, called "coordinators", are statically configured to provide additional services for the cluster. All databases, for instance, run on the coordinators but not on other nodes. This prevents any issues with scaling database clusters across dozens of hosts, while still retaining maximum redundancy. In a standard configuration, 3 or 5 nodes are designated as coordinators, and additional nodes connect to the coordinators for database access where required. For quorum purposes, there should always be an odd number of coordinators, and exceeding 5 is likely not required even for large clusters.

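The odd-number guidance is plain majority-quorum arithmetic: a group of n members stays available only while a strict majority survives, so an even count buys no extra fault tolerance over the odd count below it. A minimal sketch of that arithmetic in plain Python (illustrative only, not PVC code):

```python
# Majority-quorum arithmetic: a cluster of n members needs
# floor(n/2) + 1 survivors to keep a quorum.
def failures_tolerated(n: int) -> int:
    quorum = n // 2 + 1
    return n - quorum

for n in (3, 4, 5):
    print(f"{n} coordinators: quorum {n // 2 + 1}, tolerates {failures_tolerated(n)} failure(s)")
```

Running this shows 3 coordinators tolerate 1 failure and 5 tolerate 2, while 4 would still tolerate only 1 - which is why coordinator counts are kept odd.
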
@@ -46,7 +46,7 @@ The CLI client manual can be found at the [CLI manual page](/manuals/cli).

### API client

-The HTTP API client is a more advanced interface to the PVC cluster, suitable for creating custom interfaces for PVC and providing better access control than the CLI. It is a Python 3 Flask application which also interfaces directly with the Zookeper cluster, and provides services on port 7370 by default. The API features a basic, key-based authentication mechanism to prevent unauthorized access, though this is optional, and can also provide HTTPS support if required for maximum security over public networks. With the exception of cluster initialization, the API can perform all functions that the CLI client can using a RESTful layout. Requests return JSON, and POST requests expect HTTP form bodies.

+The HTTP API client is a more advanced interface to the PVC cluster, suitable for creating custom interfaces for PVC and providing better access control than the CLI. It is a Python 3 Flask application which also interfaces directly with the Zookeeper cluster, and provides services on port 7370 by default. The API features a basic, key-based authentication mechanism to prevent unauthorized access, though this is optional, and can also provide HTTPS support if required for maximum security over public networks. With the exception of cluster initialization, the API can perform all functions that the CLI client can, using a RESTful layout. Requests return JSON, and POST requests expect HTTP form bodies.

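As an illustration of the request conventions just described, here is a hedged Python sketch using the `requests` library. The hostname, endpoint paths, and authentication header name are assumptions made up for the example; only the default port (7370), key-based authentication, JSON responses, and form-encoded POST bodies come from the paragraph above.

```python
# Hypothetical interaction with a PVC-style HTTP API. Endpoint paths
# and the authentication header name are illustrative assumptions.
import requests

API = "http://pvc.example.com:7370"        # hypothetical coordinator address
HEADERS = {"X-Api-Key": "my-secret-key"}   # key-based auth; header name assumed

# Read requests return JSON.
resp = requests.get(f"{API}/api/v1/vm", headers=HEADERS)
resp.raise_for_status()
print(resp.json())

# POST requests expect HTTP form bodies, so pass data= (form-encoded),
# not json=. The endpoint and field here are again illustrative.
resp = requests.post(f"{API}/api/v1/vm/testvm/start", headers=HEADERS, data={"wait": "true"})
resp.raise_for_status()
```
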
Further information about the API client architecture can be found at the [API client architecture page](/architecture/api).