
proxmox

Manage Proxmox hypervisor and KVM virtual machines or OpenVZ containers.

11,535 downloads

9,221 downloads of the latest version

4.6 quality score

We run a couple of automated scans to help you assess a module's quality. Each module is given a score based on how well the author has formatted their code and documentation, and modules are also checked for malware using VirusTotal.

Please note, the information below is for guidance only and neither of these checks should be considered an endorsement by Puppet.

Support the Puppet Community by contributing to this module

You are welcome to contribute to this module by suggesting new features, currency updates, or fixes. Every contribution is valuable to help ensure that the module remains compatible with the latest Puppet versions and continues to meet community needs. Complete the following steps:

  1. Review the module’s contribution guidelines and any licenses. Ensure that your planned contribution aligns with the author’s standards and any legal requirements.
  2. Fork the repository on GitHub, make changes on a branch of your fork, and submit a pull request. The pull request must clearly document your proposed change.

For questions about updating the module, contact the module’s author.

Version information

  • 0.2.3 (latest)
  • 0.2.2
  • 0.2.1
  • 0.1.0
  • 0.0.2
  • 0.0.1
The latest version, 0.2.3, was released Jun 1st 2015 and is compatible with:
  • Debian

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'gardouille-proxmox', '0.2.3'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add gardouille-proxmox
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install gardouille-proxmox --version 0.2.3

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.


Documentation

gardouille/proxmox — version 0.2.3 Jun 1st 2015

proxmox

Table of Contents

  1. Overview
  2. Module Description
  3. Setup
  4. Usage
  5. Reference
  6. Other notes
  7. Limitations
  8. Development
  9. License

Overview

The proxmox module provides a simple way to manage a Proxmox hypervisor and OpenVZ network configuration with Puppet.

Module Description

The proxmox module automates installing Proxmox on Debian systems.

Setup

What Proxmox affects:

  • Package/service/configuration files for Proxmox.
  • A new sources.list file for Proxmox.
  • Proxmox's cluster (master and nodes).
  • System repository
  • The static table lookup for hostnames (the hosts file).
  • Users and group permissions for WebGUI.
  • WebGUI's service (pveproxy).
  • Kernel modules loaded at boot time.
  • OpenVZ's configuration file.
  • OpenVZ's service.
  • OpenVZ CT network's configuration.

Beginning with Proxmox

To begin using the proxmox module with default parameters, declare the hypervisor class with include proxmox::hypervisor.

Usage

Hypervisor

include proxmox::hypervisor

Note: The module will NOT automatically reboot the system onto the PVE kernel. You will need to reboot it manually and then start the Puppet agent again.

KVM only

If you will only use KVM, you can run a more recent kernel with:

class { 'proxmox::hypervisor':
  kvm_only => true,
}

Disable additional modules

Disable loading of all additional kernel modules at boot time:

class { 'proxmox::hypervisor':
  pve_modules_list => [ '' ],
}

Create a full-KVM cluster (for Ceph)

node "pve_node" {
  # Install a full-KVM hypervisor
  class { 'proxmox::hypervisor':
    pveproxy_allow    => '127.0.0.1,192.168.0.0/24',
    kvm_only          => true,
    cluster_master_ip => '192.168.0.201',
    cluster_name      => 'DeepThought',
  }
  # Access to PVE Webgui
  proxmox::hypervisor::group { 'sysadmin': role => "Administrator", users => [ 'marvin@pam', 'arthur@pam' ] }

  # SSH authorized keys between all nodes without passphrase (the module generates a key if not present)
  ssh_authorized_key { 'hyper01':
    ensure  => present,
    key     => 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDQxnLaBlnujnByt3V7YLZv1+PTjREJ3hphZFdCVNs9ebED55/kEAPmtJzcq2OL7qk8PajvhpB7efuZAatKeCdhILpFBKRrCo/q3MsQUSyaHbrGKs8Kkpz0EBHp1Tgpd8i1+kF1EzVPqT/euNcI6cA3fyMrvdgTI25BwFt93A6bBpf4We7A0l0Ba2nCAs5ekWyKKLh54GO7KBHlMmIzboYpxwgnFcbb9UhuyUz2J6PSC0K+P+hdMXY4dFk/lPMEXLgve/TTPYpgDxgxWMUaobCanwBWcXkZ4MdJw2Qs6TQ0v+cOxX3ogr78w69naGB3joJ4ll31WA+Uo0mcZU3ylFj3',
    type    => 'ssh-rsa',
    user    => 'root',
    options => 'from="192.168.0.201"',
  }
  ssh_authorized_key { 'hyper02':
    ensure  => present,
    key     => 'AAAAB3NzaC1yc2EAAAADAQABAAABAQCxJeQ1R1rhPoig4jZLA8/Haru3nhVMgvDgO7nIqpwuPkDrheINVHOAd+DyQF0I2MtAjzg9gKfyix/cJ0cWMbd6/FdSVJ39dGYtNG9/YwTBcQiYwT0xS4NgJHzKrYE9PH2HEmjTmzcDeZ/u+IZjhO3Kyy9yZKcOhwV6fD+mzjQb4S2zsy67R/aoySbZjuoZYHrBrfjc66WbPbLtsFXIXuk46N376Y5sX37Bj17HhDEdP/lc9v939SswW1RZ2t1mVAjsMdsyBULDZk5av6Uj//YT1KuZBmBWkp7nPp1yt2ANPPGAnEW3oYjzXJd56Xtf3d0nbHOdHvMmIiV9fZyRUATd',
    type    => 'ssh-rsa',
    user    => 'root',
    options => 'from="192.168.0.202"',
  }

  # Verify the authenticity of each host (/etc/ssh/ssh_host_{rsa,ecdsa}_key.pub)
  sshkey { 'hyper01':
    ensure       => present,
    host_aliases => [ 'hyper01.domain.org', '192.168.42.201' ],
    key          => 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ3TC6B3+eVbohjk662FwM/1YUCjMwMT9lmZcNcfllF9Vm082lMXtKix20elUCK9yJDpPWvzFiqdyhgqPAeCNt4=',
    target       => '/root/.ssh/known_hosts',
    type         => 'ecdsa-sha2-nistp256',
  }
  sshkey { 'hyper02':
    ensure       => present,
    host_aliases => [ 'hyper02.domain.org', '192.168.42.202' ],
    key          => 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEqUpnig3DIQVZEr3LxJCVEF/fl4n1s8LNuUUaLRueCW2ygzNBOv2m7O42K/Ok7aa4kjGaXbnneYXMw3wBULJ1U=',
    target       => '/root/.ssh/known_hosts',
    type         => 'ecdsa-sha2-nistp256',
  }

  # If you don't have a DNS service, I recommend an entry for each node in the hosts file
  host { 'hyper01':
    name         => "hyper01.${::domain}",
    ensure       => present,
    ip           => '192.168.42.201',
    host_aliases => 'hyper01',
  }
  host { 'hyper02':
    name         => "hyper02.${::domain}",
    ensure       => present,
    ip           => '192.168.42.202',
    host_aliases => 'hyper02',
  }
}

node /hyper0[12]/ inherits "pve_node" {

}

This will create a Proxmox cluster named "DeepThought"; the master will be "hyper01". You can also manage all the ssh resources (and host entries) manually on each node.

VM

Only OpenVZ is supported right now, but the vm class will check this by itself:

include proxmox::vm

proxmox::vm::openvz

Called automatically by the proxmox::vm class, it manages the network configuration, but only a few setups are possible:

  • Only one Virtual Ethernet device (aka veth) is handled, and it will work with DHCP.
  • If a veth is available, it will be the main network interface (the default gateway is set through eth0).
  • If a veth is available, only one Virtual Network device (aka venet) has a chance to work (the first one), because all other routes will be flushed.
  • If there are only venet devices: no changes.

Reference

Classes

  • proxmox: Main class; does nothing right now.

  • proxmox::hypervisor: Install the Proxmox hypervisor on the system.

  • proxmox::vm: Manage virtual machines and containers.

Defined types

  • proxmox::hypervisor::group: Manage groups for Proxmox WebGUI and set permissions.
proxmox::hypervisor::group { 'sysadmin':
  role  => "Administrator",
  users => [ 'user1@pam', 'toto@pve' ],
}
  • proxmox::hypervisor::user: Manage user for Proxmox WebGUI.
proxmox::hypervisor::user { 'marvin':
  group => 'sysadmin',
}

Mainly used by the proxmox::hypervisor::group defined type to create the group and its permissions and to create/add users to a group, because to add a user to a group via this defined type, the group must already exist.
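For example, a minimal sketch that declares the group before adding an extra user to it (the names and the explicit require are only illustrative, not taken from the module's documentation):

# Create the group and its permissions first
proxmox::hypervisor::group { 'sysadmin':
  role  => 'Administrator',
  users => [ 'marvin@pam' ],
}

# Then add another user to the already existing group
proxmox::hypervisor::user { 'trillian':
  group   => 'sysadmin',
  require => Proxmox::Hypervisor::Group['sysadmin'],
}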

Parameters

proxmox::hypervisor

  • ve_pkg_ensure: What to set the Virtual Environment package to. Can be 'present', 'absent' or a specific version. Defaults to 'present'.
  • ve_pkg_name: The list of Virtual Environment packages. Can be an array [ 'proxmox-ve-2.6.32', 'ksm-control-daemon', 'vzprocps', 'open-iscsi', 'bootlogd', 'pve-firmware' ].
  • kvm_only: If set to 'true', Puppet will install a newer kernel compatible only with KVM. Accepts 'true' or 'false'. Defaults to 'false'.
  • kernel_kvm_pkg_name: The list of packages to install the newer kernel. Can be an array [ 'pve-kernel-3.10.0-9-pve', '...' ].
  • kernel_pkg_name: The list of packages to install a kernel compatible with both KVM and OpenVZ. Can be an array [ 'pve-kernel-2.6.32-39-pve', '...' ].
  • rec_pkg_name: The list of recommended and useful packages for Proxmox. Can be an array [ 'ntp', 'ssh', 'lvm2', 'bridge-utils' ].
  • old_pkg_ensure: What to set unneeded packages to (non-recommended packages, previous kernels, ...). Can be 'present' or 'absent'. Defaults to 'absent'.
  • old_pkg_name: The list of unneeded packages. Can be an array [ 'acpid', 'linux-image-amd64', 'linux-base', 'linux-image-3.2.0-4-amd64' ].
  • pve_enterprise_repo_ensure: Choose to keep the PVE enterprise repository. Can be 'present' or 'absent'. Defaults to 'absent'.
  • pveproxy_default_path: Path of the configuration file read by the PveProxy service. Defaults to '/etc/default/pveproxy'.
  • pveproxy_default_content: Template file used to generate the previous configuration file. Defaults to 'proxmox/hypervisor/pveproxy_default.erb'.
  • pveproxy_allow: Authorized IP addresses. Can be IP addresses, ranges or networks, separated by commas (example: '192.168.0.0/24,10.10.0.1-10.10.0.5'). Defaults to '127.0.0.1'.
  • pveproxy_deny: Unauthorized IP addresses. Can be 'all', or IP addresses, ranges or networks, separated by commas. Defaults to 'all'.
  • pveproxy_policy: The access policy. Can be 'allow' or 'deny'. Defaults to 'deny'.
  • pveproxy_service_name: The WebGUI service name (replaces Apache2 since v3.0). Defaults to 'pveproxy'.
  • pveproxy_service_manage: If set to 'true', Puppet will manage the WebGUI's service. Can be 'true' or 'false'. Defaults to 'true'.
  • pveproxy_service_enabled: If set to 'true', Puppet will ensure the WebGUI's service is running. Can be 'true' or 'false'. Defaults to 'true'.
  • pve_modules_list: The list of additional kernel modules to load at boot time.
  • pve_modules_file_path: The configuration file that will contain the modules list. Defaults to '/etc/modules-load.d/proxmox.conf'.
  • pve_modules_file_content: Template file used to generate the previous configuration file. Defaults to 'proxmox/hypervisor/proxmox_modules.conf.erb'.
  • vz_config_file_path: Path of the main OpenVZ's configuration file. Defaults to '/etc/vz/vz.conf'.
  • vz_config_file_tpl: Template file used to generate the OpenVZ configuration file. Defaults to 'proxmox/hypervisor/vz.conf.erb'.
  • vz_iptables_modules: If set to 'true', OpenVZ will share a list of iptables modules with the containers. Can be 'true' or 'false'. Defaults to 'true'.
  • vz_service_name: The OpenVZ's service name. Defaults to 'vz'.
  • vz_service_manage: If set to 'true', Puppet will manage the OpenVZ's service. Can be 'true' or 'false'. Defaults to 'true'.
  • vz_service_enabled: If set to 'true', Puppet will ensure the OpenVZ's service is running. Can be 'true' or 'false'. Defaults to 'true'.
  • labs_firewall_rule: If set to 'true', Puppet will set an iptables rule to allow access to the WebGUI and VNC ports. Can be 'true' or 'false'. Defaults to 'false'.
  • cluster_master_ip: The IP address of the "master" node that will create the cluster. Must be an IP address. Defaults to 'undef'.
  • cluster_name: The cluster's name. Defaults to 'undef'.
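For illustration only, the following sketch combines several of the parameters above in a single declaration; the values shown are made up and should be adapted to your environment:

class { 'proxmox::hypervisor':
  # Keep the dual KVM/OpenVZ kernel (the default)
  kvm_only                   => false,
  # Restrict WebGUI access to localhost and the management network
  pveproxy_allow             => '127.0.0.1,192.168.0.0/24',
  pveproxy_policy            => 'allow',
  # Drop the PVE enterprise repository
  pve_enterprise_repo_ensure => 'absent',
  # Create/join a cluster
  cluster_master_ip          => '192.168.0.201',
  cluster_name               => 'DeepThought',
}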

proxmox::vm

  • vm_interfaces_path: The main network configuration file. Defaults to '/etc/network/interfaces'.
  • vm_interfaces_content: Template file used to generate the previous configuration file. Defaults to 'proxmox/vm/openvz_interfaces.erb'.
  • vm_interfaces_tail_path: A second network configuration file that will be concatenated into the main one. Defaults to '/etc/network/interfaces.tail'.
  • vm_interfaces_tail_content: Template file used to generate the previous configuration file. Defaults to 'proxmox/vm/openzv_interfaces.tail.erb'.
  • network_service_name: The network service name. Defaults to 'networking'.
  • network_service_manage: If set to 'true', Puppet will manage the network's service. Can be 'true' or 'false'. Defaults to 'true'.
  • network_service_enabled: If set to 'true', Puppet will ensure the network's service is running. Can be 'true' or 'false'. Defaults to 'true'.
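As a hedged sketch (the chosen value is illustrative, not a recommendation), these parameters can be overridden when declaring the class, for example to leave the networking service unmanaged:

class { 'proxmox::vm':
  # Let another profile manage the networking service
  network_service_manage => false,
}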

Other notes

By default, proxmox::hypervisor loads several kernel modules at boot time, mainly iptables modules so that they can be used inside the OpenVZ CTs.

The default modules list:

  • iptable_filter
  • iptable_mangle
  • iptable_nat
  • ipt_length (=xt_length)
  • ipt_limit (=xt_limit)
  • ipt_LOG
  • ipt_MASQUERADE
  • ipt_multiport (=xt_multiport)
  • ipt_owner (=xt_owner)
  • ipt_recent (=xt_recent)
  • ipt_REDIRECT
  • ipt_REJECT
  • ipt_state (=xt_state)
  • ipt_TCPMSS (=xt_TCPMSS)
  • ipt_tcpmss (=xt_tcpmss)
  • ipt_TOS
  • ipt_tos
  • ip_conntrack (=nf_conntrack)
  • ip_nat_ftp (=nf_nat_ftp)
  • xt_iprange
  • xt_comment
  • ip6table_filter
  • ip6table_mangle
  • ip6t_REJECT

See the hypervisor usage section if you want to disable them, or the parameters section if you want to edit this list. A sketch of a trimmed-down list follows.
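For example, a minimal sketch (the selection of modules is purely illustrative) that trims the list down to a few filtering and NAT modules via the pve_modules_list parameter documented above:

class { 'proxmox::hypervisor':
  # Only load a small subset of the default module list
  pve_modules_list => [ 'iptable_filter', 'iptable_nat', 'ipt_state' ],
}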

Limitations

This module will only work on Debian 7.x versions.

Development

Feel free to send contributions, fork it, etc.

License

WTFPL (http://wtfpl.org/)