Install and configure a complete ScaleIO environment.

Swisscom (Schweiz) AG





swisscom/scaleio — version 3.2.2 Mar 20th 2017



This module manages EMC ScaleIO deployments: installation, configuration and management of ScaleIO components. The current version supports ScaleIO 2.0 environments.

This puppet module installs and configures ScaleIO clusters with the following components:

  • MDM
  • MDM cluster
  • SDC
  • SDS
  • LIA

It can also manage these resources:

  • Protection domains
  • Fault sets
  • Storage pools
  • Volumes
  • Users
  • Syslog destinations

Cluster installation & configuration

Below you can find the various parameters explained; hiera is used as the example backend. The same parameters can also be passed directly when declaring the scaleio class. See spec/hiera/stack.yaml for a complete hiera example.
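For the class-based variant, a resource-like declaration might look like this. This is a minimal sketch; the parameter values are placeholders taken from the hiera examples in this README:

```puppet
# Resource-like declaration instead of hiera (values are placeholders):
class { 'scaleio':
  version            => '2.0-6035.0.el7',
  system_name        => 'sysname',
  bootstrap_mdm_name => 'myMDM1',
  components         => ['sds', 'sdc', 'lia'],
}
```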

The general idea behind the workflow is as follows:

  1. Install all components (SDS/SDC/LIA/MDM) on all nodes except the primary MDM (one Puppet run per node)
  2. Install all components (SDS/SDC/LIA/MDM) on the primary MDM
  3. Configure the ScaleIO cluster on the primary MDM/bootstrap node and create/manage the specified resources (such as pool, SDS, SDC, etc.)

ScaleIO packages

It is expected that the following RPM packages are available in a repository so they can be installed with the package manager (i.e. yum). If they are not available, they can be installed via the ScaleIO Installation Manager (IM).

  • EMC-ScaleIO-mdm
  • EMC-ScaleIO-sdc
  • EMC-ScaleIO-sds
  • EMC-ScaleIO-lia

Version locking

The Puppet module locks the versions of the RPMs using the yum-versionlock plugin. This prevents an unintended upgrade of the ScaleIO RPMs when running `yum update`.
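With version locking in place, the plugin's lock file would contain entries along these lines. This is a hedged sketch; the exact epoch/release/arch strings depend on the RPMs actually installed:

```
# /etc/yum/pluginconf.d/versionlock.list (illustrative)
0:EMC-ScaleIO-mdm-2.0-6035.0.el7.*
0:EMC-ScaleIO-sdc-2.0-6035.0.el7.*
0:EMC-ScaleIO-sds-2.0-6035.0.el7.*
0:EMC-ScaleIO-lia-2.0-6035.0.el7.*
```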


Components

Per node, one or many of the following components can be specified and will thus be installed. The MDM and tiebreaker components are installed automatically on the corresponding nodes, based on their IP addresses.

scaleio::components: ['sdc', 'sds', 'lia']

Note: this only installs the appropriate packages on the node. The configuration (i.e. adding the component to the cluster) happens on the primary MDM.


MDM cluster

To bootstrap an MDM cluster, all MDMs and tiebreakers need to be defined, and the cluster bootstrap node needs to be specified. The order during bootstrapping is important; it needs to be as follows:

  1. Install all components on all nodes except the primary MDM/bootstrap node:
       • run Puppet on the secondary MDMs and tiebreakers (installs & configures the MDM package)
       • run Puppet on all SDS nodes
  2. Bootstrap the cluster on the primary MDM:
       • run Puppet on the bootstrap node

The Puppet code to run is the same for each node:

class { 'scaleio': }            # see hiera

And a common hiera file can be used for all elements in the cluster.

scaleio::version: '2.0-6035.0.el7'    # must correspond to binaries
scaleio::bootstrap_mdm_name: myMDM1   # node that does the cluster bootstrapping
scaleio::components: [sds, sdc, lia]  # mdm/tb will be auto installed
scaleio::system_name: sysname
scaleio::mdms:
  myMDM1:                             # name of the MDM
    ips: ''                           # one IP or an array of IPs
    mgmt_ips: ''                      # optional; one IP or an array of IPs
  myMDM2:
    ips: ''
    mgmt_ips: ''
  myMDM3:
    ips: ''
    mgmt_ips: ''
scaleio::tiebreakers:
  myTB1:                              # name of the tiebreaker
    ips: ''                           # one IP or an array of IPs
  myTB2:
    ips: ''


Consul

This module can make use of an existing Consul cluster to manage the bootstrap ordering. To do so, set the parameter

scaleio::use_consul: true

With that, each MDM creates the key scaleio/${::scaleio::system_name}/cluster_setup/${mdm_tb_ip} in the Consul KV store as soon as its MDM service is running. The bootstrap node waits until all MDMs have created their Consul keys, and then bootstraps the cluster.
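As a small illustration of the resulting key layout (the system name and IP below are made-up placeholders):

```shell
# Build the Consul KV key an MDM would create (placeholder values):
system_name="sysname"
mdm_tb_ip="192.0.2.11"
key="scaleio/${system_name}/cluster_setup/${mdm_tb_ip}"
echo "$key"   # prints scaleio/sysname/cluster_setup/192.0.2.11
```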

Protection domains

An array of protection domain names:

scaleio::protection_domains:
  - 'pdo'

Fault sets

An array of fault set names. Each entry consists of the protection domain and the fault set name, separated by a colon (${protection_domain}:${fault_set_name}). This parameter is optional:

scaleio::fault_sets:
  - 'pdo:FaultSetOne'
  - 'pdo:FaultSetTwo'
  - 'pdo:FaultSetThree'

Storage pools

Besides creating a storage pool, the module supports managing:

  • the spare policy
  • the background device scanner
  • enabling/disabling RAM cache on a per-pool basis
  • enabling/disabling rfcache on a per-pool basis
  • activating/deactivating zero padding

scaleio::storage_pools:
  'pdo:pool1':                  # ${protection_domain}:${pool_name}
    spare_policy: 34%
    ramcache: 'enabled'
    rfcache: 'enabled'
    zeropadding: true
    device_scanner_mode: device_only
    device_scanner_bandwidth: 512
  'pdo:pool2':
    spare_policy: 34%
    ramcache: 'disabled'
    rfcache: 'disabled'
    zeropadding: false
    device_scanner_mode: disabled


SDS

On the SDS level, the following settings are manageable:

  • which device belongs to which pool
  • the IPs of the SDS (only one SDS per server is supported)
  • the size of the RAM cache
  • rfcache devices
  • enabling/disabling rfcache on a per-SDS basis
  • which fault set the SDS is part of (optional)

To reduce configuration, defaults can be specified across all SDSs. With the following example, the resulting configuration is:

  • All SDSs are part of the 'pdo' protection domain.
  • RAM cache is 128MB (the default size) on sds-1, 1024MB on sds-2, and disabled on sds-3.
  • /dev/sdb is part of the storage pool 'pool1' for all SDSs, except for sds-3, where it is part of 'pool2'.

scaleio::sds_defaults:
  protection_domain: 'pdo'
  pool_devices:
    'pool1':
      - '/dev/sdb'
    'pool2':
      - '/dev/sdc'
  rfcache: 'enabled'

scaleio::sds:
  'sds-1':
    fault_set: FaultSetOne    # optional
    ips: ['']
  'sds-2':
    fault_set: FaultSetTwo    # optional
    ramcache_size: 1024
    ips: ['']
    rfcache: 'enabled'
  'sds-3':
    fault_set: FaultSetThree  # optional
    ips: ['']
    ramcache_size: -1         # disables the RAM cache
    pool_devices:
      'pool2':
        - '/dev/sdb'
        - '/dev/sdd'


SDCs

Approve the SDCs and give them a name (desc). The hash keys are the IPs of the SDCs:

scaleio::sdcs:
  '':                   # IP of the SDC
    desc: 'sdc-1'
  '':
    desc: 'sdc-2'
  '':
    desc: 'sdc-3'


Volumes

Create ScaleIO volumes and map them to SDCs. The two examples should be self-explanatory:

scaleio::volumes:
  'volume-1':             # name of the volume
    protection_domain: pdo
    storage_pool: pool1
    size: 8
    type: thick
    sdc_nodes:
      - sdc-1
  'volume-2':
    protection_domain: pdo
    storage_pool: pool2
    size: 16
    type: thin
    sdc_nodes:
      - sdc-1
      - sdc-2


Users

Create users and manage their passwords (except for the admin user):

scaleio::users:
  'api_user':             # name of the user
    role: 'Administrator'
    password: 'myPassAPI1'
  'monitor_user':
    role: 'Monitor'
    password: 'MonPW123'

General parameters

scaleio::version: '2.0-6035.0.el7'             # specific version to be installed
scaleio::password: 'myS3cr3t'                  # password of the admin user
scaleio::old_password: 'admin'                 # old password of the admin (only required for PW change)
scaleio::use_consul: false                     # use consul for bootstrapping
scaleio::purge: false                          # purge the resources if not defined in puppet parameter (for more granularity, see scaleio::mdm::resources)
scaleio::restricted_sdc_mode: true             # use the restricted SDC mode
scaleio::component_authentication_mode: true   # use authentication between system components
scaleio::syslog_ip_port: undef                 # syslog destination, eg: 'host:1245'
scaleio::monitoring_user: 'monitoring'         # name of the ScaleIO monitoring user to be created
scaleio::monitoring_passwd: 'Monitor1'         # password of the monitoring user
scaleio::external_monitoring_user: false       # name of a linux user that shall get sudo permissions for

Primary MDM switch

The ScaleIO configuration always takes place on the primary MDM. This means that if the primary MDM switches, the next Puppet run will apply the ScaleIO configuration there. Exception: bootstrapping. If Puppet runs on the bootstrap node and ScaleIO is not yet installed, it will bootstrap a new cluster.

ScaleIO upgrade

Proceed with the following steps:

  1. Install the 'lia' component on all ScaleIO nodes using this puppet module (scaleio::components: ['lia']).
  2. Disable Puppet
  3. Upgrade the installation manager
  4. Do the actual upgrade using the installation manager
  5. Set the correct (new) version (scaleio::version: XXXX) for version locking.
  6. Enable Puppet again


scli wrapper

This module uses a script called '' for executing scli commands. That wrapper script does a login, executes the command, and does a logout at the end. To avoid race conditions, there is a locking mechanism around those three steps. As a by-product, this Puppet module creates a symlink from /usr/bin/si to that wrapper script and adds bash completion for it. Enjoy running scli commands ;)
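The wrapper's login/execute/logout pattern with locking could be sketched as follows. This is an assumption-laden illustration, not the module's actual script: the real script name is not given in this README, the lock file path is made up, and `scli` is replaced by a stand-in function so the sketch is self-contained:

```shell
# Stand-in for the real scli binary, so this sketch runs anywhere (assumption):
scli() { echo "scli $*"; }

run_scli() {
  (
    flock -x 9                      # serialize concurrent invocations
    scli --login >/dev/null         # 1. login
    scli "$@"                       # 2. execute the actual command
    local rc=$?
    scli --logout >/dev/null        # 3. logout
    return $rc
  ) 9>/tmp/scaleio-scli.lock        # hypothetical lock file path
}

run_scli --query-cluster            # prints: scli --query-cluster
```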

Limitations - OS Support, etc.

The table below outlines specific OS requirements for ScaleIO versions.

ScaleIO Version | Minimum Supported Linux OS
2.0.X           | CentOS ~> 6.5 / Red Hat ~> 6.5 / CentOS ~> 7.0 / Red Hat ~> 7.0

Please log tickets and issues at our Projects site

Puppet Supported Versions

This module requires a minimum version of Puppet 3.7.0.

MIT license, see