Forge Home

ceph

Ceph orchestration

18,888 downloads

7,107 latest version downloads

4.3 quality score

We run a couple of automated scans to help you assess a module's quality. Each module is given a score based on how well the author has formatted their code and documentation, and modules are also checked for malware using VirusTotal.

Please note, the information below is for guidance only and neither of these methods should be considered an endorsement by Puppet.

Version information

  • 4.0.0 (latest)
  • 3.1.2
  • 3.1.1
  • 3.1.0
  • 2.0.2
  • 2.0.1
  • 2.0.0
  • 1.5.3
  • 1.5.2
  • 1.5.1
  • 1.5.0
  • 1.4.3
  • 1.4.2
  • 1.4.1
  • 1.4.0
  • 1.3.4
  • 1.3.3
  • 1.3.2 (deleted)
  • 1.3.1
  • 1.3.0
  • 1.2.0
  • 1.1.0
  • 1.0.0
4.0.0 released Sep 13th 2017
This version is compatible with:
  • Puppet Enterprise 2017.2.x, 2017.1.x, 2016.5.x, 2016.4.x
  • Puppet >= 4.4.0 < 5.0.0

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'spjmurray-ceph', '4.0.0'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add spjmurray-ceph
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install spjmurray-ceph --version 4.0.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.

Download
Tags: ceph

Documentation

spjmurray/ceph — version 4.0.0 Sep 13th 2017

Ceph


Table Of Contents

  1. Overview
  2. Module Description
  3. Compatibility Matrix
  4. Setup - The basics of getting started with ceph
  5. Usage
  6. Limitations

Overview

Deploys Ceph components

Module Description

A very lightweight implementation of Ceph deployment. This module depends quite heavily on knowledge of how the various ceph commands work and of the requisite configuration values. See the usage example below.

The osd custom type operates on SCSI addresses, e.g. '1:0:0:0/6:0:0:0'. This aims to solve the problem of a disk being removed from the cluster and replaced: the device node name is liable to change from /dev/sdb to /dev/sde, so hard-coding device names is fragile. If we instead model the deployment on hardware location, we can derive the device name, probe the drive's partition type, and provision based on whether ceph-disk has already been run.
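For illustration only (this is not the provider's actual code), the data/journal address pair could be split and a SCSI address resolved to its current device node via the kernel's standard sysfs layout, roughly like this hypothetical Python sketch:

```python
def parse_osd_address(pair):
    """Split an OSD address such as '1:0:0:0/6:0:0:0' into
    (data, journal) SCSI host:bus:target:lun addresses.
    A bare address without a '/' has no separate journal."""
    data, _, journal = pair.partition('/')
    return data, (journal or None)

def scsi_to_block_device(addr):
    """Resolve a SCSI address to its current /dev node: the kernel
    exposes /sys/bus/scsi/devices/<h:b:t:l>/block/<name> regardless
    of how the disks were enumerated at boot."""
    import os
    names = os.listdir('/sys/bus/scsi/devices/%s/block' % addr)
    return '/dev/' + names[0]
```

This is why the address stays stable across disk swaps: the sysfs path is keyed on the hardware location, not the enumeration order.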

The OSD provider can also operate on enclosures with SES firmware running on a SAS expander. In some cases SCSI addresses aren't predictable and are susceptible to the same enumeration problem as /dev device names. In these cases the devices can be provisioned with 'Slot 01', which directly correlates to a slot name found in sysfs under /sys/class/enclosure. On newer expanders the labels may be formatted as DISK00, which is also supported. The two addressing modes can be used interchangeably.
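Tolerating both label styles can be sketched by comparing the trailing slot number; this is a hypothetical illustration, not the provider's actual matching logic:

```python
import re

def slot_number(label):
    """Extract the trailing number from a slot label such as
    'Slot 01' or 'DISK00'; returns None if there is no number."""
    m = re.search(r'(\d+)\s*$', label.strip())
    return int(m.group(1)) if m else None

def slot_matches(requested, sysfs_label):
    """True when both labels carry the same slot number, whichever
    of the two SES naming styles each one uses."""
    a, b = slot_number(requested), slot_number(sysfs_label)
    return a is not None and a == b
```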

The OSD provider also accepts an arbitrary hash of parameters to be passed to ceph-disk. The keys are the long options supported by ceph-disk, stripped of the leading double hyphens. Values are either strings or nil/undef, the latter for options without arguments such as --bluestore. These may be specified for all OSDs under the special name defaults.
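As an illustrative sketch (not the provider's code), a params hash like the one in the usage section below maps onto ceph-disk's command line roughly as follows:

```python
def ceph_disk_args(params):
    """Render a params hash into ceph-disk long options: each key
    gains a '--' prefix, string values become option arguments, and
    None (nil/undef, '~' in Hiera) yields a bare flag such as
    --bluestore. Keys are sorted for a deterministic command line."""
    args = []
    for key, value in sorted(params.items()):
        args.append('--' + key)
        if value is not None:
            args.append(str(value))
    return args
```

So a Hiera entry of `fs-type: 'xfs'` plus `filestore: ~` would end up on the ceph-disk invocation as `--filestore --fs-type xfs`.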

Compatibility Matrix

Version  Operating System                          Ceph  Puppet
1.0.x    Ubuntu 14.04                              0.94  3
1.1.x    Ubuntu 14.04                              0.94  3
1.2.x    Ubuntu 14.04                              0.94  3
1.3.x    Ubuntu 14.04, CentOS 7                    0.94  3
1.4.x    Ubuntu 14.04, CentOS 7                    9     3, 4
1.5.x    Ubuntu 14.04, Ubuntu 16.04*, CentOS 7     10    3, 4
2.0.x    Ubuntu 14.04, Ubuntu 16.04*, CentOS 7     10    3, 4
3.1.x    Ubuntu 14.04, Ubuntu 16.04, CentOS 7      10    4
4.0.x    Debian 8 and 9**, Ubuntu 16.04, CentOS 7  12    4, 5

* Ubuntu 16.04 only tested with Puppet 4
** Debian 8 and 9 only tested with Puppet 5

Usage

Basic Usage

To create a simple all-in-one server for test and development work:

include ::ceph

Hiera data should look like the following:

---
# Deployment options
ceph::mon: true
ceph::osd: true

# Install options
ceph::manage_repo: true
ceph::repo_version: 'luminous'

# ceph.conf
ceph::conf:
  global:
    fsid: '62ed9bd6-adf4-11e4-8fb5-3c970ebb2b86'
    mon_initial_members: "%{hostname}"
    mon_host: "%{ipaddress_eth0}"
    public_network: "%{network_eth0}/24"
    cluster_network: "%{network_eth1}/24"
    auth_supported: 'cephx'
    filestore_xattr_use_omap: 'true'
    osd_crush_chooseleaf_type: '0'
  osd:
    osd_journal_size: '1000'

# Create these keyrings on the monitors
ceph::keys:
  client.admin:
    key: 'AQBAyNlUmO09CxAA2u2p6s38wKkBXaLWFeD7bA=='
    caps:
      mon: 'allow *'
      osd: 'allow *'
      mds: 'allow'
      mgr: 'allow *'
    path: '/etc/ceph/ceph.client.admin.keyring'
  client.bootstrap-mgr:
    key: 'AQC82ppZVlWnABAAPCihMcu7yoTtyjGiCwycDA=='
    caps:
      mon: 'allow profile bootstrap-mgr'
    path: '/var/lib/ceph/bootstrap-mgr/ceph.keyring'
  client.bootstrap-osd:
    key: 'AQDLGtpUdYopJxAAnUZHBu0zuI0IEVKTrzmaGg=='
    caps:
      mon: 'allow profile bootstrap-osd'
    path: '/var/lib/ceph/bootstrap-osd/ceph.keyring'
  client.bootstrap-mds:
    key: 'AQDLGtpUlWDNMRAAVyjXjppZXkEmULAl93MbHQ=='
    caps:
      mon: 'allow profile bootstrap-mds'
    path: '/var/lib/ceph/bootstrap-mds/ceph.keyring'

# Create the OSDs
ceph::disks:
  defaults:
    params:
      fs-type: 'xfs'
      filestore: ~
  3:0:0:0:
    journal: '6:0:0:0'
  4:0:0:0:
    journal: '6:0:0:0'
  5:0:0:0:
    journal: '6:0:0:0'

Advanced Usage

It is recommended to enable deep merging so that global configuration can be defined in common.yaml and role- or host-specific configuration merged with the global section. A typical production deployment may look similar to the following:
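With a Hiera 3 layout like the one implied by the data file paths below, deep merging is typically switched on in hiera.yaml. This is a minimal sketch: the hierarchy levels are assumptions inferred from those paths, and :merge_behavior: deeper requires the deep_merge gem to be installed.

```yaml
# hiera.yaml -- illustrative Hiera 3 sketch; hierarchy entries are
# inferred from the data file paths below and will differ per site.
:backends:
  - yaml
:yaml:
  :datadir: '/var/lib/hiera'
:hierarchy:
  - "productname/%{::productname}"
  - "role/%{::role}"
  - "module/%{calling_module}"
  - common
:merge_behavior: deeper   # needs the deep_merge gem
```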

---
### /var/lib/hiera/module/ceph.yaml

# Merge configuration based on role
ceph::conf_merge: true

# Global configuration for all nodes
ceph::conf:
  global:
    fsid: '62ed9bd6-adf4-11e4-8fb5-3c970ebb2b86'
    mon initial members: 'mon0,mon1,mon2'
    mon host: '10.0.0.2,10.0.0.3,10.0.0.4'
    auth supported: 'cephx'
    public network: '10.0.0.0/16'
    cluster network: '10.0.0.0/16'

# Merge keys based on role
ceph::keys_merge: true
---
### /var/lib/hiera/role/ceph-mon.yaml

# Install a monitor
ceph::mon: true

# Monitor specific configuration
ceph::conf:
  mon:
    mon compact on start: 'true'
    mon compact on trim: 'true'

# All the static keys on the system
ceph::keys:
  client.admin:
    key: "%{hiera('ceph_key_client_admin')}"
    caps:
      mon: 'allow *'
      osd: 'allow *'
      mds: 'allow'
      mgr: 'allow *'
    path: '/etc/ceph/ceph.client.admin.keyring'
  client.bootstrap-osd:
    key: "%{hiera('ceph_key_bootstrap_osd')}"
    caps:
      mon: 'allow profile bootstrap-osd'
    path: '/var/lib/ceph/bootstrap-osd/ceph.keyring'
  client.bootstrap-mgr:
    key: "%{hiera('ceph_key_bootstrap_mgr')}"
    caps:
      mon: 'allow profile bootstrap-mgr'
    path: '/var/lib/ceph/bootstrap-mgr/ceph.keyring'
  client.bootstrap-mds:
    key: "%{hiera('ceph_key_bootstrap_mds')}"
    caps:
      mon: 'allow profile bootstrap-mds'
    path: '/var/lib/ceph/bootstrap-mds/ceph.keyring'
  client.bootstrap-rgw:
    key: "%{hiera('ceph_key_bootstrap_rgw')}"
    caps:
      mon: 'allow profile bootstrap-rgw'
    path: '/var/lib/ceph/bootstrap-rgw/ceph.keyring'
  client.glance:
    key: "%{hiera('ceph_key_client_glance')}"
    caps:
      mon: 'allow r'
      osd: 'allow class-read object_prefix rbd_children, allow rwx pool=glance'
    path: '/etc/ceph/ceph.client.glance.keyring'
  client.cinder:
    key: "%{hiera('ceph_key_client_cinder')}"
    caps:
      mon: 'allow r'
      osd: 'allow class-read object_prefix rbd_children, allow rx pool=glance, allow rwx pool=cinder'
    path: '/etc/ceph/ceph.client.cinder.keyring'
---
### /var/lib/hiera/role/ceph-osd.yaml

# Create OSDs
ceph::osd: true

# OSD specific configuration
ceph::conf:
  osd:
    filestore xattr use omap: 'true'
    osd journal size: '10000'
    osd mount options xfs: 'noatime,inode64,logbsize=256k,logbufs=8'
    osd crush location hook: '/usr/local/bin/location_hook.py'
    osd recovery max active: '1'
    osd max backfills: '1'
    osd recovery threads: '1'
    osd recovery op priority: '1'

# OSD specific static keys
ceph::keys:
  client.admin:
    key: "%{hiera('ceph_key_client_admin')}"
    path: '/etc/ceph/ceph.client.admin.keyring'
  client.bootstrap-osd:
    key: "%{hiera('ceph_key_bootstrap_osd')}"
    path: '/var/lib/ceph/bootstrap-osd/ceph.keyring'
---
### /var/lib/hiera/productname/X10DRC-LN4+.yaml

# Product specific OSD definitions
ceph::disks:
  defaults:
    params:
      fs-type: 'xfs'
      filestore: ~
  Slot 01:
    journal: 'Slot 01'
  Slot 02:
    journal: 'Slot 02'
  Slot 03:
    journal: 'Slot 03'
  Slot 04:
    journal: 'Slot 04'
  Slot 05:
    journal: 'Slot 05'
  Slot 06:
    journal: 'Slot 06'
  Slot 07:
    journal: 'Slot 07'
  Slot 08:
    journal: 'Slot 08'
  Slot 09:
    journal: 'Slot 09'
  Slot 10:
    journal: 'Slot 10'
  Slot 11:
    journal: 'Slot 11'
  Slot 12:
    journal: 'Slot 12'
---
### /var/lib/hiera/role/ceph-rgw.yaml

# Create a Rados gateway
ceph::rgw: true
ceph::rgw_id: "rgw.%{hostname}"

# Rados gateway specific configuration
ceph::conf:
  client.rgw.%{hostname}:
    rgw enable usage log: 'true'
    rgw thread pool size: '4096'
    rgw dns name: 'storage.example.com'
    rgw keystone url: 'https://keystone.example.com:35357'
    rgw keystone admin token: 'dab8928d-1787-4d73-b3e9-1184a4aeffcb'
    rgw keystone accepted roles: '_member_,admin'
    rgw relaxed s3 bucket names: 'true'

# Rados gateway specific static keys
ceph::keys:
  client.admin:
    key: "%{hiera('ceph_key_client_admin')}"
    path: '/etc/ceph/ceph.client.admin.keyring'
  client.bootstrap-rgw:
    key: "%{hiera('ceph_key_bootstrap_rgw')}"
    path: '/var/lib/ceph/bootstrap-rgw/ceph.keyring'
---
### /var/lib/hiera/role/openstack-controller.yaml

# OpenStack controller specific static keys
ceph::keys:
  client.cinder:
    key: "%{hiera('ceph_key_client_cinder')}"
    path: '/etc/ceph/ceph.client.cinder.keyring'
  client.glance:
    key: "%{hiera('ceph_key_client_glance')}"
    path: '/etc/ceph/ceph.client.glance.keyring'
---
### /var/lib/hiera/role/openstack-compute.yaml

# OpenStack compute specific configuration
ceph::conf:
  client:
    rbd cache: 'true'
    rbd cache size: '268435456'
    rbd cache max dirty: '201326592'
    rbd cache dirty target: '134217728'
    rbd cache max dirty age: '2'
    rbd cache writethrough until flush: 'true'

# OpenStack compute specific static keys
ceph::keys:
  client.cinder:
    key: "%{hiera('ceph_key_client_cinder')}"
    path: '/etc/ceph/ceph.client.cinder.keyring'

Limitations

  1. Keys are implicitly added on the monitor, ergo all keys need to be defined on the monitor node