Start using this module by adding it to your Puppetfile:

```puppet
mod 'maxadamo-galera_maxscale', '0.1.5'
```
# galera_maxscale

## Table of Contents
- Description
- Setup - The basics of getting started with galera_maxscale
- Usage - Configuration options and additional functionality
- Reference - An under-the-hood peek at what the module is doing and how
- Limitations - OS compatibility, etc.
- Development - Guide for contributing to the module
## Description

This module sets up and bootstraps a Galera cluster and a MaxScale proxy. Subsequent management of the Galera cluster is delegated to the script `galera_wizard.py`. MaxScale is set up on two nodes, with Keepalived managing the virtual IP.
## Setup

### Beginning with galera_maxscale

To set up a Galera cluster:
```puppet
class { '::galera_maxscale':
  root_password    => $root_password,
  sst_password     => $sst_password,
  monitor_password => $monitor_password,
  galera_hosts     => $galera_hosts,
  trusted_networks => $trusted_networks,
  lv_size          => $lv_size,
}
```
To set up MaxScale:

```puppet
class { '::galera_maxscale::maxscale::maxscale':
  trusted_networks => $trusted_networks,
  maxscale_hosts   => $maxscale_hosts,
  maxscale_vip     => $maxscale_vip,
  galera_hosts     => $galera_hosts,
}
```
Once you have run Puppet on every node, you can manage or check the cluster using the script:

```
[root@test-galera01 ~]# galera_wizard.py -h
usage: galera_wizard.py [-h] [-cg] [-dr] [-je] [-be] [-jn] [-bn]

Use this script to bootstrap, join nodes within a Galera Cluster
----------------------------------------------------------------
Avoid joining more than one node at once!

optional arguments:
  -h, --help                show this help message and exit
  -cg, --check-galera       check if all nodes are healthy
  -dr, --dry-run            show SQL statements to run on this cluster
  -je, --join-existing      join existing Cluster
  -be, --bootstrap-existing bootstrap existing Cluster
  -jn, --join-new           join new Cluster
  -bn, --bootstrap-new      bootstrap new Cluster
  -f, --force               force bootstrap new or join new Cluster

Author: Massimiliano Adamo <maxadamo@gmail.com>
```
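Per the warning in the help output, nodes should be joined one at a time. A possible first-boot sequence for a fresh three-node cluster might look like the following sketch (hostnames are illustrative; the flags come from the help output above):

```shell
# On the first node only: bootstrap a brand-new cluster
galera_wizard.py --bootstrap-new

# On the second node: join the new cluster, then verify health
# before touching the next node
galera_wizard.py --join-new
galera_wizard.py --check-galera

# On the third node, only once the previous join has completed
galera_wizard.py --join-new
```

For a cluster that already holds data, `--bootstrap-existing` and `--join-existing` would be the corresponding flags.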
## Usage

The module will fail if the Galera cluster has an even number of nodes or fewer than three nodes (an odd count of at least three is needed to maintain quorum).

To set up a Galera cluster (and optionally a MaxScale cluster with Keepalived) you need a hash of hosts. With Hiera it looks like this:
```yaml
galera_hosts:
  test-galera01.example.net:
    ipv4: '192.168.0.83'
    ipv6: '2001:123:4::6b'
  test-galera02.example.net:
    ipv4: '192.168.0.84'
    ipv6: '2001:123:4::6c'
  test-galera03.example.net:
    ipv4: '192.168.0.85'
    ipv6: '2001:123:4::6d'
maxscale_hosts:
  test-maxscale01.example.net:
    ipv4: '192.168.0.86'
    ipv6: '2001:123:4::6e'
  test-maxscale02.example.net:
    ipv4: '192.168.0.87'
    ipv6: '2001:123:4::6f'
maxscale_vip:
  test-maxscale.example.net:
    ipv4: '192.168.0.88'
    ipv4_subnet: '22'
    ipv6: '2001:123:4::70'
```
If you don't use IPv6, simply omit the `ipv6` keys:

```yaml
galera_hosts:
  test-galera01.example.net:
    ipv4: '192.168.0.83'
  test-galera02.example.net:
    ipv4: '192.168.0.84'
  test-galera03.example.net:
    ... and so on ...
```
You also need an array of trusted networks/hosts: a list of IPv4/IPv6 networks or hosts allowed to connect to the Galera socket:

```yaml
trusted_networks:
  - 192.168.0.1/24
  - 2001:123:4::70/64
  - 192.168.1.44
  ... and so on ...
```
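Putting the pieces together, the Hiera data above can be wired into the classes from the Setup section. A minimal wrapper-profile sketch (the profile name and the extra lookup keys such as `root_password` are assumptions, not part of this module):

```puppet
# profile/manifests/galera.pp -- hypothetical wrapper profile
class profile::galera (
  # resolved from the Hiera data shown above
  Hash  $galera_hosts     = lookup('galera_hosts'),
  Array $trusted_networks = lookup('trusted_networks'),
) {
  class { '::galera_maxscale':
    root_password    => lookup('root_password'),
    sst_password     => lookup('sst_password'),
    monitor_password => lookup('monitor_password'),
    galera_hosts     => $galera_hosts,
    trusted_networks => $trusted_networks,
    lv_size          => lookup('lv_size'),
  }
}
```

Alternatively, since the class parameter names match the Hiera keys, automatic class parameter lookup (`galera_maxscale::galera_hosts: ...`) can replace the explicit `lookup()` calls.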
## Reference
## Limitations

- important: not tested on IPv4 only.
- important: changing the MySQL root password is not yet supported (I'll implement it ASAP). For the time being, don't change it through `puppetlabs/mysql` or manually: it must be done in conjunction with the Galera configuration.
- Not tested yet on Ubuntu.
- Initial state transfer is supported only through Percona XtraBackup (I see no reason to use `mysqldump` or `rsync`, since the donor would be unavailable during the transfer; I'll investigate soon how `mariabackup` works).
- Major/minor versions are not yet handled properly.
## Development

Feel free to open pull requests and/or issues on my GitHub repository.
## Dependencies
- arioch-keepalived (>= 1.2.5)
- puppetlabs-firewall (>= 1.9.0)
- puppetlabs-mysql (>= 3.1.1)
- stschulte-rpmkey (>= 1.0.3)