cfdb
Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2017.1.x, 2016.5.x, 2016.4.x
- Puppet >=4.7.0 <7.0.0
Start using this module
Add this module to your Puppetfile:
mod 'codingfuture-cfdb', '1.3.3'
cfdb
Description
- Setup & auto tune service instances based on available resources:
- Elasticsearch
- MongoDB
- MySQL
- PostgreSQL
- Redis
- a general framework to easily add new types is available
- Support auto-configurable clustering:
- Native for Elasticsearch
- Native for MongoDB
- Galera Cluster for MySQL
- repmgr for PostgreSQL
- Sentinel for Redis Master-Slave
- High Availability and fail-over out-of-the-box
- Specialized HAProxy on each client system
- Support for secure TLS tunnel with mutual authentication for database connections
- Automatic creation of databases
- Automatic upgrade of databases after DB software upgrade
- Complete management of user accounts (roles)
- automatic management of randomly generated passwords
- full & read-only database roles
- custom grants support
- ensures that configured max connection limits are met
- PostgreSQL extension support
- Easy access configuration
- automatic firewall setup on both client & server side
- automatic restriction of roles to allowed incoming hosts
- automatic enforcement of per-role max connections
- Automatic incremental backup and automated restore
- Strict cgroup-based resource isolation on top of systemd integration
- Automatic DB connection availability checks
- Automatic cluster state checks
- Supports easy migration from default data dirs (see init_db_from)
- Scheduled actions:
- For example, log cleanup in ELK stack
Terminology & Concept
cluster
- infrastructure-wide unique associative name for a collection of distributed database instances. In standalone cases, there is only one instance per cluster.
instance
- a database service process and its related configuration. Each instance runs in its own cgroup for fair weighted usage of RAM, CPU and I/O resources.
primary instance
- the instance where dynamic setup of roles and databases occurs. It also gets all read-write users by default.
secondary instance
- a slave instance suitable for temporary automatic fail-over and read-only users. It is not allowed to define databases & roles in its configuration, unless the instance is switched to primary node in static configuration.
arbitrator
- an instance participating in quorum, but holding no data. Mitigates split-brain cases.
database
- a database per cluster. A role with full privileges on the database and the same name is created automatically; there is no need to define such a role explicitly.
role
- a database user. Each role name is automatically prefixed with the related database name to prevent name collisions.
access
- a definition of a client system and user that accesses a particular role on a specific database of a specific cluster. All connection parameters are saved in a ".env" file in the user's home folder. It is possible to specify multiple access definitions per single system user using different .env variable prefixes. In case of a multi-node cluster, a local HAProxy reverse-proxy instance is implicitly created with the required high-availability configuration.
Technical Support
- Example configuration
- Free & Commercial support: support@codingfuture.net
Setup
Up to date installation instructions are available in Puppet Forge: https://forge.puppet.com/codingfuture/cfdb
Please use librarian-puppet or cfpuppetserver module to deal with dependencies.
There is a known r10k issue RK-3 which prevents automatic installation of dependencies of dependencies.
IMPORTANT NOTES!!!
Please understand that PuppetDB is heavily used for auto-configuration. The drawback is that new facts become available only on the run after the one that makes changes. A typical workflow is:
- Provision instance with databases & roles
- Provision the same instance again (collect facts)
- Provision access locations
- Provision access locations again (collect facts)
- Provision instance (update configuration based on the access facts)
- Restart instances, if asked during provisioning
- Provision all nodes in cycle until there are no new changes and no restart is required
For cluster configuration:
- Provision primary node
- Provision primary node again (collect facts)
- Provision secondary nodes
- Provision secondary nodes again (collect facts)
- Provision primary node (configure firewall & misc. based on facts)
- Provision secondary nodes again (clone/setup)
- Restart instances, if asked during provisioning
- Provision all nodes in cycle until there are no new changes and no restart is required
As available system memory is automatically distributed between all registered services:
- Please make sure to restart services after the distribution changes (or you may run into trouble).
- Please avoid using the same system for codingfuture-derived services and custom ones, or see the next point.
- Please make sure to reserve RAM using cfsystem_memory_weight for any custom co-located services. Example:
cfsystem_memory_weight { 'my_own_services':
ensure => present,
weight => 100,
min_mb => 100,
max_mb => 1024
}
Examples
Please check codingfuture/puppet-test for an example of a complete infrastructure configuration and Vagrant provisioning.
Example for a host running standalone instances, PXC arbitrators & PostgreSQL slaves, plus the related accesses.
cfdb::iface: main
cfdb::instances:
mysrv1:
type: mysql
databases:
- db1_1
- db1_2
iface: vagrant
port: 3306
mysrv2:
type: mysql
databases:
db2:
roles:
readonly:
readonly: true
sandbox:
custom_grant: 'GRANT SELECT ON $database.* TO $user; GRANT SELECT ON mysql.* TO $user;'
myclust1:
type: mysql
is_arbitrator: true
port: 4306
myclust2:
type: mysql
is_arbitrator: true
port: 4307
settings_tune:
cfdb:
secure_cluster: true
pgsrv1:
type: postgresql
iface: vagrant
databases:
pdb1: {}
pdb2: {}
pgclust1:
type: postgresql
is_arbitrator: true
is_secondary: true
port: 5300
settings_tune:
cfdb:
node_id: 3
pgclust2:
type: postgresql
is_arbitrator: true
is_secondary: true
port: 5301
settings_tune:
cfdb:
secure_cluster: true
node_id: 3
esearch:
type: elasticsearch
port: 9200
cfdb::access:
vagrant_mysrv2_db2:
cluster: mysrv2
role: db2
local_user: vagrant
max_connections: 100
vagrant_mysrv2_db2ro:
cluster: mysrv2
role: db2readonly
local_user: vagrant
config_prefix: 'DBRO_'
max_connections: 200
vagrant_mysrv2_db2sandbox:
cluster: mysrv2
role: db2sandbox
local_user: vagrant
config_prefix: 'DBSB_'
vagrant_myclust1_db1:
cluster: myclust1
role: db1
local_user: vagrant
config_prefix: 'DBC1_'
vagrant_myclust1_db2:
cluster: myclust1
role: db2
local_user: vagrant
config_prefix: 'DBC2_'
vagrant_pgsrv1_pdb1:
cluster: pgsrv1
role: pdb1
local_user: vagrant
config_prefix: 'PDB1_'
vagrant_esearch:
cluster: esearch
local_user: vagrant
config_prefix: 'ESRCH_'
For another related host running primary nodes of clusters:
cfdb::iface: main
cfdb::mysql::is_cluster: true
cfdb::instances:
myclust1:
type: mysql
is_cluster: true
databases:
db1:
roles:
ro:
readonly: true
db2: {}
port: 3306
myclust2:
type: mysql
is_cluster: true
databases:
- db1
- db2
port: 3307
settings_tune:
cfdb:
secure_cluster: true
pgclust1:
type: postgresql
is_cluster: true
databases:
pdb1:
roles:
ro:
readonly: true
pdb2: {}
port: 5300
pgclust2:
type: postgresql
is_cluster: true
databases:
- pdb3
- pdb4
port: 5301
settings_tune:
cfdb:
secure_cluster: true
For a third related host running secondary nodes of clusters:
cfdb::iface: main
cfdb::mysql::is_cluster: true
cfdb::instances:
myclust1:
type: mysql
is_secondary: true
port: 3306
myclust2:
type: mysql
is_secondary: true
port: 3307
settings_tune:
cfdb:
secure_cluster: true
pgclust1:
type: postgresql
is_secondary: true
port: 5300
pgclust2:
type: postgresql
is_secondary: true
port: 5301
settings_tune:
cfdb:
secure_cluster: true
esearch:
type: elasticsearch
is_secondary: true
port: 9200
Implicitly created resources
# for every instance
#------------------
cfnetwork::describe_service:
cfdb_${cluster}:
server: "tcp/${port}"
# local cluster system user access to own instance
cfnetwork::service_port:
"local:cfdb_${cluster}": {}
cfnetwork::client_port:
"local:cfdb_${cluster}":
user: $user
# client access to local cluster instance
cfnetwork::service_port:
"${iface}:cfdb_${cluster}":
src: $client_hosts
# for each Elasticsearch cluster (inter-node comms)
# > access to local instance ports
cfnetwork::describe_service:
"cfdb_${cluster}_peer":
server: "tcp/${peer_port}"
cfnetwork::service_port:
"${iface}:cfdb_${cluster}_peer":
src: $peer_addr_list
cfnetwork::service_ports:
"${iface}:cfdb_${cluster}_peer":
src: 'ipset:cfdb_${cluster}'
cfnetwork::client_ports:
"${iface}:cfdb_${cluster}_peer":
dst: 'ipset:cfdb_${cluster}'
user: $user
# for each Galera cluster (inter-node comms)
cfnetwork::describe_service:
"cfdb_${cluster}_peer":
server: "tcp/${port}"
"cfdb_${cluster}_galera":
server:
- "tcp/${galera_port}"
- "udp/${galera_port}"
"cfdb_${cluster}_sst":
server: "tcp/${sst_port}"
"cfdb_${cluster}_ist":
server: "tcp/${ist_port}"
cfnetwork::ipset:
cfdb_${cluster}:
type: ip
addr: $peer_addr_list
cfnetwork::service_ports:
"${iface}:cfdb_${cluster}_peer":
src: 'ipset:cfdb_${cluster}'
"${iface}:cfdb_${cluster}_galera":
src: 'ipset:cfdb_${cluster}'
"${iface}:cfdb_${cluster}_sst":
src: 'ipset:cfdb_${cluster}'
"${iface}:cfdb_${cluster}_ist":
src: 'ipset:cfdb_${cluster}'
cfnetwork::client_ports:
"${iface}:cfdb_${cluster}_peer":
dst: 'ipset:cfdb_${cluster}'
user: $user
"${iface}:cfdb_${cluster}_galera":
dst: 'ipset:cfdb_${cluster}'
user: $user
"${iface}:cfdb_${cluster}_sst":
dst: 'ipset:cfdb_${cluster}'
user: $user
"${iface}:cfdb_${cluster}_ist":
dst: 'ipset:cfdb_${cluster}'
user: $user
# for each repmgr PostgreSQL cluster (inter-node comms)
# > access to local instance ports
cfnetwork::describe_service:
"cfdb_${cluster}_peer":
server: "tcp/${port}"
cfnetwork::service_port:
"${iface}:cfdb_${cluster}_peer":
src: $peer_addr_list
cfnetwork::service_ports:
"${iface}:cfdb_${cluster}_peer":
src: 'ipset:cfdb_${cluster}'
cfnetwork::client_ports:
"${iface}:cfdb_${cluster}_peer":
dst: 'ipset:cfdb_${cluster}'
user: $user
# for every cfdb::access when HAProxy is NOT used
#------------------
cfnetwork::describe_service:
cfdb_${cluster}:
server: "tcp/${port}"
cfnetwork::client_port:
"any:cfdb_${cluster}:${local_user}":
dst: [cluster_hosts]
user: $local_user
# for every cfdb::access when HAProxy IS used
#------------------
cfnetwork::describe_service:
"cfdbha_${cluster}_${port}":
server: "tcp/${port}"
cfnetwork::client_port:
"any:${fw_service}:${host_underscore}":
dst: $addr,
user: $cfdb::haproxy::user
class cfdb parameters
This is a full-featured class to use with Hiera.
- $instances = {} - configurations for cfdb::instance resources (Hiera-friendly)
- $access = {} - configurations for cfdb::access resources (Hiera-friendly)
- $iface = 'any' - database network facing interface
- $cluster_face = 'main' - cluster comms network facing interface
- $root_dir = '/db' - root under which instance home folders are created
- $max_connections_default = 10 - default value for $cfdb::access::max_connections
- $backup = true - default value for $cfdb::instance::backup
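A minimal Hiera sketch using these class parameters (the interface name and the non-default max_connections_default value are arbitrary example choices):
cfdb::iface: main
cfdb::cluster_face: main
cfdb::root_dir: '/db'
cfdb::max_connections_default: 20
cfdb::backup: true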
class cfdb::backup parameters
This class is included automatically on demand.
- $cron = { hour => 3, minute => 10 } - default cron config for periodic auto-backup
- $root_dir = '/mnt/backup' - root folder for instance backup sub-folders
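A hedged Hiera sketch overriding the backup location and schedule (the 04:30 time is an arbitrary example value):
cfdb::backup::root_dir: '/mnt/backup'
cfdb::backup::cron:
  hour: 4
  minute: 30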
class cfdb::haproxy parameters
This class is included automatically on demand.
- $memory_weight = 1 - weighted amount of memory to reserve for HAProxy. Note: the optimal minimal amount is automatically reserved based on the max number of connections
- $memory_max = undef - possible max memory limit
- $cpu_weight = 100 - CPU weight for cgroup isolation
- $io_weight = 100 - I/O weight for cgroup isolation
- $settings_tune = {} - do not use unless you know what you are doing; mostly left for exceptional in-field cases
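For illustration, the HAProxy weights could be tuned via Hiera roughly as follows (the values are examples, not recommendations):
cfdb::haproxy::memory_weight: 2
cfdb::haproxy::cpu_weight: 50
cfdb::haproxy::io_weight: 50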
class cfdb::elasticsearch parameters
This class is included automatically on demand.
- $version = '6' - major or major.minor version of Elasticsearch to use
- $apt_repo = 'https://artifacts.elastic.co/packages/6.x/apt' - official Elastic APT repository location
- $default_extensions = false - install the default extension list, if true. Default: 'analysis-icu' and 'ingest-geoip'
- $extensions = [] - list of custom extensions to install. Note: Elasticsearch is quite picky about exact plugin version matches.
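As a hedged sketch, pinning the Elasticsearch version and adding a plugin might look like this in Hiera ('analysis-phonetic' is a hypothetical plugin choice, not something this module ships by default):
cfdb::elasticsearch::version: '6.8'
cfdb::elasticsearch::default_extensions: true
cfdb::elasticsearch::extensions:
  - analysis-phonetic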
class cfdb::mongodb parameters
This class is included automatically on demand.
- $version = '5.6' - version of Percona Server for MongoDB to use
class cfdb::mysql parameters
This class is included automatically on demand.
- $is_cluster = false - if true, Percona XtraDB Cluster is installed instead of Percona Server
- $percona_apt_repo = 'http://repo.percona.com/apt' - Percona APT repository location
- $version = '5.7' - version of Percona Server to use
- $cluster_version = '5.6' - version of PXC to use
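A hedged Hiera sketch selecting Percona XtraDB Cluster and pinning versions (the version values are examples only):
cfdb::mysql::is_cluster: true
cfdb::mysql::version: '5.7'
cfdb::mysql::cluster_version: '5.7'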
class cfdb::postgresql parameters
This class is included automatically on demand.
- $version = '9.5' - version of PostgreSQL to use
- $default_extensions = true - install the default extension list, if true. Default list: 'asn1oid', 'debversion', 'ip4r', 'pgextwlist', 'pgmp', 'pgrouting', 'pllua', 'plproxy', 'plr', 'plv8', "postgis-${postgis_ver}", 'postgis-scripts', 'powa', 'prefix', 'preprepare', 'repmgr', 'contrib', 'plpython', 'pltcl'. Note: 'plperl', 'repack' and 'partman' are disabled as they cause packaging troubles.
- $extensions = [] - custom list of extensions to install
- $apt_repo = 'http://apt.postgresql.org/pub/repos/apt/' - PostgreSQL APT repository location
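For illustration, a Hiera sketch pinning PostgreSQL and adding extensions from the default list above (the choices are example values):
cfdb::postgresql::version: '11'
cfdb::postgresql::extensions:
  - ip4r
  - pgrouting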
type cfdb::access parameters
This type defines a client with specific properties for auto-configuration of instances.
- $cluster - unique cluster name
- $local_user - local user to create the .env configuration for. The user resource must be defined with the $home parameter.
- $role = $cluster - unique role name within the cluster (note: roles defined in databases must be prefixed with the database name)
- $use_proxy = 'auto' - do not change the default (for future use)
- $max_connections = $cfdb::max_connections_default - max number of client connections for the particular case
- $config_prefix = 'DB_' - variable prefix for the .env file. The following variables are defined: 'HOST', 'PORT', 'SOCKET', 'USER', 'PASS', 'DB', 'TYPE', 'MAXCONN' and 'CONNINFO' (PostgreSQL only)
- $env_file = '.env' - name of the dot-env file relative to the user's $home
- $iface = $cfdb::iface - DB network facing interface
- $custom_config = undef - name of a custom resource to instantiate with the following parameters: cluster - related cluster name; role - related role name; local_user - related local user name; config_vars - hash of configuration variables in lower case (see above)
- $use_unix_socket = true - whether UNIX sockets should be used as much as possible
type cfdb::database parameters
This type must be used only on the primary instance of a cluster. Please avoid direct configuration; see $cfdb::instance::databases.
- $cluster - unique cluster name
- $database - database name
- $password = undef - force a password instead of the auto-generated one for the default user
- $roles = undef - configuration for extra cfdb::role resources (Hiera-friendly). Note: the database name is automatically prefixed
- $ext = [] - database-specific extensions. General format is "{name}" or "{name}:{version}". If the version is omitted, the latest one is used.
Please note that implementation types without the concept of databases have a fictional one defined with the same name as the cluster.
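Databases are normally declared through $cfdb::instance::databases; a hedged sketch of the $ext format described above (the extension names and the version pin are illustrative):
cfdb::instances:
  pgsrv1:
    type: postgresql
    databases:
      pdb1:
        ext:
          - ip4r
          - 'postgis:2.5'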
type cfdb::instance parameters
Defines and auto-configures instances.
- $type - type of cluster, e.g. elasticsearch, mysql, postgresql
- $is_cluster = false - if true, the instance is configured with clustering in mind
- $is_secondary = false - if true, a secondary node is assumed
- $is_bootstrap = false - if true, forces cluster bootstrap (should be used only TEMPORARILY for recovery purposes). Since v0.9.9 there is no need to set this while setting up the first node of a cluster
- $is_arbitrator = false - if true, assumes a witness node for quorum with no data
- $memory_weight = 100 - relative memory weight for automatic configuration based on available RAM
- $memory_max = undef - max memory the instance can use in auto-configuration
- $cpu_weight = 100 - relative CPU weight for cgroup isolation
- $io_weight = 100 - relative I/O weight for cgroup isolation
- $target_size = 'auto' - expected database size in bytes ('auto' detects it based on partition size)
- $settings_tune = {} - very specific fine tuning. See below
- $databases = undef - configuration for cfdb::database resources
- $iface = $cfdb::iface - DB network facing interface
- $cluster_face = $cfdb::cluster_face - cluster comms network facing interface
- $port = undef - force a specific network port (mandatory, if $is_cluster)
- $backup = $cfdb::backup - if true, automatic scheduled backup gets enabled
- $backup_tune = { base_date => 'month' } - overrides $type-specific backup script parameters. See below
- $ssh_key_type = 'ed25519' - SSH key type for in-cluster communication
- $ssh_key_bits = 2048 - SSH key bits for RSA
- $scheduled_actions = {} - type-specific scheduled actions
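A hedged Hiera sketch combining several of these parameters for one instance (all values are illustrative):
cfdb::instances:
  mysrv1:
    type: mysql
    port: 3306
    memory_weight: 200
    cpu_weight: 100
    io_weight: 100
    target_size: auto
    databases:
      - db1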
type cfdb::role parameters
Defines and auto-configures roles per database in the specified cluster. Please avoid direct configuration; see $cfdb::database::roles.
- $cluster - cluster name
- $database - database name
- $password = undef - force a password instead of the auto-generated one
- $subname = '' - the role name equals $database with the sub-name appended
- $readonly = false - set read-only access, if supported by the type
- $custom_grant = undef - custom grant rules with $database and $user replaced by actual values
- $static_access = {} - host => maxconn pairs for static configuration with data in PuppetDB. Please avoid using it, unless really needed.
Please note that if a particular implementation lacks the concept of databases, there is only one role with the same name as the cluster.
Backup & restore
Each instance has /db/bin/cfdb_{cluster}_backup and /db/bin/cfdb_{cluster}_restore scripts installed to perform manual backup and manual restore from backup respectively. For safety reasons, restore asks for two different confirmation phrases.
There are two types of backup: base and incremental. The type of backup is detected automatically based on the base_date option, which can be set through $cfdb::instance::backup_tune.
Possible values for base_date:
- 'year' - '%Y'
- 'quarter' - "%Y-Q$(( $(/bin/date +%m) / 4 + 1 ))"
- 'month' - '%Y-%m'
- 'week' - '%Y-W%W'
- 'day' - '%Y-%m-%d'
- 'daytime' - '%Y-%m-%d_%H%M%S'
- any other value accepted as a date format
If $cfdb::instance::backup is true then a bin/cfdb_backup_auto symlink is created. These symlinks are automatically called in sequence during the system-wide cron-based backup to minimize stress on the system.
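A hedged sketch of switching one instance to weekly base backups through backup_tune (instance name and type follow the earlier examples):
cfdb::instances:
  mysrv1:
    type: mysql
    backup: true
    backup_tune:
      base_date: 'week'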
TLS tunnel based on HAProxy
As database services do not support a dedicated TLS-only port and generally do not offload TLS processing overhead well, the actual implementation is based on HAProxy utilizing the Puppet PKI for mutual authentication of both peers. No changes are required to client applications - they simply open a local UNIX socket.
A TLS tunnel is created in the following cases:
- use_proxy = 'secure' - unconditionally created
- use_proxy = 'auto' - if the cf_location of the specific database node mismatches the client's cf_location
A TLS tunnel is NOT created in the following cases:
- use_proxy = 'insecure' - HAProxy is used, but without any TLS security. This is useful if a lower level secure VPN tunnel is available.
- use_proxy = false - HAProxy is not used
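For illustration only, forcing the secure tunnel for one access definition might look like the sketch below (the access name and local user are hypothetical; note that the cfdb::access documentation above advises keeping use_proxy at its default):
cfdb::access:
  app_mysrv1_db1:
    cluster: mysrv1
    role: db1
    local_user: app
    use_proxy: 'secure'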
Other commands & configurations
- /opt/codingfuture/bin/cfdb_backup_all is installed and used in the periodic cron for sequential instance backup with minimized stress on the system
- /opt/codingfuture/bin/cfdb_access_checker <user> <dotenv> <prefix> is a generic tool to verify that each configured access is working. It is used on every Puppet provisioning run for every cfdb::access defined
- /opt/codingfuture/bin/cfdb_restart_pending is a helper to restart all DB instances with a pending restart flag
Elasticsearch
- /db/bin/cfdb_{cluster}_curl is installed to properly invoke the REST API
- /db/bin/cfdb_{cluster}_curator is installed to properly invoke elasticsearch-curator
MongoDB
- ~/.mongorc.js is properly configured for the mongo client to work without a password on the command line or in the environment
- /db/bin/cfdb_{cluster}_mongo is installed to properly invoke mongo
MySQL
- ~/.my.cnf is properly configured for the mysql client to work without parameters
- /db/bin/cfdb_{cluster}_mysql is installed to properly invoke mysql
- /db/bin/cfdb_{cluster}_sysbench is installed for easy sysbench invocation
- /db/bin/cfdb_{cluster}_bootstrap is installed for easy Galera bootstrap
PostgreSQL
- /db/bin/cfdb_{cluster}_psql is installed to properly invoke psql with the required parameters
- /db/bin/cfdb_{cluster}_repmgr is installed to properly invoke repmgr with the required parameters
- /db/bin/cfdb_{cluster}_vacuumdb is installed to properly invoke vacuumdb with the required parameters
- ~/.pgpass is properly configured for the superuser and repmgr
- ~/.pg_service.conf is properly configured to be used with ~/bin/cfdb_psql
HAProxy
- /db/bin/cfdb_hatop is installed to properly invoke hatop
$settings_tune magic
Elasticsearch
Flat configuration keys in documentation style (no sub-trees). Most of the settings can be tuned here.
MongoDB
Flat configuration keys in documentation style (no sub-trees). Most of the settings can be tuned here.
MySQL
Quite simple: every key is a section name of the MySQL INI file, and each value is a hash of that section's variable => value pairs.
Note: some configuration variables are enforced by CFDB.
However, there is a special "cfdb" section, which is interpreted differently. It has the following special keys:
- optimize_ssd - if true, assume the data directory is located on high IOPS hardware
- secure_cluster - if true, use Puppet PKI based TLS for inter-node communication
- shared_secret - DO NOT USE, for internal cluster purposes
- max_connections_roundto = 100 - ceil max_connections to a multiple of this value
- listen = 0.0.0.0 - address to listen on, if external connections are detected based on cfdb::access
- cluster_listen - address to listen on for cluster communication, based on $cluster_face
- inodes_min = 1000 and inodes_max = 10000 - set gates for automatic calculation of mysqld.table_definition_cache and mysqld.table_open_cache
- open_file_limit_roundto = 10000 - ceil mysqld.open_file_limit to a multiple of this value
- binlog_reserve_percent = 10 - percent of $target_size to reserve for binary logs
- default_chunk_size = 2 * gb - default for innodb_buffer_pool_chunk_size
- innodb_buffer_pool_roundto = 1GB or 128MB - rounding of memory available for the InnoDB buffer pool. The default depends on the actual amount of RAM available.
- wsrep_provider_options = {} - overrides for some of the wsrep_provider_options of Galera Cluster
- init_db_from - "{pgver}:{orig_data_dir}" - copies initial data from the specified path, expecting the specified server version, and then upgrades
- joiner_timeout = 600 - how long to wait for initial sync of a joiner node
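Putting this together, a hedged settings_tune sketch for a MySQL instance could look as follows (the mysqld variable and all values are illustrative; remember that some variables are enforced by CFDB):
cfdb::instances:
  mysrv1:
    type: mysql
    settings_tune:
      cfdb:
        optimize_ssd: true
        max_connections_roundto: 50
        binlog_reserve_percent: 15
      mysqld:
        innodb_flush_log_at_trx_commit: 2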
PostgreSQL
Similar to MySQL. The section named 'postgresql' overrides some of the configuration values in postgresql.conf; some other variables are enforced.
However, there is also a special "cfdb" section:
- optimize_ssd - if true, assume the data directory is located on high IOPS hardware
- secure_cluster - if true, use Puppet PKI based TLS for inter-node communication
- shared_secret - DO NOT USE, for internal cluster purposes
- strict_hba_roles = true - if true, the HBA configuration strictly matches each role to its hosts instead of using "all". This imitates the same host-based security as provided by MySQL.
- max_connections_roundto = 100 - ceil max_connections to a multiple of this value
- listen = 0.0.0.0 - address to listen on, if external connections are detected based on cfdb::access
- cluster_listen - address to listen on for cluster communication, based on $cluster_face
- inodes_min = 1000 and inodes_max = 10000 - set gates for automatic calculation of inodes_used, which participates in postgresql.max_files_per_process = 3 * (max_connections + inodes_used)
- open_file_limit_roundto = 10000 - ceil postgresql.max_files_per_process to a multiple of this value
- shared_buffers_percent = 20 - percent of allowed RAM to reserve for shared_buffers
- temp_buffers_percent = 40 - percent of allowed RAM to reserve for temp_buffers
- temp_buffers_overcommit = 8 - ratio of allowed temp_buffers overcommit
- node_id - node ID for repmgr. If not set, the last digits of the hostname are used as the ID.
- upstream_node_id - upstream node ID for repmgr. If not set, the primary instance is used.
- locale = 'en_US.UTF-8' - locale to use for initdb
- init_db_from - copies the initial data dir from the specified path and then upgrades
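Similarly, a hedged sketch for a PostgreSQL instance (the postgresql.conf value and the cfdb numbers are examples only):
cfdb::instances:
  pgsrv1:
    type: postgresql
    settings_tune:
      cfdb:
        strict_hba_roles: true
        shared_buffers_percent: 25
        locale: 'en_US.UTF-8'
      postgresql:
        log_min_duration_statement: '1s'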
HAProxy
- The top level key matches haproxy.conf sections (e.g. global, defaults, frontend XXX, backend YYY, etc.)
- If a section is missing, it is created
- Top level values must be hashes of section definitions
- A nil value is interpreted as "delete section"
- Due to the quite messy HAProxy configuration format, please check lib/puppet/provider/cfdb_haproxy/cfdbb.rb for how to properly override entries (some of them include a space, like "timeout client")
However, there is also a special "cfdb" section:
- inter = '3s' - default for server inter
- fastinter = '500ms' - default for server fastinter; it also serves for timeout check
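A cautious sketch of these overrides, assuming they are passed through $cfdb::haproxy::settings_tune (the timing values are examples):
cfdb::haproxy::settings_tune:
  cfdb:
    inter: '5s'
    fastinter: '1s'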
Scheduled actions
Elasticsearch scheduled actions
All actions are based on elasticsearch-curator configuration which is run from cron.
Note: cron actions run only on the cfdb $is_primary node.
Old index cleanup
Suitable for cleanup in ELK stack.
- type = 'cleanup_old' - must be set exactly
- prefix - must be set explicitly (e.g. 'logstash')
- timestring = '%Y.%m.%d' - filter.timestring value
- unit = 'days' - filter.unit value
- unit_count = 30 - filter.unit_count value
- cron = { hour => 2, minute => 10 } - cron config
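A hedged sketch of wiring such a cleanup action into an instance via $scheduled_actions (the action name 'logstash_cleanup' is arbitrary):
cfdb::instances:
  esearch:
    type: elasticsearch
    scheduled_actions:
      logstash_cleanup:
        type: 'cleanup_old'
        prefix: 'logstash'
        timestring: '%Y.%m.%d'
        unit: 'days'
        unit_count: 30
        cron:
          hour: 2
          minute: 10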
Generic elasticsearch-curator actions
- actions = {} - as required by the curator action configuration
- cron = { hour => 2, minute => 10 } - cron config
Change Log
All notable changes to this project will be documented in this file. This project adheres to Semantic Versioning.
1.3.3 (2020-01-07)
- CHANGED: to make Redis primary a priority for failover
- CHANGED: to forcibly close on failover of non-distributed connections
- FIXED: MongoDB backup with renameCollection() statements
1.3.2 (2019-11-13)
- FIXED: standalone Redis configuration issue
- FIXED: to use cfsystem::apt::key instead of raw apt::key to retrieve up-to-date version
- FIXED: insecure MongoDB cluster to use named peers
- FIXED: Docker support improvements for PostgreSQL
- FIXED: extended MongoDB JSON which caused troubles
- FIXED: static access to integrate with cfnetwork
- FIXED: Docker with remote instances
- FIXED: .env of non-cluster local database with disabled unix socket configuration
1.3.1 (2019-06-17)
- CHANGED: to use systemLog.quiet which is ignored by MongoDB any way...
- FIXED: to use cfsystem::stable_sort() for ElasticSearch/MongoDB/Redis NODES .env
- FIXED: postgresql mix of arbitrator+server leading to failed catalog compilation sometimes
- FIXED: redis client to work without local server
- FIXED: cfbackup v1.3.1+ compatibility
- NEW: cfdb::access acting on behalf-of external clients
1.3.0 (2019-04-14)
- NEW: MongoDB support
- NEW: Redis support
- FIXED: postgresql UDP stats ports to be open for single instance case as well
- FIXED: minor cfdb::access issue leading to catalog build failure in some configurations
- FIXED: to more reliably detect active version of PostgreSQL for backup purposes
- FIXED: to bundle pg_backup_ctl with custom changes for PostgreSQL v10+
- FIXED: PostgreSQL to always use the original config
- FIXED: PostgreSQL access in secure mode
- FIXED: concurrent XtraBackup issues
- FIXED: Elasticsearch APT upgrade issues
- CHANGED: updated PostgreSQL latest version to v11
- CHANGED: to use cfbackup module
1.1.0 (2018-12-09)
- CHANGED: updated for Ubuntu 18.04 Bionic support
- CHANGED: revised per-version PostgreSQL extensions
- FIXED: repmgr slave registration in PostgreSQL 10
- FIXED: catalog compilation failure without enabled backup
- FIXED: to support plain NVMe partitions
- FIXED: MySQL client host configuration issue with local TCP client
- FIXED: only-arbitrator-on-host case
- FIXED: secure proxy configuration issues
- NEW: instance memory_min parameter support
1.0.6 (2018-10-24)
- FIXED: to properly use ES_TMPDIR (elasticsearch)
- FIXED: to properly set GC log location (elasticsearch)
1.0.5 (2018-06-14)
- CHANGED: to use utf8mb4 instead of utf8 (utf8mb3) for MySQL by default
1.0.4 (2018-05-02)
- CHANGED: not to install pre-defined elasticsearch extensions by default
- FIXED: to also forcibly enable instance services
- NEW: cfsystem::metric declaration for instances
1.0.3 (2018-04-29)
- CHANGED: to use common cfsystem::pip
1.0.2 (2018-04-18)
- FIXED: to support haproxy 1.8+ (without systemd wrapper)
- NEW: repmgr "location" support
1.0.1 (2018-04-13)
- FIXED: elasticsearch rolling plugin update issues (old plugins are removed first now)
0.12.3 (2018-03-19)
- CHANGED: to use cf_notify for warnings
- FIXED: elasticsearch plugin-related failures on initial run
0.12.2
- CHANGED: not to complain about "yellow" status of single-node elasticsearch
- FIXED: minor issue to allow standalone elasticsearch (cross ref to postgresql variable)
- FIXED: missing default elasticsearch JVM options
- FIXED: to properly use syslog in mysql
- NEW: Elasticsearch plugin installer
- NEW: elasticsearch-curator support
- NEW: concept of scheduled actions per instance
0.12.1
- CHANGED: to warn only if versions are older than latest known by cfdb
- CHANGED: per-type defaults of min & max memory limits
- CHANGED: to mask instead of just disable default services
- FIXED: to properly set infinite open file limit
- FIXED: cfdb_check_access to properly work for root user (switch to HOME)
- NEW: Elasticsearch support
- NEW: record database software running package version to detect restart needed
0.12.0
- CHANGED: upgraded to postgresql 10
- CHANGED: upgraded to repmgr 4
- CHANGED: repmgr witness to always --force registration
- FIXED: to use sslmode=verify-ca for repmgr
- FIXED: cfdb_restart_pending to handle arbitrators
- FIXED: improved PostgreSQL upgrade process
- NEW: cfdb_{cluster}_vacuumdb tool
0.11.4
- CHANGED: enabled hostname resolution in MySQL by default
0.11.3
- FIXED: default cron for cfdb_backup_all
- FIXED: cfdb_restart_pending to support cluster names with underscore
- CHANGED: Percona repos for Debian Stretch & Ubuntu Zesty
- CHANGED: postgresql repos for Debian Stretch
0.11.2
- NEW: improved cfdb_*_bootstrap to ask for confirmation with date-based parameter
- NEW: improved cfdb_*_bootstrap to "fix" safe_bootstrap Galera state flag
0.11.1
- FIXED: broken configuration in some cases of cluster bootstrap (empty listen address)
- FIXED: --initialize detection failing on Puppet 5.x
- CHANGED: to always expect support for --initialize in MySQL 5.7+
- NEW: Puppet 5.x support
- NEW: Ubuntu Zesty support
0.11.0
- Major refactoring of internals
- Got rid of facts processing in favor of resources
- cfdb_access is not recreated, but only shows an error on healthcheck
- Added a difference between cluster and client interfaces
- All cluster instances must use the same port now (firewall optimization with ipsets)
- Switch to cfsystem::clusterssh instead of custom SSH setup
- Improved persistent ports & secrets handling based on cfsystem_persist
- Fixed healthcheck cfdb_access without haproxy case
- Split DB type-specific features into sub-resources
- Added obfuscation of password secrets
- Removed deployment-time secret/port generation in favor of catalog resources (cfsystem)
- Improved Percona Cluster pre-joining connection check
- Changed to use ipsets for client list
- Rewritten cluster healthchecks to use native clients instead of Python scripts
- Misc. improvements
- Added dependency on cfnetwork:firewall anchor where applicable
- Added dependency on cfsystem::randomfeed for HAProxy dhparam generation
- Updated to new 'cfnetwork::bind_address' API
- Enforced public parameter types
- Updated to work with /etc/sudoers.d cleanup
- Changed default HAProxy inter tune from 1s to 3s (better fits Python-based checkers)
- Added "--skip-version-check" to mysql_upgrade
- Added 'password' parameter for default user of cfdb::database
- Fixed to update superuser & repmgr passwords for postgresql on change
- Aligned with cfnetwork changes for failed DNS resolution
- Added CFDB binary folder to global search path through cfsystem::binpath
- Updated to use gpg trusted key files for Percona due to issues with apt::key @stretch
- Removed deprecated calls to try_get_value()
- Changed default versions to already installed or latest
- Added warning if configured version mismatches the latest
0.10.1
- Improved to support automatic Galera joiner startup (based on SSH check)
- Changed to filter out Galera arbitrators from normal node gcomm://
- Converted to support Debian/Ubuntu based on LSB versions, but not codenames
- Fixed Debian Stretch support
- Fixed upgrade procedure for standalone MySQL instance
- Updated to cfsystem:0.10.1
0.10.0
- Updated to cfnetwork 0.10.0 API changes
- Fixed puppet-lint issues
- Removed obsolete insecure helper tools under /db/{user}/bin/ -> use /db/bin/
- Minor improvements
- Added per-instance mysqladmin support
- Implemented reload & shutdown for mysqld through mysqladmin
- Added cfdb_restart_pending helper
- Updated CF deps to v0.10.x
0.9.16
- Fixed to properly install repmgr ext with specific postgresql version
- Added installation of new Percona PGP key
- Updated repmgr config to use new systemd control (repmgr 3.2+ is required)
- Improved PostgreSQL upgrade procedures
- Fixed to ignore init_db_from configuration, if already configured
- Minor improvements to error handling
- Fixed to check all slave nodes are down on cluster upgrade
- Upgraded to PostgreSQL 9.6 by default
- Fixed previously introduced bug requiring instance node restart
- Removed 'partman' from default extension list due to incompatibility with PostgreSQL 9.6
- Removed 'pgespresso' as deprecated with PostgreSQL 9.6
- Added repmgr arbitrator (witness) support (repmgr 3.2+ is required)
0.9.15
- Updated cfsystem dependency
0.9.14
- Added automatic cleanup of cfdb instance systemd files
- Security improvement to move root-executed scripts out of DB instance home folders
- Added cfdb_{cluster}_bootstrap command support for Galera instances
0.9.13
- Attempt to workaround issue of percona server upgrades requiring to shutdown all "mysqld" processes in system
0.9.12
- Fixed to check running cfdbhaproxy
- Updated PXC default to 5.7 (please read official upgrade procedure)
- Changed PXC mysql_upgrade handling to aid official steps
- Minor improvements to secondary server deployment
0.9.11
- Changed to use /dev/urandom for DH params generation to avoid possible hang on deployment with low entropy
0.9.10
- Added "cfdb-" prefix to cluster names in automatic global memory management
- Fixed issues in rotational drive auto-detection
- Fixed exception in existing user password check under some circumstances
0.9.9
- Removed need to use is_bootstrap for Galera cluster setup - it's automatic now
- Fixed to check actual in-database passwords for roles for both MySQL and PostgreSQL
- Fixed invalid check of in-database maxconn per PostgreSQL user
0.9.8
- Fixed to workaround Percona bug of missing qpress in Ubuntu repos
- Changed to use Percona Server 5.6 for Ubuntu due to Percona Repo issues
0.9.7
- Removed repack from default PostgreSQL extension list
0.9.6
- Fixed to support init_db_from the same PostgreSQL version
0.9.5
- Changed cfhaproxy to cfdbhaproxy service name
- Changed internal format of secrets storage
- Updated to new internal features of cfsystem module
- Fixed first-run cfdb::access support from actual state with missing facts/resources in PuppetDB
- Added cfdb::access::use_unix_socket parameter to control if a local TCP connection is required
- Added maxconn variable to DB config produced by cfdb::access
- Added cfdb::role::static_access support for special cases
- Removed plperl from the standard list of extensions as it leads to packaging issues
- Minor changes to interface of cfdb::access::custom_config
- Added support for configuring and upgrading PostgreSQL database extensions
- Added automatic cluster status checks in provisioning
- Added initialization from an existing location (init_db_from)
0.9.4
- Fixed to support single server access for multiple local users without HAProxy involved
- Added HAProxy inter & fastinter tune support
- Implemented HAProxy-based secure TLS tunnel for database connections on demand
- Added a workaround for PostgreSQL stats UDP socket
- Added generic /opt/codingfuture/bin/cfdb_access_checker and fixed not to pass access password in command line during deployment auto-checks
0.9.3
- Major refactoring to support provider mixins per database type
- Fixed PostgreSQL HBA files with strict_hba_roles
- Fixed to check that DB services are running
- HAProxy improvements
- changed to always use custom cluster-aware health-check scripts
- changed to use special reverse-proxy socket for health checks
- changed to support frontend secure connection based on Puppet PKI
- fixed to properly support PostgreSQL UNIX sockets
- Fixed missing DB variable for .env files
- Added PostgreSQL-specific CONNINFO variable for cfdb::access
- Fixed to properly configure access & max connections on secondary servers
- Implemented automatic check for cfdb::access connection availability
- Fixed issues with roles not getting updated after transition error
- Fixed some cases when PostgreSQL roles were not getting created
- Added support for custom config variable resource for a sort of polymorphism in cfdb::access
0.9.2
Initial release
Dependencies
- puppetlabs-stdlib (>= 5.2.0 <6.0.0)
- codingfuture-cfbackup (>= 1.3.0 <2.0.0)
- codingfuture-cfnetwork (>= 1.3.0 <2.0.0)
- codingfuture-cfsystem (>= 1.3.0 <2.0.0)
CodingFuture Infrastructure Automation Project cfdb: Generic Database Management and Access Control module Copyright 2016-2019 (c) Andrey Galkin Contacts: * support@codingfuture.net * andvgal@gmail.com Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.