os_patching
Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x
- Puppet >= 5.3.2 < 7.0.0
Tasks:
- clean_cache
- patch_server
- refresh_fact
Start using this module
Add this module to your Puppetfile:
mod 'albatrossflavour-os_patching', '0.9.0'
os_patching
This module contains a set of tasks and custom facts to allow the automation of, and reporting on, operating system patching. Patching is currently restricted to Red Hat and Debian derivatives; reporting should work on Red Hat/Debian derivatives and Windows.
Under the hood it uses the OS level tools to carry out the actual patching.
Description
Puppet tasks and Bolt have opened up methods to integrate operating system level patching into the Puppet workflow, providing automation of patch execution through tasks and robust reporting of state through custom facts and PuppetDB.
If you're looking for a simple way to report on your OS patch levels, this module will show all updates which are outstanding, including which are related to security updates. Do you want to enable self-service patching? This module will use Puppet's RBAC and orchestration and task execution facilities to give you that power.
It also uses security metadata (where available) to determine if there are security updates. On Red Hat, this is provided by Red Hat as additional metadata in YUM. On Debian, checks are done on which repo the updates come from. A task parameter restricts patching to security updates only.
Blackout windows provide support for time-based change freezes, during which no patching can happen. Multiple windows can be defined, and each automatically expires after its defined end date.
Setup
What os_patching affects
The module provides an additional fact (`os_patching`) and a task to patch a server. When the `os_patching` manifest is added to a node, it installs a script and cron job to generate the cache data used by the `os_patching` fact.
Beginning with os_patching
Install the module using the Puppetfile, include it on your nodes and then use the provided tasks to carry out patching.
Usage
Manifest
Include the module:
include os_patching
More advanced usage:
class { 'os_patching':
  patch_window     => 'Week3',
  reboot_override  => 'always',
  blackout_windows => {
    'End of year change freeze' => {
      'start' => '2018-12-15T00:00:00+10:00',
      'end'   => '2019-01-15T23:59:59+10:00',
    },
  },
}
In that example, the node is assigned to the 'Week3' patch window, will be forced to reboot regardless of the setting specified in the task, and has a blackout window defined for the period 2018-12-15 to 2019-01-15, during which no patching through the task can be carried out.
Task
Run a basic patching task from the command line:
os_patching::patch_server - Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
USAGE:
$ puppet task run os_patching::patch_server [dpkg_params=<value>] [reboot=<value>] [security_only=<value>] [timeout=<value>] [yum_params=<value>] <[--nodes, -n <node-names>] | [--query, -q <'query'>]>
PARAMETERS:
- dpkg_params : Optional[String]
Any additional parameters to include in the dpkg command
- reboot : Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart']]]
Should the server reboot after patching has been applied? (Defaults to "never")
- security_only : Optional[Boolean]
Limit patches to those tagged as security related? (Defaults to false)
- timeout : Optional[Integer]
How many seconds should we wait until timing out the patch run? (Defaults to 3600 seconds)
- yum_params : Optional[String]
Any additional parameters to include in the yum upgrade command (such as including/excluding repos)
Example:
$ puppet task run os_patching::patch_server --params='{"reboot": "patched", "security_only": false}' --query="inventory[certname] { facts.os_patching.patch_window = 'Week3' and facts.os_patching.blocked = false and facts.os_patching.package_update_count > 0}"
This will run a patching task against all nodes whose facts match:

- `os_patching.patch_window` of 'Week3'
- `os_patching.blocked` equals `false`
- `os_patching.package_update_count` greater than 0

The task will apply all patches (`security_only=false`) and will reboot the node if patches were applied (`reboot=patched`).
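As an illustration of what that query selects (using made-up sample data rather than PuppetDB), the same three-condition filter over a flat node list can be expressed as:

```shell
# Hypothetical sample inventory: certname, patch_window, blocked, update count.
# Illustrative data only, not output from PuppetDB.
inventory='node1.example.com Week3 false 6
node2.example.com Week3 true 2
node3.example.com Week1 false 9
node4.example.com Week3 false 0'

# Keep nodes in patch window Week3, not blocked, with at least one update
targets=$(printf '%s\n' "$inventory" | awk '$2 == "Week3" && $3 == "false" && $4 > 0 { print $1 }')
echo "$targets"
```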
Reference
Facts
Most of the reporting is driven off the custom fact `os_patching`, for example:
# facter -p os_patching
{
package_update_count => 0,
package_updates => [],
security_package_updates => [],
security_package_update_count => 0,
blocked => false,
blocked_reasons => [],
blackouts => {},
patch_window => 'Week3',
pinned_packages => [],
last_run => {
date => "2018-08-07T21:55:20+10:00",
message => "Patching complete",
return_code => "Success",
post_reboot => "false",
security_only => "false",
job_id => "60"
},
reboots => {
reboot_required => false,
apps_needing_restart => { },
app_restart_required => false
}
}
This shows there are no updates which can be applied to this server, and that the server doesn't need a reboot or any application restarts. When there are updates to apply, you will see something similar to this:
# facter -p os_patching
{
package_update_count => 6,
package_updates => [
"kernel.x86_64",
"kernel-tools.x86_64",
"kernel-tools-libs.x86_64",
"postfix.x86_64",
"procps-ng.x86_64",
"python-perf.x86_64"
],
security_package_updates => [],
security_package_update_count => 0,
blocked => false,
blocked_reasons => [],
blackouts => {
Test change freeze 2 => {
start => "2018-08-01T09:17:10+1000",
end => "2018-08-01T11:15:50+1000"
}
},
pinned_packages => [],
patch_window => 'Week3',
last_run => {
date => "2018-08-07T21:55:20+10:00",
message => "Patching complete",
return_code => "Success",
post_reboot => "false",
security_only => "false",
job_id => "60"
},
reboots => {
reboot_required => true,
apps_needing_restart => {
630 => "/usr/sbin/NetworkManager --no-daemon ",
1451 => "/usr/bin/python2 -s /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b ",
1232 => "/usr/bin/python -Es /usr/sbin/tuned -l -P "
},
app_restart_required => true
}
}
This shows 6 packages with available updates, along with an array of the package names. None of the packages are tagged as security related (this requires Debian, or a subscription on RHEL). There are no blockers to patching, and the blackout window defined is not in effect.
The reboot_required flag is set to true, which means there have been changes to packages that require a reboot (libc, kernel etc) but a reboot hasn't happened. The apps_needing_restart shows the PID and command line of applications that are using files that have been upgraded but the process hasn't been restarted.
The pinned packages entry lists any packages which have been specifically excluded from being patched, from version lock on Red Hat or by pinning in Debian.
Last run shows a summary of the information from the last `os_patching::patch_server` task.
The fact `os_patching.patch_window` can be used to assign nodes to an arbitrary group. The fact can be used as part of the query fed into the task to determine which nodes to patch:
$ puppet task run os_patching::patch_server --query="inventory[certname] {facts.os_patching.patch_window = 'Week3'}"
To reboot or not to reboot, that is the question...
The logic for how to handle reboots is a little complex as it has to handle a wide range of scenarios and desired outcomes.
There are two options which can be set that control how the reboot decision is made:
The reboot parameter

The reboot parameter is set in the `os_patching::patch_server` task. It takes the following options:

- "always": always reboot the node during the task run, even if no patches are required
- "never" (or the legacy value `false`): never reboot the node during the task run, even if patches have been applied
- "patched" (or the legacy value `true`): reboot the node if patches have been applied
- "smart": use the OS supplied tools (e.g. `needs-restarting` on RHEL) to determine if a reboot is required; if it is, reboot, otherwise do not

The default value is "never".
This parameter sets the default action for all nodes during the run of the task. The behaviour can be overridden on individual nodes by using...
The reboot_override fact

The reboot override fact is part of the `os_patching` fact set. It is set through the os_patching manifest and has a default of "default".
If it is set to "default", the node takes whatever reboot action is specified by the `os_patching::patch_server` task. The other values it accepts are the same as those for the reboot parameter (always, never, patched, smart).

During the task run, any value other than "default" overrides the value of the `reboot` parameter. For example, if the `reboot` parameter is set to "never" but the `reboot_override` fact is set to "always", the node will always reboot. If the `reboot` parameter is set to "never" and the `reboot_override` fact is set to "default", the node will use the `reboot` parameter and not reboot.
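The precedence described above can be sketched as a tiny function (an illustration of the decision logic, not the module's actual implementation):

```shell
# reboot_override wins unless it is "default", in which case the task's
# reboot parameter applies. Illustrative sketch only.
effective_reboot() {
  task_param="$1"
  override="$2"
  if [ "$override" = "default" ]; then
    echo "$task_param"
  else
    echo "$override"
  fi
}

effective_reboot never always   # override forces a reboot
effective_reboot never default  # falls back to the task parameter
```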
Why?
By having a reboot mode set by the task parameter, it is possible to set the behaviour for all nodes in a patching run (I do hundreds at once). The override functionality provided by the fact lets you exclude individual nodes in that run from the reboot behaviour. If there are a couple of nodes you know you need to patch but can't reboot immediately, set their reboot_override fact to "never" and handle the reboot manually at another time.
Task output
If there is nothing to be done, the task will report:
{
"pinned_packages" : [ ],
"security" : false,
"return" : "Success",
"start_time" : "2018-08-08T07:52:28+10:00",
"debug" : "",
"end_time" : "2018-08-08T07:52:46+10:00",
"reboot" : "never",
"packages_updated" : "",
"job_id" : "",
"message" : "No patches to apply"
}
If patching was executed, the task will report similar to below:
{
"pinned_packages" : [ ],
"security" : false,
"return" : "Success",
"start_time" : "2018-08-07T21:55:20+10:00",
"debug" : "TRIMMED DUE TO LENGTH FOR THIS EXAMPLE, WOULD NORMALLY CONTAIN FULL COMMAND OUTPUT",
"end_time" : "2018-08-07T21:57:11+10:00",
"reboot" : "never",
"packages_updated" : [ "NetworkManager-1:1.10.2-14.el7_5.x86_64", "NetworkManager-libnm-1:1.10.2-14.el7_5.x86_64", "NetworkManager-team-1:1.10.2-14.el7_5.x86_64", "NetworkManager-tui-1:1.10.2-14.el7_5.x86_64", "binutils-2.27-27.base.el7.x86_64", "centos-release-7-5.1804.el7.centos.2.x86_64", "git-1.8.3.1-13.el7.x86_64", "gnupg2-2.0.22-4.el7.x86_64", "kernel-tools-3.10.0-862.3.3.el7.x86_64", "kernel-tools-libs-3.10.0-862.3.3.el7.x86_64", "perl-Git-1.8.3.1-13.el7.noarch", "python-2.7.5-68.el7.x86_64", "python-libs-2.7.5-68.el7.x86_64", "python-perf-3.10.0-862.3.3.el7.centos.plus.x86_64", "selinux-policy-3.13.1-192.el7_5.3.noarch", "selinux-policy-targeted-3.13.1-192.el7_5.3.noarch", "sudo-1.8.19p2-13.el7.x86_64", "yum-plugin-fastestmirror-1.1.31-45.el7.noarch", "yum-utils-1.1.31-45.el7.noarch" ],
"job_id" : "60",
"message" : "Patching complete"
}
If patching was blocked, the task will report similar to below:
Error: Task exited : 100
Patching blocked
A summary of the patch run is also written to `/var/cache/os_patching/run_history`, the last line of which is used by the `os_patching.last_run` fact.
2018-08-07T14:47:24+10:00|No patches to apply|Success|false|false|
2018-08-07T14:56:56+10:00|Patching complete|Success|false|false|121
2018-08-07T15:04:42+10:00|yum timeout after 2 seconds : Loaded plugins: versionlock|1|||
2018-08-07T15:05:51+10:00|yum timeout after 3 seconds : Loaded plugins: versionlock|1|||
2018-08-07T15:10:16+10:00|Patching complete|Success|false|false|127
2018-08-07T21:31:47+10:00|Patching blocked |100|||
2018-08-08T07:53:59+10:00|Patching blocked |100|||
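The format is pipe-delimited: date, message, return code, post-reboot, security-only and job ID. A sketch of pulling apart the most recent entry (sample lines inlined here rather than read from /var/cache/os_patching/run_history):

```shell
# Sample history data copied from above; illustrative only
history='2018-08-07T14:56:56+10:00|Patching complete|Success|false|false|121
2018-08-08T07:53:59+10:00|Patching blocked |100|||'

# The last_run fact uses only the final line
last_run=$(printf '%s\n' "$history" | tail -n 1)

# Split the pipe-delimited fields
IFS='|' read -r run_date message return_code post_reboot security_only job_id <<EOF
$last_run
EOF

echo "$run_date: $message(return=$return_code)"
```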
The /var/cache/os_patching directory
This directory contains the various control files needed for the fact and task to work correctly. They are managed by the manifest.
- `/var/cache/os_patching/blackout_windows`: contains the name, start and end time for all blackout windows
- `/var/cache/os_patching/package_updates`: a list of all package updates available, populated by `/usr/local/bin/os_patching_fact_generation.sh`, triggered through cron
- `/var/cache/os_patching/security_package_updates`: a list of all security package updates available, populated by `/usr/local/bin/os_patching_fact_generation.sh`, triggered through cron
- `/var/cache/os_patching/run_history`: a summary of each run of the `os_patching::patch_server` task, populated by the task
- `/var/cache/os_patching/reboot_override`: if present, overrides the `reboot=` parameter to the task
- `/var/cache/os_patching/patch_window`: if present, sets the value for the fact `os_patching.patch_window`
- `/var/cache/os_patching/reboot_required`: if the OS can determine that the server needs to be rebooted due to package changes, this file contains the result; populates the fact reboot.reboot_required
- `/var/cache/os_patching/apps_to_restart`: a list of processes (PID and command line) that haven't been restarted since the packages they use were patched; sets the facts reboot.apps_needing_restart and reboot.app_restart_required
With the exception of the run_history file, all files in /var/cache/os_patching will be regenerated after a puppet run and a run of the os_patching_fact_generation.sh script, which runs every hour by default. If run_history is removed, the same information can be obtained from PDB, apt/yum and syslog.
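Since the cache files hold one package per line, the update counts in the fact fall out as simple line counts. A rough sketch of the idea (with a temp file standing in for /var/cache/os_patching/package_updates; this is not the real os_patching_fact_generation.sh):

```shell
# Stand-in for /var/cache/os_patching/package_updates: one package per line
updates_file=$(mktemp)
printf '%s\n' kernel.x86_64 postfix.x86_64 procps-ng.x86_64 > "$updates_file"

# package_update_count is just the number of lines in the cache file
package_update_count=$(wc -l < "$updates_file")
echo "package_update_count=$package_update_count"

rm -f "$updates_file"
```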
Limitations
This module is for PE2018+ with agents capable of running tasks. It is currently limited to the Red Hat and Debian based operating systems (CentOS, Ubuntu, Debian, RedHat etc). Windows (WSUS) functionality is being actively worked on.
RedHat 5 based systems have support but lack a lot of the yum functionality added in 6, so things like the upgraded package list and job ID will be missing.
Development
Fork, develop, submit a pull request
Reference
Table of Contents
Classes
- `os_patching`: This manifest sets up a script and cron job to populate the `os_patching` fact.
Tasks
- `clean_cache`: Clean patch caches (yum/dpkg) via a task
- `patch_server`: Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
- `refresh_fact`: Force a refresh of the os_patching fact cache via a task
Classes
os_patching
This manifest sets up a script and cron job to populate the `os_patching` fact.
Examples
Assign a node to the 'Week3' patch window, force a reboot and create a blackout window for the end of the year
class { 'os_patching':
  patch_window     => 'Week3',
  reboot_override  => 'always',
  blackout_windows => {
    'End of year change freeze' => {
      'start' => '2018-12-15T00:00:00+10:00',
      'end'   => '2019-01-15T23:59:59+10:00',
    },
  },
}
An example profile to setup patching, sourcing blackout windows from hiera
class profiles::soe::patching (
  $patch_window     = undef,
  $blackout_windows = {},
  $reboot_override  = undef,
){
  # Pull any blackout windows out of hiera
  $hiera_blackout_windows = lookup('profiles::soe::patching::blackout_windows', Hash, 'hash', {})

  # Merge the blackout windows from the parameter and hiera
  $full_blackout_windows = $hiera_blackout_windows + $blackout_windows

  # Call the os_patching class to set everything up
  class { 'os_patching':
    patch_window     => $patch_window,
    reboot_override  => $reboot_override,
    blackout_windows => $full_blackout_windows,
  }
}
JSON hash to specify a change freeze from 2018-12-15 to 2019-01-15
{"End of year change freeze": {"start": "2018-12-15T00:00:00+10:00", "end": "2019-01-15T23:59:59+10:00"}}
Run patching on the node centos.example.com
using the smart reboot option
puppet task run os_patching::patch_server --params '{"reboot": "smart"}' --nodes centos.example.com
Remove from a managed system
class { 'os_patching':
ensure => absent,
}
Parameters
The following parameters are available in the `os_patching` class.
patch_data_owner
Data type: String
User name for the owner of the patch data
Default value: 'root'
patch_data_group
Data type: String
Group name for the owner of the patch data
Default value: 'root'
patch_cron_user
Data type: String
User who runs the cron job
Default value: $patch_data_owner
manage_yum_utils
Data type: Boolean
Should the yum_utils package be managed by this module on RedHat family nodes?
If `true`, use the parameter `yum_utils` to determine how it should be managed
Default value: false
yum_utils
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the yum_utils package be set to?
Default value: 'installed'
fact_upload
Data type: Boolean
Should `puppet fact upload` be run after any changes to the fact cache files?
Default value: true
manage_delta_rpm
Data type: Boolean
Should the deltarpm package be managed by this module on RedHat family nodes?
If `true`, use the parameter `delta_rpm` to determine how it should be managed
Default value: false
delta_rpm
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the delta_rpm package be set to?
Default value: 'installed'
manage_yum_plugin_security
Data type: Boolean
Should the yum_plugin_security package be managed by this module on RedHat family nodes?
If `true`, use the parameter `yum_plugin_security` to determine how it should be managed
Default value: false
yum_plugin_security
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the yum_plugin_security package be set to?
Default value: 'installed'
reboot_override
Data type: Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart', 'default']]]
Controls on a node level if a reboot should/should not be done after patching. This overrides the setting in the task
Default value: 'default'
patch_window
Data type: String
A freeform text entry used to allocate a node to a specific patch window (Optional)
Default value: undef
patch_cron_hour
Data type: Any
The hour(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_month
Data type: Any
The month(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_monthday
Data type: Any
The monthday(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_weekday
Data type: Any
The weekday(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_min
Data type: Any
The min(s) for the cron job to run (defaults to a random number between 0 and 59)
Default value: fqdn_rand(59)
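fqdn_rand seeds a pseudo-random number with the node's FQDN, so each node gets a different but stable minute and the cron load is spread across the fleet. The idea can be illustrated roughly like this (a checksum modulo 59; this is not Puppet's actual algorithm):

```shell
# Node-specific but repeatable minute: checksum the FQDN, take it mod 59.
# Illustrative only; Puppet's fqdn_rand uses a different seeded PRNG.
stable_minute() {
  sum=$(printf '%s' "$1" | cksum | cut -d ' ' -f 1)
  echo $(( sum % 59 ))
}

stable_minute centos.example.com   # the same node always gets the same minute
```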
ensure
Data type: Enum['present', 'absent']
`present` to install scripts, cronjobs, files, etc; `absent` to clean up a system that previously hosted us
Default value: 'present'
blackout_windows
Data type: Optional[Hash]
Options:

- `title` (String): Name of the blackout window
- `start` (String): Start of the blackout window (ISO 8601 format)
- `end` (String): End of the blackout window (ISO 8601 format)
Default value: undef
Tasks
clean_cache
Clean patch caches (yum/dpkg) via a task
Supports noop? false
patch_server
Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
Supports noop? false
Parameters
yum_params
Data type: Optional[String]
Any additional parameters to include in the yum upgrade command (such as including/excluding repos)
dpkg_params
Data type: Optional[String]
Any additional parameters to include in the dpkg command
reboot
Data type: Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart']]]
Should the server reboot after patching has been applied? (Defaults to 'never')
timeout
Data type: Optional[Integer]
How many seconds should we wait until timing out the patch run? (Defaults to 3600 seconds)
security_only
Data type: Optional[Boolean]
Limit patches to those tagged as security related? (Defaults to false)
clean_cache
Data type: Optional[Boolean]
Should the yum/dpkg caches be cleaned at the start of the task? (Defaults to false)
refresh_fact
Force a refresh of the os_patching fact cache via a task
Supports noop? false
What are tasks?
Modules can contain tasks that take action outside of a desired state managed by Puppet. It’s perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Change Log
0.9.0 (2019-04-24)
Implemented enhancements:
- os_patching can now patch Suse Linux (Thanks JakeTRogers)
- Switched acceptance testing over to Litmus
Merged pull requests:
- Feature/sles #113 (JakeTRogers)
0.8.0 (2019-01-24)
Merged pull requests:
- Changelog update #111 (albatrossflavour)
- Merge to master #110 (albatrossflavour)
- Fact upload and stdlib fixes #109 (albatrossflavour)
- Feature/pdqtest #108 (albatrossflavour)
- Bugfix for filter code #105 (albatrossflavour)
- Bugfix/filter #104 (albatrossflavour)
- Merge pull request #102 from albatrossflavour/development #103 (albatrossflavour)
0.7.0 (2018-12-09)
Fixed bugs:
- 3777 updates!? (update check should only count stdout) #99
- AIX - Error resolving os_patching (restrict away from AIX?) #93
- json encoding issue #92
Merged pull requests:
- V0.7.0 release #102 (albatrossflavour)
- metadata updates prior to 0.7.0 release #101 (albatrossflavour)
- Additional filtering based on bug #99 #100 (albatrossflavour)
- Add confine to facter #98 (albatrossflavour)
- filter out yum check-update security messages #97 (albatrossflavour)
- Merge pull request #87 from albatrossflavour/development #88 (albatrossflavour)
0.6.4 (2018-10-03)
Merged pull requests:
- Push to master (0.6.4) #87 (albatrossflavour)
- Merge pull request #85 from albatrossflavour/development #86 (albatrossflavour)
0.6.3 (2018-10-03)
Merged pull requests:
- V0.6.3 release #85 (albatrossflavour)
- Debian fact improvements #84 (albatrossflavour)
- Merge pull request #82 from albatrossflavour/development #83 (albatrossflavour)
0.6.2 (2018-10-03)
Merged pull requests:
- V0.6.2 release #82 (albatrossflavour)
- Enable security patching in debian again #81 (albatrossflavour)
- Merge pull request #79 from albatrossflavour/development #80 (albatrossflavour)
0.6.1 (2018-10-02)
Merged pull requests:
- Fix a couple of strings issues #79 (albatrossflavour)
- Merge pull request #77 from albatrossflavour/development #78 (albatrossflavour)
0.6.0 (2018-10-02)
Implemented enhancements:
- [enhancement] consider validating incoming ISO-8601 timestamps for validity #69
- [bug] invalid times parsed from blackouts file silently ignored #67
- [task] move data to /var/cache #60
- [enhancement][sponsored] fixup puppetstrings output and include REFERENCE.md #59
- [testing][sponsored] need mock version of `puppet fact upload` #58
- [testing][sponsored] acceptance tests #57
- [feature][sponsored] package cleanup before #55
- [feature][sponsored] uninstall support #54
- stack trace when task run before setup complete #52
Fixed bugs:
- [bug] task fails to run on debian [assign geoff] #70
- [bug] script relies on /usr/local/bin/facter but it does not always exist #56
- Value type appears to be incorrect #48
Merged pull requests:
- Pull to master #77 (albatrossflavour)
- Feature/data parser #76 (albatrossflavour)
- Feature/clean cache #75 (albatrossflavour)
- remove all reference to /opt/puppetlabs/facter/facts.d/os_patching.yaml #72 (GeoffWilliams)
- Add acceptance testing, esure=>absent, simplfify #71 (GeoffWilliams)
- Add reference.md #65 (albatrossflavour)
- Bugfix/strings #64 (albatrossflavour)
- Feature/move cache #63 (albatrossflavour)
- Bugfix/facter path #62 (albatrossflavour)
- Feature/rpm attribute fix #61 (albatrossflavour)
- Warn user when task is not setup yet #53 (GeoffWilliams)
- Merge pull request #50 from albatrossflavour/development #51 (albatrossflavour)
0.5.0 (2018-09-23)
Merged pull requests:
- Merge to master #50 (albatrossflavour)
- Change the way we handle reboot logic #49 (albatrossflavour)
- Resync to dev #47 (albatrossflavour)
0.4.1 (2018-09-16)
Merged pull requests:
- V0.4.1 #46 (albatrossflavour)
0.4.0 (2018-09-16)
Implemented enhancements:
- packages_updated does not show the kernel itself #29
Fixed bugs:
- Locked, Exiting - Need trap(s) if we have a lockfile (/usr/local/bin/os_patching_fact_generation.sh) #42
- When os_patching reports patches but there is not enough space to install them, it reports success #39
- When unreachable yumrepos are present, os_patching does not restart properly #36
- When no disk space is left, os_patching reports no patches to apply rather than an error #35
Merged pull requests:
- V0.4.0 release #45 (albatrossflavour)
- Add extra error checking for the patch execution #44 (albatrossflavour)
- Feature/facter error reporting #43 (albatrossflavour)
- regex ignores pkgs starting with uppercase or digits #41 (f3sty)
- Bugfix/needs restarting improvements #38 (albatrossflavour)
- Prod release #34 (albatrossflavour)
- Fix parsing of install/installed #33 (albatrossflavour)
- Fix issue with parsing of installed/install output from yum #30 (albatrossflavour)
- Sync back to dev #28 (albatrossflavour)
0.3.5 (2018-08-16)
Merged pull requests:
- Pre-release updates #27 (albatrossflavour)
- Release to master #26 (albatrossflavour)
- Merge timeout fixes #25 (albatrossflavour)
- Resync to development #24 (albatrossflavour)
0.3.4 (2018-08-10)
Merged pull requests:
- Pre release updates #23 (albatrossflavour)
- Missed a new variable #22 (albatrossflavour)
- Remove shell commands as much as possible #21 (albatrossflavour)
- Ooops #20 (albatrossflavour)
- Add cron job to refresh cache at reboot #19 (albatrossflavour)
0.3.3 (2018-08-09)
Merged pull requests:
- Ensure we honour reboot_override even if a reboot isn't required #18 (albatrossflavour)
- Secure the params a little more #17 (albatrossflavour)
0.3.2 (2018-08-09)
Merged pull requests:
- Updates to detect when reboots are required #16 (albatrossflavour)
0.3.1 (2018-08-09)
Merged pull requests:
- Resync to development #15 (albatrossflavour)
0.2.1 (2018-08-07)
Merged pull requests:
- Major documentation update #14 (albatrossflavour)
- rubocop #13 (albatrossflavour)
- Rubocop updates #12 (albatrossflavour)
- Rubocop is on thin ice! #11 (albatrossflavour)
- rubocop updates #10 (albatrossflavour)
- Push to production #9 (albatrossflavour)
- Start/end times added and history file fixed #8 (albatrossflavour)
- Major update for all areas #7 (albatrossflavour)
0.1.19 (2018-07-09)
Merged pull requests:
- Feature/smarter tasks #6 (albatrossflavour)
0.1.17 (2018-06-01)
0.1.16 (2018-05-29)
0.1.14 (2018-05-28)
0.1.13 (2018-05-28)
Merged pull requests:
- Merge pull request #1 from albatrossflavour/development #4 (albatrossflavour)
- clean up unused caches #3 (albatrossflavour)
- Updates #1 (albatrossflavour)
* This Change Log was automatically generated by github_changelog_generator
Dependencies
- puppetlabs-stdlib (>= 4.13.1 < 5.1.0)
- puppetlabs-translate (>= 1.0.0 < 2.0.0)
- puppet-cron (>= 1.3.1 < 5.0.0)
- puppetlabs-cron_core (>= 1.0.1 < 5.1.0)