os_patching
Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x
- Puppet >= 5.3.2 < 7.0.0
Tasks:
- clean_cache
- patch_server
- refresh_fact
Plans:
- patch_after_healthcheck
Start using this module
Add this module to your Puppetfile:
mod 'albatrossflavour-os_patching', '0.11.0'
Learn more about managing modules with a Puppetfile
os_patching
This module contains a set of tasks and custom facts to allow the automation of, and reporting on, operating system patching. Currently patching works on Linux (Red Hat, SUSE and Debian derivatives) and Windows.
Under the hood it uses the OS-level tools to carry out the actual patching. That does mean you need to be sure your nodes are ABLE to query for their updates (via YUM, APT, WSUS etc).
Description
Puppet tasks and Bolt have opened up methods to integrate operating system level patching into the Puppet workflow, providing automation of patch execution through tasks and robust reporting of state through custom facts and PuppetDB.
If you're looking for a simple way to report on your OS patch levels, this module will show all outstanding updates, including which of them are security related. Do you want to enable self-service patching? This module uses Puppet's RBAC, orchestration and task execution facilities to give you that power.
It also uses security metadata (where available) to determine if there are security updates. On Red Hat, this is provided by Red Hat as additional metadata in YUM. On Debian, checks are done on which repo the updates come from. There is a task parameter to apply only security updates.
Blackout windows enable support for time-based change freezes, during which no patching can happen. Multiple windows can be defined, and each will automatically expire after reaching its defined end date.
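As a sketch of that logic: a node counts as blocked when the current time falls inside any defined window. This is illustrative Python only, not the module's actual implementation (which lives in its fact generation script); the window name and dates are just examples.

```python
# Sketch of the blackout-window check: collect the names of all windows
# that contain the given timestamp. Illustrative only.
from datetime import datetime, timezone

blackout_windows = {
    "End of year change freeze": {
        "start": "2018-12-15T00:00:00+10:00",
        "end": "2019-01-15T23:59:59+10:00",
    }
}

def blocked_reasons(windows, now):
    reasons = []
    for name, w in windows.items():
        start = datetime.fromisoformat(w["start"])
        end = datetime.fromisoformat(w["end"])
        if start <= now <= end:
            reasons.append(name)
    return reasons

now = datetime(2018, 12, 20, tzinfo=timezone.utc)
print(blocked_reasons(blackout_windows, now))  # → ['End of year change freeze']
```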
Setup
What os_patching affects
The module provides an additional fact (`os_patching`) and has a task to allow the patching of a server. When the `os_patching` manifest is added to a node it installs a script and cron job to generate the cache data used by the `os_patching` fact.
Beginning with os_patching
Install the module using the Puppetfile, include it on your nodes and then use the provided tasks to carry out patching.
Usage
Manifest
Include the module:
include os_patching
More advanced usage:
class { 'os_patching':
  patch_window     => 'Week3',
  reboot_override  => 'always',
  blackout_windows => { 'End of year change freeze' =>
    {
      'start' => '2018-12-15T00:00:00+1000',
      'end'   => '2019-01-15T23:59:59+1000',
    }
  },
}
In that example, the node is assigned to a "patch window", will be forced to reboot regardless of the setting specified in the task and has a blackout window defined for the period of 2018-12-15 - 2019-01-15, during which time no patching through the task can be carried out.
Task
Run a basic patching task from the command line:
os_patching::patch_server - Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
USAGE:
$ puppet task run os_patching::patch_server [dpkg_params=<value>] [reboot=<value>] [security_only=<value>] [timeout=<value>] [yum_params=<value>] <[--nodes, -n <node-names>] | [--query, -q <'query'>]>
PARAMETERS:
- dpkg_params : Optional[String]
Any additional parameters to include in the dpkg command
- reboot : Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart']]]
Should the server reboot after patching has been applied? (Defaults to "never")
- security_only : Optional[Boolean]
Limit patches to those tagged as security related? (Defaults to false)
- timeout : Optional[Integer]
How many seconds should we wait until timing out the patch run? (Defaults to 3600 seconds)
- yum_params : Optional[String]
Any additional parameters to include in the yum upgrade command (such as including/excluding repos)
Example:
$ puppet task run os_patching::patch_server --params='{"reboot": "patched", "security_only": false}' --query="inventory[certname] { facts.os_patching.patch_window = 'Week3' and facts.os_patching.blocked = false and facts.os_patching.package_update_count > 0}"
This will run a patching task against all nodes which have facts matching:
- `os_patching.patch_window` of 'Week3'
- `os_patching.blocked` equals `false`
- `os_patching.package_update_count` greater than 0

The task will apply all patches (`security_only=false`) and will reboot the node after patching if any patches were applied (`reboot=patched`).
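The PQL query above selects targets by fact values. For illustration only, here is the same filter expressed over a plain Python list; the node names and fact values are made up:

```python
# Hypothetical inventory: (certname, facts) pairs mirroring the fields
# used in the PQL query above.
nodes = [
    ("web01", {"patch_window": "Week3", "blocked": False, "package_update_count": 4}),
    ("db01",  {"patch_window": "Week1", "blocked": False, "package_update_count": 9}),
    ("app01", {"patch_window": "Week3", "blocked": True,  "package_update_count": 2}),
]

# Keep nodes in the Week3 window that are not blocked and have updates.
targets = [name for name, f in nodes
           if f["patch_window"] == "Week3"
           and not f["blocked"]
           and f["package_update_count"] > 0]
print(targets)  # → ['web01']
```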
Reference
Facts
Most of the reporting is driven off the custom fact `os_patching`, for example:
# facter -p os_patching
{
package_update_count => 0,
package_updates => [],
security_package_updates => [],
security_package_update_count => 0,
blocked => false,
blocked_reasons => [],
blackouts => {},
patch_window => "Week3",
pinned_packages => [],
last_run => {
date => "2018-08-07T21:55:20+10:00",
message => "Patching complete",
return_code => "Success",
post_reboot => "false",
security_only => "false",
job_id => "60"
},
reboots => {
reboot_required => false,
apps_needing_restart => { },
app_restart_required => false
}
}
This shows there are no updates which can be applied to this server and the server doesn't need a reboot or any application restarts. When there are updates to apply, you will see something similar to this:
# facter -p os_patching
{
package_update_count => 6,
package_updates => [
"kernel.x86_64",
"kernel-tools.x86_64",
"kernel-tools-libs.x86_64",
"postfix.x86_64",
"procps-ng.x86_64",
"python-perf.x86_64"
],
security_package_updates => [],
security_package_update_count => 0,
blocked => false,
blocked_reasons => [],
blackouts => {
Test change freeze 2 => {
start => "2018-08-01T09:17:10+1000",
end => "2018-08-01T11:15:50+1000"
}
},
pinned_packages => [],
patch_window => "Week3",
last_run => {
date => "2018-08-07T21:55:20+10:00",
message => "Patching complete",
return_code => "Success",
post_reboot => "false",
security_only => "false",
job_id => "60"
}
reboots => {
reboot_required => true,
apps_needing_restart => {
630 => "/usr/sbin/NetworkManager --no-daemon ",
1451 => "/usr/bin/python2 -s /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b ",
1232 => "/usr/bin/python -Es /usr/sbin/tuned -l -P "
},
app_restart_required => true
}
}
This shows 6 packages with available updates, along with an array of the package names. None of the packages are tagged as security related (security tagging requires Debian or a subscription to RHEL). There are no blockers to patching and the blackout window defined is not in effect.
The reboot_required flag is set to true, which means there have been changes to packages that require a reboot (libc, kernel etc) but a reboot hasn't happened. The apps_needing_restart shows the PID and command line of applications that are using files that have been upgraded but the process hasn't been restarted.
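A consumer-side sketch of reading this fact (for example from `facter -p --json os_patching`) and deciding what a node needs. The JSON here is a trimmed, hand-written sample, and `next_action` is a hypothetical helper, not part of the module:

```python
# Parse a trimmed sample of the os_patching fact and pick an action.
# Field names follow the fact output shown above.
import json

fact_json = """
{"package_update_count": 6,
 "security_package_update_count": 0,
 "blocked": false,
 "reboots": {"reboot_required": true, "app_restart_required": true}}
"""

fact = json.loads(fact_json)

def next_action(f):
    if f["blocked"]:
        return "blocked"
    if f["package_update_count"] > 0:
        return "patch"
    if f["reboots"]["reboot_required"]:
        return "reboot"
    return "nothing"

print(next_action(fact))  # → patch
```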
The pinned packages entry lists any packages which have been specifically excluded from being patched, from version lock on Red Hat or by pinning in Debian.
Last run shows a summary of the information from the last `os_patching::patch_server` task.
The fact `os_patching.patch_window` can be used to assign nodes to an arbitrary group. The fact can be used as part of the query fed into the task to determine which nodes to patch:
$ puppet task run os_patching::patch_server --query="inventory[certname] {facts.os_patching.patch_window = 'Week3'}"
To reboot or not to reboot, that is the question...
The logic for how to handle reboots is a little complex as it has to handle a wide range of scenarios and desired outcomes.
There are two options which can be set that control how the reboot decision is made:
The `reboot` parameter
The reboot parameter is set in the `os_patching::patch_server` task. It takes the following options:
- "always" - No matter what, always reboot the node during the task run, even if no patches are required
- "never" (or the legacy value `false`) - No matter what, never reboot the node during the task run, even if patches have been applied
- "patched" (or the legacy value `true`) - Reboot the node if patches have been applied
- "smart" - Use the OS-supplied tools (e.g. `needs_restarting` on RHEL) to determine if a reboot is required; if it is, reboot, otherwise do not

The default value is "never".
This parameter sets the default action for all nodes during the run of the task. It is possible to override the behaviour on a node by using...
The `reboot_override` fact
The reboot override fact is part of the `os_patching` fact set. It is set through the os_patching manifest and has a default of "default".
If it is set to "default" it will take whatever reboot action is given by the `os_patching::patch_server` task. The other options it takes are the same as those for the reboot parameter (always, never, patched, smart).
During the task run, any value other than "default" will override the value of the `reboot` parameter. For example, if the `reboot` parameter is set to "never" but the `reboot_override` fact is set to "always", the node will always reboot. If the `reboot` parameter is set to "never" and the `reboot_override` fact is set to "default", the node will use the `reboot` parameter and not reboot.
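The precedence described above can be condensed into a few lines. This is a sketch; `effective_reboot` is a hypothetical helper name, not code from the module:

```python
# The node-level reboot_override fact wins unless it is "default",
# in which case the task's reboot parameter applies.
def effective_reboot(task_param, override_fact):
    # Legacy boolean values map onto the modern strings.
    legacy = {True: "patched", False: "never"}
    task_param = legacy.get(task_param, task_param)
    if override_fact != "default":
        return override_fact
    return task_param

print(effective_reboot("never", "always"))   # → always (fact overrides the parameter)
print(effective_reboot("never", "default"))  # → never (parameter stands)
```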
Why?
By having a reboot mode set by the task parameter, it is possible to set the behaviour for all nodes in a patching run (I do hundreds at once). The override functionality provided by the fact lets you exclude individual nodes in the patching run from that reboot behaviour. Maybe there are a couple of nodes you know you need to patch but can't reboot immediately; set their reboot_override fact to "never" and handle the reboot manually at another time.
Task output
If there is nothing to be done, the task will report:
{
"pinned_packages" : [ ],
"security" : false,
"return" : "Success",
"start_time" : "2018-08-08T07:52:28+10:00",
"debug" : "",
"end_time" : "2018-08-08T07:52:46+10:00",
"reboot" : "never",
"packages_updated" : "",
"job_id" : "",
"message" : "No patches to apply"
}
If patching was executed, the task will report similar to below:
{
"pinned_packages" : [ ],
"security" : false,
"return" : "Success",
"start_time" : "2018-08-07T21:55:20+10:00",
"debug" : "TRIMMED DUE TO LENGTH FOR THIS EXAMPLE, WOULD NORMALLY CONTAIN FULL COMMAND OUTPUT",
"end_time" : "2018-08-07T21:57:11+10:00",
"reboot" : "never",
"packages_updated" : [ "NetworkManager-1:1.10.2-14.el7_5.x86_64", "NetworkManager-libnm-1:1.10.2-14.el7_5.x86_64", "NetworkManager-team-1:1.10.2-14.el7_5.x86_64", "NetworkManager-tui-1:1.10.2-14.el7_5.x86_64", "binutils-2.27-27.base.el7.x86_64", "centos-release-7-5.1804.el7.centos.2.x86_64", "git-1.8.3.1-13.el7.x86_64", "gnupg2-2.0.22-4.el7.x86_64", "kernel-tools-3.10.0-862.3.3.el7.x86_64", "kernel-tools-libs-3.10.0-862.3.3.el7.x86_64", "perl-Git-1.8.3.1-13.el7.noarch", "python-2.7.5-68.el7.x86_64", "python-libs-2.7.5-68.el7.x86_64", "python-perf-3.10.0-862.3.3.el7.centos.plus.x86_64", "selinux-policy-3.13.1-192.el7_5.3.noarch", "selinux-policy-targeted-3.13.1-192.el7_5.3.noarch", "sudo-1.8.19p2-13.el7.x86_64", "yum-plugin-fastestmirror-1.1.31-45.el7.noarch", "yum-utils-1.1.31-45.el7.noarch" ],
"job_id" : "60",
"message" : "Patching complete"
}
If patching was blocked, the task will report similar to below:
Error: Task exited : 100
Patching blocked
A summary of the patch run is also written to `/var/cache/os_patching/run_history`, the last line of which is used by the `os_patching.last_run` fact.
2018-08-07T14:47:24+10:00|No patches to apply|Success|false|false|
2018-08-07T14:56:56+10:00|Patching complete|Success|false|false|121
2018-08-07T15:04:42+10:00|yum timeout after 2 seconds : Loaded plugins: versionlock|1|||
2018-08-07T15:05:51+10:00|yum timeout after 3 seconds : Loaded plugins: versionlock|1|||
2018-08-07T15:10:16+10:00|Patching complete|Success|false|false|127
2018-08-07T21:31:47+10:00|Patching blocked |100|||
2018-08-08T07:53:59+10:00|Patching blocked |100|||
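The field order in those lines appears to be date, message, return code, post-reboot flag, security-only flag and job ID, matching the `last_run` fact shown earlier. A sketch of parsing the last line; the key names are inferred from the fact output, not taken from the module's source:

```python
# Split a pipe-delimited run_history line into last_run-style fields.
line = "2018-08-07T14:56:56+10:00|Patching complete|Success|false|false|121"
keys = ["date", "message", "return_code", "post_reboot", "security_only", "job_id"]
last_run = dict(zip(keys, line.split("|")))
print(last_run["message"], last_run["job_id"])  # → Patching complete 121
```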
The /var/cache/os_patching directory
This directory contains the various control files needed for the fact and task to work correctly. They are managed by the manifest.
- `/var/cache/os_patching/blackout_windows`: contains name, start and end time for all blackout windows
- `/var/cache/os_patching/package_updates`: a list of all package updates available, populated by `/usr/local/bin/os_patching_fact_generation.sh`, triggered through cron
- `/var/cache/os_patching/security_package_updates`: a list of all security package updates available, populated by `/usr/local/bin/os_patching_fact_generation.sh`, triggered through cron
- `/var/cache/os_patching/run_history`: a summary of each run of the `os_patching::patch_server` task, populated by the task
- `/var/cache/os_patching/reboot_override`: if present, overrides the `reboot=` parameter to the task
- `/var/cache/os_patching/patch_window`: if present, sets the value for the fact `os_patching.patch_window`
- `/var/cache/os_patching/reboot_required`: if the OS can determine that the server needs to be rebooted due to package changes, this file contains the result. Populates the fact `reboots.reboot_required`
- `/var/cache/os_patching/apps_to_restart`: a list of processes (PID and command line) that haven't been restarted since the packages they use were patched. Sets the facts `reboots.apps_needing_restart` and `reboots.app_restart_required`
With the exception of the run_history file, all files in /var/cache/os_patching will be regenerated after a puppet run and a run of the os_patching_fact_generation.sh script, which runs every hour by default. If run_history is removed, the same information can be obtained from PuppetDB, apt/yum and syslog.
Limitations
RedHat 5 based systems are supported but lack a lot of the yum functionality added in 6, so things like the upgraded package list and job ID will be missing.
Development
Fork, develop, submit a pull request
Contributors
Reference
Table of Contents
Classes
- `os_patching`: This manifest sets up a script and cron job to populate the `os_patching` fact.
Tasks
- `clean_cache`: Clean patch caches (yum/dpkg) via a task
- `patch_server`: Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
- `refresh_fact`: Force a refresh of the os_patching fact cache via a task
Plans
- `os_patching::patch_after_healthcheck`: An example plan that uses the puppet health check module to perform a pre-check on the nodes you're planning to patch. If the nodes pass the check, they get patched
Classes
os_patching
This manifest sets up a script and cron job to populate the `os_patching` fact.
Examples
Assign a node to the 'Week3' patching window, force a reboot and create a blackout window for the end of the year:
class { 'os_patching':
patch_window => 'Week3',
reboot_override => 'always',
blackout_windows => { 'End of year change freeze' =>
{
'start' => '2018-12-15T00:00:00+10:00',
'end'   => '2019-01-15T23:59:59+10:00',
}
},
}
An example profile to setup patching, sourcing blackout windows from hiera
class profiles::soe::patching (
$patch_window = undef,
$blackout_windows = undef,
$reboot_override = undef,
){
# Pull any blackout windows out of hiera
$hiera_blackout_windows = lookup('profiles::soe::patching::blackout_windows',Hash,hash,{})
# Merge the blackout windows from the parameter and hiera
$full_blackout_windows = $hiera_blackout_windows + $blackout_windows
# Call the os_patching class to set everything up
class { 'os_patching':
patch_window => $patch_window,
reboot_override => $reboot_override,
blackout_windows => $full_blackout_windows,
}
}
JSON hash to specify a change freeze from 2018-12-15 to 2019-01-15
{"End of year change freeze": {"start": "2018-12-15T00:00:00+10:00", "end": "2019-01-15T23:59:59+10:00"}}
Run patching on the node `centos.example.com` using the smart reboot option
puppet task run os_patching::patch_server --params '{"reboot": "smart"}' --nodes centos.example.com
Remove from a managed system
class { 'os_patching':
ensure => absent,
}
Parameters
The following parameters are available in the `os_patching` class.
patch_data_owner
Data type: String
User name for the owner of the patch data
Default value: 'root'
patch_data_group
Data type: String
Group name for the owner of the patch data
Default value: 'root'
patch_cron_user
Data type: String
User who runs the cron job
Default value: $patch_data_owner
manage_yum_utils
Data type: Boolean
Should the yum_utils package be managed by this module on RedHat family nodes?
If `true`, use the parameter `yum_utils` to determine how it should be managed
Default value: false
yum_utils
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the yum_utils package be set to?
Default value: 'installed'
fact_upload
Data type: Boolean
Should `puppet fact upload` be run after any changes to the fact cache files?
Default value: true
manage_delta_rpm
Data type: Boolean
Should the deltarpm package be managed by this module on RedHat family nodes?
If `true`, use the parameter `delta_rpm` to determine how it should be managed
Default value: false
delta_rpm
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the delta_rpm package be set to?
Default value: 'installed'
manage_yum_plugin_security
Data type: Boolean
Should the yum_plugin_security package be managed by this module on RedHat family nodes?
If `true`, use the parameter `yum_plugin_security` to determine how it should be managed
Default value: false
yum_plugin_security
Data type: Enum['installed', 'absent', 'purged', 'held', 'latest']
If managed, what should the yum_plugin_security package be set to?
Default value: 'installed'
reboot_override
Data type: Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart', 'default']]]
Controls on a node level if a reboot should/should not be done after patching. This overrides the setting in the task
Default value: 'default'
patch_window
Data type: String
A freeform text entry used to allocate a node to a specific patch window (Optional)
Default value: undef
patch_cron_hour
Data type: Any
The hour(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_month
Data type: Any
The month(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_monthday
Data type: Any
The monthday(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_weekday
Data type: Any
The weekday(s) for the cron job to run (defaults to absent, which means '*' in cron)
Default value: absent
patch_cron_min
Data type: Any
The min(s) for the cron job to run (defaults to a random number between 0 and 59)
Default value: fqdn_rand(59)
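`fqdn_rand(59)` gives each node a stable, pseudo-random minute so cron runs are spread across the hour rather than all firing at once. A rough Python analogue of the idea (not Puppet's actual algorithm):

```python
# Hash the node name to a stable minute, spreading cron runs across
# the hour. Illustrative only; Puppet's fqdn_rand uses its own
# seeded algorithm.
import hashlib

def pseudo_fqdn_rand(fqdn, limit):
    digest = hashlib.md5(fqdn.encode()).hexdigest()
    return int(digest, 16) % limit

minute = pseudo_fqdn_rand("centos.example.com", 59)
print(0 <= minute < 59)  # → True
```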
ensure
Data type: Enum['present', 'absent']
`present` to install scripts, cronjobs, files, etc.; `absent` to clean up a system that previously hosted us
Default value: 'present'
blackout_windows
Data type: Optional[Hash]
Options:
- `title` String: Name of the blackout window
- `start` String: Start of the blackout window (ISO 8601 format)
- `end` String: End of the blackout window (ISO 8601 format)
Default value: undef
Tasks
clean_cache
Clean patch caches (yum/dpkg) via a task
Supports noop? false
patch_server
Carry out OS patching on the server, optionally including a reboot and/or only applying security related updates
Supports noop? false
Parameters
yum_params
Data type: Optional[String]
Any additional parameters to include in the yum upgrade command (such as including/excluding repos)
dpkg_params
Data type: Optional[String]
Any additional parameters to include in the dpkg command
zypper_params
Data type: Optional[String]
Any additional parameters to include in the zypper update command
reboot
Data type: Optional[Variant[Boolean, Enum['always', 'never', 'patched', 'smart']]]
Should the server reboot after patching has been applied? (Defaults to 'never')
timeout
Data type: Optional[Integer]
How many seconds should we wait until timing out the patch run? (Defaults to 3600 seconds)
security_only
Data type: Optional[Boolean]
Limit patches to those tagged as security related? (Defaults to false)
clean_cache
Data type: Optional[Boolean]
Should the yum/dpkg caches be cleaned at the start of the task? (Defaults to false)
refresh_fact
Force a refresh of the os_patching fact cache via a task
Supports noop? false
Plans
os_patching::patch_after_healthcheck
An example plan that uses the puppet health check module to perform a pre-check on the nodes you're planning to patch. If the nodes pass the check, they get patched
Parameters
The following parameters are available in the `os_patching::patch_after_healthcheck` plan.
nodes
Data type: TargetSpec
Change Log
0.11.0 (2019-05-03)
Implemented enhancements:
- Add litmus tests to run the tasks and validate the results #124
- Enable windows support for the manifests and facter #120
Merged pull requests:
- Release to production in preparation for V0.11.0 release #123 (albatrossflavour)
- Community information added #122 (albatrossflavour)
- Enable windows support #121 (albatrossflavour)
0.10.0 (2019-04-26)
Implemented enhancements:
- Create example bolt plan for patching #117
Merged pull requests:
- Release to production #119 (albatrossflavour)
- Add example plan #118 (albatrossflavour)
- Resync development #116 (albatrossflavour)
0.9.0 (2019-04-26)
Merged pull requests:
- Merge Litmus and Suse to production #115 (albatrossflavour)
- Switch over to litmus tests #114 (albatrossflavour)
- Feature/sles #113 (JakeTRogers)
0.8.0 (2019-01-24)
Closed issues:
Merged pull requests:
- Changelog update #111 (albatrossflavour)
- Merge to master #110 (albatrossflavour)
- Fact upload and stdlib fixes #109 (albatrossflavour)
- Feature/pdqtest #108 (albatrossflavour)
- Bugfix for filter code #105 (albatrossflavour)
- Bugfix/filter #104 (albatrossflavour)
- Merge pull request #102 from albatrossflavour/development #103 (albatrossflavour)
0.7.0 (2018-12-09)
Fixed bugs:
- 3777 updates!? (update check should only count stdout) #99
- AIX - Error resolving os_patching (restrict away from AIX?) #93
- json encoding issue #92
Merged pull requests:
- V0.7.0 release #102 (albatrossflavour)
- metadata updates prior to 0.7.0 release #101 (albatrossflavour)
- Additional filtering based on bug #99 #100 (albatrossflavour)
- Add confine to facter #98 (albatrossflavour)
- filter out yum check-update security messages #97 (albatrossflavour)
- Merge pull request #87 from albatrossflavour/development #88 (albatrossflavour)
0.6.4 (2018-10-03)
Merged pull requests:
- Push to master (0.6.4) #87 (albatrossflavour)
- Merge pull request #85 from albatrossflavour/development #86 (albatrossflavour)
0.6.3 (2018-10-03)
Merged pull requests:
- V0.6.3 release #85 (albatrossflavour)
- Debian fact improvements #84 (albatrossflavour)
- Merge pull request #82 from albatrossflavour/development #83 (albatrossflavour)
0.6.2 (2018-10-03)
Merged pull requests:
- V0.6.2 release #82 (albatrossflavour)
- Enable security patching in debian again #81 (albatrossflavour)
- Merge pull request #79 from albatrossflavour/development #80 (albatrossflavour)
0.6.1 (2018-10-02)
Merged pull requests:
- Fix a couple of strings issues #79 (albatrossflavour)
- Merge pull request #77 from albatrossflavour/development #78 (albatrossflavour)
0.6.0 (2018-10-02)
Implemented enhancements:
- [enhancement] consider validating incoming ISO-8601 timestamps for validity #69
- [bug] invalid times parsed from blackouts file silently ignored #67
- [task] move data to /var/cache #60
- [enhancement][sponsored] fixup puppetstrings output and include REFERENCE.md #59
- [testing][sponsored] need mock version of `puppet fact upload` #58
- [feature][sponsored] package cleanup before #55
- [feature][sponsored] uninstall support #54
- stack trace when task run before setup complete #52
Fixed bugs:
- [bug] task fails to run on debian [assign geoff] #70
- [bug] script relies on /usr/local/bin/facter but it does not always exist #56
- Value type appears to be incorrect #48
Closed issues:
Merged pull requests:
- Pull to master #77 (albatrossflavour)
- Feature/data parser #76 (albatrossflavour)
- Feature/clean cache #75 (albatrossflavour)
- remove all reference to /opt/puppetlabs/facter/facts.d/os_patching.yaml #72 (GeoffWilliams)
- Add acceptance testing, esure=>absent, simplfify #71 (GeoffWilliams)
- Add reference.md #65 (albatrossflavour)
- Bugfix/strings #64 (albatrossflavour)
- Feature/move cache #63 (albatrossflavour)
- Bugfix/facter path #62 (albatrossflavour)
- Feature/rpm attribute fix #61 (albatrossflavour)
- Warn user when task is not setup yet #53 (GeoffWilliams)
- Merge pull request #50 from albatrossflavour/development #51 (albatrossflavour)
0.5.0 (2018-09-23)
Merged pull requests:
- Merge to master #50 (albatrossflavour)
- Change the way we handle reboot logic #49 (albatrossflavour)
- Resync to dev #47 (albatrossflavour)
0.4.1 (2018-09-16)
Merged pull requests:
- V0.4.1 #46 (albatrossflavour)
0.4.0 (2018-09-16)
Implemented enhancements:
- `packages_updated` does not show the kernel itself #29
Fixed bugs:
- Locked, Exiting - Need trap(s) if we have a lockfile (/usr/local/bin/os_patching_fact_generation.sh) #42
- When os_patching reports patches but there is not enough space to install them, it reports success #39
- When unreachable yumrepos are present, os_patching does not restart properly #36
- When no disk space is left, os_patching reports no patches to apply rather than an error #35
Merged pull requests:
- V0.4.0 release #45 (albatrossflavour)
- Add extra error checking for the patch execution #44 (albatrossflavour)
- Feature/facter error reporting #43 (albatrossflavour)
- regex ignores pkgs starting with uppercase or digits #41 (f3sty)
- Bugfix/needs restarting improvements #38 (albatrossflavour)
- Prod release #34 (albatrossflavour)
- Fix parsing of install/installed #33 (albatrossflavour)
- Fix issue with parsing of installed/install output from yum #30 (albatrossflavour)
- Sync back to dev #28 (albatrossflavour)
0.3.5 (2018-08-16)
Merged pull requests:
- Pre-release updates #27 (albatrossflavour)
- Release to master #26 (albatrossflavour)
- Merge timeout fixes #25 (albatrossflavour)
- Resync to development #24 (albatrossflavour)
0.3.4 (2018-08-10)
Merged pull requests:
- Pre release updates #23 (albatrossflavour)
- Missed a new variable #22 (albatrossflavour)
- Remove shell commands as much as possible #21 (albatrossflavour)
- Ooops #20 (albatrossflavour)
- Add cron job to refresh cache at reboot #19 (albatrossflavour)
0.3.3 (2018-08-09)
Merged pull requests:
- Ensure we honour reboot_override even if a reboot isn't required #18 (albatrossflavour)
- Secure the params a little more #17 (albatrossflavour)
0.3.2 (2018-08-09)
Merged pull requests:
- Updates to detect when reboots are required #16 (albatrossflavour)
0.3.1 (2018-08-09)
Merged pull requests:
- Resync to development #15 (albatrossflavour)
0.2.1 (2018-08-07)
Merged pull requests:
- Major documentation update #14 (albatrossflavour)
- rubocop #13 (albatrossflavour)
- Rubocop updates #12 (albatrossflavour)
- Rubocop is on thin ice! #11 (albatrossflavour)
- rubocop updates #10 (albatrossflavour)
- Push to production #9 (albatrossflavour)
- Start/end times added and history file fixed #8 (albatrossflavour)
- Major update for all areas #7 (albatrossflavour)
0.1.19 (2018-07-09)
Merged pull requests:
- Feature/smarter tasks #6 (albatrossflavour)
0.1.17 (2018-06-01)
0.1.16 (2018-05-29)
0.1.14 (2018-05-28)
0.1.13 (2018-05-28)
Merged pull requests:
- Merge pull request #1 from albatrossflavour/development #4 (albatrossflavour)
- clean up unused caches #3 (albatrossflavour)
- Updates #1 (albatrossflavour)
* This Change Log was automatically generated by github_changelog_generator
Dependencies
- puppetlabs-stdlib (>= 4.13.1 < 5.1.0)
- puppetlabs-translate (>= 1.0.0 < 2.0.0)
- puppet-cron (>= 1.3.1 < 5.0.0)
- puppetlabs-scheduled_task (>= 1.0.1 < 5.0.0)
- puppetlabs-cron_core (>= 1.0.1 < 5.1.0)
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.