

Manage Splunk Agents and Servers


9,410 downloads (latest version) · 4.6 quality score


Version information

  • 1.7.2 (latest)
  • 1.7.1
  • 1.7.0
  • 1.6.1
  • 1.6.0
  • 1.5.2
  • 1.4.3
  • 1.4.2
  • 1.4.1
  • 1.3.2
  • 1.3.1
  • 1.3.0
  • 1.2.2
  • 1.2.1
released Oct 24th 2016

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'huit-splunk', '1.7.2'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add huit-splunk
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install huit-splunk --version 1.7.2

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.



huit/splunk — version 1.7.2 Oct 24th 2016



Table of Contents

  1. Overview
  2. Module Description - What the module does and why it is useful
  3. Setup - The basics of getting started with splunk
  4. Usage - Configuration options and additional functionality
  5. Reference - An under-the-hood peek at what the module is doing and how
  6. Limitations - OS compatibility, etc.
  7. Development - Guide for contributing to the module


The Splunk module manages both Splunk servers and forwarders on RedHat, Debian, and Ubuntu.

Module Description

The Splunk module manages both Splunk servers and forwarders. It attempts to make educated and sane guesses at defaults, but requires some explicit configuration via Hiera or passed parameters. Supported operating systems are RedHat, Debian, and Ubuntu; supported Puppet versions are 2.7 and 3.x. The module aims to make installation and management of a Splunk cluster manageable through Puppet rather than a deployment server. In addition to managing Splunk installation and running configuration, it also provides the means to manage Splunk apps/TAs. The module also allows you to define inputs, outputs, forwarding, and configuration of deployment clients, should you decide to use this module to install Splunk agents and have a Splunk deployment server to manage the configurations.


What splunk affects

  • Installation of Splunk packages
  • Management of the service init script (/etc/init.d/splunk)
  • Management of configuration files under /opt/splunk
    • inputs.conf and outputs.conf
    • indexes.conf on indexers and search heads
  • listened-to ports for Heavy forwarders and indexers

Setup Requirements

If you're running a version of Puppet that does not have pluginsync enabled by default, enable it.

Use of Hiera for passing parameters is highly encouraged!
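As a sketch of what that might look like (the file path, node name, and hostnames below are placeholders; on Puppet 3.x these keys are resolved via automatic parameter lookup against the class parameters described later in this README):

```yaml
# hieradata/nodes/forwarder.example.com.yaml (hypothetical path)
splunk::type: 'uf'
splunk::port: '9997'
splunk::target_group:
  datacenter1: 'splunk-idx.example.com'
```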

Beginning with splunk

Inputs for the "lsof" and "ps" sourcetypes are disabled by default, as they are verbose and create a lot of events.

The default behavior is to install a Universal Forwarder and configure the agent to forward events to one or many indexers. The below example installs and configures a universal forwarder to send events to an indexer listening on port 9997. target_group takes the form of a hash, with the name keyword naming your indexer group and the IP as the value, so a more real-world example might be { 'datacenter1' => 'IP/DNS entry' }.

class { 'splunk':
  target_group => { 'name' => '' },
}

To change the "type" of installation, for example from a Universal Forwarder to a Light Weight Forwarder, you can pass the "type" parameter to the splunk class. It is worth noting that the module will attempt to clean up after itself: if your default node definition installs the Universal Forwarder and you then place the node into a role that includes the Light Weight Forwarder type, the Splunk module will attempt to uninstall and clean up the Universal Forwarder from /opt/splunkforwarder before installing into /opt/splunk. This typically has little effect, but does cause the newly installed agent to reindex any inputs that were assigned to both types.

class { 'splunk':
  type         => 'lwf',
  target_group => { 'name' => '' },
}

To install Splunk and configure Splunk TAs, you can use the splunk::ta:: defined types. In this example the Splunk Unix TA is installed from the Puppet master and deployed from the TA files directory within the Splunk module.

class { 'splunk':
  target_group => { 'name' => '' },
}

splunk::ta::files { 'Splunk_TA_nix': }


Classes and Defined Types

#### Class: splunk


Toggle to enable/disable management of the outputs.conf file. You may want to disable the module's management of outputs.conf if you use a deployment server to manage that file. Defaults to true.


Default index to send inputs to. Defaults to 'os'.


FQDN of the license host; passing this parameter turns the node into a license slave of the configured license server. For a license master, set licenseserver => 'self'.
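For example (the hostname below is a placeholder):

```puppet
# License slave pointing at an existing license master
class { 'splunk':
  licenseserver => 'license-master.example.com',
}
```

On the license master itself, set licenseserver => 'self' instead.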


Accepts a comma-separated list of contacts, then enables and exports Nagios service checks for monitoring.


Optional hash of outputs that can be used instead of, or in addition to, the default group (tcpout). Useful for forwarding data to third-party tools from indexers.

   output_hash => { 'syslog:example_group' => {
                      'server' => '' } }

Defaults to undef


Defaults to undef


Splunk default input port for indexers. Defaults to 9997. This sets both the monitored ports and the ports set in outputs.conf.


Define a proxy server for Splunk to use. Defaults to false.

purge defaults to false and only accepts a boolean as an argument. When set to true, purge removes all traces of Splunk without taking a backup.

purge => true


Hash used to define Splunk default groups and servers. The valid form is:

{ 'target group name' => 'server/ip' }

Install type. Defaults to the Universal Forwarder. Valid inputs are:

  • uf : Splunk Universal Forwarder
  • lwf : Splunk Light Weight Forwarder
  • hwf : Splunk Heavy Weight Forwarder
  • jobs : Splunk Jobs Server - Search + Forwarding
  • search : Splunk Search Head
  • indexer : Splunk Distributed Index Server

Install package version. Defaults to 'latest'.
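As a sketch, to pin the package rather than track 'latest' (the version string below is a placeholder, and the parameter name version is an assumption based on the description above, not confirmed by this README):

```puppet
class { 'splunk':
  type    => 'uf',
  version => '6.2.2',  # hypothetical pinned package version
}
```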

Splunk Universal Forwarder

To configure a Universal Forwarder that sends data to a server on port 50514 and uses the Unix TA defaults (as specified in this module) as inputs:

class { 'splunk':
  port         => '50514',
  target_group => { 'name' => '' },
}

splunk::ta::files { 'Splunk_TA_nix': }

The below example configures a Universal Forwarder to send data to an index server on port 50514, but does not specify any inputs:

class { 'splunk':
  port         => '50514',
  target_group => { 'name' => '' },
}

Splunk Light Weight Forwarder

This example configures a Light Weight Forwarder to forward data to an index server on port 50514, and sets the default index to "ns-os". In addition, we define the Splunk Unix TA as an app with its default inputs:

class { 'splunk':
  index        => 'ns-os',
  type         => 'lwf',
  port         => '50514',
  target_group => { 'name' => '' },
}

splunk::ta::files { 'Splunk_TA_nix': }

Splunk Indexer

This example creates a Splunk index server that forwards data to a third-party system over both syslog (UDP) and raw TCP. It configures inputs, props, transforms, and outputs, as well as installing the UNIX TA. Other options are left at their defaults or picked up via Hiera.

  class { 'splunk':
    type            => 'indexer',
    indexandforward => 'True',
    output_hash     => { 'syslog:qradar_group' =>
                           { 'server' => '' },
                         'tcpout:qradar_tcp' =>
                           { 'server'         => '',
                             'sendCookedData' => 'False' } },
  }

  class { 'splunk::inputs':
    input_hash => { 'splunktcp://50514' => {} },
  }

  class { 'splunk::props':
    input_hash => {
      'lsof'                 => { 'TRANSFORMS-null' => 'setnull' },
      'ps'                   => { 'TRANSFORMS-null' => 'setnull' },
      'linux_secure'         => { 'TRANSFORMS-nyc'  => 'send_to_qradar' },
      'WinEventLog:Security' => { 'TRANSFORMS-nyc'  => 'send_to_qradar_tcp' },
    },
  }

  class { 'splunk::transforms':
    input_hash => {
      'setnull'            => { 'REGEX'    => '.',
                                'DEST_KEY' => 'queue',
                                'FORMAT'   => 'nullQueue' },
      'send_to_qradar'     => { 'REGEX'    => '.',
                                'DEST_KEY' => '_SYSLOG_ROUTING',
                                'FORMAT'   => 'qradar_group' },
      'send_to_qradar_tcp' => { 'REGEX'    => '.',
                                'DEST_KEY' => '_TCP_ROUTING',
                                'FORMAT'   => 'qradar_tcp' },
    },
  }

  splunk::ta::files { 'Splunk_TA_nix': }

Configure Deployment Client

If you have a Splunk Deployment Server set up, you can bind the Splunk instance running on your node to it with the deploymentclient sub-class. Add this to your node.pp or site/. In the below example we manage a Light Weight Forwarder with a deployment server on port 8089. Please note: some basic aspects of the client remain under Puppet control:

  • Version
  • Admin PW
  • Type
class { 'splunk':
  type => 'lwf',
}

class { 'splunk::deploymentclient':
  targeturi => '',
}


#### Class: splunk::inputs

This is an optional sub-class into which you can pass a nested hash to create custom inputs for Heavy Forwarders, agents, or indexers.

By default the file is created in $splunkhome/etc/system/local.

class { 'splunk::inputs':
  input_hash => { 'script://./bin/' => {
                    disabled   => 'true',
                    index      => 'os',
                    interval   => '3600',
                    source     => 'Unix:SSHDConfig',
                    sourcetype => 'Unix:SSHDConfig' },
                  'script://./bin/sshdChecker.sh2' => {
                    disabled   => 'true2',
                    index      => 'os2',
                    interval   => '36002',
                    source     => 'Unix:SSHDConfig2',
                    sourcetype => 'Unix:SSHDConfig2' } },
}


#### Class: splunk::props

This is an optional sub-class into which you can pass a nested hash to create a custom props.conf.

By default the file is created in $splunkhome/etc/system/local.
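A minimal sketch, reusing the setnull pattern from the indexer example in this README:

```puppet
class { 'splunk::props':
  input_hash => { 'lsof' => { 'TRANSFORMS-null' => 'setnull' } },
}
```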


#### Class: splunk::transforms

This is an optional sub-class into which you can pass a nested hash to create custom transforms.

By default the file is created in $splunkhome/etc/system/local.
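A minimal sketch, again mirroring the indexer example in this README:

```puppet
class { 'splunk::transforms':
  input_hash => { 'setnull' => { 'REGEX'    => '.',
                                 'DEST_KEY' => 'queue',
                                 'FORMAT'   => 'nullQueue' } },
}
```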


#### Defined type: splunk::ulimit

splunk::ulimit takes two parameters: the name of the limit to change and the value to set for it (e.g. the number of open files to allow).

[name] Name of the limit to change (instance name).

[value] The value to set for this limit.

  splunk::ulimit { 'nofile':
    value => 16384,
  }


#### Class: splunk::limits

This is an optional sub-class into which you can pass a nested hash to create custom limits for Heavy Forwarders, agents, or indexers.

By default the file is created in $splunkhome/etc/system/local.

class { 'splunk::limits':
  limit_hash => { 'search' => {
                    max_searches_per_cpu => '1' },
                  'thruput' => {
                    maxKBps => '10240' } },
}


This module provides the following public classes and defined types, all described above:

  • Class splunk — core installation and configuration
  • Class splunk::inputs — custom inputs.conf entries
  • Class splunk::props — custom props.conf entries
  • Class splunk::transforms — custom transforms.conf entries
  • Class splunk::limits — custom limits.conf entries
  • Class splunk::deploymentclient — deployment server binding
  • Defined type splunk::ulimit — system limit management
  • Defined type splunk::ta::files — TA deployment from the Puppet master


### RHEL/CentOS 5

RHEL/CentOS 5 is fully supported and functional.

### RHEL/CentOS 6

RHEL/CentOS 6 is fully supported and functional.

### RHEL/CentOS 7

RHEL/CentOS 7 support has not been added; pull requests are welcome.



To run the spec tests:

gem install bundler
bundle install
bundle exec rake spec

To run beaker tests:

bundle exec rake beaker