Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2017.1.x, 2016.5.x, 2016.4.x
- Puppet >= 4.7.0 < 7.0.0
Tasks:
- file_interface_collection_query
- file_ldap_collection_query
- file_ldap_create
- file_ldap_delete
- file_tree_quota_collection_query
- appliance_modify
- file_ldap_download_certificate
- and 377 more
Plans:
- capacity_volumes
- create_assign_protection_policy
- create_multiple_volumes
- create_volume
- create_volume_attach_host_with_fs
- delete_multiple_volumes
- find_empty_volume_groups
- and 2 more
Start using this module
Add this module to your Puppetfile:
mod 'dellemc-powerstore', '0.8.1'
Learn more about managing modules with a Puppetfile.
Puppet module for Dell EMC PowerStore
Overview
The dellemc-powerstore module manages resources on Dell EMC PowerStore.
Dell EMC PowerStore is a next-generation midrange data storage solution targeted at customers who are looking for value, flexibility, and simplicity. Dell EMC PowerStore provides our customers with data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads.
The dellemc-powerstore Puppet module allows you to configure and deploy Dell EMC PowerStore using Puppet Bolt and Puppet Enterprise. To that end, it offers resource types, tasks, and plans.
License
Setup
Requirements
- Puppet Bolt 2.29.0 or later, or
- Puppet Enterprise 2019.8 or later
Installation for use with Bolt
- Create a Bolt project with a name of your choosing, for example:
  mkdir pws
  cd pws
  bolt project init --modules dellemc-powerstore
Your new Bolt project is ready to go. To list available plans, run:
bolt plan show
To list all Bolt tasks related to the volume resource, run:
bolt task show --filter volume
See Bolt documentation for more information on Puppet Bolt.
- Create an inventory.yaml in your project directory, like so:
  version: 2
  targets:
    - name: my_array
      uri: my.powerstore.host
      config:
        transport: remote
        remote:
          host: my.powerstore.host
          user: admin
          password: My$ecret!
          remote-transport: powerstore
Installation for use with Puppet Enterprise
Installation of this module needs to be done using PE Code Manager. To that end,
- Add the following to the Puppetfile:
  mod 'dellemc-powerstore', :latest
  mod 'puppet-format', :latest
- Perform a code deploy using the Code Manager webhook, CD4PE, or the command puppet code deploy on the Primary PE Server.
Note that it is often recommended to pin the installed modules to specific versions in the Puppetfile. For the purposes of this document, we use :latest, which fetches the latest available module version each time a Code Manager code deploy is done in PE.
Usage with Bolt
Using Tasks
Introduction to Dell EMC PowerStore tasks
Every Dell EMC PowerStore API endpoint has a corresponding task. For example, for manipulating Dell EMC PowerStore volumes, the following tasks are available:
- volume_collection_query
- volume_instance_query
- volume_attach
- volume_clone
- volume_create
- volume_delete
- volume_detach
- volume_modify
- volume_refresh
- volume_restore
- volume_snapshot
Task usage is displayed by running bolt task show, for example:
bolt task show powerstore::volume_attach
powerstore::volume_attach - Attach a volume to a host or host group.
USAGE:
bolt task run --targets <node-name> powerstore::volume_attach host_group_id=<value> host_id=<value> id=<value> logical_unit_number=<value>
PARAMETERS:
- host_group_id: Optional[String]
Unique identifier of the host group to be attached to the volume. Only one of host_id or host_group_id can be supplied.
- host_id: Optional[String]
Unique identifier of the host to be attached to the volume. Only one of host_id or host_group_id can be supplied.
- id: String
Unique identifier of volume to attach.
- logical_unit_number: Optional[Integer[0,16383]]
Logical unit number for the host volume access.
The --targets parameter (abbreviated -t) is the name of the device as configured in the inventory file (see above).
Every parameter is displayed along with its data type. Optional parameters have a type starting with the word Optional. So in the above example, the task accepts 4 parameters:
- host_group_id: optional String parameter
- host_id: optional String parameter
- id: required String parameter
- logical_unit_number: optional parameter, an Integer between 0 and 16383
Tasks live in the tasks/ folder of the module repository.
Examples
- Get a list of volumes:
  bolt task run powerstore::volume_collection_query -t my_array
- Get details of one volume:
  bolt task run powerstore::volume_instance_query id=<volume_id> -t my_array
- Create a volume:
  bolt task run powerstore::volume_create name="small_volume" size=1048576 description="Small Volume" -t my_array
Using Plans
Plans are higher-level workflows that can leverage logic, tasks, and commands to perform orchestrated operations on managed devices. Plans can be written in YAML or in the Puppet language (see the documentation on writing plans). Example dellemc-powerstore plans can be found in the plans directory of this repository and are documented here.
To display usage information for a plan, run bolt plan show, for example:
> bolt plan show powerstore::capacity_volumes
powerstore::capacity_volumes - list volumes with more than given capacity
USAGE:
bolt plan run powerstore::capacity_volumes threshold=<value> targets=<value>
PARAMETERS:
- threshold: Variant[Numeric,String]
Volume capacity needed (in bytes or MB/GB/TB)
- targets: TargetSpec
Example of running the plan:
> bolt plan run powerstore::capacity_volumes -t my_array threshold=220G
Starting: plan powerstore::capacity_volumes
Starting: task powerstore::volume_collection_query on my_array
Finished: task powerstore::volume_collection_query with 0 failures in 1.64 sec
+----------------------+-----------------+------------+
| List of volumes with capacity > 220G |
+----------------------+-----------------+------------+
| volume name | capacity | MB |
+----------------------+-----------------+------------+
| Volume1 | 43980465111040 | 43.98 TB |
| my_large_volume | 595926712320 | 595.93 GB |
| my_terabyte_volume | 1099511627776 | 1.10 TB |
+----------------------+-----------------+------------+
Finished: plan powerstore::capacity_volumes in 1.94 sec
Plan completed successfully with no result
Using Idempotent Puppet Resource Types
Tasks are an imperative way to query or manipulate state. In addition, the dellemc-powerstore module offers Puppet resource types, which provide a declarative and idempotent way of managing the device's desired state.
Example of managing a volume called my_volume and ensuring it is created if it does not exist:
- Example using a YAML-language plan:
  resources:
    - powerstore_volume: my_volume
      parameters:
        size: 26843545600
        description: My 25G Volume
        ensure: present
- Example using a Puppet-language plan:
  powerstore_volume { 'my_volume':
    ensure      => present,
    size        => 26843545600,
    description => 'My 25G Volume',
  }
See the create_volume.pp and create_volume_yaml.yaml example plans showing a parametrized version of the above.
See the reference documentation for a list of all available Resource types.
Usage with Puppet Enterprise
After the module and its dependencies have been deployed inside PE (see Installation for use with Puppet Enterprise), its tasks and plans should become usable in the PE Console and through the PE Orchestrator APIs.
You can onboard devices using the Nodes | Add | Add Network Device menu option. Enter the device's credentials (see the inventory.yaml above for an example of the credential parameters). After the device has been created, you can manage it as a standard PE node, using Puppet manifests with resources and running tasks and plans against the device.
Reference
Please see REFERENCE for detailed information on available resource types, tasks and plans.
Limitations
The module has been tested on CentOS 7 and Windows 10 only but should work on any platform Bolt supports.
Development
Installing PDK
To run syntax checks and unit and acceptance tests, you first need to install the Puppet Development Kit (PDK).
After installing, cd to the module directory to run the various commands explained below.
Running syntax checks
> pdk validate
pdk (INFO): Using Ruby 2.5.8
pdk (INFO): Using Puppet 6.17.0
pdk (INFO): Running all available validators...
┌ [✔] Running metadata validators ...
├── [✔] Checking metadata syntax (metadata.json tasks/*.json).
└── [✔] Checking module metadata style (metadata.json).
┌ [✔] Running puppet validators ...
└── [✔] Checking Puppet manifest style (**/*.pp).
┌ [✔] Running ruby validators ...
└── [✔] Checking Ruby code style (**/**.rb).
┌ [✔] Running tasks validators ...
├── [✔] Checking task names (tasks/**/*).
└── [✔] Checking task metadata style (tasks/*.json).
┌ [✔] Running yaml validators ...
└── [✔] Checking YAML syntax (**/*.yaml **/*.yml).
Running unit tests
> pdk test unit
You should expect to see something like this - the most important thing is that you should have 0 failures:
pdk (INFO): Using Ruby 2.5.8
pdk (INFO): Using Puppet 6.17.0
[✔] Preparing to run the unit tests.
......................................................................................................................
Finished in 2.25 seconds (files took 5.17 seconds to load)
118 examples, 0 failures
Setting up the prism mock API server
The current acceptance test suite assumes that the prism API server is up and running. prism is an open-source tool that reads an OpenAPI specification and generates a mock API server on the fly, which can then validate incoming requests against the OpenAPI schemas and serve compliant responses with example data.
Although it is possible, in theory, to run acceptance tests against a real device, that is much harder to automate because the ids of existing resources are unknown.
- Install prism by following the documentation
- Make sure you have a copy of the Dell EMC PowerStore OpenAPI json file; let's call it powerstore.json
- Remove all cyclical dependencies from the OpenAPI json file, since prism does not support cycles inside OpenAPI specifications, producing the file powerstore-nocycles.json
- Start the mock API server:
  prism mock powerstore-nocycles.json
You will see something like:
[5:43:55 PM] › [CLI] … awaiting Starting Prism…
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/appliance
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/appliance/vel
[5:43:56 PM] › [CLI] ℹ info PATCH http://127.0.0.1:4010/appliance/maiores
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/node
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/node/ut
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/network
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/network/dolor
[5:43:56 PM] › [CLI] ℹ info PATCH http://127.0.0.1:4010/network/placeat
[5:43:56 PM] › [CLI] ℹ info POST http://127.0.0.1:4010/network/adipisci/replace
[5:43:56 PM] › [CLI] ℹ info POST http://127.0.0.1:4010/network/nam/scale
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/ip_pool_address
[5:43:56 PM] › [CLI] ℹ info GET http://127.0.0.1:4010/ip_pool_address/pariatur
...
The prism mock API server is now up and running on the default port 4010.
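Removing the cyclical dependencies by hand can be tedious. As a rough illustration (this script is not part of the module, and find_ref_cycles is a hypothetical helper), a short Python sketch can locate cyclic $ref chains between component schemas, so you know which references to break before feeding the file to prism:

```python
def find_ref_cycles(spec):
    """Return cyclic $ref chains between schemas in an OpenAPI spec (sketch).

    Only local refs of the form '#/components/schemas/Name' are followed.
    The naive DFS below can revisit shared subtrees, so it is suitable for
    inspection, not for very large specs.
    """
    schemas = spec.get("components", {}).get("schemas", {})

    def refs_of(node):
        # Yield schema names referenced anywhere inside this JSON node.
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "$ref" and isinstance(value, str) \
                        and value.startswith("#/components/schemas/"):
                    yield value.rsplit("/", 1)[-1]
                else:
                    yield from refs_of(value)
        elif isinstance(node, list):
            for item in node:
                yield from refs_of(item)

    cycles, stack = [], []

    def dfs(name):
        if name in stack:
            # Found a cycle: record the chain from the first occurrence.
            cycles.append(stack[stack.index(name):] + [name])
            return
        stack.append(name)
        for target in refs_of(schemas.get(name, {})):
            dfs(target)
        stack.pop()

    for name in schemas:
        dfs(name)
    return cycles
```

Run it on the parsed powerstore.json (for example via json.load) and break each reported chain by replacing one $ref in it with an inline placeholder schema.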
Running type/provider acceptance tests
> MOCK_ACCEPTANCE=true pdk bundle rspec spec/acceptance
The test output will be something like this:
pdk (INFO): Using Ruby 2.5.8
pdk (INFO): Using Puppet 6.17.0
Running tests against this machine !
Run options: exclude {:update=>true, :bolt=>true}
powerstore_email_notify_destination
get powerstore_email_notify_destination
create powerstore_email_notify_destination
delete powerstore_email_notify_destination
and the prism log will show something like this:
[5:47:39 PM] › [HTTP SERVER] get /email_notify_destination ℹ info Request received
[5:47:39 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:39 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:39 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:39 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 200
[5:47:39 PM] › [HTTP SERVER] get /appliance ℹ info Request received
[5:47:39 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:39 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:39 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:39 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 200
[5:47:39 PM] › [HTTP SERVER] post /email_notify_destination ℹ info Request received
[5:47:39 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:39 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:39 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:39 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 201
[5:47:50 PM] › [HTTP SERVER] get /appliance ℹ info Request received
[5:47:50 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:50 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:50 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:50 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 200
[5:47:50 PM] › [HTTP SERVER] get /email_notify_destination ℹ info Request received
[5:47:50 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:50 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:50 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:50 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 200
[5:47:50 PM] › [HTTP SERVER] get /appliance ℹ info Request received
[5:47:50 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:50 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:50 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:50 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 200
[5:47:50 PM] › [HTTP SERVER] delete /email_notify_destination/string ℹ info Request received
[5:47:50 PM] › [NEGOTIATOR] ℹ info Request contains an accept header: */*
[5:47:50 PM] › [VALIDATOR] ✔ success The request passed the validation rules. Looking for the best response
[5:47:50 PM] › [NEGOTIATOR] ✔ success Found a compatible content for */*
[5:47:50 PM] › [NEGOTIATOR] ✔ success Responding with the requested status code 204
The get /appliance request is done for authentication purposes.
Running task acceptance tests
To execute all available acceptance tests for tasks, run the following:
> MOCK_ACCEPTANCE=true pdk bundle exec rspec spec/task
pdk (INFO): Using Ruby 2.5.8
pdk (INFO): Using Puppet 6.17.0
Run options: exclude {:update=>true, :bolt=>true}
powerstore_email_notify_destination
performs email_notify_destination_collection_query
performs email_notify_destination_instance_query
performs email_notify_destination_delete
performs email_notify_destination_create
performs email_notify_destination_test
...
To run a subset of task tests, for example volume-related, do:
> MOCK_ACCEPTANCE=true pdk bundle exec rspec spec/task -e volume
Generating REFERENCE.md
To (re-)generate the REFERENCE.md file which documents the available types, tasks, functions and plans, run:
pdk bundle exec rake strings:generate:reference
Contact
Dell EMC does not provide support for any source code modifications. For any Dell EMC PowerStore issues, questions or feedback, please contact support https://www.dell.com/support/.
For general help with using Puppet and this module, please see the #puppet channel in https://puppetcommunity.slack.com/.
For code contributions, you can create pull requests at https://github.com/puppetlabs/dellemc-powerstore.
If you would like to discuss large scale deployments or have other questions, feel free to email us at dellemc-puppet-integrations@puppet.com.
Release Notes
See CHANGELOG
Reference
Table of Contents
Resource types
- powerstore_email_notify_destination: Use these resource types to configure outgoing SMTP and email notifications.
- powerstore_file_dns: Use these resources to configure the Domain Name System (DNS) settings for a NAS server. One DNS settings object may be configured per NAS se…
- powerstore_file_ftp: Use these resources to configure one File Transfer Protocol (FTP) server per NAS server. One FTP server can be configured per NAS server to h…
- powerstore_file_interface: Information about File network interfaces in the storage system. These interfaces control access to Windows (CIFS) and UNIX/Linux (NFS) file…
- powerstore_file_interface_route: Use these resources to manage static IP routes, including creating, modifying, and deleting these routes. A route determines where to send a p…
- powerstore_file_kerberos: Use these resources to manage the Kerberos service for a NAS server. One Kerberos service object may be configured per NAS Server. Kerberos i…
- powerstore_file_ldap: Use these resources to manage the Lightweight Directory Access Protocol (LDAP) settings for the NAS Server. You can configure one LDAP settin…
- powerstore_file_ndmp: The Network Data Management Protocol (NDMP) provides a standard for backing up file servers on a network. NDMP allows centralized application…
- powerstore_file_nis: Use these resources to manage the Network Information Service (NIS) settings object for a NAS Server. One NIS settings object may be configur…
- powerstore_file_system: Manage NAS file systems.
- powerstore_file_tree_quota: Tree quota settings in the storage system. A tree quota instance represents a quota limit applied to a specific directory tree in a file syst…
- powerstore_file_virus_checker: Use these resource types to manage the virus checker service of a NAS server. A virus checker instance is created each time the anti-virus se…
- powerstore_host: Manage hosts that access the cluster.
- powerstore_host_group: Manage host groups. A host group is a mechanism to provision hosts and volumes to be consistent across the Cyclone cluster. Operations that c…
- powerstore_import_host_system: Use these resource types to manage import host systems. Import host enables communication with multipathing software on the host system to pe…
- powerstore_import_session: Use the import_session resource type to initiate and manage the migration of volumes and consistency groups from a heritage Dell EMC storage…
- powerstore_local_user: Use this resource type to manage local user accounts.
- powerstore_migration_session: Manage migration sessions.
- powerstore_nas_server: Use these resource types to manage NAS servers. NAS servers are software components used to transfer data and provide the connection ports fo…
- powerstore_nfs_export: NFS Exports use the NFS protocol to provide an access point for configured Linux/Unix hosts or IP subnets to access file_systems or file_snap…
- powerstore_physical_switch: Manage physical switches settings for the cluster.
- powerstore_policy: Use this resource type to manage protection policies and to view information about performance policies. Note: Performance policies are predef…
- powerstore_remote_system: Information about remote storage systems that connect to the local PowerStore system. The system uses the configuration to access and communi…
- powerstore_replication_rule: Use this resource type to manage the replication rules that are used in protection policies.
- powerstore_smb_share: SMB Shares use the SMB protocol to provide an access point for configured Windows hosts to access file system storage. The system uses Active…
- powerstore_snapshot_rule: Use this resource type to manage snapshot rules that are used in protection policies.
- powerstore_storage_container: Manage storage containers. A storage container is a logical grouping of related storage objects in a cluster. A storage container corresponds…
- powerstore_vcenter: Use this resource type to manage vCenter instances. Registered vCenter enables discovering of virtual machines, managing virtual machine snap…
- powerstore_volume: Manage volumes, including snapshots and clones of volumes.
- powerstore_volume_group: Manage volume_groups. A volume_group is a group of related volumes treated as a single unit. It can optionally be write-order consistent.
Functions
- format_bytes: Converts the bytes argument into a human-readable form, for example 1000000000 bytes becomes 1GB.
- to_bytes: Converts the argument into bytes, for example 4kB becomes 4096. Takes a single string value as an argument.
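For illustration, the two conversions can be sketched in standalone Python (the module implements them as Puppet functions; this sketch assumes a binary 1024 multiplier for to_bytes, matching the 4kB example, and decimal units for format_bytes, matching the capacity_volumes output above):

```python
import re

# Exponents for the binary multiplier used by to_bytes: '4kB' -> 4 * 1024.
_BINARY_EXP = {"": 0, "k": 1, "m": 2, "g": 3, "t": 4}

def to_bytes(value):
    """Parse a size string such as '4kB' or '220G' into an integer byte count."""
    match = re.fullmatch(r"([\d.]+)\s*([kKmMgGtT]?)B?", str(value).strip())
    if not match:
        raise ValueError(f"unparseable size: {value!r}")
    number, unit = float(match.group(1)), match.group(2).lower()
    return int(number * 1024 ** _BINARY_EXP[unit])

def format_bytes(n):
    """Render a byte count with decimal units, e.g. 1000000000 -> '1.00 GB'."""
    for unit, factor in (("TB", 10**12), ("GB", 10**9), ("MB", 10**6), ("kB", 10**3)):
        if n >= factor:
            return f"{n / factor:.2f} {unit}"
    return f"{n} bytes"
```

Note the asymmetry in the documented examples (binary parsing, decimal formatting); round-tripping a value through both functions therefore does not return the original string.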
Tasks
alert_collection_query
: Query all alerts.alert_instance_query
: Query a specific alert.alert_modify
: Modify an alert. acknowledged_severity parameter, if included, will cause the request to fail when the alert's severity is higher than the acappliance_collection_query
: Query the appliances in a cluster.appliance_forecast
: Forecast capacity usage for an appliance.appliance_instance_query
: Query a specific appliance in a cluster.appliance_modify
: Modify an appliance's name.appliance_time_to_full
: Returns information about when an appliance is forecast to reach 100% capacity usage.audit_event_collection_query
: Query audit log entries.bond_collection_query
: Query bond configurations.bond_instance_query
: Query a specific bond configuration.chap_config_collection_query
: Query the list of (one) CHAP configuration settings objects. This resource type collection query does not support filtering, sorting or paginchap_config_instance_query
: Query the CHAP configuration settings object.chap_config_modify
: Modify the CHAP configuration settings object. To enable either Single or Mutual CHAP modes, the username and password must already be set, ocluster_collection_query
: Get details about the cluster. This resource type collection query does not support filtering, sorting or paginationcluster_forecast
: Forecast capacity usage for the cluster.cluster_instance_query
: Get details about the cluster. This does not support the following standard query functionality: property selection, and nested query embeddicluster_modifyclusterproperties
: Modify cluster properties, such as physical MTU.cluster_time_to_full
: Returns information about when the cluster is forecast to reach 100% capacity usage.discovered_initiator_collection_query
: Returns connected initiators that are not associated with a host. This resource type collection query does not support filtering, sorting ordns_collection_query
: Query DNS settings for a cluster.dns_instance_query
: Query a specific DNS setting.dns_modify
: Modify a DNS setting.email_notify_destination_collection_query
: Query all email notification destinations.email_notify_destination_create
: Add an email address to receive notifications.email_notify_destination_delete
: Delete an email notification destination.email_notify_destination_instance_query
: Query a specific email notification destination.email_notify_destination_modify
: Modify an email notification destination.email_notify_destination_test
: Send a test email to an email address.eth_port_collection_query
: Get Ethernet front-end port configuration for all cluster nodes.eth_port_instance_query
: Get Ethernet front-end port configuration by instance identifier.eth_port_modify
: Change the properties of the front-end port. Note that setting the port's requested speed may not cause the port speed to change immediately.event_eventsummary
: Get event by Event Id.event_getevents
: Returns all events in the database.fc_port_collection_query
: Query the FC front-end port configurations for all cluster nodes.fc_port_instance_query
: Query a specific FC front-end port configuration.fc_port_modify
: Modify an FC front-end port's speed. Setting the port's requested speed might not cause the port speed to change immediately. In cases wherefile_dns_collection_query
: Query of the DNS settings of NAS Servers.file_dns_create
: Create a new DNS Server configuration for a NAS Server. Only one object can be created per NAS Server.file_dns_delete
: Delete DNS settings of a NAS Server.file_dns_instance_query
: Query a specific DNS settings object of a NAS Server.file_dns_modify
: Modify the DNS settings of a NAS Server.file_ftp_collection_query
: Query FTP/SFTP instances.file_ftp_create
: Create an FTP/SFTP server.file_ftp_delete
: Delete an FTP/SFTP Server.file_ftp_instance_query
: Query a specific FTP/SFTP server for its settings.file_ftp_modify
: Modify an FTP/SFTP server settings.file_interface_collection_query
: Query file interfaces.file_interface_create
: Create a file interface.file_interface_delete
: Delete a file interface.file_interface_instance_query
file_interface_modify
: Modify the settings of a file interface.file_interface_route_collection_query
: Query file interface routes.file_interface_route_create
: Create and configure a new file interface route.There are 3 route types Subnet, Default, and Host.* The default route establishes a static rofile_interface_route_delete
: Delete file interface route.file_interface_route_instance_query
: Query a specific file interface route for details.file_interface_route_modify
: Modify file interface route settings.file_kerberos_collection_query
: Query of the Kerberos service settings of NAS Servers.file_kerberos_create
: Create a Kerberos configuration. The operation will fail if a Kerberos configuration already exists.file_kerberos_delete
: Delete Kerberos configuration of a NAS Server.file_kerberos_download_keytab
: Download previously uploaded keytab file for secure NFS service.file_kerberos_instance_query
: Query a specific Kerberos service settings of a NAS Server.file_kerberos_modify
: Modify the Kerberos service settings of a NAS Server.file_kerberos_upload_keytab
: A keytab file is required for secure NFS service with a Linux or Unix Kerberos Key Distribution Center (KDC). The keytab file can be generatefile_ldap_collection_query
: List LDAP Service instances.file_ldap_create
: Create an LDAP service on a NAS Server. Only one LDAP Service object can be created per NAS Server.file_ldap_delete
: Delete a NAS Server's LDAP settings.file_ldap_download_certificate
file_ldap_download_config
file_ldap_instance_query
: Query a specific NAS Server's LDAP settings object.file_ldap_modify
: Modify a NAS Server's LDAP settings object.file_ldap_upload_certificate
file_ldap_upload_config
file_ndmp_collection_query
: List configured NDMP service instances.file_ndmp_create
: Add an NDMP service configuration to a NAS server. Only one NDMP service object can be configured per NAS server.file_ndmp_delete
: Delete an NDMP service configuration instance of a NAS Server.file_ndmp_instance_query
: Query an NDMP service configuration instance.file_ndmp_modify
: Modify an NDMP service configuration instance.file_nis_collection_query
: Query the NIS settings of NAS Servers.file_nis_create
: Create a new NIS Service on a NAS Server. Only one NIS Setting object can be created per NAS Server.file_nis_delete
: Delete NIS settings of a NAS Server.file_nis_instance_query
: Query a specific NIS settings object of a NAS Server.file_nis_modify
: Modify the NIS settings of a NAS Server.file_system_clone
: Create a clone of a file system.file_system_collection_query
: List file systems.file_system_create
: Create a file system.file_system_delete
: Delete a file system.file_system_instance_query
: Query a specific file system.file_system_modify
: Modify a file system.file_system_refresh
: Refresh a snapshot of a file system. The content of the snapshot is replaced with the current content of the parent file system.file_system_refresh_quota
: Refresh the actual content of tree and user quotas objects.file_system_restore
: Restore from a snapshot of a file system.file_system_snapshot
: Create a snapshot of a file system.file_tree_quota_collection_query
: List tree quota instances.file_tree_quota_create
: Create a tree quota instance.file_tree_quota_delete
: Delete a tree quota instance.file_tree_quota_instance_query
: Query a tree quota instance.file_tree_quota_modify
: Modify a tree quota instance.file_tree_quota_refresh
: Refresh the cache with the actual value of the tree quota.file_user_quota_collection_query
: List user quota instances.file_user_quota_create
: Create a user quota instance.file_user_quota_instance_query
: Query a user quota instance.file_user_quota_modify
: Modify a user quota instance.file_user_quota_refresh
: Refresh the cache with the actual value of the user quota.file_virus_checker_collection_query
: Query all virus checker settings of the NAS Servers.file_virus_checker_create
: Add a new virus checker setting to a NAS Server. Only one instance can be created per NAS Server.Workflow to enable the virus checker settingfile_virus_checker_delete
: Delete virus checker settings of a NAS Server.file_virus_checker_download_config
: Download a virus checker configuration file containing the template or the actual (if already uploaded) virus checker configuration settings.file_virus_checker_instance_query
: Query a specific virus checker setting of a NAS Server.file_virus_checker_modify
: Modify the virus checker settings of a NAS Server.file_virus_checker_upload_config
: Upload a virus checker configuration file containing the virus checker configuration settings.hardware_collection_query
: List hardware components.hardware_drive_repurpose
: A drive that has been used in a different appliance will be locked for use only in that appliance. This operation will allow a locked drive thardware_instance_query
: Get a specific hardware component instance.hardware_modify
: Modify a hardware instance.
host_attach
: Attach host to volume.
host_collection_query
: List host information.
host_create
: Add a host.
host_delete
: Delete a host. Delete fails if host is attached to a volume or consistency group.
host_detach
: Detach host from volume.
host_group_attach
: Attach host group to volume.
host_group_collection_query
: List host groups.
host_group_create
: Create a host group.
host_group_delete
: Delete a host group. Delete fails if host group is attached to a volume.
host_group_detach
: Detach host group from volume.
host_group_instance_query
: Get details about a specific host group.
host_group_modify
: Operations that can be performed are modify name, remove host(s) from host group, add host(s) to host group. Modify request will only support…
host_instance_query
: Get details about a specific host by id.
host_modify
: Operations that can be performed are modify name, modify description, remove initiator(s) from host, add initiator(s) to host, update existing…
host_virtual_volume_mapping_collection_query
: Query associations between a virtual volume and the host(s) it is attached to.
host_virtual_volume_mapping_instance_query
: Query a specific virtual volume mapping.
host_volume_mapping_collection_query
: Query associations between a volume and the host or host group it is attached to.
host_volume_mapping_instance_query
: Query a specific host volume mapping.
import_host_initiator_collection_query
: Query import host initiators.
import_host_initiator_instance_query
: Query a specific import host initiator instance.
import_host_system_collection_query
: Query import host systems that are attached to volumes.
import_host_system_create
: Add an import host system so that it can be mapped to a volume. Before mapping an import host system, ensure that a host agent is installed.
import_host_system_delete
: Delete an import host system. You cannot delete an import host system if there are import sessions active in the system referencing the impor…
import_host_system_instance_query
: Query a specific import host system instance.
import_host_system_refresh
: Refresh the details of a specific import host system. Use this operation when there is a change to the import host or import host volumes.
import_host_volume_collection_query
: Query import host volumes.
import_host_volume_instance_query
: Query a specific import host volume instance.
import_psgroup_collection_query
: Query PS Group storage arrays.
import_psgroup_discover
: Discover the importable volumes and snapshot schedules in the PS Group.
import_psgroup_instance_query
: Query a specific PS Group storage array.
import_psgroup_volume_collection_query
: Query PS Group volumes.
import_psgroup_volume_import_snapshot_schedules
: Return the snapshot schedules for a PS Group volume.
import_psgroup_volume_instance_query
: Query a specific PS Group volume.
import_session_cancel
: Cancel an active import session. Cancel is allowed when the import is in a Scheduled, Queued, Copy_In_Progress, or Ready_For_Cutover state. A…
import_session_cleanup
: Clean up an import session that is in Cleanup_Required state and requires user intervention to revert the source volume to its pre-import sta…
import_session_collection_query
: Query import sessions.
import_session_create
: Create a new import session. The source storage system and hosts that access the volumes or consistency groups must be added prior to creatin…
import_session_cutover
: Commit an import session that is in a Ready_For_Cutover state. When the import session is created with the automatic_cutover attribute set to…
import_session_delete
: Delete an import session that is in a Completed, Failed, or Cancelled state. Delete removes the historical record of the import. To stop acti…
import_session_instance_query
: Query a specific session.
import_session_modify
: Modify the scheduled date and time of the specified import session.
import_session_pause
: Pauses an ongoing import session. When this occurs, the background data copy stops, but IO to the source still occurs. Pause is only supporte…
import_session_resume
: Resumes the paused import session. The background data copy continues from where it was stopped. Resume is only applicable when the import in…
import_storage_center_collection_query
: Query SC arrays.
import_storage_center_consistency_group_collection_query
: Query SC consistency groups.
import_storage_center_consistency_group_import_snapshot_profiles
: Return the snapshot profiles of an SC consistency group.
import_storage_center_consistency_group_instance_query
: Query a specific SC consistency group.
import_storage_center_discover
: Discover the importable volumes and snapshot profiles in the SC array.
import_storage_center_instance_query
: Query a specific SC array.
import_storage_center_volume_collection_query
: Query SC volumes.
import_storage_center_volume_import_snapshot_profiles
: Return the snapshot profiles of an SC volume.
import_storage_center_volume_instance_query
: Query a specific SC volume.
import_unity_collection_query
: Query Unity storage systems.
import_unity_consistency_group_collection_query
: Query Unity consistency groups.
import_unity_consistency_group_import_snapshot_schedules
: Return the snapshot schedules associated with the specified Unity consistency group.
import_unity_consistency_group_instance_query
: Query a specific Unity consistency group.
import_unity_discover
: Discover the importable volumes and consistency groups in the Unity storage system.
import_unity_instance_query
: Query a specific Unity storage system.
import_unity_volume_collection_query
: Query Unity volumes.
import_unity_volume_import_snapshot_schedules
: Return the snapshot schedules associated with the specified Unity volume.
import_unity_volume_instance_query
: Query a specific Unity volume.
import_vnx_array_collection_query
: Query VNX storage systems.
import_vnx_array_discover
: Discover the importable volumes and consistency groups in a VNX storage system.
import_vnx_array_instance_query
: Query a specific VNX storage system.
import_vnx_consistency_group_collection_query
: Query VNX consistency groups.
import_vnx_consistency_group_instance_query
: Query a specific VNX consistency group.
import_vnx_volume_collection_query
: Query VNX volumes.
import_vnx_volume_instance_query
: Query a specific VNX volume.
ip_pool_address_collection_query
: Query configured IP addresses.
ip_pool_address_instance_query
: Query a specific IP address.
ip_port_collection_query
: Query IP port configurations.
ip_port_instance_query
: Query a specific IP port configuration.
ip_port_modify
: Modify IP port parameters.
job_collectionquery
: Query jobs.
job_instancequery
: Query a specific job.
keystore_archive_downloadakeystorebackuparchivefile
: Download a keystore backup archive file that was previously generated by a successful /api/rest/keystore_archive/regenerate POST command. Thi…
keystore_archive_regeneratearchivefile
: Creates a new encryption keystore archive file to replace the existing archive file, which includes the individual keystore backup files from…
license_collection_query
: Query license information for the cluster. There is always one license instance.
license_instance_query
: Query the specific license information for the cluster.
license_license_file_upload
: Upload a software license to install the license on the cluster.
license_retrieve_license
: Retrieve the license directly from the DellEMC Software Licensing Central. This runs automatically when the cluster is configured, and if it…
local_user_collection_query
: Query all local user account instances. This resource type collection query does not support filtering, sorting or pagination.
local_user_create
: Create a new local user account. Any existing local user with either an administrator or a security administrator role can create a new local…
local_user_delete
: Delete a local user account instance using the unique identifier. You cannot delete the default 'admin' account or the account you are curren…
local_user_instance_query
: Query a specific local user account instance using a unique identifier.
local_user_modify
: Modify a property of a local user account using the unique identifier. You cannot modify the default 'admin' user account.
login_session_collection_query
: Obtain the login session for the current user. This resource type collection query does not support filtering, sorting or pagination.
logout_logout
: Log out the current user.
maintenance_window_collection_query
: Query the maintenance window configurations.
maintenance_window_instance_query
: Query one appliance maintenance window configuration.
maintenance_window_modify
: Configure maintenance window.
metrics_metrics
: Retrieve metrics for a specified type.
migration_recommendation_collectionquery
: Get migration recommendations.
migration_recommendation_create
: Generate a recommendation for redistributing storage utilization between appliances.
migration_recommendation_create_migration_sessions
: Create the migration sessions to implement a migration recommendation. If the response contains a list of hosts to rescan, those hosts must b…
migration_recommendation_delete
: Delete a migration recommendation.
migration_recommendation_instancequery
: Get a single migration recommendation.
migration_recommendation_start_migration_sessions
: Start previously created migration sessions for a recommendation. Ensure that any rescans specified in the create_migration_sessions response h…
migration_session_collection_query
: Query migration sessions.
migration_session_create
: Create a new migration session. For virtual volumes (vVols), the background copy is completed during this phase and the ownership of the vVol…
migration_session_cutover
: Final phase of the migration, when ownership of the volume, vVol, or volume group is transferred to the new appliance.
migration_session_delete
: Delete a migration session. With the force option, a migration session can be deleted regardless of its state. All background activity is can…
migration_session_instance_query
: Query a specific migration session.
migration_session_pause
: Pause a migration session. Only migration sessions in the synchronizing state can be paused.
migration_session_resume
: Resume a paused migration session. You cannot resume a migration session in the failed state.
migration_session_sync
: Synchronize a migration session. During this phase, the majority of the background copy is completed and there are no interruptions to any se…
nas_server_collection_query
: Query all NAS servers.
nas_server_create
: Create a NAS server.
nas_server_delete
: Delete a NAS server.
nas_server_download_group
: Download a NAS server group file containing the template or the actual (if already uploaded) group details.
nas_server_download_homedir
: Download a NAS server homedir file containing the template or the actual (if already uploaded) homedir configuration settings.
nas_server_download_hosts
: Download a NAS server host file containing the template or the actual (if already uploaded) host details.
nas_server_download_netgroup
: Download a NAS server netgroup file containing the template or the actual (if already uploaded) netgroup details.
nas_server_download_nsswitch
: Download a NAS server nsswitch file containing the template or the actual (if already uploaded) nsswitch configuration settings.
nas_server_download_ntxmap
: Download a NAS server ntxmap file containing the template or the actual (if already uploaded) ntxmap configuration settings.
nas_server_download_passwd
: Download a NAS server passwd file containing the template or the actual (if already uploaded) passwd details.
nas_server_download_user_mapping_report
: Download the report generated by the update_user_mappings action.
nas_server_instance_query
: Query a specific NAS server.
nas_server_modify
: Modify the settings of a NAS server.
nas_server_ping
: Ping destination from NAS server.
nas_server_update_user_mappings
: Fix the user mappings for all file systems associated with the NAS server. This process updates file ownership on the NAS server's file syste…
nas_server_upload_group
: Upload NAS server group file.
nas_server_upload_homedir
: Upload the NAS server homedir file.
nas_server_upload_hosts
: Upload NAS server host file.
nas_server_upload_netgroup
: Upload the NAS server netgroup file.
nas_server_upload_nsswitch
: Upload the NAS server nsswitch file.
nas_server_upload_ntxmap
nas_server_upload_passwd
: Upload NAS server passwd file.
network_collection_query
: Query the IP network configurations of the cluster.
network_instance_query
: Query a specific IP network configuration.
network_modify
: Modify IP network parameters, such as gateways, netmasks, VLAN identifiers, and IP addresses.
network_replace
: Reconfigure cluster management network settings from IPv4 to IPv6 or vice versa.
network_scale
: Add IP ports for use by the storage network, or remove IP ports so they can no longer be used. At least one IP port must be configured for use…
nfs_export_collection_query
: List NFS Exports.
nfs_export_create
: Create an NFS Export for a Snapshot.
nfs_export_delete
: Delete an NFS Export.
nfs_export_instance_query
: Get NFS Export properties.
nfs_export_modify
: Modify NFS Export properties.
nfs_server_collection_query
: Query all NFS servers.
nfs_server_create
: Create an NFS server.
nfs_server_delete
: Delete an NFS server.
nfs_server_instance_query
: Query settings of an NFS server.
nfs_server_join
: Join the secure NFS server to the NAS server's AD domain, which is necessary for Secure NFS.
nfs_server_modify
: Modify NFS server settings.
nfs_server_unjoin
: Unjoin the secure NFS server from the NAS server's Active Directory domain. If you unjoin with secure NFS exports active, exports will be una…
node_collection_query
: Query the nodes in a cluster.
node_instance_query
: Query a specific node in a cluster.
ntp_collection_query
: Query NTP settings for a cluster.
ntp_instance_query
: Query a specific NTP setting.
ntp_modify
: Modify NTP settings.
performance_rule_collectionquery
: Get performance rules.
performance_rule_instancequery
: Get a performance rule by id.
physical_switch_collection_query
: Query physical switch settings for a cluster.
physical_switch_create
: Create physical switch settings.
physical_switch_delete
: Delete the physical switch settings.
physical_switch_instance_query
: Query specific physical switch settings.
physical_switch_modify
: Modify physical switch settings.
policy_collection_query
: Query protection and performance policies. The following REST query is an example of how to retrieve protection policies along with their rule…
policy_create
: Create a new protection policy. Protection policies can be assigned to volumes or volume groups. When a protection policy is assigned to a vo…
policy_delete
: Delete a protection policy. Protection policies that are used by any storage resources cannot be deleted.
policy_instance_query
: Query a specific policy.
policy_modify
: Modify a protection policy.
remote_system_collection_query
: Query remote systems.
remote_system_create
: Create a new remote system relationship. The type of remote system being connected requires different parameter sets. For PowerStore remote s…
remote_system_delete
: Delete a remote system. Deleting the remote system deletes the management and data connections established with the remote system. You cannot…
remote_system_instance_query
: Query a remote system instance.
remote_system_modify
: Modify a remote system instance. The list of valid parameters depends on the type of remote system. For PowerStore remote system relationships…
remote_system_verify
: Verify and update the remote system instance. Detects changes in the local and remote systems and reestablishes data connections, also taking…
replication_rule_collection_query
: Query all replication rules.
replication_rule_create
: Create a new replication rule.
replication_rule_delete
: Delete a replication rule. Deleting a rule is not permitted if the rule is associated with a protection policy that is currently applied to a…
replication_rule_instance_query
: Query a specific replication rule.
replication_rule_modify
: Modify a replication rule. If the rule is associated with a policy that is currently applied to a storage resource, the modified rule is immedi…
replication_session_collection_query
: Query replication sessions.
replication_session_failover
: Fail over a replication session instance. Failing over the replication session changes the role of the destination system. After a failover,…
replication_session_instance_query
: Query a replication session instance.
replication_session_pause
: Pause a replication session instance. You can pause a replication session when you need to modify the source or destination system. For examp…
replication_session_reprotect
: Reprotect a replication session instance. Activates the replication session and starts synchronization. This can only be used when the sessio…
replication_session_resume
: Resume a replication session instance that is paused. Resuming the replication session schedules a synchronization cycle if the session was i…
replication_session_sync
: Synchronize the destination resource with changes on the source resource from the previous synchronization cycle. Synchronization happens either…
role_collection_query
: Query roles. This resource type collection query does not support filtering, sorting or pagination.
role_instance_query
: Query a specific role.
sas_port_collection_query
: Query the SAS port configuration for all cluster nodes.
sas_port_instancequery
: Query a specific SAS port configuration.
security_config_collection_query
: Query system security configurations. This resource type collection query does not support filtering, sorting or pagination.
security_config_instance_query
: Query a specific system security configuration.
service_config_collection_query
: Query the service configuration instances for the cluster. This resource type collection query does not support filtering, sorting or paginat…
service_config_instance_query
: Query the service configuration instances for an appliance.
service_config_modify
: Modify the service configuration for an appliance.
service_user_collection_query
: Query the service user account instance. This resource type collection query does not support filtering, sorting or pagination.
service_user_instance_query
: Query the service user account using the unique identifier.
service_user_modify
: Modify the properties of the service user account.
smb_server_collection_query
: Query all SMB servers.
smb_server_create
: Create an SMB server.
smb_server_delete
: Delete an SMB server. The SMB server must not be joined to a domain to be deleted.
smb_server_instance_query
: Query settings of a specific SMB server.
smb_server_join
: Join the SMB server to an Active Directory domain.
smb_server_modify
: Modify an SMB server's settings.
smb_server_unjoin
: Unjoin the SMB server from an Active Directory domain.
smb_share_collection_query
: List SMB shares.
smb_share_create
: Create an SMB share.
smb_share_delete
: Delete an SMB share.
smb_share_instance_query
: Get an SMB share.
smb_share_modify
: Modify SMB share properties.
smtp_config_collection_query
: Query the SMTP configuration. There is always exactly one smtp_config instance.
smtp_config_instance_query
: Query the specific SMTP configuration.
smtp_config_modify
: Configure the outgoing SMTP information.
smtp_config_test
: Test the SMTP configuration.
snapshot_rule_collection_query
: Query all snapshot rules.
snapshot_rule_create
: Create a new snapshot rule.
snapshot_rule_delete
: Delete a snapshot rule.
snapshot_rule_instance_query
: Query a specific snapshot rule.
snapshot_rule_modify
: Modify a snapshot rule. If the rule is associated with a policy that is currently applied to a storage resource, the modified rule is immediate…
software_installed_collection_query
: Query the software that is installed on each appliance. The output returns a list of JSON objects representing the software that is installed…
software_installed_instance_query
: Query a specific item from the list of installed software.
software_package_collection_query
: Query the software packages that are known by the cluster. The output returns a list of JSON objects representing the packages.
software_package_delete
: Delete the specified software package from the cluster. This operation may take some time to complete.
software_package_install
: Start a software upgrade background job for the specified appliance within the cluster. If an appliance is not specified, the upgrade is per…
software_package_instance_query
: Query a specific software package.
software_package_puhc
: Run the pre-upgrade health check for a software package. This operation may take some time to respond.
software_package_upload
: Push a software package file from the client to the cluster. When successfully uploaded and verified, the result is a software_package in the…
storage_container_collection_query
: List storage containers.
storage_container_create
: Create a virtual volume (vVol) storage container.
storage_container_delete
: Delete a storage container.
storage_container_instance_query
: Query a specific storage container instance.
storage_container_modify
: Modify a storage container.
storage_container_mount
: Mount a storage container as a vVol datastore in vCenter.
storage_container_unmount
: Unmount a storage container, which removes the vVol datastore from vCenter.
vcenter_collection_query
: Query registered vCenters.
vcenter_create
: Add a vCenter. Not allowed in Unified+ deployments.
vcenter_delete
: Delete a registered vCenter. Deletion of vCenter disables functionality that requires communication with vCenter. Not allowed in Unified+ dep…
vcenter_instance_query
: Query a specific vCenter instance.
vcenter_modify
: Modify vCenter settings.
veth_port_collection_query
: Query virtual Ethernet port configurations.
veth_port_instance_query
: Query a specific virtual Ethernet port configuration.
virtual_machine_collection_query
: Query virtual machines that use storage from the cluster.
virtual_machine_delete
: Delete a virtual machine snapshot. This operation cannot be used on a base virtual machine or virtual machine template.
virtual_machine_instance_query
: Query a specific virtual machine instance.
virtual_machine_modify
: Modify a virtual machine. This operation cannot be used on virtual machine snapshots or templates.
virtual_machine_snapshot
: Create a snapshot of a virtual machine. This operation cannot be used on a virtual machine snapshot or template.
virtual_volume_collection_query
: Get virtual volumes.
virtual_volume_delete
: Delete a virtual volume.
virtual_volume_instance_query
: Get a specific virtual volume.
volume_attach
: Attach a volume to a host or host group.
volume_clone
: Create a clone of a volume or snapshot.
volume_collection_query
: Query volumes that are provisioned on the appliance.
volume_create
: Create a volume on the appliance.
volume_delete
: Delete a volume. A volume which is attached to a host or host group or is a member of a volume group cannot be deleted. A volume which has…
volume_detach
: Detach a volume from a host or host group.
volume_group_add_members
: Add member volumes to an existing primary or clone volume group. This cannot be used to add members to a snapshot set. Members cannot be added…
volume_group_clone
: Clone a volume group. The clone volume group will be created on the same appliance as the source volume group. A clone of a volume group will…
volume_group_collection_query
: Query all volume groups, including snapshot sets and clones of volume groups.
volume_group_create
: Create a new volume group. The resulting volume group will have a type of Primary.
volume_group_delete
: Delete a volume group, snapshot set, or clone. Before you try deleting a volume group, snapshot set, or clone, ensure that you first detach it…
volume_group_instance_query
: Query a specific volume group, snapshot set, or clone.
volume_group_modify
: Modify a volume group, snapshot set, or clone.
volume_group_refresh
: Refresh the contents of a volume group (the target volume group) from another volume group in the same family. A backup snapshot set of the ta…
volume_group_remove_members
: Remove members from an existing primary or clone volume group. This cannot be used to remove members from a snapshot set. Members cannot be re…
volume_group_restore
: Restore a volume group from a snapshot set. A primary or a clone volume group can only be restored from one of its immediate snapshot sets. A…
volume_group_snapshot
: Create a new snapshot set for a volume group. When a snapshot of a volume group is created, the resultant snapshot volume group is referred to…
volume_instance_query
: Query a specific volume instance.
volume_modify
: Modify the parameters of a volume.
volume_refresh
: Refresh the contents of the target volume from another volume in the same family. By default, a backup snapshot of the target volume is not c…
volume_restore
: Restore a volume from a snapshot. A primary or clone volume can only be restored from one of its immediate snapshots. By default, a backup s…
volume_snapshot
: Create a snapshot of a volume or a clone. The source id of the snapshot is the id of the source volume or clone. The source time is the time when…
x509_certificate_collection_query
: Query to list X509 Certificate instances. This resource type collection query does not support filtering, sorting or pagination.
x509_certificate_decommission_certificates
: Decommission x509 certificates for one service type (currently only Replication_HTTP is supported) of one scope (for example remote system)…
x509_certificate_exchange_certificates
: Exchange certificates between two clusters. Add CA certificates to the trust store of each cluster and assign roles to the client certificat…
x509_certificate_instance_query
: Query a specific X509 Certificate instance.
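Each task wraps one PowerStore REST operation and can be invoked directly with `bolt task run`. A minimal sketch (the target name `mypowerstore` is an assumption — define it in your Bolt inventory first — and a task's actual parameters should always be checked with `bolt task show`):

```shell
# Show the documented parameters of a task before running it.
bolt task show powerstore::volume_create

# Query all volumes on the array (read-only collection query).
bolt task run powerstore::volume_collection_query --targets mypowerstore
```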
Plans
powerstore::capacity_volumes
: List volumes with more than a given capacity.
powerstore::create_assign_protection_policy
: A Bolt Plan that creates a set of "organization default" snapshot rules, uses them to create a protection policy, then assigns that new protection policy to an existing volume.
powerstore::create_multiple_volumes
: This plan creates multiple volumes.
powerstore::create_volume
: A Bolt Plan that creates or deletes a volume.
powerstore::create_volume_attach_host_with_fs
: A Bolt Plan that creates a volume, maps that new volume to an existing host, scans that host's iSCSI bus to ensure device nodes have been created, computes the device name as viewed by the host, partitions the new disk device, puts a new file system on the partition, and mounts the fresh file system at the designated location.
powerstore::delete_multiple_volumes
: This plan deletes multiple volumes after first removing them from a group if they are a group member.
powerstore::find_empty_volume_groups
: Find empty volume groups - Puppet language plan example.
powerstore::get_fs_used_size_greaterthan_threshold
: List filesystems using more than $threshold percent of storage. Note: currently limited to one target.
powerstore::multi_create_volume_attach_host_with_fs
: This Bolt Plan makes it possible to create volumes, map them to multiple hosts, create XFS file systems upon them, and mount them at a specific location by wrapping another plan that is capable of doing this for a single host.
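Plans compose the tasks above into multi-step workflows and are run much like tasks. A sketch (the target name `mypowerstore` is illustrative; inspect each plan's real parameter list before running it):

```shell
# Inspect the plan's parameters and documentation first.
bolt plan show powerstore::create_volume

# Then run it against a PowerStore target from your inventory.
bolt plan run powerstore::create_volume --targets mypowerstore
```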
Resource types
powerstore_email_notify_destination
Use these resource types to configure outgoing SMTP and email notifications.
Properties
The following properties are available in the powerstore_email_notify_destination
type.
email_address
Data type: Optional[String]
Email address to receive notifications.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
notify_critical
Data type: Optional[Boolean]
Whether to send notifications for critical alerts.
notify_info
Data type: Optional[Boolean]
Whether to send notifications for informational alerts.
notify_major
Data type: Optional[Boolean]
Whether to send notifications for major alerts.
notify_minor
Data type: Optional[Boolean]
Whether to send notifications for minor alerts.
Parameters
The following parameters are available in the powerstore_email_notify_destination
type.
id
namevar
Data type: String
Unique identifier of the email notification destination.
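Putting the properties and the `id` namevar together, a declaration might look like the following sketch (the resource title and email address are illustrative, not taken from a real system):

```puppet
# Hypothetical example: send critical and major alerts to one address.
powerstore_email_notify_destination { 'storage_admin_destination':
  ensure          => present,
  email_address   => 'storage-alerts@example.com',
  notify_critical => true,
  notify_major    => true,
  notify_minor    => false,
  notify_info     => false,
}
```

Such a manifest can then be pushed to a PowerStore target with `bolt apply`.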
powerstore_file_dns
Use these resources to configure the Domain Name System (DNS) settings for a NAS server. One DNS settings object may be configured per NAS server. A DNS is a hierarchical system responsible for converting domain names to their corresponding IP addresses. A NAS server's DNS settings should allow DNS resolution of all names within an SMB server's domain in order for the SMB protocol to operate normally within an Active Directory domain. The DNS default port is 53.
Properties
The following properties are available in the powerstore_file_dns
type.
add_ip_addresses
Data type: Optional[Array[String]]
IP addresses to add to the current list. The addresses may be IPv4 or IPv6. Error occurs if an IP address already exists. Cannot be combined with ip_addresses.
domain
Data type: Optional[String[1,255]]
Name of the DNS domain, where the NAS Server does host names lookup when an FQDN is not specified in the request.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
ip_addresses
Data type: Optional[Array[String]]
The list of DNS server IP addresses. The addresses may be IPv4 or IPv6.
nas_server_id
Data type: Optional[String]
Unique identifier of the associated NAS Server instance that uses this DNS object. Only one DNS object per NAS Server is supported.
remove_ip_addresses
Data type: Optional[Array[String]]
IP addresses to remove from the current list. The addresses may be IPv4 or IPv6. Error occurs if IP address is not present. Cannot be combined with ip_addresses.
transport
Data type: Optional[Enum['UDP','TCP']]
Transport used when connecting to the DNS server: UDP - DNS uses the UDP protocol (default); TCP - DNS uses the TCP protocol.
transport_l10n
Data type: Optional[String]
Localized message string corresponding to transport
Parameters
The following parameters are available in the powerstore_file_dns
type.
id
namevar
Data type: String
Unique identifier of the DNS object.
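As a sketch, the documented properties translate into a declaration like the following (all values are illustrative; `nas_server_id` must reference an existing NAS server on the array):

```puppet
# Hypothetical example: point a NAS server at two DNS servers over UDP.
powerstore_file_dns { 'file_dns_1':
  ensure        => present,
  domain        => 'corp.example.com',
  ip_addresses  => ['192.0.2.10', '192.0.2.11'],
  nas_server_id => 'nas_server_1',
  transport     => 'UDP',
}
```

Note that `ip_addresses` replaces the whole list; use `add_ip_addresses` or `remove_ip_addresses` for incremental changes, and do not combine them with `ip_addresses`.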
powerstore_file_ftp
Use these resources to configure one File Transfer Protocol (FTP) server per NAS server. One FTP server can be configured per NAS server to have both secure and unsecure services running. By default when an FTP server is created, the unsecure service will be running. FTP is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. For secure transmission that encrypts the username, password, and content, FTP is secured with SSH (SFTP). SFTP listens on port 22. You can activate an FTP server and SFTP server independently on each NAS server. The FTP and SFTP clients are authenticated using credentials defined on a Unix name server (such as an NIS server or an LDAP server) or a Windows domain. Windows user names need to be entered using the 'username@domain' or 'domain\username' formats. Each secure and unsecure service must have a home directory defined in the name server that must be accessible on the NAS server. FTP also allows clients to connect as anonymous users.
Properties
The following properties are available in the powerstore_file_ftp
type.
add_groups
Data type: Optional[Array[String]]
Groups to add to the current groups. Error occurs if the group already exists. Cannot be combined with groups.
add_hosts
Data type: Optional[Array[String]]
Host IP addresses to add to the current hosts. The addresses may be IPv4 or IPv6. Error occurs if the IP address already exists. Cannot be combined with hosts.
add_users
Data type: Optional[Array[String]]
Users to add to the current users. Error occurs if the user already exists. Cannot be combined with users.
audit_dir
Data type: Optional[String]
(Applies when the value of is_audit_enabled is true.) Directory of FTP/SFTP audit files. Logs are saved in '/' directory (default) or in a mounted file system (Absolute path of the File system directory which should already exist).
audit_max_size
Data type: Optional[Integer[40960,9223372036854775807]]
(Applies when the value of is_audit_enabled is true.) Maximum size of all (current plus archived) FTP/SFTP audit files, in bytes. There is a maximum of 5 audit files: 1 current audit file (ftp.log) and 4 archived audit files. The maximum value for this setting is 5GB (each file of 1GB) if the audit directory belongs to a user file system of the NAS server. If the audit directory is '/', the maximum value is 5MB (each file of 1MB). The minimum value is 40kB (each file of 8KB) on any file system.
default_homedir
Data type: Optional[String]
(Applies when the value of is_homedir_limit_enabled is false.) Default directory of FTP and SFTP clients who have a home directory that is not defined or accessible.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
groups
Data type: Optional[Array[String]]
Allowed or denied user groups, depending on the value of the is_allowed_groups attribute. If allowed groups exist, only users who are members of these groups and no others can connect to the NAS server through FTP or SFTP. If denied groups exist, all users who are members of those groups always have access denied to the NAS server through FTP or SFTP. If the list is empty, there is no restriction to NAS server access through FTP or SFTP based on the user group.
hosts
Data type: Optional[Array[String]]
Allowed or denied hosts, depending on the value of the is_allowed_hosts attribute. A host is defined using its IP address. Subnets using CIDR notation are also supported. If allowed hosts exist, only those hosts and no others can connect to the NAS server through FTP or SFTP. If denied hosts exist, they always have access denied to the NAS server through FTP or SFTP. If the list is empty, there is no restriction to NAS server access through FTP or SFTP based on the host IP address. The addresses may be IPv4 or IPv6.
is_allowed_groups
Data type: Optional[Boolean]
Indicates whether the groups attribute contains allowed or denied user groups. Values are:
- true - groups contains allowed user groups.
- false - groups contains denied user groups.
is_allowed_hosts
Data type: Optional[Boolean]
Indicates whether the hosts attribute contains allowed or denied hosts. Values are:
- true - hosts contains allowed hosts.
- false - hosts contains denied hosts.
is_allowed_users
Data type: Optional[Boolean]
Indicates whether the users attribute contains allowed or denied users. Values are:
- true - users contains allowed users.
- false - users contains denied users.
is_anonymous_authentication_enabled
Data type: Optional[Boolean]
Indicates whether FTP clients can be authenticated anonymously. Values are:
- true - Anonymous user name is accepted.
- false - Anonymous user name is not accepted.
is_audit_enabled
Data type: Optional[Boolean]
Indicates whether the activity of FTP and SFTP clients is tracked in audit files. Values are:
- true - FTP/SFTP activity is tracked.
- false - FTP/SFTP activity is not tracked.
is_ftp_enabled
Data type: Optional[Boolean]
Indicates whether the FTP server is enabled on the NAS server specified in the nasServer attribute. Values are:
- true - FTP server is enabled on the specified NAS server.
- false - FTP server is disabled on the specified NAS server.
is_homedir_limit_enabled
Data type: Optional[Boolean]
Indicates whether FTP or SFTP user access is limited to the home directory of the user. Values are:
- true - An FTP or SFTP user can access only the home directory of the user.
- false - FTP and SFTP users can access any NAS server directory, according to NAS server permissions.
is_sftp_enabled
Data type: Optional[Boolean]
Indicates whether the SFTP server is enabled on the NAS server specified in the nasServer attribute. Values are:
- true - SFTP server is enabled on the specified NAS server.
- false - SFTP server is disabled on the specified NAS server.
is_smb_authentication_enabled
Data type: Optional[Boolean]
Indicates whether FTP and SFTP clients can be authenticated using an SMB user name. These user names are defined in a Windows domain controller, and their formats are user@domain or domain\user. Values are:
- true - SMB user names are accepted for authentication.
- false - SMB user names are not accepted for authentication.
is_unix_authentication_enabled
Data type: Optional[Boolean]
Indicates whether FTP and SFTP clients can be authenticated using a Unix user name. Unix user names are defined in LDAP, in NIS servers, or in the local passwd file. Values are:
- true - Unix user names are accepted for authentication.
- false - Unix user names are not accepted for authentication.
message_of_the_day
Data type: Optional[String]
Message of the day displayed on the console of FTP clients after their authentication. The length of this message is limited to 511 bytes of UTF-8 characters, and the length of each line is limited to 80 bytes.
nas_server_id
Data type: Optional[String]
Unique identifier of the NAS server that is configured with the FTP server.
remove_groups
Data type: Optional[Array[String]]
Groups to remove from the current groups. Error occurs if the group is not present. Cannot be combined with groups.
remove_hosts
Data type: Optional[Array[String]]
Host IP addresses to remove from the current hosts. The addresses may be IPv4 or IPv6. Error occurs if the IP address is not present. Cannot be combined with hosts.
remove_users
Data type: Optional[Array[String]]
Users to remove from the current users. Error occurs if the user is not present. Cannot be combined with users.
users
Data type: Optional[Array[String]]
Allowed or denied users, depending on the value of the is_allowed_users attribute.
- If allowed users exist, only those users and no others can connect to the NAS server through FTP or SFTP.
- If denied users exist, they are always denied access to the NAS server through FTP or SFTP.
- If the list is empty, there is no user-based restriction on NAS server access through FTP or SFTP.
welcome_message
Data type: Optional[String]
Welcome message displayed on the console of FTP and SFTP clients before their authentication. The length of this message is limited to 511 bytes of UTF-8 characters, and the length of each line is limited to 80 bytes.
Parameters
The following parameters are available in the powerstore_file_ftp
type.
id
namevar
Data type: String
Unique identifier of the FTP/SFTP Server object.
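A minimal manifest sketch for this type, usable with `bolt apply` or from a plan. The resource title stands in for the server-assigned FTP server id, and the attribute values are illustrative placeholders, not values from a real system:

```puppet
powerstore_file_ftp { 'example-ftp-server-id':
  ensure                              => present,
  nas_server_id                       => 'example-nas-server-id',
  is_ftp_enabled                      => true,
  is_sftp_enabled                     => true,
  is_anonymous_authentication_enabled => false,
  is_audit_enabled                    => false,
}
```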
powerstore_file_interface
Information about File network interfaces in the storage system. These interfaces control access to Windows (CIFS) and UNIX/Linux (NFS) file storage.
Properties
The following properties are available in the powerstore_file_interface
type.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
gateway
Data type: Optional[String[1,45]]
Gateway address for the network interface. IPv4 and IPv6 are supported.
ip_address
Data type: Optional[String[1,45]]
IP address of the network interface. IPv4 and IPv6 are supported.
is_disabled
Data type: Optional[Boolean]
Indicates whether the network interface is disabled.
name
Data type: Optional[String]
Name of the network interface. This property supports case-insensitive filtering.
nas_server_id
Data type: Optional[String]
Unique identifier of the NAS server to which the network interface belongs, as defined by the nas_server resource type.
prefix_length
Data type: Optional[Integer[1,128]]
Prefix length for the interface. IPv4 and IPv6 are supported.
role
Data type: Optional[Enum['Production','Backup']]
- Production - This type of network interface is used for all file protocols and services of a NAS server. This type of interface is inactive while a NAS server is in destination mode.
- Backup - This type of network interface is used only for NDMP/NFS backup or disaster recovery testing. This type of interface is always active in all NAS server modes.
role_l10n
Data type: Optional[String]
Localized message string corresponding to role
vlan_id
Data type: Optional[Integer[0,4094]]
Virtual Local Area Network (VLAN) identifier for the interface. The interface uses the identifier to accept packets that have matching VLAN tags.
Parameters
The following parameters are available in the powerstore_file_interface
type.
id
namevar
Data type: String
Unique identifier of the file interface.
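A minimal manifest sketch for this type. The resource title stands in for the server-assigned interface id; the NAS server id is a placeholder, and the network values use documentation addresses (RFC 5737), so all values here are illustrative:

```puppet
powerstore_file_interface { 'example-file-interface-id':
  ensure        => present,
  nas_server_id => 'example-nas-server-id',
  ip_address    => '192.0.2.10',
  prefix_length => 24,
  gateway       => '192.0.2.1',
  role          => 'Production',
}
```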
powerstore_file_interface_route
Use these resources to manage static IP routes, including creating, modifying, and deleting these routes. A route determines where to send a packet next so it can reach its final destination. A static route is set explicitly and does not automatically adapt to the changing network infrastructure. A route is defined by an interface, a destination IP address range, and the IP address of a corresponding gateway. Note: IP routes connect an interface (IP address) to the larger network through gateways. Without routes and a gateway specified, the interface is no longer accessible outside of its immediate subnet. As a result, network shares and exports associated with the interface are no longer available to clients outside their immediate subnet.
Properties
The following properties are available in the powerstore_file_interface_route
type.
destination
Data type: Optional[String]
IPv4 or IPv6 address of the target network node, based on the specific route type. Values are:
- For a default route, there is no value because the system uses the specified gateway IP address.
- For a host route, the value is the host IP address.
- For a subnet route, the value is a subnet IP address.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
file_interface_id
Data type: Optional[String]
Unique identifier of the associated file interface.
gateway
Data type: Optional[String[1,45]]
IP address of the gateway associated with the route.
operational_status
Data type: Optional[Enum['Ok','Invalid_IP_Version','Invalid_Source_Interface','Invalid_Gateway','Not_Operational']]
File interface route operational status. Values are:
- Ok - The route is working fine.
- Invalid_IP_Version - Source interfaces have a different IP protocol version than the route.
- Invalid_Source_Interface - No source interfaces are set up on the system.
- Invalid_Gateway - Source interfaces are in a different subnet than the gateway.
- Not_Operational - The route is not operational.
operational_status_l10n
Data type: Optional[String]
Localized message string corresponding to operational_status
prefix_length
Data type: Optional[Integer[1,128]]
IPv4 or IPv6 prefix length for the route.
Parameters
The following parameters are available in the powerstore_file_interface_route
type.
id
namevar
Data type: String
Unique identifier of the file interface route object.
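A minimal manifest sketch for a subnet route on a file interface. The resource title and the file interface id are placeholders, and the addresses use documentation ranges (RFC 5737):

```puppet
powerstore_file_interface_route { 'example-route-id':
  ensure            => present,
  file_interface_id => 'example-file-interface-id',
  destination       => '198.51.100.0',
  prefix_length     => 24,
  gateway           => '192.0.2.1',
}
```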
powerstore_file_kerberos
Use these resources to manage the Kerberos service for a NAS server. One Kerberos service object may be configured per NAS Server. Kerberos is a distributed authentication service designed to provide strong authentication with secret-key cryptography. It works on the basis of "tickets" that allow nodes communicating over a non-secure network to prove their identity in a secure manner. When configured to act as a secure NFS server, the NAS Server uses the RPCSEC_GSS security framework and Kerberos authentication protocol to verify users and services. You can configure a secure NFS environment for a multiprotocol NAS Server or one that supports Unix-only shares. In this environment, user access to NFS file systems is granted based on Kerberos principal names.
Properties
The following properties are available in the powerstore_file_kerberos
type.
add_kdc_addresses
Data type: Optional[Array[String[1,255]]]
Fully Qualified domain names of the Kerberos Key Distribution Center (KDC) servers to add to the current list. Error occurs if name already exists. Cannot be combined with kdc_addresses. IPv4 and IPv6 addresses are not supported.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
kdc_addresses
Data type: Optional[Array[String[1,255]]]
Fully Qualified domain names of the Kerberos Key Distribution Center (KDC) servers. IPv4 and IPv6 addresses are not supported.
nas_server_id
Data type: Optional[String]
Unique identifier of the associated NAS Server instance that uses this Kerberos object. Only one Kerberos object per NAS Server is supported.
port_number
Data type: Optional[Integer[0,65535]]
KDC servers TCP port.
realm
Data type: Optional[String[1,255]]
Realm name of the Kerberos Service.
remove_kdc_addresses
Data type: Optional[Array[String[1,255]]]
Fully Qualified domain names of the Kerberos Key Distribution Center (KDC) servers to remove from the current list. Error occurs if name is not in the existing list. Cannot be combined with kdc_addresses. IPv4 and IPv6 addresses are not supported.
Parameters
The following parameters are available in the powerstore_file_kerberos
type.
id
namevar
Data type: String
Unique identifier of the Kerberos service object.
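A minimal manifest sketch for this type. The title, NAS server id, realm, and KDC host names are all illustrative placeholders; port 88 is the conventional Kerberos port:

```puppet
powerstore_file_kerberos { 'example-kerberos-id':
  ensure        => present,
  nas_server_id => 'example-nas-server-id',
  realm         => 'EXAMPLE.COM',
  kdc_addresses => ['kdc1.example.com', 'kdc2.example.com'],
  port_number   => 88,
}
```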
powerstore_file_ldap
Use these resources to manage the Lightweight Directory Access Protocol (LDAP) settings for the NAS Server. You can configure one LDAP settings object per NAS Server. LDAP is an application protocol for querying and modifying directory services running on TCP/IP networks. LDAP provides central management for network authentication and authorization operations by helping to centralize user and group management across the network. A NAS Server can use LDAP as a Unix Directory Service to map users, retrieve netgroups, and build a Unix credential. When an initial LDAP configuration is applied, the system checks for the type of LDAP server. It can be an Active Directory schema or an RFC 2307 schema.
Properties
The following properties are available in the powerstore_file_ldap
type.
add_addresses
Data type: Optional[Array[String]]
IP addresses to add to the current server IP addresses list. The addresses may be IPv4 or IPv6. Error occurs if an IP address already exists in the addresses list. Cannot be combined with addresses.
addresses
Data type: Optional[Array[String]]
The list of LDAP server IP addresses. The addresses may be IPv4 or IPv6.
authentication_type
Data type: Optional[Enum['Anonymous','Simple','Kerberos']]
Authentication type for the LDAP server. Values are:
- Anonymous - Anonymous authentication means no authentication occurs and the NAS Server uses an anonymous login to access the LDAP-based directory server.
- Simple - Simple authentication means the NAS Server must provide a bind distinguished name and password to access the LDAP-based directory server.
- Kerberos - Kerberos authentication means the NAS Server uses a KDC to confirm the identity when accessing the Active Directory.
authentication_type_l10n
Data type: Optional[String]
Localized message string corresponding to authentication_type
base_dn
Data type: Optional[String[3,255]]
Name of the LDAP base DN. The Base Distinguished Name (BDN) is the root of the LDAP directory tree. The appliance uses the DN to bind to the LDAP service and to locate the point in the LDAP directory tree at which to begin a search for information. The base DN can be expressed as a fully qualified domain name or in X.509 format by using the dc= attribute. For example, if the fully qualified domain name is mycompany.com, the base DN is expressed as dc=mycompany,dc=com.
bind_dn
Data type: Optional[String[0,1023]]
Bind Distinguished Name (DN) to be used when binding.
bind_password
Data type: Optional[String[0,1023]]
The associated password to be used when binding to the server.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
is_certificate_uploaded
Data type: Optional[Boolean]
Indicates whether an LDAP certificate file has been uploaded.
is_config_file_uploaded
Data type: Optional[Boolean]
Indicates whether an LDAP configuration file has been uploaded.
is_smb_account_used
Data type: Optional[Boolean]
Indicates whether SMB authentication is used to authenticate to the LDAP server. Values are:
- true - Indicates that the SMB settings are used for Kerberos authentication.
- false - Indicates that Kerberos uses its own settings.
is_verify_server_certificate
Data type: Optional[Boolean]
Indicates whether a Certification Authority certificate is used to verify the LDAP server certificate for secure SSL connections. Values are:
- true - Verifies the LDAP server's certificate.
- false - Does not verify the LDAP server's certificate.
nas_server_id
Data type: Optional[String]
Unique identifier of the associated NAS Server instance that will use this LDAP object. Only one LDAP object per NAS Server is supported.
password
Data type: Optional[String[0,1023]]
The associated password for Kerberos authentication.
port_number
Data type: Optional[Integer[0,65536]]
The TCP/IP port used by the NAS Server to connect to the LDAP servers. The default port number for LDAP is 389 and LDAPS is 636.
principal
Data type: Optional[String[0,1023]]
Specifies the principal name for Kerberos authentication.
profile_dn
Data type: Optional[String[0,255]]
For an iPlanet LDAP server, specifies the DN of the entry with the configuration profile.
protocol
Data type: Optional[Enum['LDAP','LDAPS']]
Indicates whether the LDAP protocol uses SSL for secure network communication. SSL encrypts data over the network and provides message and server authentication. Values are:
- LDAP - LDAP protocol without SSL.
- LDAPS - (Default) LDAP protocol with SSL. When you enable LDAPS, make sure to specify the appropriate LDAPS port (usually port 636) and to upload an LDAPS trust certificate to the LDAP server.
protocol_l10n
Data type: Optional[String]
Localized message string corresponding to protocol
realm
Data type: Optional[String[0,255]]
Specifies the realm name for Kerberos authentication.
remove_addresses
Data type: Optional[Array[String]]
IP addresses to remove from the current server IP addresses list. The addresses may be IPv4 or IPv6. Error occurs if an IP address does not exist in the addresses_list. Cannot be combined with addresses.
schema_type
Data type: Optional[Enum['RFC2307','Microsoft','Unknown']]
LDAP server schema type. Values are:
- RFC2307 - OpenLDAP/iPlanet schema.
- Microsoft - Microsoft Identity Management for UNIX (IDMU/SFU) schema.
- Unknown - Unknown schema type.
schema_type_l10n
Data type: Optional[String]
Localized message string corresponding to schema_type
Parameters
The following parameters are available in the powerstore_file_ldap
type.
id
namevar
Data type: String
LDAP settings object Id.
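A minimal manifest sketch for Simple authentication over LDAPS. The title, NAS server id, address, DNs, and password are illustrative placeholders; in practice the bind password should come from Hiera or another secrets mechanism rather than being written in the manifest:

```puppet
powerstore_file_ldap { 'example-ldap-settings-id':
  ensure              => present,
  nas_server_id       => 'example-nas-server-id',
  addresses           => ['192.0.2.20'],
  protocol            => 'LDAPS',
  port_number         => 636,
  authentication_type => 'Simple',
  base_dn             => 'dc=example,dc=com',
  bind_dn             => 'cn=admin,dc=example,dc=com',
  bind_password       => 'example-password', # placeholder; use a secret store
}
```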
powerstore_file_ndmp
The Network Data Management Protocol (NDMP) provides a standard for backing up file servers on a network. NDMP allows centralized applications to back up file servers that run on various platforms and platform versions. NDMP reduces network congestion by isolating control path traffic from data path traffic, which permits centrally managed and monitored local backup operations. Storage systems support NDMP v2-v4 over the network. Direct-attach NDMP is not supported. This means that the tape drives need to be connected to a media server, and the NAS server communicates with the media server over the network. NDMP has an advantage when using multiprotocol file systems because it backs up the Windows ACLs as well as the UNIX security information.
Properties
The following properties are available in the powerstore_file_ndmp
type.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
nas_server_id
Data type: Optional[String]
Unique identifier of the NAS server to be configured with these NDMP settings.
password
Data type: Optional[String]
Password for the NDMP service user.
user_name
Data type: Optional[String]
User name for accessing the NDMP service.
Parameters
The following parameters are available in the powerstore_file_ndmp
type.
id
namevar
Data type: String
Unique identifier of the NDMP service object.
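A minimal manifest sketch for this type. The title, NAS server id, and credentials are illustrative placeholders; as with LDAP, the password should be sourced from a secrets mechanism in real use:

```puppet
powerstore_file_ndmp { 'example-ndmp-service-id':
  ensure        => present,
  nas_server_id => 'example-nas-server-id',
  user_name     => 'ndmp_user',
  password      => 'example-password', # placeholder; use a secret store
}
```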
powerstore_file_nis
Use these resources to manage the Network Information Service (NIS) settings object for a NAS Server. One NIS settings object may be configured per NAS server. NIS consists of a directory service protocol for maintaining and distributing system configuration information, such as user and group information, hostnames, and such. The port for NIS Service is 111.
Properties
The following properties are available in the powerstore_file_nis
type.
add_ip_addresses
Data type: Optional[Array[String]]
IP addresses to add to the current list. The addresses may be IPv4 or IPv6. Error occurs if the IP address already exists. Cannot be combined with ip_addresses.
domain
Data type: Optional[String[1,255]]
Name of the NIS domain.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
ip_addresses
Data type: Optional[Array[String]]
The list of NIS server IP addresses.
nas_server_id
Data type: Optional[String]
Unique identifier of the associated NAS Server instance that uses this NIS Service object. Only one NIS Service per NAS Server is supported.
remove_ip_addresses
Data type: Optional[Array[String]]
IP addresses to remove from the current list. The addresses may be IPv4 or IPv6. Error occurs if the IP address is not present. Cannot be combined with ip_addresses.
Parameters
The following parameters are available in the powerstore_file_nis
type.
id
namevar
Data type: String
Unique identifier of the NIS object.
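A minimal manifest sketch for this type. The title, NAS server id, domain, and server addresses are illustrative placeholders:

```puppet
powerstore_file_nis { 'example-nis-settings-id':
  ensure        => present,
  nas_server_id => 'example-nas-server-id',
  domain        => 'nis.example.com',
  ip_addresses  => ['192.0.2.30', '192.0.2.31'],
}
```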
powerstore_file_system
Manage NAS file systems.
Properties
The following properties are available in the powerstore_file_system
type.
access_policy
Data type: Optional[Enum['Native','UNIX','Windows']]
File system security access policies. Each file system uses its access policy to determine how to reconcile the differences between NFS and SMB access control. Selecting an access policy determines which mechanism is used to enforce file security on the particular file system. Values are:
- Native - Native Security.
- UNIX - UNIX Security.
- Windows - Windows Security.
access_policy_l10n
Data type: Optional[String]
Localized message string corresponding to access_policy
access_type
Data type: Optional[Enum['Snapshot','Protocol']]
Indicates whether snapshot directory or protocol access is granted to the file system snapshot. Values are:
- Snapshot - Snapshot access is via the .snapshot folder in the file system.
- Protocol - Protocol access is via normal file shares. Protocol access is not provided by default; the NFS and/or SMB share must be created explicitly for the snapshot.
access_type_l10n
Data type: Optional[String]
Localized message string corresponding to access_type
creation_timestamp
Data type: Optional[String]
Time, in seconds, when the snapshot was created.
creator_type
Data type: Optional[Enum['Scheduler','User']]
Enumeration of possible snapshot creator types. Values are:
- Scheduler - Created by a snapshot schedule.
- User - Created by a user.
- External_VSS - Created by Windows Volume Shadow Copy Service (VSS) to obtain an application-consistent snapshot.
- External_NDMP - Created by an NDMP backup operation.
- External_Restore - Created as a backup snapshot before a snapshot restore.
- External_Replication_Manager - Created by Replication Manager.
- Snap_CLI - Created inband by SnapCLI.
- AppSync - Created by AppSync.
creator_type_l10n
Data type: Optional[String]
Localized message string corresponding to creator_type
default_hard_limit
Data type: Optional[Integer[0,9223372036854775807]]
Default hard limit of user quotas and tree quotas (bytes). The hard limit value is always rounded up to match the file system's physical block size. (0 means 'No limitation'; this value can be used to compute the amount of space consumed without limiting the space.)
default_soft_limit
Data type: Optional[Integer[0,9223372036854775807]]
Default soft limit of user quotas and tree quotas (bytes). The value is always rounded up to match the file system's physical block size. (0 means 'No limitation'.)
description
Data type: Optional[String[0,255]]
File system description. (255 UTF-8 characters).
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
expiration_timestamp
Data type: Optional[String]
Time when the snapshot will expire. Use 1970-01-01T00:00:00.000Z to set expiration timestamp to null.
filesystem_type
Data type: Optional[Enum['Primary','Snapshot']]
- Primary - Normal file system or clone.
- Snapshot - Snapshot of a file system.
filesystem_type_l10n
Data type: Optional[String]
Localized message string corresponding to filesystem_type
folder_rename_policy
Data type: Optional[Enum['All_Allowed','SMB_Forbidden','All_Forbidden']]
File system folder rename policies for a file system with multiprotocol access enabled. These policies control whether the directory can be renamed from NFS or SMB clients when at least one file is opened in the directory, or in one of its child directories. Values are:
- All_Allowed - All protocols are allowed to rename directories without any restrictions.
- SMB_Forbidden - A directory rename from the SMB protocol will be denied if at least one file is opened in the directory or in one of its child directories.
- All_Forbidden - Any directory rename request will be denied, regardless of the protocol used, if at least one file is opened in the directory or in one of its child directories.
folder_rename_policy_l10n
Data type: Optional[String]
Localized message string corresponding to folder_rename_policy
grace_period
Data type: Optional[Integer[-1,2147483647]]
Grace period of soft limits (seconds):
- -1 (default) - Infinite grace period (Windows policy).
- 0 - Use the system default of 1 week.
- Positive value - Grace period, in seconds, after which the soft limit is treated as a hard limit.
id
Data type: Optional[String]
File system id.
is_async_m_time_enabled
Data type: Optional[Boolean]
Indicates whether asynchronous MTIME is enabled on the file system or protocol snaps that are mounted writeable. Values are:
- true - Asynchronous MTIME is enabled on the file system.
- false - Asynchronous MTIME is disabled on the file system.
is_modified
Data type: Optional[Boolean]
Indicates whether the snapshot may have changed since it was created. Values are:
- true - Snapshot is or was shared with read/write access.
- false - Snapshot was never shared.
is_quota_enabled
Data type: Optional[Boolean]
Indicates whether quota is enabled. Quotas are not supported for read-only file systems. The default value for the grace period is infinite (-1), to match the Windows quota policy. Values are:
- true - Start tracking usage for all users on a file system or a quota tree, and enforce user quota limits.
- false - Stop tracking usage for all users on a file system or a quota tree, and do not enforce user quota limits.
is_smb_no_notify_enabled
Data type: Optional[Boolean]
Indicates whether notifications of changes to directory file structure are enabled. Values are:
- true - Change directory notifications are enabled.
- false - Change directory notifications are disabled.
is_smb_notify_on_access_enabled
Data type: Optional[Boolean]
Indicates whether file access notifications are enabled on the file system. Values are:
- true - File access notifications are enabled on the file system.
- false - File access notifications are disabled on the file system.
is_smb_notify_on_write_enabled
Data type: Optional[Boolean]
Indicates whether file write notifications are enabled on the file system. Values are:
- true - File write notifications are enabled on the file system.
- false - File write notifications are disabled on the file system.
is_smb_op_locks_enabled
Data type: Optional[Boolean]
Indicates whether opportunistic file locking is enabled on the file system. Values are:
- true - Opportunistic file locking is enabled on the file system.
- false - Opportunistic file locking is disabled on the file system.
is_smb_sync_writes_enabled
Data type: Optional[Boolean]
Indicates whether the synchronous writes option is enabled on the file system. Values are:
- true - Synchronous writes option is enabled on the file system.
- false - Synchronous writes option is disabled on the file system.
last_refresh_timestamp
Data type: Optional[String]
Time, in seconds, when the snapshot was last refreshed.
last_writable_timestamp
Data type: Optional[String]
If not currently mounted but previously mounted, the time (in seconds) of the last mount. If never mounted, the value is zero.
locking_policy
Data type: Optional[Enum['Advisory','Mandatory']]
File system locking policies. These policy choices control whether the NFSv4 range locks are honored. Because NFSv3 is advisory by design, this policy specifies that the NFSv4 locking feature behaves like NFSv3 (advisory mode), for backward compatibility with applications expecting an advisory locking scheme. Values are:
- Advisory - No lock checking for NFS; honor SMB lock range only for SMB.
- Mandatory - Honor SMB and NFS lock range.
locking_policy_l10n
Data type: Optional[String]
Localized message string corresponding to locking_policy
nas_server_id
Data type: Optional[String]
Id of the NAS Server on which the file system is mounted.
parent_id
Data type: Optional[String]
The object id of the parent of this file system (applies only to clones and snapshots). If the parent of a clone has been deleted, the value is null.
protection_policy_id
Data type: Optional[String]
Id of the protection policy applied to the file system.
size_total
Data type: Optional[Integer[3221225472,281474976710656]]
Size that the file system presents to the host or end user. (Bytes)
size_used
Data type: Optional[Integer[0,9223372036854775807]]
Size used, in bytes, for the data and metadata of the file system.
smb_notify_on_change_dir_depth
Data type: Optional[Integer[1,512]]
Lowest directory level to which the enabled notifications apply, if any.
Parameters
The following parameters are available in the powerstore_file_system
type.
name
namevar
Data type: String[1,255]
Name of the file system. (255 UTF-8 characters).
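A minimal manifest sketch that declares a file system by name (the namevar). The NAS server id is a placeholder, and the size and description are illustrative; note that size_total is expressed in bytes and has a 3 GiB minimum per the type's constraint:

```puppet
powerstore_file_system { 'example_fs':
  ensure        => present,
  nas_server_id => 'example-nas-server-id',
  size_total    => 10737418240, # 10 GiB, in bytes
  description   => 'Example file system',
  access_policy => 'Native',
}
```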
powerstore_file_tree_quota
Tree quota settings in the storage system. A tree quota instance represents a quota limit applied to a specific directory tree in a file system.
Properties
The following properties are available in the powerstore_file_tree_quota
type.
description
Data type: Optional[String]
Description of the tree quota.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
file_system_id
Data type: Optional[String]
Unique identifier of the associated file system.
hard_limit
Data type: Optional[Integer[0,9223372036854775807]]
Hard limit of the tree quota, in bytes. No hard limit when set to 0. This value can be used to compute the amount of space that is consumed without limiting the space. The value is always rounded up to match the physical block size of the file system.
is_user_quotas_enforced
Data type: Optional[Boolean]
Whether the quota must be enabled for all users, and whether user quota limits, if any, are enforced. Values are:
- true - Start tracking usage for all users on the quota tree, and enforce user quota limits.
- false - Stop tracking usage for all users on the quota tree, and do not enforce user quota limits.
path
Data type: Optional[String]
Path relative to the root of the associated filesystem.
remaining_grace_period
Data type: Optional[Integer[0,9223372036854775807]]
Remaining grace period, in seconds, after the soft limit is exceeded:
- 0 - Grace period has already expired.
- -1 - No grace period in progress, or an infinite grace period is set.
The grace period of user quotas is set in the file system quota config.
size_used
Data type: Optional[Integer[0,9223372036854775807]]
Size already used on the tree quota, in bytes.
soft_limit
Data type: Optional[Integer[0,9223372036854775807]]
Soft limit of the tree quota, in bytes. No soft limit when set to 0. The value is always rounded up to match the physical block size of the file system.
state
Data type: Optional[Enum['Ok','Soft_Exceeded','Soft_Exceeded_And_Expired','Hard_Reached']]
State of the user quota or tree quota record. Values are:
- Ok - No quota limits are exceeded.
- Soft_Exceeded - Soft limit is exceeded, and the grace period has not expired.
- Soft_Exceeded_And_Expired - Soft limit is exceeded, and the grace period has expired.
- Hard_Reached - Hard limit is reached.
state_l10n
Data type: Optional[String]
Localized message string corresponding to state
Parameters
The following parameters are available in the powerstore_file_tree_quota
type.
id
namevar
Data type: String
Unique identifier of the tree quota.
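A minimal manifest sketch for this type. The title, file system id, and path are placeholders; limits are in bytes, and the soft limit must not exceed the hard limit:

```puppet
powerstore_file_tree_quota { 'example-tree-quota-id':
  ensure         => present,
  file_system_id => 'example-file-system-id',
  path           => '/projects/team_a',
  soft_limit     => 4294967296, # 4 GiB, in bytes
  hard_limit     => 5368709120, # 5 GiB, in bytes
}
```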
powerstore_file_virus_checker
Use these resource types to manage the virus checker service of a NAS server. A virus checker instance is created each time the anti-virus service is enabled on a NAS server. A configuration file (named viruschecker.conf) needs to be uploaded before enabling the anti-virus service. The cluster supports third-party anti-virus servers that perform virus scans and report back to the storage system. For example, when an SMB client creates, moves, or modifies a file, the NAS server invokes the anti-virus server to scan the file for known viruses. During the scan, any access to this file is blocked. If the file does not contain a virus, it is written to the file system. If the file is infected, corrective action (fixed, removed, or placed in quarantine) is taken as defined by the anti-virus server. You can optionally set up the service to scan a file on read access, based on when the file was last accessed compared to the date of the last third-party anti-virus update.
Properties
The following properties are available in the powerstore_file_virus_checker
type.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
is_config_file_uploaded
Data type: Optional[Boolean]
Indicates whether a virus checker configuration file has been uploaded.
is_enabled
Data type: Optional[Boolean]
Indicates whether the anti-virus service is enabled on this NAS server. Values are:
- true - Anti-virus service is enabled. Each file created or modified by an SMB client is scanned by the third-party anti-virus servers. If a virus is detected, access to the file system is denied. If the third-party anti-virus servers are not available, access to the file systems is denied according to the policy, to prevent potential virus propagation.
- false - Anti-virus service is disabled. File systems of the NAS servers are available for access without virus checking.
nas_server_id
Data type: Optional[String]
Unique identifier of an associated NAS Server instance that uses this virus checker configuration. Only one virus checker configuration per NAS Server is supported.
Parameters
The following parameters are available in the powerstore_file_virus_checker
type.
id
namevar
Data type: String
Unique identifier of the virus checker instance.
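A minimal sketch of enabling the service, assuming the property names listed above; the `id` and `nas_server_id` values are hypothetical placeholders, and a viruschecker.conf file is assumed to have been uploaded beforehand:

```puppet
# Hypothetical example: enable the anti-virus service on a NAS server.
# Both identifiers below are illustrative placeholders.
powerstore_file_virus_checker { '5f4a2b00-0000-0000-0000-000000000000':
  ensure        => present,
  is_enabled    => true,
  nas_server_id => '6568282e-0000-0000-0000-000000000000',
}
```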
powerstore_host
Manage hosts that access the cluster.
Properties
The following properties are available in the powerstore_host
type.
add_initiators
Data type: Optional[Array[Struct[{Optional[chap_mutual_password] => Optional[String[12,64]], Optional[chap_mutual_username] => Optional[String[1,64]], Optional[chap_single_password] => Optional[String[12,64]], Optional[chap_single_username] => Optional[String[1,64]], port_name => Optional[String], port_type => Optional[Enum['iSCSI','FC']], }]]]
The list of initiators to be added. CHAP username and password are optional.
description
Data type: Optional[String[0,256]]
An optional description for the host. The description should not be more than 256 UTF-8 characters long and should not have any unprintable characters.
ensure
Data type: Enum['present', 'absent']
Whether this resource should be present or absent on the target system.
Default value: present
host_group_id
Data type: Optional[String]
Associated host group, if the host is part of a host group.
host_initiators
Data type: Optional[Array[Struct[{Optional[active_sessions] => Optional[Array[Struct[{Optional[appliance_id] => Optional[String], Optional[bond_id] => Optional[String], Optional[eth_port_id] => Optional[String], Optional[fc_port_id] => Optional[String], Optional[node_id] => Optional[String], Optional[port_name] => Optional[String], Optional[veth_id] => Optional[String], }]]], Optional[chap_mutual_username] => Optional[String[1,64]], Optional[chap_single_username] => Optional[String[1,64]], Optional[port_name] => Optional[String], Optional[port_type] => Optional[Enum['iSCSI','FC']], Optional[port_type_l10n] => Optional[String], }]]]
Filtering on the fields of this embedded resource is not supported.
id
Data type: Optional[String]
Unique id of the host.
initiators
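A sketch of registering a host under stated assumptions: the attribute names come from the property list above, but the resource title, description, and initiator values are hypothetical, and the WWN format shown is illustrative rather than confirmed for this platform:

```puppet
# Hypothetical example: register an FC host with one initiator.
# The title, description, and port_name are illustrative values.
powerstore_host { 'esx-host-01':
  ensure         => present,
  description    => 'ESXi host in rack 4',
  add_initiators => [{
    'port_name' => '0x58ccf09048400023', # hypothetical FC WWN
    'port_type' => 'FC',
  }],
}
```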
What are tasks?
Modules can contain tasks that take action outside of a desired state managed by Puppet. Tasks are perfect for troubleshooting, deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Tasks in this module release
What are plans?
Modules can contain plans that take action outside of a desired state managed by Puppet. Plans are perfect for troubleshooting, deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Changelog
All notable changes to this project will be documented in this file.
Release 0.8.1
Features
- Added contact information to README.md.
- Module now sends the "Application-Type" header.
Bugfixes
- Fixed the missing type descriptions in REFERENCE.md.
Release 0.8.0
Features
Initial release.
Bugfixes
Known Issues
Dependencies
- puppet/format (>=0.1.1 < 2.0.0)
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.