Introduction
An Ansible module is a way of extending the existing capabilities and functionality of Ansible. Whilst there is already a huge amount of content available to us with the core capabilities and extended collections, we might want to add a module to provide a capability that doesn’t already exist, or that we could only make available with a long and complex role.
An Ansible module should be focused on delivering coverage of a single thing – in the example below, Db2 DDF information. A single module covering everything that we might want to do with Db2, MQ and CICS would be a really bad idea – it would probably be (a) enormous, (b) an everlasting development challenge and (c) a maintenance nightmare! As the Ansible doc says: “follow the UNIX philosophy of doing one thing well.”
Where the scope / ambition of the additional capabilities is larger than a single thing, consider creating a collection instead. More on these in the next blog.
The body of this blog will cover the anatomy of a module and an example, as well as how to validate it and how to use it.
Some Documentation
The module creation process is fairly simple and documented on the Ansible website here:
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html
There is also a section on getting the structure and documentation correct, here:
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_documenting.html
It’s important to get all of this right to ensure that the module is self-documenting and to the standard expected by the community, as well as getting the functional code correct.
The Anatomy of a Module
We’re going to focus on Python based modules – which make up the vast majority of Ansible modules. Modules can be written in any language that will run on the intended platform, although interpreted languages will inevitably reach more targets than compiled ones.
A Python Ansible module should begin with the Python shebang – this is the pseudo comment that tells Linux which interpreter to use to process the rest of the file.
The shebang should be followed by a comment clarifying the file encoding – e.g.
#!/usr/bin/python
# -*- coding: utf-8 -*-
Then the copyright comment, which should be the short form – e.g.
# Copyright: Triton Consulting
# GNU General Public License v3.0+ (see COPYING or
# https://www.gnu.org/licenses/gpl-3.0.txt)
and not the whole GPL license statement.
If you’re going to supply the __future__ import, this needs to come next:
from __future__ import (absolute_import, division, print_function)
This has to go before the first code items (which are the documentation assignments – below), but note that no other imports should be here – they should come after the doc.
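Putting the header elements together, the top of the file ends up looking like this (a minimal sketch – note that the Ansible developer guide also shows a __metaclass__ = type line after the __future__ import, which the example module below omits):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Triton Consulting
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = r'''
:
'''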
This is then followed by the documentation of the feature being implemented, one or more examples of use and what the returned data should look like. These are all required and are processed by ansible-doc to generate the module documentation. The specific documentation sections / variables are:
- DOCUMENTATION – this gives the name of the module, what it does, what parameters it takes and what requirements / dependencies it may have.
- EXAMPLES – this shows one or more examples of how to use the module in a play.
- RETURN – what form the returned dictionary takes
Have a look at the documentation link above, and the example module (db2_get_ddf.py), below.
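To give a feel for the shape of these blocks, here is a generic skeleton (the module and parameter names are placeholders for illustration, not the real db2_get_ddf documentation, which appears in the example module below):

DOCUMENTATION = r'''
---
module: my_module
short_description: One line summary of what the module does
description:
  - A longer description of what the module does and how it does it.
version_added: "0.0.1"
options:
  my_param:
    description: What this parameter controls.
    type: str
    required: true
requirements:
  - Any software that the module depends on.
author:
  - Your Name (@your_github_handle)
'''

EXAMPLES = r'''
- my_module:
    my_param: some_value
  register: result
'''

RETURN = r'''
my_param:
  description: The original parameter that was passed in.
  type: str
  returned: always
  sample: 'some_value'
'''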
The documentation is then followed by any Python imports and then the implementation code.
An Example – Gathering Db2 for z/OS DDF Configuration Information
The example below started off as a set of tasks in a role but rapidly became quite complicated. This module allows us to simplify the task to a single activity and make better use of available resources. It works by using the ZOA Utilities (ZOAU) mvscmd to run the TSO command processor (IKJEFT01), which in turn runs the Db2 for z/OS DSN command processor to execute the “-DIS DDF” command. We then parse the results into fields in the “ddf” dictionary and return them to the caller.
Fiddly bits with doing this were:
- No real documented examples of how to build the concatenated STEPLIB allocation. The Db2 command processor needs both the Db2 exit library (SDSNEXIT) and the Db2 load library (SDSNLOAD) to function. We ended up looking at some of the ibm_zos_core modules on GitHub to get a clue! (Please feel free to use this code as an example – the relevant pattern is pulled out just after this list.)
- There doesn’t seem to be an easy way to pass a set of commands to TSO except by putting them all in a temporary dataset. There’s probably an “opportunity” for someone to add something to ibm_zos_core…
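For reference, the STEPLIB concatenation pattern that we ended up with is pulled out of the full module below – the dataset names here are only illustrative, as the module actually takes them from the ssid_DB2LIBS environment variable:

from zoautil_py.types import DDStatement, DatasetDefinition

# one DatasetDefinition per library in the concatenation
steplib = [DatasetDefinition("DSND10.DBDG.SDSNEXIT"),
           DatasetDefinition("DSND10.SDSNLOAD")]

# a single DDStatement built from the list gives a concatenated STEPLIB DD
ddstmt = [DDStatement("steplib", steplib)]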
All of the code referenced in this blog including the complete module code is available on GitHub here:
https://github.com/tritonconsulting/ansible-db2_get_ddf-module
The following is snipped a bit to keep the blog manageable. Things to note are:
- The “result” dictionary initialisation. We return data in the “ddf” member (itself also a dictionary)
- The “module = AnsibleModule(...)” call, which defines the parameters that the module will accept. The module object is then used to access the supplied parameters. Be careful to ensure that the DOCUMENTATION aligns with this.
- If the module is successful, we return information in “result” by calling the module.exit_json() method
- If the module is not successful and we want to flag a failure, we can still return “result” but with module.fail_json()
And the rest, as they say, is Python:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Triton Consulting
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
DOCUMENTATION = r'''
---
module: db2_get_ddf
short_description: Get the Db2 for z/OS DDF configuration
description:
:
'''
EXAMPLES = r'''
- db2_get_ddf:
    db2ssid: DBDG
  register: ddf

- debug:
    msg: "TCPPORT = {{ ddf.ddf.tcpport }}"
'''
RETURN = r'''
db2ssid:
  description: The original db2ssid param that was passed in.
  type: str
  returned: always
  sample: 'DBDG'
ddf:
  :
'''
from zoautil_py import mvscmd, datasets
from zoautil_py.types import DDStatement, DatasetDefinition, FileDefinition
import uuid
from os import environ, remove
from ansible.module_utils.basic import AnsibleModule
def run_module():
    # init the result dict
    result = dict(
        changed=False,
        db2ssid='',
        ddf={}
    )

    # define the module argument(s)
    module = AnsibleModule(
        argument_spec=dict(
            db2ssid=dict(type='str', aliases=['ssid', 'db2'], required=True)
        )
    )
    params = module.params
    db2ssid = params['db2ssid']
    result['db2ssid'] = db2ssid
    # setup DDs ahead of calling IKJEFT01.
    ddstmt = []

    # Db2 load libraries named in the ssid_DB2LIBS environment variable
    db2libs = environ.get("%s_DB2LIBS" % (db2ssid))
    if db2libs is None:
        db2libs = "DSND10.%s.SDSNEXIT:DSND10.SDSNLOAD" % (db2ssid)
    dl_ents = db2libs.split(':')
    steplib = []
    for dlds in dl_ents:
        steplib.append(DatasetDefinition(dlds))
    ddstmt.append(DDStatement("steplib", steplib))
    # create SYSTSIN TSO cmd file. NB uuid used to make the file unique
    cmdfile = "/tmp/db2_get_ddf_cmd_%s.txt" % (uuid.uuid4())
    with open(cmdfile, mode="w", encoding="cp1047") as ip:
        ip.write("DSN S(%s)\n" % (db2ssid))
        ip.write(" -DIS DDF\n")
        ip.write("END\n")
    systsin = FileDefinition(cmdfile)
    ddstmt.append(DDStatement("systsin", systsin))

    # make SYSTSPRT a SYSOUT=* allocation
    ddstmt.append(DDStatement("systsprt", "*"))
    # we have to call IKJEFT01 authorised
    rsp = mvscmd.execute_authorized(pgm="IKJEFT01", dds=ddstmt)

    # tidy up the SYSTSIN file
    try:
        remove(cmdfile)
    except OSError:
        pass
    # parse the response
    r = rsp
    out = []
    ddf = {}
    alias = {}
    capt = False
    if (r.rc == 0):
        for line in r.stdout_response.splitlines():
            # lose the print control character
            tline = line[1:].rstrip()
            if (len(tline.split()) > 1):
                msgid, msg = tline.split(' ', 1)
                words = msg.split()
                if (msgid == "DSNL080I"):
                    capt = True
                if (capt):
                    out.append(tline)
                    match msgid:
                        case "DSNL081I":
                            # STATUS=status
                            ddf['status'] = words[0].split('=', 1)[1]
                        case "DSNL083I":
                            # location-name luname genericlu
                            ddf['location'] = words[0]
                            ddf['luname'] = words[1]
                            ddf['genericlu'] = words[2]
                        :
                        case "DSNL099I":
                            capt = False
        if (len(alias) > 0):
            ddf['aliases'] = alias
        ddf['out'] = out
    else:
        out.append("** rc = %d **" % (r.rc))
        out.append("stderr:")
        for line in r.stderr_response.splitlines():
            out.append(line.rstrip()[1:])
        out.append("stdout:")
        for line in r.stdout_response.splitlines():
            out.append(line.rstrip()[1:])
        ddf['out'] = out
        result['ddf'] = ddf
        module.fail_json(msg='Error processing DSN command request', **result)
    result['ddf'] = ddf
    result['changed'] = False

    # exit the module and return the json result
    module.exit_json(**result)


def main():
    run_module()


if __name__ == '__main__':
    main()
Testing The Module
There are two parts to this, really:
- Testing the production of documentation that you’ve created
- Testing the functionality of the module
In the following examples, we’ll assume that we’ve kept the same directory structure as in the GitHub repository – i.e.
- Playbook / role home ($HOME/ansible, below)
- chk_ddf.yml = playbook
- zpdt.yml = inventory
- host_vars – host vars directory
- zpdt.yml = host vars for host named “zpdt” in the inventory
- library – our module directory
- db2_get_ddf.py = our module (as above)
Working in the Playbook / role home directory, we can run a test generation of the documentation from the module like this:
ansible-doc -t module db2_get_ddf -M $HOME/ansible/library
This generates the three document sections and pipes them to the “more” command:
> DB2_GET_DDF    (/home/james/ansible/library/db2_get_ddf.py)

        Get the Db2 for z/OS DDF configuration details from TSO DSN -DIS DDF and return the
        results as JSON. This uses the ZOA Utilities to execute the IKJEFT01 TSO command
        processor, and then runs the Db2 DSN command from there. The Db2 load libraries are
        named in the ssid_DB2LIBS environment variable as colon separated MVS dataset names.
        The data is returned as a JSON / dictionary including the command output.

ADDED IN: version 0.0.1

OPTIONS (= is mandatory):

= db2ssid
        The Db2 subsystem ID to issue the command to.
        The subsystem must be local to the current host.
        aliases: [ssid, db2]
        type: str

REQUIREMENTS: IBM ZOA Utilities v1.2.5.0 and up, IBM Open Enterprise SDK for Python 3.11.5 and up, IBM Db2 for z/OS V12.1 and up

AUTHOR: James Gill (@db2dinosaur)

EXAMPLES:

- db2_get_ddf:
    db2ssid: DBDG
  register: ddf
:
To check the function, we will need to run the code. In our case, we will need to set up some environment variables (to tell the module what the Db2 load libraries are) and create a test play to call the module.
We used Ansible host_vars to set up a variable called “environment_vars” for this specific host. This variable is used in the test play (see below). As well as our Db2 load libraries (DBDG_DB2LIBS for subsystem DBDG), we also set the other environment variables used by the Python interpreter and ZOAU tooling on our z/OS host. This is in the GitHub repo as “host_vars/zpdt.yml”:
# Environment vars for zpdt
PYZ: "/usr/lpp/IBM/cyp/v3r11/pyz"
ZOAU: "/usr/lpp/IBM/zoautil"
JAVA: "/usr/lpp/java/J11.0_64"
# environment_vars
environment_vars:
  LANG: "C"
  _BPXK_AUTOCVT: "ON"
  ZOAU_HOME: "{{ ZOAU }}"
  PYTHONHOME: "{{ PYZ }}"
  JAVA_HOME: "{{ JAVA }}"
  PYTHONPATH: "{{ ZOAU }}/lib"
  LIBPATH: "{{ PYZ }}/lib:{{ ZOAU }}/lib:{{ JAVA }}/lib:/lib:/usr/lib"
  PATH: "{{ ZOAU }}/bin:{{ PYZ }}/bin:{{ JAVA }}/bin:/bin:/usr/bin"
  MANPATH: "{{ ZOAU }}/docs/%L"
  DBDG_DB2LIBS: "DSND10.DBDG.SDSNEXIT:DSND10.SDSNLOAD"
The test playbook looks like this (“chk_ddf.yml” in the repo):
---
#
# Get DDF Configuration
#
- name: "Get DDF Ports for DBDG"
hosts: zpdt
gather_facts: false
environment: "{{ environment_vars }}"
tasks:
- name: "Playbook start timestamp"
debug:
msg: "{{ lookup('pipe','date') }}"
- name: "Run -DIS DDF"
db2_get_ddf:
db2ssid: DBDG
register: ddf
- name: "What did we get?"
debug:
msg: "{{ ddf | to_nice_json }}"
- name: "Playbook end timestamp"
debug:
msg: "{{ lookup('pipe','date') }}"
Note that the environment is set from the host_vars configured “environment_vars”.
Finally, to run this, use the shell script “chk_ddf.sh” in the repo, which looks like this:
#!/bin/bash
#
# Check the db2_get_ddf module
#
# Make sure that we have the library hooked:
ANSIBLE_LIBRARY=./library ansible-playbook -i zpdt.yml chk_ddf.yml
The ANSIBLE_LIBRARY var tells Ansible where to look for the module.
What If I’ve Got Lots of Modules?
Then, my friend, you need a collection, which is what we will be discussing in the next Ansible blog.