Task 1: Ansible Installation and Configuration
Install ansible package on the control node (including any dependencies) and configure the following:
- Create a regular user automation with the password of devops. Use this user for all sample exam tasks and playbooks, unless you are working on task #2, which requires creating the automation user on inventory hosts. You have root access to all five servers.
- All playbooks and other Ansible configuration that you create for this sample exam should be stored in /home/automation/plays.
Create a configuration file /home/automation/plays/ansible.cfg to meet the following requirements:
- The roles path should include /home/automation/plays/roles, as well as any other path that may be required for the course of the sample exam.
- The inventory file path is /home/automation/plays/inventory.
- Privilege escalation is disabled by default.
- Ansible should be able to manage 10 hosts at a single time.
- Ansible should connect to all managed nodes using the automation user.
Create an inventory file /home/automation/plays/inventory with the following:
- ansible2.hl.local is a member of the proxy host group.
- ansible3.hl.local is a member of the webservers host group.
- ansible4.hl.local is a member of the webservers host group.
- ansible5.hl.local is a member of the database host group.
Creating the /etc/hosts file:
[root@ansible-control ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.30.9.60 ansible-control ansible-control.hl.local
172.30.9.61 ansible2 ansible2.hl.local
172.30.9.62 ansible3 ansible3.hl.local
172.30.9.63 ansible4 ansible4.hl.local
172.30.9.64 ansible5 ansible5.hl.local
As root, generate an SSH key and copy it to the managed hosts:
[root@ansible-control ~]# ssh-keygen
[root@ansible-control ~]# ssh-copy-id ansible2
[root@ansible-control ~]# ssh-copy-id ansible3
[root@ansible-control ~]# ssh-copy-id ansible4
[root@ansible-control ~]# ssh-copy-id ansible5
Let’s check if we can connect to the remote hosts as root without a password:
[root@ansible-control ~]# ssh ansible2
ssh ansible3
ssh ansible4
ssh ansible5
Installing ansible:
yum install -y ansible
Adding automation user:
adduser automation
passwd automation
su - automation
Making directories and the ansible.cfg file:
[automation@ansible-control]$ mkdir plays
[automation@ansible-control]$ cd plays
[automation@ansible-control plays]$ cat ansible.cfg
[defaults]
roles_path = /home/automation/plays/roles
inventory = /home/automation/plays/inventory
forks = 10
#remote_user = automation
log_path = /home/automation/ansible.log
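The config above leaves remote_user commented out and does not mention privilege escalation, relying on Ansible's defaults. To spell out the remaining task requirements (connect as the automation user, privilege escalation disabled by default), settings like the following could be added to ansible.cfg — a sketch, since become is already off by default:

[defaults]
remote_user = automation

[privilege_escalation]
become = False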
Creating the inventory:
[automation@ansible-control plays]$ mkdir inventory
[automation@ansible-control plays]$ cd inventory
[automation@ansible-control inventory]$ vim hosts
[automation@ansible-control inventory]$ cat hosts
[proxy]
ansible2.hl.local

[webservers]
ansible[3:4].hl.local

[database]
ansible5.hl.local
Task 2: Ad-Hoc Commands
Generate an SSH keypair on the control node. You can perform this step manually.
Write a script /home/automation/plays/adhoc that uses Ansible ad-hoc commands to achieve the following:
- User automation is created on all inventory hosts (not the control node).
- SSH key (that you generated) is copied to all inventory hosts for the automation user and stored in /home/automation/.ssh/authorized_keys.
- The automation user is allowed to elevate privileges on all inventory hosts without having to provide a password.
After running the adhoc script, you should be able to SSH into all inventory hosts using the automation user without a password, as well as run all privileged commands.
[automation@ansible-control plays]$ cat ./adhoc
#!/bin/bash
/usr/bin/ansible proxy,webservers,database -b -m user -a "name=automation"
/usr/bin/ansible proxy,webservers,database -b -m file -a "path=/home/automation/.ssh state=directory owner=automation"
/usr/bin/ansible proxy,webservers,database -b -m copy -a "src=/home/automation/.ssh/id_rsa.pub dest=/home/automation/.ssh/authorized_keys directory_mode=yes"
/usr/bin/ansible proxy,webservers,database -b -m lineinfile -a "path=/etc/sudoers state=present line='automation ALL=(ALL) NOPASSWD: ALL'"
Let’s check if we can connect to the remote hosts without a password:
[automation@ansible-control plays]$ ssh ansible2
ssh ansible3
ssh ansible4
ssh ansible5
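To also confirm that passwordless privilege escalation works, a quick ad-hoc check could be run as the automation user (each host should report root):

[automation@ansible-control plays]$ ansible proxy,webservers,database -b -a 'whoami'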
Task 3: File Content
Create a playbook /home/automation/plays/motd.yml that runs on all inventory hosts and does the following:
- The playbook should replace any existing content of /etc/motd with text. Text depends on the host group.
- On hosts in the proxy host group the line should be “Welcome to HAProxy server”.
- On hosts in the webservers host group the line should be “Welcome to Apache server”.
- On hosts in the database host group the line should be “Welcome to MySQL server”.
A simple version of the playbook:
[automation@ansible-control plays]$ cat motd.yml
---
- hosts: all
  remote_user: automation
  # privilege escalation is needed to modify /etc/motd, since become is disabled by default
  become: yes
  tasks:
  - name: remove motd file
    file:
      path: /etc/motd
      state: absent

  - name: Put the HAProxy text to motd file
    lineinfile:
      path: /etc/motd
      line: 'Welcome to HAProxy server'
      create: yes
    when: inventory_hostname in groups['proxy']

  - name: Put the Apache text to motd file
    lineinfile:
      path: /etc/motd
      line: 'Welcome to Apache server'
      create: yes
    when: inventory_hostname in groups['webservers']

  - name: Put the MySQL text to motd file
    lineinfile:
      path: /etc/motd
      line: 'Welcome to MySQL server'
      create: yes
    when: inventory_hostname in groups['database']
A playbook which uses group_vars:
[automation@ansible-control plays]$ mkdir group_vars
[automation@ansible-control plays]$ cd group_vars
[automation@ansible-control group_vars]$ ls -l
total 12
-rw-rw-r--. 1 automation automation 16 08-27 23:23 database
-rw-rw-r--. 1 automation automation 18 08-27 23:23 proxy
-rw-rw-r--. 1 automation automation 17 08-27 23:24 webservers
[automation@ansible-control group_vars]$ cat database
---
motd: MySQL
[automation@ansible-control group_vars]$ cat proxy
---
motd: HAProxy
[automation@ansible-control group_vars]$ cat webservers
---
motd: Apache

[automation@ansible-control plays]$ cat motd1.yml
---
- hosts: all
  #remote_user: automation
  become: yes
  gather_facts: no
  tasks:
  - name: remove motd file
    file:
      path: /etc/motd
      state: absent

  - name: Put the group-specific text to motd file
    lineinfile:
      path: /etc/motd
      line: "Welcome to {{ motd }} server"
      create: yes

  - name: print motd
    debug:
      msg: "Welcome to {{ motd }} server"
Task 4: Configure SSH Server
Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and configures the SSHD daemon as follows:
- banner is set to /etc/motd
- X11Forwarding is disabled
- MaxAuthTries is set to 3
[automation@ansible-control plays]$ cat sshd.yml
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
  - name: X11Forwarding
    replace:
      path: /etc/ssh/sshd_config
      regexp: '^X11Forwarding yes'
      replace: 'X11Forwarding no'

  - name: MaxAuthTries
    replace:
      path: /etc/ssh/sshd_config
      regexp: '^#MaxAuthTries 6'
      replace: 'MaxAuthTries 3'

  - name: Banner
    replace:
      path: /etc/ssh/sshd_config
      regexp: '^#Banner none'
      replace: 'Banner /etc/motd'
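The playbook edits sshd_config but never reloads the service, so the new settings only take effect once sshd is restarted. A minimal sketch of a handler-based restart (the notify line would be added to each replace task, and the handlers block sits at the play level):

  - name: Banner
    replace:
      path: /etc/ssh/sshd_config
      regexp: '^#Banner none'
      replace: 'Banner /etc/motd'
    notify: restart sshd

  handlers:
  - name: restart sshd
    service:
      name: sshd
      state: restarted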
Task 5: Ansible Vault
Create Ansible vault file /home/automation/plays/secret.yml. Encryption/decryption password is devops.
Add the following variables to the vault:
- user_password with value of devops
- database_password with value of devops
Store the Ansible vault password in the file /home/automation/plays/vault_key.
[automation@ansible-control plays]$ echo devops > vault_key
[automation@ansible-control plays]$ ansible-vault create secret.yml
New Vault password:
Confirm New Vault password:
[automation@ansible-control plays]$ ansible-vault view secret.yml
Vault password:
---
user_password: devops
database_password: devops
[automation@ansible-control plays]$ cat secret.yml
$ANSIBLE_VAULT;1.1;AES256
37306230666638656564343830653439643962306439333030656231333838663364363632366230
6635363734636262646136666163346639313130386136360a333438666162383062316663623363
61666264663337653863396138666237326362336166376266633061376661366132633832303337
3766363635646463610a636564356235636535303031623335376333393135316231333562653566
30306364336632633939316330633964623734356463313638323361333632653138356562666539
34356433663933333263363835323132363336323866666264623930633939326137646463656464
336665363634333966393430303933373836
[automation@ansible-control plays]$ ansible-vault view --vault-password-file=vault_key secret.yml
---
user_password: devops
database_password: devops
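With the password stored in vault_key, later playbooks that load secret.yml (tasks #6 and #9) can be run without an interactive prompt, for example:

[automation@ansible-control plays]$ ansible-playbook users.yml --vault-password-file=vault_key

Alternatively, vault_password_file = /home/automation/plays/vault_key could be set in the [defaults] section of ansible.cfg so the key file is picked up automatically.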
Task 6: Users and Groups
You have been provided with the list of users below.
Use the /home/automation/plays/vars/user_list.yml file to save this content.
---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202
Create a playbook /home/automation/plays/users.yml that uses the vault file /home/automation/plays/secret.yml to achieve the following:
- Users whose user ID starts with 1 should be created on servers in the webservers host group. User password should be used from the user_password variable.
- Users whose user ID starts with 2 should be created on servers in the database host group. User password should be used from the user_password variable.
- All users should be members of a supplementary group wheel.
- Shell should be set to /bin/bash for all users.
- Account passwords should use the SHA512 hash format.
- Each user should have an SSH key uploaded (use the SSH key that you created previously, see task #2).
After running the playbook, users should be able to SSH into their respective servers without passwords.
[automation@ansible-control plays]$ cat users.yml
---
- hosts: all
  become: yes
  gather_facts: no
  vars_files:
    - vars/user_list.yml
    - secret.yml
  vars:
    hash: "{{ user_password | password_hash('sha512') }}"
  tasks:
  - name: print user_password
    debug:
      msg: "Password is {{ user_password }}, Hash is {{ hash }}"

  # 'groups' together with 'append: yes' makes wheel a supplementary group
  - name: Add the user "{{ item.username }}" with a specific uid and a supplementary group of 'wheel'
    user:
      name: "{{ item.username }}"
      password: "{{ user_password | password_hash('sha512') }}"
      uid: "{{ item.uid }}"
      groups: wheel
      shell: /bin/bash
      append: yes
    loop: "{{ users }}"
    when: ( item.uid < 2000 and inventory_hostname in groups['webservers'] ) or
          ( item.uid > 2000 and inventory_hostname in groups['database'] )

  - name: Set authorized key taken from file
    authorized_key:
      user: "{{ item.username }}"
      state: present
      key: "{{ lookup('file', '/home/automation/.ssh/id_rsa.pub') }}"
    loop: "{{ users }}"
    when: ( item.uid < 2000 and inventory_hostname in groups['webservers'] ) or
          ( item.uid > 2000 and inventory_hostname in groups['database'] )
Task 7: Scheduled Tasks
Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the proxy host group and does the following:
- A root crontab record is created that runs every hour.
- The cron job appends the file /var/log/time.log with the output from the date command.
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
  - name: Cron with date command as a job
    cron:
      name: "Append actual date to time.log"
      minute: "0"
      hour: "*"
      # If we want to remove the cron job we can use state: absent
      #state: absent
      job: "/bin/date >> /var/log/time.log"
    when: inventory_hostname in groups['proxy']
Let’s check if the playbook works:
[automation@ansible-control plays]$ ansible all -b -a "crontab -l"
ansible4.hl.local | CHANGED | rc=0 >>

ansible3.hl.local | CHANGED | rc=0 >>

ansible2.hl.local | CHANGED | rc=0 >>
#Ansible: Append actual date to time.log
0 * * * * /bin/date >> /var/log/time.log

ansible5.hl.local | CHANGED | rc=0 >>
or
---
- hosts: proxy
  become: yes
  gather_facts: no
  tasks:
  - name: Cron with date command as a job
    cron:
      name: "Append actual date to time.log"
      minute: "0"
      hour: "*"
      # If we want to remove the cron job we can use state: absent
      #state: absent
      job: "/bin/date >> /var/log/time.log"
Task 8: Software Repositories
Create a playbook /home/automation/plays/repository.yml that runs on servers in the database host group and does the following:
- A YUM repository file is created.
- The name of the repository is mysql56-community.
- The description of the repository is “MySQL 5.6 YUM Repo”.
- Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
- Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
- Repository GPG check is enabled.
- Repository is enabled.
[automation@ansible-control plays]$ cat repository.yml
---
- hosts: database
  become: yes
  gather_facts: no
  tasks:
  - name: YUM repository
    yum_repository:
      name: mysql56-community
      description: MySQL 5.6 YUM Repo
      baseurl: http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/
      gpgkey: http://repo.mysql.com/RPM-GPG-KEY-mysql
      gpgcheck: yes
      enabled: yes
Check if the playbook works:
[automation@ansible-control plays]$ ansible database -a "yum repolist"
[WARNING]: Consider using the yum module rather than running 'yum'. If you need to use command because yum is
insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to
get rid of this message.

ansible5.hl.local | CHANGED | rc=0 >>
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: centos1.hti.pl
 * extras: centos.slaskdatacenter.com
 * updates: centos.slaskdatacenter.com
repo id                    repo name                 status
base/7/x86_64              CentOS-7 - Base           10070
extras/7/x86_64            CentOS-7 - Extras           413
mysql56-community          MySQL 5.6 YUM Repo          547
updates/7/x86_64           CentOS-7 - Updates         1127
Task 9: Create and Work with Roles
Create a role called sample-mysql and store it in /home/automation/plays/roles. The role should satisfy the following requirements:
- A primary partition number 1 of size 800MB on device /dev/sdb is created.
- An LVM volume group called vg_database is created that uses the primary partition created above.
- An LVM logical volume called lv_mysql is created of size 512MB in the volume group vg_database.
- An XFS filesystem on the logical volume lv_mysql is created.
- Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
- mysql-community-server package is installed.
- Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
- MySQL root user password should be set from the variable database_password (see task #5).
- MySQL server should be started and enabled on boot.
- MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with the following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in the database host group.
[automation@ansible-control plays]$ mkdir roles
[automation@ansible-control plays]$ cd roles
[automation@ansible-control roles]$ ansible-galaxy init --offline sample-mysql
- sample-mysql was created successfully
[automation@ansible-control roles]$ ll
total 0
drwxrwxr-x. 10 automation automation  135 09-01 18:26 sample-mysql
[automation@ansible-control roles]$ cd sample-mysql
[automation@ansible-control sample-mysql]$ ll
total 4
drwxrwxr-x. 2 automation automation   22 09-01 18:26 defaults
drwxrwxr-x. 2 automation automation    6 09-01 18:26 files
drwxrwxr-x. 2 automation automation   22 09-01 18:26 handlers
drwxrwxr-x. 2 automation automation   22 09-01 18:26 meta
-rw-rw-r--. 1 automation automation 1328 09-01 18:26 README.md
drwxrwxr-x. 2 automation automation   22 09-01 18:26 tasks
drwxrwxr-x. 2 automation automation    6 09-01 18:26 templates
drwxrwxr-x. 2 automation automation   39 09-01 18:26 tests
drwxrwxr-x. 2 automation automation   22 09-01 18:26 vars
Tasks:
[automation@ansible-control plays]$ cat roles/sample-mysql/tasks/main.yml
---
# tasks file for sample-mysql
- name: Installing packages
  yum:
    name:
      - parted
      - python
      - MySQL-python
      - firewalld
      - mysql-community-server
    state: present
  tags: install

#- name: Create a new primary partition for LVM
#  parted:
#    device: /dev/sdb
#    number: 1
#    flags: [ lvm ]
#    state: present
#    part_end: 800MiB
#  register: device_info

# Maybe lvg is not needed, maybe we can use lvol instead to create both vg and lv
#- name: create logical volume group
#  lvg:
#    vg: vg_database
#    pvs: /dev/sdb1

#- name: create logical volume
#  lvol:
#    vg: vg_database
#    lv: lv_mysql
#    pvs: /dev/sdb1
#    size: 512m

#- name: Create an XFS filesystem on the logical volume
#  filesystem:
#    fstype: xfs
#    dev: lv_mysql

#- name: Mount file system
#  mount:
#    path: /mnt/mysql_backups
#    src: lv_mysql
#    fstype: xfs
#    state: present

- name: Starting services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - firewalld
    - mysql

- name: Open ports on firewall
  firewalld:
    port: 3306/tcp
    permanent: yes
    immediate: yes
    state: enabled

# Both login_password and login_user are required when you are passing credentials. If none are present,
# the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the
# MySQL default login of 'root' with no password.
- name: Adding root user to MySQL
  mysql_user:
    login_user: root
    login_password: "{{ database_password }}"
    name: root
    password: "{{ database_password }}"

- name: Adding template to my.cnf
  template:
    src: my.cnf.j2
    dest: /etc/my.cnf
  notify:
    - RestartMySql

# The handlers will run when forced even if there's a failure.
# However in this case, there is no failure, what's happening is there is no notification to that handler.
# Without a notification, handlers won't run even if force_handlers is set.
# We need changed_when: true to force handlers even if firewalld or mysql are already installed
#- name: install firewalld
#  yum:
#    name: firewalld
#    state: present
#  changed_when: true
#  notify:
#    - firewalld1
#    - firewalld2
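The partitioning and LVM tasks are left commented out above (the test environment apparently had no spare /dev/sdb). A sketch of what those tasks could look like on a system where the disk is present; paths and sizes follow the role requirements:

- name: Create a new primary partition for LVM
  parted:
    device: /dev/sdb
    number: 1
    flags: [ lvm ]
    state: present
    part_end: 800MiB

- name: Create volume group vg_database
  lvg:
    vg: vg_database
    pvs: /dev/sdb1

- name: Create logical volume lv_mysql
  lvol:
    vg: vg_database
    lv: lv_mysql
    size: 512m

- name: Create an XFS filesystem on lv_mysql
  filesystem:
    fstype: xfs
    dev: /dev/vg_database/lv_mysql

# state: mounted both adds the fstab entry and mounts the filesystem now
- name: Mount lv_mysql permanently on /mnt/mysql_backups
  mount:
    path: /mnt/mysql_backups
    src: /dev/vg_database/lv_mysql
    fstype: xfs
    state: mounted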
Handlers:
[automation@ansible-control plays]$ cat roles/sample-mysql/handlers/main.yml
---
# handlers file for sample-mysql
- name: RestartMySql
  service:
    name: mysql
    state: restarted
Templates:
[automation@ansible-control plays]$ cat roles/sample-mysql/templates/*
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Playbook:
[automation@ansible-control plays]$ cat mysql.yml
---
- hosts: database
  become: yes
  # The handlers will run when forced even if there's a failure.
  # However in this case, there is no failure, what's happening is there is no notification to that handler.
  # Without a notification, handlers won't run even if force_handlers is set.
  # We need changed_when: true
  force_handlers: true
  vars_files:
    - secret.yml
  roles:
    - sample-mysql
  tasks:
  - debug:
      msg: "{{ database_password }}"
Task 10: Create and Work with Roles (Some More)
Create a role called sample-apache and store it in /home/automation/plays/roles. The role should satisfy the following requirements:
- The httpd, mod_ssl and php packages are installed. Apache service is running and enabled on boot.
- Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS port TCP 443.
- Apache service should be restarted every time the file /var/www/html/index.html is modified.
- A Jinja2 template file index.html.j2 is used to create the file /var/www/html/index.html with the following content:
The address of the server is: IPV4ADDRESS
IPV4ADDRESS is the IP address of the managed node.
Create a playbook /home/automation/plays/apache.yml that uses the role and runs on hosts in the webservers host group.
Tasks:
[automation@ansible-control roles]$ ansible-galaxy init --offline sample-apache
[automation@ansible-control roles]$ cd sample-apache
[automation@ansible-control sample-apache]$ cat tasks/main.yml
---
# tasks file for sample-apache
- name: install packages
  yum:
    name:
      - httpd
      - mod_ssl
      - php
      - firewalld
    state: present

# the role must also make sure Apache is running and enabled on boot
- name: Start and enable httpd
  service:
    name: httpd
    enabled: yes
    state: started

- name: Start firewalld
  service:
    name: firewalld
    enabled: yes
    state: started

- name: firewall
  firewalld:
    service: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  loop:
    - http
    - https
  tags: firewall

- name: index.html
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
  notify:
    - restart_apache
Handlers:
[automation@ansible-control sample-apache]$ cat handlers/main.yml
---
# handlers file for sample-apache
- name: restart_apache
  service:
    name: httpd
    state: restarted
Templates:
[automation@ansible-control sample-apache]$ cat templates/index.html.j2
The address of the server is: {{ ansible_default_ipv4.address }}
Playbook:
[automation@ansible-control plays]$ cat apache.yml
---
- hosts: webservers
  become: yes
  roles:
    - sample-apache
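A quick way to verify the role from the control node (addresses taken from the /etc/hosts file in task #1; exact output may differ in your environment):

[automation@ansible-control plays]$ curl http://ansible3.hl.local/
The address of the server is: 172.30.9.62
[automation@ansible-control plays]$ curl http://ansible4.hl.local/
The address of the server is: 172.30.9.63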
Task 11: Download Roles From Ansible Galaxy and Use Them
Use Ansible Galaxy to download and install the geerlingguy.haproxy role in /home/automation/plays/roles.
Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy host group and does the following:
- Use geerlingguy.haproxy role to load balance request between hosts in the webservers host group.
- Use roundrobin load balancing method.
- HAProxy backend servers should be configured for HTTP only (port 80).
- Firewall is configured to allow all incoming traffic on port TCP 80.
If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from the web server (see task #10). Running the command again should return output from the other web server.
Installing the geerlingguy.haproxy role:
ansible-galaxy install geerlingguy.haproxy
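Because roles_path in ansible.cfg already points at /home/automation/plays/roles, the role lands in the required directory. Without that setting, the target path could be given explicitly:

ansible-galaxy install geerlingguy.haproxy -p /home/automation/plays/roles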
Playbook:
[automation@ansible-control plays]$ cat haproxy.yml
---
- hosts: proxy
  become: yes
  vars:
    haproxy_backend_balance_method: 'roundrobin'
    haproxy_backend_mode: 'http'
    haproxy_backend_servers:
      - name: app1
        address: ansible3.hl.local
      - name: app2
        address: ansible4.hl.local
  tasks:
    - firewalld:
        service: http
        permanent: yes
        immediate: yes
        state: enabled
  roles:
    - geerlingguy.haproxy
Task 12: Security
Create a playbook /home/automation/plays/selinux.yml that runs on hosts in the webservers host group and does the following:
- Uses the selinux RHEL system role.
- Enables httpd_can_network_connect SELinux boolean.
- The change must survive system reboot.
Installation of RHEL system roles:
[automation@ansible-control plays]$ sudo yum -y install rhel-system-roles
[automation@ansible-control plays]$ ansible-galaxy search selinux | grep roles
Found 414 roles matching your search:
avinetworks.network_interface     This roles enables users to configure various network components on target machines.
lerwys.lnls_ansible               Top [meta] ansible role containing all Sirius roles
linux-system-roles.certificate    Role for managing TLS/SSL certificate issuance and renewal
linux-system-roles.cockpit        Install and enable the Cockpit Web Console
linux-system-roles.selinux        Configure SELinux
oasis_roles.users_and_groups      Ansible role that manages groups and users
yabhinav.common                   Install a common configurations for RHEL/CentOS/Fedora and Debian/Ubuntu. Used by my other roles
[automation@ansible-control plays]$ ansible-galaxy install linux-system-roles.selinux
[automation@ansible-control plays]$ ansible-galaxy list
- sample-mysql, (unknown version)
- sample-apache, (unknown version)
- linux-system-roles.selinux, 1.1.0
A pre-installed example playbook using the selinux role:
[automation@ansible-control selinux]$ cat /usr/share/doc/rhel-system-roles-1.0/selinux/example-selinux-playbook.yml
---
- hosts: all
  become: true
  become_method: sudo
  become_user: root
  vars:
    selinux_policy: targeted
    selinux_state: enforcing
    selinux_booleans:
      - { name: 'samba_enable_home_dirs', state: 'on' }
      - { name: 'ssh_sysadm_login', state: 'on', persistent: 'yes' }
    selinux_fcontexts:
      - { target: '/tmp/test_dir(/.*)?', setype: 'user_home_dir_t', ftype: 'd' }
    selinux_restore_dirs:
      - /tmp/test_dir
    selinux_ports:
      - { ports: '22100', proto: 'tcp', setype: 'ssh_port_t', state: 'present' }
    selinux_logins:
      - { login: 'sar-user', seuser: 'staff_u', serange: 's0-s0:c0.c1023', state: 'present' }

  # prepare prerequisites which are used in this playbook
  tasks:
    - name: Creates directory
      file:
        path: /tmp/test_dir
        state: directory

    - name: Add a Linux System Roles SELinux User
      user:
        comment: Linux System Roles SELinux User
        name: sar-user

    - name: execute the role and catch errors
      block:
        - include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if failed for a different reason than selinux_reboot_required.
        - name: handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required

        - name: restart managed host
          shell: sleep 2 && shutdown -r now "Ansible updates triggered"
          async: 1
          poll: 0
          ignore_errors: true

        - name: wait for managed host to come back
          wait_for_connection:
            delay: 10
            timeout: 300

        - name: reapply the role
          include_role:
            name: rhel-system-roles.selinux
Create a playbook:
[automation@ansible-control plays]$ cat selinux.yml
---
- hosts: webservers
  become: yes
  vars:
    selinux_booleans:
      # persistent: 'yes' makes the boolean survive a reboot
      - { name: 'httpd_can_network_connect', state: 'on', persistent: 'yes' }

  # The selinux role requires libsemanage-python support; install it in
  # pre_tasks, because pre_tasks run before roles (plain tasks run after).
  pre_tasks:
    - name: install libsemanage-python
      yum:
        name: libsemanage-python
        state: present

  roles:
    - linux-system-roles.selinux
Task 13: Use Conditionals to Control Play Execution
Create a playbook /home/automation/plays/sysctl.yml that runs on all inventory hosts and does the following:
- If a server has more than 2048MB of RAM, then parameter vm.swappiness is set to 10.
- If a server has less than 2048MB of RAM, then the following error message is displayed: Server memory less than 2048MB
---
- hosts: all
  become: yes
  tasks:
  - name: Display error message on low-memory servers
    debug:
      msg: "Server {{ inventory_hostname }} memory has less than 2048MB"
    when: ansible_memtotal_mb < 2048

  - name: Set vm.swappiness to 10
    sysctl:
      name: vm.swappiness
      value: '10'
      state: present
    when: ansible_memtotal_mb > 2048
Task 14: Use Archiving
Create a playbook /home/automation/plays/archive.yml that runs on hosts in the database host group and does the following:
- A file /mnt/mysql_backups/database_list.txt is created that contains the following line: dev,test,qa,prod.
- A gzip archive of the file /mnt/mysql_backups/database_list.txt is created and stored in /mnt/mysql_backups/archive.gz.
[automation@ansible-control plays]$ cat archive.yml
---
- hosts: database
  become: yes
  gather_facts: no
  tasks:
  - name: create a directory
    file:
      path: /mnt/mysql_backups
      state: directory

  - name: touch a file
    file:
      path: /mnt/mysql_backups/database_list.txt
      state: touch

  - name: Copy using the 'content' for inline data
    copy:
      content: 'dev,test,qa,prod'
      dest: /mnt/mysql_backups/database_list.txt

  - name: Compress file
    archive:
      path: /mnt/mysql_backups/database_list.txt
      dest: /mnt/mysql_backups/archive.gz
Task 15: Work with Ansible Facts
Create a playbook /home/automation/plays/facts.yml that runs on hosts in the database host group and does the following:
- A custom Ansible fact server_role=mysql is created that can be retrieved from ansible_local.custom.sample_exam when using Ansible setup module.
[automation@ansible-control plays]$ cat facts.yml
---
- hosts: database
  become: yes
  # gather_facts: no
  tasks:
  - name: create a directory
    file:
      path: /etc/ansible/facts.d
      state: directory

  - name: touch a file
    file:
      path: /etc/ansible/facts.d/custom.fact
      state: touch

  - name: blockinfile
    blockinfile:
      path: /etc/ansible/facts.d/custom.fact
      block: |
        [sample_exam]
        server_role=mysql

  - name: debug
    debug:
      msg: "{{ ansible_local.custom.sample_exam }}"
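One caveat: ansible_local is collected when facts are gathered at the start of the play, so on the very first run (when custom.fact does not exist yet) the final debug task would find nothing. A minimal sketch of a task that could be added just before the debug to refresh the local facts:

  - name: re-read local facts so the new custom fact is visible
    setup:
      filter: ansible_local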
Run the playbook:
[automation@ansible-control plays]$ ansible-playbook facts.yml

PLAY [database] *******************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [ansible5.hl.local]

TASK [create a directory] *********************************************************************************
ok: [ansible5.hl.local]

TASK [touch a file] ***************************************************************************************
changed: [ansible5.hl.local]

TASK [blockinfile] ****************************************************************************************
ok: [ansible5.hl.local]

TASK [debug] **********************************************************************************************
ok: [ansible5.hl.local] => {
    "msg": {
        "server_role": "mysql"
    }
}

PLAY RECAP ************************************************************************************************
ansible5.hl.local          : ok=5    changed=1    unreachable=0    failed=0
Check if the fact exists:
[automation@ansible-control plays]$ ansible database -a 'cat /etc/ansible/facts.d/custom.fact'
ansible5.hl.local | CHANGED | rc=0 >>
# BEGIN ANSIBLE MANAGED BLOCK
[sample_exam]
server_role=mysql
# END ANSIBLE MANAGED BLOCK

[automation@ansible-control plays]$ ansible database -m setup -a "filter=ansible_local"
ansible5.hl.local | SUCCESS => {
    "ansible_facts": {
        "ansible_local": {
            "custom": {
                "sample_exam": {
                    "server_role": "mysql"
                }
            }
        }
    },
    "changed": false
}
Task 16: Software Packages
Create a playbook /home/automation/plays/packages.yml that runs on all inventory hosts and does the following:
- Installs tcpdump and mailx packages on hosts in the proxy host group.
- Installs lsof and mailx packages on hosts in the database host group.
[automation@ansible-control plays]$ cat packages.yml
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
  - name: Packages for proxy
    yum:
      name:
        - tcpdump
        - mailx
      state: present
    when: inventory_hostname in groups['proxy']
    tags: proxy

  - name: Packages for database
    yum:
      name:
        - lsof
        - mailx
      state: present
    when: inventory_hostname in groups['database']
    tags: database
Run the playbook:
[automation@ansible-control plays]$ ansible-playbook packages.yml

PLAY [all] *********************************************************************************************

TASK [Packages for proxy] ******************************************************************************
skipping: [ansible3.hl.local]
skipping: [ansible4.hl.local]
skipping: [ansible5.hl.local]
changed: [ansible2.hl.local]

TASK [Packages for database] ***************************************************************************
skipping: [ansible3.hl.local]
skipping: [ansible4.hl.local]
skipping: [ansible2.hl.local]
changed: [ansible5.hl.local]

PLAY RECAP *********************************************************************************************
ansible2.hl.local          : ok=1    changed=1    unreachable=0    failed=0
ansible3.hl.local          : ok=0    changed=0    unreachable=0    failed=0
ansible4.hl.local          : ok=0    changed=0    unreachable=0    failed=0
ansible5.hl.local          : ok=1    changed=1    unreachable=0    failed=0
Check installed packages:
[automation@ansible-control plays]$ ansible database -a 'yum list installed mailx'
ansible5.hl.local | CHANGED | rc=0 >>
Loaded plugins: fastestmirror
Installed Packages
mailx.x86_64    12.5-19.el7    @base

[automation@ansible-control plays]$ ansible database -a 'yum list installed tcpdump'
ansible5.hl.local | FAILED | rc=1 >>
Loaded plugins: fastestmirror
Error: No matching Packages to list
non-zero return code

[automation@ansible-control plays]$ ansible database -a 'yum list installed lsof'
ansible5.hl.local | CHANGED | rc=0 >>
Loaded plugins: fastestmirror
Installed Packages
lsof.x86_64    4.87-6.el7    @base
Another way to check if a package is installed:
[automation@ansible-control plays]$ ansible database -m yum -a 'list=tcpdump'
ansible5.hl.local | SUCCESS => {
    "ansible_facts": {
        "pkg_mgr": "yum"
    },
    "changed": false,
    "results": [
        {
            "arch": "x86_64",
            "envra": "14:tcpdump-4.9.2-4.el7_7.1.x86_64",
            "epoch": "14",
            "name": "tcpdump",
            "release": "4.el7_7.1",
            "repo": "base",
            "version": "4.9.2",
            "yumstate": "available"
        }
    ]
}

[automation@ansible-control plays]$ ansible database -m yum -a 'list=mailx'
ansible5.hl.local | SUCCESS => {
    "ansible_facts": {
        "pkg_mgr": "yum"
    },
    "changed": false,
    "results": [
        {
            "arch": "x86_64",
            "envra": "0:mailx-12.5-19.el7.x86_64",
            "epoch": "0",
            "name": "mailx",
            "release": "19.el7",
            "repo": "base",
            "version": "12.5",
            "yumstate": "available"
        },
        {
            "arch": "x86_64",
            "envra": "0:mailx-12.5-19.el7.x86_64",
            "epoch": "0",
            "name": "mailx",
            "release": "19.el7",
            "repo": "installed",
            "version": "12.5",
            "yumstate": "installed"
        }
    ]
}

[automation@ansible-control plays]$ ansible proxy -m yum -a 'list=mailx'
ansible2.hl.local | SUCCESS => {
    "ansible_facts": {
        "pkg_mgr": "yum"
    },
    "changed": false,
    "results": [
        {
            "arch": "x86_64",
            "envra": "0:mailx-12.5-19.el7.x86_64",
            "epoch": "0",
            "name": "mailx",
            "release": "19.el7",
            "repo": "base",
            "version": "12.5",
            "yumstate": "available"
        },
        {
            "arch": "x86_64",
            "envra": "0:mailx-12.5-19.el7.x86_64",
            "epoch": "0",
            "name": "mailx",
            "release": "19.el7",
            "repo": "installed",
            "version": "12.5",
            "yumstate": "installed"
        }
    ]
}

[automation@ansible-control plays]$ ansible proxy -m yum -a 'list=tcpdump'
ansible2.hl.local | SUCCESS => {
    "ansible_facts": {
        "pkg_mgr": "yum"
    },
    "changed": false,
    "results": [
        {
            "arch": "x86_64",
            "envra": "14:tcpdump-4.9.2-4.el7_7.1.x86_64",
            "epoch": "14",
            "name": "tcpdump",
            "release": "4.el7_7.1",
            "repo": "base",
            "version": "4.9.2",
            "yumstate": "available"
        },
        {
            "arch": "x86_64",
            "envra": "14:tcpdump-4.9.2-4.el7_7.1.x86_64",
            "epoch": "14",
            "name": "tcpdump",
            "release": "4.el7_7.1",
            "repo": "installed",
            "version": "4.9.2",
            "yumstate": "installed"
        }
    ]
}
Task 17: Services
Create a playbook /home/automation/plays/target.yml that runs on hosts in the webservers host group and does the following:
- Sets the default boot target to multi-user.
Check which target is currently set on the webservers host group:
[automation@ansible-control plays]$ ansible webservers -a 'systemctl get-default'
ansible3.hl.local | CHANGED | rc=0 >>
multi-user.target

ansible4.hl.local | CHANGED | rc=0 >>
multi-user.target
We can set the default target this way:
[automation@ansible-control plays]$ ansible webservers -b -a 'systemctl set-default multi-user.target'
A playbook which does the same:
[automation@ansible-control plays]$ cat target.yml
---
- hosts: webservers
  become: yes
  gather_facts: no
  tasks:
  - name: Set default system target to boot at multi-user
    shell: systemctl set-default multi-user.target
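The shell task above works, but it is not idempotent and reports changed on every run. A sketch of a variant that only changes the target when needed (hypothetical task names, same systemctl calls):

  tasks:
  - name: check current default target
    command: systemctl get-default
    register: default_target
    changed_when: false

  - name: set default boot target to multi-user
    command: systemctl set-default multi-user.target
    when: default_target.stdout != 'multi-user.target'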
Task 18. Create and Use Templates to Create Customised Configuration Files
Create a playbook /home/automation/plays/server_list.yml that does the following:
- Playbook uses a Jinja2 template server_list.j2 to create a file /etc/server_list.txt on hosts in the database host group.
- The file /etc/server_list.txt is owned by the automation user.
- File permissions are set to 0600.
- SELinux file label should be set to net_conf_t.
- The content of the file is a list of FQDNs of all inventory hosts.
After running the playbook, the content of the file /etc/server_list.txt should be the following:
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
Note: if the FQDN of any inventory host changes, re-running the playbook should update the file with the new values.
[automation@ansible-control plays]$ cat server_list.yml
---
- hosts: database
  become: yes
  gather_facts: no
  ignore_errors: yes
  vars:
    j2file1: /home/automation/plays/templates/server_list.j2
  tasks:
  - name: list hosts
    shell: ansible all -i /home/automation/plays/inventory --list-hosts > "{{ j2file1 }}"
    delegate_to: localhost
    register: local_process

  - name: debug
    debug:
      msg: "{{ local_process }}"

  - name: replace string
    replace:
      path: "{{ j2file1 }}"
      regexp: '(.*)hosts(.*):$'
      replace: ' '
    delegate_to: localhost

  - name: template
    template:
      src: "{{ j2file1 }}"
      dest: /etc/server_list.txt
      owner: automation
      mode: 0600
      setype: net_conf_t
Check the content of /etc/server_list.txt:
[automation@ansible-control plays]$ ansible database -a 'cat /etc/server_list.txt'
ansible5.hl.local | CHANGED | rc=0 >>

ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
ansible2.hl.local
Check the mode:
[automation@ansible-control plays]$ ansible database -a 'ls -l /etc/server_list.txt'
ansible5.hl.local | CHANGED | rc=0 >>
-rw-------. 1 automation root 90 09-19 20:42 /etc/server_list.txt
Second way:
[automation@ansible-control plays]$ cat server_list2.yml
---
- name: Create Template
  hosts: all
  become: no
  tasks:
  - name: create template
    lineinfile:
      path: templates/server_list.j2
      line: '{{ ansible_fqdn }}'
      create: yes
    delegate_to: localhost

- name: Copy template
  hosts: database
  become: yes
  gather_facts: no
  tasks:
  - name: template
    template:
      src: templates/server_list.j2
      dest: /etc/server_list.txt
      owner: automation
      mode: 0600
      setype: net_conf_t
Let’s check:
[automation@ansible-control plays]$ ssh ansible5.hl.local cat /etc/server_list.txt
ansible3.hl.local
ansible5.hl.local
ansible2.hl.local
ansible4.hl.local

[automation@ansible-control plays]$ ssh ansible5.hl.local ls -Z /etc/server_list.txt
-rw-------. automation root system_u:object_r:net_conf_t:s0 /etc/server_list.txt
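Both variants above build the template file itself on the control node, which is somewhat fragile. A simpler alternative (a sketch, not the original solution; it assumes facts are gathered for every inventory host in the same play) is a server_list.j2 that loops over the inventory:

{# templates/server_list.j2 #}
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_fqdn'] }}
{% endfor %}

The playbook then only needs to render the template on the database hosts:

---
- hosts: all
  become: yes
  tasks:
  - name: render server list on database hosts
    template:
      src: templates/server_list.j2
      dest: /etc/server_list.txt
      owner: automation
      mode: 0600
      setype: net_conf_t
    when: inventory_hostname in groups['database']

Because the loop reads ansible_fqdn from hostvars at run time, re-running the playbook picks up any FQDN changes automatically.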