{"id":3761,"date":"2020-06-03T19:30:37","date_gmt":"2020-06-03T17:30:37","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=3761"},"modified":"2020-09-28T21:44:19","modified_gmt":"2020-09-28T19:44:19","slug":"ansible-excercises","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2020\/06\/03\/ansible-excercises\/","title":{"rendered":"Ansible Excercises"},"content":{"rendered":"<p><!--more--><\/p>\n<h3>Task 1: Ansible Installation and Configuration<\/h3>\n<p>Install ansible package on the control node (including any dependencies) and configure the following:<\/p>\n<ol>\n<li>Create a regular user <strong>automation<\/strong> with the password of <strong>devops<\/strong>. Use this user for all sample exam tasks and playbooks, unless you are working on the task #2 that requires creating the <strong>automation<\/strong> user on inventory hosts. You have root access to all five servers.<\/li>\n<li>All playbooks and other Ansible configuration that you create for this sample exam should be stored in <code>\/home\/automation\/plays<\/code>.<\/li>\n<\/ol>\n<p>Create a configuration file <code>\/home\/automation\/plays\/ansible.cfg<\/code> to meet the following requirements:<\/p>\n<ol>\n<li>The roles path should include <code>\/home\/automation\/plays\/roles<\/code>, as well as any other path that may be required for the course of the sample exam.<\/li>\n<li>The inventory file path is <code>\/home\/automation\/plays\/inventory<\/code>.<\/li>\n<li>Privilege escallation is <strong>disabled<\/strong> by default.<\/li>\n<li>Ansible should be able to manage <strong>10 hosts<\/strong> at a single time.<\/li>\n<li>Ansible should connect to all managed nodes using the <strong>automation<\/strong> user.<\/li>\n<\/ol>\n<p>Create an inventory file <code>\/home\/automation\/plays\/inventory<\/code> with the following:<\/p>\n<ol>\n<li>ansible2.hl.local is a member of the <strong>proxy<\/strong> host group.<\/li>\n<li>ansible3.hl.local is a member of the 
<strong>webservers<\/strong> host group.<\/li>\n<li>ansible4.hl.local is a member of the <strong>webservers<\/strong> host group.<\/li>\n<li>ansible5.hl.local is a member of the <strong>database<\/strong> host group.<\/li>\n<\/ol>\n<p>Create the <code>\/etc\/hosts<\/code> file:<\/p>\n<pre class=\"lang:sh decode:true\">[root@ansible-control ~]# cat \/etc\/hosts\r\n127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\r\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\r\n172.30.9.60 ansible-control ansible-control.hl.local\r\n172.30.9.61 ansible2 ansible2.hl.local\r\n172.30.9.62 ansible3 ansible3.hl.local\r\n172.30.9.63 ansible4 ansible4.hl.local\r\n172.30.9.64 ansible5 ansible5.hl.local<\/pre>\n<p>As root, generate an SSH key and copy it to the managed hosts:<\/p>\n<pre class=\"lang:sh decode:true\">[root@ansible-control ~]# ssh-keygen\r\n[root@ansible-control ~]# ssh-copy-id ansible2\r\n[root@ansible-control ~]# ssh-copy-id ansible3\r\n[root@ansible-control ~]# ssh-copy-id ansible4\r\n[root@ansible-control ~]# ssh-copy-id ansible5<\/pre>\n<p>Let&#8217;s check if we can connect to the remote hosts as root without a password:<\/p>\n<pre class=\"lang:sh decode:true\">[root@ansible-control ~]# ssh ansible2\r\nssh ansible3\r\nssh ansible4\r\nssh ansible5<\/pre>\n<p>Installing Ansible:<\/p>\n<pre class=\"lang:sh decode:true \">yum install -y ansible<\/pre>\n<p>Adding the automation user:<\/p>\n<pre class=\"lang:sh decode:true\">adduser automation\r\npasswd automation\r\nsu - automation<\/pre>\n<p>Create the directories and the <code>ansible.cfg<\/code> file:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control]$ mkdir plays\r\n[automation@ansible-control]$ cd plays\r\n[automation@ansible-control plays]$ cat ansible.cfg\r\n\r\n[defaults]\r\nroles_path = \/home\/automation\/plays\/roles\r\ninventory = \/home\/automation\/plays\/inventory\r\nforks = 10\r\nremote_user = automation\r\nlog_path = 
\/home\/automation\/ansible.log<\/pre>\n<p>Create the inventory:<\/p>\n<pre class=\"lang:sh decode:true\">mkdir plays\/inventory\r\ncd inventory\r\nvim hosts\r\n[automation@ansible-control inventory]$ cat hosts\r\n\r\n[proxy]\r\nansible2.hl.local\r\n\r\n[webservers]\r\nansible[3:4].hl.local\r\n\r\n[database]\r\nansible5.hl.local<\/pre>\n<h3>Task 2: Ad-Hoc Commands<\/h3>\n<p>Generate an SSH keypair on the control node. You can perform this step manually.<\/p>\n<p>Write a script <code>\/home\/automation\/plays\/adhoc<\/code> that uses Ansible ad-hoc commands to achieve the following:<\/p>\n<ol>\n<li>User <strong>automation<\/strong> is created on all inventory hosts (not the control node).<\/li>\n<li>SSH key (that you generated) is copied to all inventory hosts for the <strong>automation<\/strong> user and stored in <code>\/home\/automation\/.ssh\/authorized_keys<\/code>.<\/li>\n<li>The <strong>automation<\/strong> user is allowed to elevate privileges on all inventory hosts without having to provide a password.<\/li>\n<\/ol>\n<p>After running the adhoc script, you should be able to SSH into all inventory hosts using the <strong>automation<\/strong> user without a password, as well as run privileged commands.<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat .\/adhoc\r\n#!\/bin\/bash\r\n\r\n\/usr\/bin\/ansible proxy,webservers,database -b -m user -a \"name=automation\"\r\n\/usr\/bin\/ansible proxy,webservers,database -b -m file -a \"path=\/home\/automation\/.ssh state=directory owner=automation\"\r\n\/usr\/bin\/ansible proxy,webservers,database -b -m copy -a \"src=\/home\/automation\/.ssh\/id_rsa.pub dest=\/home\/automation\/.ssh\/authorized_keys owner=automation mode=0600\"\r\n\/usr\/bin\/ansible proxy,webservers,database -b -m lineinfile -a \"path=\/etc\/sudoers state=present line='automation ALL=(ALL) NOPASSWD: ALL'\"<\/pre>\n<p>Let&#8217;s check if we can connect to the remote hosts without a password:<\/p>\n<pre class=\"lang:sh 
decode:true\">[automation@ansible-control plays]$ ssh ansible2\r\nssh ansible3\r\nssh ansible4\r\nssh ansible5<\/pre>\n<h3>Task 3: File Content<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/motd.yml<\/code> that runs on all inventory hosts and does the following:<\/p>\n<ol>\n<li>The playbook should replace any existing content of <code>\/etc\/motd<\/code> with text. Text depends on the host group.<\/li>\n<li>On hosts in the <strong>proxy<\/strong> host group the line should be \u201cWelcome to HAProxy server\u201d.<\/li>\n<li>On hosts in the <strong>webservers<\/strong> host group the line should be \u201cWelcome to Apache server\u201d.<\/li>\n<li>On hosts in the <strong>database<\/strong> host group the line should be \u201cWelcome to MySQL server\u201d.<\/li>\n<\/ol>\n<p>Simple version of playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat motd.yml\r\n---\r\n- hosts: all\r\n  remote_user: automation\r\n  #become: yes\r\n\r\n  tasks:\r\n  - name: remove motd file\r\n    file:\r\n      path: \/etc\/motd\r\n      state: absent\r\n\r\n  - name: Put the Proxy text to motd file\r\n    lineinfile:\r\n      path: \/etc\/motd\r\n      line: 'Welcome to HAProxy server'\r\n      create: yes\r\n    when: inventory_hostname in groups['proxy']\r\n\r\n  - name: Put the Proxy text to motd file\r\n    lineinfile:\r\n      path: \/etc\/motd\r\n      line: 'Welcome to Apache server'\r\n      create: yes\r\n    when: inventory_hostname in groups['webservers']\r\n\r\n  - name: Put the Proxy text to motd file\r\n    lineinfile:\r\n      path: \/etc\/motd\r\n      line: 'Welcome to MySQL server'\r\n      create: yes\r\n    when: inventory_hostname in groups['database']<\/pre>\n<p>Playbook which use <code>group_vars<\/code> :<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ mkdir group_vars\r\n[automation@ansible-control plays]$ cd group_vars\r\n[automation@ansible-control group_vars]$ ls -l\r\nrazem 
12\r\n-rw-rw-r--. 1 automation automation 16 08-27 23:23 database\r\n-rw-rw-r--. 1 automation automation 18 08-27 23:23 proxy\r\n-rw-rw-r--. 1 automation automation 17 08-27 23:24 webservers\r\n\r\n[automation@ansible-control group_vars]$ cat database\r\n---\r\nmotd: MySQL\r\n[automation@ansible-control group_vars]$ cat proxy\r\n---\r\nmotd: HAProxy\r\n[automation@ansible-control group_vars]$ cat webservers\r\n---\r\nmotd: Apache\r\n\r\n[automation@ansible-control plays]$ cat motd1.yml\r\n---\r\n- hosts: all\r\n  #remote_user: automation\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n  - name: remove motd file\r\n    file:\r\n      path: \/etc\/motd\r\n      state: absent\r\n\r\n  - name: Put the welcome text in the motd file\r\n    lineinfile:\r\n      path: \/etc\/motd\r\n      line: \"Welcome to {{ motd }} server\"\r\n      create: yes\r\n\r\n  - name: print motd\r\n    debug:\r\n      msg: \"Welcome to {{ motd }} server\"<\/pre>\n<h3>Task 4: Configure SSH Server<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/sshd.yml<\/code> that runs on all inventory hosts and configures SSHD daemon as follows:<\/p>\n<ol>\n<li>banner is set to <code>\/etc\/motd<\/code><\/li>\n<li>X11Forwarding is disabled<\/li>\n<li>MaxAuthTries is set to 3<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat sshd.yml\r\n---\r\n- hosts: all\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n  - name: X11Forwarding\r\n    replace:\r\n      path: \/etc\/ssh\/sshd_config\r\n      regexp: '^X11Forwarding yes'\r\n      replace: 'X11Forwarding no'\r\n    notify: restart_sshd\r\n\r\n  - name: MaxAuthTries\r\n    replace:\r\n      path: \/etc\/ssh\/sshd_config\r\n      regexp: '^#MaxAuthTries 6'\r\n      replace: 'MaxAuthTries 3'\r\n    notify: restart_sshd\r\n\r\n  - name: Banner\r\n    replace:\r\n      path: \/etc\/ssh\/sshd_config\r\n      regexp: '^#Banner none'\r\n      replace: 'Banner \/etc\/motd'\r\n    notify: restart_sshd\r\n\r\n  # restart sshd so the configuration changes take effect\r\n  handlers:\r\n  - name: restart_sshd\r\n    service:\r\n      name: sshd\r\n      state: restarted<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 5: Ansible Vault<\/h3>\n<p>Create Ansible vault file 
<code>\/home\/automation\/plays\/secret.yml<\/code>. Encryption\/decryption password is <strong>devops<\/strong>.<\/p>\n<p>Add the following variables to the vault:<\/p>\n<ol>\n<li><strong>user_password<\/strong> with value of <strong>devops<\/strong><\/li>\n<li><strong>database_password<\/strong> with value of <strong>devops<\/strong><\/li>\n<\/ol>\n<p>Store Ansible vault password in the file <code>\/home\/automation\/plays\/vault_key<\/code>.<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ echo devops &gt; vault_key\r\n[automation@ansible-control plays]$ ansible-vault create secret.yml\r\nNew Vault password:\r\nConfirm New Vault password:\r\n\r\n[automation@ansible-control plays]$ ansible-vault view secret.yml\r\nVault password:\r\n---\r\nuser_password: devops\r\ndatabase_password: devops\r\n\r\n[automation@ansible-control plays]$ cat secret.yml\r\n$ANSIBLE_VAULT;1.1;AES256\r\n37306230666638656564343830653439643962306439333030656231333838663364363632366230\r\n6635363734636262646136666163346639313130386136360a333438666162383062316663623363\r\n61666264663337653863396138666237326362336166376266633061376661366132633832303337\r\n3766363635646463610a636564356235636535303031623335376333393135316231333562653566\r\n30306364336632633939316330633964623734356463313638323361333632653138356562666539\r\n34356433663933333263363835323132363336323866666264623930633939326137646463656464\r\n336665363634333966393430303933373836\r\n\r\n[automation@ansible-control plays]$ ansible-vault view --vault-password-file=vault_key secret.yml\r\n---\r\nuser_password: devops\r\ndatabase_password: devops<\/pre>\n<h3>Task 6: Users and Groups<\/h3>\n<p>You have been provided with the list of users below.<\/p>\n<p>Use <code>\/home\/automation\/plays\/vars\/user_list.yml<\/code> file to save this content.<\/p>\n<pre class=\"\">---\r\nusers:\r\n  - username: alice\r\n    uid: 1201\r\n  - username: vincent\r\n    uid: 1202\r\n  - username: sandy\r\n    uid: 2201\r\n  - 
username: patrick\r\n    uid: 2202<\/pre>\n<p>Create a playbook <code>\/home\/automation\/plays\/users.yml<\/code> that uses the vault file <code>\/home\/automation\/plays\/secret.yml<\/code> to achieve the following:<\/p>\n<ol>\n<li>Users whose user ID starts with 1 should be created on servers in the <strong>webservers<\/strong> host group. User password should be used from the <strong>user_password<\/strong> variable.<\/li>\n<li>Users whose user ID starts with 2 should be created on servers in the <strong>database<\/strong> host group. User password should be used from the <strong>user_password<\/strong> variable.<\/li>\n<li>All users should be members of a supplementary group <strong>wheel<\/strong>.<\/li>\n<li>Shell should be set to <code>\/bin\/bash<\/code> for all users.<\/li>\n<li>Account passwords should use the SHA512 hash format.<\/li>\n<li>Each user should have an SSH key uploaded (use the SSH key that you created previously, see task #2).<\/li>\n<\/ol>\n<p>After running the playbook, users should be able to SSH into their respective servers without passwords.<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat users.yml\r\n---\r\n- hosts: all\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  vars_files:\r\n  - vars\/user_list.yml\r\n  - secret.yml\r\n\r\n  vars:\r\n    hash: \"{{ user_password | password_hash('sha512') }}\"\r\n\r\n  tasks:\r\n  - name: print user_password\r\n    debug:\r\n      msg: \"Password is {{ user_password }}, Hash is {{ hash }} \"\r\n\r\n  - name: Add the user \"{{ item.username }}\" with a specific uid and a supplementary group of 'wheel'\r\n    user:\r\n      name: \"{{ item.username }}\"\r\n      password: \"{{ user_password | password_hash('sha512') }}\"\r\n      uid: \"{{ item.uid }}\"\r\n      groups: wheel\r\n      shell: \/bin\/bash\r\n      append: yes\r\n    loop:\r\n      \"{{ users }}\"\r\n    when: ( item.uid &lt; 2000 and inventory_hostname in groups['webservers'] ) or\r\n          ( item.uid 
&gt; 2000 and inventory_hostname in groups['database'] )\r\n\r\n  - name: Set authorized key taken from file\r\n    authorized_key:\r\n      user: \"{{ item.username }}\"\r\n      state: present\r\n      key: \"{{ lookup('file', '\/home\/automation\/.ssh\/id_rsa.pub') }}\"\r\n    loop:\r\n      \"{{ users }}\"\r\n    when: ( item.uid &lt; 2000 and inventory_hostname in groups['webservers'] ) or\r\n          ( item.uid &gt; 2000 and inventory_hostname in groups['database'] )<\/pre>\n<h3>Task 7: Scheduled Tasks<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/regular_tasks.yml<\/code> that runs on servers in the <strong>proxy<\/strong> host group and does the following:<\/p>\n<ol>\n<li>A root crontab record is created that runs every hour.<\/li>\n<li>The cron job appends the file <code>\/var\/log\/time.log<\/code> with the output from the <strong>date<\/strong> command.<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">---\r\n- hosts: all\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: Cron with date command as a job\r\n    cron:\r\n      name: \"Append actual date to time.log\"\r\n      minute: \"0\"\r\n      hour: \"*\"\r\n      # To remove the cron job we can use state: absent\r\n      #state: absent\r\n      job: \"\/bin\/date &gt;&gt; \/var\/log\/time.log\"\r\n    when: inventory_hostname in groups['proxy']<\/pre>\n<p>Let&#8217;s check if the playbook works:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible all -b -a \"crontab -l\"\r\nansible4.hl.local | CHANGED | rc=0 &gt;&gt;\r\n\r\nansible3.hl.local | CHANGED | rc=0 &gt;&gt;\r\n\r\nansible2.hl.local | CHANGED | rc=0 &gt;&gt;\r\n#Ansible: Append actual date to time.log\r\n0 * * * * \/bin\/date &gt;&gt; \/var\/log\/time.log\r\n\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\n<\/pre>\n<p>Alternatively, target the <strong>proxy<\/strong> host group directly:<\/p>\n<pre class=\"lang:sh decode:true\">---\r\n- hosts: proxy\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - 
name: Cron with date command as a job\r\n    cron:\r\n      name: \"Append actual date to time.log\"\r\n      minute: \"0\"\r\n      hour: \"*\"\r\n      # To remove the cron job we can use state: absent\r\n      #state: absent\r\n      job: \"\/bin\/date &gt;&gt; \/var\/log\/time.log\"\r\n<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 8: Software Repositories<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/repository.yml<\/code> that runs on servers in the <strong>database<\/strong> host group and does the following:<\/p>\n<ol>\n<li>A YUM repository file is created.<\/li>\n<li>The name of the repository is <strong>mysql56-community<\/strong>.<\/li>\n<li>The description of the repository is \u201cMySQL 5.6 YUM Repo\u201d.<\/li>\n<li>Repository baseurl is <strong>http:\/\/repo.mysql.com\/yum\/mysql-5.6-community\/el\/7\/x86_64\/<\/strong>.<\/li>\n<li>Repository GPG key is at <strong>http:\/\/repo.mysql.com\/RPM-GPG-KEY-mysql<\/strong>.<\/li>\n<li>Repository GPG check is enabled.<\/li>\n<li>Repository is enabled.<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat repository.yml\r\n---\r\n- hosts: database\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: YUM repository\r\n    yum_repository:\r\n      name: mysql56-community\r\n      description: MySQL 5.6 YUM Repo\r\n      baseurl: http:\/\/repo.mysql.com\/yum\/mysql-5.6-community\/el\/7\/x86_64\/\r\n      gpgkey: http:\/\/repo.mysql.com\/RPM-GPG-KEY-mysql\r\n      gpgcheck: yes\r\n      enabled: yes<\/pre>\n<p>Check if the playbook works:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible database -a \"yum repolist\"\r\n[WARNING]: Consider using the yum module rather than running 'yum'. 
If you need to use command because yum is\r\ninsufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid\r\nof this message.\r\n\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\nLoaded plugins: fastestmirror\r\nDetermining fastest mirrors\r\n* base: centos1.hti.pl\r\n* extras: centos.slaskdatacenter.com\r\n* updates: centos.slaskdatacenter.com\r\nrepo id repo name status\r\nbase\/7\/x86_64 CentOS-7 - Base 10070\r\nextras\/7\/x86_64 CentOS-7 - Extras 413\r\n<strong>mysql56-community MySQL 5.6 YUM Repo 547<\/strong>\r\nupdates\/7\/x86_64 CentOS-7 - Updates 1127<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 9: Create and Work with Roles<\/h3>\n<p>Create a role called <strong>sample-mysql<\/strong> and store it in <code>\/home\/automation\/plays\/roles<\/code>. The role should satisfy the following requirements:<\/p>\n<ol>\n<li>A primary partition number 1 of size 800MB on device <code>\/dev\/sdb<\/code> is created.<\/li>\n<li>An LVM volume group called <code>vg_database<\/code> is created that uses the primary partition created above.<\/li>\n<li>An LVM logical volume called <code>lv_mysql<\/code> is created of size 512MB in the volume group <code>vg_database<\/code>.<\/li>\n<li>An XFS filesystem on the logical volume <code>lv_mysql<\/code> is created.<\/li>\n<li>Logical volume <code>lv_mysql<\/code> is permanently mounted on <code>\/mnt\/mysql_backups<\/code>.<\/li>\n<li><strong>mysql-community-server<\/strong> package is installed.<\/li>\n<li>Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.<\/li>\n<li>MySQL root user password should be set from the variable <strong>database_password<\/strong> (see task #5).<\/li>\n<li>MySQL server should be started and enabled on boot.<\/li>\n<li>MySQL server configuration file is generated from the <code>my.cnf.j2<\/code> Jinja2 template with the following content:<\/li>\n<\/ol>\n<pre class=\"\">[mysqld]\r\nbind_address = {{ 
ansible_default_ipv4.address }}\r\nskip_name_resolve\r\ndatadir=\/var\/lib\/mysql\r\nsocket=\/var\/lib\/mysql\/mysql.sock\r\n\r\nsymbolic-links=0\r\nsql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES\r\n\r\n[mysqld_safe]\r\nlog-error=\/var\/log\/mysqld.log\r\npid-file=\/var\/run\/mysqld\/mysqld.pid<\/pre>\n<p>Create a playbook <code>\/home\/automation\/plays\/mysql.yml<\/code> that uses the role and runs on hosts in the <strong>database<\/strong> host group.<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ mkdir roles\r\n[automation@ansible-control plays]$ cd roles\r\n[automation@ansible-control roles]$ ansible-galaxy init --offline sample-mysql\r\n- sample-mysql was created successfully\r\n[automation@ansible-control roles]$ ll\r\ntotal 0\r\ndrwxrwxr-x. 10 automation automation 135 09-01 18:26 sample-mysql\r\n[automation@ansible-control roles]$ cd sample-mysql\r\n[automation@ansible-control sample-mysql]$ ll\r\ntotal 4\r\ndrwxrwxr-x. 2 automation automation 22 09-01 18:26 defaults\r\ndrwxrwxr-x. 2 automation automation 6 09-01 18:26 files\r\ndrwxrwxr-x. 2 automation automation 22 09-01 18:26 handlers\r\ndrwxrwxr-x. 2 automation automation 22 09-01 18:26 meta\r\n-rw-rw-r--. 1 automation automation 1328 09-01 18:26 README.md\r\ndrwxrwxr-x. 2 automation automation 22 09-01 18:26 tasks\r\ndrwxrwxr-x. 2 automation automation 6 09-01 18:26 templates\r\ndrwxrwxr-x. 2 automation automation 39 09-01 18:26 tests\r\ndrwxrwxr-x. 
2 automation automation 22 09-01 18:26 vars<\/pre>\n<p>Tasks:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat roles\/sample-mysql\/tasks\/main.yml\r\n---\r\n# tasks file for sample-mysql\r\n\r\n- name: Installing packages\r\n  yum:\r\n    name:\r\n    - parted\r\n    - python\r\n    - MySQL-python\r\n    - firewalld\r\n    - mysql-community-server\r\n    state: present\r\n  tags: install\r\n\r\n#- name: Create a new primary partition for LVM\r\n#  parted:\r\n#    device: \/dev\/sdb\r\n#    number: 1\r\n#    flags: [ lvm ]\r\n#    state: present\r\n#    part_end: 800MiB\r\n#  register: device_info\r\n\r\n# Maybe lvg is not needed, maybe we can use lvol instead to create both vg and lv\r\n#- name: create logical volume group\r\n#  lvg:\r\n#    vg: vg_database\r\n#    pvs: \/dev\/sdb1\r\n\r\n#- name: create logical volume\r\n#  lvol:\r\n#    vg: vg_database\r\n#    lv: lv_mysql\r\n#    pvs: \/dev\/sdb1\r\n#    size: 512m\r\n\r\n#- name: Create an XFS filesystem on lv_mysql\r\n#  filesystem:\r\n#    fstype: xfs\r\n#    dev: \/dev\/vg_database\/lv_mysql\r\n\r\n#- name: Mount file system\r\n#  mount:\r\n#    path: \/mnt\/mysql_backups\r\n#    src: \/dev\/vg_database\/lv_mysql\r\n#    fstype: xfs\r\n#    state: mounted\r\n\r\n\r\n- name: Starting services\r\n  service:\r\n    name: \"{{ item }}\"\r\n    state: started\r\n    enabled: yes\r\n  loop:\r\n  - firewalld\r\n  - mysql\r\n\r\n- name: Open ports on firewall\r\n  firewalld:\r\n    port: 3306\/tcp\r\n    permanent: yes\r\n    immediate: yes\r\n    state: enabled\r\n\r\n#Both login_password and login_user are required when you are passing credentials. 
If none are present,\r\n#the module will attempt to read the credentials from ~\/.my.cnf, and finally fall back to using the\r\n#MySQL default login of \u2018root\u2019 with no password.\r\n- name: Set the MySQL root user password\r\n  mysql_user:\r\n    login_user: root\r\n    login_password: \"{{ database_password }}\"\r\n    name: root\r\n    password: \"{{ database_password }}\"\r\n\r\n\r\n- name: Adding template to my.cnf\r\n  template:\r\n    src: my.cnf.j2\r\n    dest: \/etc\/my.cnf\r\n  notify:\r\n  - RestartMySql\r\n\r\n\r\n# The handlers will run when forced even if there's a failure.\r\n# However in this case, there is no failure; what's happening is there is no notification to that handler.\r\n# Without a notification, handlers won't run even if force_handlers is set.\r\n# We need changed_when: true to force handlers even if firewalld or mysql are already installed\r\n#\r\n#- name: install firewalld\r\n#  yum:\r\n#    name: firewalld\r\n#    state: present\r\n#  changed_when: true\r\n#  notify:\r\n#  - firewalld1\r\n#  - firewalld2<\/pre>\n<p>Handlers:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat roles\/sample-mysql\/handlers\/main.yml\r\n---\r\n# handlers file for sample-mysql\r\n\r\n- name: RestartMySql\r\n  service:\r\n    name: mysql\r\n    state: restarted<\/pre>\n<p>Templates:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat roles\/sample-mysql\/templates\/*\r\n[mysqld]\r\nbind_address = {{ ansible_default_ipv4.address }}\r\nskip_name_resolve\r\ndatadir=\/var\/lib\/mysql\r\nsocket=\/var\/lib\/mysql\/mysql.sock\r\n\r\nsymbolic-links=0\r\nsql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES\r\n\r\n[mysqld_safe]\r\nlog-error=\/var\/log\/mysqld.log\r\npid-file=\/var\/run\/mysqld\/mysqld.pid<\/pre>\n<p>Playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat mysql.yml\r\n---\r\n- hosts: database\r\n  become: yes\r\n\r\n# The handlers will run when 
forced even if there's a failure.\r\n# However in this case, there is no failure; what's happening is there is no notification to that handler.\r\n# Without a notification, handlers won't run even if force_handlers is set.\r\n# We need changed_when: true\r\n  \r\n  force_handlers: true\r\n  vars_files: secret.yml\r\n\r\n  roles:\r\n  - sample-mysql\r\n\r\n  tasks:\r\n  - debug:\r\n      msg: \"{{ database_password }}\"\r\n<\/pre>\n<h3>Task 10: Create and Work with Roles (Some More)<\/h3>\n<p>Create a role called <strong>sample-apache<\/strong> and store it in <code>\/home\/automation\/plays\/roles<\/code>. The role should satisfy the following requirements:<\/p>\n<ol>\n<li>The <strong>httpd<\/strong>, <strong>mod_ssl<\/strong> and <strong>php<\/strong> packages are installed. Apache service is running and enabled on boot.<\/li>\n<li>Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS port TCP 443.<\/li>\n<li>Apache service should be restarted every time the file <code>\/var\/www\/html\/index.html<\/code> is modified.<\/li>\n<li>A Jinja2 template file <code>index.html.j2<\/code> is used to create the file <code>\/var\/www\/html\/index.html<\/code> with the following content:<\/li>\n<\/ol>\n<pre>The address of the server is: IPV4ADDRESS<\/pre>\n<p>IPV4ADDRESS is the IP address of the managed node.<\/p>\n<p>Create a playbook <code>\/home\/automation\/plays\/apache.yml<\/code> that uses the role and runs on hosts in the <strong>webservers<\/strong> host group.<\/p>\n<p>Tasks:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control roles]$ ansible-galaxy init --offline sample-apache\r\n[automation@ansible-control plays]$ cd sample-apache\r\n[automation@ansible-control sample-apache]$ cat tasks\/main.yml\r\n---\r\n# tasks file for sample-apache\r\n\r\n- name: install packages\r\n  yum:\r\n    name:\r\n    - httpd\r\n    - mod_ssl\r\n    - php\r\n    - firewalld\r\n    state: present\r\n\r\n- name: Start firewalld\r\n  service: 
\r\n    name: firewalld\r\n    enabled: yes\r\n    state: started\r\n\r\n- name: firewall\r\n  firewalld:\r\n    service: \"{{ item }}\"\r\n    permanent: yes\r\n    immediate: yes\r\n    state: enabled\r\n  loop:\r\n  - http\r\n  - https\r\n  tags: firewall\r\n\r\n- name: index.html\r\n  template:\r\n    src: index.html.j2\r\n    dest: \/var\/www\/html\/index.html\r\n  notify:\r\n  - restart_apache<\/pre>\n<p>Handlers:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control sample-apache]$ cat handlers\/main.yml\r\n---\r\n# handlers file for sample-apache\r\n- name: restart_apache\r\n  service:\r\n    name: httpd\r\n    state: restarted<\/pre>\n<p>Templates:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control sample-apache]$ cat templates\/index.html.j2\r\nThe address of the server is: {{ ansible_default_ipv4.address }}<\/pre>\n<p>Playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat apache.yml\r\n---\r\n- hosts: webservers\r\n  become: yes\r\n\r\n  roles:\r\n  - sample-apache<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 11: Download Roles From Ansible Galaxy and Use Them<\/h3>\n<p>Use Ansible Galaxy to download and install <strong>geerlingguy.haproxy<\/strong> role in <code>\/home\/automation\/plays\/roles<\/code>.<\/p>\n<p>Create a playbook <code>\/home\/automation\/plays\/haproxy.yml<\/code> that runs on servers in the <strong>proxy<\/strong> host group and does the following:<\/p>\n<ol>\n<li>Use geerlingguy.haproxy role to load balance requests between hosts in the <strong>webservers<\/strong> host group.<\/li>\n<li>Use <strong>roundrobin<\/strong> load balancing method.<\/li>\n<li>HAProxy backend servers should be configured for HTTP only (port 80).<\/li>\n<li>Firewall is configured to allow all incoming traffic on port TCP 80.<\/li>\n<\/ol>\n<p>If your playbook works, then doing \u201c<strong>curl http:\/\/ansible2.hl.local\/<\/strong>\u201d should return output from the web server (see task #10). 
Running the command again should return output from the other web server.<\/p>\n<p>Installing geerlingguy.haproxy role:<\/p>\n<pre class=\"lang:sh decode:true \">ansible-galaxy install geerlingguy.haproxy<\/pre>\n<p>Playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat haproxy.yml\r\n---\r\n- hosts: proxy\r\n  become: yes\r\n\r\n  vars:\r\n    haproxy_backend_balance_method: 'roundrobin'\r\n    haproxy_backend_mode: 'http'\r\n    haproxy_backend_servers:\r\n    - name: app1\r\n      address: ansible3.hl.local\r\n    - name: app2\r\n      address: ansible4.hl.local\r\n\r\n  tasks:\r\n  - firewalld:\r\n      service: http\r\n      permanent: yes\r\n      immediate: yes\r\n      state: enabled\r\n\r\n  roles:\r\n  - geerlingguy.haproxy<\/pre>\n<h3>Task 12: Security<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/selinux.yml<\/code> that runs on hosts in the <strong>webservers<\/strong> host group and does the following:<\/p>\n<ol>\n<li>Uses the selinux <strong>RHEL system role<\/strong>.<\/li>\n<li>Enables <strong>httpd_can_network_connect<\/strong> SELinux boolean.<\/li>\n<li>The change must survive system reboot.<\/li>\n<\/ol>\n<p>Installation of rhel system roles:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ sudo yum -y install rhel-system-roles\r\n\r\n[automation@ansible-control plays]$ ansible-galaxy search selinux | grep roles\r\nFound 414 roles matching your search:\r\navinetworks.network_interface This roles enables users to configure various network components on target machines.\r\nlerwys.lnls_ansible Top [meta] ansible role containing all Sirius roles\r\nlinux-system-roles.certificate Role for managing TLS\/SSL certificate issuance and renewal\r\nlinux-system-roles.cockpit Install and enable the Cockpit Web Console\r\n<strong>linux-system-roles.selinux<\/strong> Configure SELinux\r\noasis_roles.users_and_groups Ansible role that manages groups and users\r\nyabhinav.common 
Install a common configurations for RHEL\/CentOS\/Fedora and Debian\/Ubuntu. Used by my other roles\r\n\r\n[automation@ansible-control plays]$ ansible-galaxy install linux-system-roles.selinux\r\n\r\n[automation@ansible-control plays]$ ansible-galaxy list\r\n- sample-mysql, (unknown version)\r\n- sample-apache, (unknown version)\r\n- linux-system-roles.selinux, 1.1.0\r\n<\/pre>\n<p>Pre-installed example playbook using the selinux role:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control selinux]$ cat \/usr\/share\/doc\/rhel-system-roles-1.0\/selinux\/example-selinux-playbook.yml\r\n---\r\n- hosts: all\r\n  become: true\r\n  become_method: sudo\r\n  become_user: root\r\n  vars:\r\n    selinux_policy: targeted\r\n    selinux_state: enforcing\r\n    selinux_booleans:\r\n    - { name: 'samba_enable_home_dirs', state: 'on' }\r\n    - { name: 'ssh_sysadm_login', state: 'on', persistent: 'yes' }\r\n    selinux_fcontexts:\r\n    - { target: '\/tmp\/test_dir(\/.*)?', setype: 'user_home_dir_t', ftype: 'd' }\r\n    selinux_restore_dirs:\r\n    - \/tmp\/test_dir\r\n    selinux_ports:\r\n    - { ports: '22100', proto: 'tcp', setype: 'ssh_port_t', state: 'present' }\r\n    selinux_logins:\r\n    - { login: 'sar-user', seuser: 'staff_u', serange: 's0-s0:c0.c1023', state: 'present' }\r\n\r\n  # prepare prerequisites which are used in this playbook\r\n  tasks:\r\n  - name: Creates directory\r\n    file:\r\n      path: \/tmp\/test_dir\r\n      state: directory\r\n  - name: Add a Linux System Roles SELinux User\r\n    user:\r\n      comment: Linux System Roles SELinux User\r\n      name: sar-user\r\n  - name: execute the role and catch errors\r\n    block:\r\n    - include_role:\r\n        name: rhel-system-roles.selinux\r\n    rescue:\r\n    # Fail if failed for a different reason than selinux_reboot_required.\r\n    - name: handle errors\r\n      fail:\r\n        msg: \"role failed\"\r\n      when: not selinux_reboot_required\r\n\r\n    - name: restart managed host\r\n      shell: sleep 2 &amp;&amp; shutdown -r now \"Ansible updates triggered\"\r\n      async: 1\r\n      poll: 0\r\n      ignore_errors: true\r\n\r\n    - name: wait for managed 
host to come back\r\nwait_for_connection:\r\ndelay: 10\r\ntimeout: 300\r\n\r\n- name: reapply the role\r\ninclude_role:\r\nname: rhel-system-roles.selinux<\/pre>\n<p>Create a playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat selinux.yml\r\n---\r\n- hosts: webservers\r\n  become: yes\r\n\r\n  vars:\r\n    selinux_booleans: \r\n    - { name: 'httpd_can_network_connect', state: 'on' }\r\n\r\n  tasks:\r\n\r\n  # Selinux requires libsemanage-python support\r\n  - name: install libsemanage-python\r\n    yum:\r\n      name: libsemanage-python\r\n      state: present\r\n\r\n  roles:\r\n  - linux-system-roles.selinux\r\n<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 13: Use Conditionals to Control Play Execution<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/sysctl.yml<\/code> that runs on all inventory hosts and does the following:<\/p>\n<ol>\n<li>If a server has more than 2048MB of RAM, then parameter <strong>vm.swappiness<\/strong> is set to 10.<\/li>\n<li>If a server has less than 2048MB of RAM, then the following error message is displayed:<\/li>\n<\/ol>\n<p><strong>Server memory less than 2048MB<\/strong><\/p>\n<pre class=\"lang:sh decode:true \">---\r\n- hosts: all\r\n  become: yes\r\n\r\n  tasks:\r\n  - debug:\r\n      msg: \"Server {{ inventory_hostname }} memory has less than 2048MB\"\r\n    when: ansible_memtotal_mb &lt; 2048\r\n\r\n  - debug:\r\n      vm.swappiness: 10\r\n    when: ansible_memtotal_mb &gt; 2048<\/pre>\n<h3>Task 14: Use Archiving<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/archive.yml<\/code> that runs on hosts in the <strong>database<\/strong> host group and does the following:<\/p>\n<ol>\n<li>A file <code>\/mnt\/mysql_backups\/database_list.txt<\/code> is created that contains the following line: dev,test,qa,prod.<\/li>\n<li>A gzip archive of the file <code>\/mnt\/mysql_backups\/database_list.txt<\/code> is created and stored in 
<code>\/mnt\/mysql_backups\/archive.gz<\/code>.<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat archive.yml\r\n---\r\n- hosts: database\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: create a directory\r\n    file:\r\n      path: \/mnt\/mysql_backups\r\n      state: directory\r\n\r\n  - name: touch a file\r\n    file:\r\n      path: \/mnt\/mysql_backups\/database_list.txt\r\n      state: touch\r\n\r\n  - name: Copy using the 'content' for inline data\r\n    copy:\r\n      content: 'dev,test,qa,prod'\r\n      dest: \/mnt\/mysql_backups\/database_list.txt\r\n\r\n  - name: Compress file\r\n    archive:\r\n      path: \/mnt\/mysql_backups\/database_list.txt\r\n      dest: \/mnt\/mysql_backups\/archive.gz<\/pre>\n<h3>Task 15: Work with Ansible Facts<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/facts.yml<\/code> that runs on hosts in the <strong>database<\/strong> host group and does the following:<\/p>\n<ol>\n<li>A custom Ansible fact <strong>server_role=mysql<\/strong> is created that can be retrieved from <strong>ansible_local.custom.sample_exam<\/strong> when using Ansible setup module.<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat facts.yml\r\n---\r\n- hosts: database\r\n  become: yes\r\n  # gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: create a directory\r\n    file:\r\n      path: \/etc\/ansible\/facts.d\r\n      state: directory\r\n\r\n  - name: touch a file\r\n    file:\r\n      path: \/etc\/ansible\/facts.d\/custom.fact\r\n      state: touch\r\n\r\n  - name: blockinfile\r\n    blockinfile:\r\n      path: \/etc\/ansible\/facts.d\/custom.fact\r\n      block: |\r\n        [sample_exam]\r\n        server_role=mysql\r\n\r\n  - name: debug\r\n    debug:\r\n      msg: \"{{ ansible_local.custom.sample_exam }}\"\r\n<\/pre>\n<p>Run the playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible-playbook 
facts.yml\r\n\r\nPLAY [database] *******************************************************************************************************************\r\n\r\nTASK [Gathering Facts] ************************************************************************************************************\r\nok: [ansible5.hl.local]\r\n\r\nTASK [create a directory] *********************************************************************************************************\r\nok: [ansible5.hl.local]\r\n\r\nTASK [touch a file] ***************************************************************************************************************\r\nchanged: [ansible5.hl.local]\r\n\r\nTASK [blockinfile] ****************************************************************************************************************\r\nok: [ansible5.hl.local]\r\n\r\nTASK [debug] **********************************************************************************************************************\r\nok: [ansible5.hl.local] =&gt; {\r\n\"msg\": {\r\n\"server_role\": \"mysql\"\r\n}\r\n}\r\n\r\nPLAY RECAP ************************************************************************************************************************\r\nansible5.hl.local : ok=5 changed=1 unreachable=0 failed=0<\/pre>\n<p>Check if facts exists:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ ansible database -a 'cat \/etc\/ansible\/facts.d\/custom.fact'\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\n# BEGIN ANSIBLE MANAGED BLOCK\r\n[sample_exam]\r\nserver_role=mysql\r\n# END ANSIBLE MANAGED BLOCK\r\n\r\n[automation@ansible-control plays]$ ansible database -m setup -a \"filter=ansible_local\"\r\nansible5.hl.local | SUCCESS =&gt; {\r\n    \"ansible_facts\": {\r\n        \"ansible_local\": {\r\n            \"custom\": {\r\n                \"sample_exam\": {\r\n                    \"server_role\": \"mysql\"\r\n                }\r\n            }\r\n        }\r\n    },\r\n    \"changed\": 
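<\/pre>\n<p>Once gathered, the custom fact can drive conditionals in later plays. A minimal sketch (the task is illustrative, not part of the exam solution):<\/p>\n<pre class=\"lang:sh decode:true \">- name: run only on hosts whose custom fact marks them as mysql\r\n  debug:\r\n    msg: \"{{ inventory_hostname }} is a MySQL server\"\r\n  when: ansible_local.custom.sample_exam.server_role == 'mysql'<\/pre>\n<pre class=\"lang:sh decode:true \">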
false\r\n}<\/pre>\n<h3>Task 16: Software Packages<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/packages.yml<\/code> that runs on all inventory hosts and does the following:<\/p>\n<ol>\n<li>Installs <strong>tcpdump<\/strong> and <strong>mailx<\/strong> packages on hosts in the <strong>proxy<\/strong> host group.<\/li>\n<li>Installs <strong>lsof<\/strong> and <strong>mailx<\/strong> packages on hosts in the <strong>database<\/strong> host group.<\/li>\n<\/ol>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ cat packages.yml\r\n---\r\n- hosts: all\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: Packages for proxy\r\n    yum:\r\n      name:\r\n      - tcpdump\r\n      - mailx\r\n      state: present\r\n    when: inventory_hostname in groups['proxy']\r\n    tags: proxy\r\n\r\n  - name: Packages for database\r\n    yum:\r\n      name:\r\n      - lsof\r\n      - mailx\r\n      state: present\r\n    when: inventory_hostname in groups['database']\r\n    tags: database<\/pre>\n<p>Run the playbook:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible-playbook packages.yml\r\n\r\nPLAY [all] *********************************************************************************************\r\n\r\nTASK [Packages for proxy] ******************************************************************************\r\nskipping: [ansible3.hl.local]\r\nskipping: [ansible4.hl.local]\r\nskipping: [ansible5.hl.local]\r\nchanged: [ansible2.hl.local]\r\n\r\nTASK [Packages for database] ***************************************************************************\r\nskipping: [ansible3.hl.local]\r\nskipping: [ansible4.hl.local]\r\nskipping: [ansible2.hl.local]\r\nchanged: [ansible5.hl.local]\r\n\r\nPLAY RECAP *********************************************************************************************\r\nansible2.hl.local : ok=1 changed=1 unreachable=0 failed=0\r\nansible3.hl.local : ok=0 changed=0 
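<\/pre>\n<p>An alternative to the <code>when: inventory_hostname in groups[...]<\/code> pattern is to target each group with its own play. A sketch of the same packages.yml written that way:<\/p>\n<pre class=\"lang:sh decode:true \">---\r\n- hosts: proxy\r\n  become: yes\r\n  tasks:\r\n  - name: Packages for proxy\r\n    yum:\r\n      name: [tcpdump, mailx]\r\n      state: present\r\n\r\n- hosts: database\r\n  become: yes\r\n  tasks:\r\n  - name: Packages for database\r\n    yum:\r\n      name: [lsof, mailx]\r\n      state: present<\/pre>\n<pre class=\"lang:sh decode:true\">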
unreachable=0 failed=0\r\nansible4.hl.local : ok=0 changed=0 unreachable=0 failed=0\r\nansible5.hl.local : ok=1 changed=1 unreachable=0 failed=0<\/pre>\n<p>Check installed packages:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible database -a 'yum list installed mailx'\r\n\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\nLoaded plugins: fastestmirror\r\nInstalled Packages\r\nmailx.x86_64 12.5-19.el7 @base\r\n\r\n[automation@ansible-control plays]$ ansible database -a 'yum list installed tcpdump'\r\n\r\nansible5.hl.local | FAILED | rc=1 &gt;&gt;\r\nLoaded plugins: fastestmirror\r\nError: No matching Packages to list\r\nnon-zero return code\r\n\r\n[automation@ansible-control plays]$ ansible database -a 'yum list installed lsof'\r\n\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\nLoaded plugins: fastestmirror\r\nInstalled Packages\r\nlsof.x86_64 4.87-6.el7 @base<\/pre>\n<p>Another way to check whether a package is installed:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ ansible database -m yum -a 'list=tcpdump'\r\nansible5.hl.local | SUCCESS =&gt; {\r\n\"ansible_facts\": {\r\n\"pkg_mgr\": \"yum\"\r\n},\r\n\"changed\": false,\r\n\"results\": [\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"14:tcpdump-4.9.2-4.el7_7.1.x86_64\",\r\n\"epoch\": \"14\",\r\n\"name\": \"tcpdump\",\r\n\"release\": \"4.el7_7.1\",\r\n\"repo\": \"base\",\r\n\"version\": \"4.9.2\",\r\n\"yumstate\": \"available\"\r\n}\r\n]\r\n}\r\n[automation@ansible-control plays]$ ansible database -m yum -a 'list=mailx'\r\nansible5.hl.local | SUCCESS =&gt; {\r\n\"ansible_facts\": {\r\n\"pkg_mgr\": \"yum\"\r\n},\r\n\"changed\": false,\r\n\"results\": [\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"0:mailx-12.5-19.el7.x86_64\",\r\n\"epoch\": \"0\",\r\n\"name\": \"mailx\",\r\n\"release\": \"19.el7\",\r\n\"repo\": \"base\",\r\n\"version\": \"12.5\",\r\n\"yumstate\": \"available\"\r\n},\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": 
\"0:mailx-12.5-19.el7.x86_64\",\r\n\"epoch\": \"0\",\r\n\"name\": \"mailx\",\r\n\"release\": \"19.el7\",\r\n\"repo\": \"installed\",\r\n\"version\": \"12.5\",\r\n\"yumstate\": \"installed\"\r\n}\r\n]\r\n}\r\n[automation@ansible-control plays]$ ansible proxy -m yum -a 'list=mailx'\r\nansible2.hl.local | SUCCESS =&gt; {\r\n\"ansible_facts\": {\r\n\"pkg_mgr\": \"yum\"\r\n},\r\n\"changed\": false,\r\n\"results\": [\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"0:mailx-12.5-19.el7.x86_64\",\r\n\"epoch\": \"0\",\r\n\"name\": \"mailx\",\r\n\"release\": \"19.el7\",\r\n\"repo\": \"base\",\r\n\"version\": \"12.5\",\r\n\"yumstate\": \"available\"\r\n},\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"0:mailx-12.5-19.el7.x86_64\",\r\n\"epoch\": \"0\",\r\n\"name\": \"mailx\",\r\n\"release\": \"19.el7\",\r\n\"repo\": \"installed\",\r\n\"version\": \"12.5\",\r\n\"yumstate\": \"installed\"\r\n}\r\n]\r\n}\r\n[automation@ansible-control plays]$ ansible proxy -m yum -a 'list=tcpdump'\r\nansible2.hl.local | SUCCESS =&gt; {\r\n\"ansible_facts\": {\r\n\"pkg_mgr\": \"yum\"\r\n},\r\n\"changed\": false,\r\n\"results\": [\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"14:tcpdump-4.9.2-4.el7_7.1.x86_64\",\r\n\"epoch\": \"14\",\r\n\"name\": \"tcpdump\",\r\n\"release\": \"4.el7_7.1\",\r\n\"repo\": \"base\",\r\n\"version\": \"4.9.2\",\r\n\"yumstate\": \"available\"\r\n},\r\n{\r\n\"arch\": \"x86_64\",\r\n\"envra\": \"14:tcpdump-4.9.2-4.el7_7.1.x86_64\",\r\n\"epoch\": \"14\",\r\n\"name\": \"tcpdump\",\r\n\"release\": \"4.el7_7.1\",\r\n\"repo\": \"installed\",\r\n\"version\": \"4.9.2\",\r\n\"yumstate\": \"installed\"\r\n}\r\n]\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 17: Services<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/target.yml<\/code> that runs on hosts in the <strong>webservers<\/strong> host group and does the following:<\/p>\n<ol>\n<li>Sets the default boot target to <strong>multi-user<\/strong>.<\/li>\n<\/ol>\n<p>Check which target is actually set on webservers host 
group:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ ansible webservers -a 'systemctl get-default'\r\nansible3.hl.local | CHANGED | rc=0 &gt;&gt;\r\nmulti-user.target\r\n\r\nansible4.hl.local | CHANGED | rc=0 &gt;&gt;\r\nmulti-user.target<\/pre>\n<p>We can set the default target with an ad-hoc command:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ansible webservers -b -a 'systemctl set-default multi-user.target'<\/pre>\n<p>A playbook that does the same (note that <code>shell<\/code> is not idempotent and reports changed on every run):<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat target.yml\r\n---\r\n- hosts: webservers\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n\r\n  - name: Set default system target to boot at multi-user\r\n    shell: systemctl set-default multi-user.target<\/pre>\n<p>&nbsp;<\/p>\n<h3>Task 18: Create and Use Templates to Create Customised Configuration Files<\/h3>\n<p>Create a playbook <code>\/home\/automation\/plays\/server_list.yml<\/code> that does the following:<\/p>\n<ol>\n<li>Playbook uses a Jinja2 template <code>server_list.j2<\/code> to create a file <code>\/etc\/server_list.txt<\/code> on hosts in the <strong>database<\/strong> host group.<\/li>\n<li>The file <code>\/etc\/server_list.txt<\/code> is owned by the <strong>automation<\/strong> user.<\/li>\n<li>File permissions are set to <strong>0600<\/strong>.<\/li>\n<li>SELinux file label should be set to <strong>net_conf_t<\/strong>.<\/li>\n<li>The content of the file is a list of FQDNs of all inventory hosts.<\/li>\n<\/ol>\n<p>After running the playbook, the content of the file <code>\/etc\/server_list.txt<\/code> should be the following:<\/p>\n<pre class=\"\">ansible2.hl.local\r\nansible3.hl.local\r\nansible4.hl.local\r\nansible5.hl.local<\/pre>\n<p>Note: if the FQDN of any inventory host changes, re-running the playbook should update the file with the new values.<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat server_list.yml\r\n---\r\n- 
hosts: database\r\n  become: yes\r\n  gather_facts: no\r\n  ignore_errors: yes\r\n\r\n  vars:\r\n    j2file1: \/home\/automation\/plays\/templates\/server_list.j2\r\n\r\n  tasks:\r\n\r\n  - name: list hosts\r\n    shell: ansible all -i \/home\/automation\/plays\/inventory --list-hosts &gt; \"{{ j2file1 }}\"\r\n    delegate_to: localhost\r\n    register: local_process\r\n\r\n  - name: debug\r\n    debug:\r\n      msg: \"{{ local_process }}\"\r\n\r\n  - name: replace string\r\n    replace:\r\n      path: \"{{ j2file1 }}\"\r\n      regexp: '(.*)hosts(.*):$'\r\n      replace: ' '\r\n    delegate_to: localhost\r\n\r\n  - name: template\r\n    template:\r\n      src: \"{{ j2file1 }}\"\r\n      dest: \/etc\/server_list.txt\r\n      owner: automation\r\n      mode: 0600\r\n      setype: net_conf_t<\/pre>\n<p>Check the content of <code>\/etc\/server_list.txt<\/code>:<\/p>\n<pre class=\"lang:sh decode:true\">automation@ansible-control plays]$ ansible database -a 'cat \/etc\/server_list.txt'\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\n\r\nansible3.hl.local\r\nansible4.hl.local\r\nansible5.hl.local\r\nansible2.hl.local<\/pre>\n<p>Check the mode:<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ ansible database -a 'ls -l \/etc\/server_list.txt'\r\n\r\n\r\nansible5.hl.local | CHANGED | rc=0 &gt;&gt;\r\n-rw-------. 
1 automation root 90 09-19 20:42 \/etc\/server_list.txt<\/pre>\n<p>&nbsp;<\/p>\n<p>A second way, building the template with <code>lineinfile<\/code> (note: this only appends lines, so a changed FQDN leaves the old entry behind):<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat server_list2.yml\r\n---\r\n- name: Create Template\r\n  hosts: all\r\n  become: no\r\n\r\n  tasks:\r\n  - name: create template\r\n    lineinfile:\r\n      path: templates\/server_list.j2\r\n      line: '{{ ansible_fqdn }}'\r\n      create: yes\r\n    delegate_to: localhost\r\n\r\n- name: Copy template\r\n  hosts: database\r\n  become: yes\r\n  gather_facts: no\r\n\r\n  tasks:\r\n  - name: template\r\n    template:\r\n      src: templates\/server_list.j2\r\n      dest: \/etc\/server_list.txt\r\n      owner: automation\r\n      mode: 0600\r\n      setype: net_conf_t\r\n<\/pre>\n<p>Let&#8217;s check:<\/p>\n<pre class=\"lang:sh decode:true\">[automation@ansible-control plays]$ ssh ansible5.hl.local cat \/etc\/server_list.txt\r\nansible3.hl.local\r\nansible5.hl.local\r\nansible2.hl.local\r\nansible4.hl.local\r\n\r\n[automation@ansible-control plays]$ ssh ansible5.hl.local ls -Z \/etc\/server_list.txt\r\n-rw-------. 
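<\/pre>\n<p>A more idiomatic approach is to loop over the inventory inside the Jinja2 template itself, so a changed FQDN is picked up on every run. A sketch (it assumes facts have been gathered for all hosts, e.g. by a fact-gathering play over <code>all<\/code>, so that <code>hostvars<\/code> contains <code>ansible_fqdn<\/code>):<\/p>\n<pre class=\"lang:sh decode:true \">[automation@ansible-control plays]$ cat templates\/server_list.j2\r\n{% for host in groups['all'] %}\r\n{{ hostvars[host]['ansible_fqdn'] }}\r\n{% endfor %}<\/pre>\n<pre class=\"lang:sh decode:true\">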
automation root system_u:object_r:net_conf_t:s0 \/etc\/server_list.txt<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[86],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/3761"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=3761"}],"version-history":[{"count":33,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/3761\/revisions"}],"predecessor-version":[{"id":3763,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/3761\/revisions\/3763"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=3761"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=3761"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=3761"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}