Red Hat EX200 Practice Test

Red Hat Certified System Administrator (RHCSA) Exam


Question 1

Part 2 (on Node2 Server)
Task 8 [Tuning System Performance]
Set your server to use the recommended tuned profile

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# tuned-adm list
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
[root@node2 ~]# tuned-adm recommend
virtual-guest
[root@node2 ~]# tuned-adm profile virtual-guest
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
[root@node2 ~]# reboot
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
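The three steps above (query the recommendation, apply it, verify) can be collapsed into a small helper; a minimal sketch, assuming tuned is installed and tuned.service is running (the function name is ours, not part of tuned):

```shell
# Apply whatever profile tuned recommends for this hardware, then confirm it.
apply_recommended_profile() {
  local rec
  rec=$(tuned-adm recommend) || return 1   # e.g. "virtual-guest" on a VM
  tuned-adm profile "$rec"                 # switch to the recommended profile
  tuned-adm active                         # verify the switch took effect
}
```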


Question 2

Part 2 (on Node2 Server)
Task 7 [Implementing Advanced Storage Features]
Create a thin-provisioned filesystem named think_fs in a Stratis pool named think_pool using the devices.
The filesystem should be mounted on /strav and must be persistent across reboots.

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdd 252:48 0 5G 0 disk
vde 252:64 0 10G 0 disk
vdo1 253:4 0 50G 0 vdo /vbread
[root@node2 ~]# yum install stratis* -y
[root@node2 ~]# systemctl enable --now stratisd.service
[root@node2 ~]# systemctl status stratisd.service
[root@node2 ~]# stratis pool create think_pool /dev/vdd
[root@node2 ~]# stratis pool list
Name Total Physical Properties
think_pool 5 GiB / 37.63 MiB / 4.96 GiB ~Ca,~Cr
[root@node2 ~]# stratis filesystem create think_pool think_fs
[root@node2 ~]# stratis filesystem list
Pool Name  Name      Used     Created         Device                        UUID
think_pool think_fs  546 MiB  Mar 2021 08:21  /stratis/think_pool/think_fs  ade6fdaab06449109540c2f3fdb9417d
[root@node2 ~]# mkdir /strav
[root@node2 ~]# lsblk
[root@node2 ~]# blkid
/dev/mapper/stratis-1-91ab9faf36a540f49923321ba1c5e40d-thin-fs-ade6fdaab06449109540c2f3fdb9417d: UUID="ade6fdaa-b064-4910-9540-c2f3fdb9417d" BLOCK_SIZE="512" TYPE="xfs"
[root@node2 ~]# vim /etc/fstab
UUID=ade6fdaa-b064-4910-9540-c2f3fdb9417d /strav xfs defaults,x-systemd.requires=stratisd.service 0 0
[root@node2 ~]# mount /stratis/think_pool/think_fs /strav/
[root@node2 ~]# df -hT
/dev/mapper/stratis-1-91ab9faf36a540f49923321ba1c5e40d-thin-fs-ade6fdaab06449109540c2f3fdb9417d xfs 1.0T 7.2G 1017G 1% /strav
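Before leaving the task it is worth proving that the fstab entry itself works, not just the manual mount; a sketch of that check, assuming the entry above is in place (the function name and the default mount point argument are ours):

```shell
# Unmount, then remount everything from /etc/fstab and confirm the mount point.
verify_persistent_mount() {
  local mnt="${1:-/strav}"
  umount "$mnt" 2>/dev/null   # detach the manual mount if present
  mount -a || return 1        # remount all fstab entries, as boot would
  findmnt "$mnt"              # non-zero exit status if the mount is missing
}
```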


Question 3

Part 2 (on Node2 Server)
Task 6 [Implementing Advanced Storage Features]
Add a new disk to your virtual machine with a size of 10 GiB.
On this disk, create a VDO volume with a logical size of 50 GiB and mount it persistently on /vbread with an xfs filesystem.

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdd 252:48 0 5G 0 disk
vde 252:64 0 10G 0 disk
[root@node2 ~]# yum install kmod-kvdo vdo
[root@node2 ~]# systemctl enable --now vdo
[root@node2 ~]# systemctl status vdo
[root@node2 ~]# vdo create --name=vdo1 --device=/dev/vde --vdoLogicalSize=50G
[root@node2 ~]# vdostats --hu
Device Size Used Available Use% Space saving%
/dev/mapper/vdo1 10.0G 4.0G 6.0G 40% N/A
[root@node2 ~]# mkfs.xfs -K /dev/mapper/vdo1
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vde 252:64 0 10G 0 disk
vdo1 253:4 0 50G 0 vdo
[root@node2 ~]# mkdir /vbread
[root@node2 ~]# blkid
/dev/mapper/vdo1: UUID="1ec7a341-6051-4aed-8a2c-4d2d61833227" BLOCK_SIZE="4096" TYPE="xfs"
[root@node2 ~]# vim /etc/fstab
UUID=1ec7a341-6051-4aed-8a2c-4d2d61833227 /vbread xfs defaults,x-systemd.requires=vdo.service 0 0
[root@node2 ~]# mount /dev/mapper/vdo1 /vbread/
[root@node2 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vdo1 xfs 50G 390M 50G 1% /vbread
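The `-K` flag used above matters on VDO: it skips the discard pass, so mkfs does not spend a long time issuing discards for blocks VDO already tracks as unused. A minimal sketch of the format-and-mount step as one helper (the device and mount point defaults are the ones from this transcript; the function name is ours):

```shell
# Format a VDO volume without issuing discards, then mount it.
format_and_mount_vdo() {
  local dev="${1:-/dev/mapper/vdo1}" mnt="${2:-/vbread}"
  mkfs.xfs -K "$dev" || return 1   # -K: skip discards, much faster on VDO
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
}
```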


Question 4

Part 2 (on Node2 Server)
Task 5 [Managing Logical Volumes]
Add an additional swap partition of 656 MiB to your system. The swap partition should be activated automatically when your system boots.
Do not remove or otherwise alter any existing swap partition on your system.

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdc 252:32 0 5G 0 disk
vdc1 252:33 0 4.1G 0 part
datavg-datalv 253:3 0 3.9G 0 lvm /data
vdd 252:48 0 5G 0 disk
vde 252:64 0 10G 0 disk
[root@node2 ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 1548 -2
[root@node2 ~]# free -m
total used free shared buff/cache available
Mem: 1816 1078 104 13 633 573
Swap: 2047 1 2046
[root@node2 ~]# parted /dev/vdc print
Number Start End Size Type File system Flags
1 1049kB 4404MB 4403MB primary lvm
[root@node2 ~]# parted /dev/vdc mkpart primary linux-swap 4404MiB 5060MiB
[root@node2 ~]# mkswap /dev/vdc2
Setting up swapspace version 1, size = 656 MiB (687861760 bytes)
no label, UUID=9faf818f-f070-4416-82b2-21a41988a9a7
[root@node2 ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 1804 -2
[root@node2 ~]# swapon /dev/vdc2
[root@node2 ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 1804 -2
/dev/vdc2 partition 671740 0 -3
[root@node2 ~]# blkid
/dev/vdc2: UUID="9faf818f-f070-4416-82b2-21a41988a9a7" TYPE="swap" PARTUUID="0f22a35f-02"
[root@node2 ~]# vim /etc/fstab
UUID=9faf818f-f070-4416-82b2-21a41988a9a7 swap swap defaults 0 0
[root@node2 ~]# reboot
[root@node2 ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 1804 -2
/dev/vdc2 partition 671740 0 -3
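As with the other persistent mounts, the fstab line can be exercised without actually rebooting; a sketch, assuming the entry above exists (the function name is ours; the device is the one from this transcript):

```shell
# Deactivate the new swap area, reactivate everything in /etc/fstab, and check.
verify_swap_entry() {
  swapoff /dev/vdc2 2>/dev/null    # leave the original swap untouched
  swapon -a || return 1            # activate all fstab swap entries, as boot would
  swapon --show | grep -q vdc2     # fails if /dev/vdc2 did not come up
}
```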


Question 5

Part 2 (on Node2 Server)
Task 4 [Managing Logical Volumes]
Resize the logical volume lvrz and its filesystem to 4600 MiB. Make sure the filesystem contents remain intact, with the mount point /datarz.
(Note: partitions are seldom exactly the size requested, so anything within the range of 4200 MiB to 4900 MiB is acceptable.)

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdb 252:16 0 5G 0 disk
vdb1 252:17 0 4.2G 0 part
vgrz-lvrz 253:2 0 4.1G 0 lvm /datarz
vdc 252:32 0 5G 0 disk
vdc1 252:33 0 4.4G 0 part
datavg-datalv 253:3 0 3.9G 0 lvm /data
vdd 252:48 0 5G 0 disk
vde 252:64 0 10G 0 disk
[root@node2 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvrz vgrz -wi-ao---- 4.10g
[root@node2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vgrz 1 1 0 wz--n- <4.15g 48.00m
[root@node2 ~]# parted /dev/vdb print
Number Start End Size Type File system Flags
1 1049kB 4456MB 4455MB primary lvm
[root@node2 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgrz-lvrz ext4 4.0G 17M 3.8G 1% /datarz
[root@node2 ~]# parted /dev/vdb mkpart primary 4456MiB 5100MiB
[root@node2 ~]# parted /dev/vdb set 2 lvm on
[root@node2 ~]# udevadm settle
[root@node2 ~]# pvcreate /dev/vdb2
Physical volume "/dev/vdb2" successfully created.
[root@node2 ~]# vgextend vgrz /dev/vdb2
Volume group "vgrz" successfully extended
[root@node2 ~]# lvextend -r -L 4600M /dev/vgrz/lvrz
Size of logical volume vgrz/lvrz changed from 4.10 GiB (1050 extents) to 4.49 GiB (1150 extents).
Logical volume vgrz/lvrz successfully resized.
[root@node2 ~]# resize2fs /dev/vgrz/lvrz
[root@node2 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgrz-lvrz ext4 4.4G 17M 4.2G 1% /datarz
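The key step above is `lvextend -r`, which runs the matching filesystem resize (resize2fs for ext4) itself, so the separate resize2fs call is only a no-op confirmation. A minimal sketch of the grow step as a reusable helper (LV and mount point names are the ones from this transcript; the function name is ours):

```shell
# Grow an LV and its filesystem together, then show the result.
grow_lv_to() {
  local size="${1:?usage: grow_lv_to <size>, e.g. 4600M}"
  lvextend -r -L "$size" /dev/vgrz/lvrz || return 1   # -r resizes the fs too
  df -hT /datarz
}
```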


Question 6

Part 2 (on Node2 Server)
Task 3 [Managing Logical Volumes]
Create a new volume group named datavg with a physical extent size of 16 MiB.
Create a new logical volume named datalv with a size of 250 extents and an xfs filesystem.
The logical volume should be mounted automatically under /data at system boot time.

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdb 252:16 0 5G 0 disk
vdb1 252:17 0 4.2G 0 part
vgrz-lvrz 253:2 0 4.1G 0 lvm /datarz
vdc 252:32 0 5G 0 disk
vdd 252:48 0 5G 0 disk
vde 252:64 0 10G 0 disk
[root@node2 ~]# parted /dev/vdc mklabel msdos
[root@node2 ~]# parted /dev/vdc mkpart primary 1MiB 4200MiB
[root@node2 ~]# parted /dev/vdc set 1 lvm on
[root@node2 ~]# udevadm settle
[root@node2 ~]# pvcreate /dev/vdc1
Physical volume "/dev/vdc1" successfully created.
[root@node2 ~]# vgcreate -s 16M datavg /dev/vdc1
Volume group "datavg" successfully created
[root@node2 ~]# lvcreate -n datalv -L 4000M datavg
Logical volume "datalv" created.
[root@node2 ~]# mkfs.xfs /dev/datavg/datalv
[root@node2 ~]# mkdir /data
[root@node2 ~]# blkid
/dev/mapper/datavg-datalv: UUID="7397a292-d67d-4632-941e-382e2bd922ce" BLOCK_SIZE="512" TYPE="xfs"
[root@node2 ~]# vim /etc/fstab
UUID=7397a292-d67d-4632-941e-382e2bd922ce /data xfs defaults 0 0
[root@node2 ~]# mount UUID=7397a292-d67d-4632-941e-382e2bd922ce /data
[root@node2 ~]# reboot
[root@node2 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/datavg-datalv xfs 3.9G 61M 3.9G 2% /data
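The transcript sizes the LV as 4000 MiB, which is exactly the requested 250 extents at the 16 MiB extent size (`lvcreate -l 250 -n datalv datavg` would express the extent count directly). A quick arithmetic check of that equivalence:

```shell
# Convert a count of physical extents to MiB: extents x PE size.
extents_to_mib() {
  local extents="$1" pe_mib="$2"
  echo $(( extents * pe_mib ))
}
extents_to_mib 250 16   # prints 4000
```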


Question 7

Part 2 (on Node2 Server)
Task 2 [Installing and Updating Software Packages]
Configure your system to use these locations as default repositories:
http://utility.domain15.example.com/BaseOS
http://utility.domain15.example.com/AppStream
Also configure GPG checking to use this key location:
http://utility.domain15.example.com/RPM-GPG-KEY-redhat-release

Answer:

See the explanation below.

Explanation:
[root@node2 ~]# vim /etc/yum.repos.d/redhat.repo
[BaseOS]
name=BaseOS
baseurl=http://utility.domain15.example.com/BaseOS
enabled=1
gpgcheck=1
gpgkey=http://utility.domain15.example.com/RPM-GPG-KEY-redhat-release
[AppStream]
name=AppStream
baseurl=http://utility.domain15.example.com/AppStream
enabled=1
gpgcheck=1
gpgkey=http://utility.domain15.example.com/RPM-GPG-KEY-redhat-release
[root@node2 ~]# yum clean all
[root@node2 ~]# yum repolist
repo id repo name
AppStream AppStream
BaseOS BaseOS
[root@node2 ~]# yum list all
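The repo stanzas can also be seeded from the command line rather than typed by hand; a sketch, assuming dnf-plugins-core is installed (note that `dnf config-manager --add-repo` only writes the baseurl, so gpgcheck and gpgkey still have to be added to the generated files afterwards):

```shell
# Create one .repo file per repository from its URL.
add_exam_repos() {
  local base="http://utility.domain15.example.com" repo
  for repo in BaseOS AppStream; do
    dnf config-manager --add-repo "$base/$repo" || return 1
  done
}
```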


Question 8

Part 2 (on Node2 Server)
Task 1 [Controlling the Boot Process]
Interrupt the boot process and reset the root password, changing it to kexdrams, to gain access to the system.

Answer:

See the explanation below.

Explanation:
1. Reboot the server by pressing Ctrl+Alt+Del.
2. When the boot-loader menu appears, use the cursor keys to highlight the default boot-loader entry.
3. Press e to edit the current entry.
4. Use the cursor keys to navigate to the line that starts with linux.
5. Press End to move the cursor to the end of the line.
6. Append rd.break to the end of the line.
7. Press Ctrl+x to boot using the modified configuration.
8. At the switch_root prompt:

switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot
sh-4.4# echo kexdrams | passwd --stdin root
Changing password for user root.
passwd: all authentication tokens updated successfully.
sh-4.4# touch /.autorelabel
sh-4.4# exit
switch_root:/# exit

Type exit twice to continue booting your system as usual.


Question 9

Part 1 (on Node1 Server)
Task 17 [Accessing Linux File Systems]
Find all the files owned by user alex and redirect the output to /home/alex/files.

Answer:

See the explanation below.

Explanation:
[root@node1 ~]# find / -user alex -type f > /home/alex/files
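Run as shown, find will also print "Permission denied" noise to the terminal for unreadable paths such as /proc, since only stdout is redirected to the file. A sketch that discards stderr as well (same results file, just a quieter run; the function name is ours):

```shell
# Collect all regular files owned by a user into an output file, silencing
# unreadable-path errors on stderr.
files_owned_by() {
  local user="${1:?user}" out="${2:?output file}"
  find / -user "$user" -type f 2>/dev/null > "$out"
}
```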


Question 10

Part 1 (on Node1 Server)
Task 16 [Running Containers]
Configure your host journal to store journal data persistently across reboots.
Copy all journal files from /var/log/journal/ and put them in /home/shangrila/container-logserver.
Create and mount /home/shangrila/container-logserver as persistent storage for the container at /var/log/ when the container starts.

Answer:

See the explanation below.

Explanation:
[shangrila@node1 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5ffe018a53c registry.domain15.example.com:5000/rhel8/rsyslog:latest /bin/rsyslog.sh 5 seconds ago Up 4 seconds ago logserver
[shangrila@node1 ~]$ podman stats logserver
Error: stats is not supported in rootless mode without cgroups v2
[shangrila@node1 ~]$ podman stop logserver
d5ffe018a53ca7eb075bf560d1f30822ab6fe51eba58fd1a8f370eda79806496
[shangrila@node1 ~]$ podman rm logserver
Error: no container with name or ID logserver found: no such container
[shangrila@node1 ~]$ mkdir -p container-journal/
[shangrila@node1 ~]$ sudo systemctl restart systemd-journald
[sudo] password for shangrila:
[shangrila@node1 ~]$ sudo cp -av /var/log/journal/* container-journal/
[shangrila@node1 ~]$ sudo chown -R shangrila container-journal/
[shangrila@node1 ~]$ podman run -d --name logserver -v /home/shangrila/container-journal/:/var/log/journal:Z registry.domain15.example.com:5000/rhel8/rsyslog
[shangrila@node1 ~]$ podman ps
[shangrila@node1 ~]$ loginctl enable-linger
[shangrila@node1 ~]$ loginctl show-user shangrila | grep -i linger
Linger=yes
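Restarting systemd-journald only produces on-disk logs if persistent storage is enabled first, a step this transcript does not show; a sketch of the usual prerequisite (journald switches to persistent storage when /var/log/journal exists, or when Storage=persistent is set in journald.conf):

```shell
# Enable persistent journal storage, then restart journald to pick it up.
make_journal_persistent() {
  sudo mkdir -p /var/log/journal            # presence of this dir enables persistence
  sudo systemctl restart systemd-journald   # journald begins writing to disk
}
```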
[shangrila@node1 ~]$ podman stop logserver
[shangrila@node1 ~]$ podman rm logserver
[shangrila@node1 ~]$ systemctl --user daemon-reload
[shangrila@node1 ~]$ systemctl --user enable --now container-logserver
[shangrila@node1 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3903e1d09170 registry.domain15.example.com:5000/rhel8/rsyslog:latest /bin/rsyslog.sh 4 seconds ago Up 4 seconds ago logserver
[shangrila@node1 ~]$ systemctl --user stop container-logserver.service
[shangrila@node1 ~]$ sudo reboot
[shangrila@node1 ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7e6cd59c506a registry.domain15.example.com:5000/rhel8/rsyslog:latest /bin/rsyslog.sh 10 seconds ago Up 9 seconds ago logserver
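The enable step above assumes a container-logserver.service user unit already exists under ~/.config/systemd/user, which the transcript never creates; a sketch of the usual generation step (assumes a podman version with `podman generate systemd` and that the logserver container exists when it runs):

```shell
# Generate a user systemd unit from the running container's configuration.
generate_container_unit() {
  mkdir -p ~/.config/systemd/user
  cd ~/.config/systemd/user || return 1
  # --files writes container-logserver.service here; --new recreates the
  # container from scratch on every service start.
  podman generate systemd --name logserver --files --new
  systemctl --user daemon-reload
}
```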
