
MURAL Software Upgrade Guide
Version: 5.0.2.p3
Published: 18-09-2020
Copyright © 2020, Cisco Systems, Inc.
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000, 800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
Table of Contents
1. Introduction
2. Prerequisites
   2.1 Monitor the System Health
   2.2 Download RPMs
   2.3 Compare md5sum
3. Upgrading MURAL from v5.0.2.p2 to v5.0.2.p3
   3.1 Extract the TAR file
   3.2 Load Extracted Files into HDFS
   3.3 Update the Repositories
   3.4 Take a Backup
   3.5 Update Variables
   3.6 Refresh Inventory
   3.7 Stop the Running Jobs
   3.8 Run the Ansible solution
4. Post Upgrade Procedure
   4.1 Run the Generated Reports
   4.2 Verify the .csv file
   4.3 Generate Encrypted Password
   4.4 Use Encrypted Password
   4.5 Start the Jobs
5. Cemus Report Verification
   5.1 Verify the DailyCemusReport
   5.2 Verify the MonthlyCemusReport
   5.3 Verify the GRTCemusReport
6. Cemus Report Generation
   6.1 Daily Cemus Report generation
   6.2 Monthly Cemus Report generation
   6.3 GRT Cemus Report generation
7. Cleanup Job configuration
8. Troubleshooting MURAL
   8.1 Unable to launch User Interface
   8.2 Postgres is Down
   8.3 Active Master Node is Down
   8.4 Incorrect password is set
   8.5 HAProxy Error-503
   8.6 Services are down
   8.7 MURAL UI shows old or no data
9. PatchRollback
10. Appendix-A
   10.1 customergroup.csv
1. Introduction
Mobility Unified Reporting and Analytics (MURAL) is a next-gen analytics solution tailored for telecoms. It is multi-dimensional and has been enhanced to provide visibility into each subscriber's behavior and usage. The CISCO MURAL application requires namenodes to collect the data streams, compute nodes for data analysis and aggregation, a load balancer, and a management node that manages the installation of all other nodes. This document provides step-by-step information on upgrading the MURAL patch from version 5.0.2.p2 to version 5.0.2.p3.
2. Prerequisites
The following prerequisites must be fulfilled to begin the upgrade process:
2.1 Monitor the System Health
2.1.1 Check the Postgres Health
crm_mon -Arf1
Output:
* Node mural-mgt-1.us.guavus.com:
    + master-pgsql           : 10
    + pgsql-data-status      : STREAMING|POTENTIAL
    + pgsql-master-baseline  : 0000001285000090
    + pgsql-receiver-status  : normal
    + pgsql-status           : HS:alone
    + pgsql-xlog-loc         : 000000128427F600
* Node mural-nn-1.us.guavus.com:
    + master-pgsql           : 1000
    + pgsql-data-status      : LATEST
    + pgsql-master-baseline  : 0000001247000090
    + pgsql-receiver-status  : normal (master)
    + pgsql-status           : PRI
    + pgsql-xlog-loc         : 000000128173E718
* Node mural-nn-2.us.guavus.com:
    + master-pgsql           : 100
    + pgsql-data-status      : STREAMING|SYNC
    + pgsql-receiver-status  : normal
    + pgsql-status           : HS:sync
    + pgsql-xlog-loc         : 000000128173E888
2.1.2 Enable Ansible
Note: As in the previous CISCO release, Ansible and Jinja2 were removed from the management node. To run the installer, re-install both components; after the upgrade, follow the given steps to remove them again.
1. To install from the management node, run the following commands:
cd /etc/reflex-provisioner/packages/pip
pip install ansible-2.3.1.0.tar.gz Jinja2-2.8.1-py2.py3-none-any.whl
2. Verify the installation:
pip list | grep -iE 'jinja2|ansible'
3. To uninstall (after the upgrade is complete):
pip uninstall jinja2 ansible
2.1.3 Verify all the services
To verify that all the services are up and running, run the following commands:
cd /etc/reflex-provisioner
ansible-playbook -i inventory/generated/prod/mural/hosts playbooks/platform/service_checks/all.yml -k
2.2 Download RPMs
Perform the following steps to download the RPMs:
1. Log into the management node.
2. Download the patch artifacts from the SFTP server to the location /opt/repos/mrx/5.6/5.6.2.rc1/:
cd /opt/repos/mrx/5.6/5.6.2.rc1
Note: Contact Technical Support for any information on how to access the SFTP server.
The following artifacts are included in the patch:
- patch3_artc.tar.gz
- reflex-aggregation-5.6.2.rc1-296.el7.centos.x86_64.rpm
- reflex-datafactory-5.6.2.rc1-296.el7.centos.x86_64.rpm
- reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64.rpm
2.3 Compare md5sum
To verify the data integrity of the copied packages, compare the md5sum values of the RPMs against the values published for the downloaded artifacts in the Release Notes, using the following command:
# md5sum <filename>
For example:
md5sum reflex-aggregation-5.6.2.rc1-296.el7.centos.x86_64.rpm
b57013b0f764e46acc19ee2627aa5e8f
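The checksum comparison can be scripted so each RPM is checked mechanically rather than by eye. The helper below is a convenience sketch, not part of the official procedure; the expected values still come from the Release Notes.

```shell
# Compare a file's md5sum against an expected checksum (e.g. from the
# Release Notes). Prints OK on a match, MISMATCH (and returns 1) otherwise.
verify_md5() {
  expected="$1"; file="$2"
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file (expected $expected, got $actual)" >&2
    return 1
  fi
}

# Example usage with the checksum quoted in this guide:
# verify_md5 b57013b0f764e46acc19ee2627aa5e8f \
#   reflex-aggregation-5.6.2.rc1-296.el7.centos.x86_64.rpm
```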
3. Upgrading MURAL from v5.0.2.p2 to v5.0.2.p3
To start the upgrade process, perform the following steps in the specified order:
3.1 Extract the TAR file
To extract the TAR file, run the following command:
tar -zxvf patch3_artc.tar.gz
The output may resemble the following sample:
patch3_artc/
patch3_artc/DailyQueriesCemus.txt
patch3_artc/MonthlyQueriesCemus.txt
patch3_artc/GRTQueriesCemus.txt
patch3_artc/DailyCemusReport.properties
patch3_artc/MonthlyCemusReport.properties
patch3_artc/GRTCemusReport.properties
patch3_artc/protocol-values.txt
patch3_artc/grt-protocol-values.csv
patch3_artc/GenerateReportsVodafone.py
3.2 Load Extracted Files into HDFS
Load the .txt and .properties files extracted in Section 3.1 into HDFS. The files to be loaded are as follows:
- DailyQueriesCemus.txt
- DailyCemusReport.properties
- MonthlyQueriesCemus.txt
- MonthlyCemusReport.properties
- GRTQueriesCemus.txt
- GRTCemusReport.properties
- grt-protocol-values.csv
Run the following commands:
cd patch3_artc/
hdfs dfs -put -f DailyQueriesCemus.txt DailyCemusReport.properties protocol-values.txt /data/streaming
hdfs dfs -put -f MonthlyQueriesCemus.txt MonthlyCemusReport.properties /data/streaming
hdfs dfs -put -f GRTQueriesCemus.txt GRTCemusReport.properties grt-protocol-values.csv /data/streaming
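Before running the hdfs dfs -put commands, it can help to confirm that every artifact actually exists in the extraction directory, so a typo does not silently skip a file. This pre-check is an optional convenience, not a documented step:

```shell
# Report any missing files before uploading to HDFS; returns non-zero
# if anything in the list is absent from the current directory.
check_patch_files() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "MISSING: $f" >&2; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all files present"
}

# Usage (from inside patch3_artc/):
# check_patch_files DailyQueriesCemus.txt DailyCemusReport.properties \
#   protocol-values.txt MonthlyQueriesCemus.txt MonthlyCemusReport.properties \
#   GRTQueriesCemus.txt GRTCemusReport.properties grt-protocol-values.csv
```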
3.3 Update the Repositories
Before updating the repositories, ensure that all the packages have been downloaded from the SFTP server. For more information, refer to Prerequisites. Perform the following steps to create the repo and install the RPM with yum:
1. Run the following commands to update all the available yum repositories:
cd /opt/repos/mrx/5.6/5.6.2.rc1/
createrepo /opt/repos/mrx/5.6/5.6.2.rc1/
The output may resemble the following sample:
Spawning worker 0 with 3 pkgs
Spawning worker 1 with 3 pkgs
Spawning worker 2 with 3 pkgs
Spawning worker 3 with 3 pkgs
Spawning worker 4 with 3 pkgs
Spawning worker 5 with 3 pkgs
Spawning worker 6 with 3 pkgs
Spawning worker 7 with 3 pkgs
Spawning worker 8 with 2 pkgs
Spawning worker 9 with 2 pkgs
Spawning worker 10 with 2 pkgs
Spawning worker 11 with 2 pkgs
Spawning worker 12 with 2 pkgs
Spawning worker 13 with 2 pkgs
Spawning worker 14 with 2 pkgs
Spawning worker 15 with 2 pkgs
Spawning worker 16 with 2 pkgs
Spawning worker 17 with 2 pkgs
Spawning worker 18 with 2 pkgs
Spawning worker 19 with 2 pkgs
Spawning worker 20 with 2 pkgs
Spawning worker 21 with 2 pkgs
Spawning worker 22 with 2 pkgs
Spawning worker 23 with 2 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
2. Run the following command to install the updated RPM:
[root@mural001-mgt-01 5.6.2.rc1]# yum install -y /opt/repos/mrx/5.6/5.6.2.rc1/reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64.rpm
The output may resemble the following sample:
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Examining /opt/repos/mrx/5.6/5.6.2.rc1/reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64.rpm: reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64
Marking /opt/repos/mrx/5.6/5.6.2.rc1/reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64.rpm as an update to reflex-solution-provisioner-5.6.2.rc1-283.el7.centos.x86_64
Resolving Dependencies
--> Running transaction check
---> Package reflex-solution-provisioner.x86_64 0:5.6.2.rc1-283.el7.centos will be updated
---> Package reflex-solution-provisioner.x86_64 0:5.6.2.rc1-296.el7.centos will be an update
--> Finished Dependency Resolution
Dependencies Resolved

==============================================================
 Package                      Arch    Version                   Repository                                                     Size
==============================================================
Updating:
 reflex-solution-provisioner  x86_64  5.6.2.rc1-296.el7.centos  /reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64  696 k

Transaction Summary
==============================================================
Upgrade  1 Package

Total size: 696 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64   1/2
  Cleanup    : reflex-solution-provisioner-5.6.2.rc1-283.el7.centos.x86_64   2/2
  Verifying  : reflex-solution-provisioner-5.6.2.rc1-296.el7.centos.x86_64   1/2
  Verifying  : reflex-solution-provisioner-5.6.2.rc1-283.el7.centos.x86_64   2/2

Updated:
  reflex-solution-provisioner.x86_64 0:5.6.2.rc1-296.el7.centos

Complete!
3.4 Take a Backup
Take a backup of all the values and variables stored in the mrx and common directories. These variables are used to run the solution on the Hadoop cluster and to execute the Spark jobs. Run the following commands to take the backup:
tar -zcvf drop3_mural_groupvars_bkp.tgz /etc/reflex-provisioner/inventory/generated/prod/mural/group_vars/all/mrx
tar -zcvf drop3_common_bkp.tgz /etc/reflex-provisioner/inventory/generated/prod/mural/vars/customer/common
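After taking the backup, it is worth confirming that each archive is readable and non-empty before proceeding. The following verification is an optional sketch, not a documented step:

```shell
# List the contents of a backup archive and report the entry count;
# a corrupt or empty archive fails fast here instead of at rollback time.
verify_backup() {
  archive="$1"
  count=$(tar -tzf "$archive" | wc -l | tr -d ' ')
  echo "$archive contains $count entries"
}

# e.g. verify_backup drop3_mural_groupvars_bkp.tgz
#      verify_backup drop3_common_bkp.tgz
```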
3.5 Update Variables
In the file /etc/reflex-provisioner/work_dir/reflex-solution-provisioner/inventory/templates/group_vars/global/all/mrx/all.yml, change the value of the install_type variable to upgrade.
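The change can also be made non-interactively. The sketch below assumes the file contains a top-level "install_type: <value>" line; verify against the real all.yml before using it:

```shell
# Rewrite the install_type value in a YAML inventory file in place.
set_install_type_upgrade() {
  sed -i 's/^install_type:.*/install_type: upgrade/' "$1"
}

# Usage on the real inventory template:
# set_install_type_upgrade /etc/reflex-provisioner/work_dir/reflex-solution-provisioner/inventory/templates/group_vars/global/all/mrx/all.yml
```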
3.6 Refresh Inventory
Run the following commands to refresh the inventory: # cd /etc/reflex-provisioner
# ./scripts/composition/refresh.sh -i mural -s prod
The output may resemble the following sample:
-i mural was triggered!
-s prod was triggered!
Refreshing init inventory
Refreshing mural inventory
3.7 Stop the Running Jobs
1. Stop the input data flow.
2. Stop all the running jobs before executing the solution installer:
a. Log into the namenode where the jobs are running. For example, jobs in the MURAL 5 lab run from the active namenode, nn2.
# ssh <NN2 FQDN>
b. Stop the Talend jobs on this active namenode by checking the talend http and talend nonhttp processes and killing the process IDs if they are running:
i. Check the status of the talend http process:
# ps -ef | grep talend-http | grep -v grep
The output may resemble the following sample:
root 51484 45694 0 06:59 pts/2 00:00:00 sh /root/jobs/ingestion_jobs/run-talend-http-job.sh
ii. Kill the talend http process if it is running:
# kill -9 <PID's>
For example,
# kill -9 51484
iii. Check status of talend non-http process:
# ps -ef | grep talend-nonhttp | grep -v grep
The output may resemble the following sample:
root 51483 45693 0 06:59 pts/2 00:00:00 sh /root/jobs/ingestion_jobs/run-talend-nonhttp-job.sh
iv. Kill talend non-http process if it is running:
# kill -9 <PID's>
For example,
# kill -9 51483
c. Verify that the jobs are killed:
[root@mural-nn-2 ~]# ps -ef | egrep 'talend-http|talend-nonhttp' | grep -v grep
d. Stop the master_http job:
# ps -ef | grep master_http | grep -v grep
The output may resemble the following sample:
root 6420 60609 3 07:05 pts/2 00:09:22 /usr/java/latest/bin/java -cp /usr/lib/spark2/jars/netty-all-4.0.42.Final.jar:/usr/lib/spark2/jars/*:/opt/tms/java/DataMediationEngine/WEB-INF/classes/:/opt/tms/java/dme-with-dependencies.jar:/opt/tms/java/ddj-with-dependencies.jar:/usr/lib/hive/lib/*:/usr/lib/spark2/conf/:/usr/lib/spark2/jars/*:/etc/hadoop/conf/:/etc/hadoop/conf/:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//* -Xmx2g -XX:-ResizePLAB org.apache.spark.deploy.SparkSubmit --master yarn-client --conf spark.scheduler.allocation.file=/opt/tms/java/DataMediationEngine/WEB-INF/classes/poolConfig.xml --conf spark.driver.extraJavaOptions=-XX:-ResizePLAB --properties-file /opt/tms/java/DataMediationEngine/WEB-INF/classes/spark.properties --class com.guavus.reflex.marketing.dme.job.MRXMasterJob --name master_http --queue jobs.dme --files /opt/tms/java/DataMediationEngine/WEB-INF/classes/log4jexecutor.properties,/opt/tms/java/DataMediationEngine/WEB-INF/classes/streaming.ini --jars /opt/tms/java/dme-with-dependencies.jar /opt/tms/java/dme-with-dependencies.jar
root 60609 45694 0 07:02 pts/2 00:00:00 sh /root/jobs/streaming_jobs/master_http_wrapper.sh
- Obtain the process IDs (PIDs) from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 6420 60609
- Wait for the counters in the /var/log/mural_logs/master_http.log file. Once the counters show 0, proceed to stop the master_nonhttp job:
# ps -ef | grep master_nonhttp | grep -v grep
The output may resemble the following sample:
root 61263 45694 0 07:03 pts/2 00:00:00 sh /root/jobs/streaming_jobs/master_nonhttp_wrapper.sh
root 61349 61263 9 07:03 pts/2 00:24:55 /usr/java/latest/bin/java -cp /usr/lib/spark2/jars/netty-all-4.0.42.Final.jar:/usr/lib/spark2/jars/*:/opt/tms/java/DataMediationEngine2/WEB-INF/classes/:/opt/tms/java/dme-with-dependencies.jar:/opt/tms/java/ddj-with-dependencies.jar:/usr/lib/hive/lib/*:/usr/lib/spark2/conf/:/usr/lib/spark2/jars/*:/etc/hadoop/conf/:/etc/hadoop/conf/:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//* -Xmx2g -XX:-ResizePLAB org.apache.spark.deploy.SparkSubmit --master yarn-client --conf spark.scheduler.allocation.file=/opt/tms/java/DataMediationEngine2/WEB-INF/classes/poolConfig.xml --conf spark.driver.extraJavaOptions=-XX:-ResizePLAB --properties-file /opt/tms/java/DataMediationEngine2/WEB-INF/classes/spark.properties --class com.guavus.reflex.marketing.dme.job.MRXMasterJob --name master_nonhttp --queue jobs.dme --files /opt/tms/java/DataMediationEngine2/WEB-INF/classes/log4jexecutor.properties,/opt/tms/java/DataMediationEngine2/WEB-INF/classes/streaming.ini --jars /opt/tms/java/dme-with-dependencies.jar /opt/tms/java/dme-with-dependencies.jar
- Obtain the process IDs (PIDs) from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 61263 61349
- Verify that the jobs are killed:
# ps -ef | egrep 'master_http|master_nonhttp' | grep -v grep
No output indicates that both processes are stopped.
- Wait for the counters in the /var/log/mural_logs/master_nonhttp.log file. Once the counters show 0, proceed to stop the Aggregation jobs.
e. Stop the CONV and SDR jobs:
# ps -ef | grep conv_config | grep -v grep
The output may resemble the following sample:
root 38891 1 0 Apr22 ? 00:00:00 sh /root/jobs/aggregation_jobs/run-conv_config_file.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 38891
- Find the sdr process:
# ps -ef | grep sdr_config | grep -v grep
The output may resemble the following sample:
root 55161 1 0 Apr22 ? 00:00:00 /bin/bash /root/jobs/aggregation_jobs/run-sdr_config_file.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 55161
f. Stop the 5 minutes Aggregation Job:
# ps -ef | grep 5min-agg | grep -v grep
The output may resemble the following sample:
root 5131 45694 0 07:05 pts/2 00:00:00 sh /root/jobs/aggregation_jobs/run-5min-agg-mgr_sh.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 5131
g. Stop the hourly Aggregation Job:
# ps -ef | grep hourly-agg | grep -v grep
The output may resemble the following sample:
root 17868 45694 0 07:10 pts/2 00:00:00 sh /root/jobs/aggregation_jobs/run-hourly-agg-mgr_sh.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 17868
h. Stop the daily Aggregation Job:
# ps -ef | grep daily-agg | grep -v grep
The output may resemble the following sample:
root 19338 74594 0 07:10 pts/2 00:00:00 sh /root/jobs/aggregation_jobs/run-daily-agg-mgr_sh.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 19338
i. Stop the monthly Aggregation Job:
# ps -ef | grep monthly-agg | grep -v grep
The output may resemble the following sample:
root 16543 55644 0 07:10 pts/2 00:00:00 sh /root/jobs/aggregation_jobs/run-monthly-agg-mgr_sh.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 16543
j. Verify that the aggregation jobs are killed:
# ps -ef | egrep '5min-agg|hourly-agg|daily-agg|monthly-agg' | grep -v grep
No output indicates that the processes are stopped.
k. Stop the cleanup job:
# ps -ef | grep cleanup | grep -v grep
The output may resemble the following sample:
root 11249 1 0 Jun23 ? 00:00:00 sh /root/jobs/misc_jobs/run_cleanup_job.sh
- Obtain the process ID from the preceding output and run the following command:
# kill -9 <PID's>
For example,
# kill -9 11249
l. Run the following command to ensure that no jobs are running in Yarn:
[root@mural-nn-2 aggregation_jobs]# yarn application -list
The output may resemble the following sample:
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):2
Application-Id  Application-Name  Application-Type  User  Queue  State  Final-State  Progress  Tracking-URL
If any applications are listed, kill them:
# yarn application -kill <Application-Id>
For example,
# yarn application -kill application_1585652398251_0003
Killing application application_1585652398251_0003
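The repeated ps/grep/kill pattern in steps b through k can be wrapped in a small helper. This is a convenience sketch, not part of the official procedure; it simply automates the same PID lookup and kill -9 shown above:

```shell
# List the PIDs whose command line matches a pattern, excluding the grep
# process itself (same filter used throughout this section).
pids_for() {
  ps -ef | grep "$1" | grep -v grep | awk '{print $2}'
}

# Kill all matching processes, or report that none are running.
stop_job() {
  pids=$(pids_for "$1")
  if [ -n "$pids" ]; then
    echo "killing $1: $pids"
    kill -9 $pids
  else
    echo "no $1 process running"
  fi
}

# e.g. stop_job talend-http; stop_job master_nonhttp; stop_job 5min-agg
```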
3.8 Run the Ansible solution
Run the following command on the management node to run the installer:
cd /etc/reflex-provisioner
ansible-playbook -i inventory/generated/prod/mural/hosts playbooks/mrx/deploy.yml -k --skip-tags tomcat,spetomcat,haproxy,azkacli,Grafana
Notes:
- Enter an initialization key of size 32 bytes. If the key is not available, 12345678901234567890123456789012 can be used as a default key.
- Press ENTER if encryption is not required.
- Enter a vector key of size 16 bytes, or use 1234567890123456 as a default key if the key is not available.
- Press ENTER if encryption is not required.
The output may resemble the following sample:
PLAY RECAP *********************************************************************
localhost                 : ok=23  changed=1  unreachable=0 failed=0
mural001-lb-01.cisco.com  : ok=23  changed=1  unreachable=0 failed=0
mural001-lb-02.cisco.com  : ok=23  changed=1  unreachable=0 failed=0
mural001-mgt-01.cisco.com : ok=24  changed=1  unreachable=0 failed=0
mural001-mst-01.cisco.com : ok=105 changed=27 unreachable=0 failed=0
mural001-mst-02.cisco.com : ok=83  changed=14 unreachable=0 failed=0
mural001-slv-01.cisco.com : ok=78  changed=13 unreachable=0 failed=0
mural001-slv-02.cisco.com : ok=76  changed=13 unreachable=0 failed=0
mural001-slv-03.cisco.com : ok=76  changed=13 unreachable=0 failed=0

Restart the MRX UI tomcat container if required (that is, when the tomcat container is not restarted by the solution installer):
kubectl get pods -o wide    (lists the running pods; copy the tomcat pod IDs)
kubectl delete pod <tomcat pod id>
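The tomcat pod lookup can be scripted. The sketch below assumes the UI pod names contain the string "tomcat", as in the manual step above; adjust the pattern to your deployment:

```shell
# Read `kubectl get pods` output on stdin and print the names of pods
# whose name contains "tomcat".
tomcat_pods() {
  awk '$1 ~ /tomcat/ {print $1}'
}

# Usage (deletes each matching pod so Kubernetes recreates it):
# kubectl get pods -o wide | tomcat_pods | xargs -r -n1 kubectl delete pod
```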
4. Post Upgrade Procedure
4.1 Run the Generated Reports
1. Log into the namenode, nn.
2. To execute the generated reports, copy the GenerateReportsVodafone.py file to the namenodes using the following commands:
cd /opt/repos/mrx/5.6/5.6.2.rc1/patch3_artc
for i in `cat /etc/hosts | grep -E 'nn' | awk '{print $1}'`; do scp GenerateReportsVodafone.py root@$i:/opt/etc/scripts/; done
Note: The namenode pattern "nn" will differ based on the namenode names in the /etc/hosts file of the customer.
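The copy loop above simply extracts namenode IPs from /etc/hosts. That selection logic can be checked in isolation before running the scp (the pattern "nn" is an example; substitute the customer's actual namenode naming):

```shell
# Print the IP (first column) of every /etc/hosts line whose content
# matches the given hostname pattern.
namenode_ips() {
  # $1 = hosts file, $2 = hostname pattern
  grep -E "$2" "$1" | awk '{print $1}'
}

# e.g. namenode_ips /etc/hosts nn
```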
4.2 Verify the .csv file
Check the location of the customergroup.csv (sample) file using the following command:
hdfs dfs -ls /data/streaming/customergroup.csv
Output:
-rw-r--r-- 3 root hadoop 270 2020-04-03 09:21 /data/streaming/customergroup.csv
Check the content of the customergroup.csv file using the following command:
hdfs dfs -cat /data/streaming/customergroup.csv
To see the content of customergroup.csv, refer to Appendix-A.
4.3 Generate Encrypted Password
Generate the encrypted password to allow smooth execution of the Spark jobs.
1. Run the following command on the active namenode to move to the scripts directory:
cd /opt/etc/scripts
2. Run the bash script to generate the encrypted password:
/bin/bash EncryptPassword.sh
3. Prompts for <password-to-encrypt> and <password-to-generate-key> are displayed.
Notes:
- The password-to-encrypt value must be the same as the value of postgres_password in the file /etc/reflex-provisioner/inventory/templates/group_vars/global/all/mrx/agg/main.yml.
- The password-to-generate-key can be any string; it is used together with postgres_password to generate the encrypted password.
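The internals of EncryptPassword.sh are not shown in this guide. Conceptually it combines the postgres password with a user-chosen key; a comparable, purely hypothetical construction with openssl (not the script's actual implementation) looks like this:

```shell
# Hypothetical illustration only: derive a symmetric key from the
# password-to-generate-key and encrypt the postgres password with it.
encrypt_pw() {
  # $1 = password-to-encrypt, $2 = password-to-generate-key
  printf '%s' "$1" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$2" -base64 -A
}

decrypt_pw() {
  # $1 = ciphertext, $2 = password-to-generate-key
  printf '%s' "$1" | openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$2" -base64 -A
}
```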
4.4 Use Encrypted Password
1. Store the <password-to-generate-key> in the HDFS file key.txt, located at /data/streaming/.
2. Update the local file ImpalaToPostgres_DB.properties, located at /opt/sample_jobs/dimensionImpalaToPostgres/, on both the namenodes:
a. Assign the newly generated encrypted password to the property db.pwd.
b. Set the property key.filepath to the HDFS path mentioned in step 1. The default path is /data/streaming/key.txt.
3. Perform the following steps to update the HDFS file postgres_fb.xml, located at /data/streaming:
a. Download the file postgres_fb.xml:
hdfs dfs -get /data/streaming/postgres_fb.xml
b. Open the file:
vi postgres_fb.xml
c. Update the password string property, password. Refer to the section Generate Encrypted Password for more information.
d. Set the property key_filepath to the HDFS path mentioned in step 1. The default path from the installer is /data/streaming/key.txt.
e. Save and upload the file to HDFS using the following command:
hdfs dfs -put -f postgres_fb.xml /data/streaming/
4.5 Start the Jobs
Run the following commands to start all the MURAL jobs that were stopped in Section 3.7.
4.5.1 For Master httppdm
1. Run the following command to open the streaming.ini file:
vim /opt/tms/java/DataMediationEngine/WEB-INF/classes/streaming.ini
2. Change the value of streaming.kafkaTopicHttpPDM to httpPDM_new. The streaming.kafkaTopicHttpPDM property is used by the http streaming jobs to add a new table for 5-minute aggregation; its default value is "httpPDM".
Note: Copy the streaming.ini file to all the namenodes and datanodes. For example:
for i in `cat /etc/hosts | grep -E 'mst-02|slv' | awk '{print $3}'`; do scp /opt/tms/java/DataMediationEngine/WEB-INF/classes/streaming.ini root@$i:/opt/tms/java/DataMediationEngine/WEB-INF/classes/streaming.ini; done
3. Start the master_http job:
nohup sh /root/jobs/streaming_jobs/master_http_wrapper.sh > /var/log/mural_logs/master-http.out &
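Editing streaming.ini can also be done non-interactively. The sketch below assumes the file uses "key=value" lines, which should be verified against the real streaming.ini first:

```shell
# Rewrite one key=value line of an ini-style file in place.
set_ini_value() {
  # $1 = file, $2 = key, $3 = new value
  sed -i "s|^$2=.*|$2=$3|" "$1"
}

# e.g. set_ini_value /opt/tms/java/DataMediationEngine/WEB-INF/classes/streaming.ini \
#        streaming.kafkaTopicHttpPDM httpPDM_new
```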
4.5.2 For Master nonhttppdm
1. Run the following command to open the streaming.ini file:
vim /opt/tms/java/DataMediationEngine2/WEB-INF/classes/streaming.ini
2. Change the value of streaming.kafkaTopicNonHttpPDM to nonhttpPDM_new. This property is used by the non-http streaming jobs to add a new table for 5-minute aggregation.
Note: Copy the streaming.ini file to all the namenodes and datanodes. For example:
for i in `cat /etc/hosts | grep -E 'mst-02|slv' | awk '{print $3}'`; do scp /opt/tms/java/DataMediationEngine2/WEB-INF/classes/streaming.ini root@$i:/opt/tms/java/DataMediationEngine2/WEB-INF/classes/streaming.ini; done
3. Start the master_nonhttp job:
nohup sh /root/jobs/streaming_jobs/master_nonhttp_wrapper.sh > /var/log/mural_logs/master-nonhttp.out &
Check the logs in the files /var/log/mural_logs/master-http.out and /var/log/mural_logs/master-nonhttp.out, and wait until a 0 counter is displayed.
4.5.3 Run the Ingestion Job
Perform the following steps:
1. Update the extract.conf file in /opt/mrx/ingestion/etc/ on the active namenode:

   Attribute         Value
   source_file_mask  *http*.gz
   root_dir          /user/mrx/ingestion
2. Update the extract.conf file in /opt/mrx/ingestion/etc2/ on the active namenode:

   Attribute         Value
   source_file_mask  *flow*.gz
   root_dir          /user/mrx/ingestion
3. Copy both files to all the namenode and datanode servers. Modify the commands below based on your hosts (namenode and datanode servers):
for i in `cat /etc/hosts | grep -E '<NN-2>|<DN>' | awk '{print $3}'`; do scp /opt/mrx/ingestion/etc/extract.conf root@$i:/opt/mrx/ingestion/etc/; done
for i in `cat /etc/hosts | grep -E '<NN-2>|<DN>' | awk '{print $3}'`; do scp /opt/mrx/ingestion/etc2/extract.conf root@$i:/opt/mrx/ingestion/etc2/; done
For example:
for i in `cat /etc/hosts | grep -E 'mst-02|slv' | awk '{print $3}'`; do scp /opt/mrx/ingestion/etc/extract.conf root@$i:/opt/mrx/ingestion/etc/extract.conf; done
for i in `cat /etc/hosts | grep -E 'mst-02|slv' | awk '{print $3}'`; do scp /opt/mrx/ingestion/etc2/extract.conf root@$i:/opt/mrx/ingestion/etc2/extract.conf; done
4. Run the ingestion jobs on the active namenode:
nohup sh /root/jobs/ingestion_jobs/run-talend-http-job.sh > /var/log/mural_logs/talend-http.out &
nohup sh /root/jobs/ingestion_jobs/run-talend-nonhttp-job.sh > /var/log/mural_logs/talend-nonhttp.out &
5. Start the input data flow.
4.5.4 Run the 5 minutes Aggregation Job
1. Change the table names in the config file for 5-minute aggregation, located in HDFS at /data/streaming/pdm-aggregation.config.
2. Copy the pdm-aggregation.config file from HDFS to local:
hdfs dfs -get /data/streaming/pdm-aggregation.config
3. Set the new values as follows:
output.db=kafkaconnectdb
output.table=5min_points_new
http.pdm.table=httpPDM_new
nonhttp.pdm.table=nonhttpPDM_new
4. Save the file.
5. Copy the updated file to the HDFS location /data/streaming:
hdfs dfs -put -f pdm-aggregation.config /data/streaming
6. Delete the ts files, which act as a checkpoint and store the bintag value used to run the aggregation job for the next instance:
hdfs dfs -rm -r -skipTrash /data/streaming/pdm-aggregation-ts
7. Run the 5-minutes aggregation job:
nohup sh /root/jobs/aggregation_jobs/run-5min-agg-mgr_sh.sh > /var/log/mural_logs/5min-agg.out &
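The edit-and-reupload cycle above (and the similar ones for the hourly and daily configs) can be partly automated. The sketch below assumes each config uses "key=value" lines; the HDFS round-trip stays as documented:

```shell
# Rewrite several key=value properties in a local config file in place.
apply_props() {
  f="$1"; shift
  for kv in "$@"; do
    k=${kv%%=*}                # key is everything before the first '='
    sed -i "s|^$k=.*|$kv|" "$f"
  done
}

# Combined with the HDFS round-trip from the procedure:
# hdfs dfs -get /data/streaming/pdm-aggregation.config
# apply_props pdm-aggregation.config output.db=kafkaconnectdb \
#   output.table=5min_points_new http.pdm.table=httpPDM_new nonhttp.pdm.table=nonhttpPDM_new
# hdfs dfs -put -f pdm-aggregation.config /data/streaming
```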
4.5.5 Configure CAR Reports and BDR Reports
For information, refer to the MURAL Upgrade Guide from v5.0.2.p1 to v5.0.2.p2.
4.5.6 Run the Hourly Aggregation Job
1. Change the table names in the config file for hourly aggregation, located in HDFS at /data/streaming/HourlyAggregationJobConfig.properties.
2. Copy the HourlyAggregationJobConfig.properties file from HDFS to local using the following command:
hdfs dfs -get /data/streaming/HourlyAggregationJobConfig.properties
3. Set the new values as follows:
output.db=kafkaconnectdb
output.table=hourly_points_new
inputPoint.tableName=5min_points_new
4. Save the file.
5. Copy the file to the HDFS location /data/streaming:
hdfs dfs -put -f HourlyAggregationJobConfig.properties /data/streaming
6. Delete the ts files:
hdfs dfs -rm -r -skipTrash /data/streaming/hourly-aggregation-ts
7. Finally, run the hourly aggregation job:
nohup sh /root/jobs/aggregation_jobs/run-hourly-agg-mgr_sh.sh > /var/log/mural_logs/hourly-agg.out &
4.5.7 Run the Daily Aggregation Job
1. Change the table names in the config file for the daily aggregation, located in HDFS at /data/streaming/DailyAggregationJobConfig.properties.
2. Copy the DailyAggregationJobConfig.properties file from HDFS to the local file system:
hdfs dfs -get /data/streaming/DailyAggregationJobConfig.properties
3. Assign the new values as mentioned:
dbName=kafkaconnectdb
output.tableName=daily_points_new
inputPoint.tableName=hourly_points_new
4. Save the file.
5. Copy the file back to the HDFS location /data/streaming:
hdfs dfs -put -f DailyAggregationJobConfig.properties /data/streaming
6. Delete the ts files:
hdfs dfs -rm -r -skipTrash /data/streaming/daily-aggregation-ts
7. Run the daily aggregation job:
Update the /root/jobs/aggregation_jobs/run-daily-agg-weekReport_sh.sh file with the JOB_GRT variable. JOB_GRT is a new variable that must be added to the daily aggregation job wrapper because GRT reports are configured in MURAL 5 p3.
JOB_GRT="/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_grt --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_grt /data/streaming/GRTCemusReport.properties"
#GRT report Script
${JOB_GRT} ${JOB_TIME}
The following sample may resemble the updated script:
#!/bin/bash
INTERVAL=86400
JOB_CMD="/usr/lib/spark2/bin/spark-submit --master yarn-client --queue jobs.daily --name DailyAggregationJob --properties-file /opt/sample_jobs/agg_1day/DailyAggregationJobConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.AggregationJob /opt/tms/java/aggregations-with-dependencies.jar DailyAggregationJob /data/streaming/DailyAggregationJobConfig.properties"
JOB_CMD2="/usr/lib/spark2/bin/spark-submit --master yarn-client --queue default --name ImpalaToPostgresJob --properties-file /opt/sample_jobs/dimensionImpalaToPostgres/sparkConfigs.txt --driver-class-path /opt/tms/java/aggregations-with-dependencies.jar --verbose --class com.guavus.reflex.marketing.parquetstore.aggregation.ImpalaToPostgres_DimensionTables /opt/tms/java/aggregations-with-dependencies.jar /opt/sample_jobs/dimensionImpalaToPostgres/ImpalaToPostgres_DB.properties /opt/sample_jobs/dimensionImpalaToPostgres/MappingsPostgresToImpalaColumn.conf"
JOB_weekReport="/bin/bash /opt/etc/scripts/reportExecutionWeekly.sh"
JOB_Cemus="/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_daily --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_daily /data/streaming/DailyCemusReport.properties"
JOB_GRT="/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_grt --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_grt /data/streaming/GRTCemusReport.properties"
while ((1)); do
CUR_JOB_TIME=$(date +%s)
# 1-day aggregation job expects job time aligned to hour boundary
JOB_TIME=$((((${CUR_JOB_TIME} / 86400) * 86400)+3600))
NEW_JOB_TIME=$((CUR_JOB_TIME + ${INTERVAL}))
${JOB_CMD} ${JOB_TIME}
sleep 120
${JOB_CMD2} ${JOB_TIME} Daily
END_JOB_TIME=$(date +%s)
# Weekly report script
${JOB_weekReport} ${JOB_TIME}
sleep 60
#Cemus report Script
${JOB_Cemus} ${JOB_TIME}
sleep 60
#GRT report Script
${JOB_GRT} ${JOB_TIME}
SLEEP_DUR=$((${NEW_JOB_TIME} - ${END_JOB_TIME}))
#echo ${SLEEP_DUR}
if (( ${SLEEP_DUR} > 0 )); then
sleep ${SLEEP_DUR}
fi
done
Run the updated daily aggregation job:
nohup sh /root/jobs/aggregation_jobs/run-daily-agg-weekReport_sh.sh > /var/log/mural_logs/daily-week-agg.out &
Note: Reports will be available at cemusReportRootDir=/data/mrx/customer/cemus-report. The table used for daily report generation is 5min_points_new, which can only be populated after the 5-minute aggregation. The report is scheduled as soon as the daily job finishes and is generated for the previous day's data.
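The JOB_TIME arithmetic in the wrapper above snaps the current epoch time down to the previous midnight (UTC) and then adds one hour, because the 1-day aggregation job expects a job time aligned to an hour boundary. A minimal sketch of that calculation (the align_job_time helper name is illustrative):

```shell
#!/bin/bash
# align_job_time EPOCH — snap EPOCH down to the previous midnight (UTC)
# and add one hour, mirroring the wrapper's JOB_TIME formula.
align_job_time() {
  local cur=$1
  echo $(( ((cur / 86400) * 86400) + 3600 ))
}

# 1598361700 is 2020-08-25 13:21:40 UTC; it aligns to 2020-08-25 01:00:00 UTC.
align_job_time 1598361700   # prints 1598317200
```

Every timestamp in the same calendar day (UTC) maps to the same JOB_TIME, which is why the wrapper can be restarted mid-day without shifting the aggregation window.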
4.5.8 Run the Monthly Aggregation Job
1. Change the table names in the config file for the monthly aggregation, located in HDFS at /data/streaming/MonthlyAggregationJobConfig.properties.
2. Copy the MonthlyAggregationJobConfig.properties file from HDFS to the local file system using the below command:
hdfs dfs -get /data/streaming/MonthlyAggregationJobConfig.properties
3. Set the new values as mentioned:
dbName=kafkaconnectdb
output.tableName=monthly_points_new
inputPoint.tableName=daily_points_new
4. Save the file.
5. Copy the file back to the HDFS location /data/streaming:
hdfs dfs -put -f MonthlyAggregationJobConfig.properties /data/streaming
6. Delete the ts files:
hdfs dfs -rm -r -skipTrash /data/streaming/monthly-aggregation-ts
7. Run the monthly aggregation job:
Update the /root/jobs/aggregation_jobs/run-monthlyaggmonthlyReport.sh file:
JOB_Cemus="/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_monthly --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_monthly /data/streaming/MonthlyCemusReport.properties"
#Cemus Daily
${JOB_Cemus} ${CUR_JOB_TIME}
Output:
#!/bin/bash
#interval is set for 31 days
INTERVAL=2678400
echo Interval is $INTERVAL
JOB_CMD="/usr/lib/spark2/bin/spark-submit --master yarn-client --queue default --name MonthlyAggregationJob --properties-file /opt/sample_jobs/agg_1month/MonthlyAggregationJobConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.AggregationJob /opt/tms/java/aggregations-with-dependencies.jar MonthlyAggregationJob /data/streaming/MonthlyAggregationJobConfig.properties"
JOB_monReport="/usr/bin/python3 /opt/etc/scripts/GenerateReportsVodafone.py /opt/etc/scripts/monthly_conf_vodafone"
JOB_Cemus="/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_monthly --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_monthly /data/streaming/MonthlyCemusReport.properties"
while ((1)); do
date1=`date +%Y%m01`
date2=`date -d "$date1 +1 hour"`
CUR_JOB_TIME=`date -d "$date2" +%s`
#JOB_TIME=$(((${CUR_JOB_TIME} / 2678400) * 2678400)+86400)
NEW_JOB_TIME=$((CUR_JOB_TIME + ${INTERVAL}))
${JOB_CMD} ${CUR_JOB_TIME}
END_JOB_TIME=$(date +%s)
${JOB_monReport} ${CUR_JOB_TIME}
sleep 60
#Cemus Daily
${JOB_Cemus} ${CUR_JOB_TIME}
SLEEP_DUR=$((${NEW_JOB_TIME} - ${END_JOB_TIME}))
if (( ${SLEEP_DUR} > 0 )); then
sleep ${SLEEP_DUR}
fi
done
Run the monthly aggregation job:
nohup sh /root/jobs/aggregation_jobs/run-monthlyaggmonthlyReport.sh > /var/log/mural_logs/run-monthlyaggmonthlyReport.out &
5. Cemus Report Verification
5.0.1 Verify the Location of Cemus Config Files
To verify the location of the Daily Cemus, Monthly Cemus, and GRT Cemus report config files, log in to the management node and run the following command:
hdfs dfs -ls /data/streaming/file_name
- For DailyCemusReport, check that the following files are available at the location /data/streaming:
1. DailyCemusReport.properties
2. DailyQueriesCemus.txt
- For MonthlyCemusReport, check that the following files are available at the location /data/streaming:
1. MonthlyCemusReport.properties
2. MonthlyQueriesCemus.txt
- For GRTCemusReport, check that the following files are available at the location /data/streaming:
1. GRTCemusReport.properties
2. GRTQueriesCemus.txt
3. grt-protocol-values.csv
- protocol-values.txt
This file consists of the list of protocols for which the daily, monthly, and GRT reports are generated. It is common to the daily, monthly, and GRT aggregations and should also be available at the location /data/streaming. To verify its location, run the below command:
hdfs dfs -ls /data/streaming/protocol-values.txt
Output:
-rw-r--r-- 3 root hadoop 75428 2020-03-31 16:37 /data/streaming/protocol-values.txt
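The per-report file checks above can be scripted. The sketch below loops over the expected file names and flags anything missing; the LS variable defaults to a plain hdfs dfs -ls, and the check_files helper name is illustrative, not part of MURAL:

```shell
#!/bin/bash
# Command used to test for a file's existence; defaults to HDFS but can be
# overridden (e.g. LS="ls" to check a local directory instead).
LS="${LS:-hdfs dfs -ls}"

# check_files DIR FILE... — print MISSING for each file not found under DIR.
check_files() {
  local dir=$1 rc=0 f
  shift
  for f in "$@"; do
    if ! $LS "${dir}/${f}" >/dev/null 2>&1; then
      echo "MISSING: ${dir}/${f}"
      rc=1
    fi
  done
  return $rc
}

# The config files each report type expects under /data/streaming:
check_files /data/streaming \
  DailyCemusReport.properties DailyQueriesCemus.txt \
  MonthlyCemusReport.properties MonthlyQueriesCemus.txt \
  GRTCemusReport.properties GRTQueriesCemus.txt \
  grt-protocol-values.csv protocol-values.txt \
  || echo "one or more config files are missing"
```

An empty result means every expected file is present.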
5.0.2 Verify the Content of Cemus Config Files
The content of the Daily Cemus, Monthly Cemus, and GRT Cemus report config files can be verified with the following steps:
1. For DailyCemusReport
- To verify the content of the DailyCemusReport.properties file, run the following command:
hdfs dfs -cat /data/streaming/DailyCemusReport.properties
Output:
dbName=kafkaconnectdb
input.tableName=5min_points_new
reportType=daily
query.file=/data/streaming/DailyQueriesCemus.txt
download.threshold=1000
upload.threshold=1000
locality.code=UK
notAvailableString=NotAvailable
unknownString=Unknown
cemusReportRootDir=/data/mrx/customer/cemus-report
single.file.report=true
protocol.file=/data/streaming/protocol-values.txt
sftp.location=/root/sftp/cemus
Notes:
i. Here, download.threshold and upload.threshold represent the threshold values, in bytes, for downloading and uploading of the files. Both property values can be customized based on the client requirement.
ii. sftp.location represents the location where all the reports are placed.
iii. The value for single.file.report should be set to true.
2. For MonthlyCemusReport
- To verify the content of MonthlyCemusReport.properties, run the following command:
hdfs dfs -cat /data/streaming/MonthlyCemusReport.properties
Output:
dbName=kafkaconnectdb
input.tableName=daily_points_new
input.tableName2=monthly_points_new
reportType=monthly
query.file=/data/streaming/MonthlyQueriesCemus.txt
download.threshold=3000
upload.threshold=3000
locality.code=UK
notAvailableString=NotAvailable
unknownString=Unknown
cemusReportRootDir=/data/mrx/customer/cemus-report
single.file.report=true
protocol.file=/data/streaming/protocol-values.txt
sftp.location=/root/sftp/cemus
3. For GRTCemusReport
- To verify the content of GRTCemusReport.properties, run the following command:
hdfs dfs -cat /data/streaming/GRTCemusReport.properties
Output:
dbName=kafkaconnectdb
input.tableName=daily_points_new
reportType=GRT
query.file=/data/streaming/GRTQueriesCemus.txt
download.threshold=3000
upload.threshold=3000
locality.code=UK
notAvailableString=NotAvailable
unknownString=Unknown
cemusReportRootDir=/data/mrx/customer/cemus-report
single.file.report=true
protocol.file=/data/streaming/protocol-values.txt
grt.protocol.file=/data/streaming/grt-protocol-values.csv
sftp.location=/root/sftp/cemus
smtp.server.ip=192.168.104.25
smtp.port=25
sender.address=support@host.com
receiver.addresses=sample1@host.com,sample2@host.com
Notes:
i. Here, download.threshold and upload.threshold represent the threshold values, in bytes, for downloading and uploading of the files. Both property values can be customized based on the client requirement.
ii. sftp.location represents the location where all the reports are placed.
iii. The value for single.file.report should be set to true.
- grt-protocol-values.csv
To verify the content of grt-protocol-values.csv, run the following command:
hdfs dfs -cat /data/streaming/grt-protocol-values.csv
Output:
protocol,audio_coded_kbps,video_coded_kbps
whatsapp,25,240
facebook,30,500
skype,50,390
viber,45,980
facetime,32,1000
googleduo,60,590
googlehangout,55,450
other voip,35,335
This file is used specifically for the generation of the Mobile-UK-2-PGW_ 04.16.19 08.00 AM.csv report. The property values such as protocol, audio codec, and video codec can be changed by the customer, who can also add a new protocol or remove an existing protocol as per the requirements.
- protocol-values.txt
To verify the content of the protocol-values.txt file, run the following command:
hdfs dfs -cat /data/streaming/protocol-values.txt
Output:
YouTube
HTTPS
Google Play
WhatsApp Voice
WhatsApp Transfer
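To spot-check a single protocol's codec thresholds in grt-protocol-values.csv without scanning the whole file, an awk lookup along these lines works. It is shown here against an inline copy of the sample content; in practice you would pipe hdfs dfs -cat into it (the lookup_protocol name is illustrative):

```shell
#!/bin/bash
# Look up the audio/video codec rates (kbps) for one protocol in the
# grt-protocol-values.csv format. The first column is the protocol name.
lookup_protocol() {
  local proto=$1
  awk -F, -v p="$proto" '$1 == p { print $2, $3 }'
}

# Sample rows from the guide, fed inline instead of `hdfs dfs -cat`.
printf '%s\n' \
  'protocol,audio_coded_kbps,video_coded_kbps' \
  'whatsapp,25,240' \
  'skype,50,390' \
  'viber,45,980' | lookup_protocol skype   # prints: 50 390
```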
5.1 Verify the DailyCemusReport
This is the output file generated based on the configurations in the DailyCemusReport.properties and DailyQueriesCemus.txt files. To verify the location of the file, run the following command:
hdfs dfs -ls /data/mrx/customer/cemus-report/daily
For example:
[root@mural-nn-2 ~]# hdfs dfs -ls /data/mrx/customer/cemus-report/daily/20200824
Output:
Found 10 items
-rw-r--r-- 3 root hadoop 537754 2020-08-25 12:47 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServicesCorporate_ALL_ALL_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 796825 2020-08-25 12:32 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServicesCorporate_ALL_TECH_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 514024 2020-08-25 12:39 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServicesCorporate_DS_ALL_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 761948 2020-08-25 12:24 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServicesCorporate_DS_TECH_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 532237 2020-08-25 12:17 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_ALL_ALL_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 788862 2020-08-25 12:02 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_ALL_TECH_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 508448 2020-08-25 12:09 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_DS_ALL_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 753944 2020-08-25 11:55 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_DS_TECH_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 0 2020-08-25 12:55 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_Time_BW_Distribution_ALL_D_2020-08-24-00-00-00.csv
-rw-r--r-- 3 root hadoop 0 2020-08-25 12:51 /data/mrx/customer/cemus-report/daily/20200824/Umbrella_UK_DataServices_Time_BW_Distribution_TECH_D_2020-08-24-00-00-00.csv
5.2 Verify the MonthlyCemusReport
This is the output file generated based on the configurations in the MonthlyCemusReport.properties and MonthlyQueriesCemus.txt files. To verify the location of the file, run the following command:
hdfs dfs -ls /data/mrx/customer/cemus-report/monthly
Example:
[root@mural-nn-2 ~]# hdfs dfs -ls /data/mrx/customer/cemus-report/monthly/20200701
Output:
Found 4 items
-rw-r--r-- 3 root hadoop 0 2020-08-26 12:06 /data/mrx/customer/cemus-report/monthly/20200701/Umbrella_UK_DataServices_BW_Distribution_ALL_M_2020-07-01-00-0000.csv
-rw-r--r-- 3 root hadoop 0 2020-08-26 12:05 /data/mrx/customer/cemus-report/monthly/20200701/Umbrella_UK_DataServices_BW_Distribution_TECH_M_2020-07-01-00-0000.csv
-rw-r--r-- 3 root hadoop 0 2020-08-26 12:06 /data/mrx/customer/cemus-report/monthly/20200701/Umbrella_UK_DataServices_BW_Percentile_ALL_M_2020-07-01-00-0000.csv
-rw-r--r-- 3 root hadoop 0 2020-08-26 12:06 /data/mrx/customer/cemus-report/monthly/20200701/Umbrella_UK_DataServices_BW_Percentile_TECH_M_2020-07-01-00-0000.csv
5.3 Verify the GRTCemusReport
This is the output file generated based on the configurations in the GRTCemusReport.properties and GRTQueriesCemus.txt files. To verify the location of the file, run the following command:
hdfs dfs -ls /data/mrx/customer/cemus-report/GRT
Example:
[root@mural-nn-2 ~]# hdfs dfs -ls /data/mrx/customer/cemus-report/GRT/20200823
Output:
Found 2 items
-rw-r--r-- 3 root hadoop 0 2020-08-24 11:31 /data/mrx/customer/cemus-report/GRT/20200823/Mobile-UK-1-PGW_08.23.20.00.00 AM.csv
-rw-r--r-- 3 root hadoop 1235 2020-08-24 11:32 /data/mrx/customer/cemus-report/GRT/20200823/Mobile-UK-2-PGW_08.23.20.00.00 AM.csv
6. Cemus Report Generation
To generate the daily, monthly, and GRT Cemus reports, follow the below mentioned steps.
Note: This section is for informational purposes. Run the following commands only when all the aggregation jobs are stopped.
6.1 Daily Cemus Report Generation
Execute the following command to generate the Daily Cemus Report:
/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_daily --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_daily /data/streaming/DailyCemusReport.properties `expr ${job_time} / 1000`
Here, job_time represents the execution time in milliseconds in epoch format. The report is generated for the day before ${job_time}. The table used for daily report generation is 5min_points_new, which is populated after every 5-minute aggregation. The daily, monthly, and GRT Cemus reports are available in HDFS at cemusReportRootDir=/data/mrx/customer/cemus-report in .csv format, and all the artifacts (Section 1.2) of the reports are available at the SFTP location in compressed .gz format.
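Because job_time is in milliseconds while the report commands pass seconds via `expr ${job_time} / 1000`, it is worth sanity-checking the conversion before submitting. The values below are illustrative:

```shell
#!/bin/bash
# job_time is an epoch timestamp in milliseconds; 1598313600000 is
# 2020-08-25 00:00:00 UTC expressed in ms.
job_time=1598313600000

# The report commands divide by 1000 to obtain epoch seconds.
job_time_s=`expr ${job_time} / 1000`
echo "${job_time_s}"   # prints 1598313600
```

Passing a milliseconds value without the division would put the job time far in the future and produce an empty report.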
6.2 Monthly Cemus Report Generation
Execute the below mentioned command for monthly Cemus report generation:
/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_monthly --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_monthly /data/streaming/MonthlyCemusReport.properties `expr ${job_time} / 1000`
The tables used for monthly report generation are daily_points_new and monthly_points_new, which can only be populated after the daily and monthly aggregations.
6.3 GRT Cemus Report Generation
Execute the below command to generate the GRT Cemus report:
/usr/bin/spark2-submit --master yarn-client --queue jobs.daily --name cemus_report_grt --properties-file /opt/sample_jobs/report_cemus/CemusReportConfig-spark --verbose --jars /usr/lib/hive/lib/hive-jdbc-standalone.jar,/opt/tms/java/pcsa/pcsaudf-with-dependencies.jar --class com.guavus.reflex.marketing.parquetstore.aggregation.CemusReport /opt/tms/java/aggregations-with-dependencies.jar cemus_report_grt /data/streaming/GRTCemusReport.properties `expr ${job_time} / 1000`
The table used for GRT report generation is daily_points_new, which is populated after the daily aggregation.
7. Cleanup Job Configuration
The cleanup job configuration adds tags for the newly added reports so that they are cleaned up after the defined retention period of the daily, monthly, and GRT files. Follow the below steps:
1. Go to the directory:
cd /opt/sample_jobs/cleanup_job
2. Open the cleanup_config.xml file:
vim cleanup_config.xml
3. Replace the existing tags with the new tags mentioned below:
<files>
...
  <file>
    <path type="hdfs" regex="/data/mrx/customer/cemus-report/daily/" filetime="modifiedtime"/>
    <sla freq_unit="day" freq="20"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
  <file>
    <path type="hdfs" regex="/data/mrx/customer/cemus-report/monthly/" filetime="modifiedtime"/>
    <sla freq_unit="day" freq="180"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
  <file>
    <path type="hdfs" regex="/data/mrx/customer/cemus-report/GRT/" filetime="modifiedtime"/>
    <sla freq_unit="day" freq="20"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
  <file>
    <path type="localfs" regex="/root/sftp/cemus/daily/${YYYYMMdd}" filetime="regex"/>
    <sla freq_unit="day" freq="20"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
  <file>
    <path type="localfs" regex="/root/sftp/cemus/monthly/${YYYYMMdd}" filetime="regex"/>
    <sla freq_unit="day" freq="180"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
  <file>
    <path type="localfs" regex="/root/sftp/cemus/GRT/${YYYYMMdd}" filetime="regex"/>
    <sla freq_unit="day" freq="20"/>
    <bin freq_unit="day" freq="1" holes="7"/>
  </file>
...
</files>
4. Copy the file to all the nodes:
cd /opt/sample_jobs/cleanup_job
for i in `cat /etc/hosts |grep -E 'nn|dn' |awk '{print $1}'`;do scp cleanup_config.xml root@$i:/opt/sample_jobs/cleanup_job/;done
5. Once the job is configured, run the cleanup job:
nohup sh /root/jobs/misc_jobs/run_cleanup_job.sh > /var/log/mural_logs/cleanup.out &
8. Troubleshooting MURAL
8.1 Unable to launch User Interface
If you are unable to launch the MURAL UI, check whether the oauth container is up and running. If it is not, restart the oauth container.
1. Run the following command to identify the namenode on which the oauth container is running:
# kubectl get pods -o wide | grep -i oauth
The following sample may resemble the output:
oauth-deployment-1968528477-b6x1w 2/2 Running 1 11d 10.233.113.7 mural-nn-2
2. Delete the oauth pod:
# kubectl delete pod <pod_name> --force --grace-period=0
For example:
# kubectl delete pod oauth-deployment-1968528477-b6x1w --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "oauth-deployment-1968528477-b6x1w" deleted
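Steps 1 and 2 can be combined by extracting the pod name from the listing with awk. The pipeline below is a sketch of that idea, run here against the sample listing rather than a live cluster; the oauth_pod_name helper is illustrative:

```shell
#!/bin/bash
# Pick the pod name (first column) of the oauth pod out of a
# `kubectl get pods -o wide` listing.
oauth_pod_name() {
  grep -i oauth | awk '{print $1}'
}

# Sample listing line from the guide; on a live system you would run:
#   kubectl get pods -o wide | oauth_pod_name
pod=$(echo "oauth-deployment-1968528477-b6x1w 2/2 Running 1 11d 10.233.113.7 mural-nn-2" | oauth_pod_name)
echo "$pod"   # prints oauth-deployment-1968528477-b6x1w

# The delete step then becomes:
#   kubectl delete pod "$pod" --force --grace-period=0
```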
8.2 Postgres is Down
Perform the following steps to restart Postgres if it is not operational:
# ssh root@<mural-node-wherePostgresDown>
su - postgres
/usr/pgsql-9.6/bin/postgres -D /var/lib/pgsql/9.6/data -c config_file=/var/lib/pgsql/9.6/data/postgresql.conf -p 5432 &
cd /var/lib/pgsql/9.6
mv data data_backup_<Date>
mkdir /var/lib/pgsql/9.6/data
cd
rm -rf /var/lib/pgsql/tmp/PGSQL.lock
chmod 700 /var/lib/pgsql/9.6/data
pg_basebackup -h 10.1.1.159 -U postgres -D /var/lib/pgsql/9.6/data -X stream -P
cp /var/lib/pgsql/9.6/data_backup_0806/recovery.conf /var/lib/pgsql/9.6/data/recovery.conf
chown postgres:postgres /var/lib/pgsql/9.6/data/recovery.conf
chmod 600 /var/lib/pgsql/9.6/data/recovery.conf
Log in as the root user from another prompt:
pcs resource clear pgsql <node>
pcs resource failcount reset pgsql
pcs resource cleanup pgsql
pcs resource clear pgsql <node>
crm_mon -Afr1 (pcs status)
8.3 Active Master Node is Down
If the active master node is down or changed, all the jobs must be restarted on the current active node. Refer to the section Start the Jobs.
8.4 Incorrect password is set
If you have set an incorrect password, perform the following steps to troubleshoot:
1. Run the following command to delete Kube Secret:
kubectl delete secret postgres-secret
2. Run the following command to reset Kube Secret:
kubectl create secret generic postgres-secret --from-literal=postgres_password=<value of password_for_postgresDB_for_editservice_user>
The following sample may resemble the output:
[root@mural001-mgt-01 reflex-provisioner]# kubectl create secret generic postgres-secret --from-literal=postgres_password=$password_for_postgresDB_for_editservice_user
2020-03-12 11:27:58.677238 I | proto: duplicate proto type registered: google.protobuf.Any
2020-03-12 11:27:58.677286 I | proto: duplicate proto type registered: google.protobuf.Duration
2020-03-12 11:27:58.677300 I | proto: duplicate proto type registered: google.protobuf.Timestamp
secret "postgres-secret" created
Run the following command to get the list of running pods:
kubectl get pods -o wide
The following sample may resemble the output:
NAME                                       READY STATUS  RESTARTS AGE IP           NODE
azkaban-platform-1060947321-77rfm          1/1   Running 0        11d 10.233.113.8 mural-nn-2
editservice-57700654-kt0mg                 2/2   Running 0        23d 10.233.75.5  mural-nn-1
oauth-deployment-1968528477-b6x1w          2/2   Running 1        23d 10.233.113.7 mural-nn-2
tomcat-mrxui-930774083-56nsg               2/2   Running 0        21d 10.233.75.4  mural-nn-1
tomcat-mrxui-930774083-lxl16               2/2   Running 0        21d 10.233.113.3 mural-nn-2
usermanagement-deployment-2792260976-z361b 3/3   Running 0        4d  10.233.113.5 mural-nn-2
Copy the tomcat pod IDs from the preceding output and restart the tomcat pods:
kubectl delete pod <tomcat pod id>
For example:
kubectl delete pod tomcat-mrxui-2665334765-dv3gx
kubectl delete pod tomcat-mrxui-2665334765-qj47s
8.5 HAProxy Error-503
Though the UI is running as a Docker service, HAProxy may mark it as Error 503. To fix this:
1. ssh to both the load balancer nodes and open the HAProxy configuration:
ssh <both lb-nodes>
vim /etc/haproxy/haproxy.cfg
2. Append GET /mrx-web/ to the option httpchk line under MRX_UI_BE, and append GET / to the option httpchk line under MRX_UI_FE.
3. Restart HAProxy:
systemctl restart haproxy
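For illustration only, the resulting health-check lines would look roughly like this, assuming MRX_UI_BE and MRX_UI_FE are sections in haproxy.cfg as deployed on your load balancers; the rest of each section is unchanged:

```
# under the MRX_UI_BE section
option httpchk GET /mrx-web/

# under the MRX_UI_FE section
option httpchk GET /
```

With the path appended, the health check requests the UI context path instead of the default, so HAProxy stops flagging a healthy UI as 503.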
8.6 Services are down
If any of the services are not running, ssh to the node and restart the service:
systemctl stop <service-name>
systemctl start <service-name>
Example: While starting the platform services in Ansible, the following error message is displayed:
PLAY [Check Schema-Registry service] ********************************************
TASK [Check schema-registry status] *********************************************
fatal: [nn1.perf.guavus.in]: FAILED! => {"changed": false, "failed": true, "msg": "Service schema-registry is stopped, inactive, dead or failed"}
fatal: [nn2.perf.guavus.in]: FAILED! => {"changed": false, "failed": true, "msg": "Service schema-registry is stopped, inactive, dead or failed"}
Solution:
1. On node nn1:
ssh nn1
systemctl stop schema-registry.service
systemctl enable schema-registry.service
systemctl start schema-registry.service
systemctl status schema-registry.service
2. Repeat the same steps for node nn2.
8.7 MURAL UI shows old or no data
1. Log in to the active namenode:
[root@mural-nn-2 ~]# ssh mural-nn-2
Last login: Thu May 14 10:53:42 2020 from <node>
2. Edit the yml file to update the new flexibin tables:
# vi /opt/tomcat-mrxui-config.yaml
Go to the section (application.properties) in the above yml file and update the table names below the "#FlexiBinning properties" section:
#FlexiBinning properties
#All properties should be matching with .properties file of Hourly/Daily/Monthly aggregation jobs. Stage will be hard-coded as below
flexibin.aggregation.hourly.tableName=<updated_hourly_table_name>
flexibin.aggregation.daily.tableName=<updated_daily_table_name>
flexibin.aggregation.monthly.tableName=<updated_monthly_table_name>
3. Copy the updated file to the standby namenode as well:
[root@<activeNN> ~]# scp /opt/tomcat-mrxui-config.yaml root@<standbyNN>:/opt/
4. List all config maps and copy the tomcat config map:
[root@<activeNN> ~]# kubectl get cm
5. Delete the old tomcat config map:
[root@<activeNN> ~]# kubectl delete cm tomcat-mrxui
6. Create a new config map for tomcat-mrxui:
[root@<activeNN> ~]# kubectl create -f /opt/tomcat-mrxui-config.yaml
7. Perform the below steps for both tomcat pods:
1. List the tomcat pods and copy the pod IDs:
# kubectl get pod -o wide
2. Delete the pods:
# kubectl delete pod <tomcat-pod> --force --grace-period=0
8. Check whether both tomcat pods are up and running:
# kubectl get pod -o wide | grep tomcat
Output:
NAME                          READY STATUS  RESTARTS AGE IP             NODE
tomcat-mrxui-3591704529-m3vnl 2/2   Running 0        1d  <Internal_IP2> <NN1>
tomcat-mrxui-3591704529-zk116 2/2   Running 0        1d  <Internal_IP2> <NN2>
9. Verify that the correct table names are pointed to in the tomcat UI (application.properties). Sample output showing the correct flexibin table names in the tomcat docker container:
#FlexiBinning properties
#All properties should be matching with .properties file of Hourly/Daily/Monthly aggregation jobs. Stage will be hard-coded as below
flexibin.aggregation.hourly.tableName=<updated_hourly_table_name>
flexibin.aggregation.daily.tableName=<updated_daily_table_name>
flexibin.aggregation.monthly.tableName=<updated_monthly_table_name>
10. List the tomcat docker container ID and copy ID
# docker ps | grep mrxui | grep mrxtomcat | awk '{print $1}'
Output:
70c51523926a
11. Log in to the docker container:
# docker exec -it <tomcatcontainerID> bash
For example:
# docker exec -it 70c51523926a bash
12. Check the updated flexibin table names in the tomcat container's application.properties file:
vi /opt/apache-tomcat-8.5.11/webapps/mrx-web/WEB-INF/classes/application.properties
13. Also, check whether the time on the MURAL UI is reflecting correctly.
9. Patch Rollback
To roll back to the earlier version of MURAL, refer to Section 7: Installing the Solution in the Cisco MURAL Installation Guide (Mural Install Guide v5.0.2.rc1.pdf).
10. Appendix-A
10.1 customergroup.csv
This file is used to add the companyname column values in the 5min_points_new table and in all the other tables, such as hourly_points_new, daily_points_new, and monthly_points_new. This file has to be populated by the customer. If the 5-minute aggregation job runs without this file, the companyname column of the aggregation tables is populated with the value UD. Below is the content of the file:
msisdn,companyname
18109361934,corporate_A
98210461927,corporate_A
11000288258,corporate_A
18109361947,corporate_A
18109361927,corporate_A
28260461927,corporate_A
28110461927,corporate_A
28247461927,corporate_A
13132081290,corporate_A
26210461927,corporate_A
18109361951,corporate_A
28210461934,corporate_A
18109361933,corporate_A
11000036516,corporate_A