Generally you get a much bigger benefit from dim_STAT when you're collecting and analyzing your data live (online). However, there may still be many situations when this is simply not possible (due to security restrictions, long network distance, or simply externally inaccessible hosts, etc.) -- for such cases dim_STAT has a special tool, EasySTAT, allowing you to collect all the needed stats locally on the given machine, and then later upload them to your dim_STAT server for post-analysis. To simplify the overall setup, EasySTAT is integrated into the STAT-service of dim_STAT and can be invoked directly from there, following the same rules for MySQL access and so on (e.g. if your STAT-service is already operational, then your EasySTAT will be too ;-))

The following article explains how to deploy and start EasySTAT on your Linux server from scratch, even if you've never heard about it before..

Step-by-step instructions

0) be sure you have the right perf tool installed on your system (it should be installed, and the version reported by perf --version should match your kernel from the uname -a output)
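a quick way to do this check (note: perf usually comes from a distro package like "linux-tools", so its exact version string may vary on your system) :

```shell
# print the running kernel release and the installed perf version side by side;
# the perf version should match (or at least be close to) your kernel release
kernel="$(uname -r)"
perfver="$(perf --version 2>/dev/null || echo 'perf NOT installed !')"
echo "kernel : $kernel"
echo "perf   : $perfver"
```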

1) download the STAT-service tarball from my site :

2) deploy it somewhere on the server ("/opt/stats" is used as an example here and below) :

 # cd /opt/stats
 # tar xzf /path/to/STATsrv-v5-Linux-x64-mysql.tgz

3) be sure you're logged in as the "root" user to allow "perf" and "Proc" stats to be collected !

go to the STAT-service "bin" directory :

 # cd /opt/stats/STATsrv/bin

4) edit the ".env-mysql" file to provide the connection account for your MySQL instance :

 # vi .env-mysql

(you need to provide IP address / port / user / password)

NOTE :
  • this user doesn't need any "special" privileges, just having "connect + read" permissions will be enough
  • if you're using MySQL 8.0, this account should be created with the mysql_native_password authentication plugin
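such an account can be created, for example, like this (the "dimstat" user name, host and password below are just placeholders -- adapt them to your setup; SELECT + PROCESS is one possible minimal "connect + read" combination) :

```shell
# write the account-creation statements to a file, then feed it to your server
# e.g. with: mysql -uroot -p < /tmp/easystat-user.sql
cat > /tmp/easystat-user.sql <<'EOF'
CREATE USER 'dimstat'@'%' IDENTIFIED WITH mysql_native_password BY 'change-me';
GRANT SELECT, PROCESS ON *.* TO 'dimstat'@'%';
EOF
```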

Also, be sure you have at least the following enabled in your my.cnf :

innodb_monitor_enable = '%'
performance_schema = ON

5) check that the scripts are able to connect and are working -- try the following :

 # ./mysqlLOAD.sh 5

be sure you see some real numbers in the output, not just "-1" ! (at least in the first 3-4 columns)

 # ./innodbSTAT.sh 5

(be sure the output is not empty here either)

 # ./innodbMUTEX.sh 5

(be sure the output is not empty here too)
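since "all -1" output is such a common trap, here is a tiny illustration of the check described above (the helper function is purely hypothetical, just to make the rule explicit: the output is only valid if at least one field is a real number) :

```shell
# consider a line of output valid only if at least one field is not "-1"
check_fields() {
  for f in $1; do
    if [ "$f" != "-1" ]; then echo "OK"; return 0; fi
  done
  echo "EMPTY STATS - check your .env-mysql settings !"
  return 1
}

check_fields "120 450 3 -1"           # -> OK
check_fields "-1 -1 -1 -1" || true    # -> EMPTY STATS - check your .env-mysql settings !
```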

6) create an output directory for stats collection :

 # mkdir /opt/stats/data

7) start stats collection :

 # nohup ./EasySTAT.sh /opt/stats/data 10 8 "Title or tag" &

which means :

  • collect the data into "/opt/stats/data" at a "10" second interval during "8" hours
  • if you need more or fewer hours, just change "8" to what you need
  • "Title" (or tag) is optional, but helps to internally tag each collected series of stats -- this can be any meaningful name you like to "tag" your collects, so don't just copy & paste the "Title or tag" text, replace it with something that speaks to you ;-))

also, for more advanced users, since STAT-service rev-5.11 there are a few more options available :

 # nohup ./EasySTAT.sh /opt/stats/data 10 8 "Title or tag" "" "" "" "/path/to/stats.log" "/path/to/user.log" &

where :

  • /path/to/stats.log -- full path to a log file for all EasySTAT output messages, so you can follow the progress of all steps reported by EasySTAT about the stats collection on your server..

  • /path/to/user.log -- full path to a "user log" file, where you can add any kind of messages about what is going on on your system to "tag" the collected stats (the file is watched all the time by the EasySTAT script, and all messages received from this file are saved together with the collected stats and timestamped to stay in sync) -- this then largely simplifies further analysis, as we can easily find all "tagged" periods of time within the collected stats !

for example, supposing the /var/tmp/log file is used as the user log in EasySTAT, you could do the following :

...
$ echo "Starting my night batch" >> /var/tmp/log
...
... you're running your batch ...
...
$ echo "Finished night batch" >> /var/tmp/log
...
$ echo "Day work started" >> /var/tmp/log
...
$ echo "Observing slowdown on user queries" >> /var/tmp/log
...
$ echo "End of slowdown, back to normal" >> /var/tmp/log
...

after which we could easily and exactly identify in the collected stats the time period of your night batch, the day-time slowdown, and everything else you wanted to tag ;-))
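conceptually, the timestamping applied to these user messages can be sketched like this (a simplified illustration only, not the real EasySTAT code -- the file path is just a demo placeholder) :

```shell
# prefix every message found in the user log with a timestamp, so it can
# later be matched in time against the collected stats
userlog=/tmp/demo-user.log
echo "Starting my night batch" >> "$userlog"
echo "Finished night batch"    >> "$userlog"

while IFS= read -r msg; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') $msg"
done < "$userlog"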

8) once finished :

 # cd /opt/stats
 # tar czf ./data-stats.tgz data

and send me back the obtained data-stats.tgz (by email or via dropbox / google-drive / etc. ;-))

A few more things :

  • instead of using nohup when starting the EasySTAT.sh script, you can also run it manually for each test, ending the stats collection with Ctrl+C at the end of each test (and restarting it again for each new test)

  • you may also use a different output directory for each test (e.g. data-test1, data-test2, etc..)

  • yes, it's better to run the script as the "root" user to be sure to have all the required permissions (especially for perf)

  • PLEASE, check step 5) carefully to avoid collecting just empty stats.. -- unfortunately this is a very common problem for many users ;-))

  • no need to worry too much about the disk space the collected stats will occupy in the end.. -- generally 24H of non-stop collected stats with a 10sec interval will use less than 100MB of space, so you can safely run it for many days non-stop (when you don't know, for ex., when exactly the critical conditions you're looking for will happen on your system, etc.)..

  • the impact on your production workload will generally also be very minimal (similar to running the same stat commands in your Terminal online).. -- the biggest potential impact may come from the perf tool, but even this in my own benchmark workloads does not change the final results much (on some pure CPU-bound in-memory workloads I've observed 5% max).. -- but if you have any worry about it, mind that you can always re-edit the .env-easystat file and comment out all *Perf*.sh commands..
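for example, running several tests manually with one output directory per test could look like this (using /tmp here just for illustration -- on your server you'd use your /opt/stats location, and the test names are just examples) :

```shell
# one output directory per test; EasySTAT is then started manually per test
base=/tmp/stats-demo
for t in test1 test2; do
  mkdir -p "$base/data-$t"
  # nohup ./EasySTAT.sh "$base/data-$t" 10 1 "$t" &   # start the collection..
done                                                  # ..and Ctrl+C at test end
ls "$base"
```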

All in all, once we have all these stats collected, we can then try to get the whole picture of your workload, show all the available details about it, and propose further steps, tuning, etc.