Saturday, 12 July, 2014

dim_STAT v.9.0-u14 Core Update is here!

I'm happy to present the latest CoreUpdate-14 for dim_STAT v9.0!

This update includes many minor fixes and small improvements that make life easier (a few more options in the CLI scripts, smarter logic in dim_STAT-Server start/stop, a few more options in the web interface, an updated look & feel, etc.) - all these changes were inspired by my own daily usage of this tool and that of many other users. So, indeed, it remains alive and still helpful ;-)

However, there is a completely new feature I'd like to present which comes with CoreUpdate-14 - and it's called "Screenshots" ;-)

What are Screenshots? In fact, this feature was inspired by an increasingly frequent need: to save, somewhere, all or most of the graphs I'm seeing right now, while analyzing collected data, rather than generate them later once the work is done and the observed issue is solved.. The ReportTool is great for post-analysis work, or for straightforward reporting (when you know exactly the time intervals you want to see, the stats to graph, and so on) -- while during a live performance analysis you may be looking at many intermediate graphs which are all part of the story board of your investigation, and they are all important, as they form the base of your step-by-step logic and of your final conclusion..

So, Screenshots:

  • any time you see PNG graphs on your dim_STAT web page, you'll also see a small [Screenshot] button at the bottom of the page
  • as soon as you click the [Screenshot] button, a new tab/window opens in your web browser, containing all the graphs you're currently seeing plus 2 special fields - Title and Notes - letting you add annotations related to your graphs (you may also select all the graphs or only a part of them)..
  • similar to ReportTool, wiki-style syntax is allowed within the Notes content, and HTML syntax is allowed as well (see the dim_STAT UsersGuide for details)
  • once editing is finished, you click [Save] and your Snapshot is saved
  • each Snapshot has a [PermaLink] (a static URL you may use to share your Snapshot page with any user having access to your dim_STAT web server)
  • at any time you may re-edit your Snapshot, or duplicate it to make a different version (ex.: fewer graphs, shorter Notes, etc.)
  • you can delete Snapshots as well, but be careful - there is no restore ;-)
  • also, Snapshot data are partially saved within your database, so they depend on it - from one database you cannot access the Snapshots of another one (unless you click a [PermaLink], but that will just show you a static document, without any editing possibility)..

The Snapshots page looks like this:

So, at any time you may choose to see:
  • the latest N Screenshots
  • the Screenshots matching a Title pattern, ordered by Time or Title, Ascending or Descending..
  • the [PermaLink] link points to a static page with the Snapshot content
  • the [tar.Z] link points to a compressed TAR archive containing the whole Snapshot data (so it can be sent as-is and then deployed on any other computer as an HTML document)..

Publishing Snapshots:
  • at any time, one or several Snapshots may be selected for publishing
  • a published document then no longer depends on a database; it has its own Title and Notes and contains all the data from the selected Snapshots
  • once generated, a published document will also have its own [PermaLink] and [tar.Z], plus a [PDF] link, as sometimes sharing a PDF document is preferable ;-)
  • the published document may still be re-edited in place (the PDF and tar.Z are then re-generated on every Save)
  • keep in mind that all data of a published document are kept only locally, within its own directory, and nowhere else (nothing in the database, etc.) - so re-editing is really implemented in place, based on the content files
  • saving a document under a different title will create a new document, leaving the original as it is
  • re-editing can be disabled for a given document if its directory contains a file named ".noedit"
  • re-editing of all published documents may be disabled globally by placing a ".noedit" file in the upper "pub" directory
  • there is no way to delete a published document via the web interface (this is intentional, as everything "published" is expected to remain forever) - though of course you may delete it manually; all these documents are just plain files after all.. ;-))
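Since published documents are just directories of files, the ".noedit" switches described above can be sketched with a couple of shell commands - a minimal sketch, where a temporary directory stands in for the real pub directory and "My-Report" is a hypothetical document title:

```shell
#!/bin/sh
# Sketch: disabling re-editing of published Snapshot documents via ".noedit".
# A temporary directory simulates the pub directory so this is safe to try.
PUB=$(mktemp -d)/Snapshots/pub        # stands in for the real pub directory
mkdir -p "$PUB/My-Report"             # "My-Report" is a hypothetical title

# disable re-editing for this one document only:
touch "$PUB/My-Report/.noedit"

# or disable re-editing globally, for all published documents:
touch "$PUB/.noedit"

ls -a "$PUB" "$PUB/My-Report"
```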

A few comments about internals:
  • as soon as you invoke any Snapshot action, a missing dim_Snapshot table will be created in your current database, along with additional directories on your dim_STAT web server (/apps/httpd/home/docs/Snapshots/*)
  • Snapshot path: /apps/httpd/home/docs/Snapshots/data/{DBNAME}/{Snapshot-ID}
  • Published Document path: /apps/httpd/home/docs/Snapshots/pub/{Title}
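For illustration, this layout can be sketched as follows - simulated under a temporary root instead of the real /apps/httpd/home/docs, with "demo" and "42" as a hypothetical DBNAME and Snapshot-ID, and gzip standing in for the compress utility implied by the real .Z extension:

```shell
#!/bin/sh
# Sketch of the Snapshot storage layout, simulated under a temporary root.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/Snapshots/data/demo/42"     # per-database Snapshot storage
mkdir -p "$ROOT/Snapshots/pub/My-Report"    # published documents

# a Snapshot directory holds the page itself plus its graphs:
touch "$ROOT/Snapshots/data/demo/42/index.html"
touch "$ROOT/Snapshots/data/demo/42/graph-1.png"

# the [tar.Z] link serves such a directory as one compressed archive,
# deployable as-is on any other machine (gzip used here instead of compress):
( cd "$ROOT/Snapshots/data/demo" && tar czf "$ROOT/snapshot-42.tgz" 42 )
tar tzf "$ROOT/snapshot-42.tgz"
```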

Well, it's much more fun to use Snapshots than to talk about them, so I hope you'll adopt the feature very quickly and enjoy it as much as I do ;-)

The upgrade process is as usual :
  • 1.) download the latest tgz file (WebX_apps-v90-u14.tgz) from the CoreUpdates repository
  • 2.) back up your current apps scripts: $ cd /opt/WebX ; tar czf apps-bkp.tgz apps
  • 3.) deploy the CoreUpdate-14 scripts bundle: $ cd /opt/WebX ; tar xzf /path/to/WebX_apps-v90-u14.tgz
  • 4.) enjoy ;-)
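The steps above can be rehearsed safely in a scratch directory before touching a real installation - a sketch where $WEBX stands in for /opt/WebX and a dummy tarball replaces the real downloaded bundle:

```shell
#!/bin/sh
# Rehearsal of the upgrade steps in a scratch directory.
set -e
WORK=$(mktemp -d)
WEBX="$WORK/WebX"                       # stands in for /opt/WebX
mkdir -p "$WEBX/apps"
echo "old script" > "$WEBX/apps/tool.sh"

# a fake downloaded bundle containing updated apps scripts:
mkdir -p "$WORK/bundle/apps"
echo "new script" > "$WORK/bundle/apps/tool.sh"
( cd "$WORK/bundle" && tar czf "$WORK/WebX_apps-v90-u14.tgz" apps )

# 2.) back up the current apps scripts:
( cd "$WEBX" && tar czf apps-bkp.tgz apps )

# 3.) deploy the CoreUpdate bundle over the old scripts:
( cd "$WEBX" && tar xzf "$WORK/WebX_apps-v90-u14.tgz" )

cat "$WEBX/apps/tool.sh"   # the deployed script replaced the old one
```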

As usual, any feedback is welcome!

Posted by Dimitri at 1:32
Categories: dim_STAT

Thursday, 15 May, 2014

MySQL Tech Day @Paris, 22/May-2014

The next MySQL TechDay is taking place in Paris on 22/May (next week!!!) - if you're a MySQL lover and will be in the Paris area that day - hurry up to register on the event page and attend - trust me, you won't regret it ;-))

We're continuing to follow our TechDay tradition:

  • the event is completely free (but places are limited, so you have to be registered to attend)
  • the content is purely technical and comes directly from Oracle engineering - no marketing ;-)
  • this is a true full-day event, and we're reserving enough time to go in depth on each presented topic..
  • the event takes place in the Oracle office, in a pretty large and comfortable amphitheater covered by WiFi, so you may tweet live about #mysqltechday and remain "connected" if this is part of your constraints ;-)
  • we're starting at 10:30 to let you arrive "stress-free" regardless of traffic issues and distance (we know from previous experience that many attendees arrive from different places in France, far away from Paris, and some will even come from other countries! - Brussels, London, Birmingham and Dublin are already on our map for now ;-))
  • for those who arrive earlier, hot coffee with some sweets will be waiting from 10:00 as a bonus ;-)
  • note: if you're arriving via public transport, keep in mind there is a direct tram to the Oracle office from the La Defense station (15 minutes and you're there)..
  • and to finish with the organization points:
    • around 13:00 we'll have lunch in the Oracle enterprise restaurant,
    • around 15:30 a coffee break,
    • and around 17:30 we expect to finish (letting you get home in the same "stress-free" conditions ;-))

And now about the content..
  • very briefly, we'll give you an overview of the latest tech news from the MySQL Team

  • then, as promised at the last TechDay, I'll tell you the whole story about heavy OLTP workloads:
    • In-Memory and IO-bound, Read-Only and Read+Write..
    • their problems, solutions, workarounds, and the improvements already made in MySQL 5.7
    • a long and hard work has been done since then; the results are surprising and amazing at the same time - and there are still many questions remaining without an answer.. ;-)
    • and, as promised, this time with a full deep dive into InnoDB internals -- we'll dig into all the details of the InnoDB flushing and purge story: what was wrong before MySQL 5.5, what remained wrong in 5.5, what was improved in 5.6, and what was redesigned and probably fixed in 5.7 -- how read-on-write issues were resolved, why parallel + improved flushing was implemented, what can go wrong and how to tune LRU flushing -- I'll tell you ALL ;-))
    • I'll have 2 hours to tell you the whole story, so be sure you'll get value for your time ;-)

  • then, again, as promised, we'll have Mark LEITH as our special guest during this event!
    • last time Mark was unable to come due to unexpected "management issues"..
    • this time we fixed all the issues ahead of time, and are now just crossing our fingers for the flying conditions, as Mark will fly from the UK to Paris the same day ;-)
    • if you've never attended one of Mark's talks before, I'd present him as a "Practical MySQL Performance Schema Magician"!! ;-)
    • Performance Schema (PFS) is a gold mine of valuable information about your MySQL instance
    • when entering any huge gold mine you may feel a little bit lost.. ;-)
    • but Mark will show you how easy it is to find your way there in practice, and how powerful solutions built around PFS can be..
    • Mark will also present his "ps_helper" - a collection of scripts he made to simplify practical PFS usage - this is really great stuff; I'd compare it to what the DTrace Toolkit did for DTrace -- you may use many of the scripts as they are, since they're straightforward, then learn by example and create new ones adapted exactly to what you need, etc..
    • and trust me, some examples will really surprise you with how deep you may go with PFS ;-)
    • the best would be to come with your laptop with the latest MySQL 5.7 (or 5.6) installed, so you can play with the presented stuff yourself..
    • BTW, ps_helper is now fully integrated into 5.7 as part of the "sys" schema
    • as well, if you're not already doing so, think about using MySQL Workbench 6.1+: while this GUI tool is simply great for many general DBA tasks, since v6.1 it also offers a very helpful interface to discover, query and configure PFS via the GUI.. - the tool is free and can be downloaded from here: (Linux, MacOSX and Windoze versions)
    • and of course Mark will speak about the latest MySQL Enterprise Monitor (MEM) version - it now fully uses PFS for its metrics and the result is really amazing.. - Mark will tell you all about it and show you a live demo, and if you want to get hands-on - you may start from here: (the tool is not free, but has a trial period long enough to try it)..
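To give a small taste of the out-of-the-box PFS data discussed above, here is a sketch of one classic query against the statement digest summary table (available by default since MySQL 5.6). The connection options are left to defaults, which is an assumption - adapt them to your setup; the query is only executed when a local mysql client is present:

```shell
#!/bin/sh
# Top 5 statement digests by total wait time, from the PFS digest summary.
# SUM_TIMER_WAIT is in picoseconds; dividing by 1e9 converts it to ms.
SQL="SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT/1000000000 AS total_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC LIMIT 5;"

echo "$SQL"

# run it only if a mysql client (and a reachable local server) is available:
if command -v mysql >/dev/null 2>&1; then
    mysql -e "$SQL" 2>/dev/null || true
fi
```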

Well, I hope you'll have a lot of fun and a lot of food for your brain this day! ;-))

See you there!
And also think ahead about other tech topics you'd be happy to see covered at the next MySQL TechDay..

Useful event links:
UPD: my talk was based on my Percona Live 2014 slides
Posted by Dimitri at 14:28
Categories: MySQL

Wednesday, 02 April, 2014

MySQL 5.7 just rocks! ;-)

The next MySQL 5.7 milestone release is available and it just rocks! ;-)

A few benchmark results to see where we are today, comparing:

  • MySQL 5.7 / 5.6 / 5.5
  • Percona Server 5.6 / 5.5
  • MariaDB 10 / 5.5

For all engines the latest available versions were used; the data set fits in memory, so the main focus here is on internal contentions: already fixed in some engines, still present in others ;-)

Sysbench OLTP_RO 8-tables :

Sysbench OLTP_RO Point-Selects 8-tables :

Sysbench OLTP_RW 8-tables :

All the details about these benchmark results and others (IO-bound OLTP_RW, Uniform & Pareto, DBT2, LinkBench) I'll present during my talk tomorrow at Percona Live 2014. I will also cover all the internals of the InnoDB flushing design and how we've improved it yet further in 5.7, the fixed and pending issues we have today, the impact of InnoDB purge, filesystem and flash storage choices on IO-heavy workloads, and several still-unexplained mysteries.. - prepare your brain for some storming ;-)

If you're attending Percona Live, don't miss the following talks from MySQL Team :

As well, we'll all be present this evening during our "Meeting MySQL Team" BOF session to answer any questions you have and discuss any issues - your valuable feedback helps us make MySQL yet better! - and nothing beats live, fair, face-to-face discussions.. - so, don't miss it ;-)

Also, to get an overview of all the new features and improvements coming with this 5.7 milestone release - you may find much interesting information by reading Geir's article -
UPD: my slides are here - MySQL_Perf-Percona_Live_2014-dim-key.pdf 
Posted by Dimitri at 16:37
Categories: MySQL

Friday, 31 January, 2014

MySQL & Friends @ FOSDEM-2014

February seemed so far away.. - and finally it's just tomorrow, starting with the MySQL & Friends Dev Room at FOSDEM 2014 in Brussels. I have a talk about "Starting with MySQL PERFORMANCE SCHEMA" - in fact, I would rather call it "Using PFS with zero configuration" ;-) -- many people think PFS is complicated, while in reality it's very simple and just needs a little bit of love ;-) Since MySQL 5.6, PFS is enabled by default, and as a result several very useful instrumentation stats are available out-of-the-box - my talk will be about them..

Of course I'll speak about MySQL Performance as well, so feel free to ask any questions about it.

Also, don't miss talks from our MySQL Team :

See you all there!

(and hurry up so you don't miss the MySQL & Friends Community Dinner - only a few places left)

-Dimitri, heading to FOSDEM in train ;-)

UPD: my slides are here - MySQL_PFS_2014-dim.pdf 

Posted by Dimitri at 15:10
Categories: MySQL

Friday, 22 November, 2013

MySQL Performance: over 1M QPS with InnoDB Memcached Plugin in MySQL 5.7

Last week, during Tomas' keynote at the Percona Live MySQL Conference in London, we announced, as one of the "previews" of the upcoming MySQL 5.7 release(s), an over 1,000,000 query/sec result obtained with the InnoDB Memcached plugin on a read-only workload. This article is just to confirm the announced results without going too much into details..

In fact, we have no idea yet today what exactly the scalability and performance limits of this solution are.. The huge gain in performance was possible here due to the initial overall speed-up made recently in MySQL 5.7, which let us reach 500K QPS in a "normal" SQL read-only workload. Then further improvements in the InnoDB Memcached plugin code were possible and came just naturally - especially since the Facebook Team challenged us pretty well here by describing all the performance limitations they are hitting in their workloads. Facebook also provided us a test-case workload which we successfully used to improve our code even more. And finally, the same test case was used to obtain the following benchmark results ;-)

The test was executed in "standalone" mode (both server and client running on the same machine). So, we used the biggest HW box we have in the LAB - a 48-core machine. This server was able to point us very quickly to any existing or potential performance issues and bottlenecks (and what is interesting is that most of them were now in the memcached code itself). However, the query/sec rate (QPS) depends a lot here on memory latency and CPU frequency, and this server has only 2GHz CPU cores, so on faster HW you may expect even better results ;-)

Now, comparing the best-to-best QPS results obtained on this server, we have the following:

and for people who prefer 2D charts :

I've put "MySQL 5.6" in the legend, while a truer label would rather be "the best result we observed until now" ;-)) -- some of the Memcached code improvements will be back-ported to MySQL 5.6 as well, so we may expect the next 5.6 releases to run better here too. However, only with the MySQL 5.7 code base will you be able to go really high..

During my talk at Percona Live in London I also presented the following graphs - the Memcached QPS corresponds here to the InnoDB "dml_reads/sec" stat:

There are 4 tests on these graphs, representing the "previous" MySQL code running the Memcached workload:

  • #1 - running on 48 cores as-is.. - we're hitting severe contention related to the MVCC code (which was fixed in the latest MySQL 5.7)..
  • #2 - limiting the MySQL server to run on 16 cores only, to lower this contention.. - and then hitting transaction-related contention (also fixed in the latest MySQL 5.7 code)..
  • #3 - tuning the memcached plugin to keep several reads within a single internal transaction -- this helps, but hits other contention..
  • #4 - limiting the MySQL server to run on 8 cores to see if contention may be lowered -- indeed, the max peak QPS becomes higher (at 32 users), but overall performance is worse..
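As an illustration of how tests #2 and #4 restrict the server to a subset of cores: on Linux, this kind of CPU pinning is typically done with taskset. Shown here as a dry run only - the mysqld path and options are examples, not the exact commands used in the benchmark:

```shell
#!/bin/sh
# Dry run: build and print CPU-pinning commands instead of starting a server.
MYSQLD=/usr/sbin/mysqld    # hypothetical server binary location

# test #2 style: restrict the server to 16 cores (0..15):
CMD_16CORES="taskset -c 0-15 $MYSQLD --defaults-file=/etc/my.cnf"
echo "$CMD_16CORES"

# test #4 style: restrict it to 8 cores (0..7):
CMD_8CORES="taskset -c 0-7 $MYSQLD --defaults-file=/etc/my.cnf"
echo "$CMD_8CORES"

# an already-running server can also be re-pinned by PID:
echo 'taskset -cp 0-15 $(pidof mysqld)'
```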

While with the latest MySQL 5.7 code things look completely different:

There are 2 tests on these graphs:
  • #1 - running on 48 cores as-is (no comments ;-))
  • #2 - using the "tuning" option to keep several reads within a single internal transaction - just slightly better at the max peak QPS, otherwise no significant difference anymore..

And to really feel the difference in the obtained QPS, let's bring them all together on the same graph:

As you can see, the difference is more than impressive ;-))
  • all the curves on the left part of the graph represent the QPS levels obtained with the "previous" MySQL 5.6 / 5.7 code..
  • the last curves on the right part - with the latest MySQL 5.7 code..

So, the work is still in progress, and I'll let Sunny and Jimmy provide you all the deep details about this huge step forward made in the latest MySQL 5.7 release!

I don't know what the performance limit here will be.. Probably only the HW level.. And I don't know if we'll have big enough HW to see it ;-) -- currently, via a single 1Gbit network link, we have already observed over 700K QPS, and while the limitation here comes from the single network link, the main troubles come from client-side processing rather than from the server.. - so, it seems Memcached @InnoDB now scales way better compared to the "original" Memcached itself ;-) -- then, what kind of performance may be expected when several network links are used (or simply faster network cards)? -- there is still a lot to discover! And RW workload performance will be yet another challenge as well ;-)

Kudos to Sunny and Jimmy! And my special thanks to Yoshinori (Facebook)! - I think this is an excellent example where a common work on a given problem provides a fantastic final result for all MySQL users!..

If you need some details about the Memcached plugin design - you may start your reading from here: - and then, keeping in mind all the results presented here, I let you imagine what kind of performance you may expect if data are accessed directly via the "native" InnoDB API, by-passing the Memcached level.. ;-))
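For the curious, "talking to InnoDB through memcached" is just the standard memcached text protocol (port 11211 by default). A sketch below - the key "aaa" is a hypothetical example, and the request is only actually sent when something is listening locally:

```shell
#!/bin/sh
# The raw memcached text-protocol line for a read; "aaa" is a made-up key.
REQ='get aaa'
printf '%s\r\n' "$REQ"

# send it for real only if a memcached listener is reachable on the
# default port (requires the nc utility):
if command -v nc >/dev/null 2>&1 && nc -z localhost 11211 2>/dev/null; then
    printf '%s\r\nquit\r\n' "$REQ" | nc localhost 11211
fi
```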

Posted by Dimitri at 13:36
Categories: MySQL