Created: 2009-08-12
Last modified: 2009-08-26

MySQL Performance: InnoDB Purge & Ahead Flushing - FIXED! :-)

by Dimitri

SSC Team, 2009
Sun Microsystems Inc.

This information is provided for reference only and does not commit Sun Microsystems.

Benchmark Information
Customer Name(s): SSC Team

NDA: no

Contact Information:
     - dimitrik (at)

Dates: Aug.2009

Keywords: MySQL, InnoDB, Ahead Flushing, purge, purge lag, innodb_max_purge_lag, log size, dirty pages, performance

Hardware Configuration
     - M5000 8CPU SPARC64-VII (quad-core bi-thread) 2400Mhz, 64GB RAM, 2x FC-port 4Gbit

     - ST6140 2x LUN (500GB RAID1 8HD), each LUN is connected by its own fiber channel to the host

Software Configuration
     - Solaris 10 update7
     - UFS

     - MySQL 5.4
     - MySQL Perf #45
     - XtraDB-6
     - InnoDB plugin-1.0.4
     - dbSTRESS (injector)

Overview: this performance study explains the idea and functionality of the final fix for the previously discovered issues with Purge Lag and Ahead Flushing (for more details see the following report and/or blog posts: Jul.2009, Jun.2009)

     - finalize the fix with the latest ideas...
     - validate the fix on dbSTRESS workloads

Result(s): Wow! :-))

Benchmark Scenario
My benchmark scenario remains the same as explained in the Purge Lag & Ahead Flushing performance study. And I'm lucky again to get exactly the same test server :-))

The following dbSTRESS scenario is used to reproduce the same problem again:

Test Scenario:

Configuration:
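The exact configuration details are kept from the original study; as a rough sketch, the InnoDB settings involved in this kind of test usually look like the following my.cnf fragment (the values here are illustrative assumptions, not the ones used on the test server):

```ini
[mysqld]
# buffer pool sized for a large-memory host (illustrative value)
innodb_buffer_pool_size    = 16G
# redo log size directly impacts the flushing behavior under test
innodb_log_file_size       = 1024M
innodb_log_files_in_group  = 2
# limit on the dirty pages percentage (Ahead Flushing target)
innodb_max_dirty_pages_pct = 15
# throttle DML when purge lags behind (0 = disabled, the default)
innodb_max_purge_lag       = 0
```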

Ahead Flushing, and why...
Let me show you once again what's going on...

First, let me show you a small graph representing the TPS level observed during the current test scenario, executed against several engines in the following order:

Purge Lag
So, if you look carefully at the previous graphs, you should be surprised to see a constantly growing "History len" value - this number represents the current amount of un-purged undo records..
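For reference, the same "History len" value can be observed directly on a running server; it is reported in the TRANSACTIONS section of the standard InnoDB status output:

```sql
-- Look for the "History list length" line in the
-- TRANSACTIONS section of the status report:
SHOW ENGINE INNODB STATUS\G
```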

Why it happens, and where the danger lies, you may find in the MySQL manual:

In the InnoDB multi-versioning scheme, a row is not physically removed from the database immediately when you delete it with an SQL statement. Only when InnoDB can discard the update undo log record written for the deletion can it also physically remove the corresponding row and its index records from the database. This removal operation is called a purge, and it is quite fast, usually taking the same order of time as the SQL statement that did the deletion.

In a scenario where the user inserts and deletes rows in smallish batches at about the same rate in the table, it is possible that the purge thread starts to lag behind, and the table grows bigger and bigger, making everything disk-bound and very slow. Even if the table carries just 10MB of useful data, it may grow to occupy 10GB with all the “dead” rows . In such a case, it would be good to throttle new row operations and allocate more resources to the purge thread. The innodb_max_purge_lag system variable exists for exactly this purpose.

(see the MySQL manual for more details)
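The throttling itself can be enabled dynamically; as a small sketch (the limit of 400000 undo records here is just an arbitrary example value, not a recommendation):

```sql
-- Start throttling DML once the purge lag (history list length)
-- exceeds 400000 undo records (example value only):
SET GLOBAL innodb_max_purge_lag = 400000;

-- Verify the current setting:
SHOW GLOBAL VARIABLES LIKE 'innodb_max_purge_lag';
```

If I read the manual correctly, once the limit is exceeded, each INSERT, UPDATE, and DELETE is delayed by roughly ((purge_lag / innodb_max_purge_lag) x 10) - 5 milliseconds, so the delay grows together with the lag.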

Now, let me first show you what is happening when my test is finished (and this was the main issue which inspired all these performance studies since last year :-)) - once the workload is finished, it still takes 15-20 minutes for InnoDB to flush all dirty pages! But what will change now, when the dirty pages percentage is really limited and Ahead Flushing is in action?...
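By the way, the flushing progress during this after-test period is easy to follow from the buffer pool status counters, for example:

```sql
-- Number of dirty pages still waiting to be flushed,
-- and the total buffer pool size in pages:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
```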