
Tuesday, 30 June, 2009

MySQL Performance: InnoDB Purge Lag and Ahead Flushing

After publishing the benchmark report in May about InnoDB Dirty pages & Log size impact, I received several very interesting comments, and one of them pointed to the purge lag problem and to the InnoDB innodb_max_purge_lag parameter, created specially for this purpose! So I was very curious to see how it might help with my workload...

The following study looks at how innodb_max_purge_lag works today and what may be done to make things better :-))

I'm not claiming any absolute truth here :-) My goal is to understand what's going on with purge in my workload and to see if anything else can be improved..

Benchmark scenario

My benchmark scenario will be exactly the same as in my previous test - and by chance I still have exactly the same server :-))

To reproduce the same workload I've prepared the following dbSTRESS scenario:

Test Scenario:

  • a 5-minute Read-Only warm-up
  • then a 1-hour Read+Write workload
  • each session operates on a fixed list of objects (the choice is random, but the list is fixed, and the InnoDB buffer pool is sized big enough to keep all necessary pages cached!)

From my previous tests I observed that the max throughput on 16 cores is generally reached with 32 sessions (with innodb_thread_concurrency equal to 0 or to 16). So, the other settings for this scenario:

Config:

  • CPU cores: 16
  • concurrent sessions: 32
  • think time: 0

Each test is executed:

  • with a 1024MB redo log size
  • or with a 128MB redo log, for comparison

For every test, the points to check are:

  • the dirty pages percentage
  • the total time of dirty page flushing at the end of the test
  • the performance level reached
  • the History length
  • DML delays (if any)

However, I'm concentrating only on MySQL 5.4 testing now.

InnoDB Purge Lag

According to the InnoDB documentation (and the InnoDB code as well :-)), the innodb_max_purge_lag variable controls how to delay INSERT, UPDATE, and DELETE operations when purge operations are lagging. The default value is 0 (no delays).

The InnoDB transaction system maintains a list of transactions that have index records delete-marked by UPDATE or DELETE operations. Let the length of this list be purge_lag. When purge_lag exceeds innodb_max_purge_lag, each INSERT, UPDATE, and DELETE operation is delayed by ((purge_lag/innodb_max_purge_lag)×10)−5 milliseconds. The delay is computed at the beginning of a purge batch, every ten seconds. The operations are not delayed if purge cannot run because of an old consistent read view that could see the rows to be purged.
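
To put a number on it: with innodb_max_purge_lag=1000000 and a History length around 8,000,000 (the level reached in my tests below), the rule gives (8 × 10) − 5 = 75 ms of delay per row operation. A minimal sketch of that arithmetic (plain C, variable names are mine and not the real InnoDB symbols):

 /* Hedged sketch of the documented delay rule; not InnoDB code. */
 #include <stdio.h>

 int main(void)
 {
     unsigned long purge_lag     = 8000000UL;   /* History length (example value) */
     unsigned long max_purge_lag = 1000000UL;   /* innodb_max_purge_lag */
     double delay_ms = 0.0;

     if (max_purge_lag > 0 && purge_lag > max_purge_lag) {
         /* ((purge_lag / innodb_max_purge_lag) * 10) - 5 milliseconds */
         delay_ms = ((double) purge_lag / max_purge_lag) * 10.0 - 5.0;
     }

     printf("DML delay: %.1f ms per row operation\n", delay_ms);  /* prints 75.0 */
     return 0;
 }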

However, as I explained before, the purge routine is not called as expected: the Main Thread is looping all the time on trx_purge() - but let's look more in depth at how it works.
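
For reference, the purge step inside srv_master_thread() has roughly this shape (a simplified sketch from my reading of the 5.x source, not a literal copy):

 /* Simplified sketch of the purge loop in srv_master_thread() (srv0srv.c).
    Details are omitted; the point is that the Main Thread keeps calling
    trx_purge() until a batch comes back empty. */
 ulint   n_pages_purged;

 do {
     srv_main_thread_op_info = "purging";
     n_pages_purged = trx_purge();
 } while (n_pages_purged);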

Purge lag is also called the Rollback segment History length within the InnoDB code, and that's the name I prefer :-))

InnoDB Code Instrumentation

To be able to monitor InnoDB internals, I've added the following stats:

  • Flush calls (a DTrace script tracing who is calling the buf_flush_batch() function and how often)
  • innodbSTAT (based on "show innodb status" - to check the buffer pool state)
  • added a "History len" value to "show innodb status"
  • added DML delay values to "show innodb status"

The added code looks like this:

 fputs("--------------\n"
       "PURGE STATUS\n"
       "--------------\n", file);

 fprintf(file, "History len: %lu\nDML Delay: %lu max: %lu\n",
         (ulong) trx_sys->rseg_history_len,
         (ulong) srv_dml_needed_delay,
         (ulong) srv_dml_needed_delay_max);

 /* reset the peak value once it has been reported */
 srv_dml_needed_delay_max = 0;

As the DML delay (srv_dml_needed_delay) may vary a lot at any moment, I've added a DML-delay-max value (srv_dml_needed_delay_max) - it keeps the max delay value reached up to the current "show innodb status" call (and is reset to zero afterwards).
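
The update itself is just keeping the peak of srv_dml_needed_delay - a minimal sketch (the exact spot doesn't matter much, any place where srv_dml_needed_delay is recomputed will do):

 /* Sketch: remember the peak DML delay between two "show innodb status"
    calls, to be reported (and reset) by the status output above. */
 if (srv_dml_needed_delay > srv_dml_needed_delay_max) {
     srv_dml_needed_delay_max = srv_dml_needed_delay;
 }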

First Observations

As you may see from my previous tests, a smaller redo log size gives better TPS stability on InnoDB, while a bigger redo log gives a higher performance level but generates periodic bursts of data flushing, which may partially freeze throughput as a side effect...

And even if I don't much like any kind of artificial limit (the storage box is not even fully used here), perhaps the innodb_max_purge_lag option may bring a little bit more stability?..

Let's see..

First test:

  • innodb_max_purge_lag=0
  • innodb_max_purge_lag=1000000
  • innodb_max_purge_lag=100000
  • innodb_max_purge_lag=10000

To my big surprise, the History length exceeded 8,000,000 in all cases and there was NO DIFFERENCE between these tests..

Why?..


Full report: http://dimitrik.free.fr/db_STRESS_MySQL_540_Purge_Lag_and_Ahead_Flushing_Jun2009.html

Any comments are welcome! :-)

Posted by Dimitri at 18:14
Categories: MySQL, Solaris, Tools/ dbSTRESS