Created: 2009-05-17
Last modified: 2009-05-25




MySQL Performance: InnoDB Dirty Pages & Redo Log Size Impact






by Dimitri


SSC Team, 2009
Sun Microsystems Inc.





This information is provided for guidance only and does not commit Sun Microsystems.




Benchmark Information
Customer Name(s): SSC Team

NDA: no

Contact Information:
     - dimitri (at) sun.com

Dates: May 2009

Keywords: MySQL, log size, dirty pages, performance


Hardware Configuration
Server(s):
     - M5000 8CPU SPARC64-VII (quad-core bi-thread) 2400MHz, 64GB RAM, 2x FC-port 4Gbit

Storage:
     - ST6140 2x LUN (500GB RAID1 8HD), each LUN is connected by its own fiber channel to the host


Software Configuration
System:
     - Solaris 10 u6
     - UFS

Application(s):
     - MySQL 5.4
     - MySQL 5.Perf build5
     - XtraDB-5
     - XtraDB-3
     - InnoDB plugin-1.0.3
     - dbSTRESS (injector)


History
The story started at the beginning of the year: while preparing a MySQL Workshop, I continued my first tests with XtraDB and looked for the MySQL configuration parameters giving the most optimal, performant and stable processing. During these tests I discovered several strange things:

I told a lot of people about it at the time, but everybody was just surprised and did not really believe such a case was possible... Agreed, it's hard to believe ;-)

So, to refresh the story, let me first show you the following picture:


Test Scenario
To reproduce the same problem again (if it still exists, of course :-)) I've prepared the following dbSTRESS scenario:

Test Scenario :

From my previous tests I observed that the max throughput on 16 cores is generally reached with 32 sessions (both with innodb_thread_concurrency equal to 0 and 16). So, other settings for this scenario (see also the sketch at the end of this section):

Config :

Each test is executed :

For every test to check :

Initial list of engines :
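
For reference, a minimal sketch of how the key parameters discussed in this report can be set (only the values mentioned in the text are shown - the real dbSTRESS config contains more than this):

     -- innodb_log_file_size is not dynamic: set it in my.cnf and restart
     --   innodb_log_file_size = 128M    (baseline run)
     --   innodb_log_file_size = 1024M   (big redo log run)
     SET GLOBAL innodb_thread_concurrency = 16;   -- 0 or 16 in these tests
     SET GLOBAL innodb_max_dirty_pages_pct = 15;  -- the setting under study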


First observations
First of all, I want to be sure that with a 128MB log size everything is still OK. So, the following graph shows the I/O and Buffer Pool state during a test with MySQL 5.4 and:

What if the redo log is bigger?..
Let's see now what will change if the redo log is 1024MB in size. The following graph shows the I/O and Buffer Pool state during a test with MySQL 5.4 and:
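
A practical note on switching between log sizes: in this MySQL generation the redo log cannot be resized online. A sketch of the usual procedure (adapt the file paths to your own setup) - a clean shutdown, moving the old log files away, and a restart with the new size in my.cnf:

     SET GLOBAL innodb_fast_shutdown = 0;  -- make the next shutdown a full one
     -- then, outside of SQL:
     --   mysqladmin shutdown
     --   move ib_logfile0 / ib_logfile1 out of the datadir (keep as backup)
     --   set innodb_log_file_size = 1024M in my.cnf
     --   restart mysqld: new log files are created at the new size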

In depth with Dirty Pages Level
The main question of this chapter is - why was the dirty pages percentage setting (innodb_max_dirty_pages_pct=15) ignored?..

To go more in depth, we'll need to open the MySQL sources. The file "innodb/srv/srv0srv.c" contains the source code we're interested in.
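
But before diving into the code, it's easy to check from SQL whether the setting is honored at all - the standard status counters are enough:

     SET GLOBAL innodb_max_dirty_pages_pct = 15;  -- dynamic, no restart needed
     SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
     SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
     -- dirty ratio (%) = 100 * pages_dirty / pages_total
     -- under a steady workload it should stay near or below 15%,
     -- which is exactly what was NOT happening in my tests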


In depth with a Purge Process
The main question of this chapter is - what is wrong with the Purge Process?...

As I did not design InnoDB and don't yet have enough knowledge of its code, my goal here will be to clarify what's going wrong rather than to make a patch - I'm pretty sure there are at least several people around who manage the InnoDB code as their own :-))

To make things clearer, I'll first of all simplify my workload.

Simplified Read+Write :

Question: - if I'm doing only updates of my rows, and all the rows I'm using fit into my buffer pool - how much buffer space will I need to keep them cached?...

NOTE: - the current workload is using only fixed-length rows! And from the previous graphs you may see that the whole current working set fits into 70,000 InnoDB pages within the buffer pool (70,000 x 16KB = ~1.1GB). So, I'd say 140,000 pages should be more than enough, even if we need to keep the previous copy of every page all of the time.

But let's see now what is going on there...
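
The purge lag itself is easy to observe from the standard InnoDB status output (the History list length counts the undo log records still waiting to be purged):

     SHOW ENGINE INNODB STATUS\G
     -- in the TRANSACTIONS section, watch the line:
     --   History list length NNNN
     -- if this value keeps growing during the test, purge is not keeping
     -- up with the update stream and old row versions accumulate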


Instead of SUMMARY...
Last notes :

I'm also surprised nobody observed such an issue before... I encourage you to monitor your databases - probably it's already happening in your case too...

At least one positive thing - I'll now obtain even better results in my tests using a 1024M log size! :-))
As well, once this issue is fixed it'll be fine to avoid any regression compared to the currently observed performance levels :-))