Tuesday, June 02, 2009

Increasing log file size increases performance

I have been analysing a number of new patches we've
developed for MySQL to see how they scale. However, I was
getting very strange results that didn't compare at all with
my old results, and most of the changes had a negative impact :(
Not so nice.

As part of debugging the issues with sysbench I decided to go
back to the original version I used previously (sysbench 0.4.8).
Interestingly, even then I saw a difference at 16 and 32 threads,
whereas at 1-8 threads and 64+ threads the results were the same
as usual.

So I checked my configuration, and it turned out that I had changed
the log file size from 1300M down to 200M and had also used 8 read
and write threads instead of 4. A quick check showed that the
parameter affecting the sysbench results was the log file size.
Increasing the log file size from 200M back to 1300M raised the
top result at 32 threads from 3300 to 3750, roughly a 14% increase.
The number of read and write threads had no significant impact
on performance.
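For reference, the settings involved map onto my.cnf parameters roughly like this. This is a sketch under assumptions: the post doesn't say how many log files were in the group (two is the common default), and the separate read/write thread settings exist only in the InnoDB plugin, not in the older built-in InnoDB:

```ini
# Total redo log capacity = innodb_log_file_size x innodb_log_files_in_group.
# The "1300M" configuration from the post, assuming two log files in the group.
innodb_log_file_size      = 1300M
innodb_log_files_in_group = 2

# Background I/O threads (InnoDB plugin only; the older built-in InnoDB
# has a single innodb_file_io_threads setting instead).
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
```

Note that changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files before restarting, otherwise InnoDB will refuse to start because the log files are of a different size than configured.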

This is obviously part of the problem currently being
researched by both Mark Callaghan and Dimitri.
Coincidentally, Dimitri has just recently blogged about this and
provided a number of more detailed comparisons of the
performance of various InnoDB log file size settings.

5 comments:

Mark Callaghan said...

One of my tests is reload of a large database using concurrent sessions. The largest possible log file size versus 3x256MB log files has a big impact on performance -- my vague memory is ~2X and also on the number of page writes done during the reload.

Arjen Lentz said...

Using such big InnoDB log files makes same-system recovery completely unrealistic; the recovery time could be on the order of a day or more!

Furthermore, unless Percona's dynamic checkpointing patch is used, you will at some point get a huge dip in performance while the log is processed and dirty pages flushed. So you're just delaying the problem.
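The trade-off Arjen describes is tunable to some extent. A hedged sketch of the relevant my.cnf knobs (the innodb_adaptive_checkpoint setting is from Percona's XtraDB patches, which I take to be the "dynamic checkpointing" he refers to; the name and value are an assumption, not from the post):

```ini
# Stock InnoDB: cap the fraction of dirty pages in the buffer pool, so the
# flush backlog (and the eventual checkpoint stall) stays bounded.
innodb_max_dirty_pages_pct = 75

# Percona XtraDB only (assumed to be the dynamic checkpointing patch):
# flush continuously based on checkpoint age instead of in one big burst.
innodb_adaptive_checkpoint = estimate
```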

Mark Callaghan said...

Arjen,
You are speculating. I have no doubt that the larger log file makes recovery take longer, but I would run some tests before making blanket statements like that.

James Day said...

Arjen, deferring work is good; it lets real servers handle transient load surges better. Then the background threads can take care of getting the dirty page count down.

If you care about uptime, an hour or even thirty minutes is already too long; you fail over well before that. If you can't fail over, then either you're not serious about needing the uptime or all your servers were hit at the same time.

We also have a third-party patch that speeds up InnoDB recovery for large buffer pool sizes, and we can probably find other ways to speed up recovery if we look into it. With the patch I doubt that most systems will see recovery times longer than an hour or three, even with 4G of logs. One could construct exceptions, such as lots of small single-page changes, but that's not usual.

Mark Callaghan said...

James,
What is the 3rd party patch? Is it something from Percona to reduce the CPU bottleneck from sorting during recovery?