Blog Post


2018-05-15

Galera / MySQL: log block numbers mismatch

https://dba.stackexchange.com/questions/132409/galera-replication-error-log-block-numbers-mismatch

[Q]

I am attempting to get Galera replication working between two nodes.

I am finding these errors in my innobackup.backup.log file:

xtrabackup: error: log block numbers mismatch:
xtrabackup: error: expected log block no. 671400745, but got no. 679592737 from the log file.
xtrabackup: error: it looks like InnoDB log has wrapped around before xtrabackup could process all records due to either log copying being too slow, or log files being too small.
xtrabackup: Error: xtrabackup_copy_logfile() failed.

I am not sure where to go from here; I have googled a bit and didn't find anything that seemed to apply to my particular case.

Any advice will be most appreciated.

[A]

I finally have solved this problem.

I found an article on Launchpad that suggested using this command to check whether the problem was I/O:
innobackupex --user=<username> --password=<password> --stream=tar --ibbackup=/usr/bin/xtrabackup /tmp >/dev/null

This failed too, so I started to suspect that I/O was not at fault.

I experimented with the command and found that this variant completed:
innobackupex --user=<username> --password=<password> --stream=xbstream --parallel=15 /tmp >/dev/null

This also revealed that my ulimit was not high enough, so I had to raise it for the backup to complete.
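For reference, the open-files limit can be checked and raised from the shell before running the backup; a minimal sketch (the value 65535 is an assumption, and a persistent change would go in /etc/security/limits.conf for the mysql user):

```shell
# show the current open-files limit for this shell
ulimit -n
# try to raise it for this session; raising the hard limit usually requires root
ulimit -n 65535 2>/dev/null || echo "raising the limit may require root"
```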

To force innobackupex to use xbstream instead of tar for the --stream option, I put this in my.cnf on both of my Galera servers:

[sst]
streamfmt=xbstream


3 comments in total
没楼可以吗 commented on 2018-05-15 16:00:
How to calculate a good InnoDB log file size
Baron Schwartz | November 21, 2008 |  Posted In: Insight for DBAs

Peter wrote a post a while ago about choosing a good InnoDB log file size. Not to pick on Peter, but the post actually kind of talks about a lot of things and then doesn’t tell you how to choose a good log file size! So I thought I’d clarify it a little.

The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time. That much Peter covered really well. But how do you choose that size? I’ll show you a rule of thumb that works pretty well.


In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism. But in this case you can calculate a reasonable value, believe it or not. Run these queries at your server’s peak usage time:

mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
Log sequence number 84 3836410803
1 row in set (0.06 sec)
1 row in set (1 min 0.00 sec)
Log sequence number 84 3838334638
1 row in set (0.05 sec)
Notice the log sequence number. That’s the total number of bytes written to the transaction log. So, now you can see how many MB have been written to the log in one minute. (The technique I showed here works on all versions of MySQL. In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)

mysql> select (3838334638 - 3836410803) / 1024 / 1024 as MB_per_min;
+------------+
| MB_per_min |
+------------+
| 1.83471203 |
+------------+
As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of logs. That’s generally plenty of data for InnoDB to work with; an hour’s worth is more than enough so that it can reorder the writes to use sequential I/O during the flushing and checkpointing process. At this rate, this server could use about 110 MB of logs, total. Round it up to 128 for good measure. Since there are two log files by default, divide that in half, and now you can set

innodb_log_file_size=64M
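The whole rule of thumb can be reproduced in one awk one-liner, using the two log sequence numbers sampled above (the one-hour window and the two-file split follow the article; round the result up to a convenient power of two):

```shell
# derive a log file size from two LSN samples taken 60 seconds apart
awk -v lsn1=3836410803 -v lsn2=3838334638 'BEGIN {
  mb_per_min = (lsn2 - lsn1) / 1024 / 1024   # bytes written per minute, in MB
  total = mb_per_min * 60                    # enough log space for ~1 hour
  per_file = total / 2                       # two log files by default
  printf "MB/min=%.2f total=%.0fMB per_file=%.0fMB\n", mb_per_min, total, per_file
}'
```

This prints roughly 1.83 MB/min and about 55 MB per file, which the article rounds up to 64M.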
Does that look surprisingly small? It might. I commonly see log file sizes in the gigabyte range. But that's generally a mistake. The server I used for the measurements above is a big one doing a lot of work, not a toy. Log file sizes can't be left at the default 5MB for any real workload, but they often don't need to be as big as you might think, either.

If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload. Perhaps you’re inserting a lot of big rows or something. In this case you might want to make the log smaller so you don’t end up with GB of logs. But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it. If you’re writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.

However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it’s configured to be. In part that’s because InnoDB’s log entries are very compact. The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.

One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes. This is where formulas break down and judgment and experience are needed. But this “rule of thumb” is generally a good sane place to start.
没楼可以吗 commented on 2018-05-15 15:49:
innodb_log_file_size
没楼可以吗 commented on 2018-05-15 15:37:
An official reply from Percona:
I would suggest changing the backup time, probably to an off-peak period. Also, I would suggest configuring a good redo log file size as per