MySQL: Error Reading Relay Log Event
Status: Open
Category: MySQL Server
Severity: S3 (Non-critical)
Version: MySQL-5.5.28
OS: Linux (CentOS 6)

[9 Sep 2013 3:37] li ming

Description: I have scheduled a daily backup on a slave using this mysqldump command:

    mysqldump -u$user -p$passwd --dump-slave=2 --single-transaction -B database > backup.sql

The error log on the slave:

    130908  3:30:01 Error reading relay log event: slave SQL thread was killed
    130908  3:30:01 Slave SQL thread initialized, starting replication in log 'mysql-bin.000049' at position 93844393, relay log './mysql-relay-bin.000037' position: 93844539
    130908  3:30:01 Error reading relay log event: slave SQL thread was killed
    130908  3:30:32 Slave SQL thread initialized, starting replication in log 'mysql-bin.000049' at position 93844393, relay log './mysql-relay-bin.000037' position: 93844539
    130909  3:30:01 Error reading relay log event: slave SQL thread was killed
    130909  3:30:01 Slave SQL thread initialized, starting replication in log 'mysql-bin.000049' at position 103115353, relay log './mysql-relay-bin.000037' position: 103115499
    130909  3:30:01 Error reading relay log event: slave SQL thread was killed
    130909  3:30:01 Slave SQL thread initialized, starting replication in log 'mysql-bin.000049' at position 103116696, relay log './mysql-relay-bin.000037' position: 103116842
    130909  3:30:01 Error reading relay log event: slave SQL thread was killed
    130909  3:30:33 Slave SQL thread initialized, starting replication in log 'mysql-bin.000049' at position 103116696, relay log './mysql-relay-bin.000037' position: 103116842

How to repeat: The error occurs whenever the option '--dump-slave=2' is added.

[11 Sep 2013 5:10] Umesh Umesh

Hello li ming, this is known and documented behavior: --dump-slave[=value] causes mysqldump to stop the slave SQL thread before the dump and restart it again after.
Please reference http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_dump-slave

Thanks, Umesh

[12 Oct 2013 1:00] Bugs System

No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".

[8 Mar 9:11] Samuel Åslund T
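Since the stop/start messages above are expected whenever `mysqldump --dump-slave` runs, it can help to separate them mechanically from genuine replication failures. A minimal sketch, assuming MySQL 5.5-style error-log lines like those quoted in the report (the function names here are illustrative, not part of any MySQL tool):

```python
import re

# Matches the "slave SQL thread was killed" line that --dump-slave produces
# when it stops the SQL thread before the dump.
STOP_RE = re.compile(r"Error reading relay log event: slave SQL thread was killed")

# Matches the restart line and captures the binlog file and position the
# slave resumes from.
START_RE = re.compile(
    r"Slave SQL thread initialized, starting replication in log "
    r"'(?P<binlog>[^']+)' at position (?P<pos>\d+)"
)

def classify(line):
    """Return ('stop', None), ('start', (binlog, pos)), or ('other', None)."""
    if STOP_RE.search(line):
        return ("stop", None)
    m = START_RE.search(line)
    if m:
        return ("start", (m.group("binlog"), int(m.group("pos"))))
    return ("other", None)
```

Run against the log excerpt in the report, each nightly backup shows up as a stop followed by a start at the next saved position, which is consistent with the documented --dump-slave behavior rather than a fault.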
Slave I/O thread dies very often
(http://dba.stackexchange.com/questions/24258/slave-i-o-thread-dies-very-often)

Asked Sep 13 '12 at 12:12 by GZ Kabir; edited Sep 13 '12 at 19:49 by Max Vernon. Tags: mysql, replication. 2 votes.

I have MySQL replication working between two servers in master-master mode. Replication has been working from day 1. The I/O thread and the SQL thread have never died. However, the slave SQL thread dies very, very often. The log only says:

    120913 17:58:50 [Note] Slave I/O thread killed while reading event
    120913 17:58:50 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.004109', position 101146947
    120913 17:58:50 [Note] Error reading relay log event: slave SQL thread was killed

I've Googled for the problem, but found no indication of what to look for or how to resolve the issue. What could be causing the thread to die?

Comment: Do you have any additional slaves replicating from either Master?
– RolandoMySQLDBA Sep 13 '12 at 12:48

Answer (3 votes):

When the I/O thread dies, it usually dies for one of these reasons:

    STOP SLAVE;
    STOP SLAVE IO_THREAD;
    Network issues

The first two reasons are obvious, but many have been victimized by network connectivity. For ex
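A quick way to triage the cases listed above is to inspect SHOW SLAVE STATUS and see which thread is actually down. Below is a minimal sketch that parses the vertical (`\G`) output of that statement; the field names (Slave_IO_Running, Slave_SQL_Running) are the standard columns in MySQL 5.x, but the sample text in the test is illustrative, not taken from this thread:

```python
def parse_slave_status(text):
    """Turn `SHOW SLAVE STATUS\\G` output into a dict of field -> value."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def dead_threads(status):
    """Return the names of the replication threads that are not running."""
    return [name
            for name, field in (("IO", "Slave_IO_Running"),
                                ("SQL", "Slave_SQL_Running"))
            if status.get(field) != "Yes"]
```

If only the SQL thread shows as stopped, a deliberate STOP SLAVE (such as the one mysqldump --dump-slave issues) is a likely cause; if the I/O thread is down with a Last_IO_Error set, network connectivity to the master is the first thing to check.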
https://www.percona.com/blog/2008/08/02/troubleshooting-relay-log-corruption-in-mysql/
https://github.com/percona/percona-pacemaker-agents/issues/9
percona/percona-pacemaker-agents — Issue #9: Agent cannot resume a slave from previous position (Closed; 6 comments, 3 participants)

dotmanila commented May 16, 2014:

When a slave loses its connection to the master for some reason (e.g. a network issue) and Pacemaker takes it down, it will not resume from the replication coordinates in effect before the shutdown when it comes back up. Instead, it picks up REPL_INFO from the cluster configuration, which is behind.

FrankVanDamme commented Jun 19, 2014:

I had a database problem yesterday, and when the slave tried to come up again, there WAS no REPL_INFO in the configuration. I think this must be because I upgraded the resource script and have not switched masters since, since REPL_INFO seems to be updated when the master is started or promoted. This triggered, of course, an error. It turned out this was my fault, BUT in the process I found out why the script doesn't seem to pick up replication: in set_master, this should happen when "$new_master" = "$master_host"; alas, on my system, $new_master (