Error Reading Relay Log Event
Slave I/O Thread Dies Very Often
(Database Administrators Stack Exchange: http://dba.stackexchange.com/questions/24258/slave-i-o-thread-dies-very-often)

Question (GZ Kabir, Sep 13 '12): I have MySQL replication working between two servers in master-master mode, and it has worked from day 1. The SQL thread has never died, but the slave I/O thread dies very, very often. The log only says:

    120913 17:58:50 [Note] Slave I/O thread killed while reading event
    120913 17:58:50 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.004109', position 101146947
    120913 17:58:50 [Note] Error reading relay log event: slave SQL thread was killed

I've Googled for the problem but found no indication of what to look for or how to resolve the issue. What could be causing the thread to die?

Comment (RolandoMySQLDBA): Do you have any additional slaves replicating from either master?

Answer (score 3): When the I/O thread dies, it usually dies for one of these reasons:

    STOP SLAVE;
    STOP SLAVE IO_THREAD;
    network issues

The first two reasons are obvious, but many have been victimized by network connectivity. For example, if enough packets are dropped along the traceroute to the IP you are using as the master, the I/O thread will die.
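If network connectivity is the suspect, a reasonable first step is to confirm which thread actually stopped, restart only the I/O thread, and make the slave tolerate a flaky link better. A minimal sketch in SQL, run on the slave; the timeout and retry values below are illustrative assumptions, not recommendations:

    -- Confirm which thread stopped and why: check Slave_IO_Running,
    -- Slave_SQL_Running and Last_IO_Error in the output.
    SHOW SLAVE STATUS\G

    -- Restart only the I/O thread; the SQL thread keeps applying
    -- whatever relay log it already has.
    START SLAVE IO_THREAD;

    -- Make the slave notice a dead link sooner and reconnect faster.
    SET GLOBAL slave_net_timeout = 60;           -- seconds of silence before reconnecting (illustrative)
    STOP SLAVE;
    CHANGE MASTER TO MASTER_CONNECT_RETRY = 10;  -- seconds between reconnect attempts (illustrative)
    START SLAVE;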
Error Reading Relay Log Event (MySQL Replication Mailing List)
(http://lists.mysql.com/replication/412)

I have a problem regarding replication. I have successfully set up replication, and whenever I start the slave server [mysqladmin -u root start-slave] I see no problems. But whenever I stop the slave server [mysqladmin -u root stop-slave], I always get the message "Error reading relay log event: slave SQL thread was killed". What could be wrong with the slave server?

    060605 11:36:02 InnoDB: Started; log sequence number 0 43655
    060605 11:36:02 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000002' at position 1219, relay log '.\backup_server-relay-bin.000008' position: 235
    060605 11:36:02 [Note] Slave I/O thread: connected to master 'replUSR@stripped:3306', replication started in log 'mysql-bin.000002' at position 1219
    060605 11:36:02 [Note] mysqld: ready for connections. Version: '5.0.21-community-log'  socket: ''  port: 3306  MySQL Community Edition (GPL)
    060605 11:36:12 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
    060605 11:36:12 [Note] Slave I/O thread killed while reading event
    060605 11:36:12 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000002', position 1219
    060605 11:36:12 [Note] Error reading relay log event: slave SQL thread was killed
    060605 11:36:25 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000002' at position 1219, relay log '.\backup_server-relay-bin.000010' position: 235
    060605 11:36:25 [Note] Slave I/O thread: connected to master 'replUSR@stripped:3306', replication started in log 'mysql-bin.000002' at position 1219
    060605 11:45:54 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
    060605 11:45:54 [Note] Slave I/O thread killed while reading event
    060605 11:45:54 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000002', position 1219
    060605 11:45:54 [Note] Error reading relay log event: slave SQL thread was killed
    060605 11:46:21 [Note] mysqld: Normal shutdown
    060605 11:46:22 InnoDB: Starting shutdown...
    060605 11:46:24 InnoDB: Shutdown completed; log sequence number 0 43655
    060605 11:46:24 [Note] mysqld: Shutdown complete

Thread: "Error reading relay log event" - Michael Louie Loria, 5 Jun; Re: Ravi Prasad LR, 5 Jun
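Two different things show up in this log. The "Error reading relay log event: slave SQL thread was killed" line is a [Note], not an error: it is logged whenever the SQL thread is stopped on purpose, so seeing it after mysqladmin stop-slave is expected. The recurring error 2013 lines are the real symptom: the I/O thread keeps losing its connection to the master. A minimal sketch of where to look, assuming access to a mysql client on both servers (the variables are standard, but whether your values are actually the problem depends on your setup):

    -- After mysqladmin stop-slave, both threads simply report "No";
    -- the "slave SQL thread was killed" [Note] is the normal stop message.
    SHOW SLAVE STATUS\G

    -- For the repeated error 2013, two common culprits are a
    -- max_allowed_packet mismatch between master and slave, and
    -- overly aggressive network timeouts. Compare on both servers:
    SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
    SHOW GLOBAL VARIABLES LIKE 'slave_net_timeout';   -- slave side
    SHOW GLOBAL VARIABLES LIKE 'net_%timeout';        -- master side read/write timeouts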
See also: Troubleshooting relay log corruption in MySQL (Percona blog): https://www.percona.com/blog/2008/08/02/troubleshooting-relay-log-corruption-in-mysql/
Agent Cannot Resume A Slave From Previous Position
(percona/percona-pacemaker-agents issue #9, closed: https://github.com/percona/percona-pacemaker-agents/issues/9)

dotmanila (May 16, 2014): When a slave loses its connection to the master for some reason, e.g. a network issue, and Pacemaker takes it down, it does not resume from the replication coordinates it had before the shutdown when it comes back up. Instead it picks up REPL_INFO from the cluster configuration, which points to an older position.

FrankVanDamme (Jun 19, 2014): I had a database problem yesterday, and when the slave tried to come up again there WAS no REPL_INFO in the configuration. I think this is because I upgraded the resource script and have not switched masters since; REPL_INFO seems to be updated only when a master is started or promoted. This, of course, triggered an error. It turned out to be my fault, BUT in the process I found out why the script doesn't seem to pick up replication: in set_master, this should happen if "$new_master" = "$master_host"; alas, on my system, $new_master (
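The behavior the issue asks for amounts to preferring the slave's own last-executed coordinates over the cluster-wide REPL_INFO attribute. In SQL terms the idea looks like the sketch below; this is a sketch of the concept, not the agent's actual code, and the log file name and position are placeholders:

    -- Before taking the slave down: record where the SQL thread stopped.
    -- Relay_Master_Log_File / Exec_Master_Log_Pos are the master-side
    -- coordinates of the last event the slave executed.
    SHOW SLAVE STATUS\G

    -- On the way back up: resume from the recorded coordinates instead of
    -- the (possibly stale) cluster-wide REPL_INFO. Placeholder values shown.
    CHANGE MASTER TO
        MASTER_LOG_FILE = 'mysql-bin.000002',   -- Relay_Master_Log_File from above
        MASTER_LOG_POS  = 1219;                 -- Exec_Master_Log_Pos from above
    START SLAVE;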