
Showing posts with the label mysql

Backing Up MySQL Database using Openshift/Kubernetes Cron Job

Periodic Backups: When running an application in production, the data and the app should be backed up periodically. One way to do that is to keep the source code in a git repository and periodically dump the database. This article shows how to do such periodic jobs using the CronJob functionality of Openshift/Kubernetes. Technique: When running a cron job, the filesystem used by the pod is ephemeral, that is, non-persistent. To work around this limitation we should mount an external volume, which can be in the form of a persistent volume claim or a directly provided volume specification. The Job spec:
apiVersion: batch/v1
kind: Job
metadata:
  name: job04-test
spec:
  template:
    spec:
      containers:
      - name: job04c-test
        image: centos/mysql-57-centos7
        command: ["/bin/sh","-c", "/opt/rh/rh-mysql57/root/usr/bin/mysqldump ...
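The Job in the excerpt runs once; to make it periodic, the same pod template can be wrapped in a CronJob with a schedule and the external volume mounted for the dump output. A minimal sketch, assuming hypothetical names (mysql-backup, backup-pvc, appdb, the DB_* variables) and a daily schedule that are not taken from the post:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mysql-backup
            image: centos/mysql-57-centos7
            command: ["/bin/sh", "-c",
              "/opt/rh/rh-mysql57/root/usr/bin/mysqldump -h $DB_HOST -u $DB_USER appdb > /backup/appdb-$(date +%F).sql"]
            volumeMounts:
            - name: backup-volume
              mountPath: /backup    # dumps land on the persistent volume
          volumes:
          - name: backup-volume
            persistentVolumeClaim:
              claimName: backup-pvc
```

Because /backup is backed by the claim, the dump files survive the ephemeral pod filesystem that the article warns about.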

Lessons Upgrading MySQL DB 5.1 to Percona 5.7

I just recently upgraded a database server that was previously running MySQL 5.1 (the standard Sun/Oracle version) to Percona Server 5.7. A few quirks were notable enough to warrant this blog post. Planning and Preparation: A Percona blog post ( mysql-upgrade-best-practices ) stated that the best way to upgrade across such a huge difference in major version (5.1 to 5.7) is to do a full logical dump of all databases except mysql, dump the users and grants, uninstall the database and remove the datafiles, then install the new version and import the logical dump and grants. But alas, the database we were going to upgrade is so big, and the IO subsystem became a bottleneck when doing the logical dump; our colleagues tried to do a mysqldump and it took more than 2 days to run, prompting us to cancel the backup (otherwise it would have interfered with workday application usage of the database). Reading the blog I noted that: for a major version upgrade, using a logical dump and restoring the dump is the safe...
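The dump-and-reimport path described above can be sketched as a few shell commands. This is a hedged outline, not the commands from the post: the file names are assumptions, and pt-show-grants comes from Percona Toolkit.

```shell
# 1. List every non-system database, then take one logical dump of them all.
DBS=$(mysql -N -e "SHOW DATABASES" | grep -Ev '^(mysql|information_schema|performance_schema)$')
mysqldump --single-transaction --routines --triggers --databases $DBS > full_dump.sql

# 2. Dump users and grants separately (pt-show-grants is part of Percona Toolkit).
pt-show-grants > grants.sql

# 3. After installing Percona Server 5.7 on clean datafiles, restore and finish:
mysql < grants.sql
mysql < full_dump.sql
mysql_upgrade
```

On a large database this dump step is exactly where the IO bottleneck described above bites, which is why the post ends up exploring alternatives.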

Enhancing MySQL Query Performance - July 2015

Background: This post documents the steps I took in July 2015 to fix a MySQL-related performance problem. Problem Identification: The symptom is that pages of one application have very large response times. The first thing we check is the CPU usage of the application server, which is < 25%, meaning there is no issue with the application servers' CPU usage. We also need to check logical limits, in this case Apache's MaxClients, and compare it with the number of concurrent HTTP connections to the server. This is also < 25%. The second part we check is the database server. Check the CPU usage... and gotcha, it was > 70%. With the application servers under almost no load, this means query executions in the database server were not optimal. Next we check the queries. MySQL Processlist: To examine queries running in a MySQL database server, open the mysql command line client, and check the process list using: SHOW FULL PROCESSLIST \G This should be r...
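The inspection step above can also be run non-interactively from the shell. A small sketch; the table and column in the EXPLAIN line (orders, customer_id) are hypothetical examples, not from the post:

```shell
# List every running query in full (long 'Time' values are the suspects).
mysql -u root -p -e "SHOW FULL PROCESSLIST\G"

# Once a slow query is identified, examine its execution plan:
mysql -u root -p -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G"
```

The \G terminator prints each row vertically, which is much easier to read than the default table layout when the Info column holds long SQL text.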

Database connection pooling problems

We had some problems running an Oracle database and an ASP.NET application in different network segments. Similar to Garreth Hooper's problem described in his blog, we encountered an ORA-12571 TNS: packet writer failure during periods of server inactivity. This all came from the best practice of connection pooling, which in my opinion is a complex matter. The Original Problems: On one side, we have application servers hosting applications written in ASP.NET, Java, or PHP, and on the other side we have database servers. The first problem is that opening a connection from the application to the db server takes time; even though this is not a certainty (a lot of factors come into play, such as database server configuration, firewall setup, etc.), this fact seems to encourage application developers to introduce a connection pooling system into the architecture. The second problem comes from the fact that an open connection is an open session in the database server, and it almost always takes precious resources, so...
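To make the pooling knobs concrete, here is a hedged ODP.NET-style connection string sketch. The attribute names (Pooling, Min/Max Pool Size, Connection Lifetime, Validate Connection) are standard ODP.NET ones; the values and the data source name are assumptions, not taken from the post:

```
Data Source=ORCL;User Id=appuser;Password=...;
  Pooling=true;Min Pool Size=5;Max Pool Size=50;
  Connection Lifetime=300;Validate Connection=true
```

Min/Max Pool Size bound how many sessions the pool holds, Connection Lifetime retires old sessions, and Validate Connection checks a pooled session before handing it to the application, which can surface stale connections (the kind that otherwise show up as errors like ORA-12571) before the application uses them, at the cost of an extra round trip per checkout.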

MySQL Corrupt Tables and How to Avoid It

Once in a while MySQL tables become corrupted. This post is not interested in the repair process (see other posts; the most general advice is to do a REPAIR TABLE table;). In my humble opinion, a real-life production database must not have any corruption; it must have sufficient failsafe mechanisms to avoid such corruption. Causes of Corruption: MyISAM tables can become corrupted by (refer to http://dev.mysql.com/doc/refman/5.1/en/corrupted-myisam-tables.html):
- mysqld being killed in the middle of a write
- unexpected computer shutdown
- hardware failures
- running an external program (example: myisamchk) while mysqld is running
- a software bug in the MySQL/MyISAM code
Tips to mitigate data corruption:
- Do not use MyISAM for long-lasting data. Use InnoDB; InnoDB is less corruption-prone than MyISAM.
- Use the file-per-table option for InnoDB.
- Check your disk space and database availability periodically. On one occasion, my mysql data partition was full...
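The InnoDB tips above translate into a small my.cnf fragment. A minimal sketch using standard server options; the section layout is the usual one, not quoted from the post:

```ini
[mysqld]
# Prefer InnoDB over MyISAM for new tables
default-storage-engine = InnoDB
# Give each InnoDB table its own .ibd tablespace file
innodb_file_per_table  = 1
```

With file-per-table enabled, a problem in one table's tablespace is isolated to that file instead of affecting the shared system tablespace.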

Recovering from Deleted Row Disaster in MySQL

This post is dedicated to accidents that every so often result in deleted rows of data. Of course everyone should have backups. But things do happen: the backups are nowhere to be found, or are not taken often enough, or we are storing data in a forgotten database server. In the event of such an incident, deactivate your apps. Prevent unnecessary writes to the db that could overwrite your data. In the case of an Oracle database, you could try a flashback query, which assumes the transaction is still fresh in the redo log. In the case of a MySQL database, copy the whole data directory to a safe location, where we would try to dump the deleted rows using the Percona InnoDB recovery tool. But such methods fail miserably when we find out about the accident much too late. I think every person should be responsible for their actions, even if the action is deleting data. But the consequences of mass-delete actions are very different from single-row deletes. That is why a mass delete function must be ...
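The "stop writes, then copy the data directory" step can be sketched as two shell commands. The paths and the service name are assumptions, not from the post:

```shell
# Stop the server (or otherwise stop all writes) to preserve the on-disk state.
systemctl stop mysql

# Copy the whole data directory, preserving permissions and timestamps,
# into a dated safe location before attempting any recovery on it.
cp -a /var/lib/mysql /backup/mysql-$(date +%F)
```

Recovery tooling should then be pointed at the copy, never at the live data directory, so a failed attempt cannot make things worse.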

How to Recreate MySQL System Tables

You might wonder why I would need to recreate MySQL's system tables. Well, the case is that my WoS folder has gotten pretty big, and I lost my original WoS.zip file. And the website doesn't carry WoS anymore (now it's called MoWes, and for some reason I still trust the old WoS more than MoWes). I need to copy this to another computer, without the 270-MB-or-so MySQL datafiles. So the solution is: just copy them, excluding the data folder inside the mysql folder. But the newly copied MySQL won't start. It created new datafiles all right, but it still won't start. Examining the log files, it said something about missing grant tables. So I added this option when starting MySQL: bin\mysqld-nt --skip-grant-tables This forced it to start even without grant tables. Now, the grant tables still must be created, and after browsing for a while I found out that we can initialize the grant tables using these scripts: mysql_system_tables.sql mysql_system_tables_data.sql fill_help_ta...
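Putting the steps above together as a command sketch (paths are assumptions; the bootstrap scripts normally ship under MySQL's share/ directory, and the truncated third script name is left out):

```shell
# Start the server without grant tables so it comes up despite the missing mysql schema.
bin/mysqld-nt --skip-grant-tables

# In another terminal, load the bundled bootstrap scripts into the mysql database.
bin/mysql mysql < share/mysql_system_tables.sql
bin/mysql mysql < share/mysql_system_tables_data.sql

# Finally restart the server normally so the recreated grant tables take effect.
```

Note that --skip-grant-tables disables all authentication, so the server should not be reachable from the network while it runs this way.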