MySQL dying and not restarting on Ubuntu 14.04.3 LTS
I loaded an SQL dump via a web interface. It timed out. Afterwards the database server answered every request, on any database, with 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away'. The MySQL process was still there though.
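To confirm that the server process was still alive despite the error, something like the following is enough (these commands are illustrative, not copied from my original session):

    # check whether a mysqld process is still running
    pgrep -a mysqld

    # ask the server directly; "mysqld is alive" means it still answers on the socket
    mysqladmin -u root -p ping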
Problem 1: AppArmor
I could neither stop the process via service mysql stop nor start it via service mysql start. Neither command returned, and there was no error message, neither on the command line nor in the logs. This is a symptom of AppArmor interfering. apparmor_status showed that mysqld was in enforce mode, so it looked like it wasn't even allowed to start, which explains why there was no error in the logs. I switched the profile to complain mode with aa-complain /etc/apparmor.d/*mysql*, which didn't change anything, so as a test I deactivated AppArmor entirely with service apparmor teardown. After that, MySQL would at least log errors again while the start still failed (see the command sequence below).
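For reference, this is roughly the sequence; the profile path is the one Ubuntu 14.04 normally ships for MySQL, so adjust it if yours differs:

    # list loaded profiles and their modes (look for /usr/sbin/mysqld)
    sudo apparmor_status

    # switch the MySQL profile to complain mode (log violations instead of blocking)
    sudo aa-complain /etc/apparmor.d/usr.sbin.mysqld

    # if that is not enough, unload all AppArmor profiles as a test
    sudo service apparmor teardown

    # then try to start MySQL again and watch the error log
    sudo service mysql start
    tail -f /var/log/mysql/error.log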
Problem 2: Changed default settings after an update of MySQL to 5.5.x
The first error in the log (/var/log/mysql/error.log) was [ERROR] Failed to access directory for --secure-file-priv. Adding secure-file-priv = "" to the [mysqld] section of my.cnf solved this, and MySQL started as usual.

Since I assumed the SQL dump I had tried to load before hadn't been read completely, I re-imported it from the command line via mysql < dump.sql. It loaded without error, but the MySQL server immediately went down again as soon as I connected to the database. The log in /var/log/mysql/error.log now said [ERROR] Cannot find or open table ... from the internal data dictionary of InnoDB though the .frm file for the table exists. That error turned out to be unrelated to the actual problem: with a version update of MySQL somewhere around 5.5.55 the default max_allowed_packet size decreased, so I set max_allowed_packet = 32M in the [mysqld] section of my.cnf (see the combined snippet below). Voilà, after a restart of MySQL everything worked again.
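Putting both changes together, the relevant part of my.cnf (typically /etc/mysql/my.cnf on Ubuntu 14.04) looks roughly like this; the 32M value is simply what worked for my dump, not a universal recommendation:

    [mysqld]
    # an empty value disables the secure-file-priv restriction so the server can start
    secure-file-priv = ""
    # raise the packet limit so large statements from the dump fit again
    max_allowed_packet = 32M

Afterwards, restart the server and re-import the dump:

    sudo service mysql restart
    mysql < dump.sql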
After a machine reboot, everything still works, so AppArmor seems to behave.
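To double-check that the profiles really were reloaded after the reboot (rather than AppArmor still being torn down), something like this suffices:

    # the mysqld profile should appear under the profiles in enforce mode
    sudo apparmor_status | grep mysqld

    # and MySQL should still be running
    sudo service mysql status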