This example is a lesson in double-checking what you think you’re seeing. I just invested a half hour in trying to solve a problem which didn’t even exist. Let me step through it in case anybody else makes the same mistake I did.
I was doing an export of a MySQL table so that I could re-import it under a new schema name. Since MySQL has no built-in way to rename a schema, dump-and-reimport is the standard workaround. Just to double-check the data, I opened up my SQL export file in TextPad to see how it looked. The key table I checked was the posts table, which should have 5,801 posts in it.
Nope. I got only 5 rows. Five, out of 5,801. A minor difference, you might think.
My users table is supposed to have 10,533 users in it. I only got 2.
I next re-ran the export, both as a single SQL file and as a file per table. In both cases I got the same 5 rows and 2 rows.
My research said that the problem was probably with the data in the table. Apparently if you change character sets along the way, this sort of problem can happen. I’ve had these tables for nearly twenty years now, through various server moves, so it’s certainly possible that something like that happened along the way.
The question was what to do now.
Well, being a lover of going back to basics, I popped open a command line and tried the export for just one table. So I put in:
mysqldump --force --single-transaction -u USERNAME -p SCHEMANAME TABLENAME > output.sql
That gave me the same results. It had the table script with just the five rows.
So then I removed the output part. I let the output scroll past my eyes on the screen to try to figure out if something was going on.
Voila. I could see all the rows scrolling past my eyes. THE DATA WAS BEING EXPORTED.
Now I was completely baffled. If the output was showing up in front of my eyes, why would simply redirecting it with > output.sql lose that data? Why wasn't the data showing up in the destination file?
I checked the last entry in my screen output listing, to see what the data looked like. Then I opened the matching file in TextPad to see if that row was in the file. I hit the ‘word wrap’ button so that I could see the end of my last SQL line.
FACE TO PALM.
Yes, the entire block of data was there. The “five rows” I saw were five massively long lines, each an extended INSERT statement packing hundreds of rows of data onto a single line. It looks bizarre in a text editor, but that's mysqldump's default output format.
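If you want a dump where the line count roughly tracks the row count, mysqldump can be told to skip that default batching. A minimal sketch, using the same placeholder USERNAME/SCHEMANAME/TABLENAME names as above:

```shell
# --skip-extended-insert emits one INSERT statement per row
# instead of batching many rows into each INSERT. The file is
# much larger and imports more slowly, but it's easy to eyeball.
mysqldump --force --single-transaction --skip-extended-insert \
  -u USERNAME -p SCHEMANAME TABLENAME > output.sql
```

The batching exists for speed: one big INSERT with many value tuples replays far faster on import than thousands of tiny ones.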
To test this, I created my new-name schema and imported this SQL file into it. That was the whole purpose of this project in any case.
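The import side is one more command-line step. A minimal sketch, assuming the new schema is called NEWSCHEMA (a placeholder, like the names above):

```shell
# Create the empty target schema, then replay the dump into it.
mysql -u USERNAME -p -e "CREATE DATABASE NEWSCHEMA"
mysql -u USERNAME -p NEWSCHEMA < output.sql
```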
I then inspected both the old schema and new schema side by side, so I could easily compare row counts table by table.
Now, it did report some strange discrepancies. Both posts tables showed 5,801 rows, for example, so those matched. But the inspector claimed the original users table had 10,422 users while the new one had 10,530. How did I add users just by exporting and importing?
So instead of trusting the schema inspector, I ran actual COUNT(*) queries against both schemas.
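For reference, this is the kind of exact count I mean, run through the mysql client; the schema and table names here are placeholders:

```shell
# COUNT(*) scans the table and returns the true row count,
# unlike the cached statistics a GUI inspector may display.
mysql -u USERNAME -p -e "SELECT COUNT(*) FROM OLDSCHEMA.users"
mysql -u USERNAME -p -e "SELECT COUNT(*) FROM NEWSCHEMA.users"
```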
Old table: 10533 rows
New table: 10533 rows
So they matched, even though the text file looked like it was missing rows, and even though the schema inspector reported that they didn't. Heck, the schema inspector's row counts were off in BOTH cases. (For InnoDB tables, those inspector counts typically come from cached statistics, which are estimates rather than exact counts.)
End lesson. Be diligent about checking the data and not relying on quick-and-easy overviews. You’ll get to the bottom of the problem :).