Replicating a (remote) MySQL database to improve PHP execution time
Posted by shought, 10-26-2010, 06:01 PM |
First of all, I'm not really sure this should go here, but I've had a look at all the forums and this one seemed to suit my question the most (please read all of it before moving it, if necessary; it's 'multi-disciplinary', I guess).
Current setup:
Server 1: frontpage (located in Europe)
Server 2: forums (located in the US)
Server 1 has lighttpd + FastCGI with Alternative PHP Cache.
Server 2 setup is irrelevant.
Now, when server 1 sends queries to server 2, it takes some time before the information is received, because the servers are approximately 150 ms apart. You might suspect that APC 'fixes' this because it caches the compiled PHP, but it still has to check whether 'everything' is the same on every request as well, so the latency shows up in the execution time.
Now my aim is to decrease the execution time and NOT increase load on server 2 (database). Currently all queries are sent to server 2 so there is a considerable load.
Now I thought of this:
If I synchronize the database from server 2 to server 1 (master-slave setup), then in essence both servers should have the same database at approximately the same moment, because all instructions are synchronized. This would require server 2 to 'forward' all instructions to server 1, which might (I'm not too sure) be less resource-intensive than having to look up queries all the time.
This setup would eliminate the 'ping issue' with the execution time because the site would now 'pull' the information from the local database, rather than the remote database (inevitably increasing load on server 1, but this does not matter).
Not to complicate things further, but as I mentioned, server 2 has a forum database, so there might be a lot of instructions coming at it. Server 1 only requires some of the database tables, so it might even be possible to synchronize only the necessary tables, effectively reducing the load on server 2.
I'd like to know, before I get into finding out how all of this stuff works, would it actually work as I expect it to work, or is there some kind of mistake I'm making?
Thanks in advance for any help.
; shought
ps if any additional information is required, let me know
|
Posted by Drinian, 10-27-2010, 09:37 AM |
You could try the approach of database replication from server 2 -> server 1. (This would cause some load initially, but after the initial sync only updates get replicated across.) Then server 1 accesses the database locally.
If you wanted to split it out even more, do writes to the master and reads from the slave. This requires immediate replication to be in place, but it keeps a single server from having to handle all the queries.
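In PHP terms, that split just means keeping two connections and routing queries by type; a minimal sketch, assuming mysqli and placeholder hostnames, credentials and queries:

    <?php
    // Writes go to the master (server 2), reads go to the local slave (server 1).
    // Hostnames, credentials, database and queries below are placeholders.
    $master = new mysqli('master.example.com', 'forum_user', 'secret', 'forum');
    $slave  = new mysqli('127.0.0.1',          'forum_user', 'secret', 'forum');

    // Read from the local replica: no trans-Atlantic round trip.
    $result = $slave->query("SELECT title FROM threads ORDER BY id DESC LIMIT 10");

    // Write to the master so the change gets replicated back out.
    $master->query("UPDATE threads SET views = views + 1 WHERE id = 42");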
|
Posted by shought, 10-27-2010, 09:42 AM |
I'm currently attempting to measure the performance increase by uploading a backup of the database from server 2 to server 1 and then using that (I know the testing scenario is not perfect, but it will give an impression).
Will post back with results.
ps thanks for your suggestion, but I'm not sure what you mean by 'do writes to the master and reads from the slave'...
|
Posted by shought, 10-27-2010, 07:41 PM |
Ok, the results are amazing. The execution times went from 1.5 seconds to 0.015 seconds.
The database on server 2 has a lot of tables, but we only need like 5 on server 1.
Could someone provide some instructions on how to configure MySQL (on both servers) to replicate the database from server 2 to server 1? (Preferably also explaining how to replicate only particular tables...)
Thanks in advance for any help!
|
Posted by tmesolutions, 10-28-2010, 07:38 AM |
Word of warning,
There are various setups for replication. Master <-> Master allows writing to both, but will cause you major headaches with conflicts from time to time. Master --> Slave is less likely to fail, but you can only write to the master, so you need to split reads and writes.
Google MySQL replication. It's very tricky, though.
Good luck.
A technology to watch is MySQL Galera, but it's not stable enough yet.
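To give a rough idea, a bare-bones Master --> Slave setup looks something like this (server IDs, hostnames and credentials are only placeholders; the MySQL manual has the full procedure):

    # On the master (server 2), in my.cnf under [mysqld]:
    server-id    = 1
    log-bin      = mysql-bin
    binlog-do-db = forum        # only log changes to this database

    # On the slave (server 1), in my.cnf under [mysqld]:
    server-id    = 2

    -- Then on the slave, point it at the master and start replicating:
    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001',  -- taken from SHOW MASTER STATUS
        MASTER_LOG_POS=4;
    START SLAVE;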
|
Posted by shought, 10-28-2010, 07:42 AM |
We only need master --> slave.
How would I go about setting up Master --> Slave MySQL replication for only 3 tables? (So Master database1 has (for example) 20 tables and Slave database2 has 3 tables, which are replications of 3 tables from the master.)
|
Posted by Maxnet, 10-28-2010, 12:18 PM |
You cannot set up replication for individual tables, only for individual databases.
Read the MySQL manual.
|
Posted by shought, 10-28-2010, 03:04 PM |
cough*
dev.mysql.com/doc/refman/5.0/en/replication-rules-table-options.html
Thanks for the hint though...
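For anyone else reading along: the options described on that page go in the slave's my.cnf, something like this (the database and table names are just examples):

    # On the slave (server 1) -- replicate only the tables the frontpage needs:
    replicate-do-table = forum.users
    replicate-do-table = forum.threads
    replicate-do-table = forum.posts

Note that this filtering happens on the slave side, so the slave still receives the master's full binlog, as discussed below.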
|
Posted by harget, 10-28-2010, 03:13 PM |
You can always try caching the queries locally in Memcached or APC to reduce the latency issues while the page is loading.
That can reduce your load times from around 3-4 seconds to somewhere between 2 s and 60 ms, depending on what is needed and on user input, and it should not increase the load on your other server.
|
Posted by Maxnet, 10-28-2010, 03:22 PM |
I stand corrected.
I think it still fetches all the data from the master, though, and filters it locally per table, since you can only specify which database's updates go into the binlog (--binlog-do-db), not which tables.
|
Posted by shought, 10-28-2010, 03:23 PM |
Thanks for your suggestion
I actually thought the same thing at first: APC is an opcode cache, so why should the latency even matter? The thing is, APC caches the 'final' PHP code, but it does have to 'go back' and check whether the MySQL data is still the same on every request (at least, that is what I have concluded from experience).
Right now I'm running a local copy of the database from server 2 and the execution times went down from 1 second to 0.008 seconds (on average). Now if I could somehow configure APC to only check for updated data every 5 minutes or so, that'd be great, but I don't think this is possible.
I will look into Memcached, it seems that this might be what I'm looking for...
You're right, the slave server still pulls all the data from the master and then filters it locally.
Last edited by shought; 10-28-2010 at 03:24 PM.
Reason: Added quote and reply.
|
Posted by shought, 10-28-2010, 03:48 PM |
It seems Memcached would require us to rewrite all the PHP to make use of it, or did I get lost somewhere reading the endless configuration notes?
|
Posted by harget, 10-28-2010, 04:07 PM |
Yes, you would have to rewrite your code, but the nice thing is that you can store the query result in Memcached and then use that as if it were a makeshift database.
Also, APC is not just an opcode cache. It has functions very similar to Memcached's, except it is local only; read up on http://us3.php.net/apc
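As a sketch of the APC user-cache approach (the key, TTL, connection details and query are placeholders; apc_fetch()/apc_store() are the pre-APCu function names):

    <?php
    // Serve the frontpage query from local memory, refreshing it every 5 minutes.
    $key  = 'frontpage_latest_threads';
    $rows = apc_fetch($key, $hit);

    if (!$hit) {
        // Cache miss: ask the remote database once, then keep the result for 300 s.
        $db   = new mysqli('master.example.com', 'forum_user', 'secret', 'forum');
        $rows = $db->query("SELECT title FROM threads ORDER BY id DESC LIMIT 10")
                   ->fetch_all(MYSQLI_ASSOC);
        apc_store($key, $rows, 300);
    }

    // $rows now comes from local memory on most requests.

The Memcached extension gives you the same pattern through Memcached::get()/Memcached::set(), just stored in a shared daemon instead of per machine.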
|
Posted by shought, 11-04-2010, 12:34 PM |
Ok, we've attempted to set up MySQL replication but now we're getting an error:
This is in the MySQL log after we've set up the replication; I've confirmed that the user does actually have the permissions to replicate the database.
When I set up the replication I also get an error, at first:
Suggested solutions are increasing the max packet size, but the max packet size was already increased to 128M (which is waaaay too much) and it still doesn't work.
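(For reference, the setting in question lives in my.cnf and needs a MySQL restart to take effect:)

    [mysqld]
    max_allowed_packet = 128M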
Anyone?
|
Posted by quad3datwork, 11-04-2010, 01:56 PM |
How did you create your replication user?
Something like this would do...
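For example, with placeholder host and password:

    -- Run on the master; the slave's host and the password are placeholders.
    CREATE USER 'repl'@'slave.example.com' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com';
    FLUSH PRIVILEGES;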
|
Posted by shought, 11-04-2010, 04:05 PM |
Yeah, we did that.
Afterwards we checked with phpMyAdmin and the permissions are set correctly.
I'm more worried about the second error, though (well, the first actually; the one which occurs when I set up the replication with CHANGE MASTER TO ...). Everything works out (the settings are created and at restart it starts working), but the error is still there.
I think I might be able to fix the first with some fiddling around (I think it has something to do with the domain the user is on).
|