
Tuesday, May 26, 2015

Ubuntu 14.04 - Mysql Galera Cluster for Wordpress

This cluster has 2 nodes running in multi-master mode

Installing cluster
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://mirror3.layerjet.com/mariadb/repo/5.5/ubuntu trusty main'
sudo apt-get update -y ; sudo apt-get install -y galera mariadb-galera-server rsync

Configuring cluster
On each host in the cluster, add this configuration
vi /etc/mysql/conf.d/galera.cnf
[mysqld]
#mysql settings
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
#galera settings
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://192.20.3.31,192.20.3.32"
wsrep_sst_method=rsync

Stop the MariaDB instance on all of the Galera hosts
sudo service mysql stop

Initialize the Galera cluster on the 1st node by starting MariaDB
sudo service mysql start --wsrep-new-cluster

Just start MariaDB on my 2nd node
sudo service mysql start

To prevent a startup error on the 2nd node, copy the contents of /etc/mysql/debian.cnf from node1 to node2, then restart MariaDB on the 2nd node

Confirm that the cluster is up
mysql -u root -p -e 'SELECT VARIABLE_VALUE as "cluster size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"'
Enter password: 
+--------------+
| cluster size |
+--------------+
| 2            |
+--------------+
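If you want to keep an eye on this, the query can be wrapped in a small health-check script. This is only a sketch: the mysql call is left commented out and its output simulated, so just the parsing and alerting logic runs; the credentials and expected size are assumptions from this setup.

```shell
#!/bin/sh
# Sketch: alert when fewer nodes than expected have joined the cluster.
EXPECTED=2    # we have 2 Galera nodes in this setup
# Real query (fill in your own credentials):
# query_output=$(mysql -u root -p"$PASS" -Nse "SHOW STATUS LIKE 'wsrep_cluster_size'")
query_output="wsrep_cluster_size 2"    # simulated result for illustration
size=$(printf '%s\n' "$query_output" | awk '{print $2}')
if [ "$size" -lt "$EXPECTED" ]; then
  echo "WARNING: only $size of $EXPECTED nodes joined"
else
  echo "OK: cluster size is $size"
fi
```

Run it from cron on either node; if a node drops out, the WARNING branch fires.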
From now on, data will sync between both hosts. To test it, create a database on the 1st node, then check on the 2nd node that it is there.
On 1st node
mysql -u root -p
create database HelloWorld;

On 2nd node
mysql -u root -p
show databases;
If the HelloWorld database is there, replication is working. We're ready to go.

Grant remote privileges to a remote user on the DB cluster
On one of the DB hosts (node1 or node2)
mysql -u root -p
create user 'handsome'@'%' identified by 'handsome_password';
grant all privileges on Database_name.* to 'handsome'@'%';
flush privileges;
show grants for 'handsome'@'%';

Note: 
Replace handsome and handsome_password with your own username and password
Replace Database_name with your DB name, or use *.* to grant privileges on all databases

DNS setting
On my on-premises DNS server I add round-robin DNS records pointing to the 2 Galera hosts
dbcluster -> 192.20.3.31
dbcluster -> 192.20.3.32
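Purely as an illustration (no DNS involved), this is the behaviour those two records buy you: a round-robin resolver hands out the addresses alternately, so successive connections spread across both Galera hosts.

```shell
#!/bin/sh
# Simulate round-robin resolution over the two records above.
hosts="192.20.3.31 192.20.3.32"
count=$(echo "$hosts" | wc -w)
for req in 1 2 3 4; do
  idx=$(( (req - 1) % count + 1 ))           # alternate 1,2,1,2,...
  host=$(echo "$hosts" | cut -d' ' -f"$idx")
  echo "connection $req -> $host"
done
```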

On my PHP app server I can connect to the dbcluster with this command
mysql -u root -p -h dbcluster 

If it doesn't work, have a look at the MariaDB log

Reference link:
https://www.linode.com/docs/databases/mariadb/clustering-with-mariadb-and-galera

Thursday, May 21, 2015

Ubuntu 14.04 - GlusterFS for Wordpress

Here's my setup:  Client -> Varnish cache -> 2 x Wordpress -> Memcached -> 2 x MariaDB

I need to sync the Wordpress folder on both machines. After a look at NFS, Ceph, and GlusterFS, I went with GlusterFS. Install it on both hosts:
sudo apt-get -y install glusterfs-server

If you don't have an on-premises DNS server, add the server names to the hosts file on both machines:
192.20.1.1    srvr-phpnode1.mydomain.vn    srvr-phpnode1    
192.20.1.2    srvr-phpnode2.mydomain.vn    srvr-phpnode2

Create a directory for Gluster Volume on both hosts
mkdir /gluster-volume

Probe the other peer, then check peer status
sudo gluster peer probe srvr-phpnode2
peer probe: success

sudo gluster peer status
Number of Peers: 1

Hostname: srvr-phpnode2
Port: 24007
Uuid: 225698cb-a84f-4b5b-8537-27732d752aba
State: Peer in Cluster (Connected)

Glusterfs distributed vs Replica mode (GlusterFS docs):
Distributed means data is distributed across the available bricks in a volume: write 100 files and, on average, 50 land on one server and 50 on the other. This is faster than a "replicated" volume, but isn't as popular since it doesn't give you two of the most sought-after features of Gluster: multiple copies of the data, and automatic failover if something goes wrong
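A quick way to picture the difference, with 10 files and 2 bricks (no Gluster involved, just counting):

```shell
#!/bin/sh
# Distributed: each file lands on exactly one brick (split roughly in half).
# Replicated:  every brick stores every file.
brick1=0; brick2=0; replica=0
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ $(( i % 2 )) -eq 1 ]; then
    brick1=$(( brick1 + 1 ))
  else
    brick2=$(( brick2 + 1 ))
  fi
  replica=$(( replica + 1 ))    # replica mode: both bricks get the file
done
echo "distributed: brick1=$brick1 files, brick2=$brick2 files"
echo "replicated:  each brick holds $replica files"
```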

Create wwwVol volume on host srvr-phpnode1
sudo gluster volume create wwwVol replica 2 transport tcp srvr-phpnode1:/gluster-volume srvr-phpnode2:/gluster-volume force

(GlusterFS docs): we tell it to make the volume a replica volume, and to keep a copy of the data on at least 2 bricks at any given time. Since we only have two bricks total, this means each server will house a copy of the data. Lastly, we specify which nodes to use, and which bricks on those nodes. The order here is important when you have more bricks

sudo gluster volume start wwwVol
sudo gluster volume info

Volume Name: wwwVol
Type: Replicate
Volume ID: 8c1430d4-8b25-4967-9a46-39287cdedbb2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: srvr-phpnode1:/gluster-volume
Brick2: srvr-phpnode2:/gluster-volume

We need /var/www to be on GlusterFS, so we will mount wwwVol at /var/www on both hosts
On both hosts
mkdir /var/www

On phpnode1
vi /etc/fstab
srvr-phpnode1:/wwwVol /var/www glusterfs defaults,_netdev 0 0

mount -a 

On phpnode2
vi /etc/fstab
srvr-phpnode2:/wwwVol /var/www glusterfs defaults,_netdev 0 0
mount -a 
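Before pointing Wordpress at it, it's worth confirming /var/www really is a GlusterFS mount and not the local disk. A sketch: in real use you would read the filesystem type from /proc/mounts; here the value is simulated so the check itself can be shown.

```shell
#!/bin/sh
# Real lookup:  fstype=$(awk -v m=/var/www '$2 == m {print $3}' /proc/mounts)
fstype="fuse.glusterfs"    # simulated value for illustration
mountpoint="/var/www"
case "$fstype" in
  fuse.glusterfs) result="OK: $mountpoint is on GlusterFS" ;;
  *)              result="WARNING: $mountpoint is $fstype, not GlusterFS" ;;
esac
echo "$result"
```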

Other GlusterFS commands
gluster volume stop VOLNAME
gluster volume delete VOLNAME

gluster volume remove-brick VOLNAME wnode1:/www
http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Deleting_Volumes
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

Ubuntu 14.04 - Memcached server for Wordpress

First things first: memcache vs memcached

According to http://blackbe.lt/php-memcache-vs-memcached/
PHP has two separate module implementations wrapping the memcached (as in memcache daemon) server.

The memcache module utilizes this daemon directly, whereas the memcached module wraps the libMemcached client library and contains some added bonuses.
The memcached module will be faster than the memcache module => I'll go with the memcached module

Install the Memcached daemon on my dedicated Memcached host

sudo apt-get install memcached libmemcached-tools

My Memcached server should listen on all of its addresses, so edit /etc/memcached.conf
-m 20000      #amount of RAM in MB
-p 11211      #listen port 11211
-l 0.0.0.0    #listen on all interfaces

Install memcached client
On the other Wordpress PHP servers we need to install the memcached client, and optionally the server:
sudo apt-get install php5-memcached memcached libmemcached-tools php-pear
sudo mkdir /etc/php5/conf.d
sudo ln -s /etc/php5/mods-available/memcached.ini /etc/php5/conf.d/memcached.ini

I want to store PHP sessions in Memcached, because storing them in memory is faster than on disk.
The default is session.save_handler = files
Add to /etc/php5/conf.d/memcached.ini. Note that with the memcached extension the save path is plain host:port; the tcp:// prefix belongs to the older memcache extension:
[Session]
session.save_handler = memcached
session.save_path = "localhost:11211"
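A few quick checks to confirm the module and session handler took effect (192.1.3.30 stands in for the dedicated Memcached host; adjust to yours):

```shell
# On the Wordpress host: is the extension loaded, and did the ini settings stick?
php5 -m | grep memcached
php5 -i | grep session.save_
# From any host: does the Memcached daemon answer?
echo stats | nc -w 2 192.1.3.30 11211 | head -n 3
```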

Wordpress and Memcached
Finally, edit the Wordpress wp-config file to add the list of Memcached servers for the Wordpress Total Cache plugin
global $memcached_servers;
$memcached_servers = array('default' => array('192.1.3.30:11211','192.1.1.11:11211','192.1.1.12:11211'));

Now Wordpress can use Memcached.

Friday, May 16, 2014

Increase file upload size limit in PHP-Nginx

If Nginx aborts your connection when uploading large files, you will see something like below in Nginx’s error logs:
[error] 25556#0: *52 client intended to send too large body:
This means you need to increase the PHP file-upload size limit. The steps below will help you fix this.

Changes in php.ini

To change max file upload size to 100MB
Edit…
vim /etc/php5/fpm/php.ini
Set…
upload_max_filesize = 100M
post_max_size = 100M

Notes:

  1. Technically, post_max_size should always be larger than upload_max_filesize, but for large values like 100M you can safely make them equal.
  2. There is another variable, max_input_time, which can limit upload size, but I have never seen it cause an issue. If your application supports uploads of files sized in GBs, you may need to adjust it accordingly. I have been using PHP-FPM behind Nginx for a long time; in this kind of setup the client uploads the file to Nginx, which then passes it to PHP. Since the Nginx-to-PHP hand-off is a local operation, max_input_time may never be an issue. Nginx may not even copy the file, but merely hand over its location or a file descriptor to PHP.
You may like to read other posts that explain PHP file-upload related config in more detail.
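Note 1 can be checked mechanically. A sketch that expands PHP's K/M/G shorthand into bytes and compares the two limits (the values mirror the php.ini above):

```shell
#!/bin/sh
# Expand PHP shorthand sizes (e.g. 100M) into bytes.
to_bytes() {
  case "$1" in
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}
upload=$(to_bytes 100M)    # upload_max_filesize
post=$(to_bytes 100M)      # post_max_size
if [ "$post" -ge "$upload" ]; then
  echo "OK: post_max_size ($post bytes) covers upload_max_filesize ($upload bytes)"
else
  echo "WARNING: post_max_size is smaller than upload_max_filesize"
fi
```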

Change in Nginx config

Add the following line to the http{..} block in the Nginx config:
http {
 #...
        client_max_body_size 100m;
 #...
}
Note: For very large files, you may need to change the value of the client_body_timeout parameter. The default is 60s.

Reload PHP-FPM & Nginx

service php5-fpm reload
service nginx reload

Changes in WordPress-Multisite

If you are running WordPress Multisite setup, then you may need to make one more change at the WordPress end.
Go to: Network Admin Dashboard >> Settings and look for Upload Settings.
Also change the value of Max upload file size.
https://rtcamp.com/tutorials/php/increase-file-upload-size-limit/