
Transcript

Scaling Magento – Reaching Peak Performance

Mathew Beane
Lonestar PHP 2015 – April 16

@aepod
http://aepod.com/

Mathew BeaneDirector, Systems Engineering @ Robofirm

Magento Solutions that deliver results.

http://www.robofirm.com/

Magento: What is it?
• Open-source e-commerce platform
• PHP-based application
• Core utilizes Zend Framework 1
• Very flexible, it's built to modify
• Extremely scalable, supports huge stores
• Market leader and still growing
• Magento 2 is right around the corner

• It is in development beta now
• http://magento.com/developers/magento2

Magento is the market leader in E-Commerce

Be prepared to scale your clients' Magento websites, because growth is the norm.

E-Commerce growth is Worldwide

Vagrant / Puppet Configurations

https://github.com/aepod/Magento-Vagrant-Puppet-Sandbox/

• Uses Puppetmaster to build out the cluster
• Check out and run `vagrant up`; it will bring up a full cluster
• Initial release: standard (n) web nodes + single DB config
• Monolith server has Web, DB and Redis all-in-one
• We will come back to this later

We have memory sticks to speed up the install process; it would be best to start now so you can follow along when we get to the demonstration parts.

Today's Plan

• Introduction
• Magento Application: Preparing and Optimizing
• Magento Cluster Architecture: Examine Typical Layouts
• Magento Vagrant/Puppet Sandbox: Demonstrating Clustering
• Magento Performance Toolkit: Measuring Performance Benchmarks
• Advanced Scaling Topics: Redis, Reverse Proxy, Database and Others
• Conclusion: Open Q & A

Getting Ready to Ascend

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Optimizing Overview

• Optimize sysctl.conf: Increase limits for the servers based on Memory and CPU.

• Modify Nginx settings:
  – Set up PHP-FPM with TCP properly
  – Optimize worker connections
  – Tune up caching and other settings

• Optimize PHP-FPM
  – Process Manager optimization
  – Zend opcode cache
  – Tune up other settings

• Optimize DB / Redis– Typically done at the same time

Optimizing Hints – sysctl.conf

Example /etc/sysctl.conf

### Probably requires adjustment
# Settings for: 32GB RAM, 32 CPU cores
##

fs.file-max = 2097152
kernel.pid_max = 2097152
kernel.shmmax = 4294967296

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.core.netdev_max_backlog = 65536
net.core.somaxconn = 65536
net.core.optmem_max = 25165824

net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216
net.ipv4.tcp_max_syn_backlog = 65536

vm.max_map_count = 262144

Remember to back up your configurations before making changes.

net.core.somaxconn is the least trivial.

Typically it is set to 128, which causes errors such as “apr_socket_recv: Connection reset by peer” and “connect() … failed“.

net.core.netdev_max_backlog is also important when considering higher load

fs.file-max will almost certainly need to be increased

After changing values, run `sysctl -p` to reload your kernel settings.

ALWAYS TEST after changing values and quantify your results against previous tests.
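As a quick verification that the reload took, you can read the keys back; a minimal sketch (the key list is just a sample, and `sysctl -n` reads values without root):

```shell
#!/bin/sh
# Print a few of the tuned keys so a before/after comparison is easy.
# The fallback keeps the output stable on systems missing a key.
for key in net.core.somaxconn net.core.netdev_max_backlog fs.file-max; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo n/a)"
done
```

Run it once before editing /etc/sysctl.conf and once after `sysctl -p`, and diff the output.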

See also:https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/8.2/html/Performance_Tuning_Guide/system-tuning.html

Optimizing Hints – Nginx

• Set fastcgi up correctly:

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;
    fastcgi_param MAGE_IS_DEVELOPER_MODE 0;
    fastcgi_param PHP_VALUE error_log=/var/log/php-errors.log;
    include fastcgi_params;
    fastcgi_cache off;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_read_timeout 120;
}

Adding buffers will help a lot with the odd socket errors.

Optimizing Hints – Nginx

• Worker considerations:
  – Worker Processes = total # CPU cores
  – Worker Connections = ½ of `ulimit -n`
  – Max Connections = worker processes * worker connections
  – worker_rlimit_nofile safe value = 2 * Max Connections
  – keepalive_timeout mitigates overage by trading for latency
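Worked through for a hypothetical box (16 cores and a `ulimit -n` of 65536; both figures are made up for illustration), the arithmetic above gives:

```shell
#!/bin/sh
# Hypothetical box: 16 cores, ulimit -n of 65536.
cores=16
nofile=65536
worker_connections=$((nofile / 2))               # half of ulimit -n
max_connections=$((cores * worker_connections))  # total concurrent connections
worker_rlimit_nofile=$((2 * max_connections))    # safe file-descriptor ceiling
echo "worker_processes      $cores;"
echo "worker_connections    $worker_connections;"
echo "worker_rlimit_nofile  $worker_rlimit_nofile;"
```

The echoed lines are in nginx.conf directive form, so they can be pasted into the config once you substitute your own core count and ulimit.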

ulimit can be increased as discussed in the sysctl.conf optimizations.

Some systems will allow for the following command:
ulimit -n 10000

This will change the ulimit without having to change the sysctl.conf. This is handy for testing different limit settings.

Optimizing Hints – Nginx

• Tune client caching using expires:

# Media: images, icons, video, audio, HTC
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
}

• When tuning the Nginx cache, use curl to fetch headers:

curl -X GET -I www.idealphp.com:80/skin/frontend/rwd/default/images/logo.gif
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Tue, 14 Apr 2015 17:56:32 GMT
Content-Type: image/gif
Content-Length: 2320
Last-Modified: Tue, 14 Apr 2015 00:30:08 GMT
X-Robots-Tag: noindex, nofollow, nosnippet, noarchive
Accept-Ranges: bytes
Cache-Control: max-age=31536000
Cache-control: public

Optimizing Hints – PHP FPM
• PHP 5.4+: must use PHP-FPM instead of mod_php
• Use TCP instead of sockets

– More work to set up and tune, but scales much further

• Test with Zend opcode cache instead of APC
  – Zend OPcode cache can be quite a bit faster
  – Turn off apc.stat if using APC
  – Zend opcode: opcache.revalidate_freq=0

• Tune Process Manager
  – pm: ondemand
  – pm.max_children: about 6-8 per CPU *more later
  – pm.max_requests: 10000 *PHP threads should die for garbage collection

• Listen backlog is limited by net.ipv4.tcp_max_syn_backlog
  – listen.backlog: set to 64000 (typically set to -1 or 128) *only do after sysctl.conf optimization
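Pulled together, the hints above might look like this in the FPM pool file (a sketch; the pool path and the exact values are illustrative and vary by distribution and box):

```ini
; Sketch of /etc/php-fpm.d/www.conf with the hints above applied.
pm = ondemand
pm.max_children = 48          ; roughly 6-8 per CPU core to start
pm.max_requests = 10000       ; recycle children for garbage collection
listen = 127.0.0.1:9000       ; TCP instead of a unix socket
listen.backlog = 64000        ; only after the sysctl.conf optimization
```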

Optimizing Hints – PHP FPM

pm.max_children will limit the amount of memory PHP uses overall.
– Calculate the memory a thread is using while under load:

ps --no-headers -o "rss,cmd" -C php-fpm | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'

– Use the total amount of memory you want PHP to consume:

pm.max_children = Total RAM dedicated to the web server / Max child process size

Based on this, you can set max_children from memory instead of the CPU metric used earlier. This allows for more memory consumption in a production environment. Be sure to test with higher values; more is not always better.
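With hypothetical figures (28 GB dedicated to PHP-FPM and roughly 60 MB per child under load, as the ps/awk one-liner might report), the formula works out like this:

```shell
#!/bin/sh
# Hypothetical figures; substitute your own measurements.
ram_mb=28672        # RAM dedicated to PHP-FPM, in MB (28 GB)
child_mb=60         # average php-fpm child size under load, in MB
max_children=$((ram_mb / child_mb))
echo "pm.max_children = $max_children"   # prints: pm.max_children = 477
```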

Source: http://myshell.co.uk/index.php/adjusting-child-processes-for-php-fpm-nginx/

Optimizing Hints – Redis

• Redis is the preferred cache for Magento
  – sysctl ulimit values affect the Redis maxclients
  – Use three separate Redis instances with a single DB in each, on different ports
  – tcp-backlog can be increased; adjust sysctl somaxconn and tcp_max_syn_backlog
  – maxmemory can typically be set very low

• Using Sentinel and twemproxy you can scale horizontally
  – We will come back to this in the slides
  – Cluster twemproxy to eliminate all single-threaded bottlenecks and single points of failure

• At Rackspace or AWS: Use Object Rocket https://objectrocket.com/ for Redis SaaS
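In Magento 1 the three instances typically back the cache, full-page cache and sessions via app/etc/local.xml. A sketch of the cache entry, assuming the commonly used Cm_Cache_Backend_Redis backend (host, ports and layout are illustrative; verify against your install):

```xml
<!-- app/etc/local.xml: one dedicated Redis instance for the cache.     -->
<!-- Run the FPC on a second instance (e.g. port 6380) and sessions on  -->
<!-- a third, so each stays single-purpose as described above.          -->
<cache>
  <backend>Cm_Cache_Backend_Redis</backend>
  <backend_options>
    <server>127.0.0.1</server>
    <port>6379</port>
    <database>0</database>
  </backend_options>
</cache>
```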

Optimizing Hints – MySQL
• Turn off MySQL query caching: it is single-threaded and not of use with Magento
• Typical selected config settings for a 32GB dual hex-core server:

myisam_sort_buffer_size = 64M
key_buffer_size = 256M
join_buffer_size = 4M
read_buffer_size = 4M
read_rnd_buffer_size = 4M
sort_buffer_size = 8M
table_open_cache = 8192
thread_cache_size = 512

tmp_table_size = 384M
max_heap_table_size = 384M
max_allowed_packet = 1024M

query_cache_limit = 4M
query_cache_size = 0
query_cache_type = 0
query_prealloc_size = 16384
query_alloc_block_size = 16384

innodb_thread_concurrency = 24
innodb_buffer_pool_size = 16G
innodb_log_file_size = 384M
innodb_log_buffer_size = 24M
innodb_additional_mem_pool_size = 16M
innodb_io_capacity = 800
innodb_concurrency_tickets = 900
innodb_flush_neighbor_pages = cont
innodb_lock_wait_timeout = 75
innodb_flush_method = O_DIRECT

Optimizing Hints - Other

• Varnish is great, however your application must be suitable
• CDNs can have a great impact on end-user performance
• Always test any changes you make

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Clean Magento Core

Keep a clean core. If you inherit a project, one of the first things you should do is clean up the core. A clean core means you have not changed any default files, including templates, layouts and code.

What is a clean core:
- No changes to app/code/core
- No changes to ANY files provided by Magento
- Nothing in app/code/local/Mage (and so on)

Ben Marks, Magento Community Evangelist @benmarks(Edit core and find out what he will unleash on you)

Magento Project Mess Detector

Find and eliminate your core changes with:
https://github.com/AOEpeople/mpmd
Magento Project Mess Detector by Fabrizio Branca

Refactoring Bad Magento Projects – Further Reading

Fabrizio Branca's work on cleaning up Magento projects:
http://fbrnc.net/blog/2014/12/magento-update
In this article he goes into depth on the process of separating the core files and using Modman and Composer to package everything.

Tegan Snyder's work on building modman configurations for extensions out of existing source:
https://github.com/tegansnyder/meff
This will allow you to generate modman files for your extensions, which may be handy in the process of cleaning up.

Optimize Your Application with Z-Ray
• Use Zend Server 8 Z-Ray to profile and tune your Magento application
• Z-Ray allows you to see every request, and many details about each request
• It's easy to install, free to try and really amazing
• It can be used to debug mobile browsers through the Zend Server backend
• It can be used to debug server cron scripts and the like as well

• Demo: http://serverdemo.zend.com/magento/
• Blog article: http://aepod.com/debug-magento-using-zend-zray/

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Development Cycle

Applies equally to the hardware architecture as to the software application.

Every project should have a continuous development cycle in place, and ensure that it is well documented and easy to follow for all of the people involved.

Development Cycle – Application Packaging

Application Packaging

Composer / Modman

In order to deploy to a cluster of servers you will need to package your application.

Maintain separate packages for core Magento. Designs, extensions and other modifications should be packaged in a way that allows you to apply them to a new core Magento.

Development Cycle – Branching

Choose a branching methodology and build around it.

Release / Feature Branch

Make pull requests part of your workflow.

A Successful Git Branching Model

http://nvie.com/posts/a-successful-git-branching-model/

Development Cycle - Testing

Build testing into all of your development cycles; it can be very beneficial, and when making releases you can test against them.

Testing

Developers are working on BDD: https://speakerdeck.com/alistairstead/bdd-with-magento
Behat: https://github.com/MageTest/BehatMage
PHPSpec: https://github.com/MageTest/MageSpec
And others

Magento 2 will have built-in PHPUnit tests
https://wiki.magento.com/display/MAGE2DOC/Magento+Automated+Testing+Guide
https://alankent.wordpress.com/2014/06/28/magento-2-test-automation/

Do Not Edit on Production

Use a deployment tool, and LOCK DOWN production. No one should ever have to touch an application file manually for any reason.

If you can extend this to your OS and server stack you now have fully engaged DevOps.

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Deployment Tools – Typical Software

• When deploying an application to a cluster of application servers, without a deployment tool it will be nearly impossible to push changes to your code.

• There are many choices; here are a few typical deployment tools.

– Capistrano: Written in Ruby but well accepted and great to work with

– Jenkins: Another example of Deployment Automation

– Bamboo: Part of the Atlassian stack also includes testing and other features.

– uDeploy: IBM-based deployment tool; drag-and-drop building of flows, good reporting, very scalable and heavily complicated

– Roll Your Own: This is more common; using bash scripts and other tools you can build a project deployment tool fairly easily.

Deployment Tools – Some Requirements

• Integrated with code versioning• Supports Multi-Server• Supports the Application

– Handles Maintenance Mode automatically– Runs Installers– Clears Caches

• Low Downtime• Rollback Procedure (a/b switch)• Trigger events such as shell scripts, or CDN clearing

Nice to have• Integrated testing• Integration to GUI Interfaces

Deployment Tools – Further Reading

I highly suggest researching Fabrizio Branca's work on the subject:
http://www.slideshare.net/aoepeople/rock-solid-magento
His Angry Birds presentation also explains deployment in AWS.

Also check out Joshua Warren's slides on test-driven development and his stem-to-stern tutorial, Rock Solid Magento Development.
http://www.slideshare.net/joshuaswarren/

Deployment Tools - Server Deployments

• Automate servers using:
  – Puppet
  – Chef
  – Ansible
  – Salt

• Centralize Configurations• Automated Updates and Server Online Process

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Load Testing and Metrics

"Don't rate potential over performance."
- Jim Fassel

BlazeMeter
Using BlazeMeter you can easily build repeatable tests, with very nice graphs. (Based on JMeter.)

Gatling
http://gatling.io/
On par with BlazeMeter.

JMeter
Very effective, without having to purchase a SaaS.

Blazemeter graph showing very poor performance

Siege
Can be used minimally to simulate some types of load.

Magento Performance Toolkit

https://github.com/magento/magento-performance-toolkit

The Magento Performance Toolkit is a set of automated tools that enables you to quickly and consistently measure the application performance of Magento using Apache JMeter. The toolkit includes (sic: requires) a PHP script for generating a test database and a JMX script for the performance measurement.
From: Magento Performance Toolkit User's Guide

More Reading: http://aepod.com/using-the-magento-performance-toolkit/

JMeter Tests for “Normal Sites”

The Magento Performance Toolkit can be crafted to pull data from production sites, although it's complicated, and you may want to cover your site using your own JMeter tests.

Gathering the steps for a client site typically takes a day or more, depending on the complexity of the site design and the checkout.

Developing and debugging this can be a real time sink depending on how far down the rabbit hole you want to go.

Using Siege

siege -t 5M -b -c 5 http://www.idealphp.com/

This will start up 5 threads and hammer the URL for 5 minutes.

Siege is very useful, but limited. It’s easy to put a lot of traffic on a box with it, but the traffic will be limited to GET requests.

You can pull a list of URLs from the Magento sitemap and run through random requests; this can even be used to do some pseudo-warming of the cache.
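One way to turn the sitemap into a siege URL list; a sketch in which a tiny inline sitemap stands in for the real download, and the grep/sed parsing is deliberately rough:

```shell
#!/bin/sh
# A sample sitemap stands in for: curl -s http://www.idealphp.com/sitemap.xml
cat <<'XML' > sitemap.xml
<urlset>
  <url><loc>http://www.idealphp.com/</loc></url>
  <url><loc>http://www.idealphp.com/some-product.html</loc></url>
</urlset>
XML
# Extract the <loc> URLs into a file siege can replay.
grep -o '<loc>[^<]*</loc>' sitemap.xml | sed 's/<[^>]*>//g' > urls.txt
cat urls.txt
# Replay in random order (-i) from the file (-f):
# siege -t 5M -b -c 5 -i -f urls.txt
```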

New Relic - Production

New Relic - Testing

Load Testing Production

1. Prepare a downtime:
   – Forward the site to a non-Magento maintenance page
   – Exclude the testing IP, so tests can still proceed
   – Disable cron

2. Copy the production database
3. Point Magento at the new copied DB
4. Modify Magento admin settings in the copied DB

   – Use a test payment gateway
   – Disable transaction emails

5. Test against the copied DB
6. After testing is complete:

   – Point the site at the original DB
   – Put the site back up on live
   – Enable cron

Using this method you can test against production, on production. You will need to bring the site down during the test window, but if this is done at a low point in traffic it causes little to no stress to customers.
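Steps 3 and 4 amount to editing the DB connection in app/etc/local.xml. A sketch with placeholder credentials and a hypothetical copied-database name:

```xml
<!-- app/etc/local.xml: point Magento at the copied database.        -->
<!-- Host, username, password and dbname are placeholders.           -->
<default_setup>
  <connection>
    <host><![CDATA[localhost]]></host>
    <username><![CDATA[magento]]></username>
    <password><![CDATA[secret]]></password>
    <dbname><![CDATA[magento_loadtest]]></dbname>
  </connection>
</default_setup>
```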

You're ready to scale Magento

Before you start:
• Optimized server settings
• Magento application is optimized and clean
• Development pipeline in place
• Deployment infrastructure is solid
• Rigorous testing protocols are in place

Get started:
• Add a load balancer
• Proxy web traffic to web(n) servers
• Cluster Redis using Sentinel / twemproxy
• Add Varnish if the application permits
• Add MySQL read servers
• Build in auto-scaling and you're done

Hardware Load Balancer

HAProxy Load Balancer

Sentinel / twemproxy

High Availability

MySQL / Percona

Master/Slave

Single Write /

Multiple Read Servers

Database

Apache / Nginx

PHP 5.4 +

Multiple Web Servers

Varnish

Web

File Server (NFS / NAS)

Redis / Memcache

Deployment Tools

Monitoring

Other

Magento Cluster Architecture
Typical Cluster Components

Typical Magento Cluster

Load Balancers – HAProxy (software)

• HAProxy: a free, fast and reliable solution featuring high availability, load balancing and proxying for TCP and HTTP-based applications.

• HAProxy can be used for web servers, database servers, Redis and any other TCP based application.

• Easy to configure using a puppet module
• Can be clustered easily for high availability concerns
• Handles L4 load balancing if you want to do fancy multi-application environments
• Built-in stats interface
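A minimal sketch of what an HAProxy config for the web tier might look like (backend names and IPs are made up; the stats URI mirrors the /haproxy?stats endpoint used in the sandbox demos):

```
frontend web_in
    bind *:80
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    option httpchk GET /
    server render1 192.168.200.31:80 check
    server render2 192.168.200.32:80 check

listen stats
    bind *:8080
    mode http
    stats enable
    stats uri /haproxy?stats
```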

SOFTWARE

Load Balancers – Hardware & Hybrid

HARDWARE

• F5 hardware load balancers are a standard
• Rackspace offers a very easy to use web interface to maintain a hybrid infrastructure
• Hardware load balancers offer a turn-key, mature solution
• Typically a managed service, with a very hands-off approach

HYBRID

• AWS Elastic Load Balancers
• Rackspace Cloud Load Balancers
• Cloudflare
• Many others…

Load Balancers – Making a choice

• Budget concerns will drive this decision
• Hosting choices will affect availability, costs and toolsets
• Start locally with HAProxy and build test clusters using Vagrant
• HAProxy can still be used, even with a hardware load balancer in place

Load Balancers – Web Server Concerns

• Simple to load balance; most sites start here
• Challenges include the following:

– Session management: should be a no-brainer if you're using Redis for sessions
– Shared filesystem: use NFS for media, keep all code local to web servers
– Fencing: easily solved with either software or hardware
– Log collection: rsyslog or Splunk

• Keep the system builds simple, repeatable and automated if possible
• How to automate:

– Create a complete image to work from; include puppet so it can pull from a puppet master
– The puppet master spins up your webserver stack
– You have your deployment process in place, so plant the application and spin up the node

• Be prepared to lose nodes; the more you have, the more likely failure is
• When a node runs amok, you must be prepared to kill it dead

Load Balancing - File systems

Best practice for sharing files: use NFS
• Most other options are not viable (GlusterFS, iSCSI, and so on)
• NFS will not be the bottleneck if your application is configured correctly
• Share only media/ to avoid bottlenecks
• Client requirements may drive the cluster toward a NAS

Network Attached Storage (NAS)
• Typically only on very large sites
• Expensive, but very reliable and robust
• Entry level has a very high bar (typically 20+ drives)

Load Balancing - Redis

“Redis clustering using sentinel is easy to set up. Adding twemproxy allows for a highly scalable Redis cluster and you get auto fail over and a ton of other benefits with this configuration. This arrangement can also remove your Redis single point of failure.”
http://aepod.com/clustering-magento-redis-caching-with-sentinel-keepalived-twemproxy-and-twemproxy-agent/

• Easy to set up… well, not really
• Very complex, requires orchestration
• Sentinel monitors Redis clusters
• twemproxy handles sharding
• The twemproxy agent monitors Sentinel
• Very robust when set up: nothing is single-threaded, everything is HA, and the speed….
• Pretty much transparent to Magento despite the complexity

Load Balancing - MySQL

• Should be considered a low value target, except for High Availability concerns

• Magento typically is not bottlenecked by MySQL

• Using Percona XtraDB you can add read slaves

• Magento only supports a single Master/Write server, with multiple read servers

• Setting up load balanced READ db servers is not overly difficult, but doesn’t offer much if any performance benefits

• Cluster DB will require more advanced fencing

• Typical production sites do not use load balanced or clustered MySQL

Load Balancing - Others

• Reverse proxy: Varnish or Nginx; you can and should cluster these

• Puppet likes to be clustered: Mastering Puppet by Thomas Uphill is a great reference

• Monitoring Tools: You’re running these single threaded, what if the monitoring server fails?

• Warehouse Integrations, ERP Layer Stuff, everything may need to be multi-threaded

• Search back ends such as solr or elasticsearch

• Consider all of the parts of the business that surrounds the Magento Application and try to encapsulate them

Magento Performance Toolkit

https://github.com/magento/magento-performance-toolkit

The Magento Performance Toolkit is a set of automated tools that enables you to quickly and consistently measure the application performance of Magento using Apache JMeter. The toolkit includes (sic: requires) a PHP script for generating a test database and a JMX script for the performance measurement.
From: Magento Performance Toolkit User's Guide

More Reading: http://aepod.com/using-the-magento-performance-toolkit/

Magento CE 1.9.1.0 Ready Version: https://github.com/aepod/magento-performance-toolkit

Magento Performance Toolkit – generate.php

• Importing can take a long time, and may require changes to php and MySQL settings

• If it fails, it fails hard. Restart with a fresh DB

• You can modify the profiles to better suit your testing requirements

• Using the toolkit requires putting it in the dev/tools/performance_toolkit/ directory in the Magento install you want to use it on

• The script requires this exact directory, due to includes and autoloading

• Oftentimes you will need to remove the tmp dir inside the toolkit

Magento Performance Toolkit – Generate Profiles

• You can create your own profiles by copying and modifying the XML in profiles/

• Large and X-Large take an extremely long time to import and require changes to the PHP and MySQL settings

• After the import completes, double-check the following:
  • Front end is still up and running and shows the framework categories
  • Admin and frontend log in correctly
  • Indexes and caches rebuild correctly

Magento Performance Toolkit – JMeter Configuration

./jmeter -n -t ~/benchmark-default.jmx -Jhost=www.idealphp.com -Jbase_path=/ -Jusers=30 -Jramp_period=30 -Jreport_save_path=./ -Jloops=2000 -Jadmin_user="admin" -Jadmin_password="adminpass"

• Host: host to check against
• Base Path: for building the URL
• Users: roughly the # of threads (more on next slide)
• Ramp Period: how long to take starting up
• Report Save Path: where it saves the summary
• Loops: how many times it will try to loop through (unless it runs out of customers)
• Admin User/Pass: the admin username and password for the admin site

  – This is expected to be at http://www.idealphp.com/admin/
• Any top-level JMeter attribute can be set from the command line using -Jattribute=

Magento Performance Toolkit – JMeter Test Results

Magento Performance Toolkit – Users Demystified

The JMeter tests use 4 thread groups to accommodate the testing.
• Category Product Browsing (30%): Opens the homepage, goes to a category, opens up 2 simple products and then a configurable.
• View Product Add To Cart (62%): Same as category browsing, however it adds each of the items to the cart in addition to viewing them.
• Guest Checkout (4%): Same as View/Add to Cart; in addition it runs through the checkout as a guest user.
• Customer Checkout (4%): Same as View/Add to Cart; in addition it runs through the checkout as a logged-in customer.

Thread groups create threads based on the number of users and the above percentages (by default).

users * group_percent / 100

• This result is rounded down, and can be 0 for checkouts, making no checkout threads.
• You can set EACH group independently, and they do not need to add up to 100% or less in totality.
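With the default percentages and 30 users, the per-group thread counts (rounded down) come out like this:

```shell
#!/bin/sh
# Default group percentages from the toolkit; 30 users total.
users=30
for entry in "view_catalog:30" "view_product_add_to_cart:62" \
             "guest_checkout:4" "customer_checkout:4"; do
    group=${entry%:*}
    percent=${entry#*:}
    threads=$((users * percent / 100))   # integer division = round down
    echo "$group: $threads threads"
done
```

Note that each 4% checkout group still gets 1 thread here; drop users below 25 and the same arithmetic rounds the checkout groups down to 0.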

Magento Performance Toolkit – More Attributes

Any top-level JMeter attribute can be set from the command line using -Jattribute=

Customer and Admin Password must match what was used during the generate.

orders is tricky; it is used to calculate "users", which are used within the thread groups.
See the JMeter test setup: Thread Group -> validate properties and count users

Magento Performance Toolkit – JMeter Test

1. Setup Thread Groups
   1. Extracts categories from the top menu
   2. Searches for simple products, builds a list
   3. Searches for configurable products, builds a list
   4. Logs in on the admin
   5. Grabs a list of customers from the customer grid
   6. Builds the users count, which is used by the threads

2. Loops through each Thread Group
   1. Category Product Browsing
   2. Product Browsing and add items to cart
   3. Guest Checkout
   4. Customer Checkout

3. Teardown
   1. Just removes some stats, but you could extend this spot for some killer stats.

Using New Relic During JMeter Tests

Using New Relic During JMeter Tests

Magento Online Customers

Magento Performance Toolkit – Maxed Orders

./jmeter -n -t ~/benchmark-default.jmx -Jhost=www.idealphp.com -Jbase_path=/ -Jusers=10 \
  -Jramp_period=30 -Jreport_save_path=./ -Jloops=2000 -Jadmin_password="5678password" \
  -Jadmin_user=admin -Jview_product_add_to_cart_percent=0 -Jview_catalog_percent=0 \
  -Jcustomer_checkout_percent=0 -Jguest_checkout_percent=100

(This can be done in the JMX as well)

Non-Trivial Settings:
View Catalog: 0%
View Product Add to Cart: 0%
Customer Checkout: 0%
Guest Checkout: 100%

This test will slam orders through as fast as it can. Using it is as simple as recording the number of orders at the start and finish of a test, and then calculating the number of orders per hour on that.

Orders Per Hour = (3600 / Time to Complete) * (Ending Orders – Starting Orders)
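A worked example of the formula with made-up numbers (a 900-second run that moved the order count from 50 to 350):

```shell
#!/bin/sh
# Hypothetical test run; substitute your own recorded figures.
start_orders=50
end_orders=350
seconds=900      # time to complete the test
awk -v s="$start_orders" -v e="$end_orders" -v t="$seconds" \
    'BEGIN { printf "Orders per hour: %d\n", (3600 / t) * (e - s) }'
# prints: Orders per hour: 1200
```

awk is used rather than shell arithmetic so non-integer test durations still compute correctly.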

Magento Performance Toolkit – In Production

To use the Magento Performance Toolkit on an existing, built-out production website:
1. Follow the testing-on-a-production-site plan from earlier.
2. Ensure your top menu matches the JMeter tests' search for top_menu; you can switch the JMeter test or try to modify the template to match.
3. Modify the generate script to not create categories
4. Generate your data
5. Run your tests

Other Issues:
• Checkout and cart parts may fail, and require changes to tests
• Not the cleanest methodology, but it does work if you make the effort

Magento Performance Toolkit – Benchmarking

• Find metrics to base any decisions on, and quantify the results
• 30 threads per web head is a good spot to start (except in the VM)
• Orders Per Hour
  • Collect test start and end orders
  • (end orders – start orders) * (3600 / time to complete)

• Customers Per Day
  • (86400 / time to complete) * online customers
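A worked example of the customers-per-day formula, again with made-up numbers (an 1800-second run averaging 40 online customers):

```shell
#!/bin/sh
# Hypothetical test run; substitute your own recorded figures.
seconds=1800          # time to complete the test
online_customers=40   # average online customers during the run
awk -v t="$seconds" -v c="$online_customers" \
    'BEGIN { printf "Customers per day: %d\n", (86400 / t) * c }'
# prints: Customers per day: 1920
```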

Magento Performance Toolkit – Benchmarking

• Max Response Time: should stay below 10 seconds in a good test
• Average Response Time: should be below 500 ms in a good test
• Error Rate: 0.00% is what you're looking for; some glitches may happen
• Collect New Relic stats or other stats and use them to supplement the tests

Using the Magento Vagrant Puppet Sandbox

• VirtualBox or VMware Workstation w/ provider
  – https://www.virtualbox.org/wiki/Downloads
  – http://www.vmware.com/products/workstation

• http://www.vagrantup.com/vmware

• Vagrant– https://www.vagrantup.com/

• Magento Vagrant Puppet Sandbox– https://github.com/aepod/Magento-Vagrant-Puppet-Sandbox/

• Centos 6.6– Can work with RHEL 6.6 very easily.

1) Install VirtualBox or VMware
2) Install Vagrant (and the VMware provider if you're using VMware)
3) Grab the source from GitHub
4) vagrant up

– This can be done one box at a time, for instance puppet and monolith
– All boxes require puppet to be up and running first

Config - Switching Hosts (monolith)

• To use the tests locally you need to poison your DNS

• On Mac: /private/etc/hosts
  http://www.tekrevue.com/tip/edit-hosts-file-mac-os-x/

• On Windows: c:\windows\System32\drivers\etc\hosts
  http://www.thewindowsclub.com/hosts-file-in-windows

Add the following for monolith:
192.168.200.21 web www.idealphp.com

Test – Monolith

• Log in on the JMeter box
• Copy benchmark.jmx to benchmark-small.jmx
• Modify benchmark-small.jmx:
  – Users: 10
  – Loops: 100
  – Browse and Add to Cart: 10%
  – Browse only: 20%
  – Guest Checkout: 10%
  – Customer Checkout: 10%

• These modifications give you 5 total threads, which will work well with the VMs

./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

DON’T FORGET TO NOTE THE RESULTS

Config – Switching Hosts (cluster/web)

• To use the tests locally you need to Poison your DNS• Remove or change the entry for monolith

• On Mac: /private/etc/hostshttp://www.tekrevue.com/tip/edit-hosts-file-mac-os-x/

• On Windows: c:\windows\System32\drivers\etc\hostshttp://www.thewindowsclub.com/hosts-file-in-windows

• Change the hosts on the JMeter box, in /etc/hosts

Add the following for cluster:192.168.200.12 web www.idealphp.com

Test – Cluster (1 web)

• Repeat Tests:./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

• Note Results

Config – Add a Web Node

• vagrant up render2

• Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats

Test – Cluster (2 web)

• Repeat Tests:./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

• Note Results

Break Q/A

Config – Add a Web Node

• vagrant up render3

• Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats

Test – Cluster (3 web)

• Repeat Tests:./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

• Note Results

• You can start to experiment with more threads if you wish.

Config – Add Another Web Node

• vagrant up render4

• Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats

Test – Cluster (4 web)

• Repeat Tests:./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

• Note Results

• Try more threads

Results – What does it all mean?

• We are testing on VMs, so the results will not be good.

• Comparing the setups, you should see throughput go up as you add boxes.

• We will not be able to hit any bottlenecks, because of the VMs.

• In production you will start to find bottlenecks easily using this method.

Test – Cluster (Max Orders)

./jmeter -n -t ../tests/benchmark.jmx -Jhost=www.idealphp.com -Jbase_path=/ -Jusers=10 \
  -Jramp_period=30 -Jreport_save_path=./ -Jloops=2000 -Jadmin_password="5678password" \
  -Jadmin_user=admin -Jview_product_add_to_cart_percent=0 -Jview_catalog_percent=0 \
  -Jcustomer_checkout_percent=0 -Jguest_checkout_percent=100

This test will slam orders through as fast as it can. Using it is as simple as recording the number of orders at the start and finish of a test, and then calculating the number of orders per hour on that.

Orders Per Hour = (3600 / Time to Complete) * (Ending Orders – Starting Orders)

Try against fewer web nodes and compare results

Config – PHP FPM Tuning

• Adjust /etc/php-fpm.d/www.conf: change max_children from 8 to 16

• Retest with original 4 server test:./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30

Magento Vagrant Puppet Sandbox

Questions / Feedback

Thank You!

• Mathew Beane <[email protected]>
• Twitter: @aepod
• Blog: http://aepod.com/

Rate this talk:https://joind.in/talk/view/13541

(Slides will be available)

Thanks to:
My Family

The Magento Community
Robofirm

Fabrizio Branca (deployments)
Thijs Feryn (sentinel)

Ben Rose (puppet)
Rackspace

Digital Ocean

Advanced Magento Load Balancing

• Reverse Proxy (Varnish or Nginx): Using Varnish to reverse proxy can really speed up the site. Magento designs can break this, as can extensions. Expect some difficulty and issues.

• Redis Sentinel w/ twemproxy: Using Sentinel and twemproxy wrapped by your load balancer will give you amazing performance out of your Redis cluster. This setup is fairly complex.

• Database Servers: Read servers can be load balanced with Magento. This is easy to achieve, but this doesn’t solve the checkout issue.

• Autoscaling: Automatically bring up new nodes as you need them, and destroy them when not in use.

Reverse Proxy (Varnish) Scaling Concerns

• Turpentine is the standard go-to extension to tie Magento to Varnish
  – Handles ESI
  – Handles updating the VCL
  – http://www.magentocommerce.com/magento-connect/turpentine-varnish-cache.html

• If on Enterprise, disable FPC; on CE, disable Lesti_Fpc

• https://github.com/nexcess/magento-turpentine/wiki/Installation

• Typically Varnish will run on each render box, and proxy to the local nginx.

• Issues will typically show up as over caching of blocks or other bits of front end page

• SSL is NOT supported by Varnish; terminate it at your load balancer

Reverse Proxy (Nginx) Scaling Concerns

• NGINX is a great reverse proxy and load balancer

• http://nginx.com/resources/admin-guide/reverse-proxy/

• Can be used to effectively buffer and split requests to different cluster servers

Redis (Sentinel / twemproxy)

• Requires several layers of applications:
  – Load Balancer: load-balances Redis traffic to the twemproxy servers

– Twemproxy: proxy server, typically setup as a cluster of 2 servers (can be mixed on other servers)

– Nutcracker Agent: Connects twemproxy to sentinel to keep track of shards of servers

– Sentinel: Monitors the redis instances and maintains availability

– Redis: Dozens of redis servers, sharded, maintained and fenced by Sentinel and nutcracker

– VERY Complex

Database Servers (Percona)
• Percona XtraDB Cluster to cluster MySQL
• Percona XtraBackup for duplicating and rebuilding nodes
• Percona Toolkit to help debug any issues you're running into
• Difficult to scale write servers
• Scale out your read servers as needed, but MySQL reads are rarely the bottleneck
• Typically a slave server is used for backup and hot swap, NOT clustering

A couple quick tips:
• Not all tables in Magento are InnoDB; converting the MyISAM and Memory tables is OK
• You will need to be able to kill read servers and refresh them
• Use your master server as a read server in the load balancer pool; when you kill all your read servers, it can fall back to the master

Auto-Scaling Strategy

• Insert puzzle-building analogy joke here: http://www.wikihow.com/Assemble-Jigsaw-Puzzles
• Each hosting environment has its own quirks; add the business logic requirements on top of that and you will almost always have a unique infrastructure for every client
• Build small pieces and work them into the larger picture; you can get a lot of performance with a few minor changes
• Test everything you do, keep detailed notes on the configurations, and compare against the previous tests

Virtualbox

• Uses 127.0.0.1 for the primary network device, which is very annoying
  – You can still use the secondary IPs provided

• Has some issues with puppet provisioning; it may fail, requiring some handling
  – Standup may fail during puppet, due to a non-zero response

• The puppet cert can fail; this is a pain
  – On the puppet master, remove the cert: puppet cert clean boxname
  – Remove /var/lib/puppet/ssl/ on the client box
  – Remove /home/vagrant/.vagrantlocks/puppetagent on the client
  – Rerun vagrant provision boxname from the host OS

• Puppet provisioning can fail
  – Rerun provisioning; keep an eye on what is causing it to fail