Lone Star PHP: Scaling Magento

Transcript of Lone Star PHP: Scaling Magento

  1. 1. Mathew Beane
  2. 2. Mathew Beane Director, Systems Engineering @ Robofirm
  3. 3. http://www.robofirm.com/
  4. 4. Open-source e-commerce platform. PHP-based application; the core utilizes Zend Framework 1. Very flexible: it is built to be modified. Extremely scalable; supports huge stores. Market leader and still growing. Magento 2 is right around the corner; it is in development (beta) now. http://magento.com/developers/magento2
  5. 5. Be prepared to scale your clients' Magento websites, because growth is the norm.
  6. 6. https://github.com/aepod/Magento-Vagrant-Puppet-Sandbox/ Uses a Puppet master to build out the cluster. Check out the repo and run `vagrant up`; it will bring up a full cluster. The initial release is a standard (n) web nodes + single DB config. The monolith server has Web, DB and Redis all-in-one. We will come back to this later. We have memory sticks to speed up the install process; it would be best to start now so you can follow along when we get to the demonstration parts.
  7. 7. Introduction Magento Application: Preparing and Optimizing Magento Cluster Architecture: Examine Typical Layouts Magento Vagrant/Puppet Sandbox: Demonstrating Clustering Magento Performance Toolkit: Measuring Performance Benchmarks Advanced Scaling Topics: Redis, Reverse Proxy, Database and Others Conclusion: Open Q & A
  8. 8. Before you start: Optimized Server Settings Magento application is optimized and clean Development pipeline in place Deployment infrastructure is solid Rigorous testing protocols are in place
  9. 9. Optimize sysctl.conf: increase limits for the servers based on memory and CPU. Modify Nginx settings: set up PHP-FPM with TCP properly, optimize worker connections, tune caching and other settings. Optimize PHP-FPM: Process Manager optimization, Zend Opcode Cache, tune other settings. Optimize DB / Redis: typically done at the same time.
  10. 10. Example /etc/sysctl.conf
  ##
  # Probably requires adjustment
  # Settings for: 32GB RAM, 32 CPU cores
  ##
  fs.file-max = 2097152
  kernel.pid_max = 2097152
  kernel.shmmax = 4294967296
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.core.rmem_default = 1048576
  net.core.wmem_default = 1048576
  net.core.netdev_max_backlog = 65536
  net.core.somaxconn = 65536
  net.core.optmem_max = 25165824
  net.ipv4.tcp_rmem = 4096 1048576 16777216
  net.ipv4.tcp_wmem = 4096 1048576 16777216
  net.ipv4.tcp_max_syn_backlog = 65536
  vm.max_map_count = 262144
  Remember to back up your configurations before making changes. net.core.somaxconn is the least trivial: it is typically set to 128, which will cause errors such as "apr_socket_recv: Connection reset by peer" and "connect() failed". net.core.netdev_max_backlog is also important when considering higher load, and fs.file-max will almost certainly need to be increased. After changing values, run `sysctl -p` to reload your kernel settings. ALWAYS TEST after changing values and quantify your results against previous tests. See also: https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/8.2/html/Performance_Tuning_Guide/system-tuning.html
  11. 11. Set fastcgi up correctly:
  location ~ \.php$ {
      fastcgi_pass 127.0.0.1:9000;  # use your PHP-FPM TCP listen address here
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;
      fastcgi_param MAGE_IS_DEVELOPER_MODE 0;
      fastcgi_param PHP_VALUE error_log=/var/log/php-errors.log;
      include fastcgi_params;
      fastcgi_cache off;
      fastcgi_buffer_size 128k;
      fastcgi_buffers 256 16k;
      fastcgi_busy_buffers_size 256k;
      fastcgi_temp_file_write_size 256k;
      fastcgi_read_timeout 120;
  }
  Adding buffers will help a lot with the odd socket errors.
  12. 12. Worker considerations:
  Worker Processes = total # CPU cores
  Worker Connections = a portion of `ulimit -n`
  Max Connections = worker_processes * worker_connections
  worker_rlimit_nofile safe value = 2 * Max Connections
  keepalive_timeout mitigates overage by trading for latency
  ulimit can be increased as discussed in the sysctl.conf optimizations. Some systems will allow the following command:
  ulimit -n 10000
  This changes the limit without having to change sysctl.conf, which is handy for testing different limit settings.
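  The worker arithmetic above can be made concrete. A sketch of the relevant nginx directives for a hypothetical 16-core box with `ulimit -n` raised to 65536 — the numbers are illustrative, not recommendations:

  ```nginx
  # Assumption: 16 CPU cores, ulimit -n = 65536. Size these for your hardware.
  worker_processes     16;       # total # CPU cores
  worker_rlimit_nofile 65536;    # safe value = 2 * max connections (2 * 16 * 2048)

  events {
      worker_connections 2048;   # a portion of ulimit -n
      # Max connections = 16 workers * 2048 = 32768
  }

  http {
      keepalive_timeout 15;      # trades a little latency to mitigate connection overage
  }
  ```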
  13. 13. Tune client caching using expires:
  # Media: images, icons, video, audio, HTC
  location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
      expires 1M;
      access_log off;
      add_header Cache-Control "public";
  }
  When tuning Nginx caching, use curl to fetch the headers:
  curl -X GET -I www.idealphp.com:80/skin/frontend/rwd/default/images/logo.gif
  HTTP/1.1 200 OK
  Server: nginx/1.0.15
  Date: Tue, 14 Apr 2015 17:56:32 GMT
  Content-Type: image/gif
  Content-Length: 2320
  Last-Modified: Tue, 14 Apr 2015 00:30:08 GMT
  X-Robots-Tag: noindex, nofollow, nosnippet, noarchive
  Accept-Ranges: bytes
  Cache-Control: max-age=31536000
  Cache-Control: public
  14. 14. PHP 5.4+: must use PHP-FPM instead of mod_php. Use TCP instead of sockets: more work to set up and tune, but it scales much further. Test with the Zend opcode cache instead of APC; Zend OPcache can be quite a bit faster. Turn off apc.stat if using APC; for Zend OPcache set opcache.revalidate_freq=0. Tune the Process Manager: pm = ondemand; pm.max_children: about 6-8 per CPU (more later); pm.max_requests = 10000 (PHP workers should die for garbage collection). The listen backlog is limited by net.ipv4.tcp_max_syn_backlog; set listen.backlog to 64000 (typically set to -1 or 128), but only do this after the sysctl.conf optimization.
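  A minimal PHP-FPM pool fragment reflecting the settings above; the listen address and the child count of 48 (6-8 per core, assuming a hypothetical 8-core server) are assumptions to adjust for your own hardware:

  ```ini
  ; /etc/php-fpm.d/www.conf (fragment) -- assumes an 8-core server; adjust to yours
  listen = 127.0.0.1:9000        ; TCP instead of a unix socket
  listen.backlog = 64000         ; only after the sysctl.conf optimization

  pm = ondemand
  pm.max_children = 48           ; roughly 6-8 per CPU core as a starting point
  pm.max_requests = 10000        ; recycle workers so PHP garbage collection happens
  ```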
  15. 15. pm.max_children will limit the amount of memory PHP uses overall. Calculate the memory a child process uses while under load, then use the total amount of memory you want PHP to consume: pm.max_children = Total RAM dedicated to the web server / max child process size. Based on this, you can set your max_children on memory instead of the CPU metric used earlier. This will allow for more memory consumption in a production environment. Be sure to test with higher values; more is not always better. Source: http://myshell.co.uk/index.php/adjusting-child-processes-for-php-fpm-nginx/
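  The formula above with made-up numbers — 12 GB of RAM dedicated to PHP-FPM and an average child size of 96 MB measured under load; substitute your own measurements:

  ```shell
  #!/bin/sh
  # Hypothetical measurements -- replace with values observed on your server.
  TOTAL_PHP_MB=12288   # RAM dedicated to the web server's PHP-FPM pool (12 GB)
  CHILD_MB=96          # average php-fpm child process size under load

  # pm.max_children = Total RAM dedicated to the web server / max child process size
  MAX_CHILDREN=$(( TOTAL_PHP_MB / CHILD_MB ))
  echo "pm.max_children = $MAX_CHILDREN"   # prints: pm.max_children = 128
  ```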
  16. 16. Redis is the preferred cache for Magento. The sysctl/ulimit values affect the Redis maxclients. Use three separate Redis instances with a single DB in each, on different ports. tcp-backlog can be increased; adjust sysctl somaxconn and tcp_max_syn_backlog to match. maxmemory can typically be set very low. Using Sentinel and twemproxy you can scale horizontally; we will come back to this in the slides. Cluster twemproxy to eliminate all single-threaded bottlenecks and single points of failure. At Rackspace or AWS: use ObjectRocket (https://objectrocket.com/) for Redis as a service.
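  A sketch of one of the three instance configs, tying the slide back to the sysctl settings above; the port assignments, file name and memory size are assumptions:

  ```
  # /etc/redis/redis-cache.conf -- one of three instances,
  # e.g. 6379 for cache, 6380 for full-page cache, 6381 for sessions
  port 6379
  databases 1          # a single DB per instance
  tcp-backlog 65536    # only effective if somaxconn / tcp_max_syn_backlog allow it
  maxmemory 512mb      # can typically be set quite low; measure your working set
  ```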
  17. 17. Turn off MySQL query caching: it is single-threaded and not of use with Magento. Typical selected config settings for a 32GB dual hex-core server:
  myisam_sort_buffer_size = 64M
  key_buffer_size = 256M
  join_buffer_size = 4M
  read_buffer_size = 4M
  read_rnd_buffer_size = 4M
  sort_buffer_size = 8M
  table_open_cache = 8192
  thread_cache_size = 512
  tmp_table_size = 384M
  max_heap_table_size = 384M
  max_allowed_packet = 1024M
  query_cache_limit = 4M
  query_cache_size = 0
  query_cache_type = 0
  query_prealloc_size = 16384
  query_alloc_block_size = 16384
  innodb_thread_concurrency = 24
  innodb_buffer_pool_size = 16G
  innodb_log_file_size = 384M
  innodb_log_buffer_size = 24M
  innodb_additional_mem_pool_size = 16M
  innodb_io_capacity = 800
  innodb_concurrency_tickets = 900
  innodb_flush_neighbor_pages = cont
  innodb_lock_wait_timeout = 75
  innodb_flush_method = O_DIRECT
  18. 18. Varnish is great, however your application must be suitable CDNs can have a great impact on end-user performance Always test any changes you make
  19. 19. Before you start: Optimized Server Settings Magento application is optimized and clean Development pipeline in place Deployment infrastructure is solid Rigorous testing protocols are in place
  20. 20. Keep a clean core. If you inherit a project, one of the first things you should do is clean up the core. A clean core means you have not changed any default files, including templates, layouts and code. What is a clean core: no changes to app/code/core; no changes to ANY files provided by Magento; nothing in app/code/local/Mage (and so on). Ben Marks, Magento Community Evangelist, @benmarks (edit core and find out what he will unleash on you).
  21. 21. Find and eliminate your core changes with: https://github.com/AOEpeople/mpmd Magento Project Mess Detector by Fabrizio Branca
  22. 22. Fabrizio Branca's work on cleaning up Magento projects: http://fbrnc.net/blog/2014/12/magento-update In this article he goes into depth on the process of separating the core files, and using modman and Composer to package everything. Tegan Snyder's work on building modman configurations for extensions out of existing source: https://github.com/tegansnyder/meff This will allow you to generate modman files for your extensions, which may be handy in the process of cleaning up.
  23. 23. Use Zend Server 8 Z-Ray to profile and tune your Magento application. Z-Ray allows you to see every request, and many details about the request. It's easy to install, free to try, and really amazing. It can be used to debug mobile browsers through the Zend Server backend, and to debug server cron scripts and the like as well. Demo: http://serverdemo.zend.com/magento/ Blog article: http://aepod.com/debug-magento-using-zend-zray/
  24. 24. Before you start: Optimized Server Settings Magento application is optimized and clean Development pipeline in place Deployment infrastructure is solid Rigorous testing protocols are in place
  25. 25. Applies equally to the hardware architecture as to the software application. Every project should have a continuous development cycle in place, and ensure that it is well documented and easy to follow for all of the people involved.
  26. 26. Application packaging: Composer / modman. In order to deploy to a cluster of servers you will need to package your application. Maintain separate packages for core Magento. Designs, extensions and other modifications should be packaged in a way that allows you to apply them to a fresh core Magento.
  27. 27. Choose a branching methodology and build around it: release / feature branches. Make pull requests part of your workflow.
  28. 28. http://nvie.com/posts/a-successful-git-branching-model/
  29. 29. Building testing into all of your development cycles can be very beneficial; when making releases you can test against them. Developers are working on testing for Magento: BDD: https://speakerdeck.com/alistairstead/bdd-with-magento Behat: https://github.com/MageTest/BehatMage PHPSpec: https://github.com/MageTest/MageSpec And others. Magento 2 will have built-in PHPUnit tests: https://wiki.magento.com/display/MAGE2DOC/Magento+Automated+Testing+Guide https://alankent.wordpress.com/2014/06/28/magento-2-test-automation/
  30. 30. Use a deployment tool, and LOCK DOWN production. No one should ever have to touch an application file manually for any reason. If you can extend this to your OS and server stack you now have fully engaged DevOps.
  31. 31. Before you start: Optimized Server Settings Magento application is optimized and clean Development pipeline in place Deployment infrastructure is solid Rigorous testing protocols are in place
  32. 32. When deploying an application to a cluster of application servers, if you do not have a deployment tool it will be nearly impossible to push changes to your code. There are many choices; here are a few typical deployment tools. Capistrano: written in Ruby, but well accepted and great to work with. Jenkins: another example of deployment automation. Bamboo: part of the Atlassian stack; also includes testing and other features. uDeploy: IBM deployment tool; drag-and-drop building of flows, good reporting, very scalable and heavily complicated. Roll your own: this is more common; using bash scripts and other tools you can build a project deployment tool fairly easily.
  33. 33. Integrated with code versioning Supports Multi-Server Supports the Application Handles Maintenance Mode automatically Runs Installers Clears Caches Low Downtime Rollback Procedure (a/b switch) Trigger events such as shell scripts, or CDN clearing Nice to have Integrated testing Integration to GUI Interfaces
  34. 34. I highly suggest researching Fabrizio Branca's work on the subject: http://www.slideshare.net/aoepeople/rock-solid-magento Also, his Angry Birds presentation explains deployment in AWS. Also check out Joshua Warren's slides on test-driven development and his stem-to-stern tutorial Rock Solid Magento Development. http://www.slideshare.net/joshuaswarren/
  35. 35. Automate Servers using: Puppet Chef Ansible Salt Centralize Configurations Automated Updates and Server Online Process
  36. 36. Before you start: Optimized Server Settings Magento application is optimized and clean Development pipeline in place Deployment infrastructure is solid Rigorous testing protocols are in place
  37. 37. "Don't rate potential over performance." - Jim Fassel. BlazeMeter: using BlazeMeter you can easily build repeatable tests with very nice graphs (based on JMeter). Gatling: http://gatling.io/ on par with BlazeMeter. JMeter: very effective, without having to purchase a SaaS. Siege: can be used minimally to simulate some types of load.
  38. 38. https://github.com/magento/magento-performance-toolkit "The Magento Performance Toolkit is a set of automated tools that enables you to quickly and consistently measure the application performance of Magento using Apache JMeter. The toolkit includes (sic: requires) a PHP script for generating a test database and a JMX script for the performance measurement." From: Magento Performance Toolkit User's Guide. More reading: http://aepod.com/using-the-magento-performance-toolkit/
  39. 39. The Magento Performance Toolkit can be crafted to pull data from production sites, although it's complicated and you may want to cover your site using your own JMeter tests. Gathering the steps for a client site typically takes a day or more depending on the complexity of the site design and the checkout. Developing and debugging this can be a real time sink depending on how far down the rabbit hole you want to go.
  40. 40. For example, `siege -c 5 -t 5M http://www.idealphp.com/` will start up 5 threads and hammer the URL for 5 minutes. Siege is very useful, but limited. It's easy to put a lot of traffic on a box with it, but the traffic will be limited to GET requests. You can pull a list of URLs from the Magento sitemap and run through random requests; this can even be used to do some pseudo-warming of the cache.
  41. 41. 1. Prepare a downtime: Forward site to a non-magento maintenance page Exclude Testing IP So tests can still proceed Disable Cron 2. Copy Production Database 3. Point Magento at the new copied DB 4. Modify Magento Admin Settings in copied DB Use Test Payment Gateway Disable Transaction Emails 5. Test against Copied DB 6. After Testing is complete: Point site at original DB Put site back up on live Enable Cron Using this method, you can test against production on production. You will need to bring the site down during the time period, but if this is done at a low point in traffic it causes little to no stress to the customers.
  42. 42. Before you start: Optimized server settings. Magento application is optimized and clean. Development pipeline in place. Deployment infrastructure is solid. Rigorous testing protocols are in place. Get started: Add a load balancer. Proxy web traffic to web(n) servers. Cluster Redis using Sentinel / twemproxy. Add Varnish if the application permits. Add MySQL read servers. Build in auto-scaling and you're done.
  43. 43. Typical Magento cluster:
  Load balancer: hardware load balancer or HAProxy
  Web: Apache / Nginx, PHP 5.4+, multiple web servers, Varnish
  Database: high-availability MySQL / Percona, master/slave, single write / multiple read servers
  Cache: Redis / Memcache, Sentinel / twemproxy
  File server: NFS / NAS
  Other typical cluster components: deployment tools, monitoring
  44. 44. SOFTWARE. HAProxy: a free, fast and reliable solution featuring high availability, load balancing and proxying for TCP and HTTP-based applications. HAProxy can be used for web servers, database servers, Redis and any other TCP-based application. Easy to configure using a Puppet module. Can be clustered easily for high-availability concerns. Handles L4 load balancing if you want to do fancy multi-application environments. Built-in stats interface.
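  A minimal haproxy.cfg sketch for the web tier; the node names echo the sandbox's render boxes used later in this deck, while the IPs, ports and health check are assumptions:

  ```
  # haproxy.cfg (fragment) -- round-robin across the web/render nodes
  frontend http_in
      bind *:80
      default_backend web_nodes

  backend web_nodes
      balance roundrobin
      option httpchk GET /
      server render1 192.168.0.11:80 check
      server render2 192.168.0.12:80 check

  listen stats
      bind *:8080
      stats enable
      stats uri /haproxy?stats
  ```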
  45. 45. HARDWARE. F5 hardware load balancers are a standard. Rackspace offers a very easy-to-use web interface to maintain a hybrid infrastructure. Hardware load balancers offer a turn-key, mature solution, typically as a managed service with a very hands-off approach. HYBRID. AWS Elastic Load Balancers, Rackspace Cloud Load Balancers, Cloudflare, and many others.
  46. 46. Budget concerns will drive this decision Hosting Choices will affect availability, costs and toolsets Start locally with HAProxy and build test clusters using vagrant HAProxy can still be used, even with a hardware load balancer in place.
  47. 47. Simple to load balance; most sites start here. Challenges include the following. Session management: should be a no-brainer if you're using Redis for sessions. Shared filesystem: use NFS for media, keep all code local to the web servers. Fencing: easily solved with either software or hardware. Log collection: rsyslog or Splunk. Keep the system builds simple and repeatable, and automate if possible. How to automate: create a complete image to work from (include puppet so it can pull from a puppet master); the puppet master spins up your web server stack; with your deployment process in place, plant the application and spin up the node. Be prepared to lose nodes: the more you have, the more likely failure is. When a node runs amok, you must be prepared to kill it dead.
  48. 48. Best practice for sharing files: use NFS. Most other options are not viable (GlusterFS, iSCSI, and so on). NFS will not be the bottleneck if your application is configured correctly. Share only media/ to avoid bottlenecks. Client requirements may drive the cluster toward a NAS. Network Attached Storage (NAS): typically only on very large sites; expensive, but very reliable and robust; entry level has a very high bar (typically 20+ drives).
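  Sharing only media/ might look like the following; the paths, hostname and subnet are illustrative:

  ```
  # /etc/exports on the file server -- export only the media directory
  /var/www/html/media 192.168.0.0/24(rw,sync,no_subtree_check)

  # /etc/fstab on each web node -- code stays local, only media is mounted
  filer:/var/www/html/media  /var/www/html/media  nfs  rw,hard,noatime  0 0
  ```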
  49. 49. Redis clustering using Sentinel is easy to set up. Adding twemproxy allows for a highly scalable Redis cluster, and you get automatic failover and a ton of other benefits with this configuration. This arrangement can also remove your Redis single point of failure. http://aepod.com/clustering-magento-redis-caching-with-sentinel-keepalived-twemproxy-and-twemproxy-agent/ Easy to set up? Well, not really: very complex, requires orchestration. Sentinel monitors the Redis instances; twemproxy handles sharding; the twemproxy agent monitors Sentinel. Very robust when set up: nothing is single-threaded, everything is HA, and the speed is there. Pretty much transparent to Magento despite the complexity.
  50. 50. The database should be considered a low-value target, except for high-availability concerns. Magento typically is not bottlenecked by MySQL. Using Percona XtraDB you can add read slaves. Magento only supports a single master/write server, with multiple read servers. Setting up load-balanced READ db servers is not overly difficult, but doesn't offer much, if any, performance benefit. A clustered DB will require more advanced fencing. Typical production sites do not use load-balanced or clustered MySQL.
  51. 51. Reverse proxy: Varnish or Nginx; you can and should cluster these. Puppet likes to be clustered: Mastering Puppet by Thomas Uphill is a great reference. Monitoring tools: you're running these single-threaded; what if the monitoring server fails? Warehouse integrations, the ERP layer, everything may need to be multi-threaded. Search back ends such as Solr or Elasticsearch. Consider all of the parts of the business that surround the Magento application and try to encapsulate them.
  52. 52. https://github.com/magento/magento-performance-toolkit "The Magento Performance Toolkit is a set of automated tools that enables you to quickly and consistently measure the application performance of Magento using Apache JMeter. The toolkit includes (sic: requires) a PHP script for generating a test database and a JMX script for the performance measurement." From: Magento Performance Toolkit User's Guide. More reading: http://aepod.com/using-the-magento-performance-toolkit/ Magento CE-ready version: https://github.com/aepod/magento-performance-toolkit
  53. 53. Importing can take a long time, and may require changes to PHP and MySQL settings. If it fails, it fails hard: restart with a fresh DB. You can modify the profiles to better suit your testing requirements. Using the toolkit requires putting it in the directory of the Magento install you want to use it on; the scripts require this exact directory, due to includes and autoloading. Oftentimes you will need to remove the tmp dir inside the toolkit.
  54. 54. You can create your own profiles by copying and modifying the XML in profiles/ Large and X-Large take extremely long times to import and require changes to the PHP and MySQL settings After it completes import, double check the following Front end still up and running and shows the framework categories Admin and Frontend logs in correctly Indexes and Caches rebuild correctly
  56. 56. The JMeter tests use 4 thread groups to accommodate the testing. Category Product Browsing (30%): opens the homepage, goes to a category, opens up 2 simple products and then a configurable. View Product Add To Cart (62%): same as category browsing, but it adds each of the items to the cart in addition to viewing them. Guest Checkout (4%): same as View/Add to Cart; in addition it runs through the checkout as a guest user. Customer Checkout (4%): same as View/Add to Cart; in addition it runs through the checkout as a logged-in customer. Thread groups create threads based on the number of users and the above percentages (by default): users * group_percent / 100. The result is rounded down, and can be 0 for the checkouts, making no checkout threads. You can set EACH group independently; they do not need to add up to 100% in total.
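  The rounding pitfall above is worth seeing with numbers; shell integer division truncates the same way, so with the default 4% checkout groups a 10-user test generates zero checkout threads:

  ```shell
  #!/bin/sh
  # threads = users * group_percent / 100, rounded down (integer division)
  users=10
  guest_checkout_percent=4
  echo "$(( users * guest_checkout_percent / 100 ))"   # prints 0 -- no checkout load!

  users=25
  echo "$(( users * guest_checkout_percent / 100 ))"   # prints 1 -- first checkout thread
  ```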
  57. 57. Any top-level JMeter attribute can be set from the command line using -Jattribute=value. The customer and admin passwords must match what was used during the generate. orders is tricky; it is used to calculate users, which is used within the thread groups. See the JMeter test setup: Thread Group -> validate properties and count users.
  58. 58. 1. Setup Thread Groups
    1. Extracts categories from the top menu
    2. Searches for simple products, builds a list
    3. Searches for configurable products, builds a list
    4. Logs in on the admin
    5. Grabs a list of customers from the customer grid
    6. Builds the users count, which is used by the threads
  2. Loops through each Thread Group
    1. Category product browsing
    2. Product browsing and add items to cart
    3. Guest checkout
    4. Customer checkout
  3. Teardown
    1. Just removes some stats, but you could extend this spot for some killer stats.
  59. 59. ./jmeter -n -t ~/benchmark-default.jmx -Jhost=www.idealphp.com -Jbase_path=/ -Jusers=10 -Jramp_period=30 -Jreport_save_path=./ -Jloops=2000 -Jadmin_password="5678password" -Jadmin_user=admin -Jview_product_add_to_cart_percent=0 -Jview_catalog_percent=0 -Jcustomer_checkout_percent=0 -Jguest_checkout_percent=100 (This can be done in the JMX as well.) Non-trivial settings: View Catalog: 0%; View Product Add to Cart: 0%; Customer Checkout: 0%; Guest Checkout: 100%. This test will slam orders through as fast as it can. Using it is as simple as recording the number of orders at the start and finish of a test, and then calculating the number of orders per hour from that. Orders Per Hour = (3600 / Time to Complete) * (Ending Orders - Starting Orders)
  60. 60. To use the Magento Performance Toolkit on an existing, built-out production website: 1. Follow the testing-on-a-production-site plan from earlier. 2. Ensure your top menu matches the JMeter test's search for top_menu; you can switch the JMeter test or try to modify the template to match. 3. Modify the generate script to not create categories. 4. Generate your data. 5. Run your tests. Other issues: checkout and cart parts may fail and require changes to the tests. Not the cleanest methodology, but it does work if you make the effort.
  61. 61. Find metrics to base any decisions you make on, and quantify the results. 30 threads per web head is a good spot to start (except in the VM). Orders per hour: collect test start and end orders; (end orders - start orders) * (3600 / time to complete). Customers per day: (86400 / time to complete) * online customers.
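  Both formulas with hypothetical results from a single run — a 900-second test that added 300 orders, with 50 online customers; substitute your own measurements:

  ```shell
  #!/bin/sh
  # Hypothetical test results -- substitute your own measurements.
  start_orders=1000
  end_orders=1300
  seconds=900          # time to complete the test
  online_customers=50

  orders_per_hour=$(( (end_orders - start_orders) * 3600 / seconds ))
  customers_per_day=$(( 86400 / seconds * online_customers ))

  echo "orders/hour:   $orders_per_hour"     # prints 1200
  echo "customers/day: $customers_per_day"   # prints 4800
  ```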
  62. 62. Max response time: should stay below 10 seconds in a good test. Average response time: should be below 500ms in a good test. Error rate: 0.00% is what you're looking for; some glitches may happen. Collect New Relic stats or other stats and use them to supplement the tests.
  63. 63. VirtualBox or VMware Workstation w/ provider: https://www.virtualbox.org/wiki/Downloads http://www.vmware.com/products/workstation http://www.vagrantup.com/vmware Vagrant: https://www.vagrantup.com/ Magento Vagrant Puppet Sandbox: https://github.com/aepod/Magento-Vagrant-Puppet-Sandbox/ CentOS 6.6 (can work with RHEL 6.6 very easily). 1) Install VirtualBox or VMware 2) Install Vagrant (and the VMware provider if you're using VMware) 3) Grab the source from GitHub 4) vagrant up. This can be done one box at a time, for instance puppet and monolith. All boxes require puppet to be up and running first.
  64. 64. To use the tests locally you need to poison your DNS. On Mac: /private/etc/hosts http://www.tekrevue.com/tip/edit-hosts-file-mac-os-x/ On Windows: C:\Windows\System32\drivers\etc\hosts http://www.thewindowsclub.com/hosts-file-in-windows Add the following for monolith: web www.idealphp.com
  65. 65. Login on JMeter box Copy benchmark.jmx to benchmark-small.jmx Modify benchmark-small.jmx Users: 10 Loops: 100 Browse and Add to Cart: 10% Browse only: 20% Guest Checkout: 10% Customer Checkout: 10% These modifications give you 5 total threads, which will work well with the VMs ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30 DONT FORGET TO NOTE THE RESULTS
  66. 66. To use the tests locally you need to poison your DNS. Remove or change the entry for monolith. On Mac: /private/etc/hosts http://www.tekrevue.com/tip/edit-hosts-file-mac-os-x/ On Windows: C:\Windows\System32\drivers\etc\hosts http://www.thewindowsclub.com/hosts-file-in-windows Change the hosts on the JMeter box, in /etc/hosts. Add the following for cluster: web www.idealphp.com
  67. 67. Repeat Tests: ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30 Note Results
  68. 68. Vagrant up render2 Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats
  69. 69. Repeat Tests: ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30 Note Results
  70. 70. Vagrant up render3 Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats
  71. 71. Repeat Tests: ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30 Note Results You can start to experiment with more threads if you wish.
  72. 72. Vagrant up render4 Note Results in haproxy stats: http://www.idealphp.com/haproxy?stats
  73. 73. Repeat Tests: ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30 Note Results Try more threads
  74. 74. We are testing on VMs; the results will not be good. Doing some comparison between the setups, you should see throughput going up as you add boxes. We will not be able to hit any bottlenecks because of the VMs. In production you will start to find bottlenecks easily using this method.
  75. 75. ./jmeter -n -t ../tests/benchmark.jmx -Jhost=www.idealphp.com -Jbase_path=/ -Jusers=10 -Jramp_period=30 -Jreport_save_path=./ -Jloops=2000 -Jadmin_password="5678password" -Jadmin_user=admin -Jview_product_add_to_cart_percent=0 -Jview_catalog_percent=0 -Jcustomer_checkout_percent=0 -Jguest_checkout_percent=100 This test will slam orders through as fast as it can. Using it is as simple as recording the number of orders at the start and finish of a test, and then calculating the number of orders per hour from that. Orders Per Hour = (3600 / Time to Complete) * (Ending Orders - Starting Orders) Try against fewer web nodes and compare results.
  76. 76. Adjust /etc/php-fpm.d/www.conf: change pm.max_children from 8 to 16. Retest with the original 4-server test: ./jmeter -n -t ../tests/benchmark-small.jmx -Jhost=www.idealphp.com -Jramp_period=30
  77. 77. Questions / Feedback
  78. 78. Mathew Beane Twitter: @aepod Blog: http://aepod.com/ Rate this talk: https://joind.in/talk/view/13541 (Slides will be available.) Thanks to: my family, the Magento community, Robofirm, Fabrizio Branca (deployments), Thijs Feryn (sentinel), Ben Rose (puppet), Rackspace, Digital Ocean.
  79. 79. Reverse proxy (Varnish or Nginx): using Varnish as a reverse proxy can really speed up the site. Magento designs can break this, as can extensions; expect some difficulty and issues. Redis Sentinel w/ twemproxy: using Sentinel and twemproxy wrapped by your load balancer will give you amazing performance out of your Redis cluster. This setup is fairly complex. Database servers: read servers can be load balanced with Magento. This is easy to achieve, but it doesn't solve the checkout issue. Autoscaling: automatically bring up new nodes as you need them, and destroy them when not in use.
  80. 80. Turpentine is the standard go-to extension for tying Magento to Varnish. Handles ESI. Handles updating the VCL. http://www.magentocommerce.com/magento-connect/turpentine-varnish-cache.html If on Enterprise, disable FPC; on CE, disable Lesti_FPC. https://github.com/nexcess/magento-turpentine/wiki/Installation Typically Varnish will run on each render box and proxy to the local nginx. Issues will typically show up as over-caching of blocks or other bits of the front-end page. SSL is NOT supported by Varnish; terminate it at your load balancer.
  81. 81. NGINX is a great reverse proxy and load balancer http://nginx.com/resources/admin-guide/reverse-proxy/ Can be used to effectively buffer and split requests to different cluster servers
  82. 82. Requires several layers of applications. Load balancer: balances Redis traffic to the twemproxy servers. Twemproxy: proxy server, typically set up as a cluster of 2 servers (can be mixed onto other servers). Nutcracker agent: connects twemproxy to Sentinel to keep track of the server shards. Sentinel: monitors the Redis instances and maintains availability. Redis: dozens of Redis servers, sharded, maintained and fenced by Sentinel and nutcracker. VERY complex.
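  For reference, a twemproxy (nutcracker) pool for a sharded Redis cache might look like this; the pool name, timeouts and server addresses are illustrative:

  ```yaml
  # nutcracker.yml -- one pool sharding Magento's Redis cache across two nodes
  magento_cache:
    listen: 127.0.0.1:22121
    hash: fnv1a_64
    distribution: ketama        # consistent hashing across shards
    redis: true
    auto_eject_hosts: true
    server_retry_timeout: 30000
    servers:
      - 192.168.0.21:6379:1
      - 192.168.0.22:6379:1
  ```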
  83. 83. Percona XtraDB to cluster MySQL. Percona XtraBackup for duplicating and rebuilding nodes. Percona Toolkit to help debug any issues you're running into. Write servers are difficult to scale. Scale out your read servers as needed, but MySQL reads are rarely the bottleneck. Typically a slave server is used for backup and hot swap, NOT clustering. A couple of quick tips: not all tables in Magento are InnoDB, and converting the MyISAM and Memory tables is OK. You will need to be able to kill read servers and refresh them. Use your master server as a read server in the load balancer pool; when you kill all your read servers, it can fall back to master.
  84. 84. Insert puzzle building analogy joke here: http://www.wikihow.com/Assemble-Jigsaw-Puzzles Each hosting environment has its own quirks; add on top of that the business logic requirements, and you will almost always have a unique infrastructure for every client. Build small pieces and work them into the larger picture; you can get a lot of performance with a few minor changes. Test everything you do, keep detailed notes on the configurations, and compare against the previous tests.
  85. 85. Sandbox troubleshooting: Use of the primary network device is very annoying; you can still use the secondary IPs provided. Puppet provisioning has some issues and may fail, requiring some handling. Standup may fail during puppet, due to a non-zero response. The puppet cert can fail, and this is a pain: on the puppet master, remove the cert (puppet cert clean boxname); remove /var/lib/puppet/ssl/ on the client box; remove /home/vagrant/.vagrantlocks/puppetagent on the client; rerun vagrant provision boxname from the host OS. If puppet provisioning fails, rerun provisioning and keep an eye on what is causing it to fail.