InnoDB
Designing Applications and Configuring for Best Performance
Heikki Tuuri, CEO, Innobase Oy
Vice President, Development, Oracle Corporation
Today’s Topics
• Application Design and Performance
• Basic InnoDB Configuration
• Maximizing CPU Efficiency
• Maximizing Disk I/O Efficiency
• Maximizing Transaction Throughput
• Some Operating System Tips
• Speeding Bulk Data Operations
Application Design and Performance
• Create appropriate indexes, so that SELECTs, UPDATEs and DELETEs do not require table scans to find the rows
• index the most-often referenced columns in your tables
• try different column orderings in composite (multi-column) keys
• Note: too many indexes can slow INSERT, UPDATE, DELETE
• Use EXPLAIN SELECT ... to check that query plans look sensible and there are good indexes for them
• Consider “fat indexes” (a secondary index including extra columns that queries need), to avoid clustered index lookup
• Note: InnoDB's secondary index records always contain the clustered index columns (usually the table’s PRIMARY KEY); consider storage implications of long keys!
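The fat-index idea above can be sketched as follows; the table and column names are hypothetical, and EXPLAIN is used as the slide recommends to verify that the plan uses the index:

```sql
-- hypothetical orders table; the PRIMARY KEY becomes the clustered index
CREATE TABLE orders (
  order_id INT NOT NULL,
  cust_id  INT NOT NULL,
  created  DATE NOT NULL,
  total    DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (order_id)
) ENGINE=InnoDB;

-- "fat" secondary index: cust_id for the lookup, plus created and total
-- so the query below can be answered from the index alone, with no
-- clustered index access
CREATE INDEX i_cust_fat ON orders (cust_id, created, total);

-- check that the plan looks sensible
EXPLAIN SELECT created, total FROM orders WHERE cust_id = 42;
```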
Proper application design is the most important part of performance tuning
Application Design and Performance
• Use SQL statements that process sets of rows at a time, rather than one row at a time
• Enforce referential integrity within the server, not at the application level
• Use transactions to group operations
• Avoid excessive commits (e.g., avoid autocommit)
• But don't make transactions too large, or too long-lasting
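A minimal sketch of the grouping advice above (the table name is hypothetical):

```sql
SET AUTOCOMMIT=0;                      -- one commit per group, not per statement
INSERT INTO t VALUES (1, 'a');
UPDATE t SET col = 'b' WHERE id = 2;
DELETE FROM t WHERE id = 3;
COMMIT;                                -- a single log flush covers all three changes
```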
Basic InnoDB Configuration
[mysqld]
# You can write your other MySQL server options here
# ...
# Data files must be able to hold your data and indexes.
# Make sure that you have enough free disk space.
innodb_data_file_path = ibdata1:10M:autoextend
#
# Set buffer pool size to 50-80% of your computer's memory
innodb_buffer_pool_size=70M
innodb_additional_mem_pool_size=10M
#
# Set the log file size to about 25% of the buffer pool size
innodb_log_file_size=20M
innodb_log_buffer_size=8M
#
innodb_flush_log_at_trx_commit=1
Analyzing InnoDB Performance Problems
• Print several consecutive SHOW INNODB STATUS\G outputs during high workload
• Print SHOW STATUS
• Enable the MySQL 'slow query log'
• In Unix/Linux, use 'top', vmstat, iostat
• In Windows use the Task Manager
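To enable the slow query log mentioned above, a my.cnf sketch using the MySQL 5.0-era option names; the log path and threshold are placeholders:

```
[mysqld]
log-slow-queries = /var/log/mysql-slow.log
long_query_time  = 2    # seconds; queries slower than this are logged
```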
CPU or Disk-Bound Workload?
• Database throughput can be determined by the processor speed or by the disk access time
• Both CPU and disk use need to be optimized
• first tune the more limited resource, the one limiting throughput
• Unix/Linux shell command ‘top’ and Windows Task Manager show Total CPU usage
CPU usage < 70 % suggests a disk-bound workload
Save CPU Time
• Use multi-row inserts: INSERT INTO t VALUES (1, 2), (2, 2), (3, 2);
• Use stored procedures
• Instead of many small SELECT queries, try to use one bigger query
• Use the MySQL query cache:
query_cache_type=ON
query_cache_size=100M
Save CPU time by minimizing communications between the client and the server
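Replacing many small SELECTs with one bigger query, as suggested above, can be sketched like this (table and ids are hypothetical):

```sql
-- instead of N round trips:
--   SELECT * FROM t WHERE id = 1;
--   SELECT * FROM t WHERE id = 2;
--   ...
-- fetch the whole set in one statement:
SELECT * FROM t WHERE id IN (1, 2, 3, 4, 5);
```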
Reduce Disk Access Bottleneck
• Keep your hot data set small; delete obsolete data
• Run OPTIMIZE TABLE ...
• Rebuilds the table to eliminate fragmentation -> smaller table!
• Requires taking the database offline
• Create 'fat' secondary indexes with extra columns
• Use a large buffer pool to cache more data
• set innodb_buffer_pool_size up to 80% of computer's RAM
• Use sufficiently large log files
• set innodb_log_file_size to approx 25% of the buffer pool (assumes 2 log files with combined size not more than 4 GB)
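Applying the sizing rules above to a hypothetical dedicated 64-bit server with 8 GB of RAM, a my.cnf sketch might look like:

```
[mysqld]
# ~75% of 8 GB RAM for the buffer pool (dedicated server assumed)
innodb_buffer_pool_size = 6G
# ~25% of the buffer pool per log file; with the default 2 log files
# that is 3 GB combined, under the 4 GB cap
innodb_log_file_size = 1500M
```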
Make Your InnoDB Tables Smaller
• Use new (default) COMPACT InnoDB table format in MySQL 5.0
• typically saves ~20% of space vs. old REDUNDANT table format
• Use a short PRIMARY KEY, or surrogate key
• InnoDB stores the primary key in every secondary index record
• InnoDB creates a 6-byte internal ROW_ID for tables with no primary key or UNIQUE, NOT NULL index
• Use VARCHAR instead of CHAR(n) where possible
• InnoDB always reserves n bytes for fixed-size CHAR(n)
• Note: UPDATEs to VARCHAR columns can cause fragmentation
Smaller tables use fewer disk blocks, thus requiring less i/o
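A hypothetical table definition illustrating the points above; the short surrogate key keeps every secondary index record small:

```sql
CREATE TABLE customers (
  cust_id INT NOT NULL AUTO_INCREMENT,  -- short surrogate PRIMARY KEY
  email   VARCHAR(255) NOT NULL,        -- VARCHAR, not CHAR(255)
  name    VARCHAR(100) NOT NULL,
  PRIMARY KEY (cust_id),
  UNIQUE KEY i_email (email)            -- index record holds email + 4-byte cust_id
) ENGINE=InnoDB;
```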
Reduce Transaction Commit Disk I/O
• InnoDB must flush its log to a durable medium at a COMMIT
• Log flush is not normally a bottleneck with a battery-backed disk controller and write-back cache enabled
• But … log flush to a physical disk can be a very serious bottleneck
• Log flush typically takes 4 milliseconds on a fast disk
• Minimize log flushes by wrapping several individual INSERTs, UPDATEs, DELETEs in one transaction
• OR, allow transactions to commit without flush
• Set innodb_flush_log_at_trx_commit=2 in my.cnf
• WARNING: can lose 1 second’s worth of transactions that occurred prior to an operating system crash
Spread Disk I/O to Several Disks
• Two InnoDB modes for storing tables in files:
• one file per table (innodb_file_per_table in my.cnf)
• multiple tables per ibdata file
• With one InnoDB file per table on Unix/Linux
• Symlink MySQL database directories to separate disk drives
• Also can symlink .ibd (data) files on different drives
• Note: ALTER TABLE and CREATE INDEX relocate the .ibd file because the table is recreated
• With InnoDB data in multiple ibdata files, create the files on different disk drives
• InnoDB fills ibdata files linearly, starting from the first listed in my.cnf
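A my.cnf sketch of the multiple-ibdata-file layout above; the drive paths and sizes are hypothetical, and innodb_data_home_dir is set empty so that absolute paths can be used:

```
[mysqld]
innodb_data_home_dir  =
# two ibdata files on two physical drives; InnoDB fills them in order
innodb_data_file_path = /disk1/ibdata1:2000M;/disk2/ibdata2:2000M:autoextend
```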
• Though maybe the easiest solution is to buy a RAID disk array!
Store InnoDB data onmultiple physical disk drives
Avoid O/S Double Buffering
• InnoDB itself buffers database pages. Using the operating system file cache is a waste of memory and CPU time
• the o/s can page out parts of the InnoDB buffer pool
• use ~80% of physical RAM for InnoDB, and turn off o/s file caching
• Linux: set innodb_flush_method=O_DIRECT in my.cnf to advise o/s not to buffer InnoDB files
• Solaris: use a direct i/o UFS filesystem
• use mount option forcedirectio; see mount_ufs(1M)
• with the Veritas file system VxFS, use the mount option convosync=direct
• Windows: InnoDB always uses unbuffered async i/o. No action required
Devote physical RAM to InnoDB, not the o/s file cache!
Avoid Slow O/S File Flushing
• On Linux/Unix, InnoDB uses fsync() to flush data to disk
• fsync() is extremely slow on some old Linux and Unix versions
• NetBSD was mentioned by a user as one such platform
• Setting innodb_flush_method=O_DSYNC in my.cnf may help
Avoid “Thread Thrashing” in High Concurrency Environments
• With too many threads (on Linux), performance can degrade badly
• SHOW INNODB STATUS\G shows many threads waiting for a semaphore
• queries pile up, and little work gets done
• throughput can drop to 1/1000 of normal
• Workaround: set innodb_thread_concurrency to 8, or even to 1 in my.cnf
• We are working on removing this problem
• MySQL-5.1.9 behaves somewhat better in this respect
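The workaround above as a my.cnf fragment:

```
[mysqld]
# cap the number of threads executing inside InnoDB at once;
# try 8 first, or even 1 if thrashing persists
innodb_thread_concurrency = 8
```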
Limit the number of threads executing inside InnoDB at once
Avoid Transaction Deadlocks
• Use short transactions, which are less prone to collide and deadlock
• Appropriate indexes reduce table scans and reduce the number of locks taken
• Access tables and rows in the same predefined order throughout your application
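The fixed-order rule above can be sketched with a hypothetical funds transfer: if every transaction locks account rows in ascending id order, two concurrent transfers cannot deadlock on those rows:

```sql
SET AUTOCOMMIT=0;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- lower id first
SELECT balance FROM accounts WHERE id = 7 FOR UPDATE;  -- higher id second
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 7;
COMMIT;
```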
Design and configure to reduce the likelihood of deadlocks, which lower throughput if they occur frequently
Avoid Transaction Deadlocks (2)
• With MySQL V5.0 or before, use the my.cnf option innodb_locks_unsafe_for_binlog to minimize next-key locking
• This is safe only if you do not use binlogging for replication,
• OR, if your application is not prone to 'phantom row' problems
• Beginning with MySQL-5.1.xx, if you use row-based replication, you can safely reduce next-key locking by …
• setting innodb_locks_unsafe_for_binlog
• using TRANSACTION ISOLATION LEVEL READ COMMITTED
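A my.cnf sketch of the MySQL 5.1 setup above; it assumes row-based replication is in use:

```
[mysqld]
binlog_format = ROW
innodb_locks_unsafe_for_binlog = 1
transaction-isolation = READ-COMMITTED
```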
Avoid Transaction Deadlocks (3)
• Last resort: use table locks to serialize transactions
• table locks eliminate deadlocks, but reduce throughput
• You have to SET AUTOCOMMIT=0 to make them work properly
SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
<INSERTs, UPDATEs, DELETEs> …;
UNLOCK TABLES;
COMMIT;
Speed up Table Imports to InnoDB
• Execute many INSERTs per transaction, not one transaction per row!
• Before loading, sort the rows in primary key order to reduce random disk seeks
• For tables with foreign key constraints, turn off foreign key checks with SET FOREIGN_KEY_CHECKS=0
• Do this only if you know your data conforms to the constraints
• Use only for the duration of the import session
• For tables with unique secondary indexes, turn off uniqueness checking with SET UNIQUE_CHECKS=0
• Do this only if the data is unique in required columns
• Use only for the duration of the import session
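Putting the import advice above together in one sketch of a session; the file path and table name are hypothetical, and the data is assumed pre-sorted by primary key and pre-validated:

```sql
SET AUTOCOMMIT=0;
SET UNIQUE_CHECKS=0;        -- data known to be unique in required columns
SET FOREIGN_KEY_CHECKS=0;   -- data known to satisfy the constraints
LOAD DATA INFILE '/tmp/sorted_rows.txt' INTO TABLE t;
COMMIT;
SET UNIQUE_CHECKS=1;        -- restore checks for the rest of the session
SET FOREIGN_KEY_CHECKS=1;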
Pre-process your data and know whether it is valid before loading
Avoid Big Deletes or Rollbacks
• Use TRUNCATE TABLE to empty a table, rather than DELETE all rows
• Beware of big rollbacks … use smaller transactions!
• InnoDB speeds INSERTs with an insert buffer, but …
• Rollback occurs row-by-row – can be 30 times slower than inserts
• If you end up with a runaway rollback, drop the table if you can afford losing the data
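Two sketches of the advice above; the table name and cutoff date are hypothetical:

```sql
-- empty a table instantly instead of deleting every row:
TRUNCATE TABLE old_logs;

-- when only part of the data goes, delete in modest batches from the
-- client, repeating until no rows remain, so each transaction (and any
-- rollback) stays small:
DELETE FROM old_logs WHERE created < '2005-01-01' LIMIT 10000;
```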
Summary: InnoDB Performance Tuning
• Design SQL and indexes with care
• Follow buffer pool, log & data file size guidelines
• Use performance problem diagnostic tools
• Minimize client-server, MySQL-InnoDB traffic
• Reduce i/o w/ small data, large buffers, good indexes
• Spread i/o to multiple physical disk drives
• Avoid o/s double buffering, slow fsync(), “thread thrashing”
• Reduce or eliminate transaction deadlocks
• Pre-process your data and know its validity
• Avoid big deletes or rollbacks
Q U E S T I O N S  &  A N S W E R S