check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
be aware of swapping, especially on Linux (tune "swappiness"); bypass the OS file cache for InnoDB data files with innodb_flush_method=O_DIRECT if possible (support is OS-specific)
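A minimal my.cnf sketch of the above (assuming a Linux box; whether O_DIRECT is available depends on the OS and filesystem):

```ini
[mysqld]
# skip the OS file cache for InnoDB data files, avoiding double-buffering
innodb_flush_method = O_DIRECT

# at the OS level, discourage swapping of the mysqld process:
#   sysctl -w vm.swappiness=0
```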
defragment tables, rebuild indexes, do table maintenance
If you use innodb_flush_log_at_trx_commit=1, use a RAID controller with a battery-backed write cache
more RAM is good; so is faster disk speed
use 64-bit architectures
Know when to split a complex query and join smaller ones
Debugging sucks, testing rocks!
Delete small amounts at a time if you can
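A sketch of a batched delete (table and column names are made up); small chunks keep lock time and log activity bounded, and you repeat the statement until ROW_COUNT() reports 0 rows affected:

```sql
-- purge old rows 1000 at a time instead of in one huge transaction
DELETE FROM session_log
 WHERE created < NOW() - INTERVAL 90 DAY
 LIMIT 1000;
```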
Archive old data -- don't be a pack-rat! 2 common engines for this are ARCHIVE tables and MERGE tables
use INET_ATON and INET_NTOA for IP addresses, not char or varchar
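For example (hypothetical table), an IPv4 address fits in a 4-byte INT UNSIGNED instead of a VARCHAR(15), and numeric comparisons make range scans cheap:

```sql
CREATE TABLE hits (
  ip INT UNSIGNED NOT NULL
);

INSERT INTO hits (ip) VALUES (INET_ATON('192.168.1.10'));

SELECT INET_NTOA(ip) FROM hits;  -- '192.168.1.10'
```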
make it a habit to REVERSE() email addresses, so you can easily search domains
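The point is that a leading-wildcard LIKE ('%@example.com') can't use an index, but storing the reversed address turns a domain search into an indexable prefix match. A sketch with a hypothetical users table:

```sql
CREATE TABLE users (
  email          VARCHAR(255) NOT NULL,
  email_reversed VARCHAR(255) NOT NULL,  -- store REVERSE(email) here
  KEY (email_reversed)
);

-- find everyone at example.com using the index on email_reversed
SELECT email FROM users
 WHERE email_reversed LIKE CONCAT(REVERSE('@example.com'), '%');
```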
increase myisam_sort_buffer_size to optimize large inserts that rebuild indexes (note: this is a per-connection variable)
look up memory tuning parameter for on-insert caching
increase tmp_table_size in a data-warehousing environment (default is 32MB) so implicit temporary tables stay in memory instead of going to disk (the in-memory size is also capped by max_heap_table_size, default 16MB)
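A per-session sketch for a heavy reporting connection (the value is illustrative; both variables must be raised, since the smaller of the two wins):

```sql
SET SESSION tmp_table_size      = 256 * 1024 * 1024;
SET SESSION max_heap_table_size = 256 * 1024 * 1024;
```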
Normalize first, and denormalize where appropriate.
Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database
In 5.0.3 and up, a BIT NOT NULL column takes 1 bit; in previous versions BIT was a synonym for TINYINT and took 1 byte (BOOL is still TINYINT(1), 1 byte).
A nullable column can take more room to store than a NOT NULL column
Choose appropriate character sets & collations: a 2-byte charset like ucs2/UTF-16 stores every character in 2 bytes whether it needs them or not, and latin1 is faster than UTF-8.
make similar queries consistent so cache is used
Have good SQL query standards
Don't use deprecated features
Use Triggers wisely
Run with a strict SQL_MODE (STRICT_TRANS_TABLES or STRICT_ALL_TABLES) so problems that would be silent warnings surface as errors
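A quick sketch of turning this on for the server (can also go in my.cnf):

```sql
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';

-- now an out-of-range insert fails instead of silently truncating, e.g.
--   INSERT INTO t (tiny_col) VALUES (999999);
```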
Before 5.0, rewriting an OR across multiple indexed columns as a UNION may speed things up (especially with LIMIT); in 5.0 and up the index_merge optimizer should pick this up.
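A sketch of the rewrite (hypothetical table t with separate indexes on a and b); each UNION branch can use its own index, where the OR form may force a full scan on old versions:

```sql
-- instead of:
SELECT id FROM t WHERE a = 1 OR b = 2 LIMIT 10;

-- try:
(SELECT id FROM t WHERE a = 1 LIMIT 10)
UNION
(SELECT id FROM t WHERE b = 2 LIMIT 10)
LIMIT 10;
```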
/tmp dir on battery-backed write cache
consider battery-backed RAM for innodb logfiles
use min_rows and max_rows to specify approximate data size so space can be pre-allocated and reference points can be calculated.
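A sketch of these table options on a made-up MyISAM table (numbers are illustrative; MAX_ROWS together with AVG_ROW_LENGTH also affects the maximum data file size):

```sql
CREATE TABLE clicks (
  id BIGINT UNSIGNED NOT NULL,
  ts DATETIME NOT NULL
) ENGINE=MyISAM
  MIN_ROWS = 1000000
  MAX_ROWS = 200000000
  AVG_ROW_LENGTH = 20;
```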
as your data grows, indexing may change (cardinality and selectivity change). Structuring may want to change. Make your schema as modular as your code. Make your code able to scale. Plan and embrace change, and get developers to do the same.
pare down cron scripts
create a test environment
try out a few schemas and storage engines in your test environment before picking one.
Use HASH indexing for indexing across columns with similar data prefixes
Use the PACK_KEYS=1 table option on MyISAM tables to pack numeric key values, not just strings
Don't run COUNT(*) on InnoDB tables for every search; run it occasionally and/or keep summary tables, or if you need the total number of rows for a LIMITed query, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS()
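A sketch of the FOUND_ROWS() pattern (hypothetical articles table); the second SELECT returns the match count without WHERE or LIMIT being re-evaluated:

```sql
SELECT SQL_CALC_FOUND_ROWS id, title
  FROM articles
 WHERE topic = 'mysql'
 LIMIT 10;

-- total rows the query would have matched without the LIMIT:
SELECT FOUND_ROWS();
```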
use --safe-updates in the mysql client (blocks UPDATE/DELETE statements that lack a key-based WHERE or a LIMIT)
Redundant data is redundant
Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid a SELECT-then-INSERT round trip
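A sketch with a made-up counter table (assumes page is the primary key, so the duplicate-key path fires on repeat hits); one statement, no race between checking and writing:

```sql
INSERT INTO page_views (page, views)
VALUES ('/index.html', 1)
ON DUPLICATE KEY UPDATE views = views + 1;
```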
use groupwise maximum instead of subqueries
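The classic groupwise-maximum self-join, sketched on a hypothetical prices table; the derived table finds each item's latest timestamp once, instead of running a correlated subquery per row:

```sql
SELECT p.item_id, p.price, p.priced_at
  FROM prices p
  JOIN (SELECT item_id, MAX(priced_at) AS priced_at
          FROM prices
         GROUP BY item_id) latest
    ON latest.item_id   = p.item_id
   AND latest.priced_at = p.priced_at;
```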
be able to change your schema without ruining functionality of your code
source control schema and config files
for LVM innodb backups, restore to a different instance of MySQL so Innodb can roll forward
use multi-statement queries (e.g. the mysqli multi_query API) if appropriate to reduce client/server round-trips