#table_open_cache change
straven-loft · 6 years ago
Increasing the table_open_cache value
At some point in the life of a site or portal running on the Bitrix platform, it becomes necessary to increase the value of the table_open_cache parameter. The system signals this on the "DB Server" page in the "Settings" section of the administrative interface. In this article I describe the steps required to update this value. The configuration is done on the Bitrix-recommended environment: OS
globalmediacampaign · 5 years ago
More on Checkpoints in InnoDB MySQL 8
Recently I posted about checkpointing in MySQL, where MySQL showed interesting "wave" behavior. Soon after, Dimitri posted a solution for how to fix the "waves," and I would like to dig a little deeper into the proposed suggestions, as there is some material to process. This post will be very heavy on InnoDB configuration, so let's start with the basic configuration for MySQL, but first, a note on the environment. I use MySQL version 8.0.21 on the hardware described here. As for the storage, I am not using some "old dusty SSD", but a production-available enterprise-grade Intel SATA SSD D3-S4510. This SSD can handle a throughput of 468MiB/sec of random writes, or 30000 IOPS of random writes of 16KiB blocks. The initial configuration for my test was:

[mysqld]
datadir= /data/mysql8-8.0.21
user=mysql
bind_address = 0.0.0.0
socket=/tmp/mysql.sock
log-error=error.log
ssl=0
performance_schema=OFF
skip_log_bin
server_id = 7

# general
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=4000
join_buffer_size=256K
sort_buffer_size=256K

# files
innodb_file_per_table
innodb_log_file_size=10G
innodb_log_files_in_group=2
innodb_open_files=4000

# buffers
innodb_buffer_pool_size= 140G
innodb_buffer_pool_instances=8
innodb_page_cleaners=8
innodb_purge_threads=4
innodb_lru_scan_depth=512
innodb_log_buffer_size=64M

default_storage_engine=InnoDB
innodb_flush_log_at_trx_commit = 1
innodb_doublewrite= 1
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors=0
#innodb_monitor_enable=all
max_prepared_stmt_count=1000000
innodb_adaptive_hash_index=1
innodb_monitor_enable='%'
innodb-buffer-pool-load-at-startup=OFF
innodb_buffer_pool_dump_at_shutdown=OFF

There are a lot of parameters, so let's highlight those most relevant for this test:

- innodb_buffer_pool_size=140G: the buffer pool is large enough to fit all the data, which is about 100GB in size.
- innodb_adaptive_hash_index=1: the adaptive hash index is enabled (as in the default InnoDB config).
- innodb_buffer_pool_instances=8: this is the default, but I will increase it later, following my previous post.
- innodb_log_file_size=10G and innodb_log_files_in_group=2: these define a limit of 20GB for our redo logs, and this is important, as our workload will be redo-log bound, as we will see from the results.
- innodb_io_capacity=2000 and innodb_io_capacity_max=4000: you may ask why I use 2000 and 4000 while the storage can handle 30000 IOPS. This is a valid point, and as we will see, these values are not high enough for this workload, but that does not mean we should raise them all the way to 30000, as the results will also show.

The MySQL Manual says the following about innodb_io_capacity: "The innodb_io_capacity variable defines the overall I/O capacity available to InnoDB. It should be set to approximately the number of I/O operations that the system can perform per second (IOPS). When innodb_io_capacity is set, InnoDB estimates the I/O bandwidth available for background tasks based on the set value."

From this, you may get the impression that if you set innodb_io_capacity to the I/O bandwidth of your storage, you should be fine. However, this passage does not say what counts as an I/O operation. For example, if your storage can sustain 500MB/sec, then with 4KB I/O operations that is 125000 IOPS, while with 16KB I/O it is about 31000 IOPS.

The manual leaves this up to your imagination, but since the typical InnoDB page size is 16KB, let's assume we do 16KB-block I/O. However, later on the same page we can read: "Ideally, keep the setting as low as practical, but not so low that background activities fall behind. If the value is too high, data is removed from the buffer pool and change buffer too quickly for caching to provide a significant benefit. 
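The block-size arithmetic above is easy to sanity-check. A minimal shell sketch (the 500MB/sec figure is the hypothetical example from the text, not the test SSD's spec; decimal kilobytes are assumed):

```shell
#!/bin/sh
# IOPS implied by a given throughput at different block sizes
throughput_kb=500000   # 500 MB/sec expressed as KB/sec

for block_kb in 4 16; do
  # prints "4KB blocks: 125000 IOPS" and "16KB blocks: 31250 IOPS"
  echo "${block_kb}KB blocks: $((throughput_kb / block_kb)) IOPS"
done
```

The same throughput number thus maps to wildly different IOPS figures depending on the block size you assume, which is exactly the ambiguity discussed above.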
For busy systems capable of higher I/O rates, you can set a higher value to help the server handle the background maintenance work associated with a high rate of row changes" and "Consider write workload when tuning innodb_io_capacity. Systems with large write workloads are likely to benefit from a higher setting. A lower setting may be sufficient for systems with a small write workload." I do not see the manual providing much guidance on what value to use, so we will test it.

Initial results

If we benchmark with the initial parameters, we can see the "wave" pattern. As for why this happens, let's check the Percona Monitoring and Management "InnoDB Checkpoint Age" chart. (InnoDB Flushing by Type in PMM does not show sync flushing yet, so I had to modify the chart slightly to show "sync flushing" as an orange line.) We immediately see that Uncheckpointed Bytes exceeds the Max Checkpoint Age of 16.61GiB, which is defined by the 20GiB of InnoDB log files. 16.61GiB is less than 20GiB because InnoDB reserves some cushion for cases exactly like this: even if we exceed 16.61GiB, InnoDB still has an opportunity to flush data. We also see that before Uncheckpointed Bytes exceeds Max Checkpoint Age, InnoDB flushes pages at a rate of 4000 IOPS, just as defined by innodb_io_capacity_max. We should try to avoid Uncheckpointed Bytes exceeding Max Checkpoint Age, because when that happens, InnoDB goes into "emergency" flushing mode, and this is in fact what causes the waves we see. I should have detected this in my previous post; mea culpa. So the first conclusion we can make: if InnoDB does not flush fast enough, what if we increase innodb_io_capacity_max? Sure, let's see. For simplification, in the next experiments I will use innodb_io_capacity = innodb_io_capacity_max, unless specified otherwise. 
Next run: innodb_io_capacity = innodb_io_capacity_max = 7000

Not much improvement, and this is also confirmed by the InnoDB Checkpoint Age chart. InnoDB tries to flush more pages per second, up to 5600 pages/sec, but it is not enough to avoid exceeding Max Checkpoint Age. Why is this the case? The answer is the doublewrite buffer. Even though MySQL improved the doublewrite buffer in MySQL 8.0.20, it does not perform well enough with the proposed defaults. Well, at least the problem has been addressed at all, because previously Oracle ran benchmarks with doublewrite disabled, just to hide and totally ignore the issue with it. For an example, check this. But let's get back to our 8.0.21 and the fixed doublewrite. Dimitri mentions: "the main config options for DBLWR in MySQL 8.0 are: innodb_doublewrite_files = N, innodb_doublewrite_pages = M". Let's check the manual again: "The innodb_doublewrite_files variable is intended for advanced performance tuning. The default setting should be suitable for most users." And on innodb_doublewrite_pages: "The innodb_doublewrite_pages variable (introduced in MySQL 8.0.20) controls the maximum number of doublewrite pages per thread. If no value is specified, innodb_doublewrite_pages is set to the innodb_write_io_threads value. This variable is intended for advanced performance tuning. The default value should be suitable for most users." Was it wrong to assume that innodb_doublewrite_files and innodb_doublewrite_pages default to values suitable for our use case? Let's try the values Dimitri recommended looking into: I will use innodb_doublewrite_files=2 and innodb_doublewrite_pages=128.

Results with innodb_doublewrite_files=2 and innodb_doublewrite_pages=128

The problem with the waves is fixed! And on the InnoDB Checkpoint Age chart, we are now able to keep Uncheckpointed Bytes under Max Checkpoint Age, and this is what fixed the "waves" pattern. 
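For reference, the doublewrite settings used from this point on would look like this as a my.cnf fragment (a sketch; everything else stays as in the configuration listed at the top of the post):

```ini
# parallel doublewrite tuning (available since MySQL 8.0.20)
innodb_doublewrite_files=2
innodb_doublewrite_pages=128
```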
We can say that the parallel doublewrite is a welcome improvement, but the fact that one has to change innodb_doublewrite_pages in order to get improved performance is, in my opinion, a design flaw. There are still a lot of variations at 1-sec resolution, and small drops. Before we get to them, let's take a look at another suggestion: use --innodb_adaptive_hash_index=0 (that is, disable the Adaptive Hash Index). I will use AHI=0 on the charts to mark this setting.

Results with --innodb_adaptive_hash_index=0

To see the real improvement with --innodb_adaptive_hash_index=0, let's compare the bar charts, or in numeric form:

settings                                  Avg tps, last 2000 sec
io_cap_max=7000,doublewrite=opt           7578.69
io_cap_max=7000,doublewrite=opt,AHI=0     7996.33

So --innodb_adaptive_hash_index=0 really brings some improvement, about 5.5%, so I will use --innodb_adaptive_hash_index=0 for further experiments. Let's see if an increased innodb_buffer_pool_instances=32 will help smooth out the periodic variance.

Results with innodb_buffer_pool_instances=32

Indeed, innodb_buffer_pool_instances=32 gives us less variation, keeping overall throughput about the same: 7936.28 tps for this case. Now let's review the parameter innodb_change_buffering=none, which Dimitri also suggests.

Results with innodb_change_buffering=none

There is NO practical difference if we disable the change buffer. And if we take a look at the PMM change buffer chart, we can see there is NO change buffer activity outside of the initial 20 minutes. I am not sure why Dimitri suggested disabling it. In fact, the change buffer can be quite useful, and I will show this in my benchmarks for different workloads. Now let's take a look at the suggested settings with innodb_io_capacity = innodb_io_capacity_max = 8000. That INCREASES innodb_io_capacity_max; let's compare with the results for innodb_io_capacity_max = 7000. 
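The cumulative changes relative to the initial configuration at this point can be sketched as a my.cnf fragment (an assumption on my part that everything else is unchanged):

```ini
innodb_io_capacity=7000
innodb_io_capacity_max=7000
innodb_doublewrite_files=2
innodb_doublewrite_pages=128
innodb_adaptive_hash_index=0
innodb_buffer_pool_instances=32
#innodb_change_buffering=none   # tested: no effect on this workload
```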
In tabular form:

settings                                        Avg tps, last 2000 sec
io_cap_max=7000,doublewrite=opt,AHI=0,BPI=32    7936.28
io_cap_max=8000,doublewrite=opt,AHI=0,BPI=32    7693.08

With innodb_io_capacity_max=8000 the throughput is actually LESS than with innodb_io_capacity_max=7000. Can you guess why? Let's compare the InnoDB Checkpoint Age charts, one for innodb_io_capacity_max=8000 and one for innodb_io_capacity_max=7000. This is like a child's game: find the difference. The difference is that with innodb_io_capacity_max=7000, Uncheckpointed Bytes is 13.66 GiB, and with innodb_io_capacity_max=8000, Uncheckpointed Bytes is 12.51 GiB. What does this mean? It means that with innodb_io_capacity_max=7000, InnoDB HAS to flush FEWER pages and still keeps within Max Checkpoint Age. In fact, if we push even further and use innodb_io_capacity_max=innodb_io_capacity=6500, the InnoDB Checkpoint Age chart shows Uncheckpointed Bytes at 15.47 GiB. Does it improve throughput? Absolutely!

settings                                        Avg tps, last 2000 sec
io_cap_max=6500,doublewrite=opt,AHI=0,BPI=32    8233.628
io_cap_max=7000,doublewrite=opt,AHI=0,BPI=32    7936.283
io_cap_max=8000,doublewrite=opt,AHI=0,BPI=32    7693.084

The difference between innodb_io_capacity_max=6500 and innodb_io_capacity_max=8000 is 7%. It now becomes clear what the manual means where it says: "Ideally, keep the setting as low as practical, but not so low that background activities fall behind." We really need to set innodb_io_capacity_max just high enough that Uncheckpointed Bytes stays under Max Checkpoint Age, but not much higher, otherwise InnoDB will do more work than needed and it will affect throughput. In my opinion, it is a serious design flaw in InnoDB Adaptive Flushing that you actually need to wiggle innodb_io_capacity_max to achieve appropriate results. 
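Outside of PMM, checkpoint age can also be watched directly on the server. One way, assuming innodb_monitor_enable='%' as in the configuration above, is to query the INNODB_METRICS table (the metric names here are my assumption of the relevant MySQL 8.0 counters; the same numbers appear in the LOG section of SHOW ENGINE INNODB STATUS as "Log sequence number" and "Last checkpoint at"):

```sql
-- checkpoint age in bytes = current LSN minus last-checkpoint LSN
SELECT NAME, COUNT
  FROM INFORMATION_SCHEMA.INNODB_METRICS
 WHERE NAME IN ('log_lsn_current', 'log_lsn_last_checkpoint');
```

Sampling this periodically gives the same "Uncheckpointed Bytes" curve the PMM chart plots.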
Inverse relationship between innodb_io_capacity_max and innodb_log_file_size

To show an even more complicated relation between innodb_io_capacity_max and innodb_log_file_size, let's consider the following experiment. We will increase innodb_log_file_size from 10GB to 20GB, effectively doubling our redo-log capacity. Now let's check InnoDB Checkpoint Age with innodb_io_capacity_max=7000: we can see there is a lot of space in the InnoDB logs that InnoDB does not use. There are only 22.58GiB of Uncheckpointed Bytes, while 33.24 GiB are available. So what happens if we decrease innodb_io_capacity_max to 4500? On the InnoDB Checkpoint Age chart with innodb_io_capacity_max=4500, we can push Uncheckpointed Bytes to 29.80 GiB, and this has a positive effect on throughput. Let's compare:

settings                                                        Avg tps, last 2000 sec
io_cap_max=4500,log_size=40GB,doublewrite=opt,AHI=0,BPI=32      9865.308
io_cap_max=7000,log_size=40GB,doublewrite=opt,AHI=0,BPI=32      9374.121

So by decreasing innodb_io_capacity_max from 7000 to 4500 we gain 5.2% in throughput. Note that we cannot keep decreasing innodb_io_capacity_max, because then Uncheckpointed Bytes risks exceeding Max Checkpoint Age, which leads to the negative effect of emergency flushing. So again: to improve throughput, we should be DECREASING innodb_io_capacity_max, but only down to a certain threshold. We should not set innodb_io_capacity_max to the 30000 that the SATA SSD can really provide. Again, for me, this is a major design flaw in the current InnoDB Adaptive Flushing. Note also that this was a static workload; if your workload changes during the day, it is practically impossible to come up with an optimal value. 
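The settings that differ in this 40GB-redo experiment can be sketched as a my.cnf fragment (again assuming the rest of the configuration is unchanged):

```ini
# doubled redo capacity: 2 files x 20GB = 40GB total
innodb_log_file_size=20G
innodb_log_files_in_group=2
# lowered from 7000: keeps Uncheckpointed Bytes high but under Max Checkpoint Age
innodb_io_capacity=4500
innodb_io_capacity_max=4500
```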
Conclusions

Trying to summarize all of the above, I want to highlight:

- To fix the "wave" pattern, we need to tune innodb_io_capacity_max and innodb_doublewrite_pages.
- InnoDB parallel doublewrite in MySQL 8.0.20 is definitely a positive improvement, but the default values seem poorly chosen, in contradiction with the manual. I wish Oracle/MySQL shipped features that work out of the box for most users.
- The InnoDB Adaptive Hash Index is not helping here, and you get better performance by disabling it. I have also observed in other workloads that the Adaptive Hash Index may be another broken subsystem, which Oracle declines to fix and simply disables in its benchmarks.
- The InnoDB Change Buffer has no effect on this workload, so whether you disable it makes no difference here. But I have seen a positive effect from the change buffer in other workloads, so I do not recommend blindly disabling it.
- Now about InnoDB Adaptive Flushing. In my opinion, it relies too much on manual tuning of innodb_io_capacity_max, which in fact has nothing to do with the real storage I/O capacity. Often you need to lower innodb_io_capacity_max to get better performance, but not make it too low, because at some point that hurts performance. The best way to monitor this is the InnoDB Checkpoint Age chart in PMM.
- I would encourage Oracle to fix the broken design of InnoDB Adaptive Flushing so that it detects I/O capacity automatically and does not flush aggressively, but keeps Uncheckpointed Bytes just under Max Checkpoint Age. Let's hope Oracle moves faster than it did with the doublewrite buffer, because history shows that to force Oracle to improve the InnoDB I/O subsystem, we need to do it first in Percona Server for MySQL, as we did with the parallel doublewrite buffer. For reference, parallel doublewrite was first implemented in Percona Server for MySQL 5.7.11-4, released March 15th, 2016. 
Oracle implemented its (broken by default) parallel doublewrite in MySQL 8.0.20, released on April 4th, 2020, four years after Percona Server. https://www.percona.com/blog/2020/08/27/more-on-checkpoints-in-innodb-mysql-8/
suzukiapple · 7 years ago
Fixing MySQL's "Access denied" error

If you install MySQL and try to log in right away, you get an error.

$ sudo apt-get install mysql-server
$ mysql -uroot -p
ERROR 1698 (28000): Access denied for user 'root'@'localhost'

I always forget how to fix this and what initial setup I did, so this is a memo.

My environment

Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-116-generic x86_64)
mysql Ver 14.14 Distrib 5.7.21, for Linux (x86_64) using EditLine wrapper
Changing the MySQL settings

Create a my.cnf. First, check the order in which my.cnf files are read; a settings file listed further to the left takes precedence.

$ mysql --help | grep my.cnf
order of preference, my.cnf, $MYSQL_TCP_PORT,
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf

I decided to put my settings file at ~/.my.cnf. The contents are based on the my-medium.cnf sample configuration.

~/.my.cnf
# Example MySQL config file for medium systems.
#
# This is for a system with little memory (32M - 64M) where MySQL plays
# an important part, or systems up to 128M where MySQL is used together with
# other programs (such as a web server)
#
# MySQL programs look for option files in a set of
# locations which depend on the deployment platform.
# You can copy this option file to one of those
# locations. For information about these locations, see:
# http://dev.mysql.com/doc/mysql/en/option-files.html
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /run/mysqld/mysqld.sock

# Here follows entries for some specific programs
default-character-set=utf8

# The MySQL server
[mysqld]
port = 3306
socket = /run/mysqld/mysqld.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
#skip-networking

# Replication Master Server (default)
# binary logging is required for replication
log-bin=mysql-bin

# binary logging format - mixed recommended
binlog_format=mixed

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slaver server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = @localstatedir@
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = @localstatedir@
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

default-character-set=utf8

[mysqldump]
quick
max_allowed_packet = 16M
default-character-set=utf8

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
default-character-set=utf8

[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
Restart MySQL to apply the settings.
$ sudo systemctl restart mysql.service
Creating a MySQL user

Log in to MySQL.
$ sudo mysql -uroot -p
Without sudo, you get Access denied.

$ mysql -uroot -p
ERROR 1698 (28000): Access denied for user 'root'@'localhost'
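The reason sudo is required here (not explained in the original memo, but the usual cause on Ubuntu's MySQL 5.7 packages) is that the root account authenticates with the auth_socket plugin, which matches the connecting OS user instead of checking a password. You can inspect which plugin each account uses:

```sql
-- run from a root session (sudo mysql); lists the auth plugin per account
SELECT user, host, plugin FROM mysql.user;
```

Creating a separate password-authenticated user, as done below, sidesteps this.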
After logging in, create the user. I gave it access to all tables, but without the GRANT privilege. Choose your own username (suzuki) and password (suzukipassword).
mysql> GRANT ALL ON *.* TO suzuki@'%' IDENTIFIED BY 'suzukipassword';
Check that you can log in as the newly created user. If your Ubuntu username is the same as the MySQL username, -u can be omitted.

$ mysql -usuzuki -p
$ mysql -p

References:
https://qiita.com/PallCreaker/items/0b02c5f42be5d1a14adb
http://okdtsk.hateblo.jp/entry/20111219/1324249008
https://github.com/twitter/mysql/blob/master/support-files/my-medium.cnf.sh