* Implementation of unit tests related to mysql_query_rules_fast_routing
* Implementation of rules_fast_routing with khash
* Adding memory usage for new fast routing algo
This commit fixes the following bug:
if a client connection uses SSL and sends a query larger than 32KB, the query is never executed and the connection hangs
Initial support for `SET SESSION TRANSACTION READ ONLY` or `READ WRITE`.
Extended the `SET` parser to also support `SET SESSION TRANSACTION`.
Hostgroup Manager doesn't kill backend connections in case of error 1231.
`autocommit` is set at session level but also on the MySQL client connection.
Added several debugging entries.
Several `handler_again___verify_backend_*` functions are disabled if `locked_on_hostgroup` is enabled.
Extended SQLite3_result to support a mutex for concurrent rows insert.
Extensively rewritten get_query_digests() and get_query_digests_reset() to support multiple threads, and concurrently generate a resultset.
get_query_digests_reset() now defers the deletion of objects from the query digest map until after the resultset for Admin has been generated. This drastically reduces the amount of time the query digest map stays locked.
Rewritten get_query_digests_total_size() to make it multi-threaded.
Added 3 new functions for purging:
- purge_query_digests() : wrapper
- purge_query_digests_sync() : synchronous purge of query digest map, either single-threaded or multi-threaded
- purge_query_digests_async() : asynchronous purge of query digest map, single-threaded only
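The swap-then-free pattern behind these purge functions can be sketched as follows; the map type, names, and locking below are illustrative simplifications (ProxySQL's digest map is khash-based), not the actual implementation:

```cpp
#include <map>
#include <mutex>
#include <string>
#include <thread>

// Hypothetical stand-in for the query digest map, keyed by digest hash.
static std::map<unsigned long long, std::string> digest_map;
static std::mutex digest_map_mtx;

// Synchronous purge: swap the map out while holding the lock, then destroy
// the contents outside the critical section, so the map stays locked only
// for the duration of the swap.
void purge_query_digests_sync() {
    std::map<unsigned long long, std::string> victim;
    {
        std::lock_guard<std::mutex> g(digest_map_mtx);
        victim.swap(digest_map);
    }
    // victim (and all of its entries) is destroyed here, on the caller
}

// Asynchronous purge: same swap, but destruction is handed off to a
// worker thread so the caller returns immediately.
void purge_query_digests_async() {
    auto *victim = new std::map<unsigned long long, std::string>();
    {
        std::lock_guard<std::mutex> g(digest_map_mtx);
        victim->swap(digest_map);
    }
    std::thread([victim]() { delete victim; }).detach();
}
```

In both variants the map is empty as soon as the function returns; only the cost of freeing the old entries differs in where it is paid.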
If table `stats_mysql_query_digest_reset` is queried but not `stats_mysql_query_digest`, only the first one is populated.
If table `stats_mysql_query_digest_reset` is queried and also `stats_mysql_query_digest`, the second one is populated as a copy from the first.
Implemented new SQL `TRUNCATE [TABLE]` for `[stats.]stats_mysql_query_digest[_reset]`: this will purge the query digest map as fast as possible without creating any table in SQLite.
For testing/benchmark:
- modified ProxySQL_Test___GenerateRandomQueryInDigestTable() to generate more random data and randomly switch `mysql_thread___query_digests_normalize_digest_text`
Extended command `PROXYSQLTEST` with the following tests:
- 1 : generates entries in digest map
- 2 : gets all the entries from the digest map, but without writing to DB, with multiple threads
- 3 : gets all the entries from the digest map and reset, but without writing to DB, with multiple threads
- 4 : purges the digest map, synchronously, in a single thread
- 5 : purges the digest map, synchronously, in multiple threads
- 6 : purges the digest map, asynchronously, in a single thread (this seems to be the fastest)
Replaced some sprintf() calls with the new function my_itoa().
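A minimal itoa-style helper of this kind might look like the sketch below; the real my_itoa() signature may differ, this only illustrates why such a helper beats sprintf() on a hot path (no format-string parsing, no locale handling):

```cpp
#include <cstring>

// Sketch of an itoa-style helper (hypothetical name and signature):
// writes the decimal representation of n into buf, NUL-terminates it,
// and returns the number of digits written.
int my_itoa_sketch(char *buf, unsigned long long n) {
    char tmp[24];
    int i = 0;
    do {                                   // emit digits in reverse order
        tmp[i++] = (char)('0' + n % 10);
        n /= 10;
    } while (n);
    for (int j = 0; j < i; j++)            // reverse into caller's buffer
        buf[j] = tmp[i - 1 - j];
    buf[i] = '\0';
    return i;
}
```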
Modified class QP_query_digest_stats() to store username, schemaname and client_address inline if they are no longer than 24 bytes.
This makes the storage requirement for QP_query_digest_stats() bigger, but greatly reduces the number of calls to memory management.
Modified QP_query_digest_stats::get_row() to drastically reduce the number of malloc/free and string copy.
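The inline-buffer trade-off described above can be sketched like this; the struct and field names are hypothetical, not ProxySQL's actual layout:

```cpp
#include <cstring>
#include <cstdlib>

// Sketch of the space/allocation trade-off: short strings (up to 24 bytes
// including the terminator) live inline in the struct, so the common case
// needs no malloc; longer ones fall back to a heap copy.
struct DigestStatsName {               // hypothetical type
    char inline_buf[24];
    char *heap_ptr;                    // used only when the name doesn't fit

    explicit DigestStatsName(const char *s) : heap_ptr(nullptr) {
        size_t l = strlen(s);
        if (l < sizeof(inline_buf)) {
            memcpy(inline_buf, s, l + 1);   // no allocation
        } else {
            inline_buf[0] = '\0';
            heap_ptr = strdup(s);           // rare case: heap allocation
        }
    }
    ~DigestStatsName() { free(heap_ptr); }
    const char *get() const { return heap_ptr ? heap_ptr : inline_buf; }
};
```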
Overall, this commit has the following effects:
- it increases the memory usage for query digest map. On test workload, this results in 10% more memory usage
- drastically reduces the calls to memory management
- populating `stats_mysql_query_digest_reset` is 3 times faster
- the amount of time in which query digest map is locked is reduced by 95%
Added 2 new status variables:
- queries_with_max_lag_ms__delayed
- queries_with_max_lag_ms__total_wait_time_us
Do not get replication lag from replicas if the value is 0
Fixed an error in the computation of max_lag_ms
This should fix a lot of issues related to failed parsing of SET statement.
This and the two previous commits introduce several status variables, and a
new configuration variable: mysql-set_query_lock_on_hostgroup
Possible values for mysql-set_query_lock_on_hostgroup:
- 0 : legacy behavior, before 2.0.5
- 1 : (default) SET statements that cannot be parsed correctly disable
  both multiplexing AND routing. Attempting to route traffic while a
  connection is linked to a specific backend connection will trigger
  an error to be returned to the client
Issue #2120 : Send SESSION_TRACK_GTIDS to client
Issue #2121 : Track CLIENT_FOUND_ROWS required by the client
Issue #2125 : Track CLIENT_MULTI_STATEMENTS required by the client
Enhancements:
- added metrics rows_affected and rows_sent
- added global variable mysql-eventslog_default_log : if 1 , logging is enabled for every query unless explicitly disabled in mysql_query_rules.log . Default is 0
- added global variable mysql-eventslog_format : default is 1 (legacy format). A value of 2 enables logging in JSON format. Issue #871
Changing the value at runtime causes the current file to be closed and a new one to be created
- fixed logging for prepared statements: until 2.0.5 only a fraction of prepared statements were correctly logged
Extended tables stats_mysql_query_digest and stats_mysql_query_digest_reset to also include sum_rows_affected and sum_rows_sent
Extended `eventslog_reader_sample.cpp` to support the new enhancements
Added new command `PROXYSQL INTERNAL SESSION` that clients can execute to
receive internal information about their own connection in JSON format.
Added JSON library.
Recompiled SQLite3 to support JSON.
Added new column `extended_info` in `stats_mysql_processlist`.
Added new MySQL variable `mysql-show_processlist_extended` that determines the
content of `stats_mysql_processlist.extended_info`:
- 0 : no info
- 1 : JSON format
- 2 : JSON format with pretty printing
`SERVER_STATUS_NO_BACKSLASH_ESCAPES` is now tracked.
`SET sql_mode` is executed immediately if the client specifies
`NO_BACKSLASH_ESCAPES`.
A backend connection with `SERVER_STATUS_NO_BACKSLASH_ESCAPES` enabled has multiplexing immediately disabled.
aws_aurora_replicas_skipped_during_query is a status variable that improves
monitoring of behavior caused by replication lag in AWS Aurora.
Also fixed lag computation in the connection pool.
Wrong decoding in the MySQL protocol for fields bigger than 16MB caused a crash.
The only code path affected by this seems to be reading parameters from
prepared statements.
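For context, the MySQL wire protocol encodes field lengths as length-encoded integers, and values of 16MB and above require the 8-byte (0xfe) form; a sketch of the decoding (illustrative, not ProxySQL's actual code) is:

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of MySQL protocol length-encoded integer decoding. Lengths up to
// 0xfa fit in one byte; larger ones use 2-, 3-, or 8-byte little-endian
// encodings. A field above 16MB-1 must go through the 0xfe/8-byte branch,
// which is the area a decoding bug like the one above would affect.
uint64_t decode_lenenc(const unsigned char *p, size_t *consumed) {
    if (p[0] < 0xfb) { *consumed = 1; return p[0]; }
    switch (p[0]) {
    case 0xfc:                       // 2-byte length
        *consumed = 3;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8);
    case 0xfd:                       // 3-byte length, max 16MB - 1
        *consumed = 4;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8) | ((uint64_t)p[3] << 16);
    case 0xfe: {                     // 8-byte length, needed above 16MB - 1
        *consumed = 9;
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) v |= (uint64_t)p[1 + i] << (8 * i);
        return v;
    }
    default:                         // 0xfb = NULL, 0xff = error packet
        *consumed = 1;
        return 0;
    }
}
```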
Temporary disable multiplexing when last_insert_id is returned in OK packet.
Multiplexing is disabled for the next mysql-auto_increment_delay_multiplex queries.
mysql-auto_increment_delay_multiplex ranges from 0 to 1000000.
The default value is 5.
- if the client uses mysql_native_password, LDAP is enabled, and the user doesn't exist, switch to mysql_clear_password.
- if neither mysql_native_password nor mysql_clear_password is used by the client:
  - if LDAP is not enabled, always switch to mysql_native_password
  - if LDAP is enabled:
    - if the user exists, switch to mysql_native_password
    - if the user doesn't exist, switch to mysql_clear_password
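The decision tree above can be sketched as a pure function; the enum and function names are illustrative, not ProxySQL's actual API:

```cpp
// Hypothetical representation of the client's authentication plugin.
enum AuthPlugin { NATIVE_PASSWORD, CLEAR_PASSWORD, OTHER_PLUGIN };

// Sketch of the plugin-switching rules described above.
AuthPlugin choose_auth_plugin(AuthPlugin client_plugin,
                              bool ldap_enabled, bool user_exists) {
    if (client_plugin == NATIVE_PASSWORD) {
        // switch to cleartext only when LDAP must authenticate an unknown user
        return (ldap_enabled && !user_exists) ? CLEAR_PASSWORD : NATIVE_PASSWORD;
    }
    if (client_plugin == CLEAR_PASSWORD) {
        return CLEAR_PASSWORD;       // already cleartext: nothing to switch
    }
    // client proposed some other plugin
    if (!ldap_enabled) return NATIVE_PASSWORD;   // always switch to native
    return user_exists ? NATIVE_PASSWORD : CLEAR_PASSWORD;
}
```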
Added MySQL variable mysql-add_ldap_user_comment to determine whether a comment with the original username needs to be added to the queries.
This commit also tracks the charset during the first handshake response.
Both KILL QUERY and KILL CONNECTION work
The only security check enforced is that the user sending the KILL
is the same as the user of the connection/query being killed.
This functionality is enabled for:
- mysql_galera_hostgroups
- mysql_group_replication_hostgroups
If writer_is_also_reader=2 and there are servers in
backup_writer_hostgroup, only these servers will be used
in reader_hostgroup
Functions add() and lookup() in MySQL_LDAP_Authentication have support for backend_username.
Added mysql_ldap_mapping table.
Created Admin::init_ldap() to be called after LDAP initialization.
Added better LDAP caching.
LOAD LDAP MAPPING TO RUNTIME cleans part of the cache (association to backend user).
All queries will have a comment "proxysql-ldap-user=%s" to track original user
The current parser for SET in MySQL_Session is not able to parse multi-variable
SET commands like:
SET sql_mode='TRADITIONAL', NAMES utf8 COLLATE unicode_ci
This patch introduces a simple regex-based parser for all variations of
simple variable assignments.
This is not a generic SET parser, though.
Added online migration from previous table definition (1.3.0 and 1.4.0).
LOAD MYSQL USERS TO RUNTIME and SAVE MYSQL USERS TO MEMORY handle the new field.
The comment can also be read from the config file.
ProxySQL Cluster reads comment from remote node
The same applies also for:
- runtime_mysql_replication_hostgroups
- runtime_mysql_group_replication_hostgroups
- runtime_mysql_galera_hostgroups
Closes #1435 and #1436
This commit also prevents shunned nodes from coming back online if they are missing pings. Related to #1416
Because it reduces the number of checks, it may also be relevant to #1417
Added 10 new metrics
Added 3 new global variables
* monitor_threads_min : minimum number of threads
* monitor_threads_max : maximum number of threads
* monitor_threads_queue_maxsize : maximum number of pending checks before starting new threads
Input validation:
- client_addr not longer than INET6_ADDRSTRLEN
- % allowed only at the end of client_addr
The query rule itself remembers whether there is a wildcard or not.
If there is a wildcard, the string is compared only up to the wildcard.
If client_addr=='%', it matches everything.
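A sketch of this matching rule (illustrative, not the actual implementation):

```cpp
#include <cstring>

// Sketch of client_addr matching: '%' is allowed only as the last
// character and makes the rule a prefix match; a bare "%" matches all;
// otherwise the comparison is exact.
bool client_addr_match(const char *rule, const char *addr) {
    size_t rl = strlen(rule);
    if (rl == 1 && rule[0] == '%') return true;        // match-all
    if (rl > 0 && rule[rl - 1] == '%')                 // prefix match
        return strncmp(rule, addr, rl - 1) == 0;
    return strcmp(rule, addr) == 0;                    // exact match
}
```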
The fix for bug #1038 was to not return a connection to the connection pool if it has an error.
Although this is correct, it also has the side effect that connections coming
from the connection pool and failing during the first query (because the connection
was already broken) would be considered as possibly running a transaction.
That is incorrect.
The connection now tracks whether its transaction status is known or not.
Files are all located in datadir:
- proxysql-key.pem
- proxysql-ca.pem
- proxysql-cert.pem
If all 3 files exist, ProxySQL will load them.
If none of the 3 files exist, ProxySQL will create them.
If only some of the 3 files are present, ProxySQL will refuse to start.
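The all-or-none check can be sketched as follows; the function name and return convention are illustrative:

```cpp
#include <string>
#include <sys/stat.h>

// Sketch of the all-or-none rule above. Returns the number of missing
// files: 0 = all three exist (load them), 3 = none exist (generate them),
// 1 or 2 = partial set (refuse to start).
int check_ssl_files(const std::string &datadir) {
    const char *names[3] = {"proxysql-key.pem", "proxysql-ca.pem",
                            "proxysql-cert.pem"};
    int missing = 0;
    for (const char *n : names) {
        struct stat sb;
        if (stat((datadir + "/" + n).c_str(), &sb) != 0) missing++;
    }
    return missing;   // caller: 0 -> load, 3 -> create, otherwise abort
}
```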
Variable reset_connection_algorithm can be either:
1 = algorithm used up to version 1.4
2 = new algorithm since ProxySQL 2.0 (now the default)
When reset_connection_algorithm = 2, MySQL_Thread itself tries to reset connections instead of relying on the connections purger HGCU_thread_run()
Statistics collected about GTID queries
Statistics displayed on HTTP server
Count number of GTID events per server
Online upgrade of all mysql_connections tables
Fixed path for libev
This is the first commit to pull data from proxysql_mysqlbinlog.
It is still in Alpha phase, as it is missing a lot of important logic, such as error handling, retry mechanisms, timeouts, etc.
Prepared statements counters are now computed at runtime instead of on-demand.
Now MySQL_STMT_Manager_v14::get_metrics() no longer depends on the
number of cached prepared statements.
The old (slow) code is still present in the DEBUG version, and is used to
validate the new counters. That also means that DEBUG versions are much slower
than non-debug versions.
* change datatype mysql_replication_hostgroups.comment to be VARCHAR NOT NULL DEFAULT '' instead of VARCHAR
* handle cases in which mysql_replication_hostgroups.comment is NULL
Handling of prepared statements changed a lot in 1.4, as a lot of code was rewritten.
The old code was still present, and it was possible to toggle it on and off via PROXYSQL_STMT_V14.
Because only the new code is maintained, all references to the old code are now removed,
including PROXYSQL_STMT_V14
This is a global variable that can be defined only in the config file
When proxysql dies for whatever reason, the script defined in execute_on_exit_failure is executed.
Use this script to generate an alert.
This commit adds a new variable: mysql-monitor_replication_lag_use_percona_heartbeat
This variable defines the percona heartbeat table used to check replication lag.
If set, replication lag is checked against the defined table, otherwise `SHOW SLAVE STATUS` is used.
To be set, the value should match the following regex:
```
`?([a-z\d_]+)`?\.`?([a-z\d_]+)`?
```
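A validation sketch using that regex with std::regex (illustrative, not ProxySQL's actual parsing code):

```cpp
#include <regex>
#include <string>

// Sketch: validates a "schema.table" value (optionally backtick-quoted)
// against the regex quoted above, extracting schema and table on a match.
bool parse_heartbeat_table(const std::string &value,
                           std::string &schema, std::string &table) {
    static const std::regex re("`?([a-z\\d_]+)`?\\.`?([a-z\\d_]+)`?");
    std::smatch m;
    if (!std::regex_match(value, m, re)) return false;
    schema = m[1];
    table  = m[2];
    return true;
}
```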
If variable mysql-verbose_query_error is set, the "Error during query" message is extended with:
- username
- client IP
- schemaname
- digest of the original query (not the original query itself)
If autocommit_false_is_transaction=true (false by default), a connection
with autocommit=0 is treated as a transaction.
If forward_autocommit=true (false by default), the same behavior applies.
Explicitly disabled https from libmicrohttpd (for now)
Created key and certificate, although not in use (for now)
Embedded font-awesome CSS
Added support for time range for MySQL and System
Added support for *_hour tables
Enabled digest auth in the web UI (for now with hardcoded credentials; will be fixed in future commits)
Drafted a home page (not ready yet)
* all commands executed in Admin are serialized
* Cluster gets MySQL servers information directly from MyHGM
* before applying MySQL servers changes, Cluster fetches the checksum again to verify it didn't change
Variable name is `mysql-throttle_connections_per_sec_to_hostgroup` .
Currently it is a global variable and limits the number of new connections per
hostgroup, not per specific node.
For example, if mysql-throttle_connections_per_sec_to_hostgroup=100, no more
than 100 new connections can be created on any hostgroup no matter the number
of servers in that hostgroup.
The default is very high (1000000) thus not changing default behaviour.
Tuning this variable makes it possible to control and throttle connection spikes to the
backend servers.
Added also new status variable `Server_Connections_delayed`.
This is a counter of how many times the Hostgroup Manager didn't return a
connection because the limit was reached. It is worth noting that a single
client request could trigger multiple connection requests; therefore this variable
counts the number of times a new connection wasn't created, not how many client
requests were delayed.
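A much-simplified sketch of this kind of per-hostgroup throttling; the data structure and the explicit time parameter are illustrative choices to keep the sketch testable, not ProxySQL's bookkeeping:

```cpp
#include <map>
#include <utility>

// Sketch: a fixed per-second quota per hostgroup (the role played by
// mysql-throttle_connections_per_sec_to_hostgroup) with a counter reset
// at each one-second boundary.
struct HostgroupThrottle {
    unsigned long limit_per_sec;
    std::map<int, std::pair<long, unsigned long>> window; // hg -> {sec, count}

    explicit HostgroupThrottle(unsigned long limit) : limit_per_sec(limit) {}

    // Returns true if a new connection may be created now, and counts it.
    // On false the caller would bump Server_Connections_delayed and retry.
    bool allow_new_connection(int hostgroup, long now_sec) {
        auto &w = window[hostgroup];
        if (w.first != now_sec) { w.first = now_sec; w.second = 0; }
        if (w.second >= limit_per_sec) return false;
        w.second++;
        return true;
    }
};
```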
In 1.4 the handling of prepared statements was rewritten to solve some design limitations.
The code responsible for `Stmt_Cached` and `Stmt_Max_Stmt_id` wasn't migrated, though.
Fixed now.
3 server errors were retried only once, after which the client connection was destroyed instead of returning an error:
* case 1290: // read-only
* case 1047: // WSREP has not yet prepared node for application use
* case 1053: // Server shutdown in progress
This fix should also improve the handling of graceful shutdown (error 1053)
Modified these 2 functions to allow writing directly into the ResultSet buffer when available:
* MySQL_Protocol::generate_pkt_EOF()
* MySQL_Protocol::generate_pkt_field()
Introduced 2 new global variables:
* mysql-stats_time_backend_query (default true)
* mysql-stats_time_query_processor (default true)
For backward compatibility, they are both enabled by default
Added variable for SQLite3 Server
Added new command:
* LOAD SQLITESERVER VARIABLES FROM MEMORY / TO RUNTIME
* LOAD SQLITESERVER VARIABLES FROM DISK / TO MEMORY
* SAVE SQLITESERVER VARIABLES FROM RUNTIME / TO MEMORY
* SAVE SQLITESERVER VARIABLES FROM MEMORY / TO DISK
Connections to SQLite3 Server use the same MySQL users defined in `mysql_users`
Fixed minor issues related to ClickHouse Server
Also fixed some Makefile errors
Set dynamic hostname/port
New commands:
* LOAD CLICKHOUSE VARIABLES FROM MEMORY / TO RUNTIME
* LOAD CLICKHOUSE VARIABLES FROM DISK / TO MEMORY
* SAVE CLICKHOUSE VARIABLES FROM RUNTIME / TO MEMORY
* SAVE CLICKHOUSE VARIABLES FROM MEMORY / TO DISK
* graceful handling of a missing backend (before it would just crash)
* code cleanup
* added an embedded SQLite3 connection inside ClickHouse Server to internally execute dummy queries
* filter of several SET commands
* support for SHOW [SESSION ](VARIABLES|STATUS) LIKE
* support for SHOW [GLOBAL|ALL] VARIABLES
* support for SHOW GLOBAL STATUS
* support for SHOW COLLATION #1136
* support for SHOW CHARSET #1136
* support for SHOW ENGINES #1139
* support for SELECT current_user() #1135
* support for SELECT CONNECTION_ID() #1133
* support for SHOW FULL TABLES FROM `default`
* working semi-support for SHOW COLUMNS FROM
If a user is configured with fast_forward, no connection should be taken from the connection pool.
Yet the Hostgroup Manager should know about such connections.
In `MySQL_Cluster` class added functions to sync with remote node, like:
- `pull_mysql_query_rules_from_peer()`
- `pull_mysql_servers_from_peer()`
- `pull_mysql_users_from_peer()`
- `pull_proxysql_servers_from_peer()`
Added 8 new global variables in Admin.
4 variables determine after how many different checks the remote configuration will be synced:
- cluster_mysql_query_rules_diffs_before_sync
- cluster_mysql_servers_diffs_before_sync
- cluster_mysql_users_diffs_before_sync
- cluster_proxysql_servers_diffs_before_sync
4 variables determine if after a remote sync the changes need to be written to disk:
- cluster_mysql_query_rules_save_to_disk
- cluster_mysql_servers_save_to_disk
- cluster_mysql_users_save_to_disk
- cluster_proxysql_servers_save_to_disk
Table `proxysql_servers` is now automatically loaded from disk to memory and into runtime at startup.
Added new Admin's command `LOAD PROXYSQL SERVERS FROM CONFIG` to load `proxysql_servers` from config file to memory (not runtime).
Internal structures with credentials in MySQL_Authentication moved from `unordered_map` to `map`: this ensures the right order when generating the checksum.
Config file supports both `address` and `hostname` for `mysql_servers` and `proxysql_servers` , #1091
`ProxySQL_Admin::load_proxysql_servers_to_runtime()` now has a lock or no lock option, to avoid deadlock
For now, Cluster module is quite verbose.
Extended class ProxySQL_Checksum_Value() in ProxySQL_Cluster module to support further metrics
Implemented `SELECT GLOBAL_CHECKSUM()` and related tracking of global checksums
Added variable `admin-cluster_check_status_frequency` to check peer's global status at regular intervals
This commit introduces:
2 new tables:
* `runtime_checksums_values` : stores checksums of configurations in runtime. For now for `mysql_query_rules`, `mysql_servers` and `mysql_users`
* `stats_proxysql_servers_checksums` : when clustering is enabled, it collects all metrics from `runtime_checksums_values` from all its peers
3 new global variables that define whether a checksum needs to be generated during `LOAD ... TO RUNTIME`:
* `admin-checksum_mysql_query_rules`
* `admin-checksum_mysql_servers`
* `admin-checksum_mysql_users`
ProxySQL Cluster connections now have timeouts:
* 1 second timeout for CONNECT and WRITE
* 60 seconds timeout for READ (useful for long polling)