The destination hostgroup assigned by previous COM_QUERY commands is
the one used to establish the fast_forward connection. As a result,
multiple binlog clients can consume binlog events from different MySQL
hostgroups using the same username.
Implementation of mysql_hostgroup_attributes.throttle_connections_per_sec
If mysql_hostgroup_attributes is configured,
mysql_hostgroup_attributes.throttle_connections_per_sec has higher
priority than the global variable mysql-throttle_connections_per_sec_to_hostgroup.
Implementation of mysql_hostgroup_attributes.free_connections_pct and mysql_hostgroup_attributes.connection_warming
If mysql_hostgroup_attributes is configured:
- mysql_hostgroup_attributes.free_connections_pct has higher priority than global variable mysql-free_connections_pct
- mysql_hostgroup_attributes.connection_warming has higher priority than global variable mysql-connection_warming
If mysql_hostgroup_attributes.multiplex:
- 0 : multiplexing is always disabled
- 1 : multiplexing is enabled for the hostgroup, but other criteria can disable multiplexing (transaction, temp tables, query rules, etc)
Fixes:
* minor memory leak when running the CHECKSUM command
* minor memory leak in load_mysql_servers_to_runtime()
* uninitialized json object in MySQL_Event (reported by valgrind)
* uninitialized variable MySQL_Thread::shutdown in the constructor
Galera_Hosts_resultset is populated by a query on mysql_servers JOIN mysql_galera_hostgroups.
An incorrect GROUP BY could have caused some servers to appear more than once.
Fix honoring of 'connect_timeout_server_max' and
'connect_retries_on_failure' for 'fast_forward' sessions created by
'MYSQL_COM_BINLOG_DUMP' when the session does not yet own a backend
connection.
When a Query Cache entry reaches its soft TTL, the next query hits the
backend and refreshes the entry. While the refresh happens, other queries
keep being served the "old" Query Cache entry.
The soft TTL is a percentage of the cache TTL and is defined by a global
variable for all entries. If the value is 0, the soft TTL is disabled
and not used.
Since servers present in 'backup_writer_hostgroup' were not considered
previously configured writers, every time the number of available
writers exceeded 'max_writers', an unwanted server reconfiguration was
triggered for each of these servers at every monitoring action.
When a writer was set to SHUNNED due to replication lag and the
'mysql_servers' table was regenerated (via server reconfiguration or
another action), the SHUNNED writer wasn't considered a found writer,
triggering an unwanted server reconfiguration.
In Admin:
- replaced many flush_[...]__from_memory_to_disk() and flush_[...]__from_disk_to_memory() functions with a generic flush_GENERIC__from_to()
- several commands like "LOAD MYSQL SERVERS FROM DISK" and similar are automatically generated for various modules (not all) and saved in a map, load_save_disk_commands
- FlushCommandWrapper() is able to map the various LOAD/SAVE from/to DISK commands into statements to run on SQLite
This is a draft of a rework of the 'Group Replication' Monitor; the
rework reuses the monitoring model already adopted for AWS Aurora.
For the commands BINLOG_DUMP, BINLOG_DUMP_GTID and REGISTER_SLAVE, many of
the features provided by ProxySQL aren't useful (multiplexing,
query parsing, etc.). For this reason, ProxySQL enables fast_forward when
it receives these commands.
This commit fixes the invalid increase of 'NO_PINGABLE_SRV' counters
using 'mmsd' data from the previous check, and an invalid check over the
current value of 'mmsd' to determine if a pingable host has been found.
Asynchronous handling of Ping, Group Replication, Replication Lag, Read Only and Galera checks if a MySQL connection is available; otherwise the tasks are delegated to the Consumer Thread.
In a previous commit we fixed the fact that if more than 16KB of data was
sent to Admin, SQLite3 Server or ClickHouse Server, no further data was
detected, thus blocking.
It seems that in some circumstances data can be returned in chunks of 4KB,
so we removed the 16KB logic, replacing it with multiples of 4KB.
This commit is related to the previous one:
4c21a6d8c7
OpenSSL handles data in blocks of 16KB.
If more than 16KB of data was sent to Admin, SQLite3 Server or ClickHouse Server,
no further data was detected, thus blocking.
Added test_ssl_fast_forward-2-t.cpp for further testing.
Also added marker PMC-10004.
OpenSSL handles data in blocks of 16KB.
If more than 16KB of data was sent, fast_forward was not detecting further data
and was therefore blocking.
This commit fixes this bug.
Fixed a crash when the backend doesn't support SSL.
Added an initial TAP test.
The initial TAP test causes valgrind to report errors,
thus the feature is not complete yet.
OK responses were reporting an invalid autocommit flag during handshake,
reset_connection and change_user petitions.
Otherwise, the autocommit status flag would always be zero, instead of
reflecting its true value.
This commit packs a couple of fixes and improvements for the RESTAPI:
- Homogenization of GET/POST endpoint responses by ProxySQL. All
responses should now be guaranteed to be valid JSON.
- Fix JSON construction from parameters supplied to the GET endpoint.
- Add two new fields, 'script_stdout' and 'script_stderr', to the
JSON response when the target script fails to execute, i.e.,
when it exits with an error code other than zero. This makes the
response homogeneous with the case in which the script fails to produce
a valid JSON output, and adds extra information for client-side debugging.
- Add a new debugging module, 'PROXY_DEBUG_RESTAPI', to help trace
requests to the endpoints in debugging builds.
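An error response carrying the new fields might look like the following. This is an illustrative sketch: only the 'script_stdout' and 'script_stderr' field names come from the commit; the other keys and the sample values are assumptions.

```json
{
  "error": "script exited with a non-zero error code",
  "script_stdout": "partial output produced before the failure",
  "script_stderr": "error: could not reach backend"
}
```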
This commit addresses several issues with 'wexecvp':
- Fix an invalid double call to 'close' when the fd was higher than
'FD_SETSIZE'. This could lead to the invalidation of an unrelated
fd, resulting in asserts, as in issue #4001, and other instabilities.
- Fix previous limitations of the legacy 'select' impl that rendered
the RESTAPI unusable when ProxySQL had more than 'FD_SETSIZE' fds
open.
- Other minor improvements in function logic and interface.
* Added forced triggering of a DNS cache update on MySQL Servers and ProxySQL Servers table updates.
* Direct DNS cache update via socket (connected peer IP) if the record is not available in the cache.
* Used strdup instead of realloc.
* DNS lookup performed only once for ProxySQL Cluster nodes.
Fix an invalid resultset when using the query cache and mixing clients
that do and do not use CLIENT_DEPRECATE_EOF.
If a client connects to ProxySQL without CLIENT_DEPRECATE_EOF and
caches a resultset, a client using CLIENT_DEPRECATE_EOF that executes
the same query will get an invalid resultset and disconnect. The data
in the resultset is correct, but ProxySQL skips a sequence id, so the
client assumes the resultset is corrupted.
* Added multiple IP support with load balancing (round-robin scheduling)
* Added a DNS resolver request max queue size
* Added the AI_ADDRCONFIG flag
* Added exception handling
Right now server session state isn't tracked for backend connections,
so we don't include this information in the generated OK packet. Because
of this, we should never report this status change to clients.
This commit also introduces the following changes:
- Fix invalid formatting of 'stats_mysql_users' when username exceeds
'256' characters.
- Improve error handling in utility function 'string_format'.
- Add a new utility function, 'cstr_format', as an improved version of
the previous 'string_format' function.
- Compression flag is now propagated from client to backend connection
when the backend connection is being created for a 'fast_forward'
session. If a client tries to connect without compression to a
server configured with compression, we fall back to an uncompressed
connection.
- After a 'MySQL_Session' has obtained a connection for a 'fast_forward'
session, i.e., the one-to-one relationship between the client and
backend connection has been established, we completely disable the
compression 'STATUS_MYSQL_CONNECTION_COMPRESSION' flag from the client
'MySQL_Connection'.
- Correct the behavior for 'connect_retries_on_failure' for
'fast_forward' sessions to match regular sessions, reusing the
same implementation based on 'MySQL_Data_Stream::max_connect_time'.
- Change 'connect_timeout' for 'fast_forward' sessions to be the
highest of 'connect_timeout_server' and 'connect_timeout_server_max'.
- Add handling for 'COM_QUIT' packets for 'fast_forward' sessions which
have not yet received a backend connection.
Removed dependency on installed curl and openssl: statically
linked ones are now used.
Updated config.guess and config.sub for cityhash and libdaemon
The MariaDB client connector uses statically linked libssl and the installed
libiconv
DEBUG build uses pthread_threadid_np() instead of syscall(SYS_gettid)
Fixed some variable types.
Enabled the use of jemalloc also on macOS
Alias mallctl() to je_mallctl()
Removed libre2.a from libproxysql.a
If ProxySQL Cluster was running and the proxysql instance was shutting
down gracefully, Cluster tried to access already freed
resources, causing a crash.
This forces the re-evaluation of the query digest for each
'STMT_EXECUTE', imposing the required flags on the current
backend 'MySQL_Connection'.
This commit contains an implementation rework and a fix:
- The implementation of 'auto_increment_delay_multiplex_timeout_ms' has
been reworked in favor of reusing 'wait_until' to share logic with the
previous 'connection_delay_multiplex_ms' variable.
- Fix previous 'connection_delay_multiplex_ms' behavior that prevented
connection retention when traffic unrelated to the target hostgroup was
being received by the session.
Only the delimiter present at the end of the digest will be removed.
Delimiters found in the middle of the full digest, as in the case of
multi-statements, should be preserved.
If any scenario is found in which we may want to consider a
non-'ASYNC_IDLE' connection for expiring, this precondition shall be
removed, and checks should be responsible for ensuring 'ASYNC_IDLE'
state in the connection.
Because of MySQL bug https://bugs.mysql.com/bug.php?id=107875 ,
autocommit=0 can "create a transaction" even if SERVER_STATUS_IN_TRANS
is not set. This means that a SAVEPOINT can be created in a "transaction"
created with autocommit=0 but without SERVER_STATUS_IN_TRANS.
This commit also takes care of scenarios in which autocommit=0 was switched
back to autocommit=1 when a SELECT was executed while the variable
mysql-enforce_autocommit_on_reads was false.
MySQL_Session::NumActiveTransactions() now accepts a flag to check
for savepoints.
Query 'CLUSTER_QUERY_MYSQL_SERVERS' is now answered from 'runtime_mysql_servers',
as it previously was. Documentation reasoning about the change has been added
to 'MySQL_HostGroups_Manager::runtime_mysql_servers'.
- Cluster now syncs the server tables via 'incoming_*' tables generated
during 'load_mysql_servers_to_runtime'.
- Cluster now syncs 'mysql_users' table via a resultset generated via
'__refresh_users'.
Multiple changes:
- Query_Processor stores the resultset of query rules loaded to runtime
(previously this was done only for query rules fast routing)
- Admin returns the stored resultset (query rules and fast routing)
when queried by Cluster
- In khash replaced the hashing function from the built-in
__ac_X31_hash_string to CityHash32
- When Cluster is used, it calls load_mysql_query_rules_to_runtime() passing
the resultsets retrieved by the remote peer
- Increased SQLite cache_size to ~50MB: this seems to be a very small optimization
and will probably be reverted
- pull_mysql_query_rules_from_peer() uses transactions to write
to mysql_query_rules_fast_routing
- (important change) pull_mysql_query_rules_from_peer() verifies the checksum
of MySQL Query Rules before loading them to runtime
If mysql-reset_connection_algorithm=1 and
mysql-connpoll_reset_queue_length=0,
we will not return a connection with connection_quality_level == 1,
because we want to run COM_CHANGE_USER.
This change was introduced to work around Galera bug
https://github.com/codership/galera/issues/613
The following cluster modules now compute their expected checksums after fetch:
- mysql_query_rules
- mysql_users
- mysql_servers
- mysql_global_variables
In MySQL_Connection::IsActiveTransaction:
in the past we incorrectly checked STATUS_MYSQL_CONNECTION_HAS_SAVEPOINT
and returned true if there was any savepoint.
However, the STATUS_MYSQL_CONNECTION_HAS_SAVEPOINT flag was not reset
when no transaction was active, so the check was incorrect.
We can ignore STATUS_MYSQL_CONNECTION_HAS_SAVEPOINT for multiplexing
purposes in IsActiveTransaction() because it is also checked
in MultiplexDisabled()
PMC-10003: a resultset was retrieved while running a simple command using async_send_simple_command().
async_send_simple_command() is used by ProxySQL to configure the connection, so it
shouldn't retrieve any resultset.
A common way to trigger this error is to have configured mysql-init_connect to
run a statement that returns a resultset.