This commit integrates sqlite-vec (https://github.com/asg017/sqlite-vec)
as a statically linked extension, enabling vector search capabilities
in all ProxySQL SQLite databases (admin, stats, config, monitor).
Changes:
1. Added sqlite-vec source files to deps/sqlite3/sqlite-vec-source/
- sqlite-vec.c: main extension source
- sqlite-vec.h: header for static linking
- sqlite-vec.h.tmpl: template header
2. Modified deps/Makefile:
- Added target sqlite3/sqlite3/vec.o that copies sources and compiles
with flags -DSQLITE_CORE -DSQLITE_VEC_STATIC
- Made sqlite3 target depend on vec.o
3. Modified lib/Makefile:
- Added $(SQLITE3_LDIR)/vec.o to libproxysql.a prerequisites
- Included vec.o in the static library archive
4. Modified lib/Admin_Bootstrap.cpp:
- Added extern "C" declaration for sqlite3_vec_init
- Enabled load extension support for all databases:
- admindb, statsdb, configdb, monitordb, statsdb_disk
- Registered sqlite3_vec_init as auto-extension at database open
(replacing commented sqlite3_json_init)
5. Updated top-level Makefile:
- Made GIT_VERSION fall back to git describe --always when tags are missing
Result:
- Vector search functions (vec0 virtual tables, vector operations) are
available in all ProxySQL SQLite databases without runtime dependencies
- No separate shared library required; fully embedded in proxysql binary
- Extension automatically loaded at database initialization
Logging messages now include 'client address', 'session status' and
'data stream status'. The client address is also logged when OK packets
are dispatched; this should help track whether a client has received the
expected packets or not.
Implements a workaround for the handling of unexpected 'COM_PING'
packets received during query processing, while a resultset is still
being streamed to the client. Received 'COM_PING' packets are queued in
the form of a counter. This counter is later used to send the
corresponding number of 'OK' packets to the client after 'MySQL_Session'
has finished processing the current query.
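The counter-based queuing can be sketched as follows (a minimal Python illustration of the mechanism, not the C++ implementation; names like `pending_pings` and `sent_packets` are invented for the example):

```python
class SessionSketch:
    """Sketch of the COM_PING workaround: pings received while a resultset
    is still being streamed are counted, and the matching OK packets are
    sent once the current query finishes."""

    def __init__(self):
        self.query_in_progress = False
        self.pending_pings = 0    # hypothetical counter name
        self.sent_packets = []

    def on_com_ping(self):
        if self.query_in_progress:
            # An OK packet cannot be interleaved into the resultset
            # stream; queue the ping as a counter instead.
            self.pending_pings += 1
        else:
            self.sent_packets.append("OK")

    def on_query_done(self):
        self.query_in_progress = False
        # Flush one OK packet per queued COM_PING.
        for _ in range(self.pending_pings):
            self.sent_packets.append("OK")
        self.pending_pings = 0

s = SessionSketch()
s.query_in_progress = True
s.on_com_ping()
s.on_com_ping()
s.on_query_done()
assert s.sent_packets == ["OK", "OK"]  # one OK per queued ping
```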
This commit documents:
1. The vacuum_stats() function's purpose, behavior, and the reason why
stats_pgsql_stat_activity is excluded from bulk deletion operations
2. The fact that stats_pgsql_stat_activity is a SQL VIEW (not a table)
and attempting DELETE on it would cause SQLite error:
"cannot modify stats_pgsql_stat_activity because it is a view"
The documentation explains:
- Why TRUNCATE stats_mysql_query_digest triggers vacuum_stats(true)
- Why both MySQL and PostgreSQL tables are cleared regardless of protocol
- How the view is automatically cleared via its underlying table
stats_pgsql_processlist
- The importance of keeping the view excluded from deletion lists
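The behavior documented above can be reproduced directly with SQLite (shown here via Python's sqlite3 bindings; column names are simplified for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stats_pgsql_processlist (pid INT, state TEXT)")
con.execute("INSERT INTO stats_pgsql_processlist VALUES (1, 'active')")
con.execute("""CREATE VIEW stats_pgsql_stat_activity AS
               SELECT pid, state FROM stats_pgsql_processlist""")

# DELETE against the view fails, which is why it must stay excluded
# from the bulk-deletion list in vacuum_stats():
try:
    con.execute("DELETE FROM stats_pgsql_stat_activity")
    raise AssertionError("unreachable: DELETE on a view must fail")
except sqlite3.OperationalError as e:
    assert "view" in str(e)

# Clearing the underlying table empties the view automatically:
con.execute("DELETE FROM stats_pgsql_processlist")
rows = con.execute("SELECT COUNT(*) FROM stats_pgsql_stat_activity").fetchone()[0]
assert rows == 0
```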
The `cache_empty_result` field in query rules has three possible values:
• -1: Use global setting (`query_cache_stores_empty_result`)
• 0: Do NOT cache empty resultsets, but cache non-empty resultsets
• 1: Always cache resultsets (both empty and non-empty)
Previously, when `cache_empty_result` was set to 0, nothing was cached at all,
even for non-empty resultsets. This prevented users from disabling caching
for empty resultsets while still allowing caching of non-empty resultsets
on a per-rule basis.
Changes:
1. Modified caching logic in MySQL_Session.cpp and PgSQL_Session.cpp to
add the condition `(qpo->cache_empty_result == 0 && MyRS->num_rows)`
(MySQL) and `(qpo->cache_empty_result == 0 && num_rows)` (PgSQL)
to allow caching when cache_empty_result=0 AND result has rows.
2. Added comprehensive Doxygen documentation in query_processor.h explaining
the semantics of cache_empty_result values.
3. Updated Query_Processor.cpp with inline comments explaining the
three possible values.
Now when cache_empty_result is set to 0:
- Empty resultsets (0 rows) are NOT cached
- Non-empty resultsets (>0 rows) ARE cached
- This matches the intended per-rule behavior described in issue #5248.
Fixes: https://github.com/sysown/proxysql/issues/5248
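The three-valued decision can be sketched like this (illustrative Python; `global_store_empty` stands in for `query_cache_stores_empty_result`, and the sketch assumes the rule already qualifies for caching via cache_ttl):

```python
def should_cache(cache_empty_result: int, global_store_empty: bool,
                 num_rows: int) -> bool:
    """Sketch of the per-rule caching decision.
    -1: defer to the global query_cache_stores_empty_result setting
     0: cache only non-empty resultsets
     1: cache everything"""
    if cache_empty_result == 1:
        return True
    if cache_empty_result == 0:
        return num_rows > 0            # the fix: non-empty results cached
    # -1: non-empty results always cached; empty ones follow the global
    return num_rows > 0 or global_store_empty

assert should_cache(0, True, 0) is False   # empty: not cached
assert should_cache(0, True, 5) is True    # non-empty: cached (the fix)
assert should_cache(-1, False, 0) is False
assert should_cache(1, False, 0) is True
```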
Replace sprintf-based SQL query construction with prepared statements using
bound parameters to prevent SQL injection attacks. This addresses the security
issue identified in PR #5247 review.
Changes:
- Use SQLite prepared statement with placeholders ?1, ?2
- Bind variable names and values securely using proxy_sqlite3_bind_text
- Use ASSERT_SQLITE_OK for error handling as per ProxySQL conventions
- Remove malloc/sprintf vulnerable code pattern
- Add necessary includes for SQLite functions and ASSERT_SQLITE_OK macro
Security: SQL injection could have occurred if configuration variable names
or values contained malicious quotes. Prepared statements eliminate this risk.
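The same parameter-binding pattern can be demonstrated with SQLite from Python (an illustration of the `?1`/`?2` placeholder approach; the table layout and values are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE global_variables "
            "(variable_name TEXT, variable_value TEXT)")

# A value containing quotes would break sprintf-style string building,
# but is harmless when bound as a parameter.
name = "admin-version"
value = "2.x'); DROP TABLE global_variables;--"

# SQLite's numbered placeholders ?1, ?2 bind positionally:
con.execute(
    "INSERT INTO global_variables (variable_name, variable_value) "
    "VALUES (?1, ?2)",
    (name, value),
)

row = con.execute(
    "SELECT variable_value FROM global_variables WHERE variable_name = ?1",
    (name,),
).fetchone()
assert row[0] == value  # stored verbatim, quotes and all
```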
This commit adds detailed Doxygen documentation for:
1. The ProxySQL_Config class - describes its role in configuration management
2. The Read_Global_Variables_from_configfile() method - documents its behavior,
parameters, return value, and the automatic prefix stripping feature
The documentation explains the automatic prefix stripping behavior that handles
cases where users mistakenly include module prefix (e.g., "mysql-") in variable
names within configuration files.
The previous implementation stripped the prefix before calling
group.lookupValue(), which would fail because the config file
contains the prefixed name (e.g., "mysql-log_unhealthy_connections").
The lookup must use the original name from the config file.
This commit moves the prefix stripping logic to after the value
lookup but before constructing the SQL query, ensuring both:
1. The correct value is retrieved from the config using the
original prefixed name
2. The variable is stored in the database with a single prefix
Also includes a test to verify the fix works for mysql_variables,
pgsql_variables, and admin_variables sections.
When users mistakenly include the module prefix (e.g., mysql-log_unhealthy_connections)
in the mysql_variables section, the variable gets stored with a double prefix
(e.g., mysql-mysql-log_unhealthy_connections). This fix automatically strips
the prefix if present, ensuring variables are stored correctly.
The same logic applies to pgsql_variables (pgsql-) and admin_variables (admin-).
Fixes #5246
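The stripping logic can be sketched as follows (illustrative Python; function and section names are simplified stand-ins for the config-parsing code):

```python
def stored_variable_name(section: str, variable_name: str) -> str:
    """Sketch of the double-prefix fix: if the user wrote the module prefix
    inside the section (e.g. 'mysql-log_unhealthy_connections' inside
    mysql_variables), strip it once, because the section prefix is added
    again when the variable is stored in the database."""
    prefixes = {
        "mysql_variables": "mysql-",
        "pgsql_variables": "pgsql-",
        "admin_variables": "admin-",
    }
    prefix = prefixes[section]
    if variable_name.startswith(prefix):
        variable_name = variable_name[len(prefix):]
    return prefix + variable_name   # single prefix either way

# Mistakenly prefixed name: no double prefix after the fix.
assert stored_variable_name(
    "mysql_variables", "mysql-log_unhealthy_connections"
) == "mysql-log_unhealthy_connections"
# Correctly unprefixed name: unchanged behavior.
assert stored_variable_name(
    "mysql_variables", "log_unhealthy_connections"
) == "mysql-log_unhealthy_connections"
```

Note that, per the fix description, the value lookup itself must still use the original (possibly prefixed) name from the config file; only the stored name is normalized.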
Allow permanent fast-forward sessions (SESSION_FORWARD_TYPE_PERMANENT)
to continue processing when bidirectional data flow is detected,
instead of treating it as a fatal error. This prevents unnecessary
session termination in these specific cases while maintaining the
original strict validation for all other session types.
This change introduces PostgreSQL-aware tokenization by adding support for dollar-quoted strings, PostgreSQL’s double-quoted identifiers, and its comment rules. The tokenizer now correctly parses $$…$$ and $tag$…$tag$, treats " as an identifier delimiter in PostgreSQL, disables MySQL-only # comments, and accepts -- as a comment starter without requiring a trailing space. All new behavior is fully isolated behind the dialect flag to avoid impacting MySQL parsing.
Add PostgreSQL dollar-quoted strings
* New parser state: st_dollar_quote_string.
* Recognizes $$ … $$ and $tag$ … $tag$ sequences.
* Tracks opening tag and searches for matching terminator.
* Normalizes entire literal to ?.
* Integrated into get_next_st() and stage_1_parsing().
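The recognition and normalization steps can be sketched as follows (illustrative Python; the real tokenizer is stateful via st_dollar_quote_string, while this sketch scans the string directly):

```python
import re

def normalize_dollar_quotes(sql: str) -> str:
    """Sketch of dollar-quote handling: find an opening $$ or $tag$,
    locate the matching terminator, and normalize the whole literal
    to a single ? placeholder."""
    out = []
    i = 0
    while i < len(sql):
        m = re.match(r"\$([A-Za-z_][A-Za-z0-9_]*)?\$", sql[i:])
        if m:
            tag = m.group(0)                   # "$$" or "$tag$"
            end = sql.find(tag, i + len(tag))  # matching terminator
            if end != -1:
                out.append("?")                # whole literal -> one token
                i = end + len(tag)
                continue
        out.append(sql[i])
        i += 1
    return "".join(out)

assert normalize_dollar_quotes("SELECT $$hi $ there$$") == "SELECT ?"
assert normalize_dollar_quotes("SELECT $fn$body$fn$, 1") == "SELECT ?, 1"
```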
The get_status_variable() function was only scanning worker threads
but ignoring auxiliary threads (idle threads) where timeout
terminations are detected. This caused the timeout termination
counter to show incorrect/zero values.
- Added idle thread scanning to both overloaded versions of
get_status_variable() function
- Now properly collects metrics from both worker and idle threads
- Fixes the issue where proxysql_mysql_timeout_terminated_connections_total
showed zero despite actual timeout terminations
Resolves the metrics reading issue identified in the previous commits.
Enhance logging clarity:
- Replace generic IP address with detailed connection info including IP and port
- Use client_myds->addr.addr and client_myds->addr.port for precise identification
- Improve debuggability of timeout clamping and enforcement warnings
The warning messages now provide complete connection details, making it easier
to identify and troubleshoot timeout-related issues in ProxySQL logs.
Code improvements:
- Extract SESS_TO_SCAN_idle_thread constant to header file for better maintainability
- Replace magic number 128 with named constant in idle_thread_to_kill_idle_sessions()
- Improve code readability and consistency in session scanning logic
Test enhancements:
- Add mysql-poll_timeout configuration for more precise timeout testing
- Reduce test sleep times to 13 seconds for faster test execution
- Add diagnostic messages to clearly show timeout configurations in test output
- Ensure tests properly validate timeout enforcement with precise timing
The changes improve code maintainability and make tests more reliable and faster
while maintaining accurate timeout validation.
Key improvements:
- Fix timeout comparison in MySQL_Thread::idle_thread_to_kill_idle_sessions() to prevent underflow
- Use effective wait_timeout (minimum of global and session values) for idle timeout calculations
- Add proper newline characters to proxy_warning messages for consistent log formatting
- Increase test sleep times to account for global timeout enforcement
- Fix session timeout test durations to properly test timeout behavior
Technical changes:
- Replace broken min_idle calculation with proper effective wait_timeout logic
- Add std::min() usage to determine effective timeout from global and session values
- Ensure warning messages end with newline characters for proper log formatting
- Update test sleep durations to ensure proper timeout testing
Resolves potential timeout calculation bugs and ensures consistent timeout enforcement behavior.
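The effective-timeout logic can be sketched as follows (illustrative Python; units and variable names are assumptions, the C++ code uses std::min() on the global and session values):

```python
def effective_wait_timeout(global_ms: int, session_ms: int) -> int:
    """Sketch of the fix: the idle-kill check uses the minimum of the
    global and session wait_timeout values, so a large session value
    can never push the deadline past the global limit."""
    return min(global_ms, session_ms)

def should_kill_idle(idle_ms: int, global_ms: int, session_ms: int) -> bool:
    # Compare idle time directly against the effective timeout instead
    # of subtracting values, which could underflow with unsigned types.
    return idle_ms >= effective_wait_timeout(global_ms, session_ms)

assert should_kill_idle(10_000, 8_000, 3_600_000) is True   # global wins
assert should_kill_idle(2_000, 8_000, 3_600_000) is False
```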
- Add range validation for client SET wait_timeout commands
- Implement clamping between 1 second (1000ms) and 20 days (1,728,000,000ms)
- Add warning messages when values are clamped due to ProxySQL limits
- Maintain MySQL compatibility by accepting larger values than global config
- Fix signed/unsigned comparison warning in wait_timeout assignment
- Ensures client applications don't break while enforcing safety limits
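The clamping described above can be sketched as (illustrative Python; the constant names are invented, the bounds are taken from the commit):

```python
WAIT_TIMEOUT_MIN_MS = 1_000          # 1 second
WAIT_TIMEOUT_MAX_MS = 1_728_000_000  # 20 days

def clamp_wait_timeout(requested_ms: int) -> tuple[int, bool]:
    """Sketch of SET wait_timeout validation: out-of-range values are
    pulled back into [1s, 20 days]; the second return value tells the
    caller whether a clamping warning should be logged."""
    clamped = max(WAIT_TIMEOUT_MIN_MS,
                  min(requested_ms, WAIT_TIMEOUT_MAX_MS))
    return clamped, clamped != requested_ms

assert clamp_wait_timeout(500) == (1_000, True)              # too small
assert clamp_wait_timeout(60_000) == (60_000, False)         # in range
assert clamp_wait_timeout(2_000_000_000) == (1_728_000_000, True)
```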
- Add wait_timeout member variable declaration to Base_Session class
- Fix constructor initialization to use this->wait_timeout
- Fix assignment in handler to properly scope member variable
- Resolves compilation error for wait_timeout functionality
Enhance the match_ff_req_options function to better handle CLIENT_DEPRECATE_EOF
flag validation in fast forward replication scenarios. The function now performs
a more robust check by examining the actual MySQL command type when the initial
CLIENT_DEPRECATE_EOF flags don't match between frontend and backend connections.
Key improvements:
- Special handling for binlog-related commands (_MYSQL_COM_BINLOG_DUMP,
_MYSQL_COM_BINLOG_DUMP_GTID, _MYSQL_COM_REGISTER_SLAVE) that should be
allowed even when CLIENT_DEPRECATE_EOF flags don't match
- Proper packet parsing to extract and validate MySQL command types
- Enhanced compatibility for fast forward replication connections with
mixed deprecate EOF configurations
This change ensures that ProxySQL can handle more complex replication
scenarios while maintaining proper protocol validation.
PROBLEM:
The initial fix used a DDL detection approach which required maintaining a list
of query types that should return 0 affected rows. This approach was brittle
and could miss edge cases like commented queries or complex statements.
SOLUTION:
Instead of detecting DDL queries, use sqlite3_total_changes64() to measure the
actual change count before and after each query execution. The difference between
total_changes before and after represents the true affected rows count for the
current query, regardless of query type.
CHANGES:
- Added proxy_sqlite3_total_changes64 function pointer and initialization
- Rewrote execute_statement() and execute_statement_raw() to use total_changes
difference approach
- This automatically handles all query types (DDL, DML, comments, etc.)
- Added comprehensive TAP test covering INSERT, CREATE, DROP, VACUUM, UPDATE, and
BEGIN operations
BENEFITS:
- More robust and accurate than DDL detection approach
- Handles edge cases like commented queries automatically
- No maintenance overhead for new query types
- Simpler and cleaner implementation
- Still fixes both Admin interface and SQLite3 Server
This approach is mathematically sound: affected_rows = total_changes_after -
total_changes_before, which gives the exact number of rows changed by the current
query execution.
Fixes #4855
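The delta approach is easy to verify against SQLite itself (shown here with Python's sqlite3 bindings, whose `total_changes` property wraps the same connection-wide counter):

```python
import sqlite3

def execute_with_affected_rows(con: sqlite3.Connection, sql: str) -> int:
    """Sketch of the total_changes approach: affected_rows is the delta
    of the connection-wide change counter across one statement. DDL
    leaves the counter untouched (delta 0); DML moves it by the true
    row count. No per-statement-type special cases are needed."""
    before = con.total_changes
    con.execute(sql)
    return con.total_changes - before

con = sqlite3.connect(":memory:")
assert execute_with_affected_rows(con, "CREATE TABLE t (a INT)") == 0   # DDL
assert execute_with_affected_rows(con, "INSERT INTO t VALUES (1)") == 1 # DML
assert execute_with_affected_rows(con, "UPDATE t SET a = 2") == 1
# The bug being fixed: a plain sqlite3_changes() call here would still
# report 1 from the UPDATE, but the delta correctly reports 0.
assert execute_with_affected_rows(con, "CREATE TABLE u (b INT)") == 0
```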
Problem:
When executing DDL queries (CREATE TABLE, DROP TABLE, VACUUM, etc.) in the
ProxySQL Admin interface after DML operations, the affected rows count from
the previous DML operation was incorrectly reported instead of 0. This is
because SQLite's sqlite3_changes() function doesn't reset the counter for
DDL statements.
Root Cause:
SQLite's sqlite3_changes() returns the number of rows affected by the most
recent INSERT, UPDATE, or DELETE statement. For DDL statements that don't
modify rows, SQLite doesn't reset this counter, so it continues to return
the value from the last DML operation.
Solution:
- Added is_ddl_query_without_row_changes() function to identify DDL queries
that don't affect row counts
- Modified both execute_statement() and execute_statement_raw() in SQLite3DB
to return 0 for affected_rows when executing DDL queries
- The fix ensures that affected_rows is reset to 0 for:
CREATE, DROP, ALTER, TRUNCATE, VACUUM, REINDEX, ANALYZE, CHECKPOINT,
PRAGMA, BEGIN, COMMIT, ROLLBACK, SAVEPOINT, RELEASE, EXPLAIN
Testing:
- Created and ran comprehensive tests for DDL detection function
- Verified build completes successfully
- Confirmed the fix correctly identifies DDL vs DML queries
Impact:
This fix resolves the issue where Admin interface incorrectly shows affected
rows for DDL operations, improving the accuracy and reliability of the
ProxySQL Admin interface.
Fixes #4855
- Document addGtidInterval() function with parameter details and reconnection behavior
- Add documentation for readall() method explaining robust error handling
- Document connect_cb() and reader_cb() callbacks with resource management details
- Document generate_mysql_gtid_executed_tables() with multi-phase process explanation
- Focus on functionality, thread safety, and performance improvements
- Provide clear parameter descriptions and return value semantics
- Fix crash by using get_variable_int() instead of get_variable_string() for boolean use_tcp_keepalive variable
- use_tcp_keepalive is a boolean variable, not a string, so get_variable_int() returns 0/1 instead of a string
- Fix syntax errors by removing duplicate code and fixing brace structure
- Add comprehensive Doxygen documentation for both MySQL and PostgreSQL warnings
Resolves assertion failure: "Not existing variable: use_tcp_keepalive"
Resolves: #5212
- Add warnings in flush_mysql_variables___database_to_runtime() when mysql-use_tcp_keepalive=false
- Add warnings in flush_pgsql_variables___database_to_runtime() when pgsql-use_tcp_keepalive=false
- Include comprehensive Doxygen documentation explaining why disabling TCP keepalive is unsafe
- Warn users about potential connection drops when ProxySQL is deployed behind network load balancers
When TCP keepalive is disabled:
- Load balancers may drop idle connections from connection pools
- NAT devices may remove connection state
- Cloud load balancers may terminate connections during idle periods
- Results in sudden connection failures and "connection reset" errors
Resolves: #5212
- This patch was originally added by commit 0a70fd5 and
reverted by 8d1b5b5, prior to the release of `v3.0.3`.
- The following issues are addressed in this update:
- Fix for `use-after-free` issue which occurred during CI test.
- Fix for deadlock issue between `GTID_syncer` and `MySQL_Worker`.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Concurrency and Memory Management
* Lock-Free Ref Counting: Replaced global mutex-protected integer reference counts with `std::atomic<uint32_t>` within `PgSQL_STMT_Global_info`, eliminating lock contention during statement referencing.
* Modern Ownership: Adopted std::shared_ptr<const PgSQL_STMT_Global_info> for global and local storage, providing automatic, thread-safe memory and lifecycle management.
* Memory Optimization: Removed redundant auxiliary maps `global_id_to_stmt_names` and `map_stmt_id_to_info` from local and global statement managers respectively, reducing overall memory overhead.
* Optimized Purging: Statement removal logic was simplified for efficiently identifying and cleaning up unused statements.
Hot Path Performance (`BIND`, `DESCRIBE`, `EXECUTE`)
* Bypassed Global Lookups: Local session maps now store the `shared_ptr` directly, removing the need to acquire the global lock and search the global map during hot path operations.
* Direct Refcount Manipulation: Refcount modification functions now operate directly on the passed statement object, eliminating the overhead of searching the global map to find the object pointer based on statement id.
Safety and Protocol Logic (`PARSE`)
* Efficient Statement Reuse: Implemented a **local fast path** check for the unnamed statement (`""`), allowing immediate reuse of an identical query (same hash) upon re-parse, which bypasses global processing and locks.
Cleanup
* Cleaned up and class rename `PgSQL_STMT_Manager_v14` -> `PgSQL_STMT_Manager`.
- Rename and modify test to use MySQL C API mysql_binlog_* functions
- Implement throttled binlog reading with 5 iterations (no limit, 2s, 5s, 20s, 60s targets)
- Add diagnostics for debugging binlog fetch issues
- Set RPL options for file, position, server_id, and non-blocking flag
- Update Makefile to compile with MySQL client library
Problem: In fast forward mode, ProxySQL forwards packets directly from client
to backend without buffering them. If the backend connection closes
unexpectedly (e.g., due to server crash, network failure, or other issues),
ProxySQL immediately closes the client session. This can result in data loss
because the client may have sent additional data that hasn't been fully
transmitted yet, as ProxySQL does not wait for the output buffers to drain.
Solution: Implement a configurable grace period for session closure in fast
forward mode. When the backend closes unexpectedly, instead of closing the
session immediately, ProxySQL waits for a configurable timeout
(fast_forward_grace_close_ms, default 5000ms) to allow any pending client
output data to be sent. During this grace period:
- If the client output buffers become empty, the session closes gracefully.
- If the timeout expires, the session closes anyway to prevent indefinite
hanging.
Changes:
- Added global variable mysql_thread___fast_forward_grace_close_ms (0-3600000ms)
- Added session flags: backend_closed_in_fast_forward, fast_forward_grace_start_time
- Added data stream flag: defer_close_due_to_fast_forward
- Modified MySQL_Data_Stream::read_from_net() to detect backend EOF and initiate
grace close if client buffers are not empty
- Modified MySQL_Session::handler() FAST_FORWARD case to implement grace close
logic with timeout and buffer checks
- Added extensive inline documentation explaining the feature and its mechanics
This prevents data loss in fast forward scenarios while maintaining bounded
session lifetime.
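The per-pass decision during the grace period can be sketched as (illustrative Python; variable names mirror the flags listed above but are simplified):

```python
def fast_forward_close_decision(now_ms: int, grace_start_ms: int,
                                grace_close_ms: int,
                                client_out_bytes: int) -> str:
    """Sketch of the grace-close check run on each handler pass after
    backend EOF is detected in fast forward mode."""
    if client_out_bytes == 0:
        return "close_graceful"            # pending output fully drained
    if now_ms - grace_start_ms >= grace_close_ms:
        return "close_timeout"             # bound the session lifetime
    return "keep_waiting"

# Default fast_forward_grace_close_ms of 5000ms:
assert fast_forward_close_decision(1000, 0, 5000, 4096) == "keep_waiting"
assert fast_forward_close_decision(6000, 0, 5000, 4096) == "close_timeout"
assert fast_forward_close_decision(1200, 0, 5000, 0) == "close_graceful"
```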
Previously, the parser always tokenized the full command, even when we only
needed to check whether it was a transaction command. Now, it first extracts
the first word to determine relevance and performs full tokenization only
when necessary.
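The fast path can be sketched as (illustrative Python; the keyword set and helper names are assumptions, not the real tokenizer's):

```python
TRANSACTION_WORDS = {"BEGIN", "COMMIT", "ROLLBACK", "START",
                     "SAVEPOINT", "RELEASE"}

def is_maybe_transaction_command(query: str) -> bool:
    """Peek at the first word only; full tokenization runs only when the
    command could plausibly be a transaction command."""
    stripped = query.strip()
    if not stripped:
        return False
    return stripped.split(None, 1)[0].upper() in TRANSACTION_WORDS

def full_tokenize_and_check(query: str) -> bool:
    # Stand-in for the real tokenizer, which does the expensive work.
    tokens = query.strip().upper().split()
    return bool(tokens) and tokens[0] in TRANSACTION_WORDS

def handle(query: str) -> bool:
    if not is_maybe_transaction_command(query):
        return False              # fast path: skip full tokenization
    return full_tokenize_and_check(query)

assert handle("SELECT 1") is False      # no tokenization performed
assert handle("  begin work") is True
```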
According to MySQL protocol, variable length strings are encoded using
length encoded integers. For reference, see:
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_com_stmt_execute.html
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_basic_dt_integers.html#a_protocol_type_int2
The protocol specifies that values greater than 2^24 (16777216) should
be encoded using '0xFE + 8-byte integer'. In reality, however, MySQL
ignores the upper section of these 8-byte integers, effectively treating
them as '4 bytes'. For the sake of compatibility, this commit changes
the decoding behavior for 'COM_STMT_EXECUTE' to match MySQL's. The
difference is subtle but important, since in practice MySQL itself
doesn't use all '8 bytes' of the field. This means that connectors that
are compatible with MySQL could run into issues when sending these
packets through ProxySQL (like the NodeJS 'mysql2' connector, which
writes the 8 bytes as a duplicated 4-byte value, motivating these
changes), a situation that could result in rejection due to malformed
packet detection (or crashes/invalid handling in the worst case).
The previous decoding function is now renamed into
'mysql_decode_length_ll' to honor MySQL naming 'net_field_length_ll'.
For now, this protocol change is limited to 'COM_STMT_EXECUTE'.
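The decoding difference can be sketched as follows (illustrative Python; the function name is invented, and the 0xFB NULL marker is omitted for brevity):

```python
def decode_lenenc_stmt_execute(buf: bytes, pos: int) -> tuple:
    """Sketch of the COM_STMT_EXECUTE-only decoding change: the 0xFE
    prefix still consumes 8 bytes on the wire, but only the low 4 bytes
    contribute to the value, matching MySQL's observed behavior."""
    first = buf[pos]
    if first < 0xFB:
        return first, pos + 1
    if first == 0xFC:                        # 2-byte integer
        return int.from_bytes(buf[pos+1:pos+3], "little"), pos + 3
    if first == 0xFD:                        # 3-byte integer
        return int.from_bytes(buf[pos+1:pos+4], "little"), pos + 4
    if first == 0xFE:                        # 8 bytes on the wire...
        value = int.from_bytes(buf[pos+1:pos+9], "little")
        return value & 0xFFFFFFFF, pos + 9   # ...but only 4 are used
    raise ValueError("invalid length-encoded integer")

# A connector (like mysql2) that duplicates the 4-byte length into the
# upper half still decodes to the intended value:
wire = bytes([0xFE]) + (0x01000000).to_bytes(4, "little") * 2
value, _ = decode_lenenc_stmt_execute(wire, 0)
assert value == 0x01000000
```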
caching_sha2_password full authentication is a complex task that
requires many packets being sent back and forth between client and
server (ProxySQL in this case). Every packet needs to have an
increased sequence ID (sid) according to the protocol.
ProxySQL was incorrectly forgetting to increase the sid when
requesting a full authentication.
For some clients this is not a problem, while other clients will
consider the incorrect sid a serious issue and abort the connection.
This commit ensures that sid is correctly increased when requesting
caching_sha2_password full authentication.
When true, all `min_gtid` query annotations are ignored; see
https://proxysql.com/documentation/query-annotations/ for details.
This is useful on ProxySQL setups with multiple layers, where some
layers mandate GTID-based routing while others don't.
- Backport PQsendPipelineSync to PostgreSQL 16.3, enabling pipeline
synchronization without flushing the send buffer.
- Replace calls to PQPipelineSync in code with PQsendPipelineSync
to use the new functionality.
Accesses by 'stats___pgsql_processlist' to 'myconn->query.ptr' could
lead to invalid memory accesses, as the pointed query could already have
been freed by the session after being issued.
Accesses by 'stats___mysql_processlist' to 'myconn->query.ptr' could
lead to invalid memory accesses, as the pointed query could already have
been freed by the session after being issued.
- Add new mysql/pgsql variable `processlist_max_query_length`.
- Min: 1K
- Max: 32M
- Default: 2M
- Truncate current query based on the configuration before inserting into
`stats_*_processlist` tables.
- Refactor/fix code related to other processlist configurations.
1. `session_idle_show_processlist` value was not updated in `ProxySQL_Admin.variables`.
2. Pass processlist config as an argument to `MySQL_Threads_Handler::SQL3_Processlist`
instead of using thread-local variables.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
This message is dumped with each call to 'process_pkt_handshake_response',
printing the updated context. When the verbosity value for module
'debug_mysql_protocol' is >= 5, the stored and client-supplied passwords
are dumped in HEX format; for values < 5, the passwords are masked.
Previously, query cache metrics were shared between MySQL and PostgreSQL,
causing both to reflect the same values when performing cache operations.
This change isolates the metrics for each database type.
- Added `backend_pid` and `backend_state` columns to `stats_pgsql_processlist`
to display PostgreSQL backend process ID and connection state.
- Created `stats_pgsql_stat_activity` view on top of `stats_pgsql_processlist`
with column aliases matching PostgreSQL's `pg_stat_activity` for consistency.
These parameters use capitalized names in PostgreSQL for historical reasons.
ProxySQL now sends them using canonical capitalization to ensure client compatibility.
Updated PgSQL_DateStyle_Util::parse_datestyle() to support prefix-based
matching for known tokens (POSTGRES, EURO, NONEURO). This allows variants
like "PostgreSQL" and "European" to be recognized as valid inputs.
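The prefix matching can be sketched as follows (illustrative Python covering only the three tokens named above; the real parse_datestyle() handles the full DateStyle grammar):

```python
# Most specific first, so "NonEuropean" is not swallowed by "EURO".
KNOWN_TOKENS = ("POSTGRES", "NONEURO", "EURO")

def match_datestyle_token(token: str):
    """Sketch: accept any input whose uppercase form extends one of the
    known keywords, e.g. 'PostgreSQL' -> POSTGRES."""
    t = token.upper()
    for known in KNOWN_TOKENS:
        if t.startswith(known):
            return known
    return None

assert match_datestyle_token("PostgreSQL") == "POSTGRES"
assert match_datestyle_token("European") == "EURO"
assert match_datestyle_token("NonEuropean") == "NONEURO"
assert match_datestyle_token("bogus") is None
```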
Centralize escaping/formatting of connection parameters (key='value').
Replace duplicate escape/append/free sequences in connect_start and PgSQL_backend_kill_thread.
Add support for PostgreSQL query cancellation and backend termination
features to allow clients to cancel long-running queries and terminate
connections through the standard PostgreSQL protocol.
Features implemented:
- Intercept pg_backend_pid() queries and return ProxySQL session thread ID
- Intercept pg_terminate_backend() to terminate client connections asynchronously
- Intercept pg_cancel_backend() to cancel queries on backend connections
- Support Cancel Request protocol via separate connection with PID and secret key validation
- Return BackendKeyData message on successful authentication with session thread ID and unique cancel secret key
This enables clients to use standard PostgreSQL cancellation mechanisms
(pg_cancel_backend, pg_terminate_backend, and Cancel Request protocol)
while ProxySQL maintains proper session isolation and maps client requests
to appropriate backend connections.
Previously, each extended-query block was terminated with a SYNC,
which caused implicit transactions to commit prematurely. As a result,
earlier write operations (INSERT/UPDATE/DELETE) could not be rolled
back if a later statement in the same sequence failed.
This change switches to libpq pipeline mode and replaces intermediate
SYNC messages with FLUSH, ensuring that all client query frames execute
as part of the same implicit transaction. A final SYNC is still issued
to resynchronize the connection and make it safe for reuse in the pool.
- Add validation methods for `mysql_users`, `pgsql_users`, `mysql_servers`,
`pgsql_servers` and `proxysql_servers`
- Check for duplicates and mandatory fields
- Return descriptive error messages to clients when validation fails
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Co-authored-by: takaidohigasi <takaidohigasi@gmail.com>
Previously, deleting `PgSQL_Errors_stats` instances in TUs with only a forward
declaration caused the destructor to be skipped, leaking member allocations.
The fix ensures the full class definition is visible at delete sites.
Previously, Parse and Describe each had their own query result handling
paths. This duplicated a lot of logic and also failed to handle some
cases correctly—for example, Notice messages returned by the server
during extended-protocol queries. Keeping these separate paths would
be hard to maintain and prone to bugs.
The simple-query result handling is already mature, optimized, and
covers all the necessary cases. Reusing it for Parse and Describe
makes behavior consistent across simple and extended query flows,
while also reducing duplicate code.
ProxySQL currently does not support PostgreSQL's LISTEN command.
When clients attempt to use it, we now intercept the command and return
an appropriate "not supported" response instead of passing it through.
This behavior is implemented for both:
- Simple query flow
- Extended (prepared) query flow
When a simple query arrives while extended query messages are pending,
we now:
- inject an implicit Sync,
- process all extended query messages,
- then execute the simple query,
- and send ReadyForQuery only after the simple query completes.
RequestEnd was applying state changes (session variable restore,
rollback-to-savepoint, multiplexing toggle for temp tables/sequences)
even when the query failed to execute on the backend connection.
This caused internal state to diverge from the actual backend state.
Fix:
- Add success/failure flag to RequestEnd calls.
- Restrict state-changing logic to Simple Query and Prepared Execute.
- Ensure logic only runs when the query executed successfully on backend.
This keeps internal state aligned with the backend connection state.
PQsendQueryPrepared always emits Bind -> Describe Portal -> Execute, which led
to RowDescription being included in the result set even when the client never
sent a Describe message. This caused clients to receive row descriptions they
did not request.
Changes:
- Skip including RowDescription when the client did not send Describe.
- If the client explicitly sent Describe followed by Execute, continue to
skip redundant execution of Describe but include RowDescription once.
This ensures RowDescription is only sent when requested, aligning behavior
with protocol expectations.
Previously, any packet received while a query was still running was
considered unexpected, and the session was terminated. This often
occurred with pgJDBC, which pipelines commands (e.g., sending `BEGIN`
immediately followed by another statement).
Now, new packets are placed in a FIFO queue and processed only after the
current query finishes and its response is sent. This preserves correct
ordering, prevents unnecessary session termination, and improves
compatibility with connectors like pgJDBC.
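The FIFO change can be sketched as (illustrative Python; class and method names are invented, not the session code's):

```python
from collections import deque

class PipelineSketch:
    """Sketch of the fix: packets arriving while a query is running are
    queued and replayed in order once the current response is sent,
    instead of terminating the session."""

    def __init__(self):
        self.busy = False
        self.queue = deque()
        self.processed = []

    def on_packet(self, pkt: str):
        if self.busy:
            self.queue.append(pkt)   # previously: fatal "unexpected packet"
        else:
            self._run(pkt)

    def _run(self, pkt: str):
        self.busy = True
        self.processed.append(pkt)

    def on_response_sent(self):
        self.busy = False
        if self.queue:
            self._run(self.queue.popleft())

s = PipelineSketch()
s.on_packet("BEGIN")       # starts running immediately
s.on_packet("INSERT ...")  # pipelined by pgJDBC while BEGIN is in flight
s.on_response_sent()       # BEGIN done -> INSERT dequeued
s.on_response_sent()
assert s.processed == ["BEGIN", "INSERT ..."]  # order preserved
```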
In extended query mode, ReadyForQuery is normally deferred when there are
pending messages in the queue; it is sent only after the entire extended
query frame has been processed.
Edge case: if a message fails with an error while the queue still contains
pending messages, the queue is cleared later in the session and those
pending messages are discarded. In that case, ReadyForQuery would never be
sent.
Change: when a result indicates an error, send ReadyForQuery immediately.
The extended-query flag will still be reset later in the session. This
ensures ReadyForQuery is always emitted and prevents clients from waiting
indefinitely.
The PQsendQueryPrepared function transmits the sequence BIND ->
DESCRIBE(PORTAL) -> EXECUTE -> SYNC. However, libpq does not indicate
whether the DESCRIBE PORTAL step produces a NoData packet for commands
such as INSERT, DELETE, or UPDATE. In these cases, libpq returns
PGRES_COMMAND_OK, whereas SELECT statements yield PGRES_SINGLE_TUPLE or
PGRES_TUPLES_OK.
This update explicitly appends a NoData packet to the result in order to
provide consistent behavior across query types.
* Fixed a crash occurring during session destruction.
* Query rules will now apply only to the first message in an extended query frame.
* OK message will apply to execute message.
* Query rewrite, error messages, and large packet handling will apply to parse message.
* Added query processing support for the Bind message.
- Disable POST preprocessing option in `libhttpserver`.
- Validate `Content-Type` header in request handler for `POST /sync`
Fixes #5072
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
* Added tracking for pg_advisory_lock, with status reset only on pg_advisory_unlock_all
* Implemented support for CREATE SEQUENCE and CREATE TEMP SEQUENCE, with reset on DISCARD SEQUENCES
* Added handling for CREATE TEMP TABLE, with reset triggered by DISCARD TEMP
* Now actually resets the value to the server default (represented by a
nullptr value) if that parameter is not set in the startup parameters.
* Default parameter values (pgsql_default_*) will now be set only for critical parameters.
* Introduced startup parameters in PgSQL_Connection (removed default session parameters from PgSQL_Session)
* Startup parameters are populated during both frontend and backend connection creation
* Parameters provided via connection options are set as startup parameters
* Backend connection parameter handling updated: only critical variables are now set via connection options to prevent interference with DISCARD ALL during connection reset; remaining parameters will be applied using individual SET commands
* Marked functions ref_count_client and ref_count_server as noexcept since standard exceptions are not being handled and application crashes are acceptable in such cases.
Previously, issuing a PARSE with an existing statement name would silently overwrite the prepared statement. This fix ensures that named prepared statements cannot be overwritten and will raise an error if redefined. Only unnamed statements (empty name) are allowed to be replaced on reissue.
* Assert if client_stmt_name already exists in stmt_name_to_global_ids (PgSQL_STMTs_local_v14::client_insert)
* Assert if client_stmt_name is already mapped to same global_stmt_id in global_id_to_stmt_names (PgSQL_STMTs_local_v14::client_insert) [only in Debug]
Skip processing the Describe Portal. This is because libpq automatically includes a Describe Portal during execution, and sending it again would be redundant.
* Removed server_capabilities from PgSQL modules
* Introduced an error generator method to reduce code duplication
* Added several comments for clarity
* Addressed edge cases related to libpq limitations
- Reduce 'check_type' range to the only supported check ('read_only').
- Added table upgrade procedure.
- Fixed missing space in table definition (display for '\G').
- Always send Describe Portal to the backend
- Enforce check for unnamed portals in Bind, Describe, Close, and Execute; raise errors for named portals
- Improve error handling for extended query flow
- Introduce Extended_Query_Info struct to unify extended query-related members and reduce redundancy
The resource leak was associated with prepared statements (sqlite3_stmt).
Since the detection of these leaks is tricky, because the resources held
by 'SQLite3' never lose their references, a helper type
('stmt_unique_ptr') and a safer variation of 'prepare_v2' have been
introduced. This helper type automatically handles the lifetime of the
statements, avoiding this kind of leak in the future. The non-safe
alternative has been flagged as 'deprecated', so it can be easily
identified in later refactors or slowly removed during development.
Monitor is able to detect if a backend is a ReadySet server, and it enables special monitoring.
This special monitoring repurposes the replication lag checks: a ReadySet server is "monitored for
replication lag", but with a special query and a special handler.
The check query is "SHOW READYSET STATUS", and the `Status` line is processed:
* Online: the backend is configured as ONLINE
* Maintenance*: the backend is configured as OFFLINE_SOFT
* anything else, or a failed check: SHUNNED
A new monitor table is also added: `readyset_status_log`.
It has a structure similar to `mysql_server_replication_lag_log`, but instead of storing `repl_lag` (replication lag),
the full output of `SHOW READYSET STATUS` is saved as JSON (which can be queried using `JSON_EXTRACT()`).
The database specified in 'HandshakeResponse' was lost on the first
login when 'caching_sha2_password' was used. In this case a full
authentication was required, and this was the trigger for the context loss.
When creating an event log in binary format, a metadata packet is written.
The metadata is in JSON format.
It currently only provides the ProxySQL version.
For illustration purposes, the tool eventlog_reader_to_json.cpp supports it too.
This commit also includes some reformatting.
Due to a typo/confusion, the boundary used for the comments check was
'd_max_len' instead of 'q_len'. This prevented the correct detection of
a comment start when the query exceeded 'query_digests_max_query_length',
which determines the value of 'd_max_len'.
When logging the parameters of COM_STMT_EXECUTE, use sizeof(MYSQL_TIME)
for these types:
* MYSQL_TYPE_TIMESTAMP
* MYSQL_TYPE_DATE
* MYSQL_TYPE_TIME
* MYSQL_TYPE_DATETIME
Notes:
* sizeof(MYSQL_TIME) is 40
* client library may use MYSQL_TYPE_DATETIME for all of these types
In bufferTypeInfoMap, use sizeof(MYSQL_TIME) for the size of these types:
* MYSQL_TYPE_TIMESTAMP
* MYSQL_TYPE_DATE
* MYSQL_TYPE_TIME
* MYSQL_TYPE_DATETIME
Notes:
* sizeof(MYSQL_TIME) is 40
* client library may use MYSQL_TYPE_DATETIME for all of these types
In write_query_format_1():
* write the number of parameters (previously erroneously skipped)
* write the null_bitmap only if there are parameters
* write the parameter values (previously erroneously skipped)
In test_ps_logging-t.cpp:
* verify that query logging is configured to file
In eventlog_reader_to_json.cpp:
* added verbose logging
* added enum `log_event_type` for properly identifying event type
* parse all events in the input file
When logging COM_STMT_EXECUTE parameters, we check the stored parameters.
In the unexpected event (it should never happen) that either `session`
or `session->CurrentQuery.stmt_meta` is `nullptr`, we log 0 parameters
to ensure a deterministic format in the query logging file.
* Accept all parameters sent by the client, mirroring PostgreSQL's permissive handling.
* Validate and apply parameters only after successful authentication.
This avoids wasting resources on invalid connections.
Added:
* bufferTypeInfoMap: maps a MySQL type to its type name and to a function that converts the value to JSON
* extractStmtExecuteMetadataToJson(): generates a JSON object with the parameters
Modified write_query_format_1() to write parameters.
This is still a WIP