Remove unnecessary inheritance from MySQL_Threads_Handler. The MCP module
should be independent and not depend on MySQL/PostgreSQL thread handlers.
Changes:
- MCP_Threads_Handler now manages its own pthread_rwlock_t for synchronization
- Simplified init() signature (removed unused num/stack parameters)
- Added ProxySQL_Main_init_MCP_module() call in main initialization phase
- Include only standard C++ headers (pthread.h, cstring, cstdlib)
Add new MCP module supporting multiple MCP server endpoints over HTTPS
with JSON-RPC 2.0 protocol skeleton. Each endpoint (/mcp/config,
/mcp/observe, /mcp/query, /mcp/admin, /mcp/cache) is a distinct MCP
server with its own authentication configuration.
Features:
- HTTPS server using existing ProxySQL TLS certificates
- JSON-RPC 2.0 skeleton implementation (actual protocol TBD)
- 5 MCP endpoints with per-endpoint auth configuration
- LOAD/SAVE MCP VARIABLES admin commands
- Configuration file support (mcp_variables section)
Implementation follows GenAI module pattern:
- MCP_Threads_Handler: Main module handler with variable management
- ProxySQL_MCP_Server: HTTPS server wrapper using libhttpserver
- MCP_JSONRPC_Resource: Base endpoint class with JSON-RPC skeleton
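The JSON-RPC 2.0 skeleton can be pictured with a minimal envelope builder; the helper names below are illustrative, not the actual MCP_JSONRPC_Resource API:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical sketch: build minimal JSON-RPC 2.0 response envelopes such
// as a skeleton endpoint might return before the real protocol lands.
std::string jsonrpc_result(int id, const std::string& result_json) {
    char buf[512];
    snprintf(buf, sizeof(buf),
             "{\"jsonrpc\":\"2.0\",\"id\":%d,\"result\":%s}",
             id, result_json.c_str());
    return std::string(buf);
}

std::string jsonrpc_error(int id, int code, const std::string& message) {
    char buf[512];
    snprintf(buf, sizeof(buf),
             "{\"jsonrpc\":\"2.0\",\"id\":%d,\"error\":{\"code\":%d,\"message\":\"%s\"}}",
             id, code, message.c_str());
    return std::string(buf);
}
```

Each of the five endpoints would return such envelopes over its own HTTPS server instance.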
- Add check_genai_events() function for non-blocking epoll_wait on GenAI response fds
- Integrate GenAI event checking into main handler() WAITING_CLIENT_DATA case
- Add goto handler_again to process multiple GenAI responses in one iteration
The async GenAI architecture is now fully integrated. MySQL threads no longer
block when processing GENAI: queries - they send requests asynchronously via
socketpair and continue processing other queries while GenAI workers handle
the embedding/reranking operations.
- Add GenAI_RequestHeader and GenAI_ResponseHeader protocol structures for socketpair communication
- Implement GenAI listener_loop to read requests from epoll and queue to workers
- Implement GenAI worker_loop to process requests and send responses via socketpair
- Add GenAI_PendingRequest state management to MySQL_Session/Base_Session
- Implement MySQL_Session async handlers: genai_send_async(), handle_genai_response(), genai_cleanup_request()
- Modify MySQL_Session genai handler to use async path when epoll is available
- Initialize GenAI epoll fd in Base_Session::init()
This completes the async architecture that was planned but never fully implemented
(previously had only placeholder comments). The GenAI module now processes
requests asynchronously without blocking MySQL threads.
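The socketpair framing described above might look roughly like this; the header field names are assumptions, not the actual GenAI_RequestHeader/GenAI_ResponseHeader layout:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Illustrative fixed-size headers framing variable-length payloads.
struct RequestHeader  { uint64_t session_id; uint32_t opcode; uint32_t payload_len; };
struct ResponseHeader { uint64_t session_id; uint32_t status; uint32_t payload_len; };

// MySQL-thread side: send a framed request (header, then payload) and return.
bool send_request(int fd, uint64_t session_id, uint32_t opcode, const std::string& payload) {
    RequestHeader h{session_id, opcode, (uint32_t)payload.size()};
    if (write(fd, &h, sizeof(h)) != (ssize_t)sizeof(h)) return false;
    return write(fd, payload.data(), payload.size()) == (ssize_t)payload.size();
}

// Worker side: read one framed request back off the socketpair.
bool recv_request(int fd, RequestHeader& h, std::string& payload) {
    if (read(fd, &h, sizeof(h)) != (ssize_t)sizeof(h)) return false;
    payload.resize(h.payload_len);
    return read(fd, &payload[0], h.payload_len) == (ssize_t)h.payload_len;
}
```

Because the thread returns immediately after `send_request()`, the response fd can simply be added to the thread's epoll set and handled later.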
Move all JSON parsing and operation routing logic from MySQL_Session to
GenAI module. MySQL_Session now simply passes GENAI: queries to the GenAI
module via process_json_query(), which handles everything autonomously.
This simplifies the architecture and achieves better separation of concerns:
- MySQL_Session: Detects GENAI: prefix and forwards to GenAI module
- GenAI module: Handles JSON parsing, operation routing, and result formatting
Changes:
- GenAI_Thread.h: Add GENAI_OP_JSON operation type, json_query field, and
process_json_query() method declaration
- GenAI_Thread.cpp: Implement process_json_query() with embed/rerank support
and document_from_sql framework (stubbed for future MySQL connection handling)
- MySQL_Session.cpp: Simplify genai handler to just call process_json_query()
and parse JSON result (reduces net code by ~215 lines)
This commit refactors the experimental GenAI query syntax to use a single
GENAI: keyword with type-based operations instead of separate EMBED: and RERANK: keywords.
Changes:
- Replace EMBED: and RERANK: detection with unified GENAI: detection
- Merge genai_embedding and genai_rerank handlers into single genai handler
- Add 'type' field to operation JSON ("embed" or "rerank")
- Add 'columns' field for rerank operation (2 or 3, default 3)
- columns=2: Returns only index and score
- columns=3: Returns index, score, and document (default)
Old syntax:
EMBED: ["doc1", "doc2"]
RERANK: {"query": "...", "documents": [...], "top_n": 5}
New syntax:
GENAI: {"type": "embed", "documents": ["doc1", "doc2"]}
GENAI: {"type": "rerank", "query": "...", "documents": [...], "top_n": 5, "columns": 2}
This provides a cleaner, more extensible API for future GenAI operations.
This commit adds experimental support for reranking documents directly
from MySQL queries using a special RERANK: syntax.
Changes:
- Add handler___status_WAITING_CLIENT_DATA___STATE_SLEEP___MYSQL_COM_QUERY___genai_rerank()
- Add RERANK: query detection alongside EMBED: detection
- Implement JSON parsing for query, documents array, and optional top_n
- Build resultset with index, score, and document columns
- Use MySQL ERR_Packet for error handling
Query format: RERANK: {"query": "search query", "documents": ["doc1", "doc2", ...], "top_n": 5}
Result format: 1 row per result, 3 columns (index, score, document)
This commit adds experimental support for generating embeddings directly
from MySQL queries using a special EMBED: syntax.
Changes:
- Add MYDS_INTERNAL_GENAI to MySQL_DS_type enum for GenAI connections
- Add handler___status_WAITING_CLIENT_DATA___STATE_SLEEP___MYSQL_COM_QUERY___genai_embedding()
- Implement EMBED: query detection and JSON parsing for document arrays
- Build CSV resultset with embeddings (1 row per document, 1 column)
- Add myconn NULL check in MySQL_Thread for INTERNAL_GENAI type
- Add "debug_genai" name to debug module array
- Remove HAVE_LIBCURL checks (libcurl is always statically linked)
- Use static curl header: "curl/curl.h" instead of <curl/curl.h>
- Remove curl_global_cleanup() from GenAI module (should only be in main())
Query format: EMBED: ["doc1", "doc2", ...]
Result format: 1 row per document, 1 column with CSV embeddings
Error handling uses MySQL ERR_Packet instead of resultsets.
This change adds compile-time detection and fallback to poll() on systems
that don't support epoll(), improving portability across different platforms.
Header changes (include/GenAI_Thread.h):
- Make sys/epoll.h include conditional on #ifdef epoll_create1
Implementation changes (lib/GenAI_Thread.cpp):
- Add poll.h include for poll() support
- Add EPOLL_CREATE compatibility macro (epoll_create1 or epoll_create)
- Update init() to use pipe() for wakeup when epoll is not available
- Update register_client() to skip epoll_ctl when epoll is not available
- Update unregister_client() to skip epoll_ctl when epoll is not available
- Update listener_loop() to use poll() when epoll is not available
The compile-time detection works by checking if epoll_create1 is defined
(Linux-specific glibc function since 2.9). On systems without epoll, the
code falls back to using poll() with a pipe for wakeup signaling.
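A minimal version of the fallback could look like this; note this sketch keys off `__linux__` for brevity, whereas the commit checks whether epoll_create1 is defined:

```cpp
#include <cassert>
#include <poll.h>
#include <unistd.h>
#ifdef __linux__
#include <sys/epoll.h>
#endif

// Wait until fd becomes readable, using epoll where available
// and falling back to plain poll() elsewhere.
bool wait_readable(int fd, int timeout_ms) {
#ifdef __linux__
    int ep = epoll_create1(0);
    if (ep < 0) return false;
    struct epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
    struct epoll_event out;
    int rc = epoll_wait(ep, &out, 1, timeout_ms);
    close(ep);
    return rc == 1;
#else
    struct pollfd pfd = { fd, POLLIN, 0 };
    return poll(&pfd, 1, timeout_ms) == 1;
#endif
}
```

The pipe-based wakeup works the same way on both paths: writing one byte to the pipe makes its read end readable, waking the listener.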
Implement a new GenAI module for ProxySQL with basic infrastructure:
- GenAI_Threads_Handler class for managing GenAI module configuration
- Support for genai- prefixed variables in global_variables table
- Dummy variables: genai-var1 (string) and genai-var2 (integer)
- Config file support via genai_variables section
- Flush functions for runtime_to_database and database_to_runtime
- Module lifecycle: initialization at startup, graceful shutdown
- LOAD/SAVE GENAI VARIABLES admin command infrastructure
Core functionality verified:
- Config file loading works
- Variables persist in global_variables table
- Disk save/load via SQL works
- Module initializes and shuts down properly
Related files:
- include/GenAI_Thread.h: New GenAI thread handler class
- lib/GenAI_Thread.cpp: Implementation with dummy variables
- lib/Admin_Handler.cpp: Added GENAI command vectors and handlers
- lib/Admin_FlushVariables.cpp: Added genai flush functions
- lib/ProxySQL_Admin.cpp: Added init_genai_variables() and load_save_disk_commands entry
- include/proxysql_admin.h: Added function declarations
- lib/Makefile: Added GenAI_Thread.oo to build
- src/main.cpp: Added module initialization and cleanup
- src/proxysql.cfg: Added genai_variables configuration section
This commit addresses all review comments from gemini-code-assist on PR #5279:
1. Fixed FLUSH LOGS documentation - clarified that file is reopened for
appending, not truncating, and updated the note about preserving contents
2. Fixed callback documentation - clarified that the callback attaches to
all frontend connections, not just admin connections
3. Updated security warning - focused on passive eavesdropping and offline
decryption as the primary threats
4. Fixed typo: proxyql_ip -> proxysql_ip in tcpdump example
5. Removed misleading @see HPKP link - HPKP is unrelated to NSS Key Log
Format and is a deprecated feature
6. Updated NSS Key Log Format URL to use official MDN link instead of
unofficial mirror
7. Fixed buffer size comment to accurately reflect 256-byte buffer and
254-byte line length validation
8. Clarified fputs comment to emphasize the read lock's role in allowing
concurrent writes from multiple threads
This commit addresses critical issues identified in PR #5276 by
gemini-code-assist's code review, which could undermine the goal of
being allocation-free and cause hangs or silent failures.
Bug 1: Vector Passed by Value (Critical)
------------------------------------------
The function took std::vector<int> excludeFDs by value, causing heap
allocation during the copy operation. This undermines the PR's goal of
avoiding heap allocations after fork() to prevent deadlocks in
multi-threaded programs.
Fix: Change to pass by const reference to avoid heap allocation.
void close_all_non_term_fd(const std::vector<int>& excludeFDs)
Bug 2: Infinite Loop Risk (Critical)
------------------------------------
The loop used unsigned int for the variable while comparing against
rlim_t (unsigned long long). If rlim_cur exceeded UINT_MAX, this would
create an infinite loop.
Fix: Use rlim_t type for the loop variable and cap at INT_MAX.
for (rlim_t fd_rlim = 3; fd_rlim < nlimit.rlim_cur && fd_rlim <= INT_MAX; fd_rlim++)
Bug 3: close_range() Detection Logic (High)
------------------------------------------
The original detection logic had two problems:
1. Executed close_range syscall twice on first successful call
2. Incorrectly cached availability on transient failures (EINTR),
leaving file descriptors open without fallback
Fix: Reordered logic to only cache on success, allow retry on
transient failures. Only cache as "not available" on ENOSYS.
For other errors (EBADF, EINVAL, etc.), don't cache - might be transient.
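The corrected caching rules can be sketched as a small decision function; the syscall is injected as a callable purely so the table is easy to exercise, while the real code invokes close_range(2) directly:

```cpp
#include <cassert>
#include <cerrno>
#include <functional>

// Illustrative sketch of the fixed detection logic.
enum class Avail { Unknown, Yes, No };

// Returns true if the range was closed via the close_range-style call.
bool try_close_range(std::function<int(unsigned, unsigned)> sys_close_range,
                     unsigned first, unsigned last, Avail& cached) {
    if (cached == Avail::No) return false;       // known-unavailable: use fallback loop
    if (sys_close_range(first, last) == 0) {
        cached = Avail::Yes;                     // cache availability only on success
        return true;
    }
    if (errno == ENOSYS) cached = Avail::No;     // cache only the definitive failure
    return false;                                // EINTR/EBADF/...: don't cache, retry later
}
```

With this shape, a transient EINTR leaves `cached` at Unknown so the next fork can try the syscall again instead of silently leaving descriptors open.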
Files Modified
--------------
- include/proxysql_utils.h
- lib/proxysql_utils.cpp
This commit adds extensive documentation for the ssl_keylog_file feature
(introduced in PR #4236), which enables TLS key logging for debugging
encrypted traffic.
## Background
The ssl_keylog_file variable (exposed as admin-ssl_keylog_file in SQL
interface) allows ProxySQL to write TLS secrets to a file in NSS Key Log
Format. These secrets can be used by tools like Wireshark and tshark to
decrypt and analyze TLS traffic for debugging purposes.
## Changes
### Inline Documentation (Code)
1. include/proxysql_sslkeylog.h (+96 lines)
- File-level documentation explaining the module purpose and security
- Doxygen comments for all 5 public APIs
- Thread-safety annotations
- Parameter descriptions and return values
2. lib/proxysql_sslkeylog.cpp (+136 lines)
- Implementation-level documentation
- Algorithm explanations (double-checked locking, thread safety)
- Reference to NSS Key Log Format specification
3. include/proxysql_admin.h (+19 lines)
- Variable documentation for ssl_keylog_file
- Path handling rules (absolute vs relative)
- Security implications
### Developer Documentation (doc/ssl_keylog/ssl_keylog_developer_guide.md)
Target audience: Developers working on ProxySQL codebase
Contents:
- Variable naming convention (SQL vs config file vs internal)
- Architecture diagrams
- Thread safety model (pthread rwlock)
- NSS Key Log Format specification
- Complete API reference for all public functions
- Integration points in the codebase
- Security considerations and code review checklist
- Testing procedures
### User Documentation (doc/ssl_keylog/ssl_keylog_user_guide.md)
Target audience: End users and system administrators
Contents:
- What is SSL key logging and when to use it
- Variable naming: admin-ssl_keylog_file (SQL) vs ssl_keylog_file (config)
- Step-by-step enable/disable instructions
- Path resolution (absolute vs relative)
- Log rotation procedures
- Production workflow: tcpdump capture → offline analysis
- Wireshark (GUI) integration tutorial
- tshark (command-line) usage examples
- Troubleshooting common issues
- Security best practices
- Quick reference card
## Key Features Documented
1. **Variable Naming Convention**
- SQL interface: SET admin-ssl_keylog_file = '/path';
- Config file: ssl_keylog_file='/path' (in admin_variables section)
- Internal code: ssl_keylog_file
2. **Production Workflow**
- Capture traffic with tcpdump (no GUI on production server)
- Transfer pcap + keylog to analysis system
- Analyze offline with Wireshark (GUI) or tshark (CLI)
3. **tshark Examples**
- Command-line analysis of encrypted traffic
- Filter examples for debugging TLS issues
- JSON export for automated analysis
## Security Notes
The documentation emphasizes that:
- Key log files contain cryptographic secrets that decrypt ALL TLS traffic
- Access must be restricted (permissions 0600)
- Only enable for debugging, never in production
- Securely delete old key log files
## Files Modified
- include/proxysql_admin.h
- include/proxysql_sslkeylog.h
- lib/proxysql_sslkeylog.cpp
## Files Added
- doc/ssl_keylog/ssl_keylog_developer_guide.md
- doc/ssl_keylog/ssl_keylog_user_guide.md
Since ProxySQL 3.0.4, SELECT VERSION() queries were intercepted and returned
ProxySQL's mysql-server_version variable instead of proxying to backends.
This broke SQLAlchemy for MariaDB which expects "MariaDB" in the version
string.
This commit adds a new variable `mysql-select_version_forwarding` with 4 modes:
- 0 = never: Always return ProxySQL's version (3.0.4+ behavior)
- 1 = always: Always proxy to backend (3.0.3 behavior)
- 2 = smart (fallback to 0): Try backend connection, else ProxySQL version
- 3 = smart (fallback to 1): Try backend connection, else proxy (default)
The implementation includes:
- New global variable mysql_thread___select_version_forwarding
- New function get_backend_version_for_hostgroup() to peek at backend
connection versions without removing them from the pool
- Modified SELECT VERSION() handler to support all 4 modes
- ProxySQL backend detection to avoid recursion
Mode 3 (default) ensures SQLAlchemy always gets the real MariaDB version
string while maintaining fast response when connections are available.
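The four modes reduce to a small decision table; the helper below is a toy model of that table, not the actual handler:

```cpp
#include <cassert>
#include <string>

// Toy model: "backend" is nullptr when no pooled backend connection is
// available to peek at; forward_to_backend signals that the query should
// be proxied instead of answered locally.
std::string version_for_mode(int mode, const char* backend,
                             const std::string& proxysql_version,
                             bool& forward_to_backend) {
    forward_to_backend = false;
    switch (mode) {
    case 0:  return proxysql_version;                 // never: ProxySQL's version
    case 1:  forward_to_backend = true; return "";    // always: proxy to backend
    case 2:  if (backend) return backend;             // smart, fallback to mode 0
             return proxysql_version;
    default: if (backend) return backend;             // smart, fallback to mode 1
             forward_to_backend = true; return "";
    }
}
```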
* Change MySQL_Monitor_Connection_Pool::put_connection signature to accept MySQL_Monitor_State_Data* instead of raw MYSQL*/port.
* Centralize access to mysql and port via mmsd, reducing parameter mismatch and misuse.
* Improve DEBUG bookkeeping: ensure connections are properly unregistered from the global debug registry with clearer assertions and logs.
* Add consistent proxy_debug messages for connection register/unregister events.
* Simplify server lookup/creation logic when returning connections to the pool.
* Fix ordering of error handling to always unregister before closing connections.
* Minor cleanup: remove unused labels/variables and modernize casts.
* This refactor improves correctness, debuggability, and safety of monitor connection lifecycle management.
Logging messages now include 'client address', 'session status' and
'data stream status'. The client address is also logged when OK packets are
dispatched; this should help track whether a client has received the
expected packets.
Implements a workaround for the handling of unexpected 'COM_PING'
packets received during query processing, while a resultset is still being
streamed to the client. Received 'COM_PING' packets are queued in the
form of a counter. This counter is later used to send the corresponding
number of 'OK' packets to the client after 'MySQL_Session' has finished
processing the current query.
This commit documents:
1. The vacuum_stats() function's purpose, behavior, and the reason why
stats_pgsql_stat_activity is excluded from bulk deletion operations
2. The fact that stats_pgsql_stat_activity is a SQL VIEW (not a table)
and attempting DELETE on it would cause SQLite error:
"cannot modify stats_pgsql_stat_activity because it is a view"
The documentation explains:
- Why TRUNCATE stats_mysql_query_digest triggers vacuum_stats(true)
- Why both MySQL and PostgreSQL tables are cleared regardless of protocol
- How the view is automatically cleared via its underlying table
stats_pgsql_processlist
- The importance of keeping the view excluded from deletion lists
The `cache_empty_result` field in query rules has three possible values:
- -1: Use global setting (`query_cache_stores_empty_result`)
- 0: Do NOT cache empty resultsets, but cache non-empty resultsets
- 1: Always cache resultsets (both empty and non-empty)
Previously, when `cache_empty_result` was set to 0, nothing was cached at all,
even for non-empty resultsets. This prevented users from disabling caching
for empty resultsets while still allowing caching of non-empty resultsets
on a per-rule basis.
Changes:
1. Modified caching logic in MySQL_Session.cpp and PgSQL_Session.cpp to
add the condition `(qpo->cache_empty_result == 0 && MyRS->num_rows)`
(MySQL) and `(qpo->cache_empty_result == 0 && num_rows)` (PgSQL)
to allow caching when cache_empty_result=0 AND result has rows.
2. Added comprehensive Doxygen documentation in query_processor.h explaining
the semantics of cache_empty_result values.
3. Updated Query_Processor.cpp with inline comments explaining the
three possible values.
Now when cache_empty_result is set to 0:
- Empty resultsets (0 rows) are NOT cached
- Non-empty resultsets (>0 rows) ARE cached
- This matches the intended per-rule behavior described in issue #5248.
Fixes: https://github.com/sysown/proxysql/issues/5248
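The corrected decision reduces to a small predicate (a sketch; names are illustrative):

```cpp
#include <cassert>

// global_stores_empty stands in for query_cache_stores_empty_result.
bool should_cache(int cache_empty_result, bool global_stores_empty,
                  unsigned long num_rows) {
    if (cache_empty_result == 1) return true;            // cache everything
    if (cache_empty_result == 0) return num_rows > 0;    // fixed: cache non-empty only
    return num_rows > 0 || global_stores_empty;          // -1: defer to global setting
}
```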
This commit adds detailed Doxygen documentation for:
1. The ProxySQL_Config class - describes its role in configuration management
2. The Read_Global_Variables_from_configfile() method - documents its behavior,
parameters, return value, and the automatic prefix stripping feature
The documentation explains the automatic prefix stripping behavior that handles
cases where users mistakenly include module prefix (e.g., "mysql-") in variable
names within configuration files.
This change introduces PostgreSQL-aware tokenization by adding support for dollar-quoted strings, PostgreSQL’s double-quoted identifiers, and its comment rules. The tokenizer now correctly parses $$…$$ and $tag$…$tag$, treats " as an identifier delimiter in PostgreSQL, disables MySQL-only # comments, and accepts -- as a comment starter without requiring a trailing space. All new behavior is fully isolated behind the dialect flag to avoid impacting MySQL parsing.
Add PostgreSQL dollar-quoted strings
* New parser state: st_dollar_quote_string.
* Recognizes $$ … $$ and $tag$ … $tag$ sequences.
* Tracks opening tag and searches for matching terminator.
* Normalizes entire literal to ?.
* Integrated into get_next_st() and stage_1_parsing().
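The `$tag$` scanning can be sketched as a standalone helper (simplified: real PostgreSQL tags follow identifier rules, e.g. they cannot start with a digit; the real logic lives inside get_next_st()):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Given s[pos] == '$', return the length of the full $tag$...$tag$
// literal, or 0 if no valid dollar quote starts here. The caller can
// then normalize the whole literal to '?'.
size_t dollar_quote_len(const std::string& s, size_t pos) {
    size_t p = pos + 1;
    while (p < s.size() && (s[p] == '_' || isalnum((unsigned char)s[p]))) p++;
    if (p >= s.size() || s[p] != '$') return 0;      // opener never closed with '$'
    std::string tag = s.substr(pos, p - pos + 1);    // e.g. "$$" or "$tag$"
    size_t end = s.find(tag, p + 1);                 // search for matching terminator
    if (end == std::string::npos) return 0;          // unterminated literal
    return end + tag.size() - pos;
}
```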
The get_status_variable() function was only scanning worker threads
but ignoring auxiliary threads (idle threads) where timeout
terminations are detected. This caused the timeout termination
counter to show incorrect/zero values.
- Added idle thread scanning to both overloaded versions of
get_status_variable() function
- Now properly collects metrics from both worker and idle threads
- Fixes the issue where proxysql_mysql_timeout_terminated_connections_total
showed zero despite actual timeout terminations
Resolves the metrics reading issue identified in the previous commits.
Code improvements:
- Extract SESS_TO_SCAN_idle_thread constant to header file for better maintainability
- Replace magic number 128 with named constant in idle_thread_to_kill_idle_sessions()
- Improve code readability and consistency in session scanning logic
Test enhancements:
- Add mysql-poll_timeout configuration for more precise timeout testing
- Reduce test sleep times to 13 seconds for faster test execution
- Add diagnostic messages to clearly show timeout configurations in test output
- Ensure tests properly validate timeout enforcement with precise timing
The changes improve code maintainability and make tests more reliable and faster
while maintaining accurate timeout validation.
- Add wait_timeout member variable declaration to Base_Session class
- Fix constructor initialization to use this->wait_timeout
- Fix assignment in handler to properly scope member variable
- Resolves compilation error for wait_timeout functionality
PROBLEM:
The initial fix used a DDL detection approach which required maintaining a list
of query types that should return 0 affected rows. This approach was brittle
and could miss edge cases like commented queries or complex statements.
SOLUTION:
Instead of detecting DDL queries, use sqlite3_total_changes64() to measure the
actual change count before and after each query execution. The difference between
total_changes before and after represents the true affected rows count for the
current query, regardless of query type.
CHANGES:
- Added proxy_sqlite3_total_changes64 function pointer and initialization
- Rewrote execute_statement() and execute_statement_raw() to use total_changes
difference approach
- This automatically handles all query types (DDL, DML, comments, etc.)
- Added comprehensive TAP test covering INSERT, CREATE, DROP, VACUUM, UPDATE, and
BEGIN operations
BENEFITS:
- More robust and accurate than DDL detection approach
- Handles edge cases like commented queries automatically
- No maintenance overhead for new query types
- Simpler and cleaner implementation
- Still fixes both Admin interface and SQLite3 Server
This approach is mathematically sound: affected_rows = total_changes_after -
total_changes_before, which gives the exact number of rows changed by the current
query execution.
Fixes #4855
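The before/after delta idea can be shown without SQLite at all; the FakeDb counter below stands in for sqlite3_total_changes64():

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a connection whose engine bumps a monotonically increasing
// total-changes counter only when rows are actually changed (DML).
struct FakeDb { int64_t total_changes = 0; };

// The fix's arithmetic: affected rows = total_changes_after - total_changes_before.
// DDL, VACUUM and BEGIN change no rows, so their delta is naturally 0.
int64_t execute_and_count(FakeDb& db, int64_t rows_changed_by_stmt) {
    int64_t before = db.total_changes;
    db.total_changes += rows_changed_by_stmt;   // simulate statement execution
    int64_t after = db.total_changes;
    return after - before;
}
```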
- This patch was originally added by commit 0a70fd5 and
reverted by 8d1b5b5, prior to the release of `v3.0.3`.
- The following issues are addressed in this update:
  - Fix for `use-after-free` issue which occurred during CI test.
  - Fix for deadlock issue between `GTID_syncer` and `MySQL_Worker`.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Concurrency and Memory Management
* Lock-Free Ref Counting: Replaced global mutex-protected integer reference counts with `std::atomic<uint32_t>` within `PgSQL_STMT_Global_info`, eliminating lock contention during statement referencing.
* Modern Ownership: Adopted std::shared_ptr<const PgSQL_STMT_Global_info> for global and local storage, providing automatic, thread-safe memory and lifecycle management.
* Memory Optimization: Removed redundant auxiliary maps `global_id_to_stmt_names` and `map_stmt_id_to_info` from local and global statement managers respectively, reducing overall memory overhead.
* Optimized Purging: Statement removal logic was simplified for efficiently identifying and cleaning up unused statements.
Hot Path Performance (`BIND`, `DESCRIBE`, `EXECUTE`)
* Bypassed Global Lookups: Local session maps now store the `shared_ptr` directly, removing the need to acquire the global lock and search the global map during hot path operations.
* Direct Refcount Manipulation: Refcount modification functions now operate directly on the passed statement object, eliminating the overhead of searching the global map to find the object pointer based on statement id.
Safety and Protocol Logic (`PARSE`)
* Efficient Statement Reuse: Implemented a **local fast path** check for the unnamed statement (`""`), allowing immediate reuse of an identical query (same hash) upon re-parse, which bypasses global processing and locks.
Cleanup
* Cleaned up and class rename `PgSQL_STMT_Manager_v14` -> `PgSQL_STMT_Manager`.
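The lock-free refcount shape can be sketched as follows (field and helper names are illustrative, not the actual PgSQL_STMT_Global_info members):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <memory>
#include <string>

// The refcount lives inside the statement-info object as an atomic, so
// sessions can bump it without taking the manager's global lock.
struct STMT_Global_info {
    std::string query;
    mutable std::atomic<uint32_t> ref_count{0};  // mutable: shared via const pointers
};

using stmt_ptr = std::shared_ptr<const STMT_Global_info>;

// Operate directly on the passed object: no global-map lookup needed.
void add_ref(const stmt_ptr& s) { s->ref_count.fetch_add(1, std::memory_order_relaxed); }
void release(const stmt_ptr& s) { s->ref_count.fetch_sub(1, std::memory_order_relaxed); }
```

The `shared_ptr` handles lifetime; the atomic counter only tracks how many sessions currently reference the statement.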
Problem: In fast forward mode, ProxySQL forwards packets directly from client
to backend without buffering them. If the backend connection closes
unexpectedly (e.g., due to server crash, network failure, or other issues),
ProxySQL immediately closes the client session. This can result in data loss
because the client may have sent additional data that hasn't been fully
transmitted yet, as ProxySQL does not wait for the output buffers to drain.
Solution: Implement a configurable grace period for session closure in fast
forward mode. When the backend closes unexpectedly, instead of closing the
session immediately, ProxySQL waits for a configurable timeout
(fast_forward_grace_close_ms, default 5000ms) to allow any pending client
output data to be sent. During this grace period:
- If the client output buffers become empty, the session closes gracefully.
- If the timeout expires, the session closes anyway to prevent indefinite
hanging.
Changes:
- Added global variable mysql_thread___fast_forward_grace_close_ms (0-3600000ms)
- Added session flags: backend_closed_in_fast_forward, fast_forward_grace_start_time
- Added data stream flag: defer_close_due_to_fast_forward
- Modified MySQL_Data_Stream::read_from_net() to detect backend EOF and initiate
grace close if client buffers are not empty
- Modified MySQL_Session::handler() FAST_FORWARD case to implement grace close
logic with timeout and buffer checks
- Added extensive inline documentation explaining the feature and its mechanics
This prevents data loss in fast forward scenarios while maintaining bounded
session lifetime.
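The grace-period rules reduce to a pure decision helper (a sketch, not the actual FAST_FORWARD handler code):

```cpp
#include <cassert>
#include <cstdint>

enum class GraceAction { KeepWaiting, CloseGracefully, CloseOnTimeout };

// Decide what to do on each handler pass after the backend closed.
GraceAction grace_close_decision(bool client_buffer_empty,
                                 uint64_t now_ms, uint64_t grace_start_ms,
                                 uint64_t grace_close_ms) {
    if (client_buffer_empty) return GraceAction::CloseGracefully;  // output drained
    if (now_ms - grace_start_ms >= grace_close_ms)
        return GraceAction::CloseOnTimeout;                        // bounded lifetime
    return GraceAction::KeepWaiting;                               // keep flushing output
}
```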
Previously, the parser always tokenized the full command, even when we only
needed to check whether it was a transaction command. Now, it first extracts
the first word to determine relevance and performs full tokenization only
when necessary.
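The fast path might be sketched like this (the keyword list is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Extract the first alphabetic word of a query, uppercased.
std::string first_word(const std::string& q) {
    size_t i = 0;
    while (i < q.size() && isspace((unsigned char)q[i])) i++;
    size_t j = i;
    while (j < q.size() && isalpha((unsigned char)q[j])) j++;
    std::string w = q.substr(i, j - i);
    std::transform(w.begin(), w.end(), w.begin(), ::toupper);
    return w;
}

// Only queries whose first word could start a transaction command are
// worth handing to the full tokenizer.
bool may_be_transaction_command(const std::string& q) {
    const std::string w = first_word(q);
    return w == "BEGIN" || w == "COMMIT" || w == "ROLLBACK" ||
           w == "START" || w == "SAVEPOINT" || w == "RELEASE";
}
```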
According to MySQL protocol, variable length strings are encoded using
length encoded integers. For reference, see:
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_com_stmt_execute.html
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_basic_dt_integers.html#a_protocol_type_int2
The protocol specifies that values greater than 2^24 (16777216) should
be encoded using '0xFE + 8-byte integer'. Yet, in reality MySQL ignores
the upper section of these 8-byte integers, treating them effectively
like '4-byte' values. For the sake of compatibility, this commit changes the
decoding behavior for 'COM_STMT_EXECUTE' to match MySQL's. This
difference is subtle but important, since in practice MySQL itself
doesn't use the '8 bytes' from the field. This means that connectors
compatible with MySQL could run into issues when sending these
packets through ProxySQL (like the NodeJS 'mysql2' connector, which writes
the 8 bytes as a 4-byte duplication, motivating these changes),
a situation that could result in rejection due to malformed-packet
detection (or crashes/invalid handling in the worst-case scenario).
The previous decoding function is now renamed into
'mysql_decode_length_ll' to honor MySQL naming 'net_field_length_ll'.
For now, this protocol change is limited to 'COM_STMT_EXECUTE'.
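A sketch of the compatible decoder (not the actual ProxySQL function):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Decode a MySQL length-encoded integer; for the 0xFE marker the wire
// carries 8 bytes, but (matching MySQL's COM_STMT_EXECUTE handling) only
// the low 4 bytes are honored.
uint64_t decode_lenenc_compat(const unsigned char* p, size_t* consumed) {
    if (p[0] < 0xfb) { *consumed = 1; return p[0]; }
    if (p[0] == 0xfc) {                       // 2-byte integer follows
        *consumed = 3;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8);
    }
    if (p[0] == 0xfd) {                       // 3-byte integer follows
        *consumed = 4;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8) | ((uint64_t)p[3] << 16);
    }
    // 0xfe: consume all 8 bytes but ignore the upper 4, like MySQL does
    *consumed = 9;
    uint64_t v = 0;
    for (int i = 0; i < 4; i++) v |= (uint64_t)p[1 + i] << (8 * i);
    return v;
}
```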
When true, all `min_gtid` query annotations are ignored; see
https://proxysql.com/documentation/query-annotations/ for details.
This is useful on ProxySQL setups with multiple layers, where some
layers mandate GTID-based routing while others don't.
- Add new mysql/pgsql variable `processlist_max_query_length`.
- Min: 1K
- Max: 32M
- Default: 2M
- Truncate current query based on the configuration before inserting into
`stats_*_processlist` tables.
- Refactor/fix code related to other processlist configurations.
1. `session_idle_show_processlist` value was not updated in `ProxySQL_Admin.variables`.
2. Pass processlist config as an argument to `MySQL_Threads_Handler::SQL3_Processlist`
instead of using thread-local variables.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
This message is dumped with each call to 'process_pkt_handshake_response',
printing the updated context. When the verbosity value for module
'debug_mysql_protocol' is >= 5, the stored and client supplied passwords
will be dumped in HEX format, for values < 5, the passwords will be
masked.
Previously, query cache metrics were shared between MySQL and PostgreSQL,
causing both to reflect the same values when performing cache operations.
This change isolates the metrics for each database type.
- Added `backend_pid` and `backend_state` columns to `stats_pgsql_processlist`
to display PostgreSQL backend process ID and connection state.
- Created `stats_pgsql_stat_activity` view on top of `stats_pgsql_processlist`
with column aliases matching PostgreSQL's `pg_stat_activity` for consistency.
These parameters use capitalized names in PostgreSQL for historical reasons.
ProxySQL now sends them using canonical capitalization to ensure client compatibility.
Add support for PostgreSQL query cancellation and backend termination
features to allow clients to cancel long-running queries and terminate
connections through the standard PostgreSQL protocol.
Features implemented:
- Intercept pg_backend_pid() queries and return ProxySQL session thread ID
- Intercept pg_terminate_backend() to terminate client connections asynchronously
- Intercept pg_cancel_backend() to cancel queries on backend connections
- Support Cancel Request protocol via separate connection with PID and secret key validation
- Return BackendKeyData message on successful authentication with session thread ID and unique cancel secret key
This enables clients to use standard PostgreSQL cancellation mechanisms
(pg_cancel_backend, pg_terminate_backend, and Cancel Request protocol)
while ProxySQL maintains proper session isolation and maps client requests
to appropriate backend connections.
Previously, each extended-query block was terminated with a SYNC,
which caused implicit transactions to commit prematurely. As a result,
earlier write operations (INSERT/UPDATE/DELETE) could not be rolled
back if a later statement in the same sequence failed.
This change switches to libpq pipeline mode and replaces intermediate
SYNC messages with FLUSH, ensuring that all client query frames execute
as part of the same implicit transaction. A final SYNC is still issued
to resynchronize the connection and make it safe for reuse in the pool.
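At the wire level the change amounts to which terminator each block gets: PostgreSQL's Flush ('H') and Sync ('S') messages are each a tag byte plus a 4-byte big-endian length of 4. In real code this is driven through libpq pipeline-mode calls (PQenterPipelineMode, PQsendFlushRequest, PQpipelineSync), but the framing itself is simple:

```cpp
#include <cassert>
#include <string>

// One protocol frame: tag byte + big-endian int32 length (always 4 here,
// since Flush and Sync carry no body).
std::string frame(char tag) {
    return std::string{tag, 0, 0, 0, 4};
}

// Terminate n extended-query blocks: intermediate blocks get Flush so the
// implicit transaction stays open; only the last block gets Sync, which
// resynchronizes the connection for reuse in the pool.
std::string terminate_blocks(size_t nblocks) {
    std::string out;
    for (size_t i = 0; i + 1 < nblocks; i++) out += frame('H');
    if (nblocks > 0) out += frame('S');
    return out;
}
```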
- Add validation methods for `mysql_users`, `pgsql_users`, `mysql_servers`,
`pgsql_servers` and `proxysql_servers`
- Check for duplicates and mandatory fields
- Return descriptive error messages to clients when validation fails
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Co-authored-by: takaidohigasi <takaidohigasi@gmail.com>
Previously, deleting `PgSQL_Errors_stats` instances in TUs with only a forward
declaration caused the destructor to be skipped, leaking member allocations.
The fix ensures the full class definition is visible at delete sites.
Previously, Parse and Describe each had their own query result handling
paths. This duplicated a lot of logic and also failed to handle some
cases correctly, for example Notice messages returned by the server
during extended-protocol queries. Keeping these separate paths would
be hard to maintain and prone to bugs.
The simple-query result handling is already mature, optimized, and
covers all the necessary cases. Reusing it for Parse and Describe
makes behavior consistent across simple and extended query flows,
while also reducing duplicate code.
When a simple query arrives while extended query messages are pending,
we now:
- inject an implicit Sync,
- process all extended query messages,
- then execute the simple query,
- and send ReadyForQuery only after the simple query completes.
RequestEnd was applying state changes (session variable restore,
rollback-to-savepoint, multiplexing toggle for temp tables/sequences)
even when the query failed to execute on the backend connection.
This caused internal state to diverge from the actual backend state.
Fix:
- Add success/failure flag to RequestEnd calls.
- Restrict state-changing logic to Simple Query and Prepared Execute.
- Ensure logic only runs when the query executed successfully on backend.
This keeps internal state aligned with the backend connection state.
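A hypothetical reduction of the guard described above (names are illustrative, not the actual RequestEnd signature): state-changing work runs only for Simple Query and Prepared Execute, and only when the backend reported success.

```cpp
// Sketch of the gating condition added to RequestEnd.
enum class QueryType { SIMPLE_QUERY, PREPARED_EXECUTE, OTHER };

// Session-variable restore, rollback-to-savepoint, and multiplexing
// toggles are applied only when this returns true.
bool should_apply_state_changes(QueryType type, bool backend_success) {
    if (!backend_success) return false;  // keep state aligned with backend
    return type == QueryType::SIMPLE_QUERY || type == QueryType::PREPARED_EXECUTE;
}
```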
PQsendQueryPrepared always emits Bind -> Describe Portal -> Execute, which led
to RowDescription being included in the result set even when the client never
sent a Describe message. This caused clients to receive row descriptions they
did not request.
Changes:
- Skip including RowDescription when the client did not send Describe.
- If the client explicitly sent Describe followed by Execute, continue to
skip redundant execution of Describe but include RowDescription once.
This ensures RowDescription is only sent when requested, aligning behavior
with protocol expectations.
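The decision can be sketched as a small piece of per-portal state (hypothetical struct, not the actual implementation): RowDescription is forwarded only when the client sent Describe, and at most once per portal.

```cpp
// Sketch: tracks whether RowDescription should be forwarded to the client.
struct PortalState {
    bool client_sent_describe = false;
    bool row_description_sent = false;

    // True exactly once, and only if the client asked for a Describe.
    bool should_forward_row_description() {
        if (!client_sent_describe || row_description_sent) return false;
        row_description_sent = true;
        return true;
    }
};
```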
The PQsendQueryPrepared function transmits the sequence BIND ->
DESCRIBE(PORTAL) -> EXECUTE -> SYNC. However, libpq does not indicate
whether the DESCRIBE PORTAL step produces a NoData packet for commands
such as INSERT, DELETE, or UPDATE. In these cases, libpq returns
PGRES_COMMAND_OK, whereas SELECT statements yield PGRES_SINGLE_TUPLE or
PGRES_TUPLES_OK.
This update explicitly appends a NoData packet to the result in order to
provide consistent behavior across query types.
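The decision reduces to the result status (local enum values here are stand-ins for the PGRES_* constants from libpq-fe.h): command-style results get a NoData packet appended, row-producing results do not.

```cpp
// Hypothetical stand-ins for libpq's ExecStatusType values named above.
enum class ExecStatus { COMMAND_OK, SINGLE_TUPLE, TUPLES_OK };

// INSERT/DELETE/UPDATE come back as COMMAND_OK with no row description,
// so a NoData packet is appended for the Describe Portal step; SELECT
// results (SINGLE_TUPLE / TUPLES_OK) carry their own RowDescription.
bool append_nodata(ExecStatus s) {
    return s == ExecStatus::COMMAND_OK;
}
```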
* Fixed a crash occurring during session destruction.
* Query rules will now apply only to the first message in an extended query frame.
* The OK message now applies to the Execute message.
* Query rewrite, error messages, and large packet handling now apply to the Parse message.
* Added query processing support for the Bind message.
* Added tracking for pg_advisory_lock, with status reset only on pg_advisory_unlock_all
* Implemented support for CREATE SEQUENCE and CREATE TEMP SEQUENCE, with reset on DISCARD SEQUENCES
* Added handling for CREATE TEMP TABLE, with reset triggered by DISCARD TEMP
* The value is now actually reset to the server default (represented by a
nullptr value) when that parameter is not set as a startup parameter.
* Default parameter values (pgsql_default_*) will now be set only for critical parameters.
* Introduced startup parameters in PgSQL_Connection (removed default session parameters from PgSQL_Session)
* Startup parameters are populated during both frontend and backend connection creation
* Parameters provided via connection options are set as startup parameters
* Backend connection parameter handling updated: only critical variables are now set via connection options to prevent interference with DISCARD ALL during connection reset; remaining parameters will be applied using individual SET commands
* Marked functions ref_count_client and ref_count_server as noexcept since standard exceptions are not being handled and application crashes are acceptable in such cases.
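The reset-to-server-default behavior described above can be sketched as follows (types and names are hypothetical; nullopt plays the role of the nullptr "server default" value):

```cpp
#include <map>
#include <optional>
#include <string>

// Sketch: session parameters are reset to the value recorded at startup;
// a parameter absent from the startup set resets to nullopt, which
// stands for "use the server default".
using ParamMap = std::map<std::string, std::optional<std::string>>;

std::optional<std::string> reset_value(const ParamMap& startup,
                                       const std::string& name) {
    auto it = startup.find(name);
    if (it == startup.end()) return std::nullopt;  // back to server default
    return it->second;
}
```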
Skip processing the Describe Portal: libpq automatically includes a Describe Portal during execution, so sending it again would be redundant.
* Removed server_capabilities from PgSQL modules
* Introduced an error generator method to reduce code duplication
* Added several comments for clarity
* Addressed edge cases related to libpq limitations
- Restrict 'check_type' to the only supported check ('read_only').
- Added table upgrade procedure.
- Fixed missing space in table definition (display for '\G').
- Always send Describe Portal to the backend
- Enforce check for unnamed portals in Bind, Describe, Close, and Execute; raise errors for named portals
- Improve error handling for extended query flow
- Introduce Extended_Query_Info struct to unify extended query-related members and reduce redundancy
The resource leak was associated with prepared statements (sqlite3_stmt).
Since the detection of these leaks is tricky, because the resources held
by 'SQLite3' never lose their references, a helper type
('stmt_unique_ptr') and a safer variation of 'prepare_v2' have been
introduced. This helper type automatically handles the lifetime of the
statements, avoiding this kind of leak in the future. The non-safe
alternative has been flagged as 'deprecated', so it can be easily
identified in later refactors or slowly removed during development.
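The helper type presumably follows the standard custom-deleter pattern; a self-contained sketch using a mock statement type (real code would pair sqlite3_stmt with sqlite3_finalize as the deleter):

```cpp
#include <memory>

// Mock resource standing in for sqlite3_stmt (hypothetical, so the
// example runs without linking SQLite).
struct fake_stmt { };
inline int finalize_calls = 0;
inline int fake_finalize(fake_stmt* s) { ++finalize_calls; delete s; return 0; }

// The helper type: the statement is finalized automatically when the
// owning pointer goes out of scope, even on early returns or exceptions.
using stmt_unique_ptr = std::unique_ptr<fake_stmt, int (*)(fake_stmt*)>;

inline stmt_unique_ptr make_stmt() {
    return stmt_unique_ptr(new fake_stmt, &fake_finalize);
}
```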
Monitor is able to detect if a backend is a ReadySet server, and it enables special monitoring.
The special monitoring hacks the replication lag check: a ReadySet server is "monitored for
replication lag", but with a special query and a special handler.
The check query is "SHOW READYSET STATUS", and the `Status` line is processed:
* Online: the backend is configured as ONLINE
* Maintenance* : the backend is configured as OFFLINE_SOFT
* anything else, or failed check: SHUNNED
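The Status mapping above can be sketched as (function name and types are illustrative; a null input stands for a failed check):

```cpp
#include <string>

enum class ServerStatus { ONLINE, OFFLINE_SOFT, SHUNNED };

// nullptr means the check failed (no Status line obtained).
ServerStatus map_readyset_status(const std::string* status_line) {
    if (status_line == nullptr) return ServerStatus::SHUNNED;
    if (*status_line == "Online") return ServerStatus::ONLINE;
    if (status_line->rfind("Maintenance", 0) == 0)  // prefix match
        return ServerStatus::OFFLINE_SOFT;
    return ServerStatus::SHUNNED;                   // anything else
}
```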
A new monitor table is also added: `readyset_status_log`.
It has a structure similar to `mysql_server_replication_lag_log`, but instead of storing `repl_lag` (replication lag),
the full output of `SHOW READYSET STATUS` is saved as JSON (which can be queried using `JSON_EXTRACT()`).
When creating an event log in binary format, a metadata packet is written.
The metadata is in JSON format.
It currently provides only the ProxySQL version.
For illustration purposes, tool eventlog_reader_to_json.cpp supports it too.
This commit also includes some reformatting.
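For illustration only, such a metadata packet could look like the following (the field names and version value are hypothetical; the commit only states that the ProxySQL version is included):

```json
{
  "ProxySQL": {
    "version": "2.6.0"
  }
}
```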
* Accept all parameters sent by the client, mirroring PostgreSQL's permissive handling.
* Validate and apply parameters only after successful authentication.
This avoids wasting resources on invalid connections.