- Rename llm_search_log column from "limit" to "lmt" to avoid SQL reserved keyword
- Add FTS inserts to all LLM artifact upsert functions:
- add_question_template(): index question templates for search
- add_llm_note(): index notes for search
- upsert_llm_summary(): index object summaries for search
- upsert_llm_domain(): index domains for search
- upsert_llm_metric(): index metrics for search
- Remove content='' from fts_llm table to store content directly
- Add <functional> header for std::hash usage
This fixes the bug where llm_search always returned empty results
because the FTS index was never populated.
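The commit also adds the `<functional>` header for `std::hash`. A minimal sketch of the kind of use this enables, hashing an artifact's schema and text into a stable in-process key (the helper name and the hash-combine constant are illustrative, not the actual ProxySQL code):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical helper: derive a deterministic in-process key for an LLM
// artifact before indexing it. std::hash requires <functional>.
inline uint64_t artifact_hash(const std::string& schema, const std::string& text) {
    std::hash<std::string> h;
    uint64_t seed = h(schema);
    // Standard hash-combine mixing step (boost-style constant).
    seed ^= h(text) + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2);
    return seed;
}
```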
Add query_tool_calls table to Discovery Schema to track all MCP tool
invocations via the /mcp/query/ endpoint. Logs:
- tool_name: Name of the tool that was called
- schema: Schema name (nullable, empty if not applicable)
- run_id: Run ID from discovery (nullable, 0 if not applicable)
- start_time: Start monotonic time in microseconds
- execution_time: Execution duration in microseconds
- error: Error message (null if success)
Modified files:
- Discovery_Schema.cpp: Added table creation and log_query_tool_call function
- Discovery_Schema.h: Added function declaration
- Query_Tool_Handler.cpp: Added logging after each tool execution
Extend the stats_mcp_query_tools_counters table with timing statistics
(first_seen, last_seen, sum_time, min_time, max_time) following the
same pattern as stats_mysql_query_digest.
All timing values are in microseconds using monotonic_time().
New schema:
- tool VARCHAR
- schema VARCHAR
- count INT
- first_seen INTEGER (microseconds)
- last_seen INTEGER (microseconds)
- sum_time INTEGER (microseconds - total execution time)
- min_time INTEGER (microseconds - minimum execution time)
- max_time INTEGER (microseconds - maximum execution time)
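A hedged sketch of how a counter row with these timing columns is typically maintained in memory (struct and method names are hypothetical, not the actual ProxySQL code; all values in microseconds):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical in-memory row mirroring the extended
// stats_mcp_query_tools_counters schema.
struct ToolCounters {
    uint64_t count      = 0;
    uint64_t first_seen = 0;           // monotonic time of first invocation
    uint64_t last_seen  = 0;           // monotonic time of latest invocation
    uint64_t sum_time   = 0;           // total execution time
    uint64_t min_time   = UINT64_MAX;  // minimum execution time
    uint64_t max_time   = 0;           // maximum execution time

    void record(uint64_t now_us, uint64_t exec_us) {
        if (count == 0) first_seen = now_us;  // set once, on first call
        last_seen = now_us;
        count++;
        sum_time += exec_us;
        min_time = std::min(min_time, exec_us);
        max_time = std::max(max_time, exec_us);
    }
};
```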
The MCP catalog database is now accessible as the 'mcp_catalog' schema
from the ProxySQL Admin interface, enabling direct SQL queries against
discovered schemas and LLM memories.
Remove the mcp-catalog_path configuration variable and hardcode the catalog
database path to datadir/mcp_catalog.db for stability.
Rationale: The catalog database is session state, not user configuration.
Swapping the catalog at runtime could cause tables to be missed, and
catalog operations could start failing even though they were succeeding
moments earlier.
Changes:
- Removed catalog_path from mcp_thread_variables_names array
- Removed mcp_catalog_path from MCP_Thread variables struct
- Removed getter/setter logic for catalog_path
- Hardcoded catalog path to GloVars.datadir/mcp_catalog.db in:
- ProxySQL_MCP_Server.cpp (Query_Tool_Handler initialization)
- Admin_FlushVariables.cpp (MySQL_Tool_Handler reinitialization)
- Updated VARIABLES.md to document the hardcoded path
- Updated configure_mcp.sh to remove catalog_path configuration
- Updated MCP README to remove catalog_path references
Add stats_mcp_query_tools_counters and stats_mcp_query_tools_counters_reset
tables to track MCP query tool usage statistics.
- Added get_tool_usage_stats_resultset() method to Query_Tool_Handler
- Defined table schemas in ProxySQL_Admin_Tables_Definitions.h
- Registered tables in Admin_Bootstrap.cpp
- Added pattern matching in ProxySQL_Admin.cpp
- Added stats___mcp_query_tools_counters() in ProxySQL_Admin_Stats.cpp
- Fixed friend declaration for track_tool_invocation()
- Fixed Discovery_Schema.cpp log_llm_search() to use prepare_v2/finalize
* Add full support for both HTTP and HTTPS modes in the MCP server via the mcp_use_ssl configuration variable, enabling plain HTTP for development and HTTPS with proper certificate validation for production.
* The server now automatically restarts when the SSL mode or port configuration changes, fixing silent configuration failures where changes appeared to succeed but did not take effect until a manual restart.
Features:
- Explicit support for HTTP mode (mcp_use_ssl=false) without SSL certificates
- Explicit support for HTTPS mode (mcp_use_ssl=true) with certificate validation
- Configurable via configure_mcp.sh with --no-ssl or --use-ssl flags
- Settable via admin interface: SET mcp-use_ssl=true/false
- Automatic restart detection for SSL mode changes (HTTP ↔ HTTPS)
- Automatic restart detection for port changes (mcp_port)
- Add mcp_config.example.json for Claude Code MCP configuration
- Fix MCP bridge path in example config (../../proxysql_mcp_stdio_bridge.py)
- Update Two_Phase_Discovery_Implementation.md with correct Phase 1/Phase 2 usage
- Fix Two_Phase_Discovery_Implementation.md DELETE FROM fts_objects to scope to run_id
- Update README.md with two-phase discovery section and multi-agent legacy note
- Create static_harvest.sh bash wrapper for Phase 1
- Create two_phase_discovery.py orchestration script with prompts
- Add --run-id parameter to skip auto-fetch
- Fix RUN_ID placeholder mismatch (<USE_THE_PROVIDED_RUN_ID>)
- Fix catalog path default to mcp_catalog.db
- Add test_catalog.sh to verify catalog tools work
- Fix Discovery_Schema.cpp FTS5 syntax (missing space)
- Remove invalid CREATE INDEX on FTS virtual tables
- Add MCP tool call logging to track tool usage
- Fix Static_Harvester::get_harvest_stats() to accept run_id parameter
- Fix DELETE FROM fts_objects to only delete for specific run_id
- Update system prompts to say DO NOT call discovery.run_static
- Update user prompts to say Phase 1 is already complete
- Add --mcp-only flag to restrict Claude Code to MCP tools only
- Make FTS table failures non-fatal (check if table exists first)
- Add comprehensive documentation for both discovery approaches
- Rename NL2SQL_Converter to LLM_Bridge for generic prompt processing
- Update MySQL protocol handler from /* NL2SQL: */ to /* LLM: */
- Remove SQL-specific fields (sql_query, confidence, tables_used)
- Add GENAI_OP_LLM operation type to GenAI module
- Rename all genai_nl2sql_* variables to genai_llm_*
- Update AI_Features_Manager to use LLM_Bridge
- Deprecate ai_nl2sql_convert MCP tool with error message
- LLM bridge now handles any prompt type via MySQL protocol
This enables generic LLM access (summarization, code generation,
translation, analysis) while preserving infrastructure for future
NL2SQL implementation via Web UI + external agents.
- Add has_variable() method to GenAI_Threads_Handler for variable validation
- Add genai- prefix check in is_valid_global_variable()
- Auto-initialize NL2SQL converter when genai-nl2sql_enabled is set to true at runtime
- Make init_nl2sql() public to allow runtime initialization
- Mask API keys in logs (show only first 2 chars, rest as 'x')
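A minimal sketch of the masking rule described in the last bullet (the function name is hypothetical): keep the first 2 characters and replace the rest with 'x'.

```cpp
#include <string>

// Hypothetical helper implementing the log-masking rule:
// "show only first 2 chars, rest as 'x'".
inline std::string mask_api_key(const std::string& key) {
    if (key.size() <= 2) return key;  // nothing to mask beyond 2 chars
    return key.substr(0, 2) + std::string(key.size() - 2, 'x');
}
```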
Update flush_genai_variables___database_to_runtime() to match the MCP
pattern exactly:
- Add 'lock' parameter (default true) for flexibility
- Use ProxySQL_Admin's wrlock()/wrunlock() instead of GloGATH's
- Use consistent variable naming (var_name = name + 6 for 'genai-' prefix)
- Follow exact same locking pattern as MCP variables
This fixes the issue where runtime_global_variables table was not being
populated on startup because the locking pattern was incorrect.
This commit fixes a serious design flaw where AI configuration variables
were not integrated with the ProxySQL admin interface. All ai_*
variables have been migrated to the GenAI module as genai-* variables.
Changes:
- Added 21 new genai_* variables to GenAI_Thread.h structure
- Implemented get/set functions for all new variables in GenAI_Thread.cpp
- Removed internal variables struct from AI_Features_Manager
- AI_Features_Manager now reads from GloGATH instead of internal state
- Updated documentation to reference genai-* variables
- Fixed debug.cpp assertion for PROXY_DEBUG_NL2SQL and PROXY_DEBUG_ANOMALY
Variable mapping:
- ai_nl2sql_enabled → genai-nl2sql_enabled
- ai_anomaly_detection_enabled → genai-anomaly_enabled
- ai_features_enabled → genai-enabled
- All other ai_* variables follow the same pattern
The flush functions automatically handle all variables in the
genai_thread_variables_names array, so database persistence
works correctly without additional changes.
Related to: https://github.com/ProxySQL/proxysql-vec/pull/13
- Rename validate_provider_name to validate_provider_format for clarity
- Add null checks and error handling for all strdup() operations
- Enhance error messages with more context and HTTP status codes
- Implement performance monitoring with timing metrics for LLM calls and cache operations
- Add comprehensive test coverage for edge cases, retry scenarios, and performance
- Extend status variables to track performance metrics
- Update MySQL session to report timing information to AI manager
Add comprehensive structured logging for NL2SQL LLM API calls with
request correlation, timing metrics, and detailed error context.
Changes:
- Add request_id field to NL2SQLRequest with UUID-like auto-generation
- Add structured logging macros:
* LOG_LLM_REQUEST: Logs URL, model, prompt length with request ID
* LOG_LLM_RESPONSE: Logs HTTP status, duration_ms, response preview
* LOG_LLM_ERROR: Logs error phase, message, and status code
- Update call_generic_openai() signature to accept req_id parameter
- Update call_generic_anthropic() signature to accept req_id parameter
- Add timing metrics to both LLM call functions using clock_gettime()
- Replace existing debug logging with structured logging macros
- Update convert() to pass request_id to LLM calls
Request IDs are generated as UUID-like strings (e.g., "12345678-9abc-def0-1234-567890abcdef")
and are included in all log messages for correlation. This allows tracking
a single NL2SQL request through all log lines from request to response.
Timing is measured using CLOCK_MONOTONIC for accurate duration tracking
of LLM API calls, reported in milliseconds.
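The two mechanisms above can be sketched as follows. This is an illustrative approximation, not the actual implementation: the commit uses clock_gettime() with CLOCK_MONOTONIC, which `std::chrono::steady_clock` stands in for here, and the ID layout only mimics the UUID-like shape.

```cpp
#include <chrono>
#include <cstdio>
#include <random>
#include <string>

// Hypothetical generator for a UUID-like request ID (8-4-4-4-12 hex groups)
// used to correlate all log lines of one request.
inline std::string make_request_id() {
    static std::mt19937_64 rng{std::random_device{}()};
    unsigned long long a = rng(), b = rng();
    char buf[37];
    std::snprintf(buf, sizeof(buf), "%08x-%04x-%04x-%04x-%012llx",
                  (unsigned)(a >> 32), (unsigned)((a >> 16) & 0xffff),
                  (unsigned)(a & 0xffff), (unsigned)(b & 0xffff),
                  (unsigned long long)((b >> 16) & 0xffffffffffffULL));
    return std::string(buf);
}

// Monotonic duration in milliseconds, as reported in the structured logs.
inline long long elapsed_ms(std::chrono::steady_clock::time_point start) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
}
```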
This provides much better debugging capability when troubleshooting
NL2SQL issues, as administrators can now:
- Correlate all log lines for a single request
- See exact timing of LLM API calls
- Identify which phase of processing failed
- Track request/response metrics
Fixes #2 - Add Structured Logging
Add comprehensive SQL validation with confidence scoring based on:
- SQL keyword detection (17 keywords covering DDL/DML/transactions)
- Structural validation (balanced parentheses and quotes)
- SQL injection pattern detection
- Length and quality checks
Confidence scoring:
- Base 0.4 for valid SQL keyword
- +0.15 for balanced parentheses
- +0.15 for balanced quotes
- +0.1 for minimum length
- +0.1 for FROM clause in SELECT statements
- +0.1 for no injection patterns
- -0.3 penalty for injection patterns detected
Low confidence (< 0.5) results are logged with detailed info.
Cache storage threshold updated to 0.5 confidence (from implicit valid_sql).
This improves detection of malformed or potentially malicious SQL
while providing granular confidence scores for downstream use.
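The additive scheme above can be sketched as follows. This is a simplified illustration: the real validator performs the keyword, balance, and injection checks itself, which are reduced here to boolean inputs, and all names are hypothetical.

```cpp
#include <algorithm>

// Hypothetical check results feeding the confidence score.
struct SqlChecks {
    bool has_keyword;         // one of the 17 DDL/DML/transaction keywords
    bool balanced_parens;
    bool balanced_quotes;
    bool min_length;
    bool select_has_from;     // FROM clause present in a SELECT
    bool injection_detected;
};

inline double confidence_score(const SqlChecks& c) {
    if (!c.has_keyword) return 0.0;   // base score requires a valid keyword
    double score = 0.4;               // base for valid SQL keyword
    if (c.balanced_parens) score += 0.15;
    if (c.balanced_quotes) score += 0.15;
    if (c.min_length)      score += 0.1;
    if (c.select_has_from) score += 0.1;
    if (!c.injection_detected) score += 0.1;
    else                       score -= 0.3;  // injection penalty
    return std::max(0.0, std::min(1.0, score));
}
```

With this scheme, a fully valid statement scores 1.0 and anything below 0.5 would be logged and kept out of the cache.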
Remove Ollama-specific provider code and use only generic OpenAI-compatible
and Anthropic-compatible providers. Ollama is now used via its
OpenAI-compatible endpoint at /v1/chat/completions.
Changes:
- Remove LOCAL_OLLAMA from ModelProvider enum
- Remove ai_nl2sql_ollama_model and ai_nl2sql_ollama_url variables
- Remove call_ollama() function from LLM_Clients.cpp
- Update default configuration to use OpenAI provider with Ollama URL
- Update all documentation to reflect generic-only approach
Configuration:
- ai_nl2sql_provider: 'openai' or 'anthropic' (default: 'openai')
- ai_nl2sql_provider_url: endpoint URL (default: Ollama OpenAI-compatible)
- ai_nl2sql_provider_model: model name
- ai_nl2sql_provider_key: API key (optional for local endpoints)
This simplifies the codebase by removing a separate code path for Ollama
and aligns with the goal of avoiding provider-specific variables.
- Add NL2SQL_Converter with prompt building and model selection
- Add LLM clients for Ollama, OpenAI, Anthropic APIs
- Update Makefile for new source files
- Add AI_Features_Manager coordinator class
- Add AI_Vector_Storage interface (stub)
- Add Anomaly_Detector class (stub for Phase 3)
- Update includes and main initialization
This commit extends the existing pg_cancel_backend() and pg_terminate_backend()
support to work with parameterized queries in the extended query protocol.
While literal PID values were already supported in both simple and extended
query protocols, this enhancement adds support for parameterized queries like
SELECT pg_cancel_backend($1).
The crash was caused by incorrect lock ordering. The admin version has:
1. wrlock() (acquire admin lock)
2. Process variables
3. checksum_mutex lock() (acquire checksum lock)
4. flush to runtime + generate checksum
5. checksum_mutex unlock() (release checksum lock)
6. wrunlock() (release admin lock)
The MCP version had the wrong order, taking the checksum_mutex lock outside
the wrlock/wrunlock region. This commit also adds the missing 'lock'
parameter that exists in the admin version but was absent from MCP.
Changes:
- Added 'lock' parameter to flush_mcp_variables___database_to_runtime()
- Added conditional wrlock()/wrunlock() calls (if lock=true)
- Moved checksum generation inside the wrlock/wrunlock region
- Updated function signature in header file
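The corrected ordering can be sketched as below. This is a structural illustration only: the function and lock names are hypothetical, and the admin wrlock() is represented by a plain mutex for the sake of a self-contained example.

```cpp
#include <mutex>

static std::mutex admin_lock;      // stands in for the admin wrlock()
static std::mutex checksum_mutex;  // stands in for the checksum lock

// Hypothetical sketch of flush_mcp_variables___database_to_runtime():
// the checksum lock is taken strictly inside the admin-lock region.
inline bool flush_variables_to_runtime(bool lock = true) {
    if (lock) admin_lock.lock();      // 1. acquire admin lock
    // 2. process variables ...
    checksum_mutex.lock();            // 3. acquire checksum lock
    // 4. flush to runtime + generate checksum ...
    checksum_mutex.unlock();          // 5. release checksum lock
    if (lock) admin_lock.unlock();    // 6. release admin lock
    return true;
}
```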
Added missing documentation for MySQL connection pool implementation:
Header (MySQL_Tool_Handler.h):
- Added MySQLConnection struct documentation with member descriptions
- Added member variable documentation using ///< Doxygen style
Implementation (MySQL_Tool_Handler.cpp):
- Added Doxygen blocks for close() method
- Added Doxygen blocks for init_connection_pool() with detailed behavior
- Added Doxygen blocks for get_connection() with thread-safety notes
- Added Doxygen blocks for return_connection() with reuse behavior
- Added Doxygen blocks for execute_query() with JSON format documentation
All new connection pool methods now have complete @brief, @param, and
@return documentation following Doxygen conventions.
Added built-in connection pool to MySQL_Tool_Handler for direct MySQL
connections to backend servers.
Changes:
- Added MySQLConnection struct with MYSQL* pointer, host, port, in_use flag
- Added connection_pool vector, pool_lock mutex, pool_size counter
- Implemented init_connection_pool() to create MYSQL connections using mysql_init/mysql_real_connect
- Implemented get_connection() and return_connection() with thread-safe locking
- Implemented execute_query() helper method for executing SQL and returning JSON results
- Updated tool methods to use actual MySQL connections:
- list_schemas: Query information_schema.schemata
- list_tables: Query information_schema.tables with metadata
- describe_table: Query columns, primary keys, indexes
- sample_rows: Execute SELECT with LIMIT
- sample_distinct: Execute SELECT DISTINCT with GROUP BY
- run_sql_readonly: Execute validated SELECT queries
- explain_sql: Execute EXPLAIN queries
- Fixed MYSQL forward declaration (use typedef struct st_mysql MYSQL)
The connection pool creates one connection per configured host:port pair
with 5-second timeouts for connect/read/write operations.
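The pool structure above can be sketched as follows. This is a hedged approximation: MYSQL* is replaced with an opaque void* so the example stays self-contained, and the class and method names are hypothetical rather than the actual MySQL_Tool_Handler API.

```cpp
#include <mutex>
#include <string>
#include <vector>

// Hypothetical stand-in for the MySQLConnection struct described above.
struct PooledConnection {
    void*       conn;    // real code stores the MYSQL* from mysql_real_connect()
    std::string host;
    int         port;
    bool        in_use;
};

class ConnectionPool {
    std::vector<PooledConnection> pool_;
    std::mutex lock_;  // stands in for pool_lock
public:
    void add(void* conn, std::string host, int port) {
        std::lock_guard<std::mutex> g(lock_);
        pool_.push_back({conn, std::move(host), port, false});
    }
    // Returns a free connection for host:port, or nullptr if none is idle.
    PooledConnection* get(const std::string& host, int port) {
        std::lock_guard<std::mutex> g(lock_);
        for (auto& c : pool_)
            if (!c.in_use && c.host == host && c.port == port) {
                c.in_use = true;
                return &c;
            }
        return nullptr;
    }
    void put(PooledConnection* c) {
        std::lock_guard<std::mutex> g(lock_);
        c->in_use = false;  // mark reusable
    }
};
```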
- Add MCP variables to load_save_disk_commands map for LOAD/SAVE commands
- Add MCP variable validation in is_valid_global_variable() for SET commands
- Implement has_variable() method in MCP_Threads_Handler
- Add CHECKSUM command handlers for MCP VARIABLES (DISK/MEMORY/MEM)
Test results improved from 28 passed / 16 failed to 49 passed / 3 failed.
Remaining 3 failures are test expectation issues (boolean representation).
Remove unnecessary inheritance from MySQL_Threads_Handler. The MCP module
should be independent and not depend on MySQL/PostgreSQL thread handlers.
Changes:
- MCP_Threads_Handler now manages its own pthread_rwlock_t for synchronization
- Simplified init() signature (removed unused num/stack parameters)
- Added ProxySQL_Main_init_MCP_module() call in main initialization phase
- Include only standard C++ headers (pthread.h, cstring, cstdlib)
Add new MCP module supporting multiple MCP server endpoints over HTTPS
with JSON-RPC 2.0 protocol skeleton. Each endpoint (/mcp/config,
/mcp/observe, /mcp/query, /mcp/admin, /mcp/cache) is a distinct MCP
server with its own authentication configuration.
Features:
- HTTPS server using existing ProxySQL TLS certificates
- JSON-RPC 2.0 skeleton implementation (actual protocol TBD)
- 5 MCP endpoints with per-endpoint auth configuration
- LOAD/SAVE MCP VARIABLES admin commands
- Configuration file support (mcp_variables section)
Implementation follows GenAI module pattern:
- MCP_Threads_Handler: Main module handler with variable management
- ProxySQL_MCP_Server: HTTPS server wrapper using libhttpserver
- MCP_JSONRPC_Resource: Base endpoint class with JSON-RPC skeleton
- Add check_genai_events() function for non-blocking epoll_wait on GenAI response fds
- Integrate GenAI event checking into main handler() WAITING_CLIENT_DATA case
- Add goto handler_again to process multiple GenAI responses in one iteration
The async GenAI architecture is now fully integrated. MySQL threads no longer
block when processing GENAI: queries - they send requests asynchronously via
socketpair and continue processing other queries while GenAI workers handle
the embedding/reranking operations.
- Add GenAI_RequestHeader and GenAI_ResponseHeader protocol structures for socketpair communication
- Implement GenAI listener_loop to read requests from epoll and queue to workers
- Implement GenAI worker_loop to process requests and send responses via socketpair
- Add GenAI_PendingRequest state management to MySQL_Session/Base_Session
- Implement MySQL_Session async handlers: genai_send_async(), handle_genai_response(), genai_cleanup_request()
- Modify MySQL_Session genai handler to use async path when epoll is available
- Initialize GenAI epoll fd in Base_Session::init()
This completes the async architecture that was planned but never fully implemented
(previously had only placeholder comments). The GenAI module now processes
requests asynchronously without blocking MySQL threads.
Move all JSON parsing and operation routing logic from MySQL_Session to
GenAI module. MySQL_Session now simply passes GENAI: queries to the GenAI
module via process_json_query(), which handles everything autonomously.
This simplifies the architecture and achieves better separation of concerns:
- MySQL_Session: Detects GENAI: prefix and forwards to GenAI module
- GenAI module: Handles JSON parsing, operation routing, and result formatting
Changes:
- GenAI_Thread.h: Add GENAI_OP_JSON operation type, json_query field, and
process_json_query() method declaration
- GenAI_Thread.cpp: Implement process_json_query() with embed/rerank support
and document_from_sql framework (stubbed for future MySQL connection handling)
- MySQL_Session.cpp: Simplify genai handler to just call process_json_query()
and parse JSON result (reduces net code by ~215 lines)
This commit refactors the experimental GenAI query syntax to use a single
GENAI: keyword with type-based operations instead of separate EMBED: and RERANK: keywords.
Changes:
- Replace EMBED: and RERANK: detection with unified GENAI: detection
- Merge genai_embedding and genai_rerank handlers into single genai handler
- Add 'type' field to operation JSON ("embed" or "rerank")
- Add 'columns' field for rerank operation (2 or 3, default 3)
- columns=2: Returns only index and score
- columns=3: Returns index, score, and document (default)
Old syntax:
EMBED: ["doc1", "doc2"]
RERANK: {"query": "...", "documents": [...], "top_n": 5}
New syntax:
GENAI: {"type": "embed", "documents": ["doc1", "doc2"]}
GENAI: {"type": "rerank", "query": "...", "documents": [...], "top_n": 5, "columns": 2}
This provides a cleaner, more extensible API for future GenAI operations.
This commit adds experimental support for reranking documents directly
from MySQL queries using a special RERANK: syntax.
Changes:
- Add handler___status_WAITING_CLIENT_DATA___STATE_SLEEP___MYSQL_COM_QUERY___genai_rerank()
- Add RERANK: query detection alongside EMBED: detection
- Implement JSON parsing for query, documents array, and optional top_n
- Build resultset with index, score, and document columns
- Use MySQL ERR_Packet for error handling
Query format: RERANK: {"query": "search query", "documents": ["doc1", "doc2", ...], "top_n": 5}
Result format: 1 row per result, 3 columns (index, score, document)
This commit adds experimental support for generating embeddings directly
from MySQL queries using a special EMBED: syntax.
Changes:
- Add MYDS_INTERNAL_GENAI to MySQL_DS_type enum for GenAI connections
- Add handler___status_WAITING_CLIENT_DATA___STATE_SLEEP___MYSQL_COM_QUERY___genai_embedding()
- Implement EMBED: query detection and JSON parsing for document arrays
- Build CSV resultset with embeddings (1 row per document, 1 column)
- Add myconn NULL check in MySQL_Thread for INTERNAL_GENAI type
- Add "debug_genai" name to debug module array
- Remove HAVE_LIBCURL checks (libcurl is always statically linked)
- Use static curl header: "curl/curl.h" instead of <curl/curl.h>
- Remove curl_global_cleanup() from GenAI module (should only be in main())
Query format: EMBED: ["doc1", "doc2", ...]
Result format: 1 row per document, 1 column with CSV embeddings
Error handling uses MySQL ERR_Packet instead of resultsets.
This change adds compile-time detection and fallback to poll() on systems
that don't support epoll(), improving portability across different platforms.
Header changes (include/GenAI_Thread.h):
- Make sys/epoll.h include conditional on #ifdef epoll_create1
Implementation changes (lib/GenAI_Thread.cpp):
- Add poll.h include for poll() support
- Add EPOLL_CREATE compatibility macro (epoll_create1 or epoll_create)
- Update init() to use pipe() for wakeup when epoll is not available
- Update register_client() to skip epoll_ctl when epoll is not available
- Update unregister_client() to skip epoll_ctl when epoll is not available
- Update listener_loop() to use poll() when epoll is not available
The compile-time detection works by checking if epoll_create1 is defined
(Linux-specific glibc function since 2.9). On systems without epoll, the
code falls back to using poll() with a pipe for wakeup signaling.
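The fallback's wakeup mechanism can be sketched as a pipe plus poll() round trip (the function name is hypothetical; the real code registers the pipe's read end alongside client fds in listener_loop()):

```cpp
#include <poll.h>
#include <unistd.h>

// Hypothetical sketch of the non-epoll wakeup path: write a byte to the
// pipe to signal, then poll() on the read end to observe the wakeup.
inline bool pipe_wakeup_roundtrip() {
    int fds[2];
    if (pipe(fds) != 0) return false;
    char byte = 1;
    if (write(fds[1], &byte, 1) != 1) return false;  // signal wakeup
    struct pollfd pfd = { fds[0], POLLIN, 0 };
    int rc = poll(&pfd, 1, 1000);                    // wait up to 1s
    bool woke = (rc == 1) && (pfd.revents & POLLIN);
    close(fds[0]);
    close(fds[1]);
    return woke;
}
```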
Implement a new GenAI module for ProxySQL with basic infrastructure:
- GenAI_Threads_Handler class for managing GenAI module configuration
- Support for genai- prefixed variables in global_variables table
- Dummy variables: genai-var1 (string) and genai-var2 (integer)
- Config file support via genai_variables section
- Flush functions for runtime_to_database and database_to_runtime
- Module lifecycle: initialization at startup, graceful shutdown
- LOAD/SAVE GENAI VARIABLES admin command infrastructure
Core functionality verified:
- Config file loading works
- Variables persist in global_variables table
- Disk save/load via SQL works
- Module initializes and shuts down properly
Related files:
- include/GenAI_Thread.h: New GenAI thread handler class
- lib/GenAI_Thread.cpp: Implementation with dummy variables
- lib/Admin_Handler.cpp: Added GENAI command vectors and handlers
- lib/Admin_FlushVariables.cpp: Added genai flush functions
- lib/ProxySQL_Admin.cpp: Added init_genai_variables() and load_save_disk_commands entry
- include/proxysql_admin.h: Added function declarations
- lib/Makefile: Added GenAI_Thread.oo to build
- src/main.cpp: Added module initialization and cleanup
- src/proxysql.cfg: Added genai_variables configuration section
This commit addresses all review comments from gemini-code-assist on PR #5279:
1. Fixed FLUSH LOGS documentation - clarified that file is reopened for
appending, not truncating, and updated the note about preserving contents
2. Fixed callback documentation - clarified that the callback attaches to
all frontend connections, not just admin connections
3. Updated security warning - focused on passive eavesdropping and offline
decryption as the primary threats
4. Fixed typo: proxyql_ip -> proxysql_ip in tcpdump example
5. Removed misleading @see HPKP link - HPKP is unrelated to NSS Key Log
Format and is a deprecated feature
6. Updated NSS Key Log Format URL to use official MDN link instead of
unofficial mirror
7. Fixed buffer size comment to accurately reflect 256-byte buffer and
254-byte line length validation
8. Clarified fputs comment to emphasize the read lock's role in allowing
concurrent writes from multiple threads
This commit addresses critical issues identified in PR #5276 by
gemini-code-assist's code review, which could undermine the goal of
being allocation-free and cause hangs or silent failures.
Bug 1: Vector Passed by Value (Critical)
------------------------------------------
The function took std::vector<int> excludeFDs by value, causing heap
allocation during the copy operation. This undermines the PR's goal of
avoiding heap allocations after fork() to prevent deadlocks in
multi-threaded programs.
Fix: Change to pass by const reference to avoid heap allocation.
void close_all_non_term_fd(const std::vector<int>& excludeFDs)
Bug 2: Infinite Loop Risk (Critical)
------------------------------------
The loop used unsigned int for the variable while comparing against
rlim_t (unsigned long long). If rlim_cur exceeded UINT_MAX, this would
create an infinite loop.
Fix: Use rlim_t type for the loop variable and cap at INT_MAX.
for (rlim_t fd_rlim = 3; fd_rlim < nlimit.rlim_cur && fd_rlim <= INT_MAX; fd_rlim++)
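The fixed bounds can be illustrated with a side-effect-free sketch (the helper name is hypothetical; it counts instead of closing so it can be exercised safely):

```cpp
#include <climits>
#include <sys/resource.h>  // rlim_t

// Hypothetical sketch of the corrected loop bounds: iterate with rlim_t
// (not unsigned int) and cap at INT_MAX, since close() takes an int.
inline rlim_t count_fds_to_scan(rlim_t rlim_cur) {
    rlim_t n = 0;
    for (rlim_t fd = 3; fd < rlim_cur && fd <= (rlim_t)INT_MAX; fd++)
        n++;   // real code: close(fd) unless it is in excludeFDs
    return n;
}
```

With an unsigned int loop variable, a rlim_cur above UINT_MAX would wrap the counter before the comparison ever became false; rlim_t plus the INT_MAX cap removes both the overflow and the invalid-fd range.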
Bug 3: close_range() Detection Logic (High)
------------------------------------------
The original detection logic had two problems:
1. Executed close_range syscall twice on first successful call
2. Incorrectly cached availability on transient failures (EINTR),
leaving file descriptors open without fallback
Fix: Reordered logic to only cache on success, allow retry on
transient failures. Only cache as "not available" on ENOSYS.
For other errors (EBADF, EINVAL, etc.), don't cache - might be transient.
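The caching policy can be sketched as below. The close_range() syscall itself is abstracted as a callback so the decision logic is testable in isolation; the enum and function names are hypothetical.

```cpp
#include <cerrno>

// Hypothetical tri-state cache for close_range() availability.
enum class RangeAvail { UNKNOWN, YES, NO };

// Cache "available" only on success; cache "not available" only on
// ENOSYS; leave the cache untouched on transient errors so the next
// call can retry the syscall.
template <typename CloseRangeFn>
bool try_close_range(CloseRangeFn fn, RangeAvail& cached) {
    if (cached == RangeAvail::NO) return false;   // fall back to fd loop
    errno = 0;
    if (fn() == 0) {
        cached = RangeAvail::YES;                 // cache only on success
        return true;
    }
    if (errno == ENOSYS) cached = RangeAvail::NO; // definitively missing
    return false;                                 // transient: not cached
}
```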
Files Modified
--------------
- include/proxysql_utils.h
- lib/proxysql_utils.cpp
This commit adds extensive documentation for the ssl_keylog_file feature
(introduced in PR #4236), which enables TLS key logging for debugging
encrypted traffic.
## Background
The ssl_keylog_file variable (exposed as admin-ssl_keylog_file in SQL
interface) allows ProxySQL to write TLS secrets to a file in NSS Key Log
Format. These secrets can be used by tools like Wireshark and tshark to
decrypt and analyze TLS traffic for debugging purposes.
## Changes
### Inline Documentation (Code)
1. include/proxysql_sslkeylog.h (+96 lines)
- File-level documentation explaining the module purpose and security
- Doxygen comments for all 5 public APIs
- Thread-safety annotations
- Parameter descriptions and return values
2. lib/proxysql_sslkeylog.cpp (+136 lines)
- Implementation-level documentation
- Algorithm explanations (double-checked locking, thread safety)
- Reference to NSS Key Log Format specification
3. include/proxysql_admin.h (+19 lines)
- Variable documentation for ssl_keylog_file
- Path handling rules (absolute vs relative)
- Security implications
### Developer Documentation (doc/ssl_keylog/ssl_keylog_developer_guide.md)
Target audience: Developers working on ProxySQL codebase
Contents:
- Variable naming convention (SQL vs config file vs internal)
- Architecture diagrams
- Thread safety model (pthread rwlock)
- NSS Key Log Format specification
- Complete API reference for all public functions
- Integration points in the codebase
- Security considerations and code review checklist
- Testing procedures
### User Documentation (doc/ssl_keylog/ssl_keylog_user_guide.md)
Target audience: End users and system administrators
Contents:
- What is SSL key logging and when to use it
- Variable naming: admin-ssl_keylog_file (SQL) vs ssl_keylog_file (config)
- Step-by-step enable/disable instructions
- Path resolution (absolute vs relative)
- Log rotation procedures
- Production workflow: tcpdump capture → offline analysis
- Wireshark (GUI) integration tutorial
- tshark (command-line) usage examples
- Troubleshooting common issues
- Security best practices
- Quick reference card
## Key Features Documented
1. **Variable Naming Convention**
- SQL interface: SET admin-ssl_keylog_file = '/path';
- Config file: ssl_keylog_file='/path' (in admin_variables section)
- Internal code: ssl_keylog_file
2. **Production Workflow**
- Capture traffic with tcpdump (no GUI on production server)
- Transfer pcap + keylog to analysis system
- Analyze offline with Wireshark (GUI) or tshark (CLI)
3. **tshark Examples**
- Command-line analysis of encrypted traffic
- Filter examples for debugging TLS issues
- JSON export for automated analysis
## Security Notes
The documentation emphasizes that:
- Key log files contain cryptographic secrets that decrypt ALL TLS traffic
- Access must be restricted (permissions 0600)
- Only enable for debugging, never in production
- Securely delete old key log files
## Files Modified
- include/proxysql_admin.h
- include/proxysql_sslkeylog.h
- lib/proxysql_sslkeylog.cpp
## Files Added
- doc/ssl_keylog/ssl_keylog_developer_guide.md
- doc/ssl_keylog/ssl_keylog_user_guide.md
Since ProxySQL 3.0.4, SELECT VERSION() queries were intercepted and returned
ProxySQL's mysql-server_version variable instead of proxying to backends.
This broke SQLAlchemy for MariaDB which expects "MariaDB" in the version
string.
This commit adds a new variable `mysql-select_version_forwarding` with 4 modes:
- 0 = never: Always return ProxySQL's version (3.0.4+ behavior)
- 1 = always: Always proxy to backend (3.0.3 behavior)
- 2 = smart (fallback to 0): Try backend connection, else ProxySQL version
- 3 = smart (fallback to 1): Try backend connection, else proxy (default)
The implementation includes:
- New global variable mysql_thread___select_version_forwarding
- New function get_backend_version_for_hostgroup() to peek at backend
connection versions without removing them from the pool
- Modified SELECT VERSION() handler to support all 4 modes
- ProxySQL backend detection to avoid recursion
Mode 3 (default) ensures SQLAlchemy always gets the real MariaDB version
string while maintaining fast response when connections are available.
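The four modes can be sketched as a dispatch function. This is an illustrative reduction, not the actual handler: the backend lookup done by get_backend_version_for_hostgroup() is abstracted as a nullable string, and all names are hypothetical.

```cpp
#include <string>

// Hypothetical encoding of mysql-select_version_forwarding modes 0-3.
enum class VersionMode {
    Never = 0,               // always return ProxySQL's version
    Always = 1,              // always proxy to backend
    SmartFallbackNever = 2,  // try backend, else ProxySQL's version
    SmartFallbackAlways = 3  // try backend, else proxy (default)
};

// Returns the version string to answer with, or "" meaning
// "proxy the query to the backend".
inline std::string pick_version(VersionMode mode,
                                const std::string& proxysql_version,
                                const std::string* backend_version) {
    switch (mode) {
    case VersionMode::Never:  return proxysql_version;
    case VersionMode::Always: return "";
    case VersionMode::SmartFallbackNever:
        return backend_version ? *backend_version : proxysql_version;
    case VersionMode::SmartFallbackAlways:
        return backend_version ? *backend_version : "";
    }
    return proxysql_version;
}
```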
* Change MySQL_Monitor_Connection_Pool::put_connection signature to accept MySQL_Monitor_State_Data* instead of raw MYSQL*/port.
* Centralize access to mysql and port via mmsd, reducing parameter mismatch and misuse.
* Improve DEBUG bookkeeping: ensure connections are properly unregistered from the global debug registry with clearer assertions and logs.
* Add consistent proxy_debug messages for connection register/unregister events.
* Simplify server lookup/creation logic when returning connections to the pool.
* Fix ordering of error handling to always unregister before closing connections.
* Minor cleanup: remove unused labels/variables and modernize casts.
* This refactor improves correctness, debuggability, and safety of monitor connection lifecycle management.
Logging messages now include 'client address', 'session status' and
'data stream status'. Client address is also logged when OK packets are
dispatched, this should help tracking if a client has received the
expected packets or not.
Implements a workaround for handling unexpected 'COM_PING' packets
received during query processing, while a resultset is still being
streamed to the client. Received 'COM_PING' packets are queued in the
form of a counter, which is later used to send the corresponding number
of 'OK' packets to the client after 'MySQL_Session' has finished
processing the current query.
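The counter-based queueing can be sketched as follows (struct and method names are hypothetical; the real state lives inside MySQL_Session):

```cpp
// Hypothetical sketch of the workaround: COM_PING packets arriving while
// a resultset is still streaming are counted, and the matching OK packets
// are flushed once the current query finishes.
struct PingQueue {
    unsigned pending = 0;

    void on_ping_during_resultset() { pending++; }  // queue, don't reply yet

    // Called after the query completes; returns how many OK packets to send.
    unsigned flush_ok_packets() {
        unsigned n = pending;
        pending = 0;
        return n;
    }
};
```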
This commit documents:
1. The vacuum_stats() function's purpose, behavior, and the reason why
stats_pgsql_stat_activity is excluded from bulk deletion operations
2. The fact that stats_pgsql_stat_activity is a SQL VIEW (not a table)
and attempting DELETE on it would cause SQLite error:
"cannot modify stats_pgsql_stat_activity because it is a view"
The documentation explains:
- Why TRUNCATE stats_mysql_query_digest triggers vacuum_stats(true)
- Why both MySQL and PostgreSQL tables are cleared regardless of protocol
- How the view is automatically cleared via its underlying table
stats_pgsql_processlist
- The importance of keeping the view excluded from deletion lists
The `cache_empty_result` field in query rules has three possible values:
• -1: Use global setting (`query_cache_stores_empty_result`)
• 0: Do NOT cache empty resultsets, but cache non-empty resultsets
• 1: Always cache resultsets (both empty and non-empty)
Previously, when `cache_empty_result` was set to 0, nothing was cached at all,
even for non-empty resultsets. This prevented users from disabling caching
for empty resultsets while still allowing caching of non-empty resultsets
on a per-rule basis.
Changes:
1. Modified caching logic in MySQL_Session.cpp and PgSQL_Session.cpp to
add the condition `(qpo->cache_empty_result == 0 && MyRS->num_rows)`
(MySQL) and `(qpo->cache_empty_result == 0 && num_rows)` (PgSQL)
to allow caching when cache_empty_result=0 AND result has rows.
2. Added comprehensive Doxygen documentation in query_processor.h explaining
the semantics of cache_empty_result values.
3. Updated Query_Processor.cpp with inline comments explaining the
three possible values.
Now when cache_empty_result is set to 0:
- Empty resultsets (0 rows) are NOT cached
- Non-empty resultsets (>0 rows) ARE cached
- This matches the intended per-rule behavior described in issue #5248.
Fixes: https://github.com/sysown/proxysql/issues/5248
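The resulting decision logic can be sketched as a standalone predicate (the function name is hypothetical; it assumes the global `query_cache_stores_empty_result` flag only governs empty resultsets):

```cpp
#include <cstdint>

// Sketch of the caching decision described above.
// cache_empty_result: -1 = follow the global setting,
//                      0 = cache only non-empty resultsets,
//                      1 = cache everything.
bool should_cache_resultset(int cache_empty_result,
                            bool global_stores_empty_result,
                            uint64_t num_rows) {
    if (cache_empty_result == 1) return true;          // always cache
    if (cache_empty_result == 0) return num_rows > 0;  // skip empty only
    // -1: defer to query_cache_stores_empty_result for empty resultsets
    return global_stores_empty_result || num_rows > 0;
}
```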
This commit adds detailed Doxygen documentation for:
1. The ProxySQL_Config class - describes its role in configuration management
2. The Read_Global_Variables_from_configfile() method - documents its behavior,
parameters, return value, and the automatic prefix stripping feature
The documentation explains the automatic prefix stripping behavior that handles
cases where users mistakenly include module prefix (e.g., "mysql-") in variable
names within configuration files.
This change introduces PostgreSQL-aware tokenization by adding support for dollar-quoted strings, PostgreSQL’s double-quoted identifiers, and its comment rules. The tokenizer now correctly parses $$…$$ and $tag$…$tag$, treats " as an identifier delimiter in PostgreSQL, disables MySQL-only # comments, and accepts -- as a comment starter without requiring a trailing space. All new behavior is fully isolated behind the dialect flag to avoid impacting MySQL parsing.
Add PostgreSQL dollar-quoted strings
* New parser state: st_dollar_quote_string.
* Recognizes $$ … $$ and $tag$ … $tag$ sequences.
* Tracks opening tag and searches for matching terminator.
* Normalizes entire literal to ?.
* Integrated into get_next_st() and stage_1_parsing().
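The matching rule can be illustrated with a minimal standalone helper. The real tokenizer works through a parser state (`st_dollar_quote_string`); this sketch only shows how an opening tag is captured and the whole literal collapsed to `?`:

```cpp
#include <string>
#include <cctype>

// Hedged sketch of dollar-quote normalization: find $$...$$ or
// $tag$...$tag$ and replace the entire literal with '?'.
// Unterminated literals are left untouched.
std::string normalize_dollar_quotes(const std::string& sql) {
    std::string out;
    size_t i = 0;
    while (i < sql.size()) {
        if (sql[i] == '$') {
            // Scan a candidate opening tag: '$' [A-Za-z0-9_]* '$'
            size_t j = i + 1;
            while (j < sql.size() &&
                   (std::isalnum((unsigned char)sql[j]) || sql[j] == '_'))
                j++;
            if (j < sql.size() && sql[j] == '$') {
                std::string tag = sql.substr(i, j - i + 1); // "$$" or "$tag$"
                size_t end = sql.find(tag, j + 1);          // matching terminator
                if (end != std::string::npos) {
                    out += '?';                             // whole literal -> ?
                    i = end + tag.size();
                    continue;
                }
            }
        }
        out += sql[i++];
    }
    return out;
}
```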
The get_status_variable() function was only scanning worker threads
but ignoring auxiliary threads (idle threads) where timeout
terminations are detected. This caused the timeout termination
counter to show incorrect/zero values.
- Added idle thread scanning to both overloaded versions of
get_status_variable() function
- Now properly collects metrics from both worker and idle threads
- Fixes the issue where proxysql_mysql_timeout_terminated_connections_total
showed zero despite actual timeout terminations
Resolves the metrics reading issue identified in the previous commits.
Code improvements:
- Extract SESS_TO_SCAN_idle_thread constant to header file for better maintainability
- Replace magic number 128 with named constant in idle_thread_to_kill_idle_sessions()
- Improve code readability and consistency in session scanning logic
Test enhancements:
- Add mysql-poll_timeout configuration for more precise timeout testing
- Reduce test sleep times to 13 seconds for faster test execution
- Add diagnostic messages to clearly show timeout configurations in test output
- Ensure tests properly validate timeout enforcement with precise timing
The changes improve code maintainability and make tests more reliable and faster
while maintaining accurate timeout validation.
- Add wait_timeout member variable declaration to Base_Session class
- Fix constructor initialization to use this->wait_timeout
- Fix assignment in handler to properly scope member variable
- Resolves compilation error for wait_timeout functionality
PROBLEM:
The initial fix used a DDL detection approach which required maintaining a list
of query types that should return 0 affected rows. This approach was brittle
and could miss edge cases like commented queries or complex statements.
SOLUTION:
Instead of detecting DDL queries, use sqlite3_total_changes64() to measure the
actual change count before and after each query execution. The difference between
total_changes before and after represents the true affected rows count for the
current query, regardless of query type.
CHANGES:
- Added proxy_sqlite3_total_changes64 function pointer and initialization
- Rewrote execute_statement() and execute_statement_raw() to use total_changes
difference approach
- This automatically handles all query types (DDL, DML, comments, etc.)
- Added comprehensive TAP test covering INSERT, CREATE, DROP, VACUUM, UPDATE, and
BEGIN operations
BENEFITS:
- More robust and accurate than DDL detection approach
- Handles edge cases like commented queries automatically
- No maintenance overhead for new query types
- Simpler and cleaner implementation
- Still fixes both Admin interface and SQLite3 Server
This approach is mathematically sound: affected_rows = total_changes_after -
total_changes_before, which gives the exact number of rows changed by the current
query execution.
Fixes #4855
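The before/after accounting can be demonstrated against a toy engine whose counter mimics `sqlite3_total_changes64()` semantics (it grows only when rows are actually modified, so DDL and no-op statements add nothing). The engine and function names are illustrative only:

```cpp
#include <cstdint>
#include <string>

// Toy engine: total_changes is a stand-in for sqlite3_total_changes64().
struct ToyEngine {
    int64_t total_changes = 0;
    int64_t table_rows = 0;

    void run(const std::string& sql) {
        if (sql.rfind("INSERT", 0) == 0) { table_rows += 1; total_changes += 1; }
        else if (sql.rfind("DELETE", 0) == 0) { total_changes += table_rows; table_rows = 0; }
        // CREATE/DROP/VACUUM/BEGIN: counter untouched
    }
};

// affected_rows = total_changes_after - total_changes_before,
// with no per-query-type detection needed.
int64_t execute_with_affected_rows(ToyEngine& db, const std::string& sql) {
    int64_t before = db.total_changes;   // snapshot before execution
    db.run(sql);
    return db.total_changes - before;    // exact rows changed by this statement
}
```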
- This patch was originally added by commit 0a70fd5 and
reverted by 8d1b5b5, prior to the release of `v3.0.3`.
- The following issues are addressed in this update,
- Fix for `use-after-free` issue which occurred during CI test.
- Fix for deadlock issue between `GTID_syncer` and `MySQL_Worker`.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Concurrency and Memory Management
* Lock-Free Ref Counting: Replaced global mutex-protected integer reference counts with `std::atomic<uint32_t>` within `PgSQL_STMT_Global_info`, eliminating lock contention during statement referencing.
* Modern Ownership: Adopted std::shared_ptr<const PgSQL_STMT_Global_info> for global and local storage, providing automatic, thread-safe memory and lifecycle management.
* Memory Optimization: Removed redundant auxiliary maps `global_id_to_stmt_names` and `map_stmt_id_to_info` from local and global statement managers respectively, reducing overall memory overhead.
* Optimized Purging: Statement removal logic was simplified for efficiently identifying and cleaning up unused statements.
Hot Path Performance (`BIND`, `DESCRIBE`, `EXECUTE`)
* Bypassed Global Lookups: Local session maps now store the `shared_ptr` directly, removing the need to acquire the global lock and search the global map during hot path operations.
* Direct Refcount Manipulation: Refcount modification functions now operate directly on the passed statement object, eliminating the overhead of searching the global map to find the object pointer based on statement id.
Safety and Protocol Logic (`PARSE`)
* Efficient Statement Reuse: Implemented a **local fast path** check for the unnamed statement (`""`), allowing immediate reuse of an identical query (same hash) upon re-parse, which bypasses global processing and locks.
Cleanup
* Cleaned up code and renamed class `PgSQL_STMT_Manager_v14` -> `PgSQL_STMT_Manager`.
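The lock-free counting pattern can be sketched as follows (struct and method names are hypothetical, not the actual `PgSQL_STMT_Global_info` API):

```cpp
#include <atomic>
#include <cstdint>
#include <memory>

// Sketch of the scheme described above: instead of a mutex-protected
// integer, the statement info carries its own atomic usage counter,
// while shared_ptr ownership handles memory lifetime automatically.
struct StmtGlobalInfoSketch {
    uint64_t statement_id = 0;
    std::atomic<uint32_t> ref_count{0};   // lock-free usage counter

    void add_ref() { ref_count.fetch_add(1, std::memory_order_relaxed); }

    // Returns true when the last reference was dropped,
    // marking the statement as a candidate for purging.
    bool release() { return ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1; }
};
```

Sessions holding `std::shared_ptr<StmtGlobalInfoSketch>` can bump the counter directly on the object they already have, without searching the global map or taking its lock.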
Problem: In fast forward mode, ProxySQL forwards packets directly from client
to backend without buffering them. If the backend connection closes
unexpectedly (e.g., due to server crash, network failure, or other issues),
ProxySQL immediately closes the client session. This can result in data loss
because the client may have sent additional data that hasn't been fully
transmitted yet, as ProxySQL does not wait for the output buffers to drain.
Solution: Implement a configurable grace period for session closure in fast
forward mode. When the backend closes unexpectedly, instead of closing the
session immediately, ProxySQL waits for a configurable timeout
(fast_forward_grace_close_ms, default 5000ms) to allow any pending client
output data to be sent. During this grace period:
- If the client output buffers become empty, the session closes gracefully.
- If the timeout expires, the session closes anyway to prevent indefinite
hanging.
Changes:
- Added global variable mysql_thread___fast_forward_grace_close_ms (0-3600000ms)
- Added session flags: backend_closed_in_fast_forward, fast_forward_grace_start_time
- Added data stream flag: defer_close_due_to_fast_forward
- Modified MySQL_Data_Stream::read_from_net() to detect backend EOF and initiate
grace close if client buffers are not empty
- Modified MySQL_Session::handler() FAST_FORWARD case to implement grace close
logic with timeout and buffer checks
- Added extensive inline documentation explaining the feature and its mechanics
This prevents data loss in fast forward scenarios while maintaining bounded
session lifetime.
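The grace-close decision reduces to two checks, sketched below with hypothetical names (timestamps are monotonic microseconds, matching the description above):

```cpp
#include <cstdint>

// Sketch of the grace-close decision described above, evaluated
// periodically after the backend closed in fast forward mode.
// grace_close_ms corresponds to mysql-fast_forward_grace_close_ms.
bool should_close_session(bool client_out_buffer_empty,
                          uint64_t now_us,
                          uint64_t grace_start_us,
                          uint32_t grace_close_ms) {
    if (client_out_buffer_empty) return true;   // all data drained: close gracefully
    uint64_t elapsed_ms = (now_us - grace_start_us) / 1000;
    return elapsed_ms >= grace_close_ms;        // bound the wait: close anyway
}
```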
Previously, the parser always tokenized the full command, even when we only
needed to check whether it was a transaction command. Now, it first extracts
the first word to determine relevance and performs full tokenization only
when necessary.
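The fast path can be sketched as a first-word check; the keyword list below is illustrative, not the parser's actual set:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Extract the leading keyword of a query, uppercased.
std::string first_word(const std::string& q) {
    size_t b = 0;
    while (b < q.size() && std::isspace((unsigned char)q[b])) b++;
    size_t e = b;
    while (e < q.size() && std::isalpha((unsigned char)q[e])) e++;
    std::string w = q.substr(b, e - b);
    std::transform(w.begin(), w.end(), w.begin(),
                   [](unsigned char c) { return (char)std::toupper(c); });
    return w;
}

// Sketch of the relevance check: full tokenization is performed only
// when the first word can begin a transaction command.
bool may_be_transaction_command(const std::string& q) {
    const std::string w = first_word(q);
    return w == "BEGIN" || w == "COMMIT" || w == "ROLLBACK" ||
           w == "START" || w == "SAVEPOINT";
}
```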
According to MySQL protocol, variable length strings are encoded using
length encoded integers. For reference, see:
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_com_stmt_execute.html
- https://dev.mysql.com/doc/dev/mysql-server/9.4.0/page_protocol_basic_dt_integers.html#a_protocol_type_int2
The protocol specifies that values greater than 2^24 (16777216) should
be encoded using '0xFE + 8-byte integer'. Yet, in reality MySQL ignores
the upper section of these 8-byte integers, treating them effectively
as 4-byte values. For the sake of compatibility, this commit changes the
decoding behavior for 'COM_STMT_EXECUTE' to match MySQL's. This
difference is subtle but important, since in practice MySQL itself
doesn't use the '8 bytes' from the field. This means that connectors
compatible with MySQL could run into issues when sending these packets
through ProxySQL (like the NodeJS 'mysql2' connector, which writes the
8 bytes as a duplicated 4-byte value, motivating these changes), a
situation that could result in rejection due to malformed packet
detection (or crashes/invalid handling in the worst-case scenario).
The previous decoding function is now renamed into
'mysql_decode_length_ll' to honor MySQL naming 'net_field_length_ll'.
For now, this protocol change is limited to 'COM_STMT_EXECUTE'.
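The compatible decoding can be sketched as below (a standalone illustration, not ProxySQL's actual function): a standard length-encoded integer reader, except that for the 0xFE prefix all 8 bytes are consumed from the wire but only the low 4 contribute to the value, mirroring MySQL's behavior:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of MySQL-compatible length-encoded integer decoding for
// COM_STMT_EXECUTE. *consumed reports how many bytes were read.
uint64_t decode_lenenc_compat(const uint8_t* p, size_t* consumed) {
    switch (p[0]) {
    case 0xFC:   // 2-byte integer
        *consumed = 3;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8);
    case 0xFD:   // 3-byte integer
        *consumed = 4;
        return (uint64_t)p[1] | ((uint64_t)p[2] << 8) | ((uint64_t)p[3] << 16);
    case 0xFE: { // 8 bytes on the wire, but only the low 4 are honored
        *consumed = 9;
        uint64_t v = 0;
        for (int i = 0; i < 4; i++) v |= (uint64_t)p[1 + i] << (8 * i);
        return v;  // upper 4 bytes intentionally ignored, as MySQL does
    }
    default:     // 1-byte integer (0x00 - 0xFA)
        *consumed = 1;
        return p[0];
    }
}
```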
When true, all `min_gtid` query annotations are ignored; see
https://proxysql.com/documentation/query-annotations/ for details.
This is useful on ProxySQL setups with multiple layers, where some
layers mandate GTID-based routing while others don't.
- Add new mysql/pgsql variable `processlist_max_query_length`.
- Min: 1K
- Max: 32M
- Default: 2M
- Truncate current query based on the configuration before inserting into
`stats_*_processlist` tables.
- Refactor/fix code related to other processlist configurations.
1. `session_idle_show_processlist` value was not updated in `ProxySQL_Admin.variables`.
2. Pass processlist config as an argument to `MySQL_Threads_Handler::SQL3_Processlist`
instead of using thread-local variables.
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
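The truncation step can be sketched as below (names and constants laid out per the bounds listed above; the helper is hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>

// Documented bounds for processlist_max_query_length.
constexpr uint32_t PROCESSLIST_QLEN_MIN = 1024;              // 1K
constexpr uint32_t PROCESSLIST_QLEN_MAX = 32 * 1024 * 1024;  // 32M

// Sketch: clamp the configured limit to its bounds, then truncate the
// current query before it is copied into a stats_*_processlist row.
std::string truncate_for_processlist(const std::string& query,
                                     uint32_t configured) {
    uint32_t limit = std::min(std::max(configured, PROCESSLIST_QLEN_MIN),
                              PROCESSLIST_QLEN_MAX);
    return query.size() > limit ? query.substr(0, limit) : query;
}
```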
This message is dumped with each call to 'process_pkt_handshake_response',
printing the updated context. When the verbosity value for module
'debug_mysql_protocol' is >= 5, the stored and client-supplied passwords
are dumped in HEX format; for values < 5, the passwords are masked.
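The verbosity gate can be sketched with a small helper (hypothetical name; the mask string is illustrative):

```cpp
#include <cstdio>
#include <string>

// Sketch of the verbosity-gated password rendering described above:
// verbosity >= 5 emits the raw bytes in HEX, anything lower masks them.
std::string render_password(const std::string& pass, int verbosity) {
    if (verbosity < 5) return "****";   // masked below the threshold
    std::string hex;
    char buf[3];
    for (unsigned char c : pass) {
        std::snprintf(buf, sizeof(buf), "%02X", c);
        hex += buf;
    }
    return hex;
}
```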
Previously, query cache metrics were shared between MySQL and PostgreSQL,
causing both to reflect the same values when performing cache operations.
This change isolates the metrics for each database type.
- Added `backend_pid` and `backend_state` columns to `stats_pgsql_processlist`
to display PostgreSQL backend process ID and connection state.
- Created `stats_pgsql_stat_activity` view on top of `stats_pgsql_processlist`
with column aliases matching PostgreSQL's `pg_stat_activity` for consistency.
These parameters use capitalized names in PostgreSQL for historical reasons.
ProxySQL now sends them using canonical capitalization to ensure client compatibility.
Add support for PostgreSQL query cancellation and backend termination
features to allow clients to cancel long-running queries and terminate
connections through the standard PostgreSQL protocol.
Features implemented:
- Intercept pg_backend_pid() queries and return ProxySQL session thread ID
- Intercept pg_terminate_backend() to terminate client connections asynchronously
- Intercept pg_cancel_backend() to cancel queries on backend connections
- Support Cancel Request protocol via separate connection with PID and secret key validation
- Return BackendKeyData message on successful authentication with session thread ID and unique cancel secret key
This enables clients to use standard PostgreSQL cancellation mechanisms
(pg_cancel_backend, pg_terminate_backend, and Cancel Request protocol)
while ProxySQL maintains proper session isolation and maps client requests
to appropriate backend connections.
Previously, each extended-query block was terminated with a SYNC,
which caused implicit transactions to commit prematurely. As a result,
earlier write operations (INSERT/UPDATE/DELETE) could not be rolled
back if a later statement in the same sequence failed.
This change switches to libpq pipeline mode and replaces intermediate
SYNC messages with FLUSH, ensuring that all client query frames execute
as part of the same implicit transaction. A final SYNC is still issued
to resynchronize the connection and make it safe for reuse in the pool.
- Add validation methods for `mysql_users`, `pgsql_users`, `mysql_servers`,
`pgsql_servers` and `proxysql_servers`
- Check for duplicates and mandatory fields
- Return descriptive error messages to clients when validation fails
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Co-authored-by: takaidohigasi <takaidohigasi@gmail.com>
Previously, deleting `PgSQL_Errors_stats` instances in TUs with only a forward
declaration caused the destructor to be skipped, leaking member allocations.
The fix ensures the full class definition is visible at delete sites.