pull/2/head
Broque Thomas 10 months ago
parent f2f09bfbe3
commit 6e5a0f76f6

@ -1,304 +0,0 @@
# Spotify to Plex Playlist Sync Implementation Guide
## Overview
This document details the complete implementation of the Spotify to Plex playlist synchronization feature, including all the challenges encountered and solutions implemented.
## Final Result
**Success**: Complete playlist syncing functionality that:
- Syncs Spotify playlists to Plex with same track order
- Shows real-time progress updates in both modal and playlist items
- Uses robust track matching (same as "Download Missing Tracks")
- Supports sync cancellation
- Handles all edge cases and errors gracefully
## Architecture Overview
### Core Components
1. **PlaylistDetailsModal** - UI modal with sync controls and status display
2. **PlaylistItem** - Main page playlist widgets with compact status icons
3. **PlaylistSyncService** - High-level sync orchestration
4. **PlexClient** - Playlist creation and track management
5. **MusicMatchingEngine** - Track matching logic
## Implementation Journey
### Phase 1: UI Enhancement
**Goal**: Add sync functionality to existing modal and playlist items
#### PlaylistDetailsModal Changes
- **Header Enhancement**: Added sync status display widget (hidden by default)
- Shows: Total tracks, matched tracks, failed tracks, completion percentage
- Appears on right side of header during sync operations
- Clean, minimal design with icons and numbers
- **Button State Management**:
- "Sync This Playlist" → "Cancel Sync" toggle
- Red styling when in cancel mode
- Proper state restoration on completion/cancellation
#### PlaylistItem Changes
- **Compact Status Icons**: Added to left of "Sync/Download" button
- 📀 Total tracks
- ✅ Matched tracks
- ❌ Failed tracks
- Percentage complete
- Auto-show/hide based on sync state
### Phase 2: Service Architecture
**Goal**: Create robust sync service with proper progress tracking
#### Sync Service Design
```python
class PlaylistSyncService:
async def sync_playlist(self, playlist: SpotifyPlaylist, download_missing: bool = False) -> SyncResult
```
**Key Features**:
- Accepts playlist object directly (no fetching all playlists)
- Detailed progress callbacks with track-level granularity
- Cancellation support throughout the process
- Comprehensive error handling and cleanup
#### Progress Tracking System
```python
@dataclass
class SyncProgress:
current_step: str
current_track: str
progress: float
total_steps: int
current_step_number: int
# Enhanced with detailed stats
total_tracks: int = 0
matched_tracks: int = 0
failed_tracks: int = 0
```
### Phase 3: Track Matching Integration
**Goal**: Use same robust matching as "Download Missing Tracks"
#### Problem Identified
Initial implementation tried to:
1. Fetch entire Plex library (10,000+ tracks)
2. Do bulk matching against all tracks
3. This was slow and caused "caching" appearance
#### Solution Implemented
**Individual Track Search Approach**:
```python
async def _find_track_in_plex(self, spotify_track: SpotifyTrack) -> Tuple[Optional[PlexTrackInfo], float]:
# Use same robust search logic as PlaylistTrackAnalysisWorker
# - Multiple title variations
# - Artist + title combinations
# - Early exit on confident matches
# - Title-only fallback
```
**Benefits**:
- ✅ Uses proven matching algorithm
- ✅ Shows real-time progress per track
- ✅ Much faster than bulk approach
- ✅ Early exit optimization
### Phase 4: Threading and Cancellation
**Goal**: Proper background processing with user control
#### Worker Thread Implementation
```python
class SyncWorker(QRunnable):
def cancel(self):
self._cancelled = True
if hasattr(self.sync_service, 'cancel_sync'):
self.sync_service.cancel_sync()
```
#### Cancellation Points
- Before each track search
- Between major sync phases
- In sync service at multiple checkpoints
- Proper cleanup on cancellation
### Phase 5: Plex Playlist Creation
**Goal**: Convert matched tracks to actual Plex playlists
#### Major Challenge: Track Object Conversion
**Problem**:
- Sync service finds tracks correctly using `search_tracks()`
- But `search_tracks()` returns `PlexTrackInfo` wrapper objects
- Playlist creation needs actual Plex track objects with `ratingKey`
- Trying to search again caused "Unknown filter field 'artist'" errors
#### Solution: Original Track Reference Storage
**Step 1**: Modified `search_tracks()` to store original track references
```python
# In PlexClient.search_tracks()
tracks = [PlexTrackInfo.from_plex_track(track) for track in candidate_tracks[:limit]]
# Store references to original tracks for playlist creation
for i, track_info in enumerate(tracks):
if i < len(candidate_tracks):
track_info._original_plex_track = candidate_tracks[i]
```
**Step 2**: Updated playlist creation to use stored references
```python
# In PlexClient.create_playlist()
elif hasattr(track, '_original_plex_track'):
# This is a PlexTrackInfo object with stored original track reference
original_track = track._original_plex_track
if original_track is not None:
plex_tracks.append(original_track)
```
#### Plex API Compatibility Issues
**Problem**: `server.createPlaylist(name, tracks)` failed with "Must include items to add"
**Solution**: Multi-approach error handling
```python
try:
playlist = self.server.createPlaylist(name, valid_tracks)
except:
try:
playlist = self.server.createPlaylist(name, items=valid_tracks)
except:
try:
playlist = self.server.createPlaylist(name, [])
playlist.addItems(valid_tracks)
except:
playlist = self.server.createPlaylist(name, valid_tracks[0])
if len(valid_tracks) > 1:
playlist.addItems(valid_tracks[1:])
```
## Key Technical Challenges & Solutions
### 1. Performance Issue: Bulk Plex Library Fetching
**Problem**: Initial sync appeared to "cache all playlists and tracks"
**Root Cause**:
- Called `get_user_playlists()` to find one playlist
- Called `search_tracks("", "", limit=10000)` to get entire library
**Solution**:
- Pass playlist object directly to sync service
- Use individual track searches with robust matching
- Real-time progress updates showing current track being matched
### 2. Unicode Logging Errors
**Problem**: `UnicodeEncodeError: 'charmap' codec can't encode characters`
**Root Cause**: Emoji characters (✔️❌🎤⚠️) in log messages
**Solution**: Removed emoji characters from all log messages
### 3. Track Object Type Mismatch
**Problem**: Playlist creation failed because wrong object types were passed
**Root Cause**:
- Search returns `PlexTrackInfo` wrappers
- Playlist creation needs raw Plex track objects
- Re-searching failed due to API filter issues
**Solution**:
- Store original track references in wrapper objects
- Use stored references for playlist creation
- Fallback to re-search only if references missing
### 4. Plex API Playlist Creation
**Problem**: Multiple different API call formats, unclear which works
**Solution**: Progressive fallback approach trying all known patterns
## Code Structure
### Files Modified
1. **`ui/pages/sync.py`**:
- `PlaylistDetailsModal`: Header sync status, button state management
- `PlaylistItem`: Compact status icons, sync state tracking
- Worker thread management and cancellation
2. **`services/sync_service.py`**:
- Complete rewrite to accept playlist objects
- Individual track matching approach
- Enhanced progress reporting
- Cancellation support throughout
3. **`core/plex_client.py`**:
- Modified `search_tracks()` to store original track references
- Enhanced `create_playlist()` with multiple API approaches
- Better error handling and debugging
4. **`core/matching_engine.py`**:
- Added missing helper methods:
- `match_playlist_tracks()`
- `generate_download_query()`
- `get_match_statistics()`
### Import Fixes
- Fixed `SpotifyTrack` import in sync service
- Added `Tuple` type hint import
- Corrected matching engine instantiation
## Testing Results
### Before Implementation
- ❌ No playlist sync functionality
- ❌ Only "Download Missing Tracks" available
### After Implementation
- ✅ **Full Playlist Sync**: Creates/updates Plex playlists matching Spotify
- ✅ **Real-time Progress**: Shows exactly which track is being matched
- ✅ **Perfect Match Rate**: Same robust algorithm as Download Missing Tracks
- ✅ **Cancellation**: Can cancel mid-sync with proper cleanup
- ✅ **Status Persistence**: Can close modal and reopen, sync continues
- ✅ **Error Handling**: Graceful handling of all failure modes
- ✅ **Performance**: Fast individual track searches vs slow bulk fetching
### Final Test Results (Aether Playlist)
```
2025-07-25 00:20:47 - Found 3 matches out of 3 tracks
2025-07-25 00:20:47 - Creating playlist with 3 matched tracks
2025-07-25 00:20:47 - Using stored track reference for: Aether by Virtual Mage (ratingKey: 155554)
2025-07-25 00:20:47 - Using stored track reference for: Astral Chill (The Present Sound Remix) by Virtual Mage (ratingKey: 155577)
2025-07-25 00:20:47 - Using stored track reference for: Orbit Love by Virtual Mage (ratingKey: 155537)
2025-07-25 00:20:47 - Final validation: 3 valid tracks with ratingKeys
2025-07-25 00:20:47 - Created playlist with first track and added 2 more tracks
```
**Result**: ✅ **100% success rate**, playlist created in Plex with all 3 tracks in correct order
## Integration Points
### Leverages Existing Systems
- **MusicMatchingEngine**: Uses same algorithm as Download Missing Tracks
- **PlexClient**: Extends existing search and playlist management
- **Qt Threading**: Follows established worker pattern
- **Progress Callbacks**: Consistent with existing UI patterns
### New Capabilities Added
- **Bidirectional UI Updates**: Modal ↔ Playlist Item status sync
- **Enhanced Progress Tracking**: Track-level granularity
- **Robust Error Recovery**: Multiple fallback approaches
- **Cancellation Throughout**: Every major operation can be cancelled
## Future Enhancements
### Potential Improvements
1. **Batch Playlist Sync**: Sync multiple playlists at once
2. **Sync Scheduling**: Automatic periodic sync
3. **Conflict Resolution**: Handle tracks that exist in multiple versions
4. **Sync History**: Track sync results over time
5. **Smart Caching**: Cache search results for better performance
### Technical Debt
1. **Remove Debug Logging**: Clean up extensive debug logs once stable
2. **Optimize Search Patterns**: Could cache common searches
3. **API Error Mapping**: More specific error messages for different failures
4. **Testing Coverage**: Unit tests for all sync components
## Conclusion
The playlist sync implementation successfully delivers a robust, user-friendly solution that:
- **Leverages existing proven systems** (matching engine, UI patterns)
- **Solves complex technical challenges** (object type mismatches, API compatibility)
- **Provides excellent user experience** (real-time progress, cancellation, status persistence)
- **Handles edge cases gracefully** (network errors, missing tracks, API failures)
- **Maintains high performance** (individual searches vs bulk operations)
The implementation demonstrates a deep understanding of the existing codebase and integrates seamlessly while adding significant new functionality.

@ -1,470 +0,0 @@
# Album Download Tracking Implementation - newMusic Application
## Overview
This document details the complete implementation journey of live album download tracking in the newMusic application. The goal was to provide real-time progress updates for album downloads in the artists page, showing users exactly how many tracks have completed downloading as each individual file finishes.
## The Challenge
The artists.py page had a partial implementation that displayed basic progress but lacked the sophisticated live status tracking that worked perfectly on the downloads.py and sync.py pages. Users would see albums stuck on "preparing" status without any indication of actual download progress.
## Initial Analysis
### Working Reference Implementation
The foundation came from analyzing `download-tracking-analysis.md`, which documented how the downloads and sync pages achieved reliable live status tracking through:
1. **Background Worker Threads**: `StatusProcessingWorker` and `SyncStatusProcessingWorker`
2. **API Polling**: Regular status checks via `soulseek_client.get_all_downloads()`
3. **ID-based Matching**: Primary matching by slskd download IDs with filename fallback
4. **Grace Period Logic**: 3-poll grace period for missing downloads before marking as failed
5. **Cleanup Worker Management**: Handling the cleanup worker that removes completed downloads from the API
### Artists Page Current State
The artists.py file had:
- Basic album download initiation (`start_album_download()`)
- Simple progress display logic
- Incomplete integration with the proven tracking system
- Wrong worker usage (trying to use `SyncStatusProcessingWorker` incorrectly)
## Problem Identification
### Core Issues Discovered
1. **Incorrect Worker Usage**: Artists page was trying to repurpose `SyncStatusProcessingWorker` which had different data structures
2. **Missing Data Structure Mapping**: No proper mapping between album downloads and slskd API responses
3. **Inadequate Status Processing**: Missing the sophisticated status resolution logic from working pages
4. **Incomplete Integration**: No connection to the proven download tracking infrastructure
## Implementation Phase 1: Foundation
### Created Dedicated AlbumStatusProcessingWorker
```python
class AlbumStatusProcessingWorker(QRunnable):
"""Background worker for processing album download status updates"""
def __init__(self, soulseek_client, album_downloads):
super().__init__()
self.soulseek_client = soulseek_client
self.album_downloads = album_downloads
self.signals = WorkerSignals()
def run(self):
try:
# Create async event loop for API calls
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Get all current downloads from slskd API
all_downloads = loop.run_until_complete(
self.soulseek_client.get_all_downloads()
)
# Process each album's download status
results = []
for album_id, album_info in self.album_downloads.items():
# Match downloads and determine status
# [Status processing logic...]
```
**Key Features:**
- Dedicated to album download tracking
- Proper async handling for API calls
- Status resolution matching the working pages
- Results structured for album-specific updates
### Enhanced poll_album_download_statuses()
```python
def poll_album_download_statuses(self):
"""Poll download statuses for all active albums using dedicated worker"""
if self._is_album_status_update_running:
return
if not self.album_downloads:
return
self._is_album_status_update_running = True
# Create worker with current album download data
worker = AlbumStatusProcessingWorker(
soulseek_client=self.soulseek_client,
album_downloads=dict(self.album_downloads) # Snapshot for thread safety
)
# Connect signals
worker.signals.completed.connect(self._handle_album_status_updates)
worker.signals.error.connect(lambda e: print(f"Album Status Worker Error: {e}"))
# Start in thread pool
self.album_status_processing_pool.start(worker)
```
### Fixed _handle_album_status_updates()
```python
def _handle_album_status_updates(self, results):
"""Process album status results from background worker"""
try:
albums_to_update = set()
albums_completed = set()
for result in results:
album_id = result['album_id']
download_id = result['download_id']
status = result['status']
progress = result['progress']
# [Robust status processing logic...]
```
## First Test: Download ID Mismatch
### Problem Encountered
User feedback: "nope that did not resolve the issue. im watching downloads finish in the the slskd webapi but the album i chose just says 'preparing'"
### Root Cause Analysis
The issue was **composite vs real download IDs**:
- **Artists page tracked**: Composite IDs like `"recovery8655_Taylor Swift-Fearless..."`
- **slskd API returned**: Real UUIDs like `"6bff31cd-07eb-4757-aae7-86fe6d4e847f"`
These IDs never matched, so status updates never found the corresponding downloads.
## Implementation Phase 2: ID Resolution
### Added Real Download ID Resolution
```python
def _get_real_download_id(self, composite_id, all_downloads):
"""Convert composite download ID to real slskd download ID"""
# Extract components from composite ID
parts = composite_id.split('_')
if len(parts) >= 3:
username = parts[0]
filename_part = '_'.join(parts[1:-2]) # Rejoin middle parts
# Find matching download in API response
for download in all_downloads:
if (download.username == username and
filename_part.lower() in download.filename.lower()):
return download.id
return None
```
### Enhanced Download ID Tracking Integration
```python
# In album download initiation
download_id = self.soulseek_client.download(...)
if download_id:
# Store real ID instead of composite
album_info['active_downloads'].append(download_id)
```
## Second Test: Still Not Working
### Problem Encountered
User feedback: "idk i think its worse? now it only says preparing again" with detailed logs showing all downloads "missing from API"
### Enhanced Debugging
The logs revealed:
```
🔔 Downloads page notified completion of: 6bff31cd-07eb-4757-aae7-86fe6d4e847f
🎵 Album 'taylor_swift_fearless_12345': Checking download a1165c82-dfba-492c-b584-dca104fb3f81
❌ Download a1165c82-dfba-492c-b584-dca104fb3f81 not found in API transfers
```
The notification system was working but the IDs still didn't match between tracking and API.
## Implementation Phase 3: The Breakthrough
### Core Problem Identification
The **cleanup worker** was removing completed downloads from the API before the artists page could detect completion. The downloads page was correctly updating its items with real IDs, but the artists page was still tracking the original composite IDs.
### Two-Pronged Solution
#### 1. Enhanced Queue Scanning with Real ID Resolution
```python
def update_active_downloads_from_queue(self):
"""Scan download queue items to get real download IDs"""
if not hasattr(self, 'downloads_page') or not self.downloads_page:
return
# Get current download items from both active and finished queues
active_items = getattr(self.downloads_page.download_queue.active_queue, 'download_items', [])
finished_items = getattr(self.downloads_page.download_queue.finished_queue, 'download_items', [])
all_items = list(active_items) + list(finished_items)
for album_id, album_info in self.album_downloads.items():
matching_downloads = []
completed_count = 0
for item in all_items:
# Check if this download item belongs to this album
if self._is_download_item_for_album(item, album_info):
current_id = getattr(item, 'download_id', None)
# Use the download ID directly from the item (should be the real one)
if current_id and current_id != 'NO_ID':
# Check if this item is in finished items (completed)
if item in finished_items:
completed_count += 1
else:
# It's an active download - use the current ID
matching_downloads.append(current_id)
# Update album tracking with real IDs and completion count
album_info['active_downloads'] = matching_downloads
album_info['completed_tracks'] = completed_count
```
#### 2. Direct Notification System
Modified `downloads.py` to notify the artists page directly when downloads complete **before** the cleanup worker runs:
```python
def move_to_finished(self, download_item):
"""Move a download item from active to finished queue"""
if download_item in self.active_queue.download_items:
# Notify artists page of completion BEFORE moving to finished (before cleanup)
if (download_item.status == 'completed' and
hasattr(download_item, 'download_id') and
download_item.download_id):
# Navigate to artists page and notify
main_window = self.find_main_window()
if main_window and hasattr(main_window, 'artists_page'):
main_window.artists_page.notify_download_completed(
download_item.download_id, download_item
)
```
### Smart Notification Handler
```python
def notify_download_completed(self, download_id, download_item=None):
"""Called by downloads page when a download completes (before cleanup)"""
# Find which album this belongs to - try multiple approaches
target_album_id = None
# Approach 1: Direct ID match
for album_id, album_info in self.album_downloads.items():
if download_id in album_info.get('active_downloads', []):
target_album_id = album_id
break
# Approach 2: Match by download item attributes
if not target_album_id and download_item:
for album_id, album_info in self.album_downloads.items():
if self._is_download_from_album(download_item, album_info):
target_album_id = album_id
break
# Approach 3: Replace composite ID with real ID
if not target_album_id and download_item:
item_title = getattr(download_item, 'title', '')
for album_id, album_info in self.album_downloads.items():
active_downloads = album_info.get('active_downloads', [])
for active_id in active_downloads[:]:
if item_title and item_title.lower() in active_id.lower():
# Replace composite with real ID
album_info['active_downloads'].remove(active_id)
album_info['active_downloads'].append(download_id)
target_album_id = album_id
break
if target_album_id:
# Update album progress immediately
album_info = self.album_downloads[target_album_id]
album_info['completed_tracks'] += 1
if download_id in album_info['active_downloads']:
album_info['active_downloads'].remove(download_id)
# Update UI
self.update_album_card_progress(target_album_id)
```
## Final Issue: Double Counting
### Problem Encountered
User feedback: "every time a download finishes it seems to be marked twice so when the first track finished downloading it jumped from 0/19 to 2/19"
### Root Cause
Both the notification system AND the regular polling were incrementing the completed count:
1. **Notification system** (line 2403): `album_info['completed_tracks'] += 1`
2. **Regular polling** (line 2274): `album_info['completed_tracks'] += 1`
### Solution: Duplicate Detection
Added completion tracking to prevent double counting:
```python
def _mark_download_as_completed(self, download_id):
"""Mark a download as completed to handle cleanup detection"""
if download_id:
self.completed_downloads.add(download_id)
def _was_download_previously_completed(self, download_id):
"""Check if a download was previously marked as completed"""
return download_id in self.completed_downloads
def notify_download_completed(self, download_id, download_item=None):
"""Called by downloads page when a download completes (before cleanup)"""
# Check if already processed to prevent double counting
if self._was_download_previously_completed(download_id):
print(f"⏭️ Download {download_id} already processed, skipping")
return
# Mark as completed and process...
self._mark_download_as_completed(download_id)
# [Rest of processing...]
# Also in regular polling:
if status == 'completed':
# Only process if not already handled by notification system
if not self._was_download_previously_completed(download_id):
# [Process completion...]
```
## Final Architecture
### Complete System Flow
1. **Album Download Initiated**
- User clicks download button for album
- Real download IDs stored in `album_info['active_downloads']`
- Album card shows "Downloading 0/X tracks"
2. **Live Tracking via Dual System**
- **Primary: Direct Notification**
- Downloads page calls `notify_download_completed()` immediately when track finishes
- Immediate UI update with completion count increment
- Track marked as completed to prevent duplicate counting
- **Fallback: Polling System**
- Background worker polls slskd API every 2 seconds
- Checks for completions not caught by notification
- Duplicate detection prevents double counting
3. **Progress Display**
- Album cards show real-time updates: "Downloading 1/19 tracks (5%)"
- Progress bar fills incrementally
- Final state: "Downloaded 19/19 tracks (100%)"
### Key Components
#### Data Structures
```python
# Album tracking
self.album_downloads = {
'album_id': {
'spotify_album': SpotifyAlbum,
'album_result': SearchResult,
'active_downloads': [real_download_ids],
'completed_tracks': int,
'total_tracks': int
}
}
# Completion tracking
self.completed_downloads = set() # Set of completed download IDs
```
#### Worker Classes
- **`AlbumStatusProcessingWorker`**: Background API polling
- **Notification System**: Direct completion callbacks
- **Duplicate Detection**: Prevents double counting
#### Integration Points
- **`downloads.py:move_to_finished()`**: Triggers notifications
- **`artists.py:notify_download_completed()`**: Handles completions
- **`artists.py:poll_album_download_statuses()`**: Fallback polling
- **Timer System**: 2-second polling interval
## Key Insights
### Critical Success Factors
1. **Understanding the Cleanup Worker Problem**
- The slskd cleanup worker removes completed downloads from the API
- This breaks traditional polling-only approaches
- Solution: Catch completions BEFORE cleanup via direct notification
2. **ID Lifecycle Management**
- Downloads start with composite IDs from search results
- slskd assigns real UUIDs when downloads begin
- Must track this transition and update references
3. **Duplicate Prevention**
- Multiple systems can detect the same completion
- Completion tracking set prevents double counting
- Prioritize fast notification over slower polling
4. **Thread Safety**
- Background workers need data snapshots
- UI updates must happen on main thread
- Signal/slot system bridges thread boundaries safely
### Performance Optimizations
- **Background Processing**: All API calls in worker threads
- **Adaptive Updates**: Only update UI when status actually changes
- **Efficient Matching**: Direct ID lookups with fallback strategies
- **Minimal API Calls**: Leverage notification system to reduce polling
## Testing Results
### Before Implementation
- Albums stuck on "preparing" status
- No progress indication during downloads
- User confusion about download state
### After Implementation
- Real-time progress: "Downloading 1/19 tracks (5%)"
- Immediate updates as each track completes
- Accurate completion detection and final state
- Smooth increments (1/19 → 2/19 → 3/19) without double counting
## Code Files Modified
### `/ui/pages/artists.py`
- Added `AlbumStatusProcessingWorker` class (lines 243-418)
- Enhanced `poll_album_download_statuses()` method
- Fixed `_handle_album_status_updates()` for robust result processing
- Added completion tracking with `notify_download_completed()` method
- Enhanced `update_active_downloads_from_queue()` with real ID resolution
### `/ui/pages/downloads.py`
- Modified `move_to_finished()` method to notify artists page before cleanup
- Removed redundant notification code after enhanced system worked
### Key Integration Functions
- `notify_download_completed()`: Direct completion notification
- `_mark_download_as_completed()`: Completion tracking
- `_was_download_previously_completed()`: Duplicate prevention
- `update_album_card_progress()`: UI progress updates
## Conclusion
The album download tracking implementation required solving a complex interaction between multiple systems:
1. **API Lifecycle**: Understanding when downloads appear/disappear from slskd API
2. **ID Management**: Tracking the transition from composite to real download IDs
3. **Cleanup Timing**: Working around the cleanup worker that removes completed downloads
4. **Duplicate Detection**: Preventing multiple systems from double-counting completions
5. **Thread Safety**: Coordinating between background workers and UI updates
The final solution combines the reliability of the proven download tracking architecture with album-specific enhancements, providing users with the live progress tracking they needed while maintaining system performance and accuracy.
**Result**: Users now see real-time album download progress that updates immediately as each track completes, matching the quality and reliability of the existing downloads page tracking system.

@ -1,544 +0,0 @@
# Download Tracking Analysis - newMusic Application
## Overview
This document provides a comprehensive analysis of how the newMusic application accurately tracks downloads through the slskd API. The system implements sophisticated polling, status resolution, and queue management to provide real-time download status updates while maintaining UI responsiveness.
## Architecture Overview
The download tracking system follows a multi-layered architecture:
1. **API Layer**: `SoulseekClient` interfaces with slskd daemon
2. **Worker Layer**: Background threads handle expensive status processing
3. **UI Layer**: `DownloadsPage` and `SyncPlaylistModal` provide user interfaces
4. **Queue Management**: Active and finished download queue management
## Core Data Models
### DownloadStatus (`/core/soulseek_client.py:189`)
```python
@dataclass
class DownloadStatus:
id: str # Unique download identifier from slskd
filename: str # Full path of the downloading file
username: str # Soulseek user providing the file
state: str # Current slskd state (e.g., "InProgress", "Completed")
progress: float # Download progress percentage (0.0-100.0)
size: int # Total file size in bytes
transferred: int # Bytes transferred so far
speed: int # Average download speed
time_remaining: Optional[int] = None # Estimated time remaining
```
## Primary API Functions
### SoulseekClient.get_all_downloads() (`/core/soulseek_client.py:789`)
**Purpose**: Retrieves all active downloads from slskd API
**Input**: None
**Output**: `List[DownloadStatus]`
**Process**:
1. Makes GET request to `/api/v0/transfers/downloads`
2. Parses nested response structure: `[{"username": "user", "directories": [{"files": [...]}]}]`
3. Extracts progress from state strings or `progress` field
4. Creates `DownloadStatus` objects for each file
**Key Implementation**:
```python
async def get_all_downloads(self) -> List[DownloadStatus]:
response = await self._make_request('GET', 'transfers/downloads')
downloads = []
for user_data in response:
username = user_data.get('username', '')
directories = user_data.get('directories', [])
for directory in directories:
files = directory.get('files', [])
for file_data in files:
# Parse progress
progress = 0.0
if file_data.get('state', '').lower().startswith('completed'):
progress = 100.0
elif 'progress' in file_data:
progress = float(file_data.get('progress', 0.0))
status = DownloadStatus(
id=file_data.get('id', ''),
filename=file_data.get('filename', ''),
username=username,
state=file_data.get('state', ''),
progress=progress,
size=file_data.get('size', 0),
transferred=file_data.get('bytesTransferred', 0),
speed=file_data.get('averageSpeed', 0),
time_remaining=file_data.get('timeRemaining')
)
downloads.append(status)
```
### SoulseekClient.get_download_status() (`/core/soulseek_client.py:764`)
**Purpose**: Retrieves status for a specific download by ID
**Input**: `download_id: str`
**Output**: `Optional[DownloadStatus]`
**Process**:
1. Makes GET request to `/api/v0/transfers/downloads/{download_id}`
2. Creates single `DownloadStatus` object
3. Returns `None` if download not found
### SoulseekClient.clear_all_completed_downloads() (`/core/soulseek_client.py:910`)
**Purpose**: Removes all completed downloads from slskd backend
**Input**: None
**Output**: `bool` (success/failure)
**Process**:
1. Makes DELETE request to `/api/v0/transfers/downloads/all/completed`
2. Clears downloads with "Completed", "Cancelled", or "Failed" status
3. Used for backend cleanup to prevent API response bloat
## Downloads Page Status Tracking
### Main Status Update Function (`/ui/pages/downloads.py:9504`)
**Function**: `update_download_status()`
**Purpose**: Primary status update coordinator for the downloads page
**Input**: None (uses instance state)
**Output**: None (triggers UI updates via signals)
**Process**:
1. Checks if status update is already running (prevents concurrent updates)
2. Filters for active downloads only
3. Creates `StatusProcessingWorker` with current download items
4. Connects worker completion signal to `_handle_processed_status_updates()`
5. Starts worker in thread pool
**Key Implementation**:
```python
def update_download_status(self):
if self._is_status_update_running or not self.soulseek_client:
return
active_items = [item for item in self.download_queue.active_queue.download_items]
if not active_items:
self._is_status_update_running = False
return
self._is_status_update_running = True
worker = StatusProcessingWorker(
soulseek_client=self.soulseek_client,
download_items=active_items
)
worker.signals.completed.connect(self._handle_processed_status_updates)
worker.signals.error.connect(lambda e: print(f"Status Worker Error: {e}"))
self.status_processing_pool.start(worker)
```
### Enhanced Status Update Function (`/ui/pages/downloads.py:9206`)
**Function**: `update_download_status_v2()`
**Purpose**: Optimized version with adaptive polling and thread safety
**Input**: None
**Output**: None
**Key Improvements**:
- Thread-safe access to download items
- Adaptive polling frequency based on download activity
- Enhanced transfer matching with duplicate prevention
- Improved error handling and state consistency
### Background Status Processing Worker (`/ui/pages/downloads.py:119`)
**Class**: `StatusProcessingWorker`
**Purpose**: Performs expensive status processing in background thread
**Input**:
- `soulseek_client`: API client instance
- `download_items`: List of download items to check
**Output**: Emits `completed` signal with list of status update results
**Process**:
1. Creates async event loop in background thread
2. Calls `soulseek_client._make_request('GET', 'transfers/downloads')`
3. Flattens nested transfer data structure
4. Matches downloads by ID, falls back to filename matching
5. Determines new status based on slskd state
6. Returns structured results for main thread processing
**Key Status Mapping**:
```python
# Terminal states checked first (critical for correct status determination)
if 'Cancelled' in state or 'Canceled' in state:
new_status = 'cancelled'
elif 'Failed' in state or 'Errored' in state:
new_status = 'failed'
elif 'Completed' in state or 'Succeeded' in state:
new_status = 'completed'
elif 'InProgress' in state:
new_status = 'downloading'
else:
new_status = 'queued'
```
### Status Update Result Processing (`/ui/pages/downloads.py:9327`)
**Function**: `_handle_processed_status_updates()`
**Purpose**: Applies background worker results to UI on main thread
**Input**: `results: List[dict]` - Status update results from worker
**Output**: None (updates UI state)
**Process**:
1. Iterates through results from background worker
2. Finds corresponding download items by widget ID
3. Updates download item status and progress
4. Handles queue transitions for completed downloads
5. Updates tab counts and progress indicators
## Sync Page Status Tracking
### Sync Status Polling (`/ui/pages/sync.py:4099`)
**Function**: `poll_all_download_statuses()`
**Purpose**: Status update coordinator for sync playlist modal
**Input**: None (uses `self.active_downloads`)
**Output**: None
**Process**:
1. Creates snapshot of active download data for thread safety
2. Filters downloads that have valid slskd results
3. Creates `SyncStatusProcessingWorker` with download data
4. Starts worker in dedicated thread pool
**Key Implementation**:
```python
def poll_all_download_statuses(self):
if self._is_status_update_running or not self.active_downloads:
return
self._is_status_update_running = True
items_to_check = []
for d in self.active_downloads:
if d.get('slskd_result') and hasattr(d['slskd_result'], 'filename'):
items_to_check.append({
'widget_id': d['download_index'],
'download_id': d.get('download_id'),
'file_path': d['slskd_result'].filename,
'api_missing_count': d.get('api_missing_count', 0)
})
worker = SyncStatusProcessingWorker(
self.parent_page.soulseek_client,
items_to_check
)
worker.signals.completed.connect(self._handle_processed_status_updates)
self.download_status_pool.start(worker)
```
### Sync Background Status Worker (`/ui/pages/sync.py:348`)
**Class**: `SyncStatusProcessingWorker`
**Purpose**: Background status processing specific to sync modal
**Input**:
- `soulseek_client`: API client
- `download_items_data`: List of download data snapshots
**Output**: Emits results with status updates
**Key Features**:
- Enhanced transfer data parsing (handles both nested and flat structures)
- Grace period for missing downloads (3 polls before marking as failed)
- Automatic download ID correction via filename matching
- Comprehensive error state detection
**Enhanced Transfer Parsing**:
```python
# Handles multiple response formats from slskd API
all_transfers = []
for user_data in transfers_data:
# Check for files directly under user object
if 'files' in user_data and isinstance(user_data['files'], list):
all_transfers.extend(user_data['files'])
# Also check for files nested inside directories
if 'directories' in user_data and isinstance(user_data['directories'], list):
for directory in user_data['directories']:
if 'files' in directory and isinstance(directory['files'], list):
all_transfers.extend(directory['files'])
```
### Sync Status Result Processing (`/ui/pages/sync.py:4138`)
**Function**: `_handle_processed_status_updates()`
**Purpose**: Processes sync worker results and triggers appropriate actions
**Input**: `results: List[dict]` - Worker results
**Output**: None
**Process**:
1. Creates lookup map for active downloads
2. Updates download IDs when corrected by filename matching
3. Handles terminal states (completed, failed, cancelled)
4. Manages retry logic for failed downloads
5. Updates missing count tracking for grace period logic
## Status Polling and Timers
### Downloads Page Timer Setup (`/ui/pages/downloads.py:4863`)
```python
self.download_status_timer = QTimer()
self.download_status_timer.timeout.connect(self.update_download_status)
self.download_status_timer.start(1000) # Poll every 1 second
```
### Sync Page Timer Setup (`/ui/pages/sync.py:3529`)
```python
self.download_status_timer = QTimer(self)
self.download_status_timer.timeout.connect(self.poll_all_download_statuses)
self.download_status_timer.start(2000) # Poll every 2 seconds
```
### Adaptive Polling (`/ui/pages/downloads.py:9183`)
**Function**: `_update_adaptive_polling()`
**Purpose**: Adjusts polling frequency based on download activity
**Logic**:
- **Active downloads present**: 500ms intervals
- **No active downloads**: 5000ms intervals
- Optimizes performance by reducing unnecessary API calls
## Download Matching Strategies
### Primary: ID-Based Matching
The system primarily matches downloads using the unique ID assigned by slskd:
```python
# Direct ID lookup
matching_transfer = transfers_by_id.get(item_data['download_id'])
```
### Fallback: Filename-Based Matching
When ID matching fails, the system falls back to filename comparison:
```python
if not matching_transfer:
expected_basename = os.path.basename(item_data['file_path']).lower()
for t in all_transfers:
api_basename = os.path.basename(t.get('filename', '')).lower()
if api_basename == expected_basename:
matching_transfer = t
break
```
### Enhanced Matching in V2 (`/ui/pages/downloads.py:9278`)
**Function**: `_find_matching_transfer_v2()`
**Features**:
- Prevents duplicate matches across multiple downloads
- Tracks already-matched transfer IDs
- Maintains match consistency across polling cycles
## Grace Period and Missing Download Handling
### Missing Download Grace Period
The system implements a 3-poll grace period before marking downloads as failed:
```python
# Grace period logic
item_data['api_missing_count'] = item_data.get('api_missing_count', 0) + 1
if item_data['api_missing_count'] >= 3:
print(f"❌ Download failed (missing from API after 3 checks): {expected_filename}")
payload = {'widget_id': item_data['widget_id'], 'status': 'failed'}
```
**Purpose**: Handles temporary API inconsistencies and network issues
## Queue Management and State Transitions
### Download Queue Structure
The system maintains separate queues:
- **Active Queue**: Currently downloading or queued items
- **Finished Queue**: Completed, cancelled, or failed downloads
### State Transition Function (`/ui/pages/downloads.py:315`)
**Function**: `atomic_state_transition()`
**Purpose**: Thread-safe status updates with callback support
**Input**:
- `download_item`: Item to update
- `new_status`: Target status
- `callback`: Optional callback function
**Process**:
1. Captures old status
2. Updates item status atomically
3. Calls callback with old/new status if provided
### Queue Movement Logic
Downloads transition between queues based on status:
```python
# Terminal states move to finished queue
if new_status in ['completed', 'cancelled', 'failed']:
self.download_queue.move_to_finished(download_item)
self.download_queue.active_queue.remove_item(download_item)
```
## Backend Cleanup System
### Periodic Cleanup (`/ui/pages/downloads.py:9534`)
**Function**: `_periodic_cleanup_check()`
**Purpose**: Prevents slskd backend from accumulating completed downloads
**Process**:
1. Identifies downloads needing cleanup from previous polling cycle
2. Performs bulk cleanup for standard completed downloads
3. Handles individual cleanup for errored downloads
4. Prepares cleanup list for next cycle
### Cleanup Categories
**Bulk Cleanup States**:
- 'Completed, Succeeded'
- 'Completed, Cancelled'
- 'Cancelled'
- 'Canceled'
**Individual Cleanup States**:
- 'Completed, Errored'
- 'Failed'
- 'Errored'
### Backend Cleanup Execution (`/ui/pages/downloads.py:9617`)
**Function**: `_cleanup_backend_downloads()`
**Process**:
1. Runs in background thread to avoid UI blocking
2. Calls `soulseek_client.clear_all_completed_downloads()`
3. Logs cleanup results
4. Handles cleanup failures gracefully
## Error Handling and Resilience
### Connection Failure Handling
All API functions include comprehensive error handling:
```python
try:
response = await self._make_request('GET', 'transfers/downloads')
if not response:
return []
# Process response...
except Exception as e:
logger.error(f"Error getting downloads: {e}")
return []
```
### Thread Safety Measures
- **Status Update Locks**: Prevent concurrent status processing
- **Queue Consistency Locks**: Ensure atomic queue operations
- **Worker Thread Pools**: Manage background thread lifecycle
### Graceful Degradation
- System continues functioning when API calls fail
- Missing downloads handled with grace period
- UI remains responsive during network issues
## Performance Optimizations
### Background Threading
All expensive operations run in background threads:
- API calls to slskd
- Status processing and matching
- Backend cleanup operations
### Efficient Data Structures
- Transfer lookup dictionaries for O(1) matching
- Set-based duplicate tracking
- Minimal data copying between threads
### Adaptive Polling
Polling frequency adjusts based on activity:
- High frequency when downloads active
- Low frequency when idle
- Immediate updates for user actions
## Integration Points
### Key Function Call Chains
1. **Timer → Status Update → Worker → Results → UI Update**
2. **Download Start → Queue Add → Status Tracking → Completion → Queue Move**
3. **Periodic Cleanup → Backend Query → Bulk Clear → UI Sync**
### Critical State Synchronization
- **UI Thread**: Handles all widget updates
- **Background Threads**: Perform API operations
- **Signal/Slot System**: Bridges thread boundaries safely
## Summary
The newMusic download tracking system provides robust, real-time status monitoring through:
1. **Layered Architecture**: Clean separation between API, processing, and UI layers
2. **Background Processing**: Non-blocking status updates via worker threads
3. **Intelligent Matching**: ID-based primary matching with filename fallback
4. **Grace Period Handling**: Tolerance for temporary API inconsistencies
5. **Adaptive Polling**: Performance optimization based on download activity
6. **Comprehensive Cleanup**: Prevents backend bloat through periodic maintenance
7. **Thread Safety**: Consistent state management across concurrent operations
This architecture ensures accurate download tracking while maintaining responsive UI performance and handling various edge cases and error conditions gracefully.

@ -1,279 +0,0 @@
OverviewThis plan will guide you through implementing the "Correct Failed Matches" feature using the hybrid model we discussed. The automated download process will continue uninterrupted, and a button will appear as soon as a track fails, allowing you to choose when to address the failures.All changes will be made within the sync.py file.Step 1: Add State Tracking for Failed DownloadsFirst, we need a list in our DownloadMissingTracksModal class to keep track of tracks that have permanently failed after all automated retries.Location: sync.py -> DownloadMissingTracksModal class -> __init__ method.Action: Add the following line inside the __init__ method, near the other state tracking variables.# In DownloadMissingTracksModal.__init__
# ... existing state tracking variables ...
self.download_in_progress = False
# --- ADD THIS LINE ---
self.permanently_failed_tracks = []
# --- END OF ADDITION ---
print(f"📊 Total tracks: {self.total_tracks}")
Step 2: Add the "Correct Failed Matches" Button to the UINext, we'll add the new button to the modal's UI. It will be hidden by default and will only appear when there's at least one failed track to correct.Location: sync.py -> DownloadMissingTracksModal class -> create_buttons method.Action: Add the code for the new button within the create_buttons method, right before the "Close" button.# In DownloadMissingTracksModal.create_buttons
# ... existing button code ...
layout = QHBoxLayout(button_frame)
layout.setSpacing(15)
layout.setContentsMargins(0, 10, 0, 0)
# --- ADD THE NEW BUTTON DEFINITION HERE ---
self.correct_failed_btn = QPushButton("🔧 Correct Failed Matches")
self.correct_failed_btn.setFixedSize(220, 40) # Slightly wider for counter text
self.correct_failed_btn.setStyleSheet("""
QPushButton {
background-color: #ffc107; /* Amber color */
color: #000000;
border: none;
border-radius: 6px;
font-size: 13px;
font-weight: bold;
padding: 10px 20px;
}
QPushButton:hover {
background-color: #ffca28;
}
""")
self.correct_failed_btn.clicked.connect(self.on_correct_failed_matches_clicked)
self.correct_failed_btn.hide() # Initially hidden
# --- END OF ADDITION ---
# Begin Search button
self.begin_search_btn = QPushButton("Begin Search")
# ... existing code for other buttons ...
layout.addStretch()
layout.addWidget(self.begin_search_btn)
layout.addWidget(self.cancel_btn)
# --- ADD THE BUTTON TO THE LAYOUT ---
layout.addWidget(self.correct_failed_btn)
# --- END OF ADDITION ---
layout.addWidget(self.close_btn)
return button_frame
Step 3: Update Failure Handling LogicNow, we need to modify the method that handles a permanently failed download. It will now add the failed track to our new list and update the "Correct Failed Matches" button.Location: sync.py -> DownloadMissingTracksModal class.Action: Find the on_parallel_track_failed method and replace the entire method with the version below.# --- REPLACE this entire method in DownloadMissingTracksModal ---
def on_parallel_track_failed(self, download_index, reason):
"""Handle failure of a parallel track download"""
print(f"❌ Parallel download {download_index + 1} failed: {reason}")
if hasattr(self, 'parallel_search_tracking') and download_index in self.parallel_search_tracking:
track_info = self.parallel_search_tracking[download_index]
# --- NEW LOGIC TO TRACK PERMANENT FAILURES ---
# Add the failed track to our list for manual correction
if track_info not in self.permanently_failed_tracks:
self.permanently_failed_tracks.append(track_info)
self.update_failed_matches_button() # Update the button visibility and count
# --- END OF NEW LOGIC ---
self.on_parallel_track_completed(download_index, False)
Action: Now, add the new helper method that controls the button's visibility and text. Paste this new method anywhere inside the DownloadMissingTracksModal class.# --- ADD this new method to DownloadMissingTracksModal ---
def update_failed_matches_button(self):
"""Shows, hides, and updates the counter on the 'Correct Failed Matches' button."""
count = len(self.permanently_failed_tracks)
if count > 0:
self.correct_failed_btn.setText(f"🔧 Correct {count} Failed Match{'es' if count > 1 else ''}")
self.correct_failed_btn.show()
else:
self.correct_failed_btn.hide()
Step 4: Create the ManualMatchModal ClassThis is the largest step. We need to create the new modal that will handle the manual search-and-download process. This is a completely new class, designed to meet your specifications for styling and functionality.Location: sync.pyAction: First, add these imports to the top of your sync.py file if they don't already exist.# At the top of sync.py with other imports
from PyQt6.QtWidgets import QLineEdit
from core.soulseek_client import TrackResult
Action: Now, copy the entire ManualMatchModal class definition below and paste it into sync.py. A good place is right before the DownloadMissingTracksModal class definition begins.# --- PASTE THIS ENTIRE NEW CLASS into sync.py ---
class ManualMatchModal(QDialog):
"""Modal for manually searching and downloading a failed track."""
track_resolved = pyqtSignal(object)
def __init__(self, failed_tracks, parent_modal):
super().__init__(parent_modal)
self.parent_modal = parent_modal
self.soulseek_client = parent_modal.parent_page.soulseek_client
self.downloads_page = parent_modal.downloads_page
self.failed_tracks = list(failed_tracks) # Use a copy of the list
self.current_track_info = None
self.search_worker = None
self.setWindowTitle("Manual Track Correction")
self.setMinimumSize(900, 700)
self.setup_ui()
self.load_next_track()
def setup_ui(self):
self.setStyleSheet("""
QDialog { background-color: #1e1e1e; }
QLabel { color: #ffffff; font-size: 14px; }
QPushButton {
background-color: #1db954; color: #000000; border: none;
border-radius: 6px; font-size: 13px; font-weight: bold;
padding: 10px 20px; min-width: 80px;
}
QPushButton:hover { background-color: #1ed760; }
QPushButton:disabled { background-color: #404040; color: #888888; }
QLineEdit {
background: #404040; border: 1px solid #606060; border-radius: 6px;
padding: 10px; color: #ffffff; font-size: 13px;
}
QScrollArea { border: none; }
""")
self.main_layout = QVBoxLayout(self)
self.main_layout.setContentsMargins(20, 20, 20, 20)
self.main_layout.setSpacing(15)
info_frame = QFrame()
info_frame.setStyleSheet("background-color: #2d2d2d; border-radius: 8px; padding: 15px;")
info_layout = QVBoxLayout(info_frame)
self.info_label = QLabel("Loading track...")
self.info_label.setFont(QFont("Arial", 16, QFont.Weight.Bold))
self.info_label.setWordWrap(True)
info_layout.addWidget(self.info_label)
self.main_layout.addWidget(info_frame)
search_layout = QHBoxLayout()
self.search_input = QLineEdit()
self.search_input.returnPressed.connect(self.perform_manual_search)
self.search_btn = QPushButton("Search")
self.search_btn.clicked.connect(self.perform_manual_search)
search_layout.addWidget(self.search_input)
search_layout.addWidget(self.search_btn)
self.main_layout.addLayout(search_layout)
self.results_scroll = QScrollArea()
self.results_scroll.setWidgetResizable(True)
self.results_widget = QWidget()
self.results_layout = QVBoxLayout(self.results_widget)
self.results_layout.setSpacing(8)
self.results_scroll.setWidget(self.results_widget)
self.main_layout.addWidget(self.results_scroll, 1)
def load_next_track(self):
self.clear_results()
if not self.failed_tracks:
QMessageBox.information(self, "Complete", "All failed tracks have been addressed.")
self.accept()
return
self.current_track_info = self.failed_tracks[0]
spotify_track = self.current_track_info['spotify_track']
artist = spotify_track.artists[0] if spotify_track.artists else "Unknown"
self.info_label.setText(f"Could not find: <b>{spotify_track.name}</b><br>by {artist}")
self.search_input.setText(f"{artist} {spotify_track.name}")
# Display cached results first, as requested
cached_candidates = self.current_track_info.get('candidates', [])
if cached_candidates:
self.results_layout.addWidget(QLabel("Showing results from initial search. Or, perform a new search above."))
for result in cached_candidates:
self.results_layout.addWidget(self.create_result_widget(result))
else:
self.perform_manual_search() # If no cache, search automatically
def perform_manual_search(self):
query = self.search_input.text().strip()
if not query: return
self.clear_results()
self.results_layout.addWidget(QLabel(f"Searching for '{query}'..."))
self.search_btn.setText("Searching...")
self.search_btn.setEnabled(False)
worker = self.parent_modal.start_search_worker_parallel(
query, [query], self.current_track_info['spotify_track'],
self.current_track_info['track_index'], self.current_track_info['table_index'],
0, self.current_track_info['download_index']
)
worker.signals.search_completed.connect(self.on_manual_search_completed)
worker.signals.search_failed.connect(self.on_manual_search_failed)
def on_manual_search_completed(self, results, query):
self.search_btn.setText("Search")
self.search_btn.setEnabled(True)
self.clear_results()
if not results:
self.results_layout.addWidget(QLabel("No results found for this query."))
return
for result in results:
self.results_layout.addWidget(self.create_result_widget(result))
def on_manual_search_failed(self, query, error):
self.search_btn.setText("Search")
self.search_btn.setEnabled(True)
self.clear_results()
self.results_layout.addWidget(QLabel(f"Search failed: {error}"))
def create_result_widget(self, result):
widget = QFrame()
widget.setStyleSheet("background-color: #3a3a3a; border-radius: 6px; padding: 10px;")
layout = QHBoxLayout(widget)
# Display filename and path structure
path_parts = result.filename.replace('\\', '/').split('/')
filename = path_parts[-1]
path_structure = '/'.join(path_parts[:-1])
info_text = f"<b>{filename}</b><br><i style='color:#aaaaaa;'>{path_structure}</i><br>Quality: {result.quality.upper()}, Size: {result.size // 1024} KB"
info_label = QLabel(info_text)
info_label.setWordWrap(True)
select_btn = QPushButton("Select")
select_btn.setFixedWidth(100)
select_btn.clicked.connect(lambda: self.on_selection_made(result))
layout.addWidget(info_label, 1)
layout.addWidget(select_btn)
return widget
def on_selection_made(self, slskd_result):
print(f"Manual selection made: {slskd_result.filename}")
# This starts the download via the main modal's infrastructure
self.parent_modal.start_validated_download_parallel(
slskd_result,
self.current_track_info['spotify_track'],
self.current_track_info['track_index'],
self.current_track_info['table_index'],
self.current_track_info['download_index']
)
self.track_resolved.emit(self.current_track_info)
self.failed_tracks.pop(0)
self.load_next_track()
def clear_results(self):
while self.results_layout.count():
child = self.results_layout.takeAt(0)
if child.widget():
child.widget().deleteLater()
Step 5: Integrate the New ModalFinally, we need to connect the "Correct Failed Matches" button to open our new modal and create a method to handle the track_resolved signal that the new modal will emit.Location: sync.py -> DownloadMissingTracksModal class.Action: Add these two new methods anywhere inside the DownloadMissingTracksModal class.# --- ADD these two new methods to DownloadMissingTracksModal ---
def on_correct_failed_matches_clicked(self):
"""Opens the modal to manually correct failed downloads."""
if not self.permanently_failed_tracks:
return
# Create and show the modal
manual_modal = ManualMatchModal(self.permanently_failed_tracks, self)
manual_modal.track_resolved.connect(self.on_manual_match_resolved)
manual_modal.exec()
def on_manual_match_resolved(self, resolved_track_info):
"""
Handles a track being successfully resolved by the ManualMatchModal.
"""
# The download has already been started by the manual modal.
# We just need to update our internal state.
# Find the original failed track in our list and remove it
original_failed_track = next((t for t in self.permanently_failed_tracks if t['download_index'] == resolved_track_info['download_index']), None)
if original_failed_track:
self.permanently_failed_tracks.remove(original_failed_track)
# Update the button counter
self.update_failed_matches_button()
This comprehensive plan implements the entire feature exactly as you specified. It tracks failures, displays a button to correct them, and provides a new, well-styled interface for you to manually select the correct download from either the cached results or a new search. The selected track is then processed just like any other normal matched download.

@ -1,31 +0,0 @@
1. Cross-Platform UI Stability
Issue
UI elements that interact with the local file system, such as the "Open" and "Play" buttons, fail or cause the application to freeze on macOS and Linux. This is due to two primary causes:
Using blocking system calls (os.system) which freeze the main UI thread.
File system race conditions, where the app tries to access a file immediately after it has been moved, before it's fully available to other processes.
Incorrectly using server-side file paths (e.g., /downloads/song.mp3) on the local client machine.
Recommendation
To ensure stability across all operating systems, the following best practices should be implemented:
Use Non-Blocking, Cross-Platform APIs: Replace all platform-specific calls (os.system, os.startfile) with the Qt framework's built-in, non-blocking tools like QDesktopServices.
Isolate Client and Server Paths: Never use a full file path from the slskd API directly. The client must only extract the filename (os.path.basename()) and then search for that file within its own locally-configured download directory.
2. Configuration Management
Issue
The application's settings are not fully dynamic or user-friendly. Key issues include:
The transfer_path for organized downloads is not configurable within the settings UI.
When a user updates a setting like the download_path, the change is not reflected in other parts of the application (like the status bar) until it is restarted.
Recommendation
To create a more robust and responsive user experience, the configuration system should be improved:
Expose All Key Paths: All important file paths, including download_path and transfer_path, must be editable within the application's settings menu.
Implement a Reactive Settings System: The settings manager should emit a signal (e.g., settingsChanged) whenever the configuration is updated. UI components should connect to this signal to automatically refresh their display with the new values, ensuring the entire application is always in sync.

File diff suppressed because it is too large Load Diff
Loading…
Cancel
Save